Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.

Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnographies, grounded theory studies, and phenomenological research. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
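To make the distinction concrete, here is a minimal Python sketch (not part of the original guide) contrasting a simple random sample with a convenience sample. The population of 500 students, the sample size of 50, and the pool of volunteers are all hypothetical values chosen purely for illustration.

    import random

    # Hypothetical sampling frame: 500 students identified by ID.
    population = [f"student_{i}" for i in range(1, 501)]
    sample_size = 50

    # Probability sampling: simple random sampling gives every student an
    # equal, known chance of selection, so results can be generalised to
    # the population with quantifiable uncertainty.
    random.seed(42)  # fixed seed so the draw is reproducible
    probability_sample = random.sample(population, k=sample_size)

    # Non-probability sampling: a convenience sample might simply take the
    # first 50 students who volunteer. It is easier to collect, but the
    # selection probabilities are unknown, so generalisation is limited.
    volunteers = population[:120]  # suppose only these students respond
    convenience_sample = volunteers[:sample_size]

    print(probability_sample[:5])
    print(convenience_sample[:5])

In a real study, the sampling frame would of course come from an actual list of population members rather than generated IDs.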

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
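As a rough illustration of what operationalisation can look like in practice, the short Python sketch below scores a hypothetical five-item satisfaction questionnaire (1–5 Likert responses, one reverse-worded item) into a single indicator. The items, scale, and reverse-scored position are invented for the example and not taken from any established instrument.

    # Operationalisation sketch: turning the abstract concept "satisfaction"
    # into a measurable score. All details here are hypothetical.

    def satisfaction_score(responses, reverse_items=(2,)):
        """Average 1-5 Likert responses into one indicator per participant.

        responses: list of ints, one per questionnaire item.
        reverse_items: indices of negatively worded items to reverse-score.
        """
        scored = []
        for i, answer in enumerate(responses):
            if not 1 <= answer <= 5:
                raise ValueError(f"Item {i} is outside the 1-5 scale: {answer}")
            scored.append(6 - answer if i in reverse_items else answer)
        return sum(scored) / len(scored)

    # One participant's answers to the five items.
    print(satisfaction_score([4, 5, 2, 4, 3]))  # -> 4.0 after reverse-scoring item 2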

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?
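For the first of these questions, a common rule-of-thumb calculation estimates the sample size needed to measure a proportion within a given margin of error. The Python sketch below is an illustration rather than advice from this guide: it assumes simple random sampling, a large population, 95% confidence, and the most conservative guess of p = 0.5.

    import math

    def required_sample_size(margin_of_error=0.05, z=1.96, p=0.5):
        """Approximate n needed to estimate a proportion: n = z^2 * p * (1 - p) / e^2."""
        return math.ceil((z ** 2) * p * (1 - p) / (margin_of_error ** 2))

    print(required_sample_size())      # about 385 respondents for a 5% margin of error
    print(required_sample_size(0.03))  # about 1068 respondents for a 3% margin of error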

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
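As a small illustration (the test scores below are made up), Python’s standard library is enough to compute all three kinds of summary:

    import statistics
    from collections import Counter

    scores = [55, 62, 62, 70, 70, 70, 74, 81, 81, 90]  # made-up test scores

    distribution = Counter(scores)        # frequency of each score
    mean_score = statistics.mean(scores)  # central tendency
    sd_score = statistics.stdev(scores)   # variability (sample standard deviation)

    print(distribution)        # e.g. Counter({70: 3, 62: 2, 81: 2, ...})
    print(mean_score)          # 71.5
    print(round(sd_score, 2))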

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
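To illustrate the two families of tests named above, here is a brief sketch using SciPy (assumed to be installed; the data are invented). A correlation test looks for an association between study hours and exam scores, and an independent-samples t test compares the mean scores of two groups.

    from scipy import stats  # assumes SciPy is installed (pip install scipy)

    # Association between two variables (made-up data).
    study_hours = [2, 4, 5, 7, 8, 10]
    exam_scores = [58, 62, 67, 71, 75, 82]
    r, p_corr = stats.pearsonr(study_hours, exam_scores)
    print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")

    # Difference between two groups (made-up data).
    group_a = [71, 74, 68, 77, 73, 70]
    group_b = [65, 69, 62, 66, 70, 64]
    t, p_ttest = stats.ttest_ind(group_a, group_b)
    print(f"t = {t:.2f}, p = {p_ttest:.4f}")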

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/research-methods/research-design/


Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you risk making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
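If you were analysing this in Python, the test might look something like the sketch below, which uses SciPy’s Pearson correlation on invented values for weekly exercise hours and resting heart rate (the variables and numbers are illustrative only, not data from an actual study).

    from scipy import stats  # assumes SciPy is installed

    exercise_hours = [1, 2, 2, 3, 4, 5, 5, 6]              # hypothetical participants
    resting_heart_rate = [78, 76, 77, 73, 71, 69, 68, 66]  # one health indicator

    r, p_value = stats.pearsonr(exercise_hours, resting_heart_rate)
    print(f"r = {r:.2f}, p = {p_value:.4f}")  # a negative r: more exercise, lower resting heart rate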

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling other variables, and then measure an outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes, which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
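To show what random assignment can look like in practice, here is a minimal Python sketch (with hypothetical labels and group sizes) that shuffles twelve participants and deals them into three conditions, so each participant has an equal chance of landing in any group.

    import random

    participants = [f"P{i:02d}" for i in range(1, 13)]  # hypothetical participant IDs
    conditions = ["fertiliser_A", "fertiliser_B", "no_fertiliser"]

    random.seed(7)                # fixed seed so the assignment is reproducible
    random.shuffle(participants)  # random order = equal chance for every participant

    groups = {condition: [] for condition in conditions}
    for index, participant in enumerate(participants):
        groups[conditions[index % len(conditions)]].append(participant)

    for condition, members in groups.items():
        print(condition, members)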

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations, but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables.

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives, emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed.

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes.

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities. All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “But how do I decide which research design to use?”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.


Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE : Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of Winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined, therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate based data, such as, incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies can use data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • The approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to provide a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.
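As a rough, hypothetical illustration of the control, randomization, and manipulation logic described above (not part of the original guide), the sketch below randomly assigns simulated participants to an experimental and a control group and compares group means on the dependent variable; the +2.0 "treatment effect" is invented, and a real study would add an appropriate significance test.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical pool of 40 participants, randomly split into two groups of 20.
participants = list(range(40))
random.shuffle(participants)
experimental_group, control_group = participants[:20], participants[20:]

def measure_outcome(received_treatment: bool) -> float:
    """Simulated dependent variable; the +2.0 treatment effect is invented."""
    baseline = random.gauss(50, 5)
    return baseline + (2.0 if received_treatment else 0.0)

experimental_scores = [measure_outcome(True) for _ in experimental_group]
control_scores = [measure_outcome(False) for _ in control_group]

# The difference in group means estimates the effect of the manipulation.
effect = statistics.mean(experimental_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {effect:.2f}")
```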

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods. Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences. 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings. They provide insight but not definitive conclusions.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while they inhabit their natural environment, as opposed to using survey instruments or other impersonal methods of data gathering. Information acquired from observational research takes the form of "field notes" that document what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.
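As a minimal sketch of what repeated measurement on the same sample looks like in practice (invented data, assuming the pandas library is available), the toy panel below records three subjects at two waves and computes within-subject change.

```python
import pandas as pd

# Hypothetical panel: the same three subjects measured at two waves.
panel = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3],
    "wave":    [1, 2, 1, 2, 1, 2],
    "score":   [10.0, 12.0, 8.0, 11.0, 9.0, 9.0],
})

# Reshape to one row per subject so change over time can be computed directly.
wide = panel.pivot(index="subject", columns="wave", values="score")
wide["change"] = wide[2] - wide[1]

print(wide)
print("Mean within-subject change:", wide["change"].mean())
```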

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated [a brief pooling example follows the list below]. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to findings that are difficult to interpret and/or meaningless.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
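The pooling example referenced above is sketched here: a minimal fixed-effect (inverse-variance) combination of invented effect sizes and standard errors. It is only an illustration of the arithmetic; a real meta-analysis would also assess heterogeneity and may require a random-effects model.

```python
import math

# Hypothetical (effect size, standard error) pairs from five individual studies.
studies = [(0.30, 0.12), (0.25, 0.10), (0.40, 0.15), (0.10, 0.08), (0.35, 0.20)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```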

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis. 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. "Meta-Analysis Analysis." In Research in Organizational Behavior, Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. "Qualitative Meta-Analysis." In The SAGE Handbook of Qualitative Data Analysis. Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research. Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice. New York: Guilford Press, 2010; Niglas, Katrin. "How the Novice Researcher Can Make Sense of Mixed Methods Designs." International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. "Linking Research Questions to Mixed Methods Data Analysis Procedures." The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. "The New Era of Mixed Methods." Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. "Mixed Methods Application in Health Intervention Research: A Multiple Case Study." International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is a time-consuming task, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. "Ethnography and Participant Observation." In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research. The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample size large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. "Advanced Mixed-Methods Research Designs." In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. "Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice." Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain data inconsistencies and conflicts in data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies .
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings; publications from government agencies; white papers, working papers, and internal documents from organizations; and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.

Applied Research Methods in Urban and Regional Planning, pp. 23–36

Research Design

  • Yanmei Li (Florida Atlantic University, Boca Raton, FL, USA)
  • Sumei Zhang (University of Louisville, Louisville, KY, USA)
  • First Online: 13 April 2022

This chapter introduces methods to design the research. Research design is the blueprint of how to conduct research from conception to completion. It requires careful crafting to ensure success. The initial step of research design is to theorize key concepts of the research questions, operationalize the variables used to measure the key concepts, and carefully identify the levels of measurement for all the key variables. After theorization of the key concepts, a thorough literature search and synthesis is imperative to explore extant studies related to the research questions. The purpose of the literature review is to retrieve ideas, replicate studies, or fill the gap for issues and theories that extant research has (or has not) investigated.

Li, Y., Zhang, S. (2022). Research Design. In: Applied Research Methods in Urban and Regional Planning. Springer, Cham. https://doi.org/10.1007/978-3-030-93574-0_3


Writing about Design

Principles and tips for design-oriented research.

How to define a research question or a design problem

Introduction

Many texts state that identifying a good research question (or, equivalently, a design problem) is important for research. Wikipedia, for example, starts (as of writing this text, at least) with the following two sentences:

“A research question is ‘a question that a research project sets out to answer’. Choosing a research question is an essential element of both quantitative and qualitative research.” (Wikipedia, 2020)

However, finding a good research question (RQ) can be a painful experience. It may feel impossible to understand what the criteria for a good RQ are, how a good RQ can be found, and how to notice when there are problems with an RQ candidate.

In this text, I will address the pains described above. I start by presenting a scenario of a project that has problems with its RQ. Analyzing that scenario then allows me to describe how to turn the situation into a better research or design project.

Scenario of a problematic project

Let us consider a scenario in which you are starting a new research or design project. You already have an idea: your work will be related to communication with instant messaging (IM). Because you are a design-minded person, you are planning to design and develop a new IM feature: the possibility to send predefined replies in a mobile IM app. Your idea is that this feature will allow the user to communicate quickly with others in difficult situations where they can only connect with others through their mobile phone. Your plan is to supply the mobile IM app with messages like “I’m late by 10 minutes but see you soon”, “I can’t answer back now but will do that later today”, and so on.

Therefore, your plan involves designing such an app, maybe first by sketching it and then illustrating its interaction with prototyping software like Figma or Adobe XD. You may also decide to make your design functional by programming it and letting a selected number of participants use it. These kinds of activities will let you demonstrate your skills as a designer-researcher.

Although predefined messages for a mobile IM app can be a topic of a great study, there are some problems with this project that require you to think more about it before you start. As the project is currently defined, it is difficult to provide convincing answers to these challenges:

  • Challenge 1: Why would this be a relevant topic for research or design? Good studies address topics that may interest people other than the author alone. The current research topic, however, does not do that self-evidently yet: it lacks an explanation of why it would make sense to equip mobile IM apps with predefined replies. There is only a guess that this could be useful in some situations, but this may not convince the reader about the value of this project.
  • Challenge 2: How do you demonstrate that your solution is particularly good? For an outsider who will see the project’s outcome, it may not be clear why your final design would be the best one among the other possible designs. If you propose one interaction design for such a feature, what makes that a good one? In other words, the project lacks a yardstick by which its quality should be measured.
  • Challenge 3: How does this project lead to learning or new knowledge? Even if you can show that the topic is relevant (point 1) and that the solution works well (2), the solution may feel too “particularized” – not usable in any other design context. This is an important matter in applied research fields like design and human–computer interaction, because these fields require some form of generalizability from their studies. Findings of a study should result in some kind of knowledge, such as skills, sensitivity to important matters, design solutions or patterns, etc. that could be used also at a later time in other projects, preferably by other people too.

All of these problems stem from the fact that this study does not have an RQ yet. Identifying a good research question will help clarify all the above matters, as we will see below.

Adding a research question / design problem

RQs are of many kinds, and they are closely tied to the intended finding of the study: what contribution  should the study deliver. A contribution can be, for example, a solution to a problem or creation of novel information or knowledge. Novel information, in turn, can be a new theory, model or hypothesis, analysis that offers deeper understanding, identification of an unattended problem, description about poorly understood phenomenon, a new viewpoint, or many other things.

The researcher or thesis author usually has a lot of freedom in choosing the exact type of contribution that they want to make. This can feel difficult to the author: there may be no-one telling them what they should study. In a way, in such a situation, the thesis/article author is the client of their own research: they both define what needs to be done and then accomplish that work. Some starting points for narrowing down the space of possibilities are offered here.

Most importantly, the RQ needs to be focused on a topic that the author genuinely does not know, and which is important to find out on the path to the intended contribution. In our scenario about a mobile IM app’s predefined replies, there are currently too many alternatives for an intended contribution, and an outsider would not be able to know which one of them to expect:

  • Demonstration that mobile IM apps will be better to use when they have this new feature.
  • Report on the ways by which people would use the new feature, if their mobile IM apps would have such a feature.
  • Requirements analysis for the specific interaction design and the detailed features that the predefined-reply function should offer.
  • Analysis of the situations where the feature would be most needed, and user groups who would most often be in such situations.

All of these are valid contributions, and the author can choose to focus on any one of them, depending also on their personal interests. This choice then makes it possible to formulate an RQ for the project. It is important to notice that each one of the possible contributions listed above calls for a different corresponding RQ:

RQ1: Do predefined replies in mobile IM apps improve their usability?

RQ2: How will users start using the predefined replies in mobile IM apps?

RQ3: How should the interaction in the IM app be designed, and what kind of predefined replies need to be offered to the users?

RQ4: When are predefined replies in IM apps most needed?

This list of four RQs, matched with the four possible contributions, shows why the scenario presented at the beginning of this text was problematic. Only after asking these kinds of questions can one seek answers to the three challenges presented at the end of the previous section. Also, each of the RQs needs a different research or design method, and its own kind of background research.

The choice and fine-tuning of the research question / design problem

Which one of the above RQs should our hypothetical researcher/designer choose? Lists of basic requisites for good RQs have been presented on many websites. They can help identify RQs that still need refinement. Monash University offers the following helpful list:

  • Clear and focused.  In other words, the question should clearly state what the writer needs to do.
  • Not too broad and not too narrow.  The question should have an appropriate scope. If the question is too broad it will not be possible to answer it thoroughly within the word limit. If it is too narrow you will not have enough to write about and you will struggle to develop a strong argument.
  • Not too easy to answer.  For example, the question should require more than a simple yes or no answer.
  • Not too difficult to answer.  You must be able to answer the question thoroughly within the given timeframe and word limit.
  • Researchable.  You must have access to a suitable amount of quality research materials, such as academic books and refereed journal articles.
  • Analytical rather than descriptive.  In other words, your research question should allow you to produce an analysis of an issue or problem rather than a simple description of it.

If a study meets the above criteria, it has a good chance of avoiding the problem of presenting a “non-contribution”: a laboriously produced finding that nonetheless does not provide new, interesting information. Points 3 and 6 above particularly guard against such studies: they warn readers against focusing their efforts on something that is already known (3) and against only describing what was done or what observations were made instead of analysing them in more detail (6).

In fine-tuning a possible RQ, it is important to situate it in the right scope. The first possible RQ that comes to one’s mind is often too broad and needs to be narrowed. RQ4 above (“When are predefined replies in IM apps most needed?”), for example, is a very relevant question, but it is probably too broad.

Why is RQ4 too broad? The reason is that RQs are usually read very literally. If you leave an aspect of your RQ unspecified, it means that you intend your RQ and your findings to be generalisable (i.e., applicable) to all the possible contexts and cases that the RQ can be applied to.

With a question like “When are predefined replies in IM apps most needed?”, you are asking a question that covers both leisure-oriented and work-oriented IM apps, which can be of very different kinds. Some IM apps are mobile-oriented (such as WhatsApp) and others are desktop-oriented (such as Slack or Teams). Unless you specify your RQ more narrowly, your findings should be applicable to all these kinds of apps. RQ4 is also unspecific about the people you are thinking of as communication partners. It may be impossible for you to make a study so broad that it applies to all of these cases.

Therefore, a more manageable-sized scoping could be something like this:

RQ4 (version 2): In which away-from-desktop leisure life situations are predefined replies in IM apps most needed?

Furthermore, you can also narrow down your focus theoretically. In our example scenario, the researcher/designer can decide, for example, that they will consider predefined IM replies from the viewpoint of “face-work” in social interaction. By adopting this viewpoint, the researcher/designer can decide that they will design the IM’s replies with a goal that they help the user to maintain an active, positive image in the eyes of others. When they start designing the reply feature, they can now ask much more specific questions. For example: how could my design help a user in doing face-work in cases where they are in a hurry and can only send a short and blunt message to another person? How could the predefined replies help in situations where the users would not have time to answer but they know they should? Ultimately, would the predefined replies make it easier for users to do face-work in computer-mediated communications (CMC)?

You can therefore further specify RQ4 into this:

RQ4 (version 3): In which away-from-desktop leisure life situations are predefined replies in IM apps most needed when it is important to react quickly to arriving messages?

As you may notice, it is also possible to scope the RQ so narrowly that it becomes almost absurd. But as long as that does not happen, a narrower scope makes the choice of methods (i.e., the research design) much easier.

Theoretically narrowed-down RQs (in this case, building on the concept of face-work in RQ4 version 3) have the benefit that they point you to useful background literature. Non-theoretical RQs (e.g., RQ4 version 2), in contrast, require that you identify the relevant literature more independently, relying on your own judgment. In the present case, you can base your thinking about IM apps’ predefined replies on sociological research on interpersonal interaction and self-presentation (e.g., Goffman 1967) and its earlier applications to CMC (Nardi et al., 2000; Salovaara et al., 2011). Such literature provides the starting points for deeper design considerations. Deeper considerations, in turn, increase the contribution of the research and make it interesting for the readers.

As said, the first RQ that one comes to think of is not necessarily the best and final one. The RQ may need to be adapted (and also can be adapted) over the course of the research. In qualitative research this is very typical, and the same applies to exploratory design projects that proceed through small design experiments (i.e., through their own smaller RQs).

This text promised to address the pains that definition of a RQ or a design problem may pose for a student or a researcher. The main points of the answer may be summarized as follows:

  • The search for a good RQ is a negotiation process between three objectives : what is personally motivating, what is realistically possible to do (e.g., that the work can build on some earlier literature and there is a method that can answer the RQ), and what motivates its relevance (i.e., can it lead to interesting findings).
  • The search for a RQ or a design problem is a process and not a task that must be fixed immediately . It is, however, good to get started somewhere, since a RQ gives a lot of focus for future activities: what to read and what methods to choose, for example.

With the presentation of the scenario and its analysis, I sought to demonstrate why and how choosing an additional analytical viewpoint can be a useful strategy. With it, a project whose meaningfulness may be otherwise questionable for an outsider can become interesting when its underpinnings and assumptions are explicated. That helps ensure that the reader will appreciate the work that the author has done with their research.

In the problematization of the scenario, I presented the three challenges related to it. I can now offer possible answers to them, by highlighting how an RQ can serve as a tool for addressing them:

  • Why would this be a relevant topic for research or design? Choice of a RQ often requires some amount of background research that helps the researcher/designer to understand how much about the problem has already been solved by others. This awareness helps shape the RQ to focus on a topic where information is not yet known and more information is needed for a high-quality outcome.
  • How do you demonstrate that your solution is particularly good? By having a question, it is possible to analyse which methods are right for answering it, and the quality of their execution then becomes evaluable. The focus on a particular question also permits the author to compromise on optimality in other, less central outcomes. For example, if smoothness of interaction is in focus, then it is easy to explain why long-term robustness and durability of a prototype may not be critical.
  • How does this project lead to learning or new knowledge? Presenting the results or findings allows the researcher/designer to devote their Discussion section (see the IMRaD article format ) to topics that would have been impossible to predict before the study. That demonstrates that the project has generated novel understanding: knowledge that can be considered insightful.

If and when the researcher/designer continues further in design and research, experience in thinking about RQs and design problems accumulates. As one reads literature, the ability to consider different research questions improves too. Similarly, as one carries out projects with different RQs and problems, and notices how adjusting them along the way helps shape one's work, the experience grows further. Eventually, one may even learn to enjoy the analytical process of identifying a good research question.

As a suggestion for further reading, Carsten Sørensen's text (2002) about planning and writing an article in the information systems research field is highly recommended. It combines the question of choosing the RQ with the question of how to write a paper about it.

Goffman, E. (1967). On face-work: An analysis of ritual elements in social interaction. Psychiatry , 18 (3), 213–231.  https://doi.org/10.1080/00332747.1955.11023008

Nardi, B. A., Whittaker, S., & Bradner, E. (2000). Interaction and outeraction: Instant messaging in action. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW 2000) (pp. 79–88). New York, NY: ACM Press. https://doi.org/10.1145/358916.358975

Salovaara, A., Lindqvist, A., Hasu, T., & Häkkilä, J. (2011). The phone rings but the user doesn’t answer: unavailability in mobile communication. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2011) (pp. 503–512). New York, NY: ACM Press. https://doi.org/10.1145/2037373.2037448

Sørensen, C. (2002). This is not an article: Just some food for thoughts on how to write one (Working Paper No. 121). Department of Information Systems, The London School of Economics and Political Science.

Wikipedia (2020). Research question. Retrieved from https://en.wikipedia.org/wiki/Research_question (30 November 2020).

Common statistical and research design problems in manuscripts submitted to high-impact medical journals

Sara Fernandes-Taylor, Jenny K Hyun, Rachelle N Reeder & Alex HS Harris

BMC Research Notes, volume 4, article number 304 (2011). Short report, open access; published 19 August 2011.

To assist educators and researchers in improving the quality of medical research, we surveyed the editors and statistical reviewers of high-impact medical journals to ascertain the most frequent and critical statistical errors in submitted manuscripts.

The Editors-in-Chief and statistical reviewers of the 38 medical journals with the highest impact factor in the 2007 Science Journal Citation Report and the 2007 Social Science Journal Citation Report were invited to complete an online survey about the statistical and design problems they most frequently found in manuscripts. Content analysis of the responses identified major issues. Editors and statistical reviewers (n = 25) from 20 journals responded. Respondents described problems that we classified into two broad themes: (A) statistical and sampling issues and (B) inadequate reporting clarity or completeness. Problems included in the first theme were (1) inappropriate or incomplete analysis, including violations of model assumptions and analysis errors, (2) uninformed use of propensity scores, (3) failing to account for clustering in data analysis, (4) improperly addressing missing data, and (5) power/sample size concerns. Issues subsumed under the second theme were (1) inadequate description of the methods and analysis and (2) misstatement of results, including undue emphasis on p-values and incorrect inferences and interpretations.

Conclusions

The scientific quality of submitted manuscripts would increase if researchers addressed these common design, analytical, and reporting issues. Improving the application and presentation of quantitative methods in scholarly manuscripts is essential to advancing medical research.

Attention to statistical quality in medical research has increased in recent years owing to the greater complexity of statistics in medicine and the focus on evidence-based practice. The editors and statistical reviewers of medical journals are charged with evaluating the scientific merit of submitted manuscripts, often requiring authors to conduct further analysis or content revisions to ensure the transparency and appropriate interpretation of results. Still, many manuscripts are rejected because of irreparable design flaws or inappropriate analytical strategies. As a result, researchers undertake the long and arduous process of submitting to decreasingly selective journals until the manuscript is eventually published. Aside from padding the authors' résumés, publishing results of dubious validity benefits few and makes development of clinical practice guidelines more time-consuming [ 1 , 2 ]. This undesirable state of affairs might often be prevented by seeking statistical and methodological expertise [ 3 ] during the design and conduct of research and during data analysis and manuscript preparation.

To assist educators and medical researchers in improving the quality of medical research, we conducted a survey of the editors and statistical reviewers of high-impact medical journals to identify the most frequent and critical statistical and design-related errors in submitted manuscripts. Methods experts have documented the use and misuse of quantitative methods in medical research, including statistical errors in published works and how authors use analytical expertise in manuscript preparation [ 3 – 11 ]. However, this is the first multi-journal survey of medical journal editors regarding the problems they see most often and what they would like to communicate to researchers. Scientists may be able to use the results of this study as a springboard to improve the impact of their research, their teaching of medical statistics, and their publication record.

Sample and Procedure

We identified the 20 medical journals from the "Medicine, General & Internal" and "Biomedical" categories with the highest impact factor in each of the 2007 Science Journal Citation Report and the 2007 Social Science Journal Citation Report. After discarding journals that do not publish results with statistical analysis, 38 high-impact journals remained. Twelve of these journals endorse the CONSORT criteria for randomized controlled trials, 6 endorse the STROBE guidelines for observational studies, and 5 endorse the PRISMA criteria for systematic reviews. These journals are listed in Additional file 1 [12].

The Editors-in-Chief and identifiable statistical reviewers of these journals were mailed a letter informing them of the online survey and describing the forthcoming email invitation that contained an electronic link to the survey instrument (sent within the week). We sent one email reminder a week after the initial email invitation in spring of 2008. We also requested that the Editors-in-Chief forward the invitation to their statistically-oriented editors or reviewers in addition to or instead of completing the survey themselves. An electronic consent form with the principal investigator's contact information was provided to potential respondents emphasizing the voluntary and confidential nature of participation. The Stanford University Panel on Human Subjects approved the protocol. This is one in a series of five studies surveying the editors and reviewers of high-impact journals in health and social science disciplines (medicine, public health, psychology, psychiatry, and health services) [ 13 , 14 ].

Survey Content

The survey contained three parts: (1) Short-answer questions about the journals for which the respondents served, how many manuscripts they handled in a typical month, and their areas of statistical and/or research design expertise; (2) The main, open-ended question which asked: "As an editor-in-chief or a statistically-oriented reviewer, you provide important statistical guidance to many researchers on a manuscript-by-manuscript basis. If you could communicate en masse to researchers in your field, what would you say are the most important (common and high impact) statistical issues you encounter in reviewing manuscripts? Please describe the issues as well as what you consider to be adequate and inadequate strategies for addressing them."; and (3) One to four follow-up questions based on the respondents' self-identified primary area of statistical expertise. These questions were developed by polling 69 researchers regarding what statistical questions they would want to ask the editors or statistical reviewers of major journals.

Responses to the open-ended questions were analyzed qualitatively using content analysis to identify dominant themes. We coded the responses to the main question on the most common and high impact (per the wording of the question) statistical issue and the respondents' proposed solutions to those issues. In the analysis phase, two of the authors resolved coding criteria and sorted the responses according to the two major categories that emerged from the data.

The two categories were (A) statistical and sampling issues and (B) inadequate reporting clarity or completeness. Within each category, the results are presented from most frequently mentioned to least frequently mentioned.

Respondent Characteristics

Respondents to the survey comprised 25 editors and statistical reviewers (of 60 solicited) who manage manuscripts from 20 of the 38 journals in the sampling frame. Respondents indicated reviewing or consulting on a mean of 47 (range: 0.5 to 250) manuscripts per month. The most frequently reported areas of expertise (multiple responses possible) were the design and analysis of clinical trials (n = 12), general statistics (n = 14), quasi-experimental/observational studies (n = 12), and epidemiology (n = 11).

Respondents' Suggestions for Statistical and Sampling Issues

Respondents often noted problems that are fundamental to research design and quantitative methods, including analytical strategies that are incomplete or mismatched with the data structure or scientific questions, failure to address missing data, and low power. Below, we describe the specific issues mentioned by respondents and provide accessible references for more detailed discussion.

Inappropriate or incomplete analysis: In addition to minor arithmetic and calculation errors, respondents expressed concern over researchers' choice of statistical tests. Specifically, frequent problems exist in the appropriateness of statistical tests chosen for the questions of interest and for the data structure. These include using parametric statistical tests when the sample size is small or in the presence of obviously violated assumptions [ 15 ]. In addition, researchers may fail to account for the sampling framework in survey-based studies with appropriate weighting of observations [ 16 , 17 ]. Other errors include confusing the exposure and outcome variables in the analysis phase. That is, in laboratory data, the exposure of interest is mistakenly analyzed as the outcome in analyses. In a similar vein, researchers sometimes mistakenly report the discrimination of a clinical prediction rule or internal validation method (e.g., bootstrap) using the training dataset rather than the test set [ 18 , 19 ]. Other concerns included creating dichotomous variables out of continuous ones without legitimate justification, thereby discarding information, and the use of stepwise regression analysis, which, among other problems, introduces bias into parameter estimates and tends to over-fit the data. See Malek, et al. [ 20 ] for a pithy discussion of the pitfalls of stepwise regression and additional references.
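
To make the first of these concerns concrete, here is a minimal, illustrative sketch (not from the article) of checking a distributional assumption before defaulting to a parametric test, written in Python with SciPy on simulated data; the variable names, sample sizes, and cut-offs are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated small samples with a skewed (non-normal) outcome.
group_a = rng.exponential(scale=1.0, size=12)
group_b = rng.exponential(scale=1.6, size=12)

# Check the normality assumption before reaching for a parametric test.
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

if min(p_norm_a, p_norm_b) < 0.05:
    # Assumption doubtful with n = 12: use a rank-based test instead.
    stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    test_used = "Mann-Whitney U"
else:
    stat, p_value = stats.ttest_ind(group_a, group_b)
    test_used = "independent-samples t-test"

print(f"{test_used}: statistic = {stat:.3f}, p = {p_value:.3f}")
```

The point is not this particular pair of tests but the habit the respondents describe: matching the analysis to the data structure and sample size rather than defaulting to a parametric procedure.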

The substantive area of analysis that received the most attention from respondents was the failure to account for clustered data and the use of hierarchical or mixed linear models. The reviewers often observed that authors fail to account for clustering when it is present. Examples of this include data collected on patients over time, where successive observations are dependent upon those in the previous time period(s), or multiple observations are nested in larger units (e.g., patients within hospitals). In these situations, reviewers prefer to see an analytical approach that does not have an independence assumption and properly accounts for clustering, including time series analysis, generalized linear mixed models, or generalized estimating equations where the population-averaged effect is of interest [ 21 – 24 ].
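
As an illustration only (not code from the article), the following Python/statsmodels sketch fits both a random-intercept mixed model and a GEE with an exchangeable working correlation to simulated patients-within-hospitals data; the variable names and effect sizes are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_hospitals, n_patients = 20, 30
hospital = np.repeat(np.arange(n_hospitals), n_patients)
treatment = rng.integers(0, 2, size=hospital.size)
hospital_effect = rng.normal(0, 1.0, size=n_hospitals)[hospital]  # shared within each hospital
outcome = 1.5 * treatment + hospital_effect + rng.normal(0, 1.0, size=hospital.size)
df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "hospital": hospital})

# Random-intercept (mixed) model: accounts for patients nested within hospitals.
mixed = smf.mixedlm("outcome ~ treatment", data=df, groups=df["hospital"]).fit()

# GEE alternative when the population-averaged effect is of interest.
gee = smf.gee("outcome ~ treatment", groups="hospital", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()

print(mixed.summary())
print(gee.summary())
```

Whether a mixed model or a GEE is the better choice depends on whether cluster-specific or population-averaged effects answer the scientific question; the common thread is that neither assumes independent observations.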

Addressing missing data: Frequently, researchers fail to mention the missing data in their sample or fail to describe the extent of the missing data. Problems with low response rates in studies are often not addressed or are inadequately discussed. In addition, longitudinal studies may fail to address differential dropout rates between groups that may have an effect on the outcome. In addition, those researchers who do discuss missing data often do not describe their methods of data imputation or their evaluation of whether missing data are significantly related to any observed variables. Those researchers who do explicitly address missing data regularly use suboptimal approaches. For example, investigators with longitudinal data often employ complete case analysis, last observation carried forward (LOCF) or other single imputation methods. These approaches can bias estimates and understate the sample variance. Preferably, researchers would evaluate the missing at random (MAR) assumption and conduct additional sensitivity analyses if the MAR assumption is suspect [ 25 , 26 ]. In addition, a detailed qualitative description of the loss process is essential, including the likelihood of MAR and the likely direction of any bias.
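
Below is a hedged sketch of the first two recommendations above, reporting the extent of missingness and checking whether it relates to observed variables, using simulated data with pandas and statsmodels; the covariates and the missingness mechanism are invented, and a real analysis would continue with multiple imputation and sensitivity analyses rather than stopping here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
age = rng.normal(60, 10, n)
baseline = rng.normal(0, 1, n)
outcome = 0.5 * baseline + rng.normal(0, 1, n)
# Make the outcome more likely to be missing for older patients (an MAR-style mechanism).
missing = rng.random(n) < 1 / (1 + np.exp(-(age - 65) / 5))
outcome[missing] = np.nan

df = pd.DataFrame({"age": age, "baseline": baseline, "outcome": outcome})

# 1. Report the extent of missingness (something reviewers say is often omitted).
print(df.isna().mean())

# 2. Check whether missingness is related to observed variables:
#    a logistic regression of the missingness indicator on observed covariates.
df["outcome_missing"] = df["outcome"].isna().astype(int)
mar_check = smf.logit("outcome_missing ~ age + baseline", data=df).fit(disp=False)
print(mar_check.summary())
```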

Power and sample size issues: Power was another area that reviewers mentioned as problematic. Respondents noted that power calculations are often not done at all, or are done post hoc rather than being incorporated into the design and sampling framework [27]. In novel studies where no basis for power calculations exists, this should be explicitly noted.
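
For instance, an a priori sample-size calculation for a simple two-group comparison might look like the following statsmodels sketch; the effect size, alpha, and power values are placeholders that would need to be justified from prior literature or pilot data.

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample-size calculation for a two-group comparison,
# done before data collection rather than post hoc.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # expected standardised difference
                                   alpha=0.05,
                                   power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")
```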

Uninformed use of propensity scores: Researchers often use propensity scores without recognition of the potential bias caused by unmeasured confounding [28–30]. Propensity scores are the probabilities of the individuals in a study being assigned to a particular condition given a set of known covariates and are used to reduce the confounding of covariates in observational studies. The bias problem arises when an essential confounder is not measured, and the use of propensity scores in this situation can exacerbate the bias already present in an analysis.
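
As a minimal illustration (not from the article), the sketch below estimates propensity scores from measured covariates with a logistic regression and converts them to inverse-probability-of-treatment weights; the covariates are simulated and, as the respondents stress, no such model can adjust for a confounder that was never measured.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
age = rng.normal(55, 12, n)
severity = rng.normal(0, 1, n)
# Treatment assignment depends only on measured covariates in this toy example;
# an unmeasured confounder would not be captured by the score.
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 55) + 0.8 * severity)))
treated = (rng.random(n) < p_treat).astype(int)
df = pd.DataFrame({"treated": treated, "age": age, "severity": severity})

# Estimate propensity scores from the measured covariates.
ps_model = smf.logit("treated ~ age + severity", data=df).fit(disp=False)
df["pscore"] = ps_model.predict(df)

# Inverse-probability-of-treatment weights (one common way to use the scores).
df["iptw"] = np.where(df["treated"] == 1, 1 / df["pscore"], 1 / (1 - df["pscore"]))
print(df[["pscore", "iptw"]].describe())
```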

Respondents' Suggestions for Inadequate Reporting Clarity or Completeness

In addition to specific analytical concerns, respondents also reported common errors in the text of methods and results sections. Although some of these problems are semantic, others reflect a misinterpretation or misunderstanding of the methods employed.

Inadequate description of methods and analysis: Respondents observed that manuscripts often do not contain a clear description of the analysis. Authors should provide as much methodological detail as possible, including targeted references and a statistical appendix if appropriate. One respondent provided a rule of thumb whereby an independent reader should be able to perform the same analysis based solely on the paper. Other issues included inadequate description of the study cohort, recruitment, and response rate, and the presentation of relative differences (e.g., odds ratio = 1.30) in the absence of absolute differences (e.g., 2.6% versus 2%). As one respondent wrote, "Since basic errors that are easily identified remain common, there is real concern of the presentation of analyses for more complex methods where the errors will not be testable by the reviewer."
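
Using the respondents' own example numbers (an odds ratio of about 1.3 from event rates of 2.6% versus 2%), a few lines of Python show why reporting the absolute difference alongside the relative one matters; the counts are hypothetical.

```python
# Event counts in two groups of a hypothetical study.
events_treat, n_treat = 26, 1000   # 2.6%
events_ctrl, n_ctrl = 20, 1000     # 2.0%

risk_treat = events_treat / n_treat
risk_ctrl = events_ctrl / n_ctrl

odds_ratio = (risk_treat / (1 - risk_treat)) / (risk_ctrl / (1 - risk_ctrl))
absolute_difference = risk_treat - risk_ctrl

print(f"Odds ratio:          {odds_ratio:.2f}")           # ~1.31, looks impressive on its own
print(f"Absolute difference: {absolute_difference:.3%}")  # only 0.6 percentage points
```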

Miscommunication of results: Researchers frequently report likelihood ratios for diagnostic tests (the likelihood of an individual having a particular condition relative to the likelihood of an individual not having that condition given a certain test result) without associated sensitivity and specificity. Although this is very useful for learning how well a test of interest predicts the risk of a given result [ 31 , 32 ], editors also appreciate the inclusion of rates of true positives and true negatives to give the reader a complete picture of the analysis.
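
For reference, likelihood ratios are simple functions of sensitivity and specificity, which is why reporting all three together costs little; the sketch below uses hypothetical values.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return lr_positive, lr_negative

# Hypothetical test: report LRs *together with* sensitivity and specificity.
sens, spec = 0.85, 0.90
lr_pos, lr_neg = likelihood_ratios(sens, spec)
print(f"Sensitivity {sens:.2f}, specificity {spec:.2f}, LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}")
```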

Respondents also noted an undue emphasis on p-values and excessive focus on significant results. For example, authors often highlight the significance of a categorical dummy that is not significant overall; the overall significance of a multi-category predictor should be tested by using an appropriate joint test of significance [ 33 ]. In turn, non-significant results are seldom presented in manuscripts. Authors leave out indeterminate test results when describing diagnostic test performance and fail to report confidence intervals along with p-values. An analogous problem is the "unthinking acceptance" of p < 0.05 as significant. Researchers can fall prey to alpha errors and take the customary but curious position of touting significance just below p < 0.05 and non-significance just above the 0.05 threshold. In addition, authors may trumpet a significant result in a large study when the size of the difference is clinically unimportant. In this situation, a focus on the effect size could be more appropriate [ 34 ].
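
Here is a small statsmodels sketch (simulated data, invented group labels) of two of the practices recommended above: testing a multi-category predictor jointly rather than dummy by dummy, and reporting confidence intervals alongside p-values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
group = rng.choice(["A", "B", "C"], size=n)          # multi-category predictor
effect = {"A": 0.0, "B": 0.2, "C": 0.6}
y = np.array([effect[g] for g in group]) + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "group": group})

model = smf.ols("y ~ C(group)", data=df).fit()

# Joint (overall) test of the categorical predictor, rather than
# cherry-picking the single dummy that happens to reach p < 0.05.
print(sm.stats.anova_lm(model, typ=2))

# Report interval estimates alongside p-values.
print(model.conf_int())
```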

Journal editors and statistical reviewers of high-impact medical journals identified several common problems that significantly and frequently affect the quality of submitted manuscripts. The majority of respondents underscored the fundamentals of research methods that should be familiar to all scientists. These include rigorous descriptions of sampling and analytic strategies, recognition of the strengths and drawbacks of a particular analytical approach, and the appropriate handling of missing data. Respondents also discussed concerns about more advanced methods in the medical research toolkit; specifically, authors may not understand or report the limitations of their analysis strategies, and may fail to hedge these with sensitivity analyses and more tempered interpretations. Finally, respondents emphasized the importance of the clear and accurate presentation of methods and results.

Although this study was not intended as a systematic or comprehensive catalog of all statistical problems in medical research, it does shed some light on common issues that delay or preclude the publication of research that might otherwise be sound and important. Moreover, the references included in this paper may provide some useful analytical guidance for researchers and for educators. Accordingly, this work serves to inform medical education and research to improve the overall quality of manuscripts and published research and to increase the likelihood of publication.

In addition, these data provide evidence for the importance of soup-to-nuts methodological guidance in the research process. Statisticians and methodological experts should be consulted during the study design, analysis, and manuscript writing phases to improve the quality of research and to ensure the clear and appropriate application of quantitative methods. Although this may seem obvious, previous work by Altman and his colleagues demonstrates that this is rarely the case in medical research [ 3 ]. Rather, statistical experts are often consulted only during the analysis phase, if at all, and even then may not be credited with authorship [ 35 ]. In addition to statistical guidance, researchers should consult reporting guidelines associated with their intended research design, such as CONSORT for randomized, controlled trials, STROBE for observational studies, and PRISMA for systematic reviews. Adherence to such guidelines helps to ensure a common standard for reporting and a critical level of transparency in medical research. Professional organizations and prominent journals, including the Cochrane Collaboration and The Lancet, peer-review research protocols, which also helps to create a standard for research design and methods.

This work should be interpreted in light of several important limitations. We did not collect data on the professional position (e.g., academic department, industry, etc.) of the respondents and consequently do not know the composition of the sample or how this may have shaped our findings. Although the response rate was similar to other surveys of journal editors, and we have no reason to suspect significant response bias, the possibility of response bias remains. In addition, the size of our sample may limit the generalizability of our findings.

Overall, this work is intended to inform researchers and educators on the most common pitfalls in quantitative medical research, pitfalls that journal editors note as problematic. Given the recent clinical research priorities of health care agenda-setting organizations, such as comparative effectiveness research and evidence-based practice, medical research is expected to meet a new bar in terms of valid and transparent inquiry [ 36 – 39 ]. Improving the application and presentation of quantitative methods in scholarly manuscripts is essential to meeting the current and future goals of medical research.

Steinberg EP, Luce BR: Evidence Based? Caveat Emptor!. Health Affairs. 2005, 24 (1): 80-92. 10.1377/hlthaff.24.1.80.

GRADE Working Group: Grading quality of evidence and strength of recommendations. BMJ. 2004, 1490-1493.

Altman DG, Goodman SN, Schroter S: How statistical expertise is used in medical research. JAMA. 2002, 287 (21): 2817-2820. 10.1001/jama.287.21.2817.

Altman DG: Poor-quality medical research: what can journals do?. JAMA. 2002, 287 (21): 2765-2767. 10.1001/jama.287.21.2765.

Chalmers I, Altman D: How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet. 1999, 353 (9151): 490-493. 10.1016/S0140-6736(98)07618-1.

Gardner M, Bond J: An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA. 1990, 263 (10): 1355-10.1001/jama.263.10.1355.

Goodman S, Altman D, George S: Statistical Reviewing Policies of Medical Journals Caveat Lector?. Journal of general internal medicine. 1998, 13 (11): 753-756. 10.1046/j.1525-1497.1998.00227.x.

Gore S, Jones G, Thompson S: The Lancet's statistical review process: areas for improvement by authors. The Lancet. 1992, 340 (8811): 100-102. 10.1016/0140-6736(92)90409-V.

McKinney W, Young M, Hartz A, Bi-Fong Lee M: The inexact use of Fisher's exact test in six major medical journals. JAMA. 1989, 261 (23): 3430-10.1001/jama.261.23.3430.

Porter A: Misuse of correlation and regression in three medical journals. Journal of the Royal Society of Medicine. 1999, 92 (3): 123-

Schriger DL, Altman DG: Inadequate post-publication review of medical research. BMJ. 2010, 341: c3803.

Institute for Scientific Information: Journal Citation Report. 2007, Thomson Scientific

Harris AHS, Reeder R, Hyun JK: Common statistical and research design problems in manuscripts submitted to high-impact psychiatry journals: What editors and reviewers want authors to know. Journal of Psychiatric Research. 2009, 43: 1231-1234. 10.1016/j.jpsychires.2009.04.007.

Harris AHS, Reeder RN, Hyun JK: Common statistical and research design problems in manuscripts submitted to high-impact public health journals. The Open Public Health Journal. 2009, 2: 44-48. 10.2174/1874944500902010044.

Sheskin DJ: Handbook of Parametric and Nonparametric Statistical Procedures. 2007, Boca Raton: Chapman & Hall, 4

Korn EL, Graubard BI: Analysis of large health surveys: Accounting for the sampling design. Journal of the Royal Statistical Society Series A (Statistics in Society). 1995, 158 (2): 263-295. 10.2307/2983292.

Lee ES, Forthofer RN, Eds: Analyzing Complex Survey Data. 2006, Thousand Oaks: Sage Publications, Inc, 2

Browne MW: Cross-validation methods. Journal of Mathematical Psychology. 2000, 44 (1): 108-132. 10.1006/jmps.1999.1279.

Efron B, Gong G: A leisurely look at the bootstrap, the jackknife, and cross-validation. The American Statistician. 1983, 37 (1): 36-48. 10.2307/2685844.

Malek MH, Berger DE, Coburn JW: On the inappropriateness of stepwise regression analysis for model building and testing. European Journal of Applied Physiology. 2007, 101: 263-264. 10.1007/s00421-007-0485-9.

Diggle PJ, Heagerty PJ, Liang KY, Zeger SL: Analysis of Longitudinal Data. 2002, New York: Oxford University Press

Hardin JW, Hilbe JM: Generalized Estimating Equations. 2003, Boca Raton: Chapman & Hall

Raudenbush SW, Bryck AS: Hierarchical Linear Models: Applications and Data Analysis Methods. 2002, Thousand Oaks: Sage Publications, Inc, 2

Snijders T, Bosker RJ: Multilevel Analysis. 1999, Thousand Oaks: Sage Publications, Inc

Daniels M, Hogan J: Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis. 2008, New York: Chapman & Hall

Rubin DB: Inference and missing data. Biometrika. 1976, 63: 581-592.

Zumbo B, Hubley A: A note on misconceptions concerning prospective and retrospective power. The Statistician. 1998, 47 (2): 385-388.

D'Agostino RB: Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine. 1998, 17: 2265-2281. 10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B.

Luellen JK, Stadish WR, Clark MH: Propensity scores: An introduction and experimental test. Eval Rev. 2005, 29 (6): 530-558. 10.1177/0193841X05275596.

McCaffrey DF, Ridgeway G, Morral AR: Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods. 2004, 9 (4): 403-425.

Altman DG, Bland JM: Diagnostic tests 1: Sensitivity and specificity. BMJ. 1994, 308: 1552-

Deeks JJ, Altman DG: Diagnostic tests 4: Likelihood ratios. BMJ. 2004, 329: 168-169. 10.1136/bmj.329.7458.168.

Wooldridge JM: Introductory Econometrics: A modern approach. 2009, Mason, OH: South-western Cengage Learning

Gotzsche PC: Believability of relative risks and odds ratios in abstracts: Cross sectional study. BMJ. 2006, 333 (7561): 231-4. 10.1136/bmj.38895.410451.79.

Bacchetti P: Peer review of statistics in medical research: The other problem. BMJ. 2002, 324 (7348): 1271-3. 10.1136/bmj.324.7348.1271.

Evidence-based Medicine. [ http://www.ahrq.gov/browse/evidmed.htm ]

Institute of Medicine: The Learning Healthcare System: Workshop Summary (IOM Roundtable on Evidence-Based Medicine). 2007, Washington, DC: National Academies Press

Institute of Medicine: Initial National Priorities for Comparative Effectiveness Research. 2009, Washington, DC: National Academies Press

Lang TA, Secic M: How to Report Statistics in Medicine: Annotated guidelines for authors, editors, and reviewers. 1997, Philadelphia: American College of Physicians

Acknowledgements and Funding

The views expressed herein are the authors' and not those of the Department of Veterans Affairs. This study was partially supported by the VA Office of Research and Development, Health Services Research and Development Service (MRP-05-168-1).

Author information

Authors and affiliations

Center for Health Care Evaluation, VA Palo Alto Health Care System and Stanford University School of Medicine, 795 Willow Road (MPD-152), Menlo Park, CA, 94025, USA

Sara Fernandes-Taylor, Rachelle N Reeder & Alex HS Harris

National Center for PTSD, VA Palo Alto Health Care System, 795 Willow Road, Menlo Park, CA, 94025, USA

Jenny K Hyun

Corresponding author

Correspondence to Sara Fernandes-Taylor .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

SFT was responsible for the analysis and interpretation of data, drafting the manuscript, and final approval of the draft. JKH made substantial contributions to the conception and design of the study and the survey instrument, was involved in revising the manuscript, and gave final approval. RNR aided in data collection, analysis and interpretation of the data, manuscript revisions, and gave final approval. AHSH made substantial contributions to the conception and design of the study and the survey instrument, was involved in revising the manuscript, and gave final approval.

Electronic supplementary material

Additional file 1: Appendix. 2007 Journal Citation Report titles included in the sampling frame. (DOC 43 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fernandes-Taylor, S., Hyun, J.K., Reeder, R.N. et al. Common statistical and research design problems in manuscripts submitted to high-impact medical journals. BMC Res Notes 4 , 304 (2011). https://doi.org/10.1186/1756-0500-4-304

5 Research design

Research design is a comprehensive plan for data collection in an empirical research project. It is a ‘blueprint’ for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process—which is often loosely called ‘research design’—is introduced in this chapter and is described in further detail in Chapters 9–12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods , such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected—quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth—and analysed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, the joint use of qualitative and quantitative data may help generate unique insight into a complex social phenomenon that is not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key attributes of a research design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity , also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesised independent variable, and not by variables extraneous to the research context. Causality requires three conditions: covariation of cause and effect (i.e., if the cause happens, then the effect also happens; if the cause does not happen, the effect does not happen), temporal precedence (the cause must precede the effect in time), and no spurious correlation (i.e., there is no plausible alternative explanation for the change). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats of internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalisability refers to whether the observed associations can be generalised from the sample to the population (population validity), or to other people, organisations, contexts, or time (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalised to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalisability than laboratory experiments where treatments and extraneous variables are more controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

Internal and external validity

Some researchers claim that there is a trade-off between internal and external validity—higher external validity can come only at the cost of internal validity and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers' choice of designs is ultimately a matter of their personal preference and competence, and of the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organisational learning are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.
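
As a rough illustration of the kind of pilot-data check described here, the following Python sketch (simulated responses, hypothetical item names, using pandas and scikit-learn) inspects inter-item correlations and one-factor loadings for three items intended to measure a single construct; a full construct-validity assessment would go considerably further.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_respondents = 120
latent_construct = rng.normal(0, 1, n_respondents)
# Three hypothetical pilot items intended to measure the same construct.
items = pd.DataFrame({
    f"item{i}": latent_construct + rng.normal(0, 0.6, n_respondents) for i in (1, 2, 3)
})

# Inter-item correlations: items tapping one construct should correlate strongly.
print(items.corr().round(2))

# One-factor model: loadings indicate how well each item reflects the construct.
fa = FactorAnalysis(n_components=1).fit(items)
print(fa.components_.round(2))
```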

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypotheses testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable for such analysis. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

Different types of validity in scientific research

Improving internal and external validity

The best research designs are those that can ensure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in the hypotheses testing, and ensure that the results drawn from a small sample are generalisable to the population at large. Controls are required to ensure the internal validity (causality) of research designs, and can be accomplished in five ways: manipulation, elimination, inclusion, statistical control, and randomisation.

In manipulation , the researcher manipulates the independent variables in one or more levels (called ‘treatments’), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs, but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalisability, but also requires substantially larger samples. In statistical control , extraneous variables are measured and used as covariates during the statistical testing process.
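
A minimal sketch of statistical control with simulated data (Python/statsmodels; the variable names are invented for illustration): the extraneous variable age is measured and entered as a covariate, so its effect on the dependent variable is estimated separately from the treatment effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 400
age = rng.normal(40, 12, n)            # extraneous variable
treatment = rng.integers(0, 2, n)      # 0 = control, 1 = treatment
outcome = 2.0 * treatment + 0.1 * age + rng.normal(0, 2, n)
df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "age": age})

# Statistical control: age is included as a covariate, so its influence on the
# outcome is estimated separately from the treatment effect.
controlled = smf.ols("outcome ~ treatment + age", data=df).fit()
print(controlled.params)
```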

Finally, the randomisation technique is aimed at cancelling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomisation are: random selection , where a sample is selected randomly from a population, and random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.

Randomisation also ensures external validity, allowing inferences drawn from the sample to be generalised to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalisability across populations is harder to ascertain since populations may differ on multiple dimensions and you can only control for a few of those dimensions.
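
The two forms of randomisation can be illustrated with a short, hypothetical Python/pandas sketch: random selection draws the sample from the population, and random assignment then allocates the sampled subjects to conditions. The population and sample sizes here are arbitrary.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
population = pd.DataFrame({"person_id": range(10_000)})

# Random selection: draw the sample randomly from the population (supports external validity).
sample = population.sample(n=200, random_state=42).copy()

# Random assignment: allocate sampled subjects to treatment/control (supports internal validity).
sample["condition"] = rng.permutation(
    np.repeat(["treatment", "control"], len(sample) // 2)
)
print(sample["condition"].value_counts())
```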

Popular research designs

As noted earlier, research designs can be classified into two categories—positivist and interpretive—depending on the goal of the research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalised patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9–12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the ‘treatment group’) but not to another group (‘control group’), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but only give a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group. More complex designs may include multiple treatment groups, such as low versus high dosage of the drug or combining drug administration with dietary interventions. In a true experimental design , subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental . Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organisation where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analysed using quantitative statistical techniques. The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalisability since real life is often more complex (i.e., involving more extraneous variables) than contrived lab settings. Furthermore, if the research does not identify ex ante relevant extraneous variables and control for such variables, such lack of controls may hurt internal validity and may lead to spurious correlations.

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys , independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys , dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a 'socially desirable' response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies, such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Programme, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs, where collecting primary data for research is part of the researcher's job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher's questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and hence may be unsuitable for scientific research; that, because the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualised and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalised to other case sites. Generalisability can be improved by replicating and comparing the analysis in other case sites in a multiple case design .

Focus group research is a type of research that involves bringing in a small group of subjects (typically six to ten people) at one location, and having them discuss a phenomenon of interest for a period of one and a half to two hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that the ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences. Internal validity cannot be established due to lack of controls and the findings may not be generalised to other settings because of the small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or ‘actions’ into those phenomena and observing the effects of those actions. In this method, the researcher is embedded within a social context such as an organisation and initiates an action—such as new organisational procedures or new technologies—in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalisability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design inspired by anthropology that emphasises that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time—eight months to two years—and during that period, engages, observes, and records the daily life of the studied culture, and theorises about the evolution and behaviours in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves 'sense-making'. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to context, the rich and nuanced understanding it generates, and minimal respondent bias. However, this is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalisable to other cultures.

Selecting research designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organisational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill in the unmet gap in that area, interpretive designs such as case research or ethnography may be useful designs. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire, intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organisational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organisational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees' narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible to help generate the best possible insights about the phenomenon of interest.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Research Design: What it is, Elements & Types

Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

The design of a research topic specifies the type of research (experimental, survey research , correlational , semi-experimental, review) and its sub-type (e.g., a specific experimental design, research problem, or descriptive case study).

A research design covers three main aspects:

  • Data collection
  • Measurement
  • Data analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approach: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of research design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as simple random, stratified random, or convenience sampling (see the sketch after this list).
  • Choose your data collection methods: Decide on the data collection methods, such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations are addressed.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis, content analysis, or discourse analysis, and plan how to interpret the results.
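Where stratified random sampling is the chosen method, the sketch below shows one common way to implement it; the sampling frame, the "region" column, and the 10% sampling fraction are hypothetical choices made purely for illustration.

```python
import pandas as pd

# Hypothetical sampling frame with a stratification variable ("region").
frame = pd.DataFrame({
    "respondent_id": range(1, 1001),
    "region": ["north", "south", "east", "west"] * 250,
})

# Draw a 10% stratified random sample: the same fraction from every stratum,
# so each region is represented proportionally.
sample = frame.groupby("region").sample(frac=0.10, random_state=42)
print(sample["region"].value_counts())  # 25 respondents per region
```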

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.

Research Design Elements

Impactful research usually creates minimal bias in the data and increases trust in the accuracy of the collected data. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing data
  • The method applied for analyzing the collected data
  • Type of research methodology
  • Probable objections to the research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:


  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The projected results should be neutral and free from research bias. Gather opinions on the final evaluated scores and conclusions from multiple individuals, and note the extent to which they agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so a good design should balance all of these characteristics.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research is used when relationships between collected data and observations cannot be established through mathematical calculations alone, and when statistical methods by themselves cannot prove or disprove theories about a naturally occurring phenomenon. Researchers rely on qualitative research methods to conclude “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is appropriate when statistical conclusions are needed to gather actionable insights. Numbers provide a clearer perspective for making critical business decisions, and insights drawn from complex numerical data and analysis prove highly effective when making decisions about a business’s future.

Qualitative Research vs Quantitative Research

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research focuses on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under study. It is a theory-based design created by gathering, analyzing, and presenting the collected data. It allows a researcher to provide insights into the why and how of the research and helps others better understand the need for it. If the problem statement is not clear, you can conduct exploratory research instead.

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal design in which one observes the impact of an independent variable on a dependent variable, for example, the influence of price on customer satisfaction or brand loyalty. It is an efficient research method because it contributes directly to solving a problem.

The independent variable is manipulated to monitor the change it produces in the dependent variable. The social sciences often use this design to observe human behavior by analyzing two groups: researchers can have participants change their actions and study how the people around them react, in order to better understand social psychology.
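A minimal sketch of this kind of two-group comparison, using simulated (hypothetical) satisfaction scores under a regular price and a discounted price, might look as follows; the numbers are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical satisfaction scores (1-10) under two price conditions.
control_price = rng.normal(loc=6.8, scale=1.2, size=60)    # regular price
discount_price = rng.normal(loc=7.4, scale=1.2, size=60)   # reduced price

# Independent-samples t-test: did manipulating price change satisfaction?
t_stat, p_value = stats.ttest_ind(discount_price, control_price)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```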

3. Correlational research: Correlational research is a non-experimental technique that helps researchers establish a relationship between two closely connected variables. No variable is manipulated and no assumption of causation is made; statistical analysis techniques are used to calculate the relationship between the two variables, so this type of research requires measurements of two different variables.

A correlation coefficient, whose value ranges between -1 and +1, quantifies the correlation between the two variables: a coefficient towards +1 indicates a positive relationship, while a coefficient towards -1 indicates a negative relationship.
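As a quick illustration, the sketch below computes a Pearson correlation coefficient on made-up data; the variable names and values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: hours studied vs. exam score.
hours = np.array([2, 4, 5, 7, 8, 10, 12])
score = np.array([55, 60, 62, 70, 74, 80, 85])

r, p_value = stats.pearsonr(hours, score)
print(f"r = {r:.2f} (towards +1 suggests a positive relationship), p = {p_value:.4f}")
```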

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research : Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the research questions’ what, how, and why.

Benefits of Research Design

There are several benefits to having a well-designed research plan, including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: Research design helps minimize the risk of bias and control extraneous variables, ensuring the validity and reliability of results.
  • Improved data collection: Research design helps ensure that the right data is collected, and that it is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure the results are communicated clearly and persuasively to the research team and external stakeholders.
  • Efficient use of resources: By reducing the risk of waste and maximizing the impact of the research, research design helps ensure that resources are used efficiently.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are used effectively.

QuestionPro offers a comprehensive solution for researchers looking to conduct research. With its user-friendly interface, robust data collection and analysis tools, and the ability to integrate results from multiple sources, QuestionPro provides a versatile platform for designing and executing research projects.

Our robust suite of research tools provides you with all you need to derive research results. Our online survey platform includes custom point-and-click logic and advanced question types. Uncover the insights that matter the most.


Experimental Research Design — 6 mistakes you should never make!


From their school days onward, students perform scientific experiments whose results illustrate and confirm the laws and theorems of science. These experiments rest on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. The first set of variables acts as a constant, used to measure the differences in the second set. Experimental research is a quintessentially quantitative research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which the research study is built. An effective research design helps establish quality decision-making procedures, structures the research so that data analysis is easier, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when one or more groups are observed after factors presumed to cause an effect have been applied. It helps researchers understand whether further investigation of the groups under observation is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random assignment of participants to the groups

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “Quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design does not rest on basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence, so incorrect statistical analysis can undermine the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear; to achieve that, you must set a framework for developing research questions that address the core problems.

5. Research Limitations

Every study has limitations of some kind. You should anticipate those limitations and incorporate them into your conclusion as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
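A minimal sketch of the random-assignment step in such an experiment might look like the following; the sample labels and group sizes are hypothetical.

```python
import random

# Hypothetical plant samples to be split into two equal groups.
samples = [f"plant_{i:02d}" for i in range(1, 21)]

rng = random.Random(7)          # fixed seed so the assignment is reproducible
rng.shuffle(samples)

sunlight_group = samples[:10]   # photosynthesize in sunlight
dark_group = samples[10:]       # kept in a dark box, all else held constant

print("Sunlight:", sunlight_group)
print("Dark box:", dark_group)
```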

Experimental research is often the final stage of the research process and is considered to provide conclusive and specific results, but it is not suitable for every study. It requires considerable resources, time, and money, and it is not easy to conduct unless a foundation of prior research has been built. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured for the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi experimental research design.

The differences between an experimental and a quasi-experimental design are: (1) the control group in quasi-experimental research is assigned non-randomly, unlike in a true experimental design, where assignment is random; and (2) experimental research always has a control group, which may not always be present in quasi-experimental research.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.


Research Problem – Examples, Types and Guide


Definition:

A research problem is a specific and well-defined issue or question that a researcher seeks to investigate through research. It is the starting point of any research project, as it sets the direction, scope, and purpose of the study.

Types of Research Problems

Types of Research Problems are as follows:

Descriptive problems

These problems involve describing or documenting a particular phenomenon, event, or situation. For example, a researcher might investigate the demographics of a particular population, such as their age, gender, income, and education.

Exploratory problems

These problems are designed to explore a particular topic or issue in depth, often with the goal of generating new ideas or hypotheses. For example, a researcher might explore the factors that contribute to job satisfaction among employees in a particular industry.

Explanatory Problems

These problems seek to explain why a particular phenomenon or event occurs, and they typically involve testing hypotheses or theories. For example, a researcher might investigate the relationship between exercise and mental health, with the goal of determining whether exercise has a causal effect on mental health.

Predictive Problems

These problems involve making predictions or forecasts about future events or trends. For example, a researcher might investigate the factors that predict future success in a particular field or industry.

Evaluative Problems

These problems involve assessing the effectiveness of a particular intervention, program, or policy. For example, a researcher might evaluate the impact of a new teaching method on student learning outcomes.

How to Define a Research Problem

Defining a research problem involves identifying a specific question or issue that a researcher seeks to address through a research study. Here are the steps to follow when defining a research problem:

  • Identify a broad research topic : Start by identifying a broad topic that you are interested in researching. This could be based on your personal interests, observations, or gaps in the existing literature.
  • Conduct a literature review : Once you have identified a broad topic, conduct a thorough literature review to identify the current state of knowledge in the field. This will help you identify gaps or inconsistencies in the existing research that can be addressed through your study.
  • Refine the research question: Based on the gaps or inconsistencies identified in the literature review, refine your research question to a specific, clear, and well-defined problem statement. Your research question should be feasible, relevant, and important to the field of study.
  • Develop a hypothesis: Based on the research question, develop a hypothesis that states the expected relationship between variables.
  • Define the scope and limitations: Clearly define the scope and limitations of your research problem. This will help you focus your study and ensure that your research objectives are achievable.
  • Get feedback: Get feedback from your advisor or colleagues to ensure that your research problem is clear, feasible, and relevant to the field of study.

Components of a Research Problem

The components of a research problem typically include the following:

  • Topic : The general subject or area of interest that the research will explore.
  • Research Question : A clear and specific question that the research seeks to answer or investigate.
  • Objective : A statement that describes the purpose of the research, what it aims to achieve, and the expected outcomes.
  • Hypothesis : An educated guess or prediction about the relationship between variables, which is tested during the research.
  • Variables : The factors or elements that are being studied, measured, or manipulated in the research.
  • Methodology : The overall approach and methods that will be used to conduct the research.
  • Scope and Limitations : A description of the boundaries and parameters of the research, including what will be included and excluded, and any potential constraints or limitations.
  • Significance: A statement that explains the potential value or impact of the research, its contribution to the field of study, and how it will add to the existing knowledge.

Research Problem Examples

Following are some Research Problem Examples:

Research Problem Examples in Psychology are as follows:

  • Exploring the impact of social media on adolescent mental health.
  • Investigating the effectiveness of cognitive-behavioral therapy for treating anxiety disorders.
  • Studying the impact of prenatal stress on child development outcomes.
  • Analyzing the factors that contribute to addiction and relapse in substance abuse treatment.
  • Examining the impact of personality traits on romantic relationships.

Research Problem Examples in Sociology are as follows:

  • Investigating the relationship between social support and mental health outcomes in marginalized communities.
  • Studying the impact of globalization on labor markets and employment opportunities.
  • Analyzing the causes and consequences of gentrification in urban neighborhoods.
  • Investigating the impact of family structure on social mobility and economic outcomes.
  • Examining the effects of social capital on community development and resilience.

Research Problem Examples in Economics are as follows:

  • Studying the effects of trade policies on economic growth and development.
  • Analyzing the impact of automation and artificial intelligence on labor markets and employment opportunities.
  • Investigating the factors that contribute to economic inequality and poverty.
  • Examining the impact of fiscal and monetary policies on inflation and economic stability.
  • Studying the relationship between education and economic outcomes, such as income and employment.

Political Science

Research Problem Examples in Political Science are as follows:

  • Analyzing the causes and consequences of political polarization and partisan behavior.
  • Investigating the impact of social movements on political change and policymaking.
  • Studying the role of media and communication in shaping public opinion and political discourse.
  • Examining the effectiveness of electoral systems in promoting democratic governance and representation.
  • Investigating the impact of international organizations and agreements on global governance and security.

Environmental Science

Research Problem Examples in Environmental Science are as follows:

  • Studying the impact of air pollution on human health and well-being.
  • Investigating the effects of deforestation on climate change and biodiversity loss.
  • Analyzing the impact of ocean acidification on marine ecosystems and food webs.
  • Studying the relationship between urban development and ecological resilience.
  • Examining the effectiveness of environmental policies and regulations in promoting sustainability and conservation.

Research Problem Examples in Education are as follows:

  • Investigating the impact of teacher training and professional development on student learning outcomes.
  • Studying the effectiveness of technology-enhanced learning in promoting student engagement and achievement.
  • Analyzing the factors that contribute to achievement gaps and educational inequality.
  • Examining the impact of parental involvement on student motivation and achievement.
  • Studying the effectiveness of alternative educational models, such as homeschooling and online learning.

Research Problem Examples in History are as follows:

  • Analyzing the social and economic factors that contributed to the rise and fall of ancient civilizations.
  • Investigating the impact of colonialism on indigenous societies and cultures.
  • Studying the role of religion in shaping political and social movements throughout history.
  • Analyzing the impact of the Industrial Revolution on economic and social structures.
  • Examining the causes and consequences of global conflicts, such as World War I and II.

Research Problem Examples in Business are as follows:

  • Studying the impact of corporate social responsibility on brand reputation and consumer behavior.
  • Investigating the effectiveness of leadership development programs in improving organizational performance and employee satisfaction.
  • Analyzing the factors that contribute to successful entrepreneurship and small business development.
  • Examining the impact of mergers and acquisitions on market competition and consumer welfare.
  • Studying the effectiveness of marketing strategies and advertising campaigns in promoting brand awareness and sales.

Research Problem Example for Students

An Example of a Research Problem for Students could be:

“How does social media usage affect the academic performance of high school students?”

This research problem is specific, measurable, and relevant. It is specific because it focuses on a particular area of interest, which is the impact of social media on academic performance. It is measurable because the researcher can collect data on social media usage and academic performance to evaluate the relationship between the two variables. It is relevant because it addresses a current and important issue that affects high school students.

To conduct research on this problem, the researcher could use various methods, such as surveys, interviews, and statistical analysis of academic records. The results of the study could provide insights into the relationship between social media usage and academic performance, which could help educators and parents develop effective strategies for managing social media use among students.
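If the researcher collected such survey data, one simple, purely illustrative analysis would be a linear regression of academic performance on daily social-media hours, as sketched below with hypothetical values; it is not a prescribed analysis for this research problem.

```python
from scipy import stats

# Hypothetical survey data: daily hours on social media and GPA (0-4 scale).
hours_per_day = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
gpa           = [3.8, 3.7, 3.6, 3.5, 3.3, 3.2, 3.0, 2.9, 2.7, 2.6]

result = stats.linregress(hours_per_day, gpa)
print(f"slope = {result.slope:.2f} GPA points per hour, "
      f"r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```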

Another example of a research problem for students:

“Does participation in extracurricular activities impact the academic performance of middle school students?”

This research problem is also specific, measurable, and relevant. It is specific because it focuses on a particular type of activity, extracurricular activities, and its impact on academic performance. It is measurable because the researcher can collect data on students’ participation in extracurricular activities and their academic performance to evaluate the relationship between the two variables. It is relevant because extracurricular activities are an essential part of the middle school experience, and their impact on academic performance is a topic of interest to educators and parents.

To conduct research on this problem, the researcher could use surveys, interviews, and academic records analysis. The results of the study could provide insights into the relationship between extracurricular activities and academic performance, which could help educators and parents make informed decisions about the types of activities that are most beneficial for middle school students.

Applications of Research Problem

Applications of Research Problem are as follows:

  • Academic research: Research problems are used to guide academic research in various fields, including social sciences, natural sciences, humanities, and engineering. Researchers use research problems to identify gaps in knowledge, address theoretical or practical problems, and explore new areas of study.
  • Business research : Research problems are used to guide business research, including market research, consumer behavior research, and organizational research. Researchers use research problems to identify business challenges, explore opportunities, and develop strategies for business growth and success.
  • Healthcare research : Research problems are used to guide healthcare research, including medical research, clinical research, and health services research. Researchers use research problems to identify healthcare challenges, develop new treatments and interventions, and improve healthcare delivery and outcomes.
  • Public policy research : Research problems are used to guide public policy research, including policy analysis, program evaluation, and policy development. Researchers use research problems to identify social issues, assess the effectiveness of existing policies and programs, and develop new policies and programs to address societal challenges.
  • Environmental research : Research problems are used to guide environmental research, including environmental science, ecology, and environmental management. Researchers use research problems to identify environmental challenges, assess the impact of human activities on the environment, and develop sustainable solutions to protect the environment.

Purpose of Research Problems

The purpose of research problems is to identify an area of study that requires further investigation and to formulate a clear, concise and specific research question. A research problem defines the specific issue or problem that needs to be addressed and serves as the foundation for the research project.

Identifying a research problem is important because it helps to establish the direction of the research and sets the stage for the research design, methods, and analysis. It also ensures that the research is relevant and contributes to the existing body of knowledge in the field.

A well-formulated research problem should:

  • Clearly define the specific issue or problem that needs to be investigated
  • Be specific and narrow enough to be manageable in terms of time, resources, and scope
  • Be relevant to the field of study and contribute to the existing body of knowledge
  • Be feasible and realistic in terms of available data, resources, and research methods
  • Be interesting and intellectually stimulating for the researcher and potential readers or audiences.

Characteristics of Research Problem

The characteristics of a research problem refer to the specific features that a problem must possess to qualify as a suitable research topic. Some of the key characteristics of a research problem are:

  • Clarity : A research problem should be clearly defined and stated in a way that it is easily understood by the researcher and other readers. The problem should be specific, unambiguous, and easy to comprehend.
  • Relevance : A research problem should be relevant to the field of study, and it should contribute to the existing body of knowledge. The problem should address a gap in knowledge, a theoretical or practical problem, or a real-world issue that requires further investigation.
  • Feasibility : A research problem should be feasible in terms of the availability of data, resources, and research methods. It should be realistic and practical to conduct the study within the available time, budget, and resources.
  • Novelty : A research problem should be novel or original in some way. It should represent a new or innovative perspective on an existing problem, or it should explore a new area of study or apply an existing theory to a new context.
  • Importance : A research problem should be important or significant in terms of its potential impact on the field or society. It should have the potential to produce new knowledge, advance existing theories, or address a pressing societal issue.
  • Manageability : A research problem should be manageable in terms of its scope and complexity. It should be specific enough to be investigated within the available time and resources, and it should be broad enough to provide meaningful results.

Advantages of Research Problem

The advantages of a well-defined research problem are as follows:

  • Focus : A research problem provides a clear and focused direction for the research study. It ensures that the study stays on track and does not deviate from the research question.
  • Clarity : A research problem provides clarity and specificity to the research question. It ensures that the research is not too broad or too narrow and that the research objectives are clearly defined.
  • Relevance : A research problem ensures that the research study is relevant to the field of study and contributes to the existing body of knowledge. It addresses gaps in knowledge, theoretical or practical problems, or real-world issues that require further investigation.
  • Feasibility : A research problem ensures that the research study is feasible in terms of the availability of data, resources, and research methods. It ensures that the research is realistic and practical to conduct within the available time, budget, and resources.
  • Novelty : A research problem ensures that the research study is original and innovative. It represents a new or unique perspective on an existing problem, explores a new area of study, or applies an existing theory to a new context.
  • Importance : A research problem ensures that the research study is important and significant in terms of its potential impact on the field or society. It has the potential to produce new knowledge, advance existing theories, or address a pressing societal issue.
  • Rigor : A research problem ensures that the research study is rigorous and follows established research methods and practices. It ensures that the research is conducted in a systematic, objective, and unbiased manner.


Structure in Deep Reinforcement Learning: A Survey and Open Problems


Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural Networks (DNNs) for function approximation, has demonstrated considerable success in numerous applications. However, its practicality in addressing various real-world scenarios, characterized by diverse and unpredictable dynamics, noisy signals, and large state and action spaces, remains limited. This limitation stems from poor data efficiency, limited generalization capabilities, a lack of safety guarantees, and the absence of interpretability, among other factors. To overcome these challenges and improve performance across these crucial metrics, one promising avenue is to incorporate additional structural information about the problem into the RL learning process. Various sub-fields of RL have proposed methods for incorporating such inductive biases. We amalgamate these diverse methodologies under a unified framework, shedding light on the role of structure in the learning problem, and classify these methods into distinct patterns of incorporating structure. By leveraging this comprehensive framework, we provide valuable insights into the challenges of structured RL and lay the groundwork for a design pattern perspective on RL research. This novel perspective paves the way for future advancements and aids in developing more effective and efficient RL algorithms that can potentially handle real-world scenarios better.


Undergraduates to design robots for Appalachia’s challenges at WVU summer research program

Wednesday, April 17, 2024


From experimenting with robots that off-road autonomously down country roads, to designing drones that can fly through Appalachia’s dense forest canopies, students who join the WVU Undergraduate Research Experience this summer will do hands-on, real-world work aimed at solving the problems of remote mountain communities. (WVU Photo/Guilherme Pereira)

Starting this summer, undergraduate students will perform hands-on, cutting-edge robotics research that solves real-world problems in Appalachia while working in the five robotics labs at West Virginia University .

The WVU Research Experience for Undergraduates program is funded by a $454,000 grant from the National Science Foundation and is accepting applications from undergraduates in the U.S. through May 10.

Participants in the 10-week program, which starts May 20, will perform experimental research that responds to several challenges of using mobile robotics for field applications within rural environments like Appalachia’s dense forests and harsh terrains.

Mentored by faculty members from the robotics program within the WVU Benjamin M. Statler College of Engineering and Mineral Resources , the undergraduates will conduct independent research in areas such as drone navigation in forests, using autonomous blimps to monitor a farm or helping robots make decisions when driving on forest trails.

“This project aims to open opportunities for participants, largely from the Appalachian region, to use robotics as a tool to enable change,” said Jason Gross , principal investigator, REU site director, and associate professor and chair of the Department of Mechanical, Materials and Aerospace Engineering .

“As an NSF Research Experience for Undergraduates site, we’ll be investigating practical questions that must be addressed to enable the use of robotics in rural settings like much of Appalachia. We are excited that the project focuses on robotics application domains that are relevant to the state and region and that we have this opportunity to explore how robotics can better contribute to the WVU land-grant mission.”

Students from institutions in Appalachia are especially encouraged to apply.

Application reviews will start immediately and positions will be filled on a rolling basis.

According to Gross, participants will study how a drone can fly through vegetation, how to track GPS under a forest canopy and how robotics can adapt swarming behaviors from models found in nature, among other topics critical to building robots that can function in remote, mountainous regions.

For example, Gross explained, “Flying drones is complicated under forest canopies because the availability and quality of the Global Navigation Satellite System are hindered by the signal attenuation of dense forests. On the other hand, this presents an interesting problem, because GNSS is not completely unavailable for use — it can be made available when going above tree cover. Since the nature of tree cover is that some light shines through, students who work on this problem will explore solutions like pairing a fisheye camera with GNSS signals to predict signal quality.”

Guilherme Pereira , associate professor in the Department of Mechanical, Materials and Aerospace Engineering, is co-principal investigator and associate director of the REU site. Pereira pointed to the fact that although important management and preservation activities in Appalachian forests rely on surveying large areas to detect invasive species, fires and tree diseases, current surveying approaches are limited.

“Surveying of our forests is limited in scale by human resources,” Pereira said. “It’s limited by safety when it’s done with manned airplanes and it’s limited by accuracy when we rely on satellite imagery. To overcome these limitations, the use of drones flying under the canopy of the forests has been suggested — but flying in a forest is challenging both due to the large number of unmapped obstacles that need to be avoided and the presence of small flexible obstacles like leaves and twigs that can trap the drone.

“Our student researchers will solve this problem by developing a resilient, intelligent drone that can collide with obstacles to classify them. Once the objects are classified, the drone can deal with them by avoiding or pushing them away.”

All students receive a $700 weekly stipend in addition to coverage of their lodging, meals, travel and training. The program will host ten students a year over the summers of 2024, 2025 and 2026.

Applicants will have the opportunity to specify their research interests and to be assigned to work with mentors including Gross, Pereira, professor Yu Gu , assistant professor Nicholas Szczecinski , research assistant professor Cagri Kilic , assistant professor Xi Yu and teaching assistant professor Dimas Abreu Archanjo Dutra in the WVU Navigation Lab , Field and Aerial Robotics Laboratory , Neuro-Mechanical Intelligence Laboratory , Autonomous Multi-Agent Systems Lab and Interactive Robotics Laboratory .

“The undergraduates who join us this summer will conduct independent research on problems with significant societal impact,” Gross said. “They’ll participate in panel discussions, weekly research presentations, a research symposium, and many other activities — but most of all they will advance the state of the art of mobile robotics.”

Find the program application.

MEDIA CONTACT: Micaela Morrissette Research Writer WVU Research Communications 304-709-6667; [email protected]

Call 1-855-WVU-NEWS for the latest West Virginia University news and information from  WVUToday .


Common statistical and research design problems in manuscripts submitted to high-impact medical journals

Sara Fernandes-Taylor

1 Center for Health Care Evaluation, VA Palo Alto Health Care System and Stanford University School of Medicine, 795 Willow Road (MPD-152), Menlo Park, CA 94025, USA

Jenny K. Hyun

2 National Center for PTSD, VA Palo Alto Health Care System, 795 Willow Road, Menlo Park, CA 94025, USA

Rachelle N. Reeder

Alex HS Harris

To assist educators and researchers in improving the quality of medical research, we surveyed the editors and statistical reviewers of high-impact medical journals to ascertain the most frequent and critical statistical errors in submitted manuscripts.

The Editors-in-Chief and statistical reviewers of the 38 medical journals with the highest impact factor in the 2007 Science Journal Citation Report and the 2007 Social Science Journal Citation Report were invited to complete an online survey about the statistical and design problems they most frequently found in manuscripts. Content analysis of the responses identified major issues. Editors and statistical reviewers (n = 25) from 20 journals responded. Respondents described problems that we classified into two broad themes: A. statistical and sampling issues and B. inadequate reporting clarity or completeness. Problems included in the first theme were (1) inappropriate or incomplete analysis, including violations of model assumptions and analysis errors, (2) uninformed use of propensity scores, (3) failing to account for clustering in data analysis, (4) improperly addressing missing data, and (5) power/sample size concerns. Issues subsumed under the second theme were (1) inadequate description of the methods and analysis and (2) misstatement of results, including undue emphasis on p-values and incorrect inferences and interpretations.

Conclusions

The scientific quality of submitted manuscripts would increase if researchers addressed these common design, analytical, and reporting issues. Improving the application and presentation of quantitative methods in scholarly manuscripts is essential to advancing medical research.

Attention to statistical quality in medical research has increased in recent years owing to the greater complexity of statistics in medicine and the focus on evidence-based practice. The editors and statistical reviewers of medical journals are charged with evaluating the scientific merit of submitted manuscripts, often requiring authors to conduct further analysis or content revisions to ensure the transparency and appropriate interpretation of results. Still, many manuscripts are rejected because of irreparable design flaws or inappropriate analytical strategies. As a result, researchers undertake the long and arduous process of submitting to decreasingly selective journals until the manuscript is eventually published. Aside from padding the authors' résumés, publishing results of dubious validity benefits few and makes development of clinical practice guidelines more time-consuming [ 1 , 2 ]. This undesirable state of affairs might often be prevented by seeking statistical and methodological expertise [ 3 ] during the design and conduct of research and during data analysis and manuscript preparation.

To assist educators and medical researchers in improving the quality of medical research, we conducted a survey of the editors and statistical reviewers of high-impact medical journals to identify the most frequent and critical statistical and design-related errors in submitted manuscripts. Methods experts have documented the use and misuse of quantitative methods in medical research, including statistical errors in published works and how authors use analytical expertise in manuscript preparation [ 3 - 11 ]. However, this is the first multi-journal survey of medical journal editors regarding the problems they see most often and what they would like to communicate to researchers. Scientists may be able to use the results of this study as a springboard to improve the impact of their research, their teaching of medical statistics, and their publication record.

Sample and Procedure

We identified the 20 medical journals from the "Medicine, General & Internal" and "Biomedical" categories with the highest impact factor in each of the 2007 Science Journal Citation Report and the 2007 Social Science Journal Citation Report. Journals that do not publish results with statistical analysis were discarded, yielding 38 high impact journals. Twelve of these journals endorse the CONSORT criteria for randomized controlled trials, 6 endorse the STROBE guidelines for observational studies, and 5 endorse PRISMA criteria for systematic reviews. These journals are listed in Additional file 1 [ 12 ].

The Editors-in-Chief and identifiable statistical reviewers of these journals were mailed a letter informing them of the online survey and describing the forthcoming email invitation that contained an electronic link to the survey instrument (sent within the week). We sent one email reminder a week after the initial email invitation in spring of 2008. We also requested that the Editors-in-Chief forward the invitation to their statistically-oriented editors or reviewers in addition to or instead of completing the survey themselves. An electronic consent form with the principal investigator's contact information was provided to potential respondents emphasizing the voluntary and confidential nature of participation. The Stanford University Panel on Human Subjects approved the protocol. This is one in a series of five studies surveying the editors and reviewers of high-impact journals in health and social science disciplines (medicine, public health, psychology, psychiatry, and health services) [ 13 , 14 ].

Survey Content

The survey contained three parts: (1) Short-answer questions about the journals for which the respondents served, how many manuscripts they handled in a typical month, and their areas of statistical and/or research design expertise; (2) The main, open-ended question which asked: "As an editor-in-chief or a statistically-oriented reviewer, you provide important statistical guidance to many researchers on a manuscript-by-manuscript basis. If you could communicate en masse to researchers in your field, what would you say are the most important (common and high impact) statistical issues you encounter in reviewing manuscripts? Please describe the issues as well as what you consider to be adequate and inadequate strategies for addressing them."; and (3) One to four follow-up questions based on the respondents' self-identified primary area of statistical expertise. These questions were developed by polling 69 researchers regarding what statistical questions they would want to ask the editors or statistical reviewers of major journals.

Responses to the open-ended questions were analyzed qualitatively using content analysis to identify dominant themes. We coded the responses to the main question on the most common and high impact (per the wording of the question) statistical issue and the respondents' proposed solutions to those issues. In the analysis phase, two of the authors resolved coding criteria and sorted the responses according to the two major categories that emerged from the data.

A. Statistical and sampling issues

B. Inadequate reporting clarity or completeness

The results are presented in each category from most frequently mentioned to least frequently mentioned.

Respondent Characteristics

Respondents to the survey comprised 25 editors and statistical reviewers (of 60 solicited) who manage manuscripts for 20 of the 38 journals in the sampling frame. Respondents indicated reviewing or consulting on a mean of 47 (range: 0.5 to 250) manuscripts per month. The most frequently reported areas of expertise (multiple responses possible) were the design and analysis of clinical trials (n = 12), general statistics (n = 14), quasi-experimental/observational studies (n = 12), and epidemiology (n = 11).

Respondents' Suggestions for Statistical and Sampling Issues

Respondents often noted problems that are fundamental to research design and quantitative methods, including analytical strategies that are incomplete or mismatched with the data structure or scientific questions, failure to address missing data, and low power. Below, we describe the specific issues mentioned by respondents and provide accessible references for more detailed discussion.

(1) Inappropriate or incomplete analysis: In addition to minor arithmetic and calculation errors, respondents expressed concern over researchers' choice of statistical tests. Specifically, frequent problems exist in the appropriateness of statistical tests chosen for the questions of interest and for the data structure. These include using parametric statistical tests when the sample size is small or in the presence of obviously violated assumptions [ 15 ]. In addition, researchers may fail to account for the sampling framework in survey-based studies with appropriate weighting of observations [ 16 , 17 ]. Other errors include confusing the exposure and outcome variables in the analysis phase. That is, in laboratory data, the exposure of interest is mistakenly analyzed as the outcome in analyses. In a similar vein, researchers sometimes mistakenly report the discrimination of a clinical prediction rule or internal validation method (e.g., bootstrap) using the training dataset rather than the test set [ 18 , 19 ]. Other concerns included creating dichotomous variables out of continuous ones without legitimate justification, thereby discarding information, and the use of stepwise regression analysis, which, among other problems, introduces bias into parameter estimates and tends to over-fit the data. See Malek, et al. [ 20 ] for a pithy discussion of the pitfalls of stepwise regression and additional references.
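To illustrate the first point, the sketch below checks a normality assumption before choosing between a parametric and a non-parametric two-sample test; the data are simulated and the decision rule is deliberately simplified, not a procedure recommended by the respondents.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Small, skewed samples of a hypothetical biomarker in two groups.
group_a = rng.exponential(scale=2.0, size=12)
group_b = rng.exponential(scale=3.0, size=12)

# Shapiro-Wilk test as a rough check of the normality assumption.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    stat, p = stats.ttest_ind(group_a, group_b)        # parametric
    test = "t-test"
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)     # non-parametric
    test = "Mann-Whitney U"
print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")
```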

(2) Failure to account for clustered data: The substantive area of analysis that received the most attention from respondents was the failure to account for clustered data and the appropriate use of hierarchical or mixed linear models. The reviewers often observed that authors fail to account for clustering when it is present. Examples include data collected on patients over time, where successive observations depend on those in the previous time period(s), or multiple observations nested in larger units (e.g., patients within hospitals). In these situations, reviewers prefer to see an analytical approach that does not rely on an independence assumption and properly accounts for clustering, such as time series analysis, generalized linear mixed models, or generalized estimating equations where the population-averaged effect is of interest [ 21 - 24 ].
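As a hedged illustration of one such approach (not taken from any particular respondent), the sketch below fits a population-averaged model with generalized estimating equations to simulated data in which patients are nested within hospitals; the data, variable names, and working correlation are assumptions:

```python
# Sketch with simulated clustered data: patients nested within hospitals.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_hosp, n_per = 20, 30
df = pd.DataFrame({
    "hospital": np.repeat(np.arange(n_hosp), n_per),
    "exposure": rng.binomial(1, 0.5, n_hosp * n_per),
})
hosp_effect = rng.normal(0, 0.5, n_hosp)[df["hospital"]]      # cluster-level variation
p = 1 / (1 + np.exp(-(-1 + 0.7 * df["exposure"] + hosp_effect)))
df["outcome"] = rng.binomial(1, p)

# GEE with an exchangeable working correlation gives a population-averaged effect
# while accounting for within-hospital clustering.
gee = smf.gee("outcome ~ exposure", groups="hospital", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())

# A cluster-specific alternative for a continuous outcome would be a mixed model,
# e.g. smf.mixedlm("y ~ x", df, groups=df["hospital"]).fit()
```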

(3) Addressing missing data: Frequently, researchers fail to mention the missing data in their sample or fail to describe its extent. Problems with low response rates are often not addressed or are inadequately discussed, and longitudinal studies may fail to address differential dropout rates between groups that may affect the outcome. Moreover, researchers who do discuss missing data often do not describe their methods of data imputation or their evaluation of whether missingness is significantly related to any observed variables. Those researchers who do explicitly address missing data regularly use suboptimal approaches. For example, investigators with longitudinal data often employ complete case analysis, last observation carried forward (LOCF), or other single imputation methods. These approaches can bias estimates and understate the sample variance. Preferably, researchers would evaluate the missing at random (MAR) assumption and conduct additional sensitivity analyses if the MAR assumption is suspect [ 25 , 26 ]. Finally, a detailed qualitative description of the loss process is essential, including the likelihood of MAR and the likely direction of any bias.
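For illustration only (the data and model formula below are hypothetical, and this is one of several reasonable options), multiple imputation by chained equations can replace complete-case analysis or LOCF when the MAR assumption is plausible:

```python
# Sketch with simulated data containing missing values in one covariate.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(2)
n = 300
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1 + 0.8 * x1 - 0.3 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
df.loc[rng.random(n) < 0.2, "x2"] = np.nan        # ~20% of x2 set missing (MAR here)

imp = mice.MICEData(df)                           # chained-equation imputation engine
results = mice.MICE("y ~ x1 + x2", sm.OLS, imp).fit(10, 20)  # 10 burn-in cycles, 20 imputations
print(results.summary())                          # estimates pooled across imputations
```

A sensitivity analysis under plausible not-missing-at-random scenarios would still be needed when MAR is in doubt.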

(4) Power and sample size issues: Power was another area that reviewers mentioned as problematic. Respondents noted that power calculations are often not done at all or are done post hoc rather than being incorporated into the design and sampling framework [ 27 ]. In novel studies where no basis for a power calculation exists, this should be explicitly noted.
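As a hedged illustration, an a priori sample-size calculation for a simple two-group comparison might look like the following sketch; the effect size, power, and alpha are assumptions chosen for the example, not recommendations:

```python
# Sketch: planned sample size for a two-sample t-test at an assumed effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed standardized difference
                                   power=0.80, alpha=0.05, alternative="two-sided")
print(f"approximately {n_per_group:.0f} participants per group")   # roughly 64 per group
```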

(5) Propensity scores and unmeasured confounding: Researchers often use propensity scores without recognizing the potential bias caused by unmeasured confounding [ 28 - 30 ]. Propensity scores are the probabilities of the individuals in a study being assigned to a particular condition given a set of known covariates, and they are used to reduce confounding by those covariates in observational studies. The bias problem arises when an essential confounder is not measured; the use of propensity scores in this situation can exacerbate the bias already present in an analysis.
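The following sketch (hypothetical data and variable names; not a recommended analysis for any particular study) shows the basic mechanics of propensity-score weighting with logistic regression, together with the caveat that only measured covariates enter the score:

```python
# Sketch with simulated observational data: estimate propensity scores and apply
# inverse-probability-of-treatment weights. Only the *measured* covariates (age,
# severity) are adjusted for; an unmeasured confounder would still bias the result.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-0.02 * (age - 60) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)
outcome = 2 - 0.5 * treated + 0.05 * (age - 60) + severity + rng.normal(0, 1, n)
df = pd.DataFrame({"age": age, "severity": severity,
                   "treated": treated, "outcome": outcome})

ps = smf.logit("treated ~ age + severity", data=df).fit(disp=False).predict(df)
weights = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))   # IPTW weights

effect = smf.wls("outcome ~ treated", data=df, weights=weights).fit()
print(effect.params["treated"])   # weighted treatment-effect estimate
# In practice, robust or bootstrap standard errors and checks of covariate
# balance and overlap would also be needed.
```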

Respondents' Suggestions for Inadequate Reporting Clarity or Completeness

In addition to specific analytical concerns, respondents also reported common errors in the text of methods and results sections. Although some of these problems are semantic, others reflect a misinterpretation or misunderstanding of the methods employed.

(1) Inadequate description of methods and analysis: Respondents observed that manuscripts often do not contain a clear description of the analysis. Authors should provide as much methodological detail as possible, including targeted references and a statistical appendix if appropriate. One respondent provided a rule of thumb whereby an independent reader should be able to perform the same analysis based solely on the paper. Other issues included inadequate description of the study cohort, recruitment, and response rate, and the presentation of relative differences (e.g., odds ratio = 1.30) in the absence of absolute differences (e.g., 2.6% versus 2%). As one respondent wrote, "Since basic errors that are easily identified remain common, there is real concern of the presentation of analyses for more complex methods where the errors will not be testable by the reviewer."
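As a small worked check of the numbers quoted above (the rates are taken from the example in the text), event rates of 2.6% versus 2.0% correspond to an odds ratio of roughly 1.3 but an absolute difference of only 0.6 percentage points:

```python
p1, p0 = 0.026, 0.020
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
risk_difference = p1 - p0
print(f"odds ratio = {odds_ratio:.2f}, absolute difference = {risk_difference:.1%}")
# odds ratio = 1.31, absolute difference = 0.6%
```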

(2) Miscommunication of results: Researchers frequently report likelihood ratios for diagnostic tests (the probability of a given test result among individuals with a particular condition relative to the probability of that result among individuals without the condition) without the associated sensitivity and specificity. Although the likelihood ratio is very useful for learning how well a given test result predicts the risk of the condition [ 31 , 32 ], editors also appreciate the inclusion of the rates of true positives and true negatives to give the reader a complete picture of the analysis.
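For a concrete (and purely illustrative) example with assumed accuracy values, the positive and negative likelihood ratios follow directly from sensitivity and specificity, which is why editors like to see all three reported together:

```python
# Assumed values for illustration: sensitivity 0.90, specificity 0.80.
sensitivity, specificity = 0.90, 0.80
lr_positive = sensitivity / (1 - specificity)        # LR+ = 4.5
lr_negative = (1 - sensitivity) / specificity        # LR- = 0.125
print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.3f}")
```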

Respondents also noted an undue emphasis on p-values and an excessive focus on significant results. For example, authors often highlight the significance of a single dummy variable for a multi-category predictor that is not significant overall; the overall significance of a multi-category predictor should be tested with an appropriate joint test of significance [ 33 ]. In turn, non-significant results are seldom presented in manuscripts. Authors leave out indeterminate test results when describing diagnostic test performance and fail to report confidence intervals along with p-values. An analogous problem is the "unthinking acceptance" of p < 0.05 as significant. Researchers can fall prey to alpha errors and take the customary but curious position of touting significance for p-values just below 0.05 and non-significance for those just above the threshold. In addition, authors may trumpet a significant result in a large study when the size of the difference is clinically unimportant. In this situation, a focus on the effect size would be more appropriate [ 34 ].
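One simple way to carry out such a joint test, sketched here on simulated data with hypothetical variable names, is a likelihood-ratio comparison of the model with and without the multi-category predictor:

```python
# Sketch: joint test of a four-level categorical predictor via nested models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(4)
n = 400
group = rng.choice(["a", "b", "c", "d"], size=n)
y = rng.normal(size=n) + np.where(group == "d", 0.4, 0.0)
df = pd.DataFrame({"y": y, "group": group})

full = smf.ols("y ~ C(group)", data=df).fit()
reduced = smf.ols("y ~ 1", data=df).fit()

lr_stat = 2 * (full.llf - reduced.llf)               # likelihood-ratio statistic
df_diff = full.df_model - reduced.df_model           # 3 dummy coefficients tested jointly
p_joint = stats.chi2.sf(lr_stat, df_diff)
print(f"joint p-value for the group predictor = {p_joint:.3f}")
```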

Journal editors and statistical reviewers of high-impact medical journals identified several common problems that significantly and frequently affect the quality of submitted manuscripts. The majority of respondents underscored fundamentals of research methods that should be familiar to all scientists, including rigorous descriptions of sampling and analytic strategies, recognition of the strengths and drawbacks of a particular analytical approach, and the appropriate handling of missing data. Respondents also discussed concerns about more advanced methods in the medical research toolkit; specifically, authors may not understand or report the limitations of their analysis strategies, or may fail to hedge them with sensitivity analyses and more tempered interpretations. Finally, respondents emphasized the importance of the clear and accurate presentation of methods and results.

Although this study was not intended as a systematic or comprehensive catalog of all statistical problems in medical research, it does shed some light on common issues that delay or preclude the publication of research that might otherwise be sound and important. Moreover, the references included in this paper may provide useful analytical guidance for researchers and educators. Accordingly, this work serves to inform medical education and research, to improve the overall quality of manuscripts and published research, and to increase the likelihood of publication.

In addition, these data provide evidence for the importance of soup-to-nuts methodological guidance in the research process. Statisticians and methodological experts should be consulted during the study design, analysis, and manuscript writing phases to improve the quality of research and to ensure the clear and appropriate application of quantitative methods. Although this may seem obvious, previous work by Altman and his colleagues demonstrates that this is rarely the case in medical research [ 3 ]. Rather, statistical experts are often consulted only during the analysis phase, if at all, and even then may not be credited with authorship [ 35 ]. In addition to statistical guidance, researchers should consult reporting guidelines associated with their intended research design, such as CONSORT for randomized, controlled trials, STROBE for observational studies, and PRISMA for systematic reviews. Adherence to such guidelines helps to ensure a common standard for reporting and a critical level of transparency in medical research. Professional organizations and prominent journals, including the Cochrane Collaboration and The Lancet, peer-review research protocols, which also helps to create a standard for research design and methods.

This work should be interpreted in light of several important limitations. We did not collect data on the professional position (e.g., academic department, industry) of the respondents and consequently do not know the composition of the sample or how this may have shaped our findings. Although the response rate was similar to other surveys of journal editors, and we have no reason to suspect significant response bias, the possibility of response bias remains. In addition, the size of our sample may limit the generalizability of our findings.

Overall, this work is intended to inform researchers and educators on the most common pitfalls in quantitative medical research, pitfalls that journal editors note as problematic. Given the recent clinical research priorities of health care agenda-setting organizations, such as comparative effectiveness research and evidence-based practice, medical research is expected to meet a new bar in terms of valid and transparent inquiry [ 36 - 39 ]. Improving the application and presentation of quantitative methods in scholarly manuscripts is essential to meeting the current and future goals of medical research.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

SFT was responsible for the analysis and interpretation of data, drafting the manuscript, and final approval of the draft. JKH made substantial contributions to the conception and design of the study and the survey instrument, was involved in revising the manuscript, and gave final approval. RNR aided in data collection, analysis and interpretation of the data, manuscript revisions, and gave final approval. AHSH made substantial contributions to the conception and design of the study and the survey instrument, was involved in revising the manuscript, and gave final approval.

Supplementary Material

Appendix . 2007 Journal Citation Report titles included in the sampling frame.

Acknowledgements and Funding

The views expressed herein are the authors' and not those of the Department of Veterans Affairs. This study was partially supported by the VA Office of Research and Development, Health Services Research and Development Service (MRP-05-168-1).

1. Steinberg EP, Luce BR. Evidence Based? Caveat Emptor! Health Affairs. 2005;24(1):80–92. doi:10.1377/hlthaff.24.1.80.
2. GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ. 2004:1490–1493.
3. Altman DG, Goodman SN, Schroter S. How statistical expertise is used in medical research. JAMA. 2002;287(21):2817–2820. doi:10.1001/jama.287.21.2817.
4. Altman DG. Poor-quality medical research: what can journals do? JAMA. 2002;287(21):2765–2767. doi:10.1001/jama.287.21.2765.
5. Chalmers I, Altman D. How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet. 1999;353(9151):490–493. doi:10.1016/S0140-6736(98)07618-1.
6. Gardner M, Bond J. An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA. 1990;263(10):1355. doi:10.1001/jama.263.10.1355.
7. Goodman S, Altman D, George S. Statistical reviewing policies of medical journals: Caveat lector? Journal of General Internal Medicine. 1998;13(11):753–756. doi:10.1046/j.1525-1497.1998.00227.x.
8. Gore S, Jones G, Thompson S. The Lancet's statistical review process: areas for improvement by authors. The Lancet. 1992;340(8811):100–102. doi:10.1016/0140-6736(92)90409-V.
9. McKinney W, Young M, Hartz A, Bi-Fong Lee M. The inexact use of Fisher's exact test in six major medical journals. JAMA. 1989;261(23):3430. doi:10.1001/jama.261.23.3430.
10. Porter A. Misuse of correlation and regression in three medical journals. Journal of the Royal Society of Medicine. 1999;92(3):123.
11. Schriger DL, Altman DG. Inadequate post-publication review of medical research. BMJ. c3803.
12. Institute for Scientific Information. Journal Citation Report. Thompson Scientific; 2007.
13. Harris AHS, Reeder R, Hyun JK. Common statistical and research design problems in manuscripts submitted to high-impact psychiatry journals: What editors and reviewers want authors to know. Journal of Psychiatric Research. 2009;43:1231–1234. doi:10.1016/j.jpsychires.2009.04.007.
14. Harris AHS, Reeder RN, Hyun JK. Common statistical and research design problems in manuscripts submitted to high-impact public health journals. The Open Public Health Journal. 2009;2:44–48. doi:10.2174/1874944500902010044.
15. Sheskin DJ. Handbook of Parametric and Nonparametric Statistical Procedures. 4th ed. Boca Raton: Chapman & Hall; 2007.
16. Korn EL, Graubard BI. Analysis of large health surveys: Accounting for the sampling design. Journal of the Royal Statistical Society Series A (Statistics in Society). 1995;158(2):263–295. doi:10.2307/2983292.
17. Lee ES, Forthofer RN, eds. Analyzing Complex Survey Data. 2nd ed. Thousand Oaks: Sage Publications, Inc; 2006.
18. Browne MW. Cross-validation methods. Journal of Mathematical Psychology. 2000;44(1):108–132. doi:10.1006/jmps.1999.1279.
19. Efron B, Gong G. A leisurely look at the bootstrap, the jackknife, and cross-validation. The American Statistician. 1983;37(1):36–48. doi:10.2307/2685844.
20. Malek MH, Berger DE, Coburn JW. On the inappropriateness of stepwise regression analysis for model building and testing. European Journal of Applied Physiology. 2007;101:263–264. doi:10.1007/s00421-007-0485-9.
21. Diggle PJ, Heagerty PJ, Liang KY, Zeger SL. Analysis of Longitudinal Data. New York: Oxford University Press; 2002.
22. Hardin JW, Hilbe JM. Generalized Estimating Equations. Boca Raton: Chapman & Hall; 2003.
23. Raudenbush SW, Bryck AS. Hierarchical Linear Models: Applications and Data Analysis Methods. 2nd ed. Thousand Oaks: Sage Publications, Inc; 2002.
24. Snijders T, Bosker RJ. Multilevel Analysis. Thousand Oaks: Sage Publications, Inc; 1999.
25. Daniels M, Hogan J. Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis. New York: Chapman & Hall; 2008.
26. Rubin DB. Inference and missing data. Biometrika. 1978;63:581–592.
27. Zumbo B, Hubley A. A note on misconceptions concerning prospective and retrospective power. The Statistician. 1998;47(2):385–388.
28. D'Agostino RB. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine. 1998;17:2265–2281. doi:10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B.
29. Luellen JK, Stadish WR, Clark MH. Propensity scores: An introduction and experimental test. Evaluation Review. 2005;29(6):530–558. doi:10.1177/0193841X05275596.
30. McCaffrey DF, Ridgeway G, Morral AR. Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods. 2004;9(4):403–425.
31. Altman DG, Bland JM. Diagnostic tests 1: Sensitivity and specificity. BMJ. 1994;308:1552.
32. Deeks JJ, Altman DG. Diagnostic tests 4: Likelihood ratios. BMJ. 2004;329:168–169. doi:10.1136/bmj.329.7458.168.
33. Wooldridge JM. Introductory Econometrics: A Modern Approach. Mason, OH: South-Western Cengage Learning; 2009.
34. Gotzsche PC. Believability of relative risks and odds ratios in abstracts: Cross sectional study. BMJ. 2006;333(7561):231–234. doi:10.1136/bmj.38895.410451.79.
35. Bacchetti P. Peer review of statistics in medical research: The other problem. BMJ. 2002;324(7348):1271–1273. doi:10.1136/bmj.324.7348.1271.
36. Evidence-based Medicine. http://www.ahrq.gov/browse/evidmed.htm
37. Institute of Medicine. The Learning Healthcare System: Workshop Summary (IOM Roundtable on Evidence-Based Medicine). Washington, DC: National Academies Press; 2007.
38. Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. Washington, DC: National Academies Press; 2009.
39. Lang TA, Secic M. How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. Philadelphia: American College of Physicians; 1997.

Fall 2024 CSCI Special Topics Courses

Cloud Computing

Meeting Time: 09:45 AM‑11:00 AM TTh  Instructor: Ali Anwar Course Description: Cloud computing serves many large-scale applications ranging from search engines like Google to social networking websites like Facebook to online stores like Amazon. More recently, cloud computing has emerged as an essential technology to enable emerging fields such as Artificial Intelligence (AI), the Internet of Things (IoT), and Machine Learning. The exponential growth of data availability and demands for security and speed has made the cloud computing paradigm necessary for reliable, financially economical, and scalable computation. The dynamicity and flexibility of Cloud computing have opened up many new forms of deploying applications on infrastructure that cloud service providers offer, such as renting of computation resources and serverless computing.    This course will cover the fundamentals of cloud services management and cloud software development, including but not limited to design patterns, application programming interfaces, and underlying middleware technologies. More specifically, we will cover the topics of cloud computing service models, data centers resource management, task scheduling, resource virtualization, SLAs, cloud security, software defined networks and storage, cloud storage, and programming models. We will also discuss data center design and management strategies, which enable the economic and technological benefits of cloud computing. Lastly, we will study cloud storage concepts like data distribution, durability, consistency, and redundancy. Registration Prerequisites: CS upper div, CompE upper div., EE upper div., EE grad, ITI upper div., Univ. honors student, or dept. permission; no cr for grads in CSci. Complete the following Google form to request a permission number from the instructor ( https://forms.gle/6BvbUwEkBK41tPJ17 ).

CSCI 5980/8980 

Machine Learning for Healthcare: Concepts and Applications

Meeting Time: 11:15 AM‑12:30 PM TTh  Instructor: Yogatheesan Varatharajah Course Description: Machine Learning is transforming healthcare. This course will introduce students to a range of healthcare problems that can be tackled using machine learning, different health data modalities, relevant machine learning paradigms, and the unique challenges presented by healthcare applications. Applications we will cover include risk stratification, disease progression modeling, precision medicine, diagnosis, prognosis, subtype discovery, and improving clinical workflows. We will also cover research topics such as explainability, causality, trust, robustness, and fairness.

Registration Prerequisites: CSCI 5521 or equivalent. Complete the following Google form to request a permission number from the instructor ( https://forms.gle/z8X9pVZfCWMpQQ6o6  ).

Visualization with AI

Meeting Time: 04:00 PM‑05:15 PM TTh  Instructor: Qianwen Wang Course Description: This course aims to investigate how visualization techniques and AI technologies work together to enhance understanding, insights, or outcomes.

This is a seminar style course consisting of lectures, paper presentation, and interactive discussion of the selected papers. Students will also work on a group project where they propose a research idea, survey related studies, and present initial results.

This course will cover the application of visualization to better understand AI models and data, and the use of AI to improve visualization processes. Readings for the course cover papers from the top venues of AI, Visualization, and HCI, topics including AI explainability, reliability, and Human-AI collaboration.    This course is designed for PhD students, Masters students, and advanced undergraduates who want to dig into research.

Registration Prerequisites: Complete the following Google form to request a permission number from the instructor ( https://forms.gle/YTF5EZFUbQRJhHBYA  ). Although the class is primarily intended for PhD students, motivated juniors/seniors and MS students who are interested in this topic are welcome to apply, provided they detail their qualifications for the course.

Visualizations for Intelligent AR Systems

Meeting Time: 04:00 PM‑05:15 PM MW  Instructor: Zhu-Tian Chen Course Description: This course aims to explore the role of Data Visualization as a pivotal interface for enhancing human-data and human-AI interactions within Augmented Reality (AR) systems, thereby transforming a broad spectrum of activities in both professional and daily contexts. Structured as a seminar, the course consists of two main components: the theoretical and conceptual foundations delivered through lectures, paper readings, and discussions; and the hands-on experience gained through small assignments and group projects. This class is designed to be highly interactive, and AR devices will be provided to facilitate hands-on learning.    Participants will have the opportunity to experience AR systems, develop cutting-edge AR interfaces, explore AI integration, and apply human-centric design principles. The course is designed to advance students' technical skills in AR and AI, as well as their understanding of how these technologies can be leveraged to enrich human experiences across various domains. Students will be encouraged to create innovative projects with the potential for submission to research conferences.

Registration Prerequisites: Complete the following Google form to request a permission number from the instructor ( https://forms.gle/Y81FGaJivoqMQYtq5 ). Students are expected to have a solid foundation in either data visualization, computer graphics, computer vision, or HCI. Having expertise in all would be perfect! However, a robust interest and eagerness to delve into these subjects can be equally valuable, even though it means you need to learn some basic concepts independently.

Sustainable Computing: A Systems View

Meeting Time: 09:45 AM‑11:00 AM  Instructor: Abhishek Chandra Course Description: In recent years, there has been a dramatic increase in the pervasiveness, scale, and distribution of computing infrastructure: ranging from cloud, HPC systems, and data centers to edge computing and pervasive computing in the form of micro-data centers, mobile phones, sensors, and IoT devices embedded in the environment around us. The growing amount of computing, storage, and networking demand leads to increased energy usage, carbon emissions, and natural resource consumption. To reduce their environmental impact, there is a growing need to make computing systems sustainable. In this course, we will examine sustainable computing from a systems perspective. We will examine a number of questions:
  • How can we design and build sustainable computing systems?
  • How can we manage resources efficiently?
  • What system software and algorithms can reduce computational needs?
Topics of interest would include:
  • Sustainable system design and architectures
  • Sustainability-aware systems software and management
  • Sustainability in large-scale distributed computing (clouds, data centers, HPC)
  • Sustainability in dispersed computing (edge, mobile computing, sensors/IoT)

Registration Prerequisites: This course is targeted towards students with a strong interest in computer systems (Operating Systems, Distributed Systems, Networking, Databases, etc.). Background in Operating Systems (Equivalent of CSCI 5103) and basic understanding of Computer Networking (Equivalent of CSCI 4211) is required.



  • Open access
  • Published: 20 April 2024

Research on mix design and mechanical performances of MK-GGBFS based geopolymer pastes using central composite design method

  • Ziqi Yao 1 ,
  • Ling Luo 1 , 2 ,
  • Yongjun Qin 1 , 2 ,
  • Jiangbo Cheng 1 &
  • Changwei Qu 1  

Scientific Reports, volume 14, Article number: 9101 (2024)


  • Civil engineering
  • Mechanical properties
  • Structural materials

In order to alleviate environmental problems and reduce CO 2 emissions, geopolymers have drawn attention as a kind of alkali-activated material. Compared with the traditional cement industry, geopolymers offer easier access to raw materials and are greener and more environmentally friendly. Their special reaction mechanism and gel structure yield excellent characteristics such as rapid hardening, high strength, and resistance to acids and alkalis. In this paper, geopolymer pastes were made with metakaolin (MK) and ground granulated blast furnace slag (GGBFS) as precursors. The effects of the liquid–solid ratio (L/S) and the modulus of sodium silicate (Ms) on the performance of MK-GGBFS based geopolymer paste (MSGP) were characterized by workability, strength, and microstructural tests. Regression equations were obtained by the central composite design method to optimize the mix design of MSGP, and the goodness of fit of all the equations was more than 98%. Based on the experimental results, the optimum mix design was found to have an L/S of 0.75 and an Ms of 1.55. The workability of MSGP was significantly improved while maintaining the strength under the optimum mix design: the initial setting time of MSGP decreased by 71.8%, while the fluidity and the 28-d compressive strength both increased by 15.3%, compared with ordinary Portland cement pastes. Therefore, geopolymers are a promising alternative cementitious material that can consume a large amount of MK and GGBFS and promote green and clean production.

Introduction

Ground granulated blast furnace slag (GGBFS) is a solid waste produced from blast furnaces during pig iron smelting; about 0.3–1.0 t of blast furnace slag is produced for every 1 t of iron. In China, the production of industrial solid waste was as high as 3.787 billion t in 2020, of which 0.69 billion t was metallurgical waste slag, accounting for 18.19% 1 . Meanwhile, the generation of industrial solid waste is accompanied by the emission of greenhouse gases. It is estimated that the year-on-year growth of CO 2 emissions rose from 0.9% in the 1990s to 3% in the 2000s, while annual CO 2 emissions are nearly 29.6 billion t and on an increasing trend 2 , 3 . The traditional cement industry accounts for about 8–9% of total anthropogenic CO 2 emissions 4 . To mitigate the situation, GGBFS can be considered as a raw material to produce geopolymers. Geopolymers are a kind of alkali-activated material, typically made from GGBFS, metakaolin (MK), and fly ash (FA) 5 , 6 , 7 . These materials have excellent properties such as high compressive strength, good durability, high temperature resistance, and good acid resistance 8 , 9 , 10 , 11 . Compared with the production of ordinary Portland cement (OPC), the CO 2 emissions of alkali-activated materials can be reduced by up to 80%, ensuring material performance while achieving the aim of green and energy-saving production 12 .

Several attempts in different aspects have been made to broaden the application of geopolymers in the construction industry 13 , 14 , 15 , 16 , 17 . Based on previous research, GGBFS-based alkali-activated materials tend to have poor workability, high drying shrinkage, and quick setting, while MK-based alkali-activated materials are characterized by slow setting and mitigation of drying shrinkage 18 , 19 , 20 , 21 , 22 , 23 . A good synergy of GGBFS and MK in alkali-activated materials can provide good workability, mechanical strength, and durability 23 , 24 , 25 . The study of Alanazi et al. pointed out that the partial replacement of FA with MK significantly enhanced the early strength (the strength at 3 days increased from 14 to 30 MPa) 26 . Zhang et al. showed that under normal temperature curing conditions, the mechanical properties of MK-based geopolymers are similar to those of OPC, and they often exhibit higher flexural strength 27 . Habert et al. noted that geopolymer concrete made from FA/GGBFS requires less sodium silicate solution for activation 28 . Therefore, it has a lower environmental impact than geopolymer concrete made from pure MK.

Besides, the mix design of geopolymers based on different precursors is complex and involves many influencing factors, such as the modulus of sodium silicate (Ms), the liquid–solid ratio (L/S), and curing conditions 29 , 30 , 31 , 32 , 33 . Danish et al. studied the effects of Ms and curing conditions on the properties of prepacked geopolymer mixes; the findings suggested that specimens cured for 8 h with a given Ms exhibited higher compressive and flexural strength 29 . Zhang et al. found that the highest unconfined compressive strength (UCS) of solid alkali-activated geopolymers was obtained when L/S and Ms were 0.64 and 1.16, respectively 34 . According to the research of Wang et al., the performance of FA-based geopolymer paste was effectively improved while maintaining the strength by increasing the alkali-activator, decreasing Ms, and adjusting the water-binder ratio (w/b) 35 .

Response surface methodology is a collection of statistical and mathematical techniques that achieves its goal by improving the settings of the factors, bringing the response closer and closer to a predetermined maximum or minimum value 36 , 37 , 38 . Response surface methodology can be used in the process of designing, developing, and building new products, as well as in improving existing product designs 39 , 40 , 41 , 42 , 43 . Meanwhile, response surface methodology minimizes the number of trials and identifies the interactions between factors 44 . Central composite design (CCD) is the most popular response surface methodology in use, in which the axial distance and the number of center runs can be flexibly selected 37 , 45 . Response surface methodology uses factorial methods and analysis of variance (ANOVA) to model the response values. On top of this, CCD adds extra design points, both within the factor region (at the center point) and outside of it (at the star points), to sharpen the results and enhance the predictive capacity of the models 46 .

Many scholars have used CCD to optimize experimental processes and distinguish the interactions between factors. Watson et al. identified the interactions between As and natural organic matter during ferric chloride coagulation via CCD 44 . In order to maximize the properties of an electrospun nanofiber, Rooholghodos et al. used CCD to optimize the crosslinking duration and the CQDs-Fe 3 O 4 -RE concentration 47 . Du et al. found the optimum mix proportion of high-volume FA mortar using CCD 48 . However, studies on improving the performance of geopolymer pastes using CCD are limited.

In the present study, the MK-GGBFS binary composite system was used as the precursor to produce geopolymer paste, and L/S and Ms were selected as the experimental variables. The models of fluidity, initial setting time and UCS (3-d, 7-d, 28-d, 60-d) were established by CCD method. Then, the microstructure was characterized by SEM and XRD. Ultimately, the optimum mix design of MSGP was employed to ensure the workability and mechanical performance.

Materials and experimentation

The Blaine fineness of MK and GGBFS was 620 and 430 m 2 /kg, respectively. The basicity coefficient K b  = (CaO + MgO)/(SiO 2  + Al 2 O 3 ) of GGBFS was 1.23. The OPC was Tianshan P·O 42.5R cement. The chemical compositions of these materials are listed in Table 1 . The alkali-activator was prepared by mixing NaOH particles (96%) with sodium silicate solution in a certain proportion. The chemical composition of the sodium silicate solution was SiO 2 (26.6%) and Na 2 O (8.7%), and its original modulus was 3.16. In the trial test, activator concentrations of 36%, 37%, 38%, 39% and 40% were used, and it was easier to mix at 37%. Therefore, the activator concentration was set at 37% in this study.

The microstructure of the MK and GGBFS samples was characterized by a Sigma-300 SEM (ZEISS), as illustrated in Fig.  1 . A Mastersizer-2000 laser diffraction tester (Malvern) was used to examine the particle size distributions of MK and GGBFS, which are shown in Fig.  2 . The D50 (average particle size) of GGBFS and MK is about 4.52 μm and 18.6 μm, respectively. Figure  3 presents the XRD patterns of MK and GGBFS. It is obvious that MK includes many crystalline phases such as quartz (SiO 2 ), kaolinite (Al 4 [Si 4 O 10 ](OH) 8 ), calcium silicate (C 2 S and C 3 S), dolomite (CaMg(CO 3 ) 2 ) and muscovite (K{Al 2 [AlSi 3 O 10 ](OH) 2 }). The humps centered in the 2θ range of 20–30° for MK and 25–35° for GGBFS reflect an amorphous phase 49 , 50 .

Figure 1. Physical photo and SEM of MK and GGBFS.

Figure 2. Particle size curves of MK and GGBFS.

Figure 3. XRD patterns of MK and GGBFS.

Mix design of MSGP based on CCD method

In this study, the CCD method was used to investigate the effects of L/S and Ms on the performance of MSGP. According to the relevant studies and a trial test, the primary variation ranges of L/S and Ms were determined 31 , 34 , 51 , 52 , 53 , 54 . L/S ratios were set at 0.7, 0.8, 0.9 and 1.0, and Ms values were set at 1.2, 1.5 and 1.8. The total amount of binder was 450 g, with equal amounts of MK and GGBFS (225 g each). The control group of OPC pastes had a w/b of 0.5, used cement as the binder, and contained no admixtures. The response values are fluidity, initial setting time, and UCS (3-d, 7-d, 28-d, 60-d). Table 2 summarizes the factors, codes and levels of the MSGP mix design under the CCD method.

Experimental methods

Setting time.

MSGP was prepared according to the mix design of the CCD method. The initial and final setting times of the pastes were measured according to Chinese Standard GB/T 1346–2011 55 . The OPC paste samples used for the setting time test were prepared at standard consistency.

The fluidity tests of freshly mixed MSGP pastes were conducted according to Chinese Standard GB/T 8077–2012 56 . The maximum diameters in two mutually perpendicular directions were measured with calipers, and the average value was taken as the fluidity.

  • Compressive strength

The compressive strength of 40 × 40 × 40 mm hardened paste cubes was tested per Chinese Standard GB/T 17671–2021; the reported value is the average of 3 samples for each group 57 .

SEM and XRD

The microstructure and hydration products of samples were characterized by SEM and XRD, respectively. All samples to be measured were soaked in absolute ethanol immediately for 72 h to stop hydration, and placed in an electric thermostatic drying oven for drying. The powdered samples used in XRD were ground after drying and passed through a sieve of 0.075 mm, then packed for testing. The parameters of the XRD were as follows: copper target, 30 kV, 5–90°, 5°/min.

Experimental results analysis

Influence of L/S and Ms on workability

The effects of L/S and Ms on setting time and fluidity are clarified in Fig.  4 . It is obvious that the fluidity improved significantly as L/S increased, while the impact of Ms on fluidity was not as clear. The setting time kept increasing with the increase of L/S; in contrast, when Ms increased, the setting time became shorter. Obviously, all MSGP had higher fluidity than the OPC pastes.

Figure 4. Influence of L/S and Ms on workability.

Since the activator concentration was 37%, MSGP with an L/S of 0.8 had the same water content as OPC pastes with a w/b of 0.5. For comparison, when L/S = 0.8 and Ms = 1.2, 1.5, 1.8, the fluidities were 212, 226, and 234 mm, respectively, which were 15.8%, 23.5%, and 27.9% higher than that of the OPC pastes (183 mm). This was because sodium silicate acted not only as a solvent, but also as a gel. From the perspective of microstructure, sodium silicate could be considered a dispersion of amorphous silicate colloids in an alkaline aqueous medium 58 . Therefore, the interaction between MK and GGBFS particles weakened, which improved the fluidity of the fresh pastes. For MK, its tabular granular and clay structure led to a higher water demand. The incorporation of GGBFS reduced the amount of MK, and so improved the fluidity of MSGP when the water consumption was fixed. With the increase of Ms, the fluidity of MSGP increased, and the fluidity at Ms = 1.8 was 4.6–12.9% higher than at Ms = 1.2 under the same L/S. The reason was that, at high modulus (< 2.5), independent silicate micelles form in solution, which helps disperse the precursor particles and improves the rheological properties of the pastes 59 .

On the other hand, it was noted that L/S and Ms had opposite effects on the setting time. In Fig.  4 , as L/S increased, the setting time of MSGP was prolonged. The time required for the paste to lose fluidity is related to its kinetics: the concentration of active ingredients in the dissolved medium decreased when the water content was high, hence the time required to convert free water to bound water increased accordingly 5 . However, the increase of Ms accelerated setting. When Ms was 1.2, the initial setting time of MSGP was about 58–120 min, while it was greatly shortened to 41–55 min after Ms was increased to 1.8. Therefore, a high Ms shortens the setting time of MSGP. The shortened setting time is mainly related to Ca 2+ acting as charge-balancing ions 60 , 61 . Ca 2+ has a stronger charge attraction and neutralization, so the formation of aluminosilicate gels is faster. At the same time, the presence of Ca 2+ causes a heterogeneous nucleation effect in the initial reaction process of geopolymers 62 . The heterogeneous nucleation effect also accelerates the formation of geopolymer gels, resulting in a shorter setting time.

Influence of L/S and Ms on UCS and mass loss

Figure  5 compares the UCS and mass loss of each group under different L/S and Ms. The experimental results show that when Ms was constant, the UCS of MSGP decreased with the increase of L/S. When Ms was 1.2, as L/S increased from 0.7 to 1.0, the 28-d UCS values were 69.3, 60.8, 59.5, and 46.6 MPa, which were higher than that of the OPC pastes (45.0 MPa). They were 36.3, 32.0, 31.3, and 27.0 MPa when Ms was 1.8, which were lower than that of the OPC pastes. The UCS of each group was closest to the OPC pastes when Ms was 1.5. In summary, the increase of L/S was not conducive to the hardening performance of MSGP, similar to the influence of w/b on the UCS of OPC pastes. According to Davidovits and Heah et al., the fluid medium exceeds the solid in the mix when L/S is high 31 , 63 . The contact between the activating solution and the precursors is limited because of the large volume of the fluid medium, and the dissolution of the aluminosilicate precursor is slow. Instead, when a lower L/S is employed, the contact between the activating solution and the precursors is improved and the UCS is enhanced as a result.

Figure 5. Influence of Ms and L/S on UCS and mass loss.

However, unlike pure water, activators are usually mixed solutions consisting of alkali, soluble silicon, and water, which greatly affect the driving forces of hydration. From Fig.  5 , it can be found that when Ms increased from 1.2 to 1.5, the UCS of each group decreased significantly. When L/S was 0.8, the 28-d UCS of the three groups was 60.8, 50.8, and 32.0 MPa, respectively, and the decrease was especially obvious as Ms increased from 1.5 to 1.8. It was also observed that the late-age UCS development (curing time > 28 days) of the three groups of MSGP varied with Ms. When Ms was 1.2, the 60-d UCS was 68.4, 62.8, 60.6, and 53.2 MPa, which decreased by 1.3% and increased by 3.3%, 1.8%, and 14.2%, respectively, compared with the 28-d UCS; the increase of UCS was not obvious. When Ms was 1.5, the 60-d UCS was 60.1, 54.2, 51.2, and 44.2 MPa, an increase of 8.5%, 6.7%, 16.6%, and 19.1%. The 60-d UCS increased most when Ms increased to 1.8, by 22.9%, 23.1%, 14.1%, and 23.0%, respectively, reaching 44.6, 39.4, 35.7, and 33.2 MPa.

It is not difficult to see that a high Ms played a more critical role in late-age strength development, which was due to the difference in the composition of the activators. For the main components of the modified activator (Na 2 SiO 3 , NaOH), NaOH provides higher solution alkalinity, and the solubility of aluminosilicates is greater in a strongly alkaline environment, which promotes the polymerization reaction and improves the mechanical properties. When preparing a high-modulus sodium silicate solution, less NaOH is required, leading to a lower Na 2 O content, which inhibits the interactions between active substances and weakens the development of mechanical properties 64 . For the MK-GGBFS system, the high reactivity of GGBFS improves the early reaction of MK-based geopolymers. Increasing Ms within a certain range (it is pointed out that Ms < 2.0 65 ) can improve the strength development at 28 days or longer.

Results and discussion

Results of CCD method

A total of 13 random mix design tests were performed (including 5 center-point repeat tests) based on the CCD method in the Design-Expert software. The mix designs and responses are shown in Table 3 . The code for factor 1 (L/S) is x 1 , and the code for factor 2 (Ms) is x 2 . Response 1 is fluidity (mm), response 2 is initial setting time (min), and responses 3, 4, 5, and 6 are the 3-d, 7-d, 28-d, and 60-d UCS (MPa), respectively.
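For readers unfamiliar with the layout, the following Python sketch reproduces a generic two-factor central composite design with 4 factorial, 4 axial, and 5 center runs in coded units; the axial distance (alpha = sqrt(2)) and the mapping back to natural L/S and Ms ranges are assumptions for illustration, not the exact design matrix used in this study:

```python
# Generic 13-run, two-factor CCD in coded units (x1, x2).
import numpy as np

alpha = np.sqrt(2)                                    # rotatable-design assumption
factorial = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
center = np.zeros((5, 2))                             # 5 center-point repeats
design = np.vstack([factorial, axial, center])        # 13 coded runs

def decode(coded, low, high):
    """Map coded levels in [-1, 1] to a natural range [low, high]."""
    mid, half = (low + high) / 2, (high - low) / 2
    return mid + coded * half

# Illustrative mapping assuming the axial points mark the ends of the tested ranges.
ls = decode(design[:, 0] / alpha, 0.7, 1.0)
ms = decode(design[:, 1] / alpha, 1.2, 1.8)
print(np.column_stack([ls.round(3), ms.round(3)]))
```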

Response surface model fitting and verification

Regression fitting analysis was conducted with the experimental data in Table 4 . The fitting functions are shown as follows:

Model validation was performed on the above response surface functions, and the results are shown in Table 4 .

Table 4 shows that the p -values of the regression models for the fluidity, initial setting time, and UCS (3-d, 7-d, 28-d, 60-d) were all < 0.01, indicating that these six mathematical models were statistically significant. The R 2 values of the fitted equations were 0.9920, 0.9816, 0.9892, 0.9973, 0.9965, and 0.9975, respectively, which indicated that the six statistical models could explain 99.20%, 98.16%, 98.92%, 99.73%, 99.65%, and 99.75% of the variation in the response values. This indicates that the predicted values agree approximately with the actual results and that the experimental error was not obvious. In addition, the C.V. values of the models were all less than 10%, which showed that the experiment had high reliability and precision. An adequate precision greater than 4 is considered desirable, and all of the equations above satisfy this criterion. Figure  6 illustrates the relationships between the predicted values and the experimental values.
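The regression equations themselves come from software output, but the general form of a two-factor CCD model is the standard second-order polynomial y = b0 + b1·x1 + b2·x2 + b12·x1·x2 + b11·x1² + b22·x2². A minimal Python sketch of how such a model and its R² are obtained (the response values below are made up for illustration and are not the measured data) is:

```python
# Sketch: fit a second-order response surface to a 13-run CCD and report fit statistics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

design = pd.DataFrame({
    "x1": [-1, 1, -1, 1, -1.414, 1.414, 0, 0, 0, 0, 0, 0, 0],
    "x2": [-1, -1, 1, 1, 0, 0, -1.414, 1.414, 0, 0, 0, 0, 0],
})
rng = np.random.default_rng(5)
design["y"] = 230 + 25 * design["x1"] + 5 * design["x2"] + rng.normal(0, 2, len(design))

model = smf.ols("y ~ x1 + x2 + x1:x2 + I(x1**2) + I(x2**2)", data=design).fit()
print(model.rsquared, model.f_pvalue)   # goodness of fit and overall model significance
```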

Figure 6. Comparison of predicted and experimental values: (a) Fluidity; (b) Initial setting time; (c) 3-d UCS; (d) 7-d UCS; (e) 28-d UCS; and (f) 60-d UCS.

ANOVA and interaction

The ANOVA results for the models of the fluidity, initial setting time, and UCS (3-d, 7-d, 28-d, 60-d) are shown in Tables 5 – 7 . From statistical hypothesis testing, if the p -value ≤ 0.05, the factor is considered to have a significant effect on the response value, and vice versa 66 . The p -values of the above six regression equations were all less than 0.01, so the fitted models can be considered statistically significant. Meanwhile, the p -values for lack of fit of each equation were greater than 0.05, indicating no significant discrepancy between the models and the experimental results; that is, the models fitted well. For the factor interactions under each response value, the p -value of each interaction term was greater than 0.05, indicating that the interactions had almost no effect on the response values.

According to the regression models, the 3D response surface diagrams of the different response values could be obtained, as shown in Fig.  7 . The response values are displayed from purple to red in order from smallest to largest. The contours projected from the response surface to the bottom reflect the change in the response value: the denser the contours, the faster the response values changed and the greater the influence of the factors. It can be seen from Fig.  7 that the interactions of the factors in the design interval were weak, and the maximum value did not appear on a single response surface. There were constraints among the response values. For example, the increase of L/S had a positive effect on the fluidity, but it prolonged the setting time and reduced the mechanical properties. The reduction of Ms was beneficial to the mechanical properties, but it affected the workability of the paste and made it difficult to stir and form. These results show that the influence of L/S and Ms on the fluidity, setting time, and compressive strength in the selected interval needs to be considered comprehensively, not for one single response value.

Figure 7. 3D response surface diagrams for the effects of L/S and Ms on (a) Fluidity; (b) Initial setting time; (c) 3-d UCS; (d) 7-d UCS; (e) 28-d UCS; and (f) 60-d UCS.

Optimum mix of MSGP based on CCD method

As mentioned above, for anisotropic concrete materials a single response value is not an adequate optimization goal; the workability, mechanical properties, and other performances should be considered comprehensively. According to the actual conditions, the target fluidity is 220 mm and the initial setting time is between 30 and 50 min considering the quick setting of GGBFS; the optimum mix ratio based on the maximum 28-d UCS is then L/S = 0.75, Ms = 1.55. Under the same test conditions, MSGP was produced with the optimum mix ratio. The measured fluidity was 216 mm, the initial setting time was 53 min, and the 28-d UCS was 53.10 MPa. Table 8 compares the experimental and predicted values for the optimum mix ratio. The mean absolute percentage error (MAPE) of the experimental and predicted values was calculated according to the following formula 67 .
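The formula itself is not reproduced here; assuming the standard definition, MAPE = (100/n) · Σ |experimental − predicted| / |experimental|, it can be computed as in the short sketch below:

```python
import numpy as np

def mape(experimental, predicted):
    """Mean absolute percentage error between measured and model-predicted values."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100 * np.mean(np.abs(experimental - predicted) / np.abs(experimental))

# usage: mape(measured_responses, model_predicted_responses)
```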

Microstructural analysis

In order to further study the effects of L/S and Ms on the polymerization reaction, workability, and mechanical properties of MSGP, the microstructure was characterized by XRD and SEM. The results are explained from two aspects: hydration reaction products and micropore changes.

Figure  8 shows the XRD patterns of MSGP after curing for 7 and 28 days. In the initial hydration, the formation of quartz (SiO 2 ), mullite (3Al 2 O 3 ·2SiO 2 ), kaolinite (Al 2 Si 2 O 5 (OH) 4 ), calcite (CaCO 3 ), etc., as well as the diffuse peak of C–A–S–H (around 2θ = 30°), could be observed. The kaolinite phase is attributed to unreacted metakaolin 68 . The presence of calcite is due to carbonation by ambient CO 2 during the polymerization reaction 69 . The Ca(OH) 2 is generated by the reaction between Ca 2+ dissolved in the MSGP samples and OH − in the alkali solution. As hydration continued, tobermorite ((CaO) x –SiO 2 –zH 2 O) began to be observed in the patterns 70 . At the same time, it was found that the formation of C–A–S–H gel and the nearby aragonite and calcite increased. The dissolved alumina in the precursors reacts with OH − in the alkali solution to form tetrahedral [H 3 AlO 4 ] − and octahedral [Al(OH) 6 ] 3− . Then [H 3 AlO 4 ] − further reacts with Ca 2+ to form C–A–S–H gel 49 , 71 , 72 . L/S and Ms did not greatly affect the phases of the hydration products and thus had little effect on the final hydration products; however, they influenced the polymerization reaction of the paste because the dissolution rate of the aluminosilicate precursor changed.

Figure 8. XRD patterns of MSGP: (a) Ms = 1.5, 7-d; (b) Ms = 1.5, 28-d; (c) L/S = 0.8, 7-d; (d) L/S = 0.8, 28-d.

The SEM diagrams of 28-d MSGP are shown in Fig.  9 . It can be seen that for MSGP with constant Ms, the increase of L/S had an adverse effect on the compactness of the gel structure, reflected in the microstructure by a rough, porous gel morphology and a further increase in the width of the microcracks. The mechanical properties of MSGP worsened due to the presence of pores and cracks 69 , which is consistent with the experimental results above. On the other hand, the increase of Ms improved the workability of MSGP: the frictional resistance between the particles was reduced by the action of the sodium silicate micelles. However, the rapid polymerization reaction was not conducive to the binding of low-activity ingredients, which appeared porous and disordered at the micro scale, thereby reducing the macroscopic mechanical properties.

Figure 9. 28-d SEM diagrams of four mix proportions: (a) L/S = 0.7, Ms = 1.5; (b) L/S = 0.7, Ms = 1.8; (c) L/S = 1.0, Ms = 1.5; (d) L/S = 1.0, Ms = 1.8.

Conclusions

In this research, the impacts of the liquid–solid ratio (L/S) and the modulus of sodium silicate (Ms) on the workability and mechanical performance of MK-GGBFS based geopolymer paste (MSGP) were investigated. The optimum mix ratio was then found using the central composite design method, considering all three types of properties simultaneously. The main conclusions are listed below:

The synergy between metakaolin (MK) and ground granulated blast furnace slag (GGBFS) is good. The setting time can be extended effectively by partially replacing GGBFS with MK, overcoming the quick-hardening defect of GGBFS-based geopolymers.

When L/S was raised from 0.7 to 1.0, the workability was effectively improved. When Ms was 1.5, the fluidity increased from 209 to 273 mm, and the initial setting time was prolonged from 46 to 71 min. With the increase of Ms from 1.2 to 1.8, the fluidity increased from 201 to 227 mm when L/S was 0.7, but the initial setting time shortened slightly from 58 to 41 min.

The regression models established by the central composite design method fitted the six response values well, and the R 2 values were all above 0.98. The optimum mix ratio, with an L/S of 0.75 and an Ms of 1.55, was obtained. The measured fluidity is 216 mm, the initial setting time is 53 min, and the 28-d unconfined compressive strength is 53.1 MPa.

Data availability

All data generated or analyzed during this study are included in this published article.

Industrial Solid Waste. http://www.chinagygfw.com/ .

Castro-Pardo, S. et al. A comprehensive overview of carbon dioxide capture: From materials, methods to industrial status. Mater. Today 60 , 227–270. https://doi.org/10.1016/j.mattod.2022.08.018 (2022).


| Greenhouse Gas (GHG) Emissions|Climate Watch. https://www.climatewatchdata.org/ghg-emissions?end_year=2020&start_year=1990 .

Zhang, Q. et al. Utilization of solid wastes to sequestrate carbon dioxide in cement-based materials and methods to improve carbonation degree: A review. J CO2 Util 72 , 102502. https://doi.org/10.1016/j.jcou.2023.102502 (2023).

Xie, T., Visintin, P., Zhao, X. & Gravina, R. Mix design and mechanical properties of geopolymer and alkali activated concrete: Review of the state-of-the-art and the development of a new unified approach. Constr. Build. Mater. 256 , 119380. https://doi.org/10.1016/j.conbuildmat.2020.119380 (2020).

Bai, T., Song, Z., Wang, H., Wu, Y. & Huang, W. Performance evaluation of metakaolin geopolymer modified by different solid wastes. J. Clean. Prod. 226 , 114–121. https://doi.org/10.1016/j.jclepro.2019.04.093 (2019).

Zhang, P. et al. Properties of fresh and hardened fly ash/slag based geopolymer concrete: A review. J. Clean. Prod. 270 , 122389. https://doi.org/10.1016/j.jclepro.2020.122389 (2020).

Hadi, M. N. S., Zhang, H. & Parkinson, S. Optimum mix design of geopolymer pastes and concretes cured in ambient condition based on compressive strength, setting time and workability. J. Build. Eng. 23 , 301–313. https://doi.org/10.1016/j.jobe.2019.02.006 (2019).

Shi, C., Qu, B. & Provis, J. L. Recent progress in low-carbon binders. Cem. Concr. Res. 122 , 227–250. https://doi.org/10.1016/j.cemconres.2019.05.009 (2019).

John, S. K., Nadir, Y. & Girija, K. Effect of source materials, additives on the mechanical properties and durability of fly ash and fly ash-slag geopolymer mortar: A review. Constr. Build. Mater. 280 , 122443. https://doi.org/10.1016/j.conbuildmat.2021.122443 (2021).

Valencia-Saavedra, W. G., Mejía De Gutiérrez, R. & Puertas, F. Performance of FA-based geopolymer concretes exposed to acetic and sulfuric acids. Constr. Build. Mater. 257 , 119503. https://doi.org/10.1016/j.conbuildmat.2020.119503 (2020).

Nodehi, M., Ozbakkaloglu, T., Gholampour, A., Mohammed, T. & Shi, X. The effect of curing regimes on physico-mechanical, microstructural and durability properties of alkali-activated materials: A review. Constr. Build. Mater. 321 , 126335. https://doi.org/10.1016/j.conbuildmat.2022.126335 (2022).

Wang, Y.-S., Peng, K.-D., Alrefaei, Y. & Dai, J.-G. The bond between geopolymer repair mortars and OPC concrete substrate: Strength and microscopic interactions. Cem. Concr. Compos. 119 , 103991. https://doi.org/10.1016/j.cemconcomp.2021.103991 (2021).

Martínez, A. & Miller, S. A. A review of drivers for implementing geopolymers in construction: Codes and constructability. Resour. Conserv. Recycl. 199 , 107238. https://doi.org/10.1016/j.resconrec.2023.107238 (2023).

Provis, J. & Van Deventer, J. Geopolymers: Structures, Processing Properties and Industrial Applications (Elsevier, 2009).

Alrefaei, Y. & Dai, J.-G. Tensile behavior and microstructure of hybrid fiber ambient cured one-part engineered geopolymer composites. Constr. Build. Mater. 184 , 419–431. https://doi.org/10.1016/j.conbuildmat.2018.07.012 (2018).

Rajan, H. S. & Kathirvel, P. Sustainable development of geopolymer binder using sodium silicate synthesized from agricultural waste. J. Clean. Prod. 286 , 124959. https://doi.org/10.1016/j.jclepro.2020.124959 (2021).

Fang, S., Lam, E. S. S., Li, B. & Wu, B. Effect of alkali contents, moduli and curing time on engineering properties of alkali activated slag. Constr. Build. Mater. 249 , 118799. https://doi.org/10.1016/j.conbuildmat.2020.118799 (2020).

Nedeljković, M., Li, Z. & Ye, G. Setting, Strength, and Autogenous Shrinkage of Alkali-Activated Fly Ash and Slag Pastes: Effect of Slag Content. Materials 11 , 2121. https://doi.org/10.3390/ma11112121 (2018).

Puertas, F. et al. Alkali-activated slag concrete: Fresh and hardened behaviour. Cem. Concr. Compos. 85 , 22–31. https://doi.org/10.1016/j.cemconcomp.2017.10.003 (2018).

Alventosa, K. M. L. & White, C. E. The effects of calcium hydroxide and activator chemistry on alkali-activated metakaolin pastes. Cem. Concr. Res. 145 , 106453. https://doi.org/10.1016/j.cemconres.2021.106453 (2021).

Chen, L., Wang, Z., Wang, Y. & Feng, J. Preparation and Properties of Alkali Activated Metakaolin-Based Geopolymer. Materials 9 , 767. https://doi.org/10.3390/ma9090767 (2016).

Li, Z., Liang, X., Chen, Y. & Ye, G. Effect of metakaolin on the autogenous shrinkage of alkali-activated slag-fly ash paste. Constr. Build. Mater. 278 , 122397. https://doi.org/10.1016/j.conbuildmat.2021.122397 (2021).

Li, Z., Nedeljković, M., Chen, B. & Ye, G. Mitigating the autogenous shrinkage of alkali-activated slag by metakaolin. Cem. Concr. Res. 122 , 30–41. https://doi.org/10.1016/j.cemconres.2019.04.016 (2019).

Bernal, S. A., Provis, J. L., Rose, V. & Mejía De Gutierrez, R. Evolution of binder structure in sodium silicate-activated slag-metakaolin blends. Cem. Concr. Compos. 33 , 46–54. https://doi.org/10.1016/j.cemconcomp.2010.09.004 (2011).

Alanazi, H., Hu, J. & Kim, Y.-R. Effect of slag, silica fume, and metakaolin on properties and performance of alkali-activated fly ash cured at ambient temperature. Constr. Build. Mater. 197 , 747–756. https://doi.org/10.1016/j.conbuildmat.2018.11.172 (2019).

Zhang, H. Y., Kodur, V., Wu, B., Cao, L. & Wang, F. Thermal behavior and mechanical properties of geopolymer mortar after exposure to elevated temperatures. Constr. Build. Mater. 109 , 17–24. https://doi.org/10.1016/j.conbuildmat.2016.01.043 (2016).

Habert, G., d’Espinose De Lacaillerie, J. B. & Roussel, N. An environmental evaluation of geopolymer based concrete production: Reviewing current research trends. J. Clean. Prod. 19 , 1229–1238. https://doi.org/10.1016/j.jclepro.2011.03.012 (2011).

Danish, A. et al. Performance evaluation and cost analysis of prepacked geopolymers containing waste marble powder under different curing temperatures for sustainable built environment. Resour. Conserv. Recycl. 192 , 106910. https://doi.org/10.1016/j.resconrec.2023.106910 (2023).

Kim, B. et al. Effect of Si/Al molar ratio and curing temperatures on the immobilization of radioactive borate waste in metakaolin-based geopolymer waste form. J. Hazard. Mater. 458 , 131884. https://doi.org/10.1016/j.jhazmat.2023.131884 (2023).

Heah, C. Y. et al. Study on solids-to-liquid and alkaline activator ratios on kaolin-based geopolymers. Constr. Build. Mater. 35 , 912–922. https://doi.org/10.1016/j.conbuildmat.2012.04.102 (2012).

Cheng, H. et al. Effect of solid-to-liquid ratios on the properties of waste catalyst–metakaolin based geopolymers. Constr. Build. Mater. 88 , 74–83. https://doi.org/10.1016/j.conbuildmat.2015.01.005 (2015).

Ranjbar, N., Kashefi, A. & Maheri, M. R. Hot-pressed geopolymer: Dual effects of heat and curing time. Cem. Concr. Compos. 86 , 1–8. https://doi.org/10.1016/j.cemconcomp.2017.11.004 (2018).

Zhang, H., Ji, Z., Zeng, Y. & Pei, Y. Solidification/stabilization of landfill leachate concentrate contaminants using solid alkali-activated geopolymers with a high liquid solid ratio and fixing rate. Chemosphere 288 , 132495. https://doi.org/10.1016/j.chemosphere.2021.132495 (2022).

Wang, W., Fan, C., Wang, B., Zhang, X. & Liu, Z. Workability, rheology, and geopolymerization of fly ash geopolymer: Role of alkali content, modulus, and water–binder ratio. Constr. Build. Mater. 367 , 130357. https://doi.org/10.1016/j.conbuildmat.2023.130357 (2023).

Montgomery, D. C. Design and Analysis of Experiments 478–544 (Wiley, 2013).

Myers, R. H., Montgomery, D. C. & Anderson, C. Response Surface Methodology: Process and Product Optimization Using Designed Experiments (Wiley, 2016).

Thomareis, A. S. & Dimitreli, G. Chapter 12 - Techniques used for processed cheese characterization. In Processed cheese science and technology (eds El-Bakry, M. & Mehta, B. M.) 295–349 (Woodhead Publishing, 2022).

Ramalingam, B. & Das, S. K. Biofabricated graphene-magnetite nanobioaerogel with antibiofilm property: Response surface methodology based optimization for effective removal of heavy metal ions and killing of bacterial pathogens. Chem. Eng. J. 475 , 145976. https://doi.org/10.1016/j.cej.2023.145976 (2023).

Yang, F., Feng, H., Wu, L., Zhang, Z. & Wang, J. Performance prediction and parameters optimization of an opposed-piston free piston engine generator using response surface methodology. Energy Convers. Manag. 295 , 117633. https://doi.org/10.1016/j.enconman.2023.117633 (2023).

Suresh Nair, M., Rajarathinam, R., Velmurugan, S. & Subhani, S. An optimized approach towards bio-capture and carbon dioxide sequestration with microalgae Phormidium valderianum using response surface methodology. Bioresour. Technol. 389 , 129838. https://doi.org/10.1016/j.biortech.2023.129838 (2023).

Zhao, D., Chen, M., Lv, J., Lei, Z. & Song, W. Multi-objective optimization of battery thermal management system combining response surface analysis and NSGA-II algorithm. Energy Convers. Manag. 292 , 117374. https://doi.org/10.1016/j.enconman.2023.117374 (2023).

Lu, H., Dong, Q., Yan, S., Chen, X. & Wang, X. Development of flexible grouting material for cement-stabilized macadam base using response surface and genetic algorithm optimization methodologies. Constr. Build. Mater. 409 , 133823. https://doi.org/10.1016/j.conbuildmat.2023.133823 (2023).

Watson, M. A. et al. Response surface methodology investigation into the interactions between arsenic and humic acid in water during the coagulation process. J. Hazard. Mater. 312 , 150–158. https://doi.org/10.1016/j.jhazmat.2016.03.002 (2016).

Saffron: Science, Technology and Health 139–167 (Woodhead Publishing, 2020). https://doi.org/10.1016/B978-0-12-818638-1.00009-5.

Maaze, M. R. & Shrivastava, S. Design optimization of a recycled concrete waste-based brick through alkali activation using Box- Behnken design methodology. J. Build. Eng. 75 , 106863. https://doi.org/10.1016/j.jobe.2023.106863 (2023).

Rooholghodos, S. H., Pourmadadi, M., Yazdian, F. & Rashedi, H. Optimization of electrospun CQDs-Fe3O4-RE loaded PVA-cellulose nanofibrils via central composite design for wound dressing applications: Kinetics and in vitro release study. Int. J. Biol. Macromol. 237 , 124067. https://doi.org/10.1016/j.ijbiomac.2023.124067 (2023).

Du, S., Ge, X. & Zhao, Q. Central composite design-based development of eco-efficient high-volume fly ash mortar. Constr. Build. Mater. 358 , 129411. https://doi.org/10.1016/j.conbuildmat.2022.129411 (2022).

Sun, Z. & Vollpracht, A. Isothermal calorimetry and in-situ XRD study of the NaOH activated fly ash, metakaolin and slag. Cem. Concr. Res. 103 , 110–122. https://doi.org/10.1016/j.cemconres.2017.10.004 (2018).

Oshani, F., Allahverdi, A., Kargari, A., Norouzbeigi, R. & Mahmoodi, N. M. Effect of preparation parameters on properties of metakaolin-based geopolymer activated by silica fume- sodium hydroxide alkaline blend. J. Build. Eng. 60 , 104984. https://doi.org/10.1016/j.jobe.2022.104984 (2022).

Jiang, T., Liu, Z., Tian, X., Wu, J. & Wang, L. Review on the impact of metakaolin-based geopolymer’s reaction chemistry, nanostructure and factors on its properties. Constr. Build. Mater. 412 , 134760. https://doi.org/10.1016/j.conbuildmat.2023.134760 (2024).

Gao, K. et al. Effects SiO2/Na2O molar ratio on mechanical properties and the microstructure of nano-SiO2 metakaolin-based geopolymers. Constr. Build. Mater. 53 , 503–510. https://doi.org/10.1016/j.conbuildmat.2013.12.003 (2014).

Zhan, J. et al. Effect of Slag on the Strength and Shrinkage Properties of Metakaolin-Based Geopolymers. Materials 15 , 2944. https://doi.org/10.3390/ma15082944 (2022).

Yang, Z., Shi, P., Zhang, Y. & Li, Z. Influence of liquid-binder ratio on the performance of alkali-activated slag mortar with superabsorbent polymer. J. Build. Eng. 48 , 103934. https://doi.org/10.1016/j.jobe.2021.103934 (2022).

GB/T 1346–2011. Test method for water requirement of normal consistency, setting time and soundness of Portland cement. 2011.

GB/T 8077–2012. Methods for testing uniformity of concrete admixture. 2012.

GB/T 17671–2021. Test method of cement mortar strength (ISO method). 2021.

Tognonvi, M. T., Massiot, D., Lecomte, A., Rossignol, S. & Bonnet, J.-P. Identification of solvated species present in concentrated and dilute sodium silicate solutions by combined 29Si NMR and SAXS studies. J. Colloid Interface Sci. 352 , 309–315. https://doi.org/10.1016/j.jcis.2010.09.018 (2010).

Stempkowska, A., Mastalska-Popławska, J., Izak, P., Ogłaza, L. & Turkowska, M. Stabilization of kaolin clay slurry with sodium silicate of different silicate moduli. Appl. Clay Sci. 146 , 147–151. https://doi.org/10.1016/j.clay.2017.05.046 (2017).

Cui, C. et al. Influence of GGBFS content and activator modulus on curing of metakaolin based geopolymer at ambient temperature. J. Build. Mater. 20 , 535–542. https://doi.org/10.3969/j.issn.1007-9629.2017.04.008 (2017).

Jia, Y., Han, M., Meng, X. & Xu, Z. Study on setting time of fly ash-based geopolymer. B. Chin. Ceram. Soc. 28 , 893–899 (2009).

Lee, W. K. W. & Van Deventer, J. S. J. The effect of ionic contaminants on the early-age properties of alkali-activated fly ash-based cements. Cem. Concr. Res. 32 , 577–584. https://doi.org/10.1016/S0008-8846(01)00724-4 (2002).

Davidovits, J. Geopolymer: Chemistry & Applications (Institut Géopolymère, Saint-Quentin, 2020).

Ling, Y., Wang, K., Wang, X. & Hua, S. Effects of mix design parameters on heat of geopolymerization, set time, and compressive strength of high calcium fly ash geopolymer. Constr. Build. Mater. 228 , 116763. https://doi.org/10.1016/j.conbuildmat.2019.116763 (2019).

Luukkonen, T. et al. Influence of sodium silicate powder silica modulus for mechanical and chemical properties of dry-mix alkali-activated slag mortar. Constr. Build. Mater. 233 , 117354. https://doi.org/10.1016/j.conbuildmat.2019.117354 (2020).

Zhang, W. SPSS Statistical Analysis Advanced Tutorial 95–106 (2018).

Wang, P. et al. Prediction of complex strain fields in concrete using a deep learning approach. Constr. Build. Mater. 404 , 133257. https://doi.org/10.1016/j.conbuildmat.2023.133257 (2023).

Kuang, L., Li, G., Xiang, J., Ma, W. & Cui, X. Effect of seawater on the properties and microstructure of metakaolin/slag-based geopolymers. Constr. Build. Mater. 397 , 132418. https://doi.org/10.1016/j.conbuildmat.2023.132418 (2023).

Ziada, M., Tanyildizi, H. & Uysal, M. The influence of carbon nanotube on underwater geopolymer paste based on metakaolin and slag. Constr. Build. Mater. 414 , 135047. https://doi.org/10.1016/j.conbuildmat.2024.135047 (2024).

Trincal, V. et al. Effect of drying temperature on the properties of alkali-activated binders - Recommendations for sample preconditioning. Cem. Concr. Res. 151 , 106617. https://doi.org/10.1016/j.cemconres.2021.106617 (2022).

Puertas, F. et al. C-A-S-H gels formed in alkali-activated slag cement pastes: Structure and effect on cement properties and durability. MATEC Web Conf. 11 , 01002. https://doi.org/10.1051/matecconf/20141101002 (2014).

Liao, Y. et al. Hydration behavior and strength development of supersulfated cement prepared by calcined phosphogypsum and slaked lime. J. Build. Eng. 80 , 108075. https://doi.org/10.1016/j.jobe.2023.108075 (2023).

Funding

This research was funded by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (grant number 2022D01D27) and the Key Research and Development Program of Xinjiang Uygur Autonomous Region (grant number 2022B03036).

Author information

Authors and Affiliations

College of Civil Engineering and Architecture, Xinjiang University, Urumqi, 830017, China

Ziqi Yao, Ling Luo, Yongjun Qin, Jiangbo Cheng & Changwei Qu

Xinjiang Civil Engineering Technology Research Center, Urumqi, 830017, China

Ling Luo & Yongjun Qin

Contributions

Conceptualization, Z.Y. and C.Q.; methodology, C.Q.; software, Z.Y.; validation, Z.Y., C.Q. and L.L.; formal analysis, C.Q. and L.L.; investigation, C.Q.; resources, Y.Q.; data curation, Z.Y. and C.Q.; writing—original draft preparation, Z.Y.; writing—review and editing, L.L. and Y.Q.; visualization, Z.Y. and J.C.; supervision, L.L.; project administration, L.L. and Y.Q.; funding acquisition, L.L. and Y.Q. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Ling Luo.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Yao, Z., Luo, L., Qin, Y. et al. Research on mix design and mechanical performances of MK-GGBFS based geopolymer pastes using central composite design method. Sci Rep 14 , 9101 (2024). https://doi.org/10.1038/s41598-024-59872-0

Download citation

Received: 30 November 2023

Accepted: 16 April 2024

Published: 20 April 2024

DOI: https://doi.org/10.1038/s41598-024-59872-0

Keywords

  • Central composite design
  • Workability
