21 Research Limitations Examples

Research limitations refer to the potential weaknesses inherent in a study. All studies have limitations of some sort, so declaring them isn’t necessarily a bad thing, so long as your declaration of limitations is well thought-out and clearly explained.

Rarely is a study perfect. Researchers have to make trade-offs when developing their studies, which are often based upon practical considerations such as time and monetary constraints, weighing the breadth of participants against the depth of insight, and choosing one methodology or another.

In research, studies can have limitations such as limited scope, researcher subjectivity, and lack of available research tools.

Acknowledging the limitations of your study should be seen as a strength. It demonstrates transparency, humility, and commitment to the scientific method, and it can bolster the integrity of the study. It can also inform the direction of future research.

Typically, scholars will explore the limitations of their study in either their methodology section, their conclusion section, or both.

Research Limitations Examples

Qualitative and quantitative research offer different perspectives and methods for exploring phenomena, each with its own strengths and limitations. So, I’ve split the examples below into qualitative and quantitative limitations.

Qualitative Research Limitations

Qualitative research seeks to understand phenomena in-depth and in context. It focuses on the ‘why’ and ‘how’ questions.

It’s often used to explore new or complex issues, and it provides rich, detailed insights into participants’ experiences, behaviors, and attitudes. However, these strengths also create certain limitations, as explained below.

1. Subjectivity

Qualitative research often requires the researcher to interpret subjective data. One researcher may examine a text and identify different themes or concepts as more dominant than others.

Close qualitative readings of texts are necessarily subjective – and while this may be a limitation, qualitative researchers argue this is the best way to deeply understand everything in context.

Suggested Solution and Response: To minimize subjectivity bias, you could consider cross-checking your own readings of themes and data against other scholars’ readings and interpretations. This may involve giving the raw data to a supervisor or colleague and asking them to code the data separately, then coming together to compare and contrast results.

2. Researcher Bias

The concept of researcher bias is related to, but slightly different from, subjectivity.

Researcher bias refers to the perspectives and opinions you bring with you when doing your research.

For example, a researcher who is explicitly of a certain philosophical or political persuasion may bring that persuasion to bear when interpreting data.

In many scholarly traditions, we attempt to minimize researcher bias through clear procedures that are set out in advance or through the use of statistical analysis tools.

However, in other traditions, such as postmodern feminist research, declaring bias is expected, and acknowledgment of bias is seen as a positive because, in those traditions, it is believed that bias cannot be eliminated from research; instead, it is a matter of integrity to present it upfront.

Suggested Solution and Response: Acknowledge the potential for researcher bias and, depending on your theoretical framework, accept this or identify procedures you have taken to seek a closer approximation to objectivity in your coding and analysis.

3. Generalizability

If you’re struggling to find a limitation to discuss in your own qualitative research study, then this one is for you: all qualitative research, of all persuasions and perspectives, cannot be generalized.

This is a core feature that sets qualitative data and quantitative data apart.

The point of qualitative data is to select case studies and similarly small corpora and dig deep through in-depth analysis and thick description of data.

Often, this will also mean that you have a non-randomized sample.

While this is a positive – you’re going to get some really deep, contextualized, interesting insights – it also means that the findings may not be generalizable to a larger population, because the small group of people in your study may not be representative of that larger population.

Suggested Solution and Response: Suggest future studies that take a quantitative approach to the question.

4. The Hawthorne Effect

The Hawthorne effect refers to the phenomenon where research participants change their ‘observed behavior’ when they’re aware that they are being observed.

This effect was first identified by Elton Mayo, who conducted studies of the effects of various factors on workers’ productivity. He noticed that no matter what he did – turning the lights up, turning the lights down, etc. – worker output increased compared to before the study took place.

Mayo realized that the mere act of observing the workers made them work harder – his observation was what was changing behavior.

So, if you’re looking for a potential limitation to name for your observational research study, highlight the possible impact of the Hawthorne effect (and how you could reduce your footprint or visibility in order to decrease its likelihood).

Suggested Solution and Response: Highlight ways you have attempted to reduce your footprint while in the field, and guarantee anonymity to your research participants.

5. Replicability

Quantitative research has a great benefit in that the studies are replicable – a researcher can get a similar sample size, duplicate the variables, and re-test a study. But you can’t do that in qualitative research.

Qualitative research relies heavily on context – a specific case study or specific variables that make a certain instance worthy of analysis. As a result, it’s often difficult to re-enter the same setting with the same variables and repeat the study.

Furthermore, the individual researcher’s interpretation is more influential in qualitative research, meaning even if a new researcher enters an environment and makes observations, their observations may be different because subjectivity comes into play much more. This doesn’t make the research bad necessarily (great insights can be made in qualitative research), but it certainly does demonstrate a weakness of qualitative research.

6. Limited Scope

“Limited scope” is perhaps one of the most common limitations listed by researchers – and while this is often a catch-all way of saying, “well, I’m not studying that in this study”, it’s also a valid point.

No study can explore everything related to a topic. At some point, we have to make decisions about what’s included in the study and what is excluded from the study.

So, you could say that a limitation of your study is that it doesn’t look at an extra variable or concept that’s certainly worthy of study but will have to be explored in your next project because this project has a clearly and narrowly defined goal.

Suggested Solution and Response: Be clear about what’s in and out of the study when writing your research question.

7. Time Constraints

This is also a catch-all claim you can make about your research project: that you would have included more people in the study, looked at more variables, and so on. But you’ve got to submit this thing by the end of next semester! You’ve got time constraints.

And time constraints are a recognized reality in all research.

But this means you’ll need to explain how time has limited your decisions. As with “limited scope”, this may mean that you had to study a smaller group of subjects, limit the amount of time you spent in the field, and so forth.

Suggested Solution and Response: Suggest future studies that will build on your current work, possibly as a PhD project.

8. Resource Intensiveness

Qualitative research can be expensive due to the cost of transcription, the involvement of trained researchers, and potential travel for interviews or observations.

So, resource intensiveness is similar to the time constraints concept. If you don’t have the funds, you have to make decisions about which tools to use, which statistical software to employ, and how many research assistants you can dedicate to the study.

Suggested Solution and Response: Suggest future studies that will gain more funding on the back of this ‘exploratory study’.

9. Coding Difficulties

Data analysis in qualitative research often involves coding, which can be subjective and complex, especially when dealing with ambiguous or contradictory data.

After naming this as a limitation in your research, it’s important to explain how you’ve attempted to address this. Some ways to ‘limit the limitation’ include:

  • Triangulation: Have 2 other researchers code the data as well and cross-check your results with theirs to identify outliers that may need to be re-examined, debated with the other researchers, or removed altogether.
  • Procedure: Use a clear coding procedure to demonstrate reliability in your coding process. I personally use the thematic network analysis method outlined in this academic article by Attride-Stirling (2001).

Suggested Solution and Response: Triangulate your coding findings with colleagues, and follow a thematic network analysis procedure; a minimal inter-coder agreement check is sketched below.
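If you want to attach a number to that cross-check, a common option is an inter-coder agreement statistic such as Cohen’s kappa. Below is a minimal Python sketch, assuming two colleagues have applied the same set of hypothetical theme labels to the same excerpts; the excerpt IDs and labels are illustrative only, not drawn from any real study.

```python
# Minimal sketch: quantify agreement between two coders with Cohen's kappa.
# The excerpts and theme labels below are hypothetical, for illustration only.
from sklearn.metrics import cohen_kappa_score

excerpts = ["E01", "E02", "E03", "E04", "E05", "E06"]
coder_a  = ["belonging", "identity", "belonging", "pressure", "identity", "pressure"]
coder_b  = ["belonging", "identity", "pressure",  "pressure", "identity", "belonging"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement

# Flag excerpts where the coders disagree, so they can be discussed and re-coded.
disagreements = [e for e, a, b in zip(excerpts, coder_a, coder_b) if a != b]
print("Re-examine:", disagreements)
```

If you prefer not to rely on scikit-learn, the same statistic can be computed by hand from the coders’ confusion matrix; the point is simply to replace impressions of agreement with a reportable figure.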

10. Risk of Non-Responsiveness

There is always a risk in research that research participants will be unwilling or uncomfortable sharing their genuine thoughts and feelings in the study.

This is particularly true when you’re conducting research on sensitive topics, politicized topics, or topics where the participant is expressing vulnerability.

This is similar to the Hawthorne effect (aka participant bias), where participants change their behaviors in your presence; but it goes a step further, where participants actively hide their true thoughts and feelings from you.

Suggested Solution and Response: One way to manage this is to try to include a wider group of people with the expectation that there will be non-responsiveness from some participants.

11. Risk of Attrition

Attrition refers to the process of losing research participants throughout the study.

This occurs most commonly in longitudinal studies, where a researcher must return to conduct their analysis over spaced periods of time, often over a period of years.

Things happen to people over time – they move overseas, their life experiences change, they get sick, change their minds, and even die. The more time that passes, the greater the risk of attrition.

Suggested Solution and Response: One way to manage this is to try to include a wider group of people with the expectation that there will be attrition over time.

12. Difficulty in Maintaining Confidentiality and Anonymity

Given the detailed nature of qualitative data, ensuring participant anonymity can be challenging.

If you have a sensitive topic in a specific case study, even anonymizing research participants sometimes isn’t enough. People might be able to deduce who you’re talking about.

Sometimes, this will mean you have to exclude some interesting data that you collected from your final report. Confidentiality and anonymity come before your findings in research ethics – and this is a necessary limiting factor.

Suggested Solution and Response: Highlight the efforts you have taken to anonymize data, and accept that confidentiality and anonymity place extremely important constraints on academic research.

13. Difficulty in Finding Research Participants

A study that looks at a very specific phenomenon or even a specific set of cases within a phenomenon means that the pool of potential research participants can be very low.

Add to this the fact that many people you approach may choose not to participate, and you could end up with a very small corpus of subjects to explore. This may limit your ability to make complete findings, even in a quantitative sense.

You may need to therefore limit your research question and objectives to something more realistic.

Suggested Solution and Response: Highlight that this is going to limit the study’s generalizability significantly.

14. Ethical Limitations

Ethical limitations refer to the things you cannot do based on ethical concerns identified either by yourself or your institution’s ethics review board.

This might include threats to the physical or psychological well-being of your research subjects, the potential of releasing data that could harm a person’s reputation, and so on.

Furthermore, even if your study follows all expected standards of ethics, you still, as an ethical researcher, need to allow a research participant to pull out at any point in time, after which you cannot use their data. This demonstrates an overlap between ethical constraints and participant attrition.

Suggested Solution and Response: Highlight that these ethical limitations are inevitable but important to sustain the integrity of the research.

For more on Qualitative Research, Explore my Qualitative Research Guide

Quantitative Research Limitations

Quantitative research focuses on quantifiable data and statistical, mathematical, or computational techniques. It’s often used to test hypotheses, assess relationships and causality, and generalize findings across larger populations.

Quantitative research is widely respected for its ability to provide reliable, measurable, and generalizable data (if done well!). Its structured methodology has strengths over qualitative research, such as the fact it allows for replication of the study, which underpins the validity of the research.

However, this approach is not without its limitations, as explained below.

1. Over-Simplification

Quantitative research is powerful because it allows you to measure and analyze data in a systematic and standardized way. However, one of its limitations is that it can sometimes simplify complex phenomena or situations.

In other words, it might miss the subtleties or nuances of the research subject.

For example, if you’re studying why people choose a particular diet, a quantitative study might identify factors like age, income, or health status. But it might miss other aspects, such as cultural influences or personal beliefs, that can also significantly impact dietary choices.

When writing about this limitation, you can say that your quantitative approach, while providing precise measurements and comparisons, may not capture the full complexity of your subjects of study.

Suggested Solution and Response: Suggest a follow-up case study using the same research participants in order to gain additional context and depth.

2. Lack of Context

Another potential issue with quantitative research is that it often focuses on numbers and statistics at the expense of context or qualitative information.

Let’s say you’re studying the effect of classroom size on student performance. You might find that students in smaller classes generally perform better. However, this doesn’t take into account other variables, like teaching style, student motivation, or family support.

When describing this limitation, you might say, “Although our research provides important insights into the relationship between class size and student performance, it does not incorporate the impact of other potentially influential variables. Future research could benefit from a mixed-methods approach that combines quantitative analysis with qualitative insights.”

3. Applicability to Real-World Settings

Oftentimes, experimental research takes place in controlled environments to limit the influence of outside factors.

This control is great for isolation and understanding the specific phenomenon but can limit the applicability or “external validity” of the research to real-world settings.

For example, if you conduct a lab experiment to see how sleep deprivation impacts cognitive performance, the sterile, controlled lab environment might not reflect real-world conditions where people are dealing with multiple stressors.

Therefore, when explaining the limitations of your quantitative study in your methodology section, you could state:

“While our findings provide valuable information about [topic], the controlled conditions of the experiment may not accurately represent real-world scenarios where extraneous variables will exist. As such, the direct applicability of our results to broader contexts may be limited.”

Suggested Solution and Response: Suggest future studies that will engage in real-world observational research, such as ethnographic research.

4. Limited Flexibility

Once a quantitative study is underway, it can be challenging to make changes to it. This is because, unlike in grounded theory research, you design your study in advance and can’t make changes part-way through.

Your study design, data collection methods, and analysis techniques need to be decided upon before you start collecting data.

For example, if you are conducting a survey on the impact of social media on teenage mental health, and halfway through, you realize that you should have included a question about their screen time, it’s generally too late to add it.

When discussing this limitation, you could write something like, “The structured nature of our quantitative approach allows for consistent data collection and analysis but also limits our flexibility to adapt and modify the research process in response to emerging insights and ideas.”

Suggested Solution and Response: Suggest future studies that will use mixed-methods or qualitative research methods to gain additional depth of insight.

5. Risk of Survey Error

Surveys are a common tool in quantitative research, but they carry risks of error.

There can be measurement errors (if a question is misunderstood), coverage errors (if some groups aren’t adequately represented), non-response errors (if certain people don’t respond), and sampling errors (if your sample isn’t representative of the population).

For instance, if you’re surveying college students about their study habits, but only daytime students respond because you conduct the survey during the day, your results will be skewed.

In discussing this limitation, you might say, “Despite our best efforts to develop a comprehensive survey, there remains a risk of survey error, including measurement, coverage, non-response, and sampling errors. These could potentially impact the reliability and generalizability of our findings.”

Suggested Solution and Response: Suggest future studies that will use other survey tools to compare and contrast results.
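To get a feel for the sampling-error component alone, it can help to run a quick margin-of-error calculation. The sketch below is a back-of-the-envelope estimate in Python, assuming a simple random sample and a 95% confidence level; it says nothing about measurement, coverage, or non-response error, which need to be addressed through survey design rather than sample size.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample.

    Uses the worst case p = 0.5 by default, which maximizes p * (1 - p).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =  100: +/- 9.8%
# n =  400: +/- 4.9%
# n = 1000: +/- 3.1%
```

Note how quadrupling the sample roughly halves the margin of error, and also that no sample size rescues a skewed frame like the daytime-only respondents described above; that is a coverage problem, not a sampling one.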

6. Limited Ability to Probe Answers

With quantitative research, you typically can’t ask follow-up questions or delve deeper into participants’ responses like you could in a qualitative interview.

For instance, imagine you are surveying 500 students about study habits in a questionnaire. A respondent might indicate that they study for two hours each night. You might want to follow up by asking them to elaborate on what those study sessions involve or how effective they feel their habits are.

However, quantitative research generally disallows this in the way a qualitative semi-structured interview could.

When discussing this limitation, you might write, “Given the structured nature of our survey, our ability to probe deeper into individual responses is limited. This means we may not fully understand the context or reasoning behind the responses, potentially limiting the depth of our findings.”

Suggested Solution and Response: Suggest future studies that engage in mixed-method or qualitative methodologies to address the issue from another angle.

7. Reliance on Instruments for Data Collection

In quantitative research, the collection of data heavily relies on instruments like questionnaires, surveys, or machines.

The limitation here is that the data you get is only as good as the instrument you’re using. If the instrument isn’t designed or calibrated well, your data can be flawed.

For instance, if you’re using a questionnaire to study customer satisfaction and the questions are vague, confusing, or biased, the responses may not accurately reflect the customers’ true feelings.

When discussing this limitation, you could say, “Our study depends on the use of questionnaires for data collection. Although we have put significant effort into designing and testing the instrument, it’s possible that inaccuracies or misunderstandings could potentially affect the validity of the data collected.”

Suggested Solution and Response: Suggest future studies that will use different instruments but examine the same variables to triangulate results.

8. Time and Resource Constraints (Specific to Quantitative Research)

Quantitative research can be time-consuming and resource-intensive, especially when dealing with large samples.

It often involves systematic sampling, rigorous design, and sometimes complex statistical analysis.

If resources and time are limited, it can restrict the scale of your research, the techniques you can employ, or the extent of your data analysis.

For example, you may want to conduct a nationwide survey on public opinion about a certain policy. However, due to limited resources, you might only be able to survey people in one city.

When writing about this limitation, you could say, “Given the scope of our research and the resources available, we are limited to conducting our survey within one city, which may not fully represent the nationwide public opinion. Hence, the generalizability of the results may be limited.”

Suggested Solution and Response: Suggest future studies that will have more funding or longer timeframes.

How to Discuss Your Research Limitations

1. In Your Research Proposal and Methodology Section

In the research proposal, which will become the methodology section of your dissertation, I would recommend taking the four following steps, in order:

  • Be Explicit about your Scope – If you limit the scope of your study in your research question, aims, and objectives, then you can set yourself up well later in the methodology to say that certain questions are “outside the scope of the study.” For example, you may identify the fact that the study doesn’t address a certain variable, but you can follow up by stating that the research question is specifically focused on the variable that you are examining, so this limitation would need to be looked at in future studies.
  • Acknowledge the Limitation – Acknowledging the limitations of your study demonstrates reflexivity and humility and can make your research more reliable and valid. It also pre-empts questions the people grading your paper may have, so instead of down-grading you for your limitations, they will congratulate you on explaining the limitations and how you have addressed them!
  • Explain your Decisions – You may have chosen your approach (despite its limitations) for a very specific reason. This might be because your approach remains, on balance, the best one to answer your research question. Or, it might be because of time and monetary constraints that are outside of your control.
  • Highlight the Strengths of your Approach – Conclude your limitations section by strongly demonstrating that, despite limitations, you’ve worked hard to minimize the effects of the limitations and that you have chosen your specific approach and methodology because it’s also got some terrific strengths. Name the strengths.

Overall, you’ll want to acknowledge your own limitations but also explain that the limitations don’t detract from the value of your study as it stands.

2. In the Conclusion Section or Chapter

In the conclusion of your study, it is generally expected that you return to a discussion of the study’s limitations. Here, I recommend the following steps:

  • Acknowledge issues faced – After completing your study, you will be increasingly aware of issues you may have faced that, if you re-did the study, you may have addressed earlier in order to avoid those issues. Acknowledge these issues as limitations, and frame them as recommendations for subsequent studies.
  • Suggest further research – Scholarly research aims to fill gaps in the current literature and knowledge. Having established your expertise through your study, suggest lines of inquiry for future researchers. You could state that your study had certain limitations, and “future studies” can address those limitations.
  • Suggest a mixed methods approach – Qualitative and quantitative research each have pros and cons. So, note those ‘cons’ of your approach, then say the next study should approach the topic using the opposite methodology, or could use a mixed-methods approach that combines the benefits of quantitative studies with the nuanced insights of an embedded qualitative case study.

Overall, be clear about both your limitations and how those limitations can inform future studies.

In sum, each type of research method has its own strengths and limitations. Qualitative research excels in exploring depth, context, and complexity, while quantitative research excels in examining breadth, generalizability, and quantifiable measures. Despite their individual limitations, each method contributes unique and valuable insights, and researchers often use them together to provide a more comprehensive understanding of the phenomenon being studied.

Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J., & Williams, R. A. (2021). SAGE research methods foundations. London: Sage Publications.

Attride-Stirling, J. (2001). Thematic networks: An analytic tool for qualitative research. Qualitative Research, 1(3), 385-405.

Clark, T., Foster, L., Bryman, A., & Sloan, L. (2021). Bryman’s social research methods. Oxford: Oxford University Press.

Köhler, T., Smith, A., & Bhakoo, V. (2022). Templates in qualitative research methods: Origins, limitations, and new directions. Organizational Research Methods, 25(2), 183-210.

Lenger, A. (2019). The rejection of qualitative research methods in economics. Journal of Economic Issues, 53(4), 946-965.

Taherdoost, H. (2022). What are different research approaches? Comprehensive review of qualitative, quantitative, and mixed method research, their applications, types, and limitations. Journal of Management Science & Engineering Research, 5(1), 53-63.

Walliman, N. (2021). Research methods: The basics. New York: Routledge.

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


How to Write Limitations of the Study (with examples)

This blog emphasizes the importance of recognizing and effectively writing about limitations in research. It discusses the types of limitations, their significance, and provides guidelines for writing about them, highlighting their role in advancing scholarly research.

Updated on August 24, 2023

No matter how well thought out, every research endeavor encounters challenges. There is simply no way to predict all possible variances throughout the process.

These uncharted boundaries and abrupt constraints are known as limitations in research. Identifying and acknowledging limitations is crucial for conducting rigorous studies. Limitations provide context and shed light on gaps in the prevailing inquiry and literature.

This article explores the importance of recognizing limitations and discusses how to write them effectively. By interpreting limitations in research and considering prevalent examples, we aim to reframe the perception from shameful mistakes to respectable revelations.

What are limitations in research?

In the clearest terms, research limitations are the practical or theoretical shortcomings of a study that are often outside of the researcher’s control. While these weaknesses limit the generalizability of a study’s conclusions, they also present a foundation for future research.

Sometimes limitations arise from tangible circumstances like time and funding constraints, or equipment and participant availability. Other times the rationale is more obscure and buried within the research design. Common types of limitations and their ramifications include:

  • Theoretical: limits the scope, depth, or applicability of a study.
  • Methodological: limits the quality, quantity, or diversity of the data.
  • Empirical: limits the representativeness, validity, or reliability of the data.
  • Analytical: limits the accuracy, completeness, or significance of the findings.
  • Ethical: limits the access, consent, or confidentiality of the data.

Regardless of how, when, or why they arise, limitations are a natural part of the research process and should never be ignored. Like all other aspects of a study, they serve a vital purpose.

Why is identifying limitations important?

Whether to seek acceptance or avoid struggle, humans often instinctively hide flaws and mistakes. Merging this thought process into research by attempting to hide limitations, however, is a bad idea. It has the potential to negate the validity of outcomes and damage the reputation of scholars.

By identifying and addressing limitations throughout a project, researchers strengthen their arguments and curtail the chance of peer censure based on overlooked mistakes. Pointing out these flaws shows an understanding of variable limits and a scrupulous research process.

Showing awareness of and taking responsibility for a project’s boundaries and challenges validates the integrity and transparency of a researcher. It further demonstrates the researchers understand the applicable literature and have thoroughly evaluated their chosen research methods.

Presenting limitations also benefits the readers by providing context for research findings. It guides them to interpret the project’s conclusions only within the scope of very specific conditions. By allowing for an appropriate generalization of the findings that is accurately confined by research boundaries and is not too broad, limitations boost a study’s credibility.

Limitations are true assets to the research process. They highlight opportunities for future research. When researchers identify the limitations of their particular approach to a study question, they enable precise transferability and improve chances for reproducibility. 

Simply stating a project’s limitations is not adequate for spurring further research, though. To spark the interest of other researchers, these acknowledgements must come with thorough explanations regarding how the limitations affected the current study and how they can potentially be overcome with amended methods.

How to write limitations

Typically, the information about a study’s limitations is situated either at the beginning of the discussion section to provide context for readers or at the conclusion of the discussion section to acknowledge the need for further research. However, it varies depending upon the target journal or publication guidelines. 

Don’t hide your limitations

It is also important to not bury a limitation in the body of the paper unless it has a unique connection to a topic in that section. If so, it needs to be reiterated with the other limitations or at the conclusion of the discussion section. Wherever it is included in the manuscript, ensure that the limitations section is prominently positioned and clearly introduced.

While maintaining transparency by disclosing limitations means taking a comprehensive approach, it is not necessary to discuss everything that could have potentially gone wrong during the research study. If the introduction makes no commitment to investigating a particular issue, that issue does not need to be treated as a limitation of the research. Consider the term ‘limitations’ fully and ask, “Did it significantly change or limit the possible outcomes?” Then, qualify the occurrence as either a limitation to include in the current manuscript or as an idea to note for other projects.

Writing limitations

Once the limitations are concretely identified and it is decided where they will be included in the paper, researchers are ready for the writing task. Including only what is pertinent, keeping explanations detailed but concise, and employing the following guidelines is key for crafting valuable limitations:

1) Identify and describe the limitations: Clearly introduce the limitation by classifying its form and specifying its origin. For example:

  • An unintentional bias encountered during data collection
  • An intentional use of unplanned post-hoc data analysis

2) Explain the implications: Describe how the limitation potentially influences the study’s findings and how the validity and generalizability are subsequently impacted. Provide examples and evidence to support claims of the limitations’ effects without making excuses or exaggerating their impact. Overall, be transparent and objective in presenting the limitations, without undermining the significance of the research.

3) Provide alternative approaches for future studies: Offer specific suggestions for potential improvements or avenues for further investigation. Demonstrate a proactive approach by encouraging future research that addresses the identified gaps and, therefore, expands the knowledge base.

Whether presenting limitations as an individual section within the manuscript or as a subtopic in the discussion area, authors should use clear headings and straightforward language to facilitate readability. There is no need to complicate limitations with jargon, computations, or complex datasets.

Examples of common limitations

Limitations are generally grouped into two categories: methodology and research process.

Methodology limitations

Methodology may include limitations due to:

  • Sample size
  • Lack of available or reliable data
  • Lack of prior research studies on the topic
  • Measure used to collect the data
  • Self-reported data

Example (methodology limitation): The researcher is addressing how the large sample size requires a reassessment of the measures used to collect and analyze the data.

Research process limitations

Limitations during the research process may arise from:

  • Access to information
  • Longitudinal effects
  • Cultural and other biases
  • Language fluency
  • Time constraints

Example (research process limitation): The author is pointing out that the model’s estimates are based on potentially biased observational studies.

Final thoughts

Successfully proving theories and touting great achievements are only two very narrow goals of scholarly research. The true passion and greatest efforts of researchers come more in the form of confronting assumptions and exploring the obscure.

In many ways, recognizing and sharing the limitations of a research study both allows for and encourages this type of discovery that continuously pushes research forward. By using limitations to provide a transparent account of the project's boundaries and to contextualize the findings, researchers pave the way for even more robust and impactful research in the future.

Charla Viera, MS



Limitations in Medical Research: Recognition, Influence, and Warning

Douglas E. Ott

Mercer University, Macon, Georgia, USA.

Background:

As the number of limitations increases in a medical research article, their consequences multiply and the validity of findings decreases. How often do limitations occur in a medical article? What are the implications of limitation interaction? How often are the conclusions hedged in their explanation?

Objective:

To identify the number, type, and frequency of limitations and words used to describe conclusion(s) in medical research articles.

Methods:

Search, analysis, and evaluation of open access research articles from 2021 and 2022 from the Journal of the Society of Laparoscopic and Robotic Surgery and 2022 Surgical Endoscopy for type(s) of limitation(s) admitted to by author(s) and the number of times they occurred. Limitations that were obvious but not claimed by the author(s) were also identified. An automated text analysis was performed for hedging words in conclusion statements. A limitation index score is proposed to gauge the validity of statements and conclusions as the number of limitations increases.

Results:

A total of 298 articles were reviewed and analyzed, finding 1,764 limitations. Four articles had no limitations. The average was between 3.7 and 6.9 per article. Hedging, weasel words, and words of estimative probability were found in 95.6% of the conclusions.

Conclusions:

Limitations and their number matter. The greater the number of limitations and ramifications of their effects, the more outcomes and conclusions are affected. Wording ambiguity using hedging or weasel words shows that limitations affect the uncertainty of claims. The limitation index scoring method shows the diminished validity of finding(s) and conclusion(s).

INTRODUCTION

As the number of limitations in a medical research article increases, does their influence have a more significant effect than each one considered separately, making the findings and conclusions less reliable and valid? Limitations are known variables that influence data collection and findings and compromise outcomes, conclusions, and inferences. A large body of work recognizes the effect(s) and consequence(s) of limitations.1–77 Other than the ones known to the author(s), unknown and unrecognized limitations influence research credibility. This study and analysis aim to determine how frequently and what limitations are found in peer-reviewed open-access medical articles for laparoscopic/endoscopic surgeons.

This research is about limitations: how often they occur and how often they are explained and/or justified. Failure to disclose limitations in medical writing limits proper decision-making and understanding of the material presented. All articles have limitations and constraints. Not acknowledging limitations is a lack of candor, ignorance, or a deliberate omission. To reduce the suspicion of invalid conclusions, limitations and their effects must be acknowledged and explained. This allows for a clearer, more focused assessment of the article’s subject matter without explaining its findings and conclusions using hedging and words of estimative probability.78,79

METHODS

An evaluation of open access research/meta-analysis/case series/methodologies/review articles published in the Journal of the Society of Laparoendoscopic and Robotic Surgery (JSLS) for 2021 and 2022 (129) and commentary/guidelines/new technology/practice guidelines/review/SAGES Masters Program articles in Surgical Endoscopy (Surg Endosc) for 2022 (169), totaling 298, were read and evaluated by automated text analysis for limitations admitted to by the paper’s authors, using such words as “limitations,” “limits,” “shortcomings,” “inadequacies,” “flaws,” “weaknesses,” “constraints,” “deficiencies,” “problems,” and “drawbacks” in the search. Limitations not mentioned were found by reading the paper and assigning type and frequency. The number of hedging and weasel words used to describe the conclusion or validate findings was determined by reading the article and adding them up.

RESULTS

For JSLS, there were 129 articles having 63 different types of limitations. Authors claimed 476, and an additional 32 were found within the articles, totaling 508 limitations (93.7% admitted to and 6.3% discovered that were not mentioned). This was an average of 3.9 limitations per article. No article said it was free of limitations. The ten most frequent limitations and their rate of occurrence are in Table 1. The total number of limitations, frequency, and visual depictions are seen in Figures 1A and 1B.

Figure 1. (A) Visual depiction of the ranked frequency of limitations for JSLS articles reviewed.

Table 1. The Ten Most Frequent Limitations Found in JSLS and Surg Endosc Articles

There were 169 articles for Surg Endosc, with 78 different named limitations; the authors claimed a total of 1,162. An additional 94 limitations were found in the articles, totaling 1,256, or 7.4 per article. The authors explicitly stated 92.5% of the limitations, and the remaining 7.5% were found within the articles. Five claimed zero limitations (5/169 = 3%). The ten most frequent limitations and their rate of occurrence are in Table 1. The total number of limitations and frequency is shown in Figures 1A and 1B.

Conclusions were described with hedged language, weasel words, or words of estimative probability 95.6% of the time (285/298).
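As a quick sanity check, the per-article averages and percentages reported above can be re-derived from the stated counts with a few lines of arithmetic; the sketch below uses only figures already given in the text.

```python
# Re-derive the summary figures reported above from the stated counts.
jsls_claimed, jsls_found, jsls_articles = 476, 32, 129
se_claimed, se_found, se_articles = 1162, 94, 169

jsls_total = jsls_claimed + jsls_found              # 508 limitations in JSLS
se_total = se_claimed + se_found                    # 1,256 limitations in Surg Endosc

print(jsls_total / jsls_articles)                   # ~3.9 limitations per JSLS article
print(se_total / se_articles)                       # ~7.4 limitations per Surg Endosc article
print(jsls_total + se_total)                        # 1,764 limitations overall
print(jsls_found / jsls_total, se_found / se_total) # ~6.3% and ~7.5% found but not claimed
print(285 / 298)                                    # ~95.6% of conclusions used hedged wording
```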

DISCUSSION

A research hypothesis aims to test an idea about expected relationships between variables or to explain an occurrence. The assessment of a hypothesis with limitations embedded in the method reaches a conclusion that is inherently flawed. What is compromised by the limitation(s)? The result is an inferential study conducted in the presence of uncertainty. As the number of limitations increases, the validity of information decreases due to the proliferation of uncertain information. Information gathered and conclusions made in the presence of limitations can be functionally unsound. Testing a hypothesis under spurious conditions with limitations and then claiming a conclusion is not a reliable method for generating factual evidence. Authors’ reliance on “evidence” gathered under limitations, and their assertion that it is valid, is spurious reasoning. The bridge between theory and evidence does not run through limitations whose findings are accepted without question. A range of possible conclusions exists, each some degree closer to being either correct or incorrect. Relying on the pursuit of “fact” in the presence of limitations as the safeguard is akin to the fox watching the hen house. Acknowledging the uncertainty that limitations create in research, and discounting the findings’ reliability accordingly, would give more credibility to the effort. Shortcomings and widespread misuses of research limitation justifications make findings suspect and falsely justified in many instances.

The JSLS instructions to authors say that in the discussion section of the paper the author(s) must “Comment on any methodological weaknesses of the study” (http://jsls.sls.org/guidelines-for-authors/). In their instructions for authors, Surg Endosc says that in the discussion of the paper, “A paragraph discussing study limitations is required” (https://www.springer.com/journal/464/submission-guidelines). A comment in a written article about a limitation should express an opinion or reaction. A paragraph discussing limitations, especially if there is more than one, requires just that: a paragraph and a discussion. These requirements were not met or enforced by JSLS 86% (111/129) of the time and by Surg Endosc 92.3% (156/169) of the time. This is an error in peer review, a failure to adhere to established research publication best practices, and a failure by the journals to enforce their own guidelines. The International Committee of Medical Journal Editors, in its uniform requirements for manuscripts, recommends that authors “State the limitations of your study, and explore the implications of your findings for future research and for clinical practice or policy. Discuss the influence or association of variables, such as sex and/or gender, on your findings, where appropriate, and the limitations of the data.” It also says, “describe new or substantially modified methods, give reasons for using them, and evaluate their limitations” and “Include in the Discussion section the implications of the findings and their limitations, including implications for future research” and “give references to established methods, including statistical methods (see below); provide references and brief descriptions for methods that have been published but are not well known; describe new or substantially modified methods, give reasons for using them, and evaluate their limitations.”65 “Reporting guidelines (e.g., CONSORT,1 ARRIVE2) have been proposed to promote the transparency and accuracy of reporting for biomedical studies, and they often include discussion of limitations as a checklist item. Although such guidelines have been endorsed by high-profile biomedical journals, and compliance with them is associated with improved reporting quality,3 adherence remains suboptimal.”4,5

Limitations start in the methodologic design phase of research. They require troubleshooting evaluations from the start to consider what limitations exist, what is known and unknown, where, and how to overcome them, and how they will affect the reasonableness and assessment of possible conclusions. A named limitation represents a category with numerous components. Each factor has a unique effect on findings and collectively influences conclusion assessment. Even a single limitation can compromise the study’s implementation and adversely influence research parameters, resulting in diminished value of the findings, outcomes, and conclusions. This becomes more problematic as the number of limitations and their components increase. Any limitation influences a research paper. It is unknown how much and to what extent any limitation affects other limitations, but it does create a cascading domino effect of ever-increasing interactions that compromise findings and conclusions. Considering “research” as a system, it has sensitivity and initial conditions (methodology, data collection, analysis, etc.). The slightest alteration of a study due to limitations can profoundly impact all aspects of the study. The presence and influence of limitations introduce a range of unpredictable influences on findings, results, and conclusions.

Researchers and readers need to pay attention to and discount the effects limitations have on the validity of findings. Richard Feynman said in “Cargo cult science” “the first principle is that you must not fool yourself and you are the easiest person to fool.” 73 We strongly believe our own nonsense or wrong-headed reasoning. Buddhist philosophers say we are attached to our ignorance. Researchers are not critical enough about how they fool themselves regarding their findings with known limitations and then pass them on to readers. The competence of findings with known limitations results in suspect conclusions.

Authors should not ask for dismissal, disregard, or indulgence of their limitations. They should be thoughtful and reflective about the implications and uncertainty the limitations create67; their uncertainties, blind spots, and impact on the research’s relevance. A meaningful presentation of study limitations should describe the limitation, explain its effect, provide possible alternative approaches, and describe steps taken to mitigate the limitation. This was largely absent from the articles reviewed.

Authors use synonyms and phrases describing limitations that hide, deflect, downplay, and divert attention from them, i.e., some drawbacks of the study are …, weaknesses of the study are…, shortcomings are…, and disadvantages of the study are…. They then say their finding(s) lack(s) generalizability, meaning the findings only apply to the study participants or that care, sometimes extreme, must be taken in interpreting the results. Which limitation components are they referring to? Are the authors aware of the extent of their limitations, or are they using convenient phrases to highlight the existence of limitations without detailing their defects?

Limitations negatively weigh on both data and conclusions, yet no literature exists to provide a quantifiable measure of this effect. The only acknowledgment is that limitations affect research data and conclusions. The adverse effects of limitations are both specific and contextual to each research article and are part of the parameters that affect research. All the limitations are expressed in words, excuses, and a litany of mea culpas asking for forgiveness, without explaining the extent or magnitude of their impact. It is left to the writer and reader to figure out. It is not known what value writers put on their limitations in the 298 articles reviewed from JSLS and Surg Endosc. Listing limitations without commenting on their effect on the findings and conclusions is a compromising red flag. Therefore, a limitation scoring method was developed and is proposed to assess the level of suspicion generated by the number of limitations.

It is doubtful that a medical research article is so well designed and executed that there are no limitations, since there are unknown unknowns. This study showed that authors need to acknowledge all the limitations that are known. They acknowledge the ones they know but do not consider other possibilities. There are the known known limitations: the ones the author(s) are aware of and can measure, some explained, most not. The known unknowns: limitations authors are aware of but cannot explain or quantify. The unknown unknown limitations: the ones authors are not aware of and that have unknown influence(s), i.e., the things they do not know they do not know. These are blind spots (not knowing what they do not know, or black swan events). And the unknown knowns: the limitations authors may be aware of but have not disclosed, thoroughly reported, understood, or addressed. They are unexpected and not considered. See Table 2.74

Table 2. Knowns and Unknowns as They Apply to Limitations

It is possible that authors did not identify, did not want to identify, or did not acknowledge potential limitations, or were unaware of what limitations existed. Cumulative complexity is the result of the presence of multiple limitations, because of the accumulation and interaction of limitations and their components. Just mentioning a limitation category, and not the specific parts that are the limitation(s), is not enough. Authors telling readers of their known research limitations is a caution to discount the findings and conclusions. At what point does the caution for each limitation, its ramifications, and consequences become a warning? When does the piling up of mistakes, bad and missing data, biases, small sample size, lack of generalizability, confounding factors, etc., reach the point where the findings become uninterpretable and meaningless? “Caution” indicates a level of potential hazard; a warning is more dire and consequential. Authors use the word “caution,” not “warning,” to describe their conclusions. There is a point when the number of limitations and their cumulative effects surpasses the point where a caution statement is no longer applicable and a warning statement is required. This is the reason for establishing a limitations risk score.

Limitations put medical research articles at risk. The accumulation of limitations (variables having additional limitation components) creates gaps and flaws that dilute the probability of validity. There is currently no assessment method for evaluating the effect(s) of limitations on research outcomes other than awareness that there is an effect. Authors make statements warning that their results may not be reliable or generalizable and that more research and larger numbers are needed. Just because the weight of any given limitation, or how it discounts the findings, is not known or explained does not negate its causal effect on the data, its analysis, and the conclusions. Limitation variables and the ramifications of their effects have consequences. The relationship is not zero effect, and it accumulates with each added limitation.

As a result of this research, a limitation index score (LIS) system and assessment tool were developed. This limitation risk assessment tool gives a scored assessment of the relative validity of conclusions in a medical article having limitations. The adoption of the LIS scoring assessment tool by authors, researchers, editors, reviewers, and readers is a step toward understanding the effects of limitations and their causal relationships to findings and conclusions. The objective is cleaner, tighter methodologies and better data assessment, to achieve more reliable findings. Adjustments to research conclusions in the presence of limitations are necessary. The degree of modification depends on context. The cumulative effect of this burden must be acknowledged by a tangible reduction in, and questioning of, the legitimacy of statements made under these circumstances. The calculation of the LIS score is detailed in Appendix 1.

A limitation word or phrase is not one limitation; it is a group of limitations under the heading of that word or phrase, each having many additional possible components, just as any individually named influence does. For instance, when an admission of selection bias is noted, the authors do not explain whether it was an exclusion criterion, self-selection, non-responsiveness, loss to follow-up, or recruitment error, how it affects external validity, the lack of randomization, etc., or any of the at least 263 types of known biases causing systematic distortions of the truth, whether unintentional or wanton.40,76 Which forms of selection bias are they identifying?63 Limitations have branches that introduce additional limitations, influencing the study’s ability to reach a useful conclusion. Authors rarely tell you the consequences and extent of the effects limitations have on their study, findings, and conclusions.

This is a sample of limitations and a few of their component variables under the rubric of a single word or phrase. See Table 3.

Table 3. A Limitation Word or Phrase Is a Limitation Having Additional Components That Are Additional Limitations. When an Author Uses the Composite Limitation Word or Phrase, They Leave Out Which of Its Components Is Contributory to the Research Limitations. Each Limitation Interacts with Other Limitations, Creating a Cluster of Cross-Complexities of Data, Findings, and Conclusions That Are Tainted and Negatively Affected

Limitations rarely occur alone. If you see one, there are many you do not see or appreciate. Limitations’ components interact with their own and other limitations, leading to complex connections that interact and discount the reliability of findings. By how much is context dependent, but it is not zero. Limitations are variables influencing outcomes. As the number of limitations increases, the reliability of the conclusions decreases. How many variables (limitations) does it take to nullify the claims of the findings? The weight and influence of each limitation, its aggregate components, and their interconnectedness have an unknown magnitude and effect. The result is a disorderly concoction of hearsay explanations. Table 4 is an example of just two single-explanation limitations and some of their components, illustrating the complex compounding of their effects on each other.

An Example of How Interactions between Only Two Limitations and Some of Their Components Cause 16 Interactions

The novelty of this paper on limitations in medical science is not the identification of research article limitations or their number or frequency; it is the recognition of the multiplier effect(s) of limitations and the influence they have in diminishing any conclusion(s) the paper makes. It is possible that limitations contribute to the inability of studies to replicate and explain why so many findings are one-time occurrences. Therefore, the generalizability statement that should be given to all readers is: BEWARE, THERE IS A REDUCTION EFFECT ON THE CONCLUSIONS IN THIS ARTICLE BECAUSE OF ITS LIMITATIONS.

Journals accept studies done with too many limitations, creating forking-path situations that result in an enormous number of possible associations among individual data points through multiple comparisons. 79 The result is confusion, a muddled mess caused by interactions of limitations that undermine the ability to make valid inferences. Authors know and acknowledge limitations but rarely explain them or their influence. They also use incomplete and biased databases, biased methods, and small sample sizes, fail to eliminate confounders, etc., yet persist in doing research under these circumstances. Why is that? Is it because when limitations are acknowledged, authors feel justified in their conclusions? It wasn’t my poor research design; it was the limitation(s). How do peer reviewers score and analyze these papers without a method to discount the findings and conclusions in the presence of limitations? What is the calculus editors use to justify papers with multiple limitations that reach compromised or spurious conclusions? How much caution or warning should a journal say must be taken in interpreting article results? How much? Which results? When? Under what circumstance(s)?

Since a critical component of research is its limitations, the quality and rigor of research are largely defined by these constraints, 75 making it imperative that limitations be exposed and explained. All studies have limitations, admitted to or not, and these limitations influence outcomes and conclusions. Unfortunately, they are given insufficient attention, accompanied by feeble excuses, but they all matter. The degrees of freedom of each limitation influence every other limitation, magnifying their ramifications and the resulting confusion. The limitations of a scientific article must put the findings in context so the reader can judge the validity and strength of the conclusions. Even when authors acknowledge the limitations of their study, those limitations still influence its legitimacy.

Not only are limitations not properly acknowledged in the scientific literature, 8 but their implications, magnitude, and how they affect a conclusion are not explained or appreciated. Authors work at claiming their work and methods “overcome,” “avoid,” or “circumvent” limitations. Limitations are explained away with lines such as “Failure to prove a difference does not prove lack of a difference.” 60 Sample size, bias, confounders, bad data, etc., are presented as not being what they seem and as not sullying the results. The implication is “trust me.” But that’s not science. Limitations create cognitive distortions and framing (misperception of reality) for both authors and readers. Data in studies with limitations are data having limitations: real but tainted.

Limitations are not a trivial aspect of research. Each one is a tangible something, positive or negative, introduced into a data set that is analyzed and used to reach a conclusion. How did these extra somethings, the known unknowns, the not knowns, and the unknown knowns, affect the validity of the data set and the conclusions? Research presented with the vagaries of explicit limitations is intensified by additional limitations and their component effects on top of the first limitations, quickly diluting any conclusion and making its dependability questionable.

This study’s analysis of limitations in medical articles averaged 3.9% per article for JSLS and 7.4% for Surg Endosc . Authors admit to, and are aware of, some limitations but not all of them, and they discount or leave out others. Limitations were often presented with misleading and hedging language. Authors do not give weight to limitations or suggest the percentage discount they impose on the reliability of the conclusion(s). Since limitations influence findings, reliability, generalizability, and validity, and the magnitude and context of each are unknown, the best that can be said about the conclusions is that they are specific to the study described, context-driven, and suspect.

Limitations mean something is missing, added, incorrect, unseen, unappreciated, fabricated, or unknown; circumstances that confuse, confound, and compromise findings and information to the extent that a notice is necessary. All medical articles should carry this statement: “Any conclusion drawn from this medical study should be interpreted considering its limitations. Readers should exercise caution, use critical judgement, and consult other sources before accepting these findings. Findings may not be generalizable regardless of sample size, composition, representative data points, and subject groups. Methodologic, analytic, and data collection processes may have introduced biases or limitations that can affect the accuracy of the results. Confounding variables, known and unknown, may have influenced the data and/or observations. The accuracy and completeness of the data used to draw a conclusion may not be reliable. The study was specific to time, place, persons, and prevailing circumstances. The weight of each of these factors is unknown to us. Their effect may be limited or compounded and may diminish the validity of the proposed conclusions.”

This study and its findings are limited and constrained by the limitations of the articles reviewed. These articles have known and unknown limitations that are not accounted for, missing data, small sample sizes, incongruous populations, internal and external validity concerns, confounders, and more. See Tables 2 and 3. Some of these are correctable through the authors’ awareness of the consequences of limitations and through plans to address them in the methodology phase of hypothesis assessment and the performance of the research, so as to diminish their effects.

Limitations in research articles are expected, but their effects can be reduced so that conclusions are closer to being valid. Limitations introduce elements of ignorance and suspicion. They need to be explained so that their influence on the believability of the study and its conclusions comes closer to meeting construct, content, face, and criterion validity. As the number of limitations increases, common sense, skepticism, acceptability of the study components, and an understanding of the ramifications of each limitation are necessary to accept, discount, or reject the author’s findings. As the number of hedging and weasel words used to explain the conclusion(s) increases, believability decreases and suspicion regarding the claims rises. Establishing a systematic limitation scoring index for authors, editors, reviewers, and readers, and recognizing the cumulative effects of limitations, will result in a clearer understanding of research content and legitimacy.

How to calculate the Limitation Index Score (LIS): see the accompanying tables. Each limitation admitted to by the authors in the article equals (=) one (1) point. Limitations may be stated by the author as a broad category but can have multiple components; a retrospective study, for example, has these limitation components: 1. data or recall not accurate, 2. data missing, 3. selection bias not controlled, 4. confounders not controlled, 5. no randomization, 6. no blinding, 7. difficulty establishing cause and effect, and 8. inability to draw a conclusion of causation. For each component that is not explained and corrected, add an additional one (1) point to the score. See Table 2.

The Limitation Scoring Index Is a Numeric Limitation Risk Assessment Score Used to Rank Risk Categories and Discount the Probability of the Validity of Conclusions. The More Limitations in a Study, the Greater the Risk of Unreliable Findings and Conclusions

Limitations May Be Generally Stated by the Author but Have Multiple Components. A Retrospective Study, for Example, Has These Disadvantage Components: 1. Data or Recall Not Accurate, 2. Data Missing, 3. Selection Bias Not Controlled, 4. Confounders Not Controlled, 5. No Randomization, 6. No Blinding, 7. Difficult to Establish Cause and Effect, 8. Results Are Hypothesis Generating, and 9. Cannot Draw a Conclusion of Causation. For Each Component Not Explained and Corrected, an Additional One (1) Point Is Added to the Score. Extra Blanks Are for Additional Limitations

An Automatic 2 Points Is Added for Meta-Analysis Studies Since They Have All the Retrospective Detrimental Components. 44 Data from Insurance, State, National, Medicare, and Medicaid Sources Score 2 Points Because of Incorrect Coding, Over-Reporting, Under-Reporting, Etc.; Each Component of the Limitation Adds One Additional Point. For Surveys and Questionnaires, Add One Additional Point for Each Bias. Extra Blanks Are for Additional Limitations

Automatic Five (5) Points for Manufacturer and User Facility Device Experience (MAUDE) Database Articles. The FDA Access Data Site Says Submissions Can Be “Incomplete, Inaccurate, Untimely, Unverified, or Biased” and “the Incidence or Prevalence of an Event Cannot Be Determined from This Reporting System Alone Due to Under-Reporting of Events, Inaccuracies in Reports, Lack of Verification That the Device Caused the Reported Event, and Lack of Information” and “MDR Data Alone Cannot Be Used to Establish Rates of Events, Evaluate a Change in Event Rates over Time or Compare Event Rates between Devices. The Number of Reports Cannot Be Interpreted or Used in Isolation to Reach Conclusions” 80

Total Limitation Index Score

Each limitation not admitted to = two (2) points. A meta-analysis study receives an automatic 2 points since it is retrospective, and its detrimental components are added to those 2 points. Data from insurance, state, national, Medicare, and Medicaid sources score 2 points because of incorrect coding, over-reporting, under-reporting, etc., and each component adds one additional point. Surveys and questionnaires score 2 points, plus one additional point for each bias. See Table 3.

Manufacturer and User Facility Device Experience (MAUDE) database articles receive an automatic five (5) points. The FDA access data site says submissions can be “incomplete, inaccurate, untimely, unverified, or biased” and “the incidence or prevalence of an event cannot be determined from this reporting system alone due to underreporting of events, inaccuracies in reports, lack of verification that the device caused the reported event, and lack of information” and “MDR data alone cannot be used to establish rates of events, evaluate a change in event rates over time or compare event rates between devices. The number of reports cannot be interpreted or used in isolation to reach conclusions.” 80 See Table 4. Add one additional point for each additional limitation noted in the article.

Add one additional point for each additional limitation and one point for each of its components. Extra blanks are for additional limitations and their component scores.
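
The scoring rules above amount to a simple tally. Below is a minimal Python sketch of the LIS arithmetic as described in this section; the function name, argument names, and example inputs are illustrative assumptions, not part of the published tool, which is defined by the article's appendix and tables.

```python
# Minimal, illustrative tally of the Limitation Index Score (LIS) as described
# in the text. Field and function names are hypothetical; the published
# scoring tool is defined in the article's appendix and tables.

def limitation_index_score(
    admitted_limitations,        # list of dicts: {"components_unexplained": int}
    unadmitted_limitations=0,    # limitations the authors did not admit to
    is_meta_analysis=False,      # automatic 2 points
    uses_admin_database=False,   # insurance/state/national/Medicare/Medicaid data
    admin_db_components=0,       # coding errors, over-/under-reporting, etc.
    is_survey=False,             # surveys and questionnaires
    survey_biases=0,             # one extra point per bias
    uses_maude=False,            # MAUDE device-experience database
):
    score = 0
    for lim in admitted_limitations:
        score += 1                                     # each admitted limitation = 1
        score += lim.get("components_unexplained", 0)  # +1 per unexplained component
    score += 2 * unadmitted_limitations                # each unadmitted limitation = 2
    if is_meta_analysis:
        score += 2
    if uses_admin_database:
        score += 2 + admin_db_components
    if is_survey:
        score += 2 + survey_biases
    if uses_maude:
        score += 5
    return score

# Example: a study admitting to selection bias (3 unexplained components) and
# missing data (2 unexplained components), plus one unadmitted limitation.
example = limitation_index_score(
    admitted_limitations=[
        {"components_unexplained": 3},
        {"components_unexplained": 2},
    ],
    unadmitted_limitations=1,
)
print(example)  # (1+3) + (1+2) + 2 = 9
```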

Funding sources: none.

Disclosure: none.

Conflict of interests: none.

Acknowledgments: Author would like to thank Lynda Davis for her help with data collection.

References:

All references have been archived at https://archive.org/web/


How to Write about Research Limitations Without Reducing Your Impact


Being open about what you could not do in your research is actually extremely positive, and it’s viewed favorably by editors and peer reviewers. Writing about your limitations without reducing your impact is a valuable skill that will help your reputation as a researcher.

Areas you might have “failed,” in other words, your limitations, include:

  • Aims and objectives (they were a bit too ambitious)
  • Study design (not quite right)
  • Supporting literature (you’re in uncharted territory)
  • Sampling method (if only you’d snowballed it)
  • Size of your study population (not enough power)
  • Data collection method (bias found its way in)
  • Confounding factors (didn’t see that coming!)

Your limitations don’t harm your work and reputation. Quite the opposite, they validate your work and increase your contribution to your field.

Limitations are quite easy to write about in a useful way that won’t reduce your impact. In fact, it’ll increase it.

Why are limitations so important?

This section covers:

  • Study design limitations
  • Impact limitations
  • Statistical or data limitations
  • Other limitations
  • How to describe your limitations
  • Where to write your limitations
  • Structure for writing about a limitation
  • Writing up a broader limitation
  • Dealing with breakthroughs and niche-type limitations
  • Dealing with critical flaws
  • Curb your enthusiasm: manage expectations

Regrettably, the publish-or-perish mentality has created pressure to only come up with successful results. It’s also not too much to say that journals prefer positive studies – where the findings support the hypothesis.

But success alone is not science. Science is trial and error.

So it’s important to present a well-balanced, comprehensive description of your research. That includes your limitations. Accurately reporting your limitations will:

  • Help prevent research waste on repeated failures
  • Lead to creation of new hypotheses
  • Provide useful information for systematic reviews
  • Further demonstrate the robustness of your study

Adding clear discussion of any negative results and/or outcomes as well as your study limitations makes you much better able to provide your readers (including peer reviewers ) with:

  • Information about your positive results
  • Explanation of why your results are credible
  • Ideas for future hypothesis generation
  • Understanding of why your study has impact

These are good things. There’s even a journal for failure ! That’s how important it is in science.

Some authors find it hard to write about their study limitations, seeing it as an admission of failure. You can do it, and you don’t have to overdo it, either.

Know your limitations and you can anticipate and record them

These might include the procedures, experiments, or reagents (or funding) you have available, as well as specific constraints on the study population. There may also be ethical guidelines, and institutional or national policies, that limit what you can do.

These are very common limitations to medical research, for example. We refer to these kinds as study design limitations. Clinical trials, for instance, may have a restriction on interventions expected to have a positive effect. Or there may be restrictions on data collection based on the study population.

Even if your study has a strong design and statistical foundation, it might have a strong regional, national, or species-based focus. Or your work could be very population- or experiment-specific.

Your entire field of study, in fact, may only be conducive to incremental findings (e.g., particle physics or molecular biology).

These are inherent limits on impact in that they’re so specific. This limits the extendibility of the findings. It doesn’t, however, limit the impact on a specific area or your field. Note the impact and push forward!

Perhaps the most common kind of limitation is statistical or data-based. This category is extremely common in experimental (e.g., chemistry) or field-based (e.g., ecology, population biology, qualitative clinical research) studies.

In many hypothesis-testing situations, you simply may not be able to collect as much data, or data of as high a quality, as you want. Perhaps enrollment was more difficult than expected, under-powering your results.

Statistical limitations can also stem from study design, producing more serious issues in terms of interpreting findings. Seeking expert review from a statistician, such as by using Edanz scientific solutions , may be a good idea before starting your study design.
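
To make "under-powering" concrete, here is a minimal Python sketch (added here, not from the original article) using the standard normal-approximation formula for a two-sample comparison of means; the chosen alpha, power, and effect sizes are illustrative assumptions only, and a statistician should still review any real design.

```python
# Rough per-group sample size for a two-sample comparison of means, using the
# normal-approximation formula:
#   n per group ~= 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
# where d is the standardized effect size (Cohen's d). Illustrative only.
from scipy.stats import norm

def sample_size_per_group(effect_size_d, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    return 2 * ((z_alpha + z_beta) / effect_size_d) ** 2

for d in (0.2, 0.5, 0.8):  # small, medium, large standardized effects
    print(f"d = {d}: ~{sample_size_per_group(d):.0f} participants per group")
# A 'small' effect (d = 0.2) needs roughly 390 participants per group at 80%
# power; enrolling far fewer than that under-powers the comparison.
```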


The above three are often interconnected. And they’re certainly not comprehensive.

As mentioned up top, you may also be limited by the literature. By external confounders. By things you didn’t even see coming (like how long it took you to find 10 qualified respondents for a qualitative study).

Once you’ve identified possible limitations in your work, you need to get to the real point of this post – describing them in your manuscript.

Use the perspective of limitations = contribution and impact to maximize your chances of acceptance.

Reviewers, editors, and readers expect you to present your work authoritatively. You’re the expert in the field, after all. This may make them critical. Embrace that. Counter their possibly negative interpretation by explaining each limitation, showing why the results are still important and useful.

Limitations are usually listed at the end of your Discussion section, though they can also be added throughout. Especially for a long manuscript or for an essay or dissertation, the latter may be useful for the reader.

Writing on your limitations: Words and structure

  • This study did have some limitations.
  • Three notable limitations affected this study.
  • While this study successfully did X, there were some limitations.

Giving a specific number is useful for the reader and can guide your writing. But if it’s a longer list, there’s no need to number them. For a short list, you can simply enumerate them (first, second, third).

But this gets tiring for more than three limitations (bad RX: reader experience).

So, for longer lists, add a bit of variety in the language to engage the reader. Like this:

  • The first issue was…
  • Another limitation was…
  • Additionally,…

An expert editor will be happy to help you make the English more natural and readable.

After your lead-in sentence, follow a pattern: write about your findings and the related limitation(s), give a quick interpretation, back it with support (if needed), and offer the next steps.

This provides a complete package for the reader: what happened, what it means, why this is the case, and what is now needed.

In that way, you’ve admitted what may be lacking, but you’ve further established your authority. You’ve also provided a quick roadmap for your reader. That’s an impactful contribution!

It might not always be logical or readable to give that much detail. As long as you fully describe and justify the limitation, you’ve done your job well.

Your study looked at a weight intervention over 6 months at primary healthcare clinics in Japan. The results were generally positive. But because you only looked at Japanese patients, these findings may not be extendible to patients of other cultures/nationalities, etc.

That’s not a failure at all. It’s a success. But it is a limitation. And other researchers can learn from it and build on it. Write it up in the limitations.

Finding: We found that, in the intervention group, BMI was reduced over 6 months.

Interpretation (and support): This suggests a regimen of routine testing and measurement followed by personalized health guidance from primary physicians had a positive effect on patients’ conditions.

Support: Yamazaki (2019) and Endo et al. (2020) found similar results in urban Japanese clinics and hospitals, respectively.

Limitation and how to use it: While these are useful findings, they are limited by only including Japanese populations. This does not ensure these interventions would be as effective in other nations or cultures. Similar interventions, adapted to the local healthcare and cultural conditions, would help to further clarify the methods.

Now you’ve stated the value of your finding, the limitation, and what to do with it. Nice impact!

Another hurdle you may hit is when your results are particularly novel or you’re publishing in a little-researched field. Those are limitations that need to be stated. In this case, you can support your findings by reinforcing the novelty of your results.

When breaking new ground, there are probably still many gaps in the knowledge base that need to be filled. A good follow-up statement for this type of limitation is to describe what, based on these results, the next steps would be to build a stronger overall evidence base.


It’s possible that your study will have a fairly “critical” flaw (usually in the study design) that decreases confidence in your findings.

Other experts will likely notice this (in peer review, or perhaps on a preprint server), so it’s best to explain why this error or flaw occurred.

You can still explain why the study is worth repeating or how you plan to retest the phenomenon. But you may need to temper your publication goals if you still plan to publish your work.

No one expects science to be perfect the first time and while your peers can be highly critical, no one’s work is beyond limitations. This is important to keep in mind.

Edanz experts can help by giving you an Expert Scientific Review and seeking out your limitations.

Our knowledge base is built on uncovering each piece of the puzzle, one at a time, and limitations show us where new efforts need to be made. Much like peer review , don’t think of limitations as being inherently bad, but more as an opportunity for a new challenge.

Ultimately, your limitations may be someone else’s inspirations. Include them in your submission when you get published in the journal of your choice .

All research faces problems: Being honest impresses people much more than ignoring your limitations.


Research Limitations: A Comprehensive Guide

Embarking on a research journey is an exciting endeavor, but every study has its boundaries and constraints. Understanding and transparently acknowledging these limitations is a crucial aspect of scholarly work. In this guide, we'll explore the concept of research limitations, why they matter, and how to effectively address and navigate them in your academic endeavors.

1. Defining Research Limitations:

  • Definition: Research limitations are the constraints or shortcomings that affect the scope, applicability, and generalizability of a study.
  • Inherent in Research: Every research project, regardless of its scale or significance, possesses limitations.

2. Types of Research Limitations:

  • Methodological Limitations: Constraints related to the research design, data collection methods, or analytical techniques.
  • Sampling Limitations: Issues associated with the representativeness or size of the study sample.
  • Contextual Limitations: Restrictions stemming from the specific time, place, or cultural context of the study.
  • Resource Limitations: Constraints related to time, budget, or access to necessary resources.

3. Why Acknowledge Limitations?

  • Transparency: Acknowledging limitations demonstrates transparency and honesty in your research.
  • Robustness of Findings: Recognizing limitations adds nuance to your findings, making them more robust.
  • Future Research Directions: Addressing limitations provides a foundation for future researchers to build upon.

4. Identifying Research Limitations:

  • Reflect on Methodology: Consider the strengths and weaknesses of your research design, data collection methods, and analysis.
  • Examine Sample Characteristics: Evaluate the representativeness and size of your study sample.
  • Consider External Factors: Assess external factors that may impact the generalizability of your findings.

5. How to Address Limitations:

  • In the Methodology Section: Clearly articulate limitations in the methodology section of your research paper.
  • Offer Solutions: If possible, propose ways to mitigate or address identified limitations.
  • Future Research Suggestions: Use limitations as a springboard to suggest areas for future research.

6. Common Phrases to Express Limitations:

  • "This study is not without limitations."
  • "One limitation of our research is..."
  • "It is important to acknowledge the constraints of this study, including..."

7. Examples of Addressing Limitations:

  • Example 1 (Methodological): "While our survey provided valuable insights, the reliance on self-reported data introduces the possibility of response bias."
  • Example 2 (Sampling): "The small sample size of our study limits the generalizability of our findings to a broader population."
  • Example 3 (Resource): "Due to budget constraints, our research was limited to a single geographical location, potentially impacting the external validity."

8. Balancing Strengths and Limitations:

  • Emphasize Contributions: Highlight the contributions and strengths of your research alongside the limitations.
  • Maintain a Positive Tone: Discuss limitations objectively without undermining the significance of your study.

9. Feedback and Peer Review:

  • Seek Feedback: Share your research with peers or mentors to gain valuable insights.
  • Peer Review: Embrace the feedback received during the peer-review process to enhance the robustness of your work.

10. Continuous Reflection:

  • Throughout the Research Process: Continuously reflect on potential limitations during the entire research process.
  • Adjust as Needed: Be willing to adjust your approach as you encounter unforeseen challenges.

Conclusion:

Understanding and effectively addressing research limitations is a hallmark of rigorous and responsible scholarship. By openly acknowledging these constraints, you not only enhance the credibility of your work but also contribute to the broader academic discourse. Embrace the nuances of your research journey, navigate its limitations thoughtfully, and pave the way for future investigations.


LiveInnovation.org


Don’t Worry! And Write the LIMITATIONS of Your Research!

Do you know someone who thinks they are simply perfect and has no faults? (Well, I know a few and some even become presidents of extremely important countries).

Well, as shocking and disappointing as it may seem to some people: no one is perfect! Some are too tall, some too short, some enjoy country music (nothing personal), some add water to their fine whiskey (honestly, why?) and some do not drink coffee.

The conclusion is: we all have some negative sides! And research is no different!

And what is considered a limitation of a study?

A limitation is any aspect that hinders a study and its findings.

Does it mean that if my study has limitations it is useless?  NO!!!!!!!!!!!

Very often researchers (students or well established researchers) have concerns about clearly describing the limitations of their studies. Why? Because there is sometimes a misconception that if your research limitations are too clear, readers will undermine the relevance of your work. For example, you might be afraid others will think:

“Why are these findings relevant if there are so many limitations to the study?”

All right, first let us make some things clear here:

  • EVERY STUDY HAS LIMITATIONS.
  • Clarifying the limitations of a study allows the reader to better understand under which conditions the results should be interpreted.
  • Clear descriptions of limitations of a study also show that the researcher has a holistic understanding of his/her study. And this is something very positive!  

In other words, clearly describing the limitations of your study should only strengthen your work!

ALSO CHECK :  Read our “STEP BY STEP Thesis Guide” with Many More Tips!

Video Content: Research Limitations 

In case you are enjoying the article, do not forget to watch the video with further support on how to deal with your research limitations.

Examples of Research Limitations

Ok, you got it so far that no one is perfect, that some weird people become presidents and that research limitations should be included in your work.

I guess the next question would be: which limitations should I mention?

Look, it is extremely difficult to describe all possible types of research limitations. It will vary greatly depending on the type and nature of the study.

However, here are some examples:

  • Often studies wish to understand a specific topic (e.g. Brazilian consumers’ perceptions towards a product) but only conduct a study with 50 participants. Considering that the Brazilian population has around 200 million people, can we generalize the results based on only 50 respondents? Clearly NOT! So consider your sample size in relation to the population of your study (see the margin-of-error sketch after this list).
  • For example, many academic studies have used student sampling. There are many advantages to this, such as easy access and low costs for data collection. Nonetheless, using purely student sampling is also extremely limiting if the population of the study is composed of people with varied profiles.
  • Very often, a method is accurate for a research aim, but it also includes many limitations. For example: imagine you wish to understand consumers’ use of toilet paper (weird topic, isn’t it?) and the researcher uses in-depth interviews, as the study has an exploratory nature. Would you, as a respondent, feel comfortable describing your use of toilet paper to a stranger? Probably not! Thus, your answers might be highly biased according to what you think is expected of you or what is socially acceptable. So your answers might not exactly resemble the truth, due to the method.
  • In the example above, the presence of the researcher influenced the responses, right? But would it be different if the interview had been done over the phone? Perhaps yes. Why? Because the topic is sensitive and private (Literally! ). So the point is: the way in which you collect data can represent a strong limitation. Some researchers collect data in busy areas such as train stations where there are many distractions and respondents are in a rush. Is this a limitation? Certainly! Thus, you must reflect to see if the way in which you collected your data represents a limitation.  
  • Imagine you are developing a study involving virtual reality (VR). You can use many different VR devices, ranging from very expensive ones (that have an extraordinary immersion experience) to cheaper ones (that will provide an immersive experience, but not as real). In other words, the type of device used can influence the study results. So if you use equipment (e.g. devices, products, etc.) you have to consider whether the type used represents a limitation or a strength of your work.
  • Often students have a deadline to turn in their work. Other academics have conference or journal deadlines. Would we do better work if we had more time? Of course! Do we have unlimited time to do research and collect data? NO! For this reason, “time” is a very common limitation for many studies.
  • Are you investigating a phenomenon long after it happened? Did you collect your data in a period that was not exactly suitable for respondents for some specific reason? All of these are examples of how timing might represent a strong limitation for studies.
  • Money is always a problem (at least for me. If it is not for you, we should be friends! ). Sometimes we need it to purchase the necessary equipment for a study, to hire people for data collection, to purchase a specific statistical software or to simply reward participants with products or giveaways for having participated in the study. When financial resources are scarce, all of these possibilities are compromised. Consequently, such limitations might be reflected in the results of the study.
  • In the majority of cases, studies start when researchers identify gaps in the literature and try to address them. However, identifying or understanding that there is a gap depends on the researcher’s level of access to the existing literature. What may seem to be a research gap might be a huge misconception simply because the person did not have access to a larger range of scientific literature. Thus, access to literature can also be a limitation.
  • If your study is based on secondary data, pay extra care to the age of the data. Making current assumptions based on old data represents a strong limitation.
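
To put a rough number on the first example in the list above, here is a minimal Python sketch (an illustration added here, not from the original article) of the approximate 95% margin of error for a proportion estimated from a sample, with an optional finite-population correction; the 50-respondent and 200-million-population figures come from the example, everything else is an assumption.

```python
# Rough 95% margin of error for a proportion estimated from n respondents,
# with an optional finite-population correction. Illustrative numbers only.
import math

def margin_of_error(n, p=0.5, population=None, z=1.96):
    se = math.sqrt(p * (1 - p) / n)               # standard error of a proportion
    if population:                                 # finite-population correction
        se *= math.sqrt((population - n) / (population - 1))
    return z * se

# 50 Brazilian respondents out of ~200 million people:
print(f"n = 50:   +/- {margin_of_error(50, population=200_000_000):.1%}")
print(f"n = 1000: +/- {margin_of_error(1000, population=200_000_000):.1%}")
# Roughly +/-13.9 percentage points for n = 50 versus +/-3.1 for n = 1000;
# the sample size, far more than the population size, drives the precision.
```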

Where Should Research Limitations be Included in the Thesis?

Once you are done thinking and considering the limitations of your work, a simple question may arise: Where in my thesis should I include such limitations?

Please note: there is no specific format to this and it may vary from supervisor to supervisor, and sometimes certain universities may have their own guidelines. But USUALLY, the limitations are the VERY LAST section of your thesis, and they appear after the MANAGERIAL RECOMMENDATIONS .

And why? Because as mentioned above, the limitations may be due to any section of your work. For example:

  • Access to literature (literature review or theoretical background)
  • Method and data collection process (methodology)
  • Statistical software (analysis)

For this reason, it doesn’t really make much sense to have it in any other section of your work but the very END .

Got it? Great! Now go ahead and be honest with the limitations of your work! Reviewers will be positively impressed!


Final Thoughts

Please note: All the suggestions here are personal, according to my own supervision style. Feel absolutely free to discuss them with your supervisor or other academics. Each one tends to have their own style and expectations.

Hope these tips have been useful for you and wish you all the best!



ScholarsEdge - Academic Workshops And Research Training

How to Write Limitations of The Study? With Examples


Researchers usually encounter limitations of a study during academic paper writing. Limitations of a study are the shortcomings or flaws that we stumble upon due to various reasons, such as small sample size, unavailability of resources, etc. Listing the study’s limitations is important, as it reflects transparency and shows your understanding of the topic. It also helps structure the research study better.


Most researchers avoid discussing research limitations because they feel that doing so could reduce the value of their research paper from the target audience’s point of view. However, it is important to discuss them and explain how the limitations may affect your study’s findings and conclusions. Moreover, it shows that, as a researcher, you have investigated the research’s shortcomings and have a profound understanding of the research topic.

So, where do you mention the limitations in the paper? It is best to mention them right after highlighting the strong points of your research methodology.

Types of Limitations in Research

Limitations could arise due to tangible factors such as funding and time constraints or the unavailability of participants or equipment. Some of the common types of limitations include:

  • Methodological
  • Theoretical
  • Research Design Limitations

How to Write Limitations of the Study?

Stringent guidelines are followed to narrow down research questions. In this section, you can elucidate the probable weaknesses of the research paper. The study’s limitations are typically written at the beginning of the discussion section, where they provide context for readers, or in the concluding part of the discussion section. The basic steps to correctly structure research limitations are:

  • Identify the Limitations:

Describe the limitations of your study by categorising them and stipulating their origin. Consider factors like research design, sample size, etc., that might have influenced the research.

  • Explain the Implications:

Provide evidence and examples to support claims of the study’s effects and limitations without exaggerating their impact. Explain the potential influences of each limitation on the study’s findings and results. Be transparent while presenting the limitations, but do not undermine the research’s significance.

  • Offer Solutions and Alternative Perspectives:

After identifying the limitations, potential solutions to address them must also be offered. The solutions could be alternative methodologies, suggestions for future studies that address the identified gaps, approaches to minimise bias, etc. Use clear headings and precise language while presenting limitations to enhance readability.

Things You Should Not Include in the Limitations:

  • You need not mention everything that you may personally consider a limitation
  • Do not be defensive and explain why your research has a particular limitation
  • Do not be accusatory and point at other authors about a specific limitation

Tips for Writing the Limitations of a Study

Acknowledging research limitations is a crucial component of a study. Below are some critical tips for discussing study limitations in your research manuscript.

  • Maintain Transparency —Do not structure limitations in confusing and vague language, and do not try to hide them.
  • Impact on the Research – Discuss the study limitations’ impact on your findings or their general applicability.
  • Be concise – Avoid writing lengthy paragraphs about the limitations, which can make the study look flawed.
  • Future Study —Explain how future studies can avoid these limitations. This may include suggestions for alternative methodologies, research directions, or strategies to minimise bias.
  • Balanced Perspective —Balance reviewing the study’s limitations and its contributions. Although it is crucial to acknowledge the limitations and not hide them, it is also important to highlight the significance and strengths of the research.
  • Do not complicate study limitations with computations, complex datasets, and unnecessary jargon.

Key Takeaways

It is quite normal for every research project to have some limitations. It is better to identify the shortcomings of the research and acknowledge them rather than leave them to be cited by your dissertation evaluator. Research limitations often occur due to the following reasons:

  • Formulating the aims and objectives of your research too broadly
  • Flaws in the data collection method
  • The sample size is decided based on the research problem and is important in quantitative studies. If the sample size is too small, the tests may not be able to identify significant associations in the data set.

Limitations can be viewed as an opportunity to learn and improve. They can be used to refine research methodologies and advance knowledge in a particular field.

FAQs: Limitations of The Study


How to write limitations of research?

Follow these steps to write the limitations of research effectively:

  • Identify Limitations
  • Be transparent
  • Provide Context
  • Offer solutions or mitigations
  • Consider alternative perspectives
  • Maintain a professional tone
  • Conclude thoughtfully

What is scope and limitations in research?

The scope of research explains the study’s boundaries and parameters. It outlines the variables, research questions, objectives, and sample being studied. It also includes techniques and methodologies used to collect data and analyse it.

Limitations in research are all about the factors that may hinder the study. These constraints can be sample size, resource unavailability, or time constraints.

What are examples of limitations in research?

The limitations that a researcher typically encounters include:

  • Sampling Bias
  • Confounding variables
  • Measurement error
  • Research design
  • Data Collection methods
  • Response Bias
  • Generalizability
  • Resource and time constraints

How to Present the Limitations of a Study in Research?

Presenting the Limitations of a study in research needs careful consideration. Here are some steps to present Limitations in research:

  • Acknowledge limitations
  • Provide context
  • Be specific
  • Discuss implications
  • Offer suggestions for future research
  • Maintain objectivity


Academic Writing & Research

An online resource for students and researchers

The limitations section: Common Limitations in Research

Every dissertation should include a limitations section in which you recognise the limits and weaknesses of your research, so here are a few tips on what to cover.

Research, by its nature, is a dynamic and iterative process that aims to explore, analyze, and contribute to knowledge in various fields. However, every research endeavour comes with its set of limitations that researchers must acknowledge, address, and navigate. In this blog post, we’ll delve into the common limitations of research and discuss strategies for mitigating their impact on the validity, reliability, and generalizability of findings.

Five main types of limitations

1. Sampling Limitations:

  • Sample Size: Limited sample size can affect the generalizability and representativeness of research findings. Small samples may not adequately capture the diversity or variability within a population, leading to potential biases or limited statistical power.
  • Sampling Bias: Biases in sample selection, such as self-selection bias or sampling from non-representative populations, can compromise the external validity of research outcomes.

2. Measurement and Instrumentation Limitations:

  • Measurement Error: Inaccuracies or inconsistencies in measurement instruments, data collection tools, or operational definitions can introduce measurement error, affecting the reliability and validity of results.
  • Validity Threats: Threats to internal validity (e.g., confounding variables, selection bias) or external validity (e.g., ecological validity, population validity) can impact the robustness and generalizability of research findings.

3. Methodological Limitations:

  • Research Design: Limitations in research design, such as lack of control groups, non-randomized designs, or cross-sectional studies, can constrain the ability to establish causal relationships or infer causality.
  • Data Collection Methods: Issues related to data collection methods, such as self-report biases, social desirability biases, or retrospective data, can introduce inaccuracies or distortions in data interpretation.


4. Contextual and External Factors:

  • Contextual Constraints: Research conducted in specific contexts or settings may face limitations in generalizing findings to broader populations or different contexts.
  • Temporal Limitations: Changes over time, evolving trends, or temporal fluctuations can impact the relevance and applicability of research findings beyond a specific timeframe.

5. Ethical and Practical Constraints:

  • Ethical Considerations: Ethical constraints, such as limitations in accessing sensitive data, obtaining informed consent, or ensuring participant confidentiality, can influence the scope and conduct of research.
  • Resource Constraints: Practical limitations, such as budget constraints, time constraints, or access to resources (e.g., data, equipment, expertise), can impact the feasibility and scope of research endeavours.

Strategies for Addressing Limitations:

  • Transparent Reporting: Clearly articulate and disclose limitations in research methodology, sampling, measurement, and design in research reports, ensuring transparency and accountability.
  • Mitigating Biases: Implement strategies to mitigate biases, such as randomization, blinding, control measures, and sensitivity analyses, to enhance the validity and reliability of findings.
  • Sensitivity Analyses: Conduct sensitivity analyses or robustness checks to assess the impact of potential biases, outliers, or variations in data on research outcomes (a minimal illustrative sketch follows this list).
  • Triangulation: Employ triangulation methods, combining multiple data sources, methods, or perspectives, to enhance the validity, reliability, and depth of research findings.
  • Longitudinal Studies: Consider longitudinal or follow-up studies to track changes over time, validate findings, and assess the stability of research outcomes.
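
As a concrete illustration of the sensitivity-analysis strategy mentioned above, the following minimal Python sketch re-estimates a simple group difference after trimming extreme observations and compares it with the full-sample estimate; the data, the trimming threshold, and all numbers are invented for illustration and are not drawn from any study discussed here.

```python
# A toy sensitivity analysis: estimate a group difference on the full sample,
# then re-estimate after trimming extreme values, and compare the two.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=80)   # simulated control-group outcomes
treated = rng.normal(11.0, 2.0, size=80)   # simulated treated-group outcomes
treated[:3] += 15                          # a few extreme outliers in the treated group

def trimmed(x, z=3.0):
    """Drop observations more than z standard deviations from the group mean."""
    return x[np.abs(x - x.mean()) <= z * x.std()]

full_effect = treated.mean() - control.mean()
robust_effect = trimmed(treated).mean() - trimmed(control).mean()

print(f"Effect, full sample:      {full_effect:.2f}")
print(f"Effect, outliers trimmed: {robust_effect:.2f}")
# If the two estimates diverge substantially, the headline finding is
# sensitive to a handful of observations and should be reported as such.
```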

While every research endeavour has its limitations, acknowledging and addressing these limitations is crucial for maintaining the integrity, rigour, and credibility of research findings. Also, every dissertation should include a limitations section. By adopting transparent reporting practices, implementing mitigation strategies, conducting sensitivity analyses, leveraging triangulation methods, and considering longitudinal approaches, researchers can navigate common limitations effectively and enhance the robustness and applicability of their research contributions.

limitations in research examples

Posted by Glenn Stevens

Glenn is an academic writing and research specialist with 15 years experience as a writing coach and PhD supervisor. Also a qualified English teacher, he previously had an extensive career in publishing. He is currently the editor of this website. Glenn lives in the UK. Contact Glenn


Social Science Works

Center of Deliberation


The Limits Of Survey Data: What Questionnaires Can’t Tell Us


Sarah Coughlan


All research methodologies have their limitations, as many authors have pointed out before (see, for example, Visser, Krosnick and Lavrakas, 2000). From the generalisability of data to the nitty-gritty of bias and question wording, every method has its flaws. In fact, the in-fighting between methodological approaches is one of social science’s worst kept secrets: the hostility between quantitative and qualitative data scholars knows almost no bounds (admittedly that’s ‘almost no bounds’ within the polite world of academic debate) and doesn’t look set to be resolved any time soon. That said, there are some methods that are better suited than others to certain types of studies. This article will examine the role of survey data in values studies and argue that it is a blunt tool for this kind of research, and that qualitative study methods, particularly via deliberation, are more appropriate. It will do so via an examination of a piece of 2016 research published by the German ministry for migrants and refugees (the BAMF) which explored both the demographics and the social values held by refugees who have arrived in Germany in the last three years. This article will argue that surveys are unfit to get at the issues that are most important to people.

The Good, The Bad & The Survey

Germany has been Europe’s leading figure as the refugee crisis has deepened worldwide following the collapse of government in Syria and the rise of ISIS. Today, there are 65.3 million displaced people across the world and 21.3 million refugees (UNHCR, 2016), a number that surpasses even the number of refugees following the Second World War. The exact number of refugees living in Germany (official statistics typically count all migrants seeking protection as refugees, although there is some difference between the various legal statuses) is not entirely clear, and the figure is unstable. And while this figure still lags behind the efforts made by countries like Turkey and Jordan, it represents the highest total number of refugees in a European country and matches the per capita efforts of Sweden. Meanwhile, there are signs that Germany’s residents do not always welcome their new neighbours. For example, in 2016 there were almost 2,000 reported attacks on refugees and refugee homes (Amadeu Antonio Stiftung, 2017); a similar trend was established by Benček and Strasheim (2016), and the rise of the far-right, anti-migrant party, the AfD, in local elections last year points to unresolved resentment towards the newcomers.

In this context, then, it makes sense for the BAMF ( Bundesamt für Migration und Flüchtlinge ), the ministry responsible for refugees and migrants in Germany, to respond to pressure in the media and from politicians to get a better overall picture of the kinds of people the refugees coming to Germany are. As such, their 2016 paper, “Survey of refugees – flight, arrival in Germany and first steps of integration” [1], details a host of information about newcomers in Germany. The study, which relied on questionnaires administered by BAMF officials in a number of languages and in a face-to-face or online format (BAMF, 2016, 11), asked questions of 4,500 refugee respondents. For the most part, the study offers excellent insight into the demographic history of refugees to Germany and will be helpful for policymakers looking to ensure that efforts to help settle refugees are appropriately targeted. For example, the study detailed the relatively high level of education enjoyed by typical refugees to Germany (an average of between 10 and 11 years of schooling) (ibid., 37), some of the specific difficulties this group has in successfully navigating the job market (ibid., 46), and where this group turns to for help with this.

In addition to offering the most up-to-date information about refugees’ home countries and their path into Germany, the study is extremely helpful for politicians and scholars looking to enhance their understanding of logistical and practical issues facing migrants; for example, who has access to integration courses? How many unaccompanied children are in Germany? How many men and how many women fled to Germany? Here, the study is undoubtedly helpful.  However, the latter stages of the report purport to examine the social values held by refugees, and it is this part of the study that this article takes issue with.

Respondents were asked to answer questions about their values. The topics included: the right form a government should take, the role of democracy, voting rights of women, the role of religion in the state, men and women’s equality in a marriage, and perceived difference between the values of refugees and Germans among others. While this article doesn’t take issue with the veracity of the findings reported in the article, it does argue that the methods used here are inappropriate for the task at hand. Consider first the questions relating to refugees’ attitudes toward democracy and government. The report found that 96% of refugee respondents agreed with the statement: “One should have a democratic system” [2] compared with 95% of the German control group (ibid., 52). This finding was picked up in the liberal media and heralded as a sign that refugees share central German social values. It is entirely possible that this is true. However, it isn’t difficult to see the ways in which this number might have been accidentally manufactured and should hence be treated with considerable caution.

To do so, one must first consider the circumstances of the interview or questionnaire. As a refugee in Germany, you are confronted with the authority of the BAMF regularly, and you are also likely aware that it is representatives from this organization who ultimately decide on your and your family’s status in Germany and whether you will have the right to stay or not. You are then asked for detailed information about your family history, your education, and your participation in integration courses by a representative from this institution. Finally, the interviewer asks what your views are on democracy, women’s rights, and religion. Is it too much of a jump to suggest that someone who has had to flee their home and take the extraordinarily dangerous trip to Europe is savvy enough to spot a potential trap here? In these circumstances, there is a tendency to give the answer the interviewer wants to hear. This interviewer bias effect is not a problem exclusive to surveys of refugees’ social values (Davis, 2013); however, the power imbalance in these interactions exacerbates the effect. The argument advanced here is not that refugees do not hold a positive view of democracy, but that trying to find out their views via a survey of this sort is flawed. In fact, the report doesn’t find any significant points of departure between Germans and refugees on any of the major values, other than the difficulties presented by women earning more money than their husbands and its potential to cause marital difficulties (ibid., 54).

[Image caption: The Gillard Government committed in 2010 to release all children from immigration detention by June 2011, yet 1,000 children still languished in the harsh environment of immigration camps around Australia. The Refugee Action Collective organised a protest on July 9, 2011, outside the Melbourne Immigration Transit Accommodation, which is used for the detention of unaccompanied minors.]

Asking Questions About Essentially Contested Concepts

Beyond the serious power imbalance noted above, another key issue not addressed in the BAMF study is the question of contested concepts. Essentially contested concepts, an idea first advanced by W.B. Gallie in 1956, are the big topics like art, beauty, fairness, and trust. These big topics, which also include traditionally social-scientific and political topics like democracy and equality, are defined as ‘essentially contested’ when the premise of the concept – for example ‘freedom’ – is widely accepted, but how best to realise freedom is disputed (Hart, 1961, 156). The BAMF survey uses these big topics without offering a definition to go with them. What do people mean when they say that ‘men and women should have equal rights’ [3] (BAMF, 2016, 52)? What does equality mean in this context? There are of course many different ways that ‘equality’ between men and women can be interpreted. For example, many conservative Catholic churches argue that men and women are ‘equal’ but different, and have clear family roles for men and women. Likewise, participants could equally mean to say that they believe that men and women should have equal, shared family responsibilities; there is no way to know this from this study. Hence, it is difficult to know how best to interpret these kinds of statistics without considerable context.

As part of the work undertaken by Social Science Works, the team are regularly confronted by these kinds of questions via deliberative workshops with Germans and refugees. In these workshops the team ask questions like “What is democracy?”, “What is freedom?”, “What is equality?”. In doing so, the aim of the workshops is to build a consensus together by formulating and reformulating possible definitions [4], finding common ground between conflicting perspectives and ultimately defining the concepts as a group. What is among the most striking things about these meetings is the initial reluctance of participants to volunteer answers – there is a real lack of certainty about what these kinds of words mean in practice, even among participants who, for example, have studied social and political sciences or work in politics. With the benefit of hindsight, workshop participants have acknowledged these problems in dealing with essentially contested concepts, commenting:

“Social Science Works has encouraged me to question my own views more critically and to develop a more precise concept of large and often hard-to-grasp terms such as 'democracy', 'freedom' or 'equality'. This experience has shown me how complicated it is for me – as someone who felt really proficient in these questions – to formulate such ideas concretely.” (German participant from the 2016 series of workshops)

“The central starting point for the training was, for me, the common understanding of democracy and freedom. In the intensive discussion, I realized that these terms, which seem self-evident, are anything but.” (German participant from the 2016 series of workshops)

In attempting to talk about these big issues, it becomes clear just how little consensus there is on these kinds of topics. The participants quoted here work and volunteer in the German social sector and hence confront these kinds of ideas implicitly on a daily basis. The level of uncertainty pointed to here, together with Social Science Works' wider experience of working with volunteers, social workers and refugees, suggests that the lack of fluency in essentially contested concepts is a wider problem. In the context of the BAMF research, then, it is clear that readers ought to take the chapter detailing the 'values' of refugees and Germans with a generous pinch of salt.

Building Consensus & Moving Forward

This article does not seek to suggest that there is no role for survey data in helping to answer questions relating to refugees in Germany. For the most part, the BAMF research offers excellent data on key questions relating to demographics and current social conditions. Hence, the study ought to make an excellent tool for policy makers seeking to better target their support of refugees. However, it is equally clear that to discuss essentially contested concepts like democracy and equality, a survey is a very blunt tool, and here the BAMF study fails to convince. The study seeks to make clear that the social and political values of Germans and refugees are similar and that the differences are minimal. The experience in the deliberative workshops hosted by Social Science Works suggests that this is probably true, insofar as both groups find these concepts difficult to define and have to wrestle to make sense of them. This is not something articulated in the BAMF research, however.

Our collective lack of fluency in these topics, even among social and political scholars, has long roots best described another time. However, if we are to improve our ability to discuss these kinds of topics and build collective ideas for social change and cohesion, there are much better places to begin than a questionnaire. If we are to build a collective understanding of our political structures and our social values, we need to address this lack of fluency by engaging in discussions with diverse groups and together building coherent ideas about social and political concepts.

[1]  Original German: “Befragung von Geflüchteten – Flucht, Ankunft in Deutschland und erste Schritte der Integration“

[2] Original German: „Man sollte ein demokratisches System haben.“

[3] Original German: „Frauen haben die gleichen Rechte wie Männer“

[4] For a more detailed overview of the deliberative method in these workshops, see Blokland, 2016.

Amadeu Antonio Foundation (2016), Hate Speech Against Refugees , Amadeu Antonio Foundation, Berlin.

Benček, D. and Strasheim, J. (2016), Refugees Welcome? Introducing a New Dataset on Anti-Refugee Violence in Germany, 2014–2015 , Working Paper No. 2032, University of Kiel.

Davis, R. E. et al. (2010), Interviewer effects in public health surveys, Health Education Research, Oxford University Press, Oxford.

Hart, H.L.A., (1961),  The Concept of Law , Oxford University Press, Oxford.

IAB-BAMF-SOEP (2016), Befragung von Geflüchteten – Flucht, Ankunft in Deutschland und erste Schritte der Integration, BAMF-Forschungsbericht 29, Nürnberg: Bundesamt für Migration und Flüchtlinge.

UNHCR (2016), Global Trends: Forced Displacement in 2015 , UNHCR, New York.

Visser, P. S., Krosnick, J. A., & Lavrakas, P. (2000), Survey research , in H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology , New York: Cambridge University Press.


Grad Coach

The Research Gap (Literature Gap)

Everything you need to know to find a quality research gap

By: Ethar Al-Saraf (PhD) | Expert Reviewed By: Eunice Rautenbach (DTech) | November 2022

If you’re just starting out in research, chances are you’ve heard about the elusive research gap (also called a literature gap). In this post, we’ll explore the tricky topic of research gaps. We’ll explain what a research gap is, look at the four most common types of research gaps, and unpack how you can go about finding a suitable research gap for your dissertation, thesis or research project.

Overview: Research Gap 101

  • What is a research gap
  • Four common types of research gaps
  • Practical examples
  • How to find research gaps
  • Recap & key takeaways

What (exactly) is a research gap?

Well, at the simplest level, a research gap is essentially an unanswered question or unresolved problem in a field, which reflects a lack of existing research in that space. Alternatively, a research gap can also exist when there’s already a fair deal of existing research, but where the findings of the studies pull in different directions , making it difficult to draw firm conclusions.

For example, let’s say your research aims to identify the cause (or causes) of a particular disease. Upon reviewing the literature, you may find that there’s a body of research that points toward cigarette smoking as a key factor – but at the same time, a large body of research that finds no link between smoking and the disease. In that case, you may have something of a research gap that warrants further investigation.

Now that we’ve defined what a research gap is – an unanswered question or unresolved problem – let’s look at a few different types of research gaps.


Types of research gaps

While there are many different types of research gaps, the four most common ones we encounter when helping students at Grad Coach are as follows:

  • The classic literature gap
  • The disagreement gap
  • The contextual gap, and
  • The methodological gap


1. The Classic Literature Gap

First up is the classic literature gap. This type of research gap emerges when there’s a new concept or phenomenon that hasn’t been studied much, or at all. For example, when a social media platform is launched, there’s an opportunity to explore its impacts on users, how it could be leveraged for marketing, its impact on society, and so on. The same applies for new technologies, new modes of communication, transportation, etc.

Classic literature gaps can present exciting research opportunities , but a drawback you need to be aware of is that with this type of research gap, you’ll be exploring completely new territory . This means you’ll have to draw on adjacent literature (that is, research in adjacent fields) to build your literature review, as there naturally won’t be very many existing studies that directly relate to the topic. While this is manageable, it can be challenging for first-time researchers, so be careful not to bite off more than you can chew.


2. The Disagreement Gap

As the name suggests, the disagreement gap emerges when there are contrasting or contradictory findings in the existing research regarding a specific research question (or set of questions). The hypothetical example we looked at earlier regarding the causes of a disease reflects a disagreement gap.

Importantly, for this type of research gap, there needs to be a relatively balanced set of opposing findings . In other words, a situation where 95% of studies find one result and 5% find the opposite result wouldn’t quite constitute a disagreement in the literature. Of course, it’s hard to quantify exactly how much weight to give to each study, but you’ll need to at least show that the opposing findings aren’t simply a corner-case anomaly .


3. The Contextual Gap

The third type of research gap is the contextual gap. Simply put, a contextual gap exists when there’s already a decent body of existing research on a particular topic, but an absence of research in specific contexts .

For example, there could be a lack of research on:

  • A specific population – perhaps a certain age group, gender or ethnicity
  • A geographic area – for example, a city, country or region
  • A certain time period – perhaps the bulk of the studies took place many years or even decades ago and the landscape has changed.

The contextual gap is a popular option for dissertations and theses, especially for first-time researchers, as it allows you to develop your research on a solid foundation of existing literature and potentially even use existing survey measures.

Importantly, if you're going to go this route, you need to ensure that there's a plausible reason why you'd expect potential differences in the specific context you choose. If there's no reason to expect different results between existing and new contexts, the research gap wouldn't be well justified. So, make sure that you can clearly articulate why your chosen context is “different” from existing studies and why that might reasonably result in different findings.


4. The Methodological Gap

Last but not least, we have the methodological gap. As the name suggests, this type of research gap emerges as a result of the research methodology or design of existing studies. With this approach, you’d argue that the methodology of existing studies is lacking in some way , or that they’re missing a certain perspective.

For example, you might argue that the bulk of the existing research has taken a quantitative approach, and therefore there is a lack of rich insight and texture that a qualitative study could provide. Similarly, you might argue that existing studies have primarily taken a cross-sectional approach , and as a result, have only provided a snapshot view of the situation – whereas a longitudinal approach could help uncover how constructs or variables have evolved over time.


Practical Examples

Let’s take a look at some practical examples so that you can see how research gaps are typically expressed in written form. Keep in mind that these are just examples – not actual current gaps (we’ll show you how to find these a little later!).

Context: Healthcare

Despite extensive research on diabetes management, there’s a research gap in terms of understanding the effectiveness of digital health interventions in rural populations (compared to urban ones) within Eastern Europe.

Context: Environmental Science

While a wealth of research exists regarding plastic pollution in oceans, there is significantly less understanding of microplastic accumulation in freshwater ecosystems like rivers and lakes, particularly within Southern Africa.

Context: Education

While empirical research surrounding online learning has grown over the past five years, there remains a lack of comprehensive studies regarding the effectiveness of online learning for students with special educational needs.

As you can see in each of these examples, the author begins by clearly acknowledging the existing research and then proceeds to explain where the current area of lack (i.e., the research gap) exists.


How To Find A Research Gap

Now that you’ve got a clearer picture of the different types of research gaps, the next question is of course, “how do you find these research gaps?” .

Well, we cover the process of how to find original, high-value research gaps in a separate post . But, for now, I’ll share a basic two-step strategy here to help you find potential research gaps.

As a starting point, you should find as many literature reviews, systematic reviews and meta-analyses as you can, covering your area of interest. Additionally, you should dig into the most recent journal articles to wrap your head around the current state of knowledge. It’s also a good idea to look at recent dissertations and theses (especially doctoral-level ones). Dissertation databases such as ProQuest, EBSCO and Open Access are a goldmine for this sort of thing. Importantly, make sure that you’re looking at recent resources (ideally those published in the last year or two), or the gaps you find might have already been plugged by other researchers.

Once you've gathered a meaty collection of resources, the section that you really want to focus on is the one titled “further research opportunities” or “further research is needed” (often abbreviated to FRIN). In this section, the researchers will explicitly state where more studies are required – in other words, where potential research gaps may exist. You can also look at the “limitations” section of the studies, as this will often spur ideas for methodology-based research gaps.

By following this process, you’ll orient yourself with the current state of research , which will lay the foundation for you to identify potential research gaps. You can then start drawing up a shortlist of ideas and evaluating them as candidate topics . But remember, make sure you’re looking at recent articles – there’s no use going down a rabbit hole only to find that someone’s already filled the gap 🙂

Let’s Recap

We’ve covered a lot of ground in this post. Here are the key takeaways:

  • A research gap is an unanswered question or unresolved problem in a field, which reflects a lack of existing research in that space.
  • The four most common types of research gaps are the classic literature gap, the disagreement gap, the contextual gap and the methodological gap. 
  • To find potential research gaps, start by reviewing recent journal articles in your area of interest, paying particular attention to the FRIN section .

If you’re keen to learn more about research gaps and research topic ideation in general, be sure to check out the rest of the Grad Coach Blog . Alternatively, if you’re looking for 1-on-1 support with your dissertation, thesis or research project, be sure to check out our private coaching service .




Limitations Section

This guide will discuss the core concepts of study limitations and provide the foundations for how to formulate this section in an academic research paper.

Scientific research is an imperfect process. The core activity of research, investigating research questions on topics both known and unknown, inherently involves elements of risk, including human error, barriers to data gathering, limited resources, and bias. Researchers are encouraged to discuss the limitations of their work to enhance the process of research, as well as to allow readers to gain an understanding of the study's framework and value.

The limitations of a study are defined as any characteristics, traits, actions, or influences that could impact the research process , and therefore its findings . Types of limitations can differ significantly, ranging from internal aspects, such as flaws in design and methodology, to external influences that a researcher was unable to control. A study may have several limitations that impact how its findings withstand validity tests, the generalizability of conclusions, or the appropriateness of the study design in a specific context.

Importance of Discussing Limitations

Many new researchers fear openly and clearly stating the limitations of their studies as they worry it will undermine the validity and relevance of their work for readers and other professionals in the field. That is not the case, as a statement of study limitations allows the reader to better understand the conditions of the study and the challenges that the researcher has encountered. Not including this section, or leaving out vital aspects, which can address anything from sampling to the specific research methodology, can be detrimental to the general research field as it establishes an incomplete and potentially fallacious depiction of the research. Within academia, it is expected that all studies have limitations to some extent. Including this section demonstrates a comprehensive and holistic understanding of the research process and topic by the author.

A discussion of limitations should be a reflective learning process that assesses the magnitude of the limitations and critically evaluates their impact on the study. Stating limitations is important because it creates opportunities for both the original author and other researchers to improve the quality and validity of future studies. Including limitations follows the core principle of transparency in scientific research, with the purpose of maintaining mutual integrity and promoting further progress in similar studies.

Descriptions of Various Limitations

  • Sample size or profile – sampling is one of the most common limitations mentioned by researchers. This is often due to the difficulty of finding a perfect sample that both fits the size parameters and necessary characteristics of the study to ensure generalizability of results. Various sampling techniques are also open to error and bias, which may potentially influence outcomes. Sometimes researchers are faced with limitations in selecting samples and resort to selective picking of participants or, the opposite, including irrelevant people in the general pool to reach the necessary total.
  • Availability of information or previous research – generally, studies are based on previous knowledge or theoretical concepts on a specific topic. This provides a strong foundation for developing both the design and research problem for the investigation. However, there are instances where research is done on relatively specific topics, or is very progressive. Therefore, a lack of knowledge or other previous studies may limit the scope of the analysis, lead to inaccuracies in the author’s arguments, and present an increased margin for error in many aspects of the research and methodology.
  • Methodology errors – the complexity of modern research leads to potential limitations in methodology. Most often, it is regarding data collection and analysis, as these aspects can strongly influence outcomes. Data collection techniques differ and, although fitting for the study design, present strong limitations in terms of privacy, distractions, or inappropriate levels of detail.
  • Bias – a potential limitation that can affect all researchers. This is a limitation that researchers attempt to avoid by ensuring there are no conflicts of interest, lack of any emotional or prejudiced attitudes towards the topic, and establishing a level of oversight by referring to an ethics committee and peer-review procedures. As humans, it is inherent that bias will be present to some extent. However, it is the responsibility of the researcher to remain objective and attempt to control any potential bias or inaccuracies throughout every stage of the research process.

Structuring and Writing Limitations in Research Paper

The limitations section should be written in such a way that it demonstrates that the author understands the core concepts of bias, confounding, and analytical self-criticism. It is not necessary to highlight every single limitation, but rather the ones that have a direct impact on the study results or the research problem. The thought process of the researcher should be presented, explaining the pros and cons of any decisions made and the circumstances which led to the limitation. Structuring the limitations should be done in a fourfold approach:

  • Identify and describe the limitation. This should be done through the use of professional terminology and accompanying definitions when necessary. The explanation of the limitation should be brief and precise to ensure that readers have a clear grasp of the issue, as well as being able to follow the author’s pattern of thought.
  • Outline the potential influence or impact that the limitation may have on the study. This consists of elements such as the likelihood of occurrence, the magnitude of impact, and the general direction in which a specific limitation has driven the study findings. It is generally accepted that some limitations will have a more profound influence than others. Therefore, it is vital to highlight the impact of the limitation so that readers can decide which issues to weigh when examining the topic; for example, limitations that bias results towards the null are generally less concerning than those that exaggerate an effect.
  • Discuss alternative approaches to the specific limitations , or the research question in general. A justification should be provided by the author to support the particular approach and methodology selected in the specific study and why it was warranted within the context of any limitations. If possible, persuasive evidence should be provided and alternative decisions discussed to some extent. This demonstrates transparency of thought and reassures readers that despite potential limitations, the selected approach was the best alternative for the current research on the topic within the field of study.
  • Describe techniques to minimize any risks resulting from the limitations. This may include reference to previous research and suggestions on the improvement of design and analysis.

Limitations are an inherent part of any research study. Therefore, it is generally accepted in academia to acknowledge various limitations as part of the research process. Issues may vary, ranging from sampling and literature review, to methodology and bias. However, there is a structure for identifying these elements, discussing them, and offering insight or alternatives on how limitations can be mitigated. This not only enhances the process of the research but also helps readers gain a comprehensive understanding of a study’s conditions.


Case Study Research Method in Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a research method, but researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.


 Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies : Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies : Used to explore situations where an intervention being evaluated has no clear set of outcomes. It helps define questions and hypotheses for future research.
  • Descriptive case studies : Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies : Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic : Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective : Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.
Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The volume of data, together with the time restrictions in place, impacted the depth of analysis that was possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

Breuer, J., & Freud, S. (1895).  Studies on hysteria . Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child .

Diamond, M., & Sigmundson, K. (1997). Sex Reassignment at Birth: Long-term Review and Clinical Implications. Archives of Pediatrics & Adolescent Medicine , 151(3), 298-304

Freud, S. (1909a). Analysis of a phobia of a five year old boy. In The Pelican Freud Library (1977), Vol 8, Case Histories 1, pages 169-306

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch ., I, p. 357-421; GW, VII, p. 379-463; Notes upon a case of obsessional neurosis, SE , 10: 151-318.

Harlow J. M. (1848). Passage of an iron rod through the head.  Boston Medical and Surgical Journal, 39 , 389–393.

Harlow, J. M. (1868).  Recovery from the Passage of an Iron Bar through the Head .  Publications of the Massachusetts Medical Society. 2  (3), 327-347.

Money, J., & Ehrhardt, A. A. (1972).  Man & Woman, Boy & Girl : The Differentiation and Dimorphism of Gender Identity from Conception to Maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study



  • Open access
  • Published: 03 June 2024

Multi-arm multi-stage (MAMS) randomised selection designs: impact of treatment selection rules on the operating characteristics

  • Babak Choodari-Oskooei (ORCID: orcid.org/0000-0001-7679-5899),
  • Alexandra Blenkinsop (ORCID: orcid.org/0000-0002-2328-8671),
  • Kelly Handley (ORCID: orcid.org/0000-0003-4036-2375),
  • Thomas Pinkney (ORCID: orcid.org/0000-0001-7320-6673) &
  • Mahesh K. B. Parmar

BMC Medical Research Methodology, volume 24, Article number: 124 (2024)


Multi-arm multi-stage (MAMS) randomised trial designs have been proposed to evaluate multiple research questions in the confirmatory setting. In designs with several interventions, such as the 8-arm 3-stage ROSSINI-2 trial for preventing surgical wound infection, there are likely to be strict limits on the number of individuals that can be recruited or the funds available to support the protocol. These limitations may mean that not all research treatments can continue to accrue the required sample size for the definitive analysis of the primary outcome measure at the final stage. In these cases, an additional treatment selection rule can be applied at the early stages of the trial to restrict the maximum number of research arms that can progress to the subsequent stage(s).

This article provides guidelines on how to implement treatment selection within the MAMS framework. It explores the impact of treatment selection rules, interim lack-of-benefit stopping boundaries and the timing of treatment selection on the operating characteristics of the MAMS selection design.

We outline the steps to design a MAMS selection trial. Extensive simulation studies are used to explore the maximum/expected sample sizes, familywise type I error rate (FWER), and overall power of the design under both binding and non-binding interim stopping boundaries for lack-of-benefit.

Pre-specification of a treatment selection rule reduces the maximum sample size by approximately 25% in our simulations. The familywise type I error rate of a MAMS selection design is smaller than that of the standard MAMS design with similar design specifications without the additional treatment selection rule. In designs with strict selection rules - for example, when only one research arm is selected from 7 arms - the final stage significance levels can be relaxed for the primary analyses to ensure that the overall type I error for the trial is not underspent. When conducting treatment selection from several treatment arms, it is important to select a large enough subset of research arms (that is, more than one research arm) at early stages to maintain the overall power at the pre-specified level.

Conclusions

Multi-arm multi-stage selection designs gain efficiency over the standard MAMS design by reducing the overall sample size. Diligent pre-specification of the treatment selection rule, final stage significance level and interim stopping boundaries for lack-of-benefit are key to controlling the operating characteristics of a MAMS selection design. We provide guidance on these design features to ensure control of the operating characteristics.


Introduction

Multi-arm multi-stage (MAMS) trial designs can efficiently evaluate several medical interventions by allowing multiple research arms to be studied under one protocol and enabling interim stopping for lack-of-benefit based on primary (or an intermediate) outcome measure of the trial. In MAMS designs, the research arms are compared against a common control arm (generally, standard-of-care treatment) and these pairwise comparisons can be made in several stages. Royston et al. developed a framework for a MAMS design that allows the use of an intermediate ( I ) outcome at the interim stages that may or may not be the same as the definitive ( D ) outcome at the final analysis [ 1 , 2 , 3 ]. Choodari-Oskooei et al. give an extensive account of Royston et al.’s MAMS design and discuss their underlying principles [ 3 ].

In the Royston et al. standard MAMS design, monotonically decreasing significance levels are defined for the interim-stage lack-of-benefit analyses to determine which research interventions can continue recruiting patients [ 2 ]. In principle, all research arms which perform sufficiently better than the control arm at each interim analysis, by a pre-defined threshold, can continue recruitment and have the potential to reach the final stage efficacy analysis. This approach to treatment selection has been described as a 'keep all promising' rule [ 4 ]. However, two challenges may arise under such a framework. First, the maximum sample size, which is achieved when all arms reach the final stage, might become too large if the study includes several research treatment arms. Therefore, the maximum sample size of the standard MAMS design with the keep all promising rule can become unfeasible in settings where the resources (e.g. patients/funding) are limited. An example is the ROSSINI-2 trial in surgery - see next section for details [ 5 , 6 ]. Second, there will be large variation in the actual sample size of the trial, depending on how many research arms pass the interim lack-of-benefit analyses. In practice, funders may find it highly desirable to avoid such uncertainty about the required sample size.

In some settings, there is likely to be a limit on the number of individuals that can be recruited, or the funds available to undertake the protocol. The timeline for a standard (or full) MAMS trial might also be specifically restricted. These constraints can mean not all research treatments can accrue sufficient individuals for the analysis of the primary outcome measure. Therefore, it is highly desirable to consider an additional ‘selection rule’ that determines the maximum number of research arms at each stage, which we henceforth denote a MAMS selection design . This would allow the treatment selection and confirmatory stages to be done under the same master protocol, and provide greater control over the overall sample size and required resources. Furthermore, the MAMS selection design formally allows for interim lack-of-benefit stopping and selection rules based on an intermediate outcome measure [ 7 ]. This offers higher degrees of flexibility and efficiency compared with alternative designs [ 8 ].

This paper addresses several research questions around designing a MAMS trial that implements an interim treatment selection rule and allows for interim lack-of-benefit analysis. Previous drop-the-loser designs only allow for interim treatment selection rules [ 9 ], whereas the MAMS selection designs of this article allow for both an interim treatment selection rule and lack-of-benefit analysis on the primary or intermediate outcome measures [ 7 ]. The overarching aim is to show how the maximum (and expected) sample size of a MAMS trial can be reduced by implementing an additional treatment selection rule using a pragmatic approach whilst maintaining the desired overall type I error rate and power. It explores the impact of the number of arms selected (selection rule) and the timing of treatment selection, together with the chosen threshold for lack-of-benefit analysis, on the maximum/expected sample sizes, familywise type I error rate (FWER), and overall power of the design. Finally, it provides practical guidance on how a MAMS selection design can be realised and implemented in trials with several research arms and multiple stages, and illustrates the advantages of such designs in reducing the required resources.

Example: ROSSINI-2 selection design

Trial setting: The Reduction Of Surgical Site Infection using several Novel Interventions (ROSSINI)-2 trial [NCT03838575] is a phase III 8-arm, 3-stage adaptive design investigating in-theatre interventions to reduce surgical site infection (SSI) following abdominal surgery [ 5 , 6 ]. In this trial, three interventions are being tested, with patients randomised to receive all, none or some of these in combination, giving 7 research arms in total. The control arm is no intervention. A schema of the trial design is represented by Fig.  1 [ 6 ]. At the design stage, there was a biological rationale for the single interventions to interact when used in combination, but there was no information on the degree of this presumed interaction effect. This ruled out a factorial design for this study.

Figure 1: Schema for the ROSSINI-2 MAMS selection design. At least 2 research arms are dropped at each interim stage [ 6 ].

Design specification: The treatment effect size (used in all stages) is the difference in the proportion of patients who develop SSI up to 30 days after surgery. The target effect size is a 5% absolute reduction in the SSI event rate in each of the 7 research arms from the control arm event rate of 15%. Patients are randomised with a 2:1 ratio in favour of the control arm throughout all stages - see the online Supplemental Material for more details. The fixed allocation ratio of 2:1 is important since changing the allocation ratio for a particular comparison midway through a trial implicitly affects the variance of the estimated treatment effect of interest for that comparison, hence potentially violating the equal variance assumption across all comparisons.

Table 1 shows the design parameters for the ROSSINI-2 trial without a selection rule. This is an optimal design which is optimised for a standard MAMS under certain conditions, minimising a loss function - see [ 10 ] and online Supplemental Material for details. We used the nstagebinopt and nstagebin Stata commands for this purpose [ 10 ]. This standard MAMS design includes two interim lack-of-benefit analyses with interim one-sided significance levels of (0.40, 0.14), acting as the corresponding lack-of-benefit boundaries on the P -value scale - i.e., no formal stopping rule for early evidence of efficacy. In the ROSSINI-2 trial, the familywise type I error rate (FWER) is the overall type I error rate of interest since the combination treatments, which included the single interventions, could not be regarded as distinct therapies [ 11 ]. The FWER is controlled at 2.5% level (one-sided).

The maximum sample size of 8847 for this (optimal) standard MAMS design exceeded the budget of the funding agency. Therefore, the trial planned to restrict the number of research arms recruiting at each stage to a maximum of 5 arms in stage 2 and 3 research arms in the final stage - that is, an additional treatment selection rule of 7:5:3, ensuring a maximum sample size of 6613.

Specification of a MAMS selection design

This section outlines the specification of MAMS selection designs, focusing on superiority trials. We assume that the same primary outcome is used at the interim stages for both treatment selection and lack-of-benefit analysis. The parameter \(\theta\) represents the difference in the outcome measure between a research arm and the control group. For continuous outcome measures, \(\theta\) could be the difference in the means of the two groups; for binary data the difference in the proportions; for time-to-event data a log hazard ratio. Without loss of generality, assume that a negative value of \(\theta _{jk}\) indicates a beneficial effect of treatment k in comparison to the control group at stage j . In trials with K research arms, a set of K null hypotheses are tested at each stage j ,

\(H_{jk}^{0}: \theta _{jk} \ge \theta _{j}^{0}, \qquad k=1,\ldots ,K,\)

for some pre-specified null effects \(\theta _{j}^{0}\) . In practice, \(\theta _{j}^{0}\) is usually taken to be 0 on a relevant scale such as the risk (mean) difference for binary (continuous) outcomes or log hazard ratio for survival outcomes [ 3 ]. The direction of the hypotheses can be reversed if a trial is seeking an increase in the outcome measure compared to the control arm. For sample size and power calculations, a minimum target treatment effect (often the minimum clinically important difference \(\theta _{j}^{1}\) ) is also required.

At each stage, the significance level \(\alpha =(\alpha _1,\ldots ,\alpha _J)\) and power \(\omega =(\omega _1,\ldots ,\omega _J)\) are chosen for testing each pairwise comparison of the research treatment k against the control group. \(L=(l_1,\ldots ,l_{J-1})\) is the lower threshold for (interim) lack-of-benefit on the Z -test statistic scale for each pairwise comparison of the research arm k against control, determined by \(\alpha\) ( \(l_j=\Phi ^{-1}(\alpha _j)\) ). The critical value for rejecting the null hypothesis for the selected research arm(s) at the end of the trial is defined as \(c=\Phi ^{-1}(\alpha _J)\) - in general, \(c=l_J\) . A stopping rule for efficacy could also be applied [ 12 , 13 ]; for simplicity we do not consider it in this article. In the MAMS selection design, an additional selection rule is also pre-specified as \(S = (s_1:\ldots :s_{J-1})\) , where \(s_j\) is the maximum number of research arms to be selected at the end of stage j . The selection rule can be written as \(K:s_1:s_2:\ldots :s_{J-1}\) , reflecting notation used by others [ 8 , 14 ]. Note that \(s_{J-1}\) can be greater than one, which means more than one primary hypothesis can be tested at the final stage. However, in practice fewer arms may be selected at the interim stages if not all \(s_j\) arms pass the lack-of-benefit threshold.

Let \(Z_{jk}= \frac{\hat{\theta }_{jk}}{\sigma _{\hat{\theta }_{jk}}}\) be the Z -test statistic comparing research arm k against the control arm at stage j ( \(j=1,\ldots ,J\) ), where \(\sigma _{\hat{\theta }_{jk}}\) is the standard error of the treatment effect estimator for comparison k at stage j . \(Z_{jk}\) follows a normal distribution with unit variance and (standardised) mean treatment effect \(\Delta _{jk}\) , so that \(Z_{jk} \sim N(0,1)\) under the null hypothesis. The joint distribution of the Z -test statistics therefore follows a multivariate normal distribution:

\((Z_{11},\ldots ,Z_{JK})^{\top } \sim MVN(\varvec{\Delta _{JK}}, \varvec{\Sigma }),\)

where \(\varvec{\Delta _{JK}}\) and \(\varvec{\Sigma }\) are matrices representing the (standardised) mean treatment effects and the corresponding covariance for the \(J\times K\) test statistics, respectively.
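Although the elements of \(\varvec{\Sigma }\) are not spelled out in this extract, their general form can be sketched under the usual assumptions that the stage-wise test statistics are computed on cumulative data and that all comparisons share a common control arm; here \(n_{jk}\) denotes the cumulative per-arm sample size for comparison k at stage j and A the experimental-to-control allocation ratio:

\(\operatorname{corr}(Z_{jk}, Z_{j'k}) \approx \sqrt{n_{jk}/n_{j'k}} \quad (j < j'), \qquad \operatorname{corr}(Z_{jk}, Z_{jk'}) \approx \frac{A}{1+A} \quad (k \ne k').\)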

At each interim analysis, the test statistics \((Z_{j1},\cdots ,Z_{jK})\) are ranked in order of effect size, denoted by vector \(\varvec{\psi }_{\varvec{j}} = (\psi _{j1},\cdots ,\psi _{jK})\) , with the rank of research arm k at stage j given by \(\psi _{jk}\)  - e.g., the research arm with the largest effect size at stage j will have rank 1, i.e. \(\psi _{jk}=1\) . An interim decision based on two selection mechanisms is used to determine which research arms should continue to recruit in the subsequent stage:

If \(\psi _{jk} \le s_j \bigcap Z_{jk} \le l_j\) , research arm k continues to the next stage.

If \(\psi _{jk}> s_j \bigcup Z_{jk} > l_j\) , research arm k ceases recruitment (‘dropped’).

The operating characteristics of the design can also be calculated under non-binding interim lack-of-benefit stopping boundaries by replacing \(Z_{jk} \le l_j\) ( \(Z_{jk} > l_j\) ) with \(Z_{jk} \le \infty\) ( \(Z_{jk} > \infty\) ) at interim stages, which effectively means 'turning off' the interim stopping boundaries. At the final analysis, the test statistics of the research arms that reached the final stage are compared to the final stage critical value, corresponding to the significance level \(\alpha _{J}\) , for assessing efficacy:

If \(Z_{Jk} > l_J\) , the primary null hypothesis for comparison k cannot be rejected.

If \(Z_{Jk} \le l_J\) , the primary null hypothesis for comparison k is rejected and conclude efficacy for research arm k .
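To make the interim decision rule concrete, the following is a minimal Python sketch (not the authors' software, which is implemented in Stata) of a single interim analysis: arms are ranked by observed effect size and retained only if they fall within the selection rule \(s_j\) and, when the boundary is binding, also pass the lack-of-benefit threshold \(l_j\). The Z statistics and design values in the example are purely illustrative.

    import numpy as np
    from scipy.stats import norm

    def interim_selection(z, s_j, l_j, binding=True):
        """One interim analysis of a MAMS selection design (illustrative sketch).

        z       : Z-test statistics of the research arms still in the trial
                  (more negative = more beneficial, matching the text's convention)
        s_j     : maximum number of research arms allowed to continue (selection rule)
        l_j     : lack-of-benefit boundary on the Z scale, l_j = Phi^{-1}(alpha_j)
        binding : if False, the lack-of-benefit boundary is 'turned off' and only
                  the selection rule is applied

        Returns a boolean mask of the arms that continue to the next stage.
        """
        z = np.asarray(z, dtype=float)
        # rank 1 = largest observed benefit (most negative Z), as in the text
        ranks = np.argsort(np.argsort(z)) + 1
        keep = ranks <= s_j                  # selection rule: keep at most s_j arms
        if binding:
            keep &= z <= l_j                 # must also pass the lack-of-benefit boundary
        return keep

    # Example: 7 research arms, select at most 5, alpha_1 = 0.40 (so l_1 is about -0.25)
    z_stage1 = [-1.8, -0.9, -0.3, -0.1, 0.4, -1.2, -0.6]
    print(interim_selection(z_stage1, s_j=5, l_j=norm.ppf(0.40)))

In this example, five arms continue: the two arms with the weakest observed effects are dropped, one by the selection rule and one by the lack-of-benefit boundary.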

Next, we outline the steps to design a MAMS selection trial.

Steps to design a MAMS selection trial

The following steps should be taken to design a MAMS selection trial with interim lack-of-benefit (and efficacy) stopping boundaries.

Choose the number of experimental (E) arms, K , and stages, J . The number of stages should be chosen based on both practical, e.g. expected accrual rate, and statistical considerations [ 3 ].

Choose the definitive D outcome, and (optionally) I outcome.

Choose the null values for \(\theta\) - e.g. the absolute risk difference on the intermediate ( \(\theta _{I}^{0}\) ) and definitive ( \(\theta _{D}^{0}\) ) outcomes.

Choose the minimum clinically relevant target treatment effect size, e.g. in trials with binary outcomes the absolute risk difference on the intermediate ( \(\theta _{I}^{1}\) ) and definitive ( \(\theta _{D}^{1}\) ) outcomes.

Choose the control arm event rate (median survival) in trials with binary (survival) outcome.

Choose the allocation ratio A (E:C), the number of patients allocated to each experimental arm for every patient allocated to the control arm. For a fixed-sample (1-stage) multi-arm trial, the optimal allocation ratio (i.e. the one that minimizes the sample size for a fixed power) is approximately \(A=1/\sqrt{K}\) . Choodari-Oskooei et al. provide further guidance for the MAMS selection design when only one research arm is selected at stage 1 [ 7 ].
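For example, with \(K=7\) research arms this approximation gives \(A=1/\sqrt{7}\approx 0.38\) , i.e. roughly 0.38 patients allocated to each research arm for every control arm patient (equivalently, about 2.6 control patients per patient on any given research arm).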

In \(I\ne D\) designs, choose the correlation between the estimated treatment effects for the I and D outcomes. An estimate of the correlation can be obtained by bootstrapping relevant existing trial data.

Choose the accrual rate per stage to calculate the trial timelines.

Choose a one-sided significance level for lack-of-benefit and the target power for each stage ( \(\alpha _{jk}\) , \(\omega _{jk}\) ). The chosen values for \(\alpha _{jk}\) and \(\omega _{jk}\) are used to calculate the required sample sizes for each stage.

Choose whether to allow early stopping for overwhelming efficacy on the primary ( D ) outcome. If yes, choose an appropriate efficacy stopping boundary \(\alpha _{Ej}\) on the D -outcome measure for each stage 1, ...,  J , where \(\alpha _{EJ}=\alpha _{J}\) . Possible choices are Haybittle-Peto or O’Brien-Fleming stopping boundaries used in group sequential designs, or one based on \(\alpha\) -spending functions - see Blenkinsop et al. 2019 [ 13 ] for details.

Choose whether to allow for additional treatment selection at interim stages. If yes, choose an appropriate treatment selection rule. For a trial with J stages, the selection rule is defined by \(K:s_1:s_2:\ldots :s_{J-1}\) .

Given the above design parameters, calculate the number of control and experimental arm (effective) sample sizes required to trigger each analysis and the operating characteristics of the design, i.e. \(n_{jk}\) in trials with continuous and binary outcomes and \(e_{jk}\) in trials with time-to-event outcomes, as well as the overall type I error rate and power. If the desired (pre-specified) overall type I error rate and power have not been maintained, for instance if the overall power is smaller than the pre-specified value, steps 9-11 should be repeated until success. Alternatively, if the overall type I error rate is larger than the pre-specified value, one can choose a more stringent (lower) design alpha for the final stage, \(\alpha _{J}\) , and repeat steps 9-11 until the desired overall type I error rate is achieved.
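The final step amounts to a simple feedback loop around the design's simulated operating characteristics. The schematic below is not the authors' Stata implementation; `simulate_operating_characteristics` is a hypothetical helper standing in for the kind of simulation described in the following sections, and the adjustment step size is arbitrary.

```python
def tune_final_alpha(design, target_fwer=0.025, step=0.0005):
    """Schematic version of the last design step: tighten the final
    stage significance level alpha_J until the overall type I error
    rate is at or below its pre-specified value.

    `simulate_operating_characteristics(design)` is a hypothetical
    helper returning (fwer, power) for a candidate design, e.g. by
    Monte Carlo simulation.  If the overall power falls short, the
    stagewise design parameters (alpha_j, omega_j, selection rule)
    should be revisited instead, as described in the text.
    """
    fwer, power = simulate_operating_characteristics(design)
    while fwer > target_fwer:
        design["alpha_J"] -= step          # more stringent final alpha
        fwer, power = simulate_operating_characteristics(design)
    return design, fwer, power
```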

Operating characteristics of the MAMS selection design

In this article, we use the term ‘operating characteristics’ to refer to both the overall type I error rate and power. The overarching aim of the MAMS selection design is to reduce the maximum and expected sample size. Therefore, we first define the maximum and expected sample sizes.

Maximum and expected sample sizes

The maximum sample size (MSS) is the total sample size for the trial under the assumption that there are \(K, s_1, s_2,\ldots , s_{J - 1}\) experimental treatments in each stage - that is, assuming binding treatment selection rules and non-binding lack-of-benefit stopping rules. In the standard (or full) MAMS design, the selection rule is \(K, K,\ldots , K\) throughout. Therefore, the maximum sample size is calculated assuming that all experimental treatments continue to the final stage. The expected sample sizes (ESS) under the global null ( \(H_{0}\) ) and alternative ( \(H_{1}\) ) hypotheses are also calculated for all the simulation scenarios - see Appendix D of the online Supplemental Material for further details and formula. We used simulations to calculate the expected sample sizes.
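Under a binding selection rule the maximum sample size follows directly from the per-stage accrual and the maximum number of arms allowed to recruit in each stage. The helper below illustrates the bookkeeping; the per-stage group sizes and allocation ratio in the example are placeholders, not the ROSSINI-2 values.

```python
def maximum_sample_size(stage_n, selection, control_multiple=1.0):
    """Maximum sample size (MSS) of a J-stage MAMS selection design.

    stage_n          : per-research-arm accrual during each stage
                       (length J)
    selection        : (K, s_1, ..., s_{J-1}), the maximum number of
                       research arms recruiting in stages 1..J
    control_multiple : control arm accrual per research arm patient
                       (1/A for an allocation ratio A, E:C)

    Illustrative bookkeeping only.
    """
    return sum(n * (arms + control_multiple)
               for n, arms in zip(stage_n, selection))

# e.g. a 7:5:3 selection rule with hypothetical per-arm stage sizes and
# an assumed allocation ratio of A = 0.5 (two control patients per
# research arm patient)
print(maximum_sample_size(stage_n=[150, 180, 400],
                          selection=(7, 5, 3),
                          control_multiple=2.0))
```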

Familywise type I error rate (FWER)

In a MAMS selection design, the research arms are compared against each other at the interim selection stages, which implicitly links the pairwise comparisons together. For this reason we focus here on control of the FWER as the type I error rate of interest. Since we consider designs with interim lack-of-benefit analysis, the FWER is the overall probability of a false positive trial result in any of the \(s_{J-1}\) comparisons that reach the primary efficacy analysis.

For the standard (keep all promising) MAMS design, the Dunnett probability can be used to calculate the FWER under the global null hypothesis assuming all promising arms are selected [ 15 ]. This controls the FWER in the strong sense [ 16 ]. Analytical derivations have been developed to calculate the FWER in designs when only one arm is selected for the final stage [ 8 , 17 ]. However, in the MAMS selection design with more flexible selection rules, the analytical derivations are more complex. Details are included in the online Supplemental Material. In this article, we use simulations to calculate the FWER.
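The sketch below shows how such a simulation can be set up under the global null, reusing `mams_covariance` and `interim_decision` from the earlier sketches and the sign convention of this section (an arm passes the lack-of-benefit boundary if \(Z \le l_j\) and a null hypothesis is rejected at the final stage if \(Z \le c\)). The boundary values in the example simply echo the stagewise significance levels quoted elsewhere in the article ( \(\Phi^{-1}(0.625)\), \(\Phi^{-1}(0.14)\), \(\Phi^{-1}(0.005)\) ); the covariance structure and allocation ratio are assumptions, so the estimate is illustrative rather than a reproduction of the published numbers.

```python
import numpy as np

def simulate_fwer(J, K, info, L, selection, c,
                  A=0.5, binding=True, nsim=100_000, seed=1):
    """Monte Carlo estimate of the FWER under the global null
    (all standardised treatment effects equal to zero).

    Reuses mams_covariance() and interim_decision() from the sketches
    above.  L = (l_1, ..., l_{J-1}) and c are on the Z scale, with
    smaller Z favouring the research arm.  Assumes a non-increasing
    selection rule (K, s_1, ..., s_{J-1}).
    """
    rng = np.random.default_rng(seed)
    Sigma = mams_covariance(J, K, info, A)
    Z = rng.multivariate_normal(np.zeros(J * K), Sigma, size=nsim)
    Z = Z.reshape(nsim, J, K)

    false_positive = np.zeros(nsim, dtype=bool)
    for i in range(nsim):
        in_play = np.ones(K, dtype=bool)
        for j in range(J - 1):
            l_j = L[j] if binding else np.inf   # non-binding: boundary off
            keep = interim_decision(np.where(in_play, Z[i, j], np.inf),
                                    l_j, selection[j + 1])
            in_play &= keep
        # a false positive: any surviving arm rejected at the final stage
        false_positive[i] = np.any(Z[i, J - 1, in_play] <= c)
    return false_positive.mean()

# Illustrative three-stage, seven-arm design with a 7:5:3 selection rule
fwer = simulate_fwer(J=3, K=7, info=[0.21, 0.45, 1.0],
                     L=[0.32, -1.08], selection=(7, 5, 3),
                     c=-2.576, A=0.5, nsim=50_000)
print(round(fwer, 4))
```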

Overall power

The power of a clinical trial is the probability that, under a particular target treatment effect \(\theta ^{1}\) , a truly effective treatment is identified at the final analysis. We use simulations to calculate the overall power when one research arm has the target effect size and the other arms have a null effect (i.e. the remaining arms are ineffective). In this case the overall power is defined as the probability that the effective research arm is chosen at the interim selection stages and the primary null hypothesis is rejected at the final stage for the comparison of that research arm against the control. This approach to defining power in a multi-arm setting with selection has been adopted by others [ 18 ]. Furthermore, we calculate the power to identify any effective research arm (any-pair power) under different configurations of treatment effects and effect sizes, reported in Appendix E of the online Supplemental Material [ 11 ]. Any-pair (or disjunctive) power is the probability that at least one null hypothesis is (correctly) rejected for effective research arms at the final stage.
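The same machinery can be pointed at the overall power by giving one research arm a non-zero standardised effect and counting only the runs in which that arm survives every selection stage and its null hypothesis is rejected at the final analysis. As above, this is a sketch under assumed correlation and boundary values; under the sign convention used here the standardised effect of a beneficial arm is negative and grows in magnitude as information accrues.

```python
import numpy as np

def simulate_power(J, K, info, L, selection, c, delta_by_stage,
                   A=0.5, nsim=100_000, seed=2):
    """Overall power: P(the truly effective arm -- here arm 0 -- is
    selected at every interim stage and rejected at the final stage).

    delta_by_stage gives the standardised effect of arm 0 at each
    stage; with the sign convention of this section a beneficial arm
    has negative values whose magnitude grows with information.
    Reuses mams_covariance() and interim_decision() from above.
    """
    rng = np.random.default_rng(seed)
    Sigma = mams_covariance(J, K, info, A)
    mean = np.zeros(J * K)
    mean[0::K] = delta_by_stage            # arm 0 at stages 1..J
    Z = rng.multivariate_normal(mean, Sigma, size=nsim).reshape(nsim, J, K)

    success = np.zeros(nsim, dtype=bool)
    for i in range(nsim):
        in_play = np.ones(K, dtype=bool)
        for j in range(J - 1):
            keep = interim_decision(np.where(in_play, Z[i, j], np.inf),
                                    L[j], selection[j + 1])
            in_play &= keep
        success[i] = in_play[0] & (Z[i, J - 1, 0] <= c)
    return success.mean()
```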

Simulation study

Simulations were carried out to explore the impact of the number of research arms selected, the timing of treatment selection, and the threshold for the interim lack-of-benefit analyses on the operating characteristics of a MAMS selection trial. Designs with both binding and non-binding lack-of-benefit stopping boundaries are considered.

Trial design parameters

Table 2 presents the trial design parameters used in the simulation studies. In ROSSINI-2, the first and second interim analyses were scheduled to occur once 21% and 45% of the total control arm patients (that is, information time) were recruited to the trial, respectively. The number of replications is 1,000,000 in each experimental condition. We used Stata 18.0 to conduct all simulations. Further details on the simulation algorithm and the data generating mechanism are included in the online Supplemental Material.

Different selection rules were also considered. A factorial approach was followed, testing each parameter in isolation whilst fixing all other parameters of the design. This was done systematically, starting with a design which selects all research arms given they pass the stopping boundary for lack-of-benefit (i.e. the ‘standard’ MAMS design), and decreasing the selected subset size incrementally. Using combinatorics, for a J -stage design there are \(\left( {\begin{array}{c}K+J-2\\ J-1\end{array}}\right)\) ways of making a subset selection across the \(J-1\) interim analyses. For example, for the ROSSINI-2 design, there are 28 ways to select from 7 research arms across two interim analyses.
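The count quoted for ROSSINI-2 can be reproduced by enumerating every non-increasing interim selection rule \((s_1, s_2)\) with \(1 \le s_2 \le s_1 \le 7\):

```python
from itertools import combinations_with_replacement

K, J = 7, 3
# every non-increasing interim selection rule (s_1, ..., s_{J-1})
rules = sorted(tuple(sorted(c, reverse=True))
               for c in combinations_with_replacement(range(1, K + 1), J - 1))
print(len(rules))   # 28 possible rules for K = 7 and two interim analyses
```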

Simulation results

Table 3 presents the required maximum sample size for the primary efficacy analysis by different selection rules. The maximum sample size decreases as the selection rule becomes more strict - that is, when a smaller number of research arms are selected at each stage. For example, it decreases by 49% with the most strict selection rule of 7 : 1 : 1. The maximum sample size for the 7 : 7 : 7 selection rule is the same as that of the standard MAMS design. The expected sample sizes can be substantially lower, depending on the underlying treatment effects of the research arms - see Table 1 in Appendix D of the online Supplemental Material. Next, we describe the impact of the reduction of sample size on the overall operating characteristics of the design.

Familywise type I error rate and power

Table 3 presents the results for the overall familywise type I error rate and power for different selection rules under the binding and non-binding interim stopping boundaries for lack-of-benefit.

Impact of treatment selection rules: The results indicate that very extreme selection rules (e.g., 7 : 1 : 1) markedly reduce the overall familywise type I error rate under both binding (0.0125) and non-binding (0.0126) interim stopping boundaries for lack-of-benefit. However, the price of this reduced type I error rate is a substantial reduction in the overall power of the trial under both the binding (0.706) and non-binding (0.723) interim stopping boundaries for lack-of-benefit. Even selecting 2 arms at the first stage reduces the overall power to 0.79 (from 0.85 for the standard MAMS design) under the binding stopping boundaries for lack-of-benefit. In general, in designs with several research arms, selecting only one or two research arms at the first stage selection can decrease the overall power substantially because, given the small sample size, the chance of incorrect selection is high.

An extreme selection rule (e.g., 7 : 1 : 1) can substantially reduce the overall familywise type I error rate. To ensure that the overall type I error for the trial is not underspent, the final stage significance level for the primary analysis can be relaxed in selection designs with extreme selection rules. Note that in this case the familywise type I error of selection designs with no interim lack-of-benefit boundaries is still strongly controlled under the global null hypothesis [ 8 ]. Although it is intuitive that the FWER will also be maximised under the global null hypothesis for designs with both interim selection rules and lack-of-benefit analyses, this has not been formally proved; however, weak control of the FWER is guaranteed at the nominal level.

We used simulations to find the appropriate final stage significance level for the selection designs in Table 3 , using a grid search to find the corresponding value in each case. For the design with the 7 : 1 : 1 selection rule, the final stage primary efficacy analysis can be tested at the 0.0105 significance level instead of the 0.005 level used for the standard MAMS design. This further reduces the maximum sample size from 4521 to 4131, a reduction of about 8%, for the same overall power of 0.706 and 0.723 under the binding and non-binding interim lack-of-benefit stopping rules, respectively - see Tables 4 and 5 in Appendix G. Our simulations indicate that for the ROSSINI-2 design with the 7 : 5 : 3 selection rule, a final stage significance level of 0.0051 controls the overall FWER at 2.5% (one-sided) - very similar (to the fourth decimal place) to that of the standard MAMS design with only interim lack-of-benefit stopping boundaries and a final stage significance level of 0.005. Therefore, the significance level of 0.005, which is used for the final stage primary efficacy analysis in the ROSSINI-2 trial, effectively controls the overall FWER at 2.5% for both the standard MAMS design and the ROSSINI-2 selection design. Our simulations also show that, for a MAMS selection design with a 7:5:3 selection rule, the overall operating characteristics of the design are maintained at their pre-specified levels with this final stage significance level.
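A grid search of this kind is easy to sketch with the FWER simulation above: scan candidate final-stage significance levels and keep the largest one whose estimated FWER stays at or below the one-sided 2.5% target. The grid, simulation size and design values below are illustrative, and `simulate_fwer` is the sketch defined earlier, not the authors' Stata code.

```python
import numpy as np
from scipy.stats import norm

target_fwer = 0.025
best_alpha_J = None
for alpha_J in np.arange(0.004, 0.0121, 0.0005):
    fwer = simulate_fwer(J=3, K=7, info=[0.21, 0.45, 1.0],
                         L=[0.32, -1.08], selection=(7, 1, 1),
                         c=norm.ppf(alpha_J),   # final boundary on the Z scale
                         A=0.5, nsim=50_000)
    if fwer <= target_fwer:
        best_alpha_J = alpha_J   # largest alpha_J so far that respects the target
print(best_alpha_J)
```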

Comparison with the standard MAMS design: The MAMS selection design with a \(K:K:\cdots :K\) selection rule (i.e. with no restriction on maximum sample size) resembles the standard MAMS design with no selection rule. Results presented in Table 3 indicate that the FWER of the standard MAMS design (Table 1 ) provides an upper bound for any MAMS selection design with similar design parameters [ 19 , 20 ].

Here, our aim is to find a candidate MAMS selection design with operating characteristics similar to those of the ‘optimal’ standard MAMS design. The results in Table 3 indicate that selecting fewer than 5 research arms at the first stage reduces the overall power of the selection design well below 0.85 - the level we targeted for the standard MAMS design. The overall results suggest that a design with a 7 : 5 : 3 selection rule gives comparable operating characteristics to those of the optimal standard MAMS design. Table 4 compares different MAMS selection designs with the optimal standard MAMS design and with two-arm trials. Compared with the optimal standard MAMS design, the selection design with the 7 : 5 : 3 selection rule, with a maximum sample size of 6613, decreased the maximum sample size by 25%. Furthermore, this selection design gives the three main interventions the chance to be tested for efficacy at the final analysis if they are selected and pass the interim lack-of-benefit boundaries.

Timing of treatment selection and early stopping boundaries

This section explores the impact of the timing of treatment selection and of the interim stopping boundaries for lack-of-benefit on the operating characteristics of a MAMS selection design. The timings of the interim analyses were explored for a range of values of the stagewise significance levels \(\alpha _j\) by considering different sample sizes (in terms of information time) and significance levels at the interim stages, while the other design parameters remained the same.

For stage 1, we considered 10%, 20%, 30% and 40% of the control arm information time - which correspond to stage 1 significance levels ( \(\alpha _{1}\) ) of 0.625, 0.42, 0.275, and 0.179, respectively. When varying the timing of the stage 1 analysis, we kept the timing of the stage 2 analysis fixed - that is, at 45% of the control arm information time. For stage 2, we considered 45%, 50%, 60% and 65% of the control arm information time - which correspond to stage 2 significance levels ( \(\alpha _{2}\) ) of 0.14, 0.112, 0.07, 0.055, respectively. When varying the timing of the stage 2 analysis, we kept the timing of the stage 1 analysis fixed, that is, at 21% of the control arm information time. We calculated the FWER and overall power under binding interim lack-of-benefit stopping boundaries in all experimental conditions. For brevity, we only present the results for 6 different selection rules. The overall power is calculated when one research arm is effective under the target effect size.

Figure 2 shows the impact of the timing of research arm selection on the FWER and overall power of the MAMS selection design by different selection rules. The top graphs indicate that the timing of the first treatment selection has the greatest impact on the overall power of a MAMS selection design: if an efficacious research arm is not selected to continue at the first analysis, the overall power (which is conditional upon selection at stages 1 and 2) cannot be recovered later. The bottom graphs indicate that the timing of the second stage selection has negligible impact on the operating characteristics of the design.

Figure 2. FWER (left) and overall power (right) by the timing of the treatment selection at stage 1 (top) and stage 2 (bottom) and subset selection rule for a three-stage design. The overall power is calculated when one research arm is effective with the target effect size. The X-axis is control arm information time in all graphs.

The FWER and overall power increase with later timing of the first stage treatment selection under all selection rules. Delaying treatment selection allows more data to accrue in support of a statistically significant result. Importantly, however, all choices of significance levels presented here preserve the FWER below the nominal level of 0.025, and result in a smaller FWER than that of the standard MAMS design.

The overall power decreases substantially when only one arm is selected at a very early selection stage: the 7:1:1 selection rule has an overall power of 0.53 (with a maximum sample size of 3849) when the stage 1 selection takes place at 10% control arm information time. The main reason for the reduced power in this scenario is the high uncertainty associated with the estimated risk when the best performing research arm is selected on the basis of a sample size of 94. This considerably reduces the probability of correct selection at the first interim stage, which limits the overall power. However, this probability increases considerably when more than 3 research arms are selected at the first interim analysis, which results in almost the same overall power as selecting all seven. The maximum sample sizes of the other scenarios are included in Table 3 of Appendix F.

In some situations, there is a need to constrain the maximum sample size for a MAMS trial because, for example, there is a limit on the number of patients that can be recruited and/or there is a limited funding envelope for the study. To limit the maximum sample size, an additional pre-specified treatment selection rule can be implemented at interim analyses of a standard MAMS design. This reduces the maximum sample size with minimal impact on the operating characteristics of the trial. Table 4 shows that such a rule can reduce the maximum sample size by about 25% and 42% compared with the optimal standard MAMS design and two-arm trials, respectively. The treatment selection rule acts as an upper bound on the number of research arms that are allowed to continue to the next stage. In practice, depending on how many research arms pass the interim lack-of-benefit analyses, the actual number of research arms that are taken to the next stage might be smaller than the selection rule.

The overall familywise type I error rate of a MAMS selection design is smaller than the corresponding standard MAMS design without a selection rule. It becomes smaller as the selection rule becomes more restrictive. Therefore, investigators may consider relaxing the final stage significance levels for the primary analyses to ensure that the overall type I error for the trial is not underspent. This requires simulations to find the appropriate final stage significance level, which should be done independent of the ongoing trial data, otherwise the overall type I error rate can be inflated over the nominal value [ 21 ].

The overall power of a MAMS selection design can be preserved at approximately the same level as that of a standard MAMS design if the timing of treatment selection and the selection rule are chosen judiciously. The power loss is greatest when only one research arm is selected very early on - i.e., at 10% control arm information time in our simulation studies. In this case, to preserve the overall power above 80%, the treatment arm selection should take place at around 40% control arm information time when selecting one effective arm from all seven options. If more than one arm is to be selected at the first interim analysis, the selection can occur earlier whilst preserving the overall power, because the probability that a truly effective research arm is selected at the interim selection stages is higher. This finding accords with previous results [ 7 , 19 ].

Our simulation results suggest that the choice between binding and non-binding interim lack-of-benefit stopping boundaries has a larger impact on the overall power than on the FWER: the FWER can increase by 0.005 under the non-binding boundaries, whereas the overall power can decrease by more than 5% under the binding boundaries. Further, there is a pre-specified upper bound on the number of research arms in each stage of a MAMS selection design. Therefore, given the context and the impact on the operating characteristics, binding interim stopping boundaries for lack-of-benefit are more appropriate in this setting, and this should be taken into account when calculating the operating characteristics of a MAMS selection design. Moreover, the impact on the overall power of varying non-zero treatment effects smaller than the target effect size is an important design consideration. We conducted extensive simulation studies on this issue; the findings are presented in our previous publication on MAMS selection designs [ 7 ].

Finally, the MAMS selection design presented in this article has several advantages over alternative designs. First, the selection rule is pre-specified and allows for more than one research arm to be selected at the interim stages. Second, the test statistics are based on sufficient statistics, so they can be used with covariate adjustment, which also makes the method applicable to different outcome measures. Third, other approaches that allow for more flexible unplanned adaptivity may lose power compared with designs that only allow for pre-planned adaptation if this flexibility is not used in practice [ 22 ]. The pre-specification of all adaptations to the design appears to be favoured and recommended by regulators and reviewers [ 23 , 24 ]. The MAMS selection design satisfies all these considerations. Further, we have implemented the MAMS selection design in the new version of the nstagebin command, which is used for sample size calculation; the nstagebin command is available from the SSC archive of user-written Stata commands. We therefore recommend it as a design to be formally considered in trials in which several research interventions are to be evaluated and resources (e.g. patients or funding) are limited.

Availability of data and materials

All data are provided in the main manuscript and its online Supplemental Material. Only simulation studies were used in this article - no real trial data or patient information were used.

Abbreviations

  • MAMS: Multi-arm multi-stage

  • FWER: Familywise type I error rate

  • ROSSINI-2: The Reduction Of Surgical Site Infection using several Novel Interventions-2 trial

References

1. Royston P, Parmar MKB, Qian W. Novel designs for multi-arm clinical trials with survival outcomes with an application in ovarian cancer. Stat Med. 2003;22(14):2239–56. https://doi.org/10.1002/sim.1430
2. Royston P, Barthel FM-S, Parmar MKB, Choodari-Oskooei B, Isham V. Designs for clinical trials with time-to-event outcomes based on stopping guidelines for lack of benefit. Trials. 2011;12(1):81. https://doi.org/10.1186/1745-6215-12-81
3. Choodari-Oskooei B, Sydes M, Royston P, Parmar MKB. Multi-arm Multi-stage (MAMS) Platform Randomized Clinical Trials. In: Principles and Practice of Clinical Trials. 1st ed. Cham: Springer; 2022. https://doi.org/10.1007/978-3-319-52677-5_110-1
4. Magirr D, Stallard N, Jaki T. Flexible sequential designs for multi-arm clinical trials. Stat Med. 2014;33(19):3269–79. https://doi.org/10.1002/sim.6183
5. ROSSINI 2 - Reduction of Surgical Site Infection Using Several Novel Interventions. https://clinicaltrials.gov/ct2/show/NCT03838575. Accessed 17 June 2019.
6. ROSSINI 2: Reduction Of Surgical Site Infection using several Novel Interventions Trial Protocol. 2018. https://www.birmingham.ac.uk/Documents/college-mds/trials/bctu/rossini-ii/ROSSINI-2-Protocol-V1.0-02.12.2018.pdf. Accessed 17 June 2019.
7. Choodari-Oskooei B, Thwin S, Blenkinsop A, Widmer M, Althabe F, Parmar MKB. Treatment selection in multi-arm multi-stage designs: With application to a postpartum haemorrhage trial. Clin Trials. 2023;20(1):71–80. https://doi.org/10.1177/17407745221136527
8. Wason J, Stallard N, Bowden J, Jennison C. A multi-stage drop-the-losers design for multi-arm clinical trials. Stat Methods Med Res. 2017;26(1):508–24. https://doi.org/10.1177/0962280214550759
9. Grayling MJ, Wason JM. A web application for the design of multi-arm clinical trials. BMC Cancer. 2020;20(80). https://doi.org/10.1186/s12885-020-6525-0
10. Choodari-Oskooei B, Bratton D, Parmar MKB. Facilities for optimizing and designing multiarm multistage (MAMS) randomized controlled trials with binary outcomes. Stata J. 2023;23(3):744–98. https://doi.org/10.1177/1536867X231196295
11. Choodari-Oskooei B, Bratton DJ, Gannon MR, Meade AM, Sydes MR, Parmar MK. Adding new experimental arms to randomised clinical trials: Impact on error rates. Clin Trials. 2020;17(3):273–84. https://doi.org/10.1177/1740774520904346
12. Blenkinsop A, Choodari-Oskooei B. Multiarm, multistage randomized controlled trials with stopping boundaries for efficacy and lack of benefit: An update to nstage. Stata J. 2019;19(4):782–802. https://doi.org/10.1177/1536867X19893616
13. Blenkinsop A, Parmar MKB, Choodari-Oskooei B. Assessing the impact of efficacy stopping rules on the error rates under the MAMS framework. Clin Trials. 2019;16(2):132–41. https://doi.org/10.1177/1740774518823551
14. Bowden J, Glimm E. Conditionally unbiased and near unbiased estimation of the selected treatment mean for multistage drop-the-losers trials. Biom J. 2014;56(2):332–49. https://doi.org/10.1002/bimj.201200245
15. Dunnett CW. A Multiple Comparison Procedure for Comparing Several Treatments with a Control. J Am Stat Assoc. 1955;50(272):1096–121.
16. Magirr D, Jaki T, Whitehead J. A generalized Dunnett test for multi-arm multi-stage clinical studies with treatment selection. Biometrika. 2012;99(2):494–501. https://doi.org/10.1093/biomet/ass002
17. Lu X, He Y, Wu SS. Interval estimation in multi-stage drop-the-losers designs. Stat Methods Med Res. 2018;27(1):221–33. https://doi.org/10.1177/0962280215626748
18. Kunz CU, Friede T, Parsons N, Todd S, Stallard N. A comparison of methods for treatment selection in seamless phase II/III clinical trials incorporating information on short-term endpoints. J Biopharm Stat. 2015;25(1):170–89. https://doi.org/10.1080/10543406.2013.840646
19. Kelly PJ, Stallard N, Todd S. An Adaptive Group Sequential Design for Phase II/III Clinical Trials that Select a Single Treatment From Several. J Biopharm Stat. 2005;15(4):641–58. https://doi.org/10.1081/BIP-200062857
20. Friede T, Stallard N. A comparison of methods for adaptive treatment selection. Biom J. 2008;50(5):767–81. https://doi.org/10.1002/bimj.200710453
21. Stallard N, Friede T. A group-sequential design for clinical trials with treatment selection. Stat Med. 2008;27(29):6209–27. https://doi.org/10.1002/sim.3436
22. Tsiatis AA, Mehta C. On the Inefficiency of the Adaptive Design for Monitoring Clinical Trials. Biometrika. 2003. https://www.jstor.org/stable/30042046
23. EMA. Reflection paper on methodological issues in confirmatory clinical trials planned with an adaptive design (CHMP/EWP/2459/02). European Medicines Agency; 2007. https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-methodological-issues-confirmatoryclinical-trials-planned-adaptive-design_en.pdf
24. FDA. Adaptive Design Clinical Trials for Drugs and Biologics: Guidance for Industry. 2019. https://www.fda.gov/media/78495/download


Acknowledgements

We would like to thank the handling editor, Dr Michael Grayling, and two external reviewers for their helpful comments and suggestions on the earlier version of this manuscript. We thank Professor Matthew Sydes for his comments on the previous version of this manuscript.

This work was supported by the Medical Research Council (MRC) grant numbers MC_UU_00004_09 and MC_UU_123023_29. The ROSSINI-2 trial is funded by the NIHR Health Technology Assessment Programme (16/31/123).

Author information

Babak Choodari-Oskooei and Alexandra Blenkinsop contributed equally to this work.

Authors and Affiliations

MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, UCL, 90 High Holborn, WC1V 6LJ, London, United Kingdom

Babak Choodari-Oskooei & Mahesh K. B. Parmar

Department of Mathematics, Imperial College London, London, UK

Alexandra Blenkinsop

Birmingham Clinical Trials Unit, University of Birmingham, Birmingham, UK

Kelly Handley

Institute of Applied Health Research, University of Birmingham, Birmingham, UK

Thomas Pinkney


Contributions

BCO, MP and AB contributed to the study design. BCO drafted the manuscript. AB and MP jointly contributed to drafting the article. BCO and AB carried out the simulation studies. MP, TP, KH, and BCO contributed to the design and analysis of the ROSSINI-2 MAMS trial. All authors critically appraised the final manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Babak Choodari-Oskooei .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

12874_2024_2247_moesm1_esm.pdf.

Additional file 1. Online Supplemental Material. Multi-arm multi-stage (MAMS) randomised selection designs: Impact of treatment selection rules on the operating characteristics.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Choodari-Oskooei, B., Blenkinsop, A., Handley, K. et al. Multi-arm multi-stage (MAMS) randomised selection designs: impact of treatment selection rules on the operating characteristics. BMC Med Res Methodol 24 , 124 (2024). https://doi.org/10.1186/s12874-024-02247-w


Received : 03 December 2023

Accepted : 17 May 2024

Published : 03 June 2024

DOI : https://doi.org/10.1186/s12874-024-02247-w


Keywords

  • Multi-arm multi-stage randomised clinical trials
  • Treatment selection
  • Adaptive trial designs
  • ROSSINI-2 trial



Organizing Your Social Sciences Research Paper

Limitations of the Study

The limitations of the study are those characteristics of design or methodology that impacted or influenced the interpretation of the findings from your research. Study limitations are the constraints placed on the ability to generalize from the results, to further describe applications to practice, and/or related to the utility of findings. They may be the result of the ways in which you initially chose to design the study, of the method used to establish internal and external validity, or of unanticipated challenges that emerged during the study.

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Theofanidis, Dimitrios and Antigoni Fountouki. "Limitations and Delimitations in the Research Process." Perioperative Nursing 7 (September-December 2018): 155-163.

Importance of...

Always acknowledge a study's limitations. It is far better that you identify and acknowledge your study’s limitations than to have them pointed out by your professor and have your grade lowered because you appeared to have ignored them or didn't realize they existed.

Keep in mind that acknowledgment of a study's limitations is an opportunity to make suggestions for further research. If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study.

Acknowledgment of a study's limitations also provides you with opportunities to demonstrate that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only to discover new knowledge but also to confront assumptions and explore what we don't know.

Claiming limitations is a subjective process because you must evaluate the impact of those limitations . Don't just list key weaknesses and the magnitude of a study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, limitation(s) in your study may have impacted the results and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. eventually matter and, if so, to what extent?

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations . However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in the introduction of your paper.

Here are examples of limitations related to methodology and the research process that you may need to describe and to discuss in terms of how they possibly impacted your results. Note that descriptions of limitations should be stated in the past tense because they were discovered after you completed your research.

Possible Methodological Limitations

  • Sample size -- the number of the units of analysis you use in your study is dictated by the type of research problem you are investigating. Note that, if your sample size is too small, it will be difficult to find significant relationships in the data, as statistical tests normally require a larger sample size to ensure a representative distribution of the population to which results will be generalized or transferred. Note that sample size is generally less relevant in qualitative research if explained in the context of the research problem.
  • Lack of available and/or reliable data -- a lack of data, or of reliable data, will likely require you to limit the scope of your analysis or the size of your sample, or it can be a significant obstacle to finding a trend or a meaningful relationship. You need to not only describe these limitations but also provide cogent reasons why you believe the data are missing or unreliable. However, don’t just throw up your hands in frustration; use this as an opportunity to describe a need for future research based on designing a different method for gathering data.
  • Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, though, consult with a librarian! In cases when a librarian has confirmed that there is little or no prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design ]. Note again that discovering a limitation can serve as an important opportunity to identify new gaps in the literature and to describe the need for further research.
  • Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need for future researchers to revise the specific method for gathering data.
  • Self-reported data -- whether you are relying on pre-existing data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you have to take the accuracy of what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data can contain several potential sources of bias that you should be alert to and note as limitations. These biases become apparent if they are incongruent with data from other sources. These are: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency, but attributing negative events and outcomes to external forces]; and, (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested from other data].

Possible Limitations of the Researcher

  • Access -- if your study depends on having access to people, organizations, data, or documents and, for whatever reason, access is denied or limited in some way, the reasons for this need to be described. Also, include an explanation of why being denied or limited access did not prevent you from following through on your study.
  • Longitudinal effects -- unlike your professor, who can literally devote years [even a lifetime] to studying a single topic, the time available to investigate a research problem and to measure change or stability over time is constrained by the due date of your assignment. Be sure to choose a research problem that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure whether you can complete your research within the confines of the assignment's due date, talk to your professor.
  • Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, event, or thing is viewed or shown in a consistently inaccurate way. Bias is usually negative, though one can have a positive bias as well, especially if that bias reflects your reliance on research that only supports your hypothesis. When proof-reading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the manner in which you have ordered events, people, or places, how you have chosen to represent a person, place, or thing, to name a phenomenon, or to use words with a positive or negative connotation. NOTE : If you detect bias in prior research, it must be acknowledged and you should explain what measures were taken to avoid perpetuating that bias. For example, if a previous study only used boys to examine how music education supports effective math skills, describe how your research expands the study to include girls.
  • Fluency in a language -- if your research focuses, for example, on measuring the perceived value of after-school tutoring among Mexican-American ESL [English as a Second Language] students and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic or to speak with these students in their primary language. This deficiency should be acknowledged.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Senunyeme, Emmanuel K. Business Research Methods. Powerpoint Presentation. Regent University of Science and Technology; ter Riet, Gerben et al. “All That Glitters Isn't Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLOS One 8 (November 2013): 1-6.

Structure and Writing Style

Information about the limitations of your study is generally placed either at the beginning of the discussion section of your paper, so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section.

If you determine that your study is seriously flawed due to important limitations, such as an inability to acquire critical data, consider reframing it as an exploratory study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in a new study.

But, do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic . If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to revise your study.

When discussing the limitations of your research, be sure to:

  • Describe each limitation in detailed but concise terms;
  • Explain why each limitation exists;
  • Provide the reasons why each limitation could not be overcome using the method(s) chosen to acquire or gather the data [cite to other studies that had similar problems when possible];
  • Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and,
  • If appropriate, describe how these limitations could point to the need for further research.

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't interview a group of people that you later wish you had]. If this is the case, don't panic. Acknowledge it, and explain how applying a different or more robust methodology might address the research problem more effectively in a future study. An underlying goal of scholarly research is not only to show what works, but to demonstrate what doesn't work or what needs further clarification.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Ioannidis, John P.A. "Limitations are not Properly Acknowledged in the Scientific Literature." Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh. Writing the Empirical Social Science Research Paper: A Guide for the Perplexed. January 24, 2012. Academia.edu; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com; What Is an Academic Paper? Institute for Writing Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

Writing Tip

Don't Inflate the Importance of Your Findings!

After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings could be perceived by your readers as an attempt to hide its flaws or to encourage a biased interpretation of the results. A small measure of humility goes a long way!

Another Writing Tip

Negative Results are Not a Limitation!

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated. Or, perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may very well be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Lewis, George H. and Jonathan F. Lewis. “The Dog in the Night-Time: Negative Evidence in Social Research.” The British Journal of Sociology 31 (December 1980): 544-558.

Yet Another Writing Tip

Sample Size Limitations in Qualitative Research

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgment about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used.

Boddy, Clive Roland. "Sample Size for Qualitative Research." Qualitative Market Research: An International Journal 19 (2016): 426-432; Huberman, A. Michael and Matthew B. Miles. "Data Management and Analysis Methods." In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444; Blaikie, Norman. "Confounding Issues Related to Determining Sample Size in Qualitative Research." International Journal of Social Research Methodology 21 (2018): 635-641; Oppong, Steward Harrison. "The Problem of Sampling in qualitative Research." Asian Journal of Management Sciences and Education 2 (2013): 202-210.



  • Victor Yocco
  • Jun 5, 2024

Presenting UX Research And Design To Stakeholders: The Power Of Persuasion



For UX researchers and designers, our journey doesn’t end with meticulously gathered data or well-crafted design concepts saved on our laptops or in the cloud. Our true impact lies in effectively communicating research findings and design concepts to key stakeholders and securing their buy-in for implementing our user-centered solutions. This is where persuasion and communication theory become powerful tools, empowering UX practitioners to bridge the gap between research and action .

I shared a framework for conducting UX research in my previous article on infusing communication theory and UX. In this article, I’ll focus on communication and persuasion considerations for presenting our research and design concepts to key stakeholder groups.

A Word On Persuasion: Guiding Understanding, Not Manipulation

UX professionals can strategically use persuasion techniques to turn complex research results into clear, practical recommendations that stakeholders can understand and act on. It’s crucial to remember that persuasion is about helping people understand what to do, not tricking them . When stakeholders see the value of designing with the user in mind, they become strong partners in creating products and services that truly meet user needs. We’re not trying to manipulate anyone; we’re trying to make sure our ideas get the attention they deserve in a busy world.

The Hovland-Yale Model Of Persuasion

The Hovland-Yale model, a framework for understanding how persuasion works, was developed by Carl Hovland and his team at Yale University in the 1950s. Their research was inspired by World War II propaganda, as they wanted to figure out what made some messages more convincing than others.

In the Hovland-Yale model, persuasion is understood as a process involving the independent variables of Source, Message, and Audience. The elements of each factor lead the Audience to engage in internal mediating processes around the topic which, if the independent variables are strong enough, can strengthen or change attitudes or behaviors. The interplay of these internal mediating processes leads to persuasion or not, which then leads to the observable effect of the communication (or to no effect, if the message is ineffective). The model proposes that if these elements are carefully crafted and applied, the intended change in attitude or behavior (Effect) is more likely to be successful.

The diagram below helps identify the parts of persuasive communication. It shows what you can control as a presenter, how people think about the message, and the impact it has. If done well, it can lead to change. In this article I’ll focus exclusively on the independent variables on the far left side of the diagram because, theoretically, this is what you, as the outside source creating a persuasive message, are in control of; if handled well, these lead to the appropriate mediating processes and desired observable effects.

Effective communication can reinforce currently held positions. You don’t always need to change minds when presenting research; much of what we find and present might align with currently held beliefs and support actions our stakeholders are already considering.

Over the years, researchers have explored the usefulness and limitations of this model in various contexts. I’ve provided a list of citations at the end of this article if you are interested in exploring academic literature on the Hovland-Yale model. Reflecting on some of the research findings can help shape how we create and deliver our persuasive communication. Some consistent findings from academia highlight that:

  • Source credibility significantly influences the acceptance of a persuasive message. A high-credibility source is more persuasive than a low-credibility one.
  • Messages that are logically structured, clear, and relatively concise are more likely to be persuasive.
  • An audience’s attitude change is also dependent on the channel of communication. Mass media is found to be less effective in changing attitudes than face-to-face communication.
  • The audience’s initial attitude, intelligence, and self-esteem have a significant role in the persuasion process. Research suggests that individuals with high intelligence are typically more resistant to persuasion efforts, and those with moderate self-esteem are easier to persuade than those with low or high self-esteem.
  • The effect of persuasive messages tends to fade over time, especially if delivered by a non-credible source. This suggests a need to reinforce even effective messages on a regular basis to maintain an effect.

I’ll cover the impact of each of these bullets on UX research and design presentations in the relevant sections below.

It’s important to note that while the Hovland-Yale model provides valuable insight into persuasive communication, it remains a simplification of a complex process. Actual attitude change and decision-making can be influenced by a multitude of other factors not covered in this model, like emotional states, group dynamics, and more, necessitating a multi-faceted approach to persuasion. However, the model provides a manageable framework to strengthen the communication of UX research findings , with a focus on elements that are within the control of the researcher and product team. I’ll break down the process of presenting findings to various audiences in the following section.

Let’s move into applying the model to our work as UX practitioners, with a focus on how it applies to preparing and presenting our findings to various stakeholders. You can reference the diagram above as needed as we move through the independent variables.

Applying The Hovland-Yale Model To Presenting Your UX Research Findings

Let’s break down the key parts of the Hovland-Yale model and see how we can use them when presenting our UX research and design ideas.

The Source (Communicator)

The Hovland-Yale model stresses that where a message comes from greatly affects how believable and effective it is. Research shows that a convincing source needs to be seen as dependable, informed, and trustworthy. In UX research, this source is usually the researcher(s) and other UX team members who present findings, suggest actions, lead workshops, and share design ideas. It’s crucial for the UX team to build trust with their audience, which often includes users, stakeholders, and designers.

You can demonstrate and strengthen your credibility throughout the research process and once again when presenting your findings.

How Can You Make Yourself More Credible?

You should start building your expertise and credibility before you even finish your research. Often, stakeholders will have already formed an opinion about your work before you even walk into the room. Here are a couple of ways to boost your reputation before or at the beginning of a project:

Case Studies

A well-written case study about your past work can be a great way to show stakeholders the benefits of user-centered design. Make sure your case studies match what your stakeholders care about. Don’t just tell an interesting story; tell a story that matters to them. Understand their priorities and tailor your case study to show how your UX work has helped achieve goals like higher ROI, happier customers, or lower turnover. Share these case studies as a document before the project starts so stakeholders can review them and get a positive impression of your work.

Thought Leadership

Sharing insights and expertise that your UX team has developed is another way to build credibility. This kind of “thought leadership” can establish your team as the experts in your field. It can take many forms, like blog posts, articles in industry publications, white papers, presentations, podcasts, or videos. You can share this content on your website, social media, or directly with stakeholders.

For example, if you’re about to start a project on gathering customer feedback, share any relevant articles or guides your team has created with your stakeholders before the project kickoff. If you are about to start developing a voice of the customer program and you happen to have Victor or Dana on your team, share their article on creating a VoC with your group of stakeholders prior to the kickoff meeting. [Shameless self-promotion and a big smile emoji].

You can also build credibility and trust while discussing your research and design, both during the project and when you present your final results.

Business Goals Alignment

To really connect with stakeholders, make sure your UX goals and the company’s business goals work together. Always tie your research findings and design ideas back to the bigger picture. This means showing how your work can affect things like customer happiness, more sales, lower costs, or other important business measures. You can even work with stakeholders to figure out which measures matter most to them. When you present your designs, point out how they’ll help the company reach its goals through good UX.

Industry Benchmarks

These days, it’s easier to find data on how other companies in your industry are doing. Use this to your advantage! Compare your findings to these benchmarks or even to your competitors. This can help stakeholders feel more confident in your work. Show them how your research fits in with industry trends or how it uncovers new ways to stand out. When you talk about your designs, highlight how you’ve used industry best practices or made changes based on what you’ve learned from users.

Methodological Transparency

Be open and honest about how you did your research. This shows you know what you’re doing and that you can be trusted. For example, if you were looking into why fewer people are renewing their subscriptions to a fitness app, explain how you planned your research, who you talked to, how you analyzed the data, and any challenges you faced. This transparency helps people accept your research results and builds trust.

Increasing Credibility Through Design Concepts

Here are some specific ways to make your design concepts more believable and trustworthy to stakeholders:

Ground Yourself in Research. You’ve done the research, so use it! Make sure your design decisions are based on your findings and user data. When you present, highlight the data that supports your choices.

Go Beyond Mockups. It’s helpful for stakeholders to see your designs in action. Static mockups are a good start, but try creating interactive prototypes that show how users will move through and use your design. This is especially important if you’re creating something new that stakeholders might have trouble visualizing.

User Quotes and Testimonials. Include quotes or stories from users in your presentation. This makes the process more personal and shows that you’re focused on user needs. You can use these quotes to explain specific design choices.

Before & After Impact. Use visuals or user journey maps to show how your design solution improves the user experience. If you’ve mapped out the current user journey or documented existing problems, show how your new design fixes those problems. Don’t leave stakeholders guessing about your design choices. Briefly explain why you made key decisions and how they help users or achieve business goals. You should have research and stakeholder input to back up your decisions.

Show Your Process. When presenting a more developed concept, show the work that led up to it. Don’t just share the final product. Include early sketches, wireframes, or simple prototypes to show how the design evolved and the reasoning behind your choices. This is especially helpful for executives or stakeholders who haven’t been involved in the whole process.

Be Open to Feedback and Iteration. Work together with stakeholders. Show that you’re open to their feedback and explain how their input can help you improve your designs.

Much of what I’ve covered above is also general best practice for presenting. Remember, these are just suggestions. You don’t have to use every single one to make your presentations more persuasive. Try different things, see what works best for you and your stakeholders, and have fun with it! The goal is to build trust and credibility for your UX team.

The Message

The Hovland-Yale model, along with most other communication models, suggests that what you communicate is just as important as how you communicate it. In UX research, your message is usually your insights, data analysis, findings, and recommendations.

I’ve touched on this in the previous section because it’s hard to separate the source (who’s talking) from the message (what they’re saying). For example, building trust involves being transparent about your research methods, which is part of your message. So, some of what I’m about to say might sound familiar.

For this article, let’s define the message as your research findings and everything that goes with them (e.g., what you say in your presentation, the slides you use, other media), as well as your design concepts (how you show your design solutions, including drawings, wireframes, prototypes, and so on).

The Hovland-Yale model says it’s important to make your message easy to understand, relevant, and impactful. For example, instead of just saying,

“30% of users found the signup process difficult.”

you could say,

“30% of users struggled to sign up because the process was too complicated. This could lead to fewer renewals. Making the signup process easier could increase renewals and improve the overall experience.”

Storytelling is also a powerful way to get your message across. Weaving your findings into a narrative helps people connect with your data on a human level and remember your key points. Using real quotes or stories from users makes your presentation even more compelling.

Here are some other tips for delivering a persuasive message:

  • Practice Makes Perfect: Rehearse your presentation. This will help you smooth out any rough spots, anticipate questions, and feel more confident.
  • Anticipate Concerns: Think about any objections stakeholders might have and be ready to address them with data.
  • Welcome Feedback: Encourage open discussion during your presentation. Listen to what stakeholders have to say and show that you’re willing to adapt your recommendations based on their concerns. This builds trust and makes everyone feel like they’re part of the process.
  • Follow Through Is Key: After your presentation, send a clear summary of the main points and action items. This shows you’re professional and makes it easy for stakeholders to refer back to your findings.

When presenting design concepts, it’s important to tell, not just show, what you’re proposing. Stakeholders might not have a deep understanding of UX, so just showing them screenshots might not be enough. Use user stories to walk them through the redesigned experience. This helps them understand how users will interact with your design and what benefits it will bring. Static screens show the “what,” but user stories reveal the “why” and “how.” By focusing on the user journey, you can demonstrate how your design solves problems and improves the overall experience.

For example, if you’re suggesting changes to the search bar and adding tooltips, you could say:

“Imagine a user lands on the homepage and sees the new, larger search bar. They enter their search term and get results. If they see an unfamiliar tool or a new action, they can hover over it to see a brief description.”

Here are some other ways to make your design concepts clearer and more persuasive:

  • Clear Design Language: Use a consistent and visually appealing design language in your mockups and prototypes. This shows professionalism and attention to detail.
  • Accessibility Best Practices: Make sure your design is accessible to everyone. This shows that you care about inclusivity and user-centered design.

One final note on the message is that research has found the likelihood of an audience’s attitude change is also dependent on the channel of communication. Mass media is found to be less effective in changing attitudes than face-to-face communication. Distributed teams and remote employees can employ several strategies to compensate for the potential loss of impact that comes with asynchronous communication:

  • Interactive Elements: Incorporate interactive elements into presentations, such as polls, quizzes, or clickable prototypes. This can increase engagement and make the experience more dynamic for remote viewers.
  • Video Summaries: Create short video summaries of key findings and recommendations. This adds a personal touch and can help convey nuances that might be lost in text or static slides.
  • Virtual Q&A Sessions: Schedule dedicated virtual Q&A sessions where stakeholders can ask questions and engage in discussions. This allows for real-time interaction and clarification, mimicking the benefits of face-to-face communication.
  • Follow-up Communication: Actively follow up with stakeholders after they’ve reviewed the materials. Offer to discuss the content, answer questions, and gather feedback. This demonstrates a commitment to communication and can help solidify key takeaways.

Framing Your Message for Maximum Impact

The way you frame an issue can greatly influence how stakeholders see it. Framing is a persuasion technique that can help your message resonate more deeply with specific stakeholders. Essentially, you want to frame your message in a way that aligns with your stakeholders’ attitudes and values and presents your solution as the next logical step. There are many resources on how to frame messages, as this technique has been used often in public safety and public health research to encourage behavior change. This article discusses applying framing techniques for digital design.

You can also frame issues in a way that motivates your stakeholders. For example, instead of calling usability issues “problems,” I like to call them “opportunities.” This emphasizes the potential for improvement. Let’s say your research on a hospital website finds that the appointment booking process is confusing. You could frame this as an opportunity to improve patient satisfaction and maybe even reduce call center volume by creating a simpler online booking system. This way, your solution is a win-win for both patients and the hospital. Highlighting the positive outcomes of your proposed changes and using language that focuses on business benefits and user satisfaction can make a big difference.

The Audience

Understanding your audience’s goals is essential before embarking on any research or design project. It serves as the foundation for tailoring content, supporting decision-making processes, ensuring clarity and focus, enhancing communication effectiveness, and establishing metrics for evaluation.

One specific aspect to consider is securing buy-in from the product and delivery teams prior to beginning any research or design. Without their investment in the outcomes and input on the process, it can be challenging to find stakeholders who see value in a project you created in a vacuum. Engaging with these teams early on helps align expectations, foster collaboration, and ensure that the research and design efforts are informed by the organization’s objectives.

Once you’ve identified your key stakeholders and secured buy-in, you should then map the decision-making process: understand the process your audience goes through, including the pain points, considerations, and influencing factors.

  • How are decisions made, and who makes them?
  • Is it group consensus?
  • Are there key voices that overrule all others?
  • Is there even a decision to be made in regard to the work you will do?

Understanding the decision-making process will enable you to provide the necessary information and support at each stage.

Finally, prior to engaging in any work, set clear objectives with your key stakeholders. Your UX team needs to collaborate with the product and delivery teams to establish clear objectives for the research or design project. These objectives should align with the organization’s goals and the audience’s needs.

By understanding your audience’s goals and involving the product and delivery teams from the outset, you can create research and design outcomes that are relevant, impactful, and aligned with the organization’s objectives.

As the source of your message, it’s your job to understand who you’re talking to and how they see the issue. Different stakeholders have different interests, goals, and levels of knowledge. It’s important to tailor your communication to each of these perspectives. Adjust your language, what you emphasize, and the complexity of your message to suit your audience. Technical jargon might be fine for technical stakeholders, but it could alienate those without a technical background.

Audience Characteristics: Know Your Stakeholders

Remember, your audience’s existing opinions, intelligence, and self-esteem play a big role in how persuasive you can be. Research suggests that people with higher intelligence tend to be more resistant to persuasion, while those with moderate self-esteem are easier to persuade than those with very low or very high self-esteem. Understanding your audience is key to giving a persuasive presentation of your UX research and design concepts. Tailoring your communication to address the specific concerns and interests of your stakeholders can significantly increase the impact of your findings.

To truly know your audience, you need information about who you’ll be presenting to, and the more you know, the better. At the very least, you should identify the different groups of stakeholders in your audience. This could include designers, developers, product managers, and executives. If possible, try to learn more about your key stakeholders. You could interview them at the beginning of your process, or you could give them a short survey to gauge their attitudes and behaviors toward the area your UX team is exploring.

Then, your UX team needs to decide the following:

  • How can you best keep all stakeholders engaged and informed as the project unfolds?
  • How will your presentation or concepts appeal to different interests and roles?
  • How can you best encourage discussion and decision-making with the different stakeholders present?
  • Should you hold separate presentations because of the wide range of stakeholders you need to share your findings with?
  • How will you prioritize information?

Your answers to the previous questions will help you focus on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives might care more about the business impact. If you’re presenting to a mixed audience, include a mix of information and be ready to highlight what’s relevant to each group in a way that grabs their attention. Adapt your communication style to match each group’s preferences. Provide technical details for developers and emphasize user experience benefits for executives.

Let’s say you did UX research for a mobile banking app, and your audience includes designers, developers, and product managers.

Designers:

  • Focus on: Design-related findings like what users prefer in the interface, navigation problems, and suggestions for the visual design.
  • How to communicate: Use visuals like heatmaps and user journey maps to show design challenges. Talk about how fixing these issues can make the overall user experience better.

Developers:

  • Focus on: Technical stuff, like performance problems, bugs, or challenges with building the app.
  • How to communicate: Share code snippets or technical details about the problems you found. Discuss possible solutions that the developers can actually build. Be realistic about how much work it will take and be ready to talk about a “minimum viable product” (MVP).

Product Managers:

  • Focus on: Findings that affect how users engage with the app, how long they keep using it, and the overall business goals.
  • How to communicate: Use numbers and data to show how UX improvements can help the business. Explain how the research and your ideas fit into the product roadmap and long-term strategy.

By tailoring your presentation to each group, you make sure your message really hits home. This makes it more likely that they’ll support your UX research findings and work together to make decisions.

The Effect (Impact)

The end goal of presenting your findings and design concepts is to get key stakeholders to take action based on what you learned from users. Make sure the impact of your research is crystal clear. Talk about how your findings relate to business goals, customer happiness, and market success (if those are relevant to your product). Suggest clear, actionable next steps in the form of design concepts and encourage feedback and collaboration from stakeholders. This builds excitement and gets people invested. Make sure to answer any questions and ask for more feedback to show that you value their input. Remember, stakeholders play a big role in the product’s future, so getting them involved increases the value of your research.

The Call to Action (CTA)

Your audience needs to know what you want them to do. End your presentation with a strong call to action (CTA). But to do this well, you need to be clear on what you want them to do and understand any limitations they might have.

For example, if you’re presenting to the CEO, tailor your CTA to their priorities. Focus on the return on investment (ROI) of user-centered design. Show how your recommendations can increase sales, improve customer satisfaction, or give the company a competitive edge. Use clear visuals and explain how user needs translate into business benefits. End with a strong, action-oriented statement, like

“Let’s set up a meeting to discuss how we can implement these user-centered design recommendations to reach your strategic goals.”

If you’re presenting to product managers and business unit leaders, focus on the business goals they care about, like increasing revenue or reducing customer churn. Explain your research findings in terms of ROI. For example, a strong CTA could be:

“Let’s try out the redesigned checkout process and aim for a 10% increase in conversion rates next quarter.”

Remember, the effects of persuasive messages can fade over time, especially if the source isn’t seen as credible. This means you need to keep reinforcing your message to maintain its impact.

Understanding Limitations and Addressing Concerns

Persuasion is about guiding understanding, not tricking people. Be upfront about any limitations your audience might have, like budget constraints or limited development resources. Anticipate their concerns and address them in your CTA. For example, you could say,

“I know implementing the entire redesign might need more resources, so let’s prioritize the high-impact changes we found in our research to improve the checkout process within our current budget.”

By considering both your desired outcome and your audience’s perspective, you can create a clear, compelling, and actionable CTA that resonates with stakeholders and drives user-centered design decisions.

Finally, remember that presenting your research findings and design concepts isn’t the end of the road. The effects of persuasive messages can fade over time. Your team should keep looking for ways to reinforce key messages and decisions as you move forward with implementing solutions. Keep your presentations and concepts in a shared folder, remind people of the reasoning behind decisions, and be flexible if there are multiple ways to achieve the desired outcome. Showing how you’ve addressed stakeholder goals and concerns in your solution will go a long way in maintaining credibility and trust for future projects.

A Tool to Track Your Alignment to the Hovland-Yale Model

You and your UX team are likely already incorporating elements of persuasion into your work. It might be helpful to track how you are doing this to reflect on what works, what doesn’t, and where there are gaps. I’ve provided a spreadsheet in Figure 3 below for you to modify and use as you might see fit. I’ve included sample data to provide an example of what type of information you might want to record. You can set up the structure of a spreadsheet like this as you think about kicking off your next project, or you can fill it in with information from a recently completed project and reflect on what you can incorporate more in the future.

Please use the spreadsheet below as a suggestion and make additions, deletions, or changes as best suited to meet your needs. You don’t need to be dogmatic in adhering to what I’ve covered here. Experiment, find what works best for you, and have fun.

Figure 3: Example of spreadsheet categories to track the application of the Hovland-Yale model to your presentation of UX Research findings.
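Since the spreadsheet from Figure 3 isn’t reproduced here, the snippet below is a minimal, hypothetical sketch (in Python) of what such a tracking sheet could look like. The column names and the sample row are my own assumptions rather than the contents of the figure, so rename, add, or drop columns to match what your team actually tracks.

    # Hypothetical tracking-sheet sketch; the columns and sample row are
    # illustrative assumptions, not the contents of Figure 3.
    import csv

    columns = [
        "Project",
        "Model Element",      # Source, Message, Audience, or Effect
        "Tactic Used",        # e.g., case study shared before kickoff
        "Stakeholder Group",
        "Outcome",
        "Notes / Next Time",
    ]

    rows = [
        {
            "Project": "Mobile banking app research",
            "Model Element": "Source",
            "Tactic Used": "Shared a past case study before project kickoff",
            "Stakeholder Group": "Product managers",
            "Outcome": "Findings referenced in roadmap planning",
            "Notes / Next Time": "Tie case study metrics to churn goals",
        },
    ]

    # Write the tracker to a CSV file you can open in any spreadsheet tool.
    with open("hovland_yale_tracker.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        writer.writerows(rows)

Starting from a plain file like this keeps the tracker lightweight; you can always move it into a shared spreadsheet once the columns settle.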

Foundational Works

  • Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press. (The cornerstone text on the Hovland-Yale model).
  • Weiner, B. J., & Hovland, C. I. (1956). Participating vs. nonparticipating persuasive presentations: A further study of the effects of audience participation. Journal of Abnormal and Social Psychology, 52(2), 105-110. (Examines the impact of audience participation in persuasive communication).
  • Kelley, H. H., & Hovland, C. I. (1958). The communication of persuasive content. Psychological Review, 65(4), 314-320. (Delves into the communication of persuasive messages and their effects).

Contemporary Applications

  • Pfau, M., & Dalton, M. J. (2008). The persuasive effects of fear appeals and positive emotion appeals on risky sexual behavior intentions. Journal of Communication, 58(2), 244-265. (Applies the Hovland-Yale model to study the effectiveness of fear appeals).
  • Chen, G., & Sun, J. (2010). The effects of source credibility and message framing on consumer online health information seeking. Journal of Interactive Advertising, 10(2), 75-88. (Analyzes the impact of source credibility and message framing, concepts within the model, on health information seeking).
  • Hornik, R., & McHale, J. L. (2009). The persuasive effects of emotional appeals: A meta-analysis of research on advertising emotions and consumer behavior. Journal of Consumer Psychology, 19(3), 394-403. (Analyzes the role of emotions in persuasion, a key aspect of the model, in advertising).


Enago Academy

Writing Limitations of Research Study — 4 Reasons Why It Is Important!


It is not unusual for researchers to come across the term limitations of research during their academic paper writing. More often than not, this is interpreted as something terrible. However, when it comes to a research study, limitations can help structure the study better. Therefore, do not underestimate the significance of the limitations of a research study.

Allow us to take you through how to evaluate the limits of your research and draw impactful, relevant conclusions from your results.


What Are the Limitations of a Research Study?

Every research study has its limits, and these limitations arise due to restrictions in methodology or research design. This could impact your entire research project or the research paper you wish to publish. Unfortunately, most researchers choose not to discuss their limitations of research, fearing it will affect the value of their article in the eyes of readers.

However, it is very important to discuss your study’s limitations and share them with your target audience (other researchers, journal editors, peer reviewers, etc.). You should explain how your research limitations may affect the conclusions and opinions drawn from your research. Moreover, when you, as an author, state the limitations of your research, it shows that you have investigated the weaknesses of your study and have a deep understanding of the subject. Being honest could impress your readers and mark your study as a sincere effort in research.


Why and Where Should You Include the Research Limitations?

The main goal of your research is to address your research objectives: conduct experiments, get results, explain those results, and finally justify your research question. It is best to mention the limitations of research in the discussion paragraph of your research article.

At the very beginning of this paragraph, immediately after highlighting the strengths of the research methodology, you should write down your limitations. You can discuss specific points from your research limitations as suggestions for further research in the conclusion of your thesis.

1. Common Limitations of the Researchers

Limitations that are related to the researcher must be mentioned. This will help you gain transparency with your readers. Furthermore, you could provide suggestions on decreasing these limitations in your future studies.

2. Limited Access to Information

Your work may involve certain institutions and individuals, and sometimes you may have problems accessing these institutions. In such cases, you may need to redesign and rewrite parts of your work. You must explain to your readers the reason for this limited access.

3. Limited Time

All researchers are bound by deadlines when it comes to completing their studies. Sometimes, time constraints can affect your research negatively. The best practice, however, is to acknowledge this and mention the need for future studies to address the research problem more thoroughly.

4. Conflict over Biased Views and Personal Issues

Biased views can affect the research. In fact, biased researchers may end up choosing only those results and data that support their main argument, setting aside the other loose ends of the research.

Types of Limitations of Research

Before beginning your research study, know that there are certain limitations to what you are testing and to the possible research results. There are different types of limitations that researchers may encounter, and they all have unique characteristics, such as:

1. Research Design Limitations

Certain restrictions on your research or the available procedures may affect your final results or research outputs. You may have formulated your research goals and objectives too broadly. However, acknowledging this can help you understand how to narrow down the formulation of your research goals and objectives, thereby increasing the focus of your study.

2. Impact Limitations

Even if your research has excellent statistics and a strong design, it can suffer from the influence of the following factors:

  • Presence of increasing findings as researched
  • Being population specific
  • A strong regional focus.

3. Data or statistical limitations

In some cases, it is impossible to collect sufficient data for research, or it is very difficult to get access to the data. This could lead to incomplete conclusions in your study. Moreover, this insufficiency in data could be the outcome of your study design. An unclear, poorly defined research outline can produce further problems in interpreting your findings.

How to Correctly Structure Your Research Limitations?

There are strict guidelines for narrowing down research questions, wherein you could justify and explain potential weaknesses of your academic paper. You could go through these basic steps to get a well-structured clarity of research limitations:

  • Declare that you wish to identify your limitations of research and explain their importance,
  • Provide the necessary depth, explain their nature, and justify your study choices.
  • Write how you are suggesting that it is possible to overcome them in the future.

In this section, your readers will see that you are aware of the potential weaknesses in your work, understand them, and offer effective solutions. This will positively strengthen your article as you clarify all limitations of your research to your target audience.

Know that you cannot be perfect and there is no individual without flaws. You could use the limitations of research as a great opportunity to take on a new challenge and improve the future of research. In a typical academic paper, research limitations may relate to:

1. Formulating your goals and objectives

If you formulate your goals and objectives too broadly, your work will have some shortcomings. In this case, specify effective methods or ways to narrow down the formulation of your goals and objectives, thereby increasing the focus of your study.

2. Application of your data collection methods in research

If you do not have experience in primary data collection, there is a risk that there will be flaws in the implementation of your methods. It is necessary to accept this and to educate yourself on data collection methods.

3. Sample sizes

This depends on the nature of the problem you choose. Sample size is of greater importance in quantitative studies than in qualitative ones. If your sample size is too small, statistical tests cannot identify significant relationships or connections within a given data set.

You could point out that other researchers should base the same study on a larger sample size to get more accurate results.
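To make that concrete, here is a small illustrative sketch, not from the original article, of how a researcher might estimate a workable sample size before collecting data. It assumes Python with the statsmodels package and a standard two-sample t-test; the effect size, alpha, and power values are example assumptions, not recommendations.

    # Illustrative a priori power analysis; effect_size, alpha, and power are
    # assumed example values, not figures taken from this article.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,  # Cohen's d for a "medium" effect (assumption)
        alpha=0.05,       # significance level
        power=0.8,        # desired probability of detecting a true effect
    )
    print(f"Participants needed per group: {n_per_group:.0f}")  # about 64

If the sample you can realistically recruit falls well short of an estimate like this, that gap itself is a limitation worth stating explicitly, along with a suggestion that future studies use a larger sample.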

4. The absence of previous studies in the field you have chosen

Writing a literature review is an important step in any scientific study because it helps researchers determine the scope of current work in the chosen field. Prior studies are a major foundation that researchers build on to achieve a specific set of goals or objectives.

However, if you are focused on the most current and evolving research problem, or on a very narrow research problem, there may be very little prior research on your topic. For example, if you chose to explore the role of Bitcoin as the currency of the future, you may not find many scientific papers addressing the research problem, as Bitcoin is still a relatively new phenomenon.

It is important that you learn to identify examples of research limitations at each step. Whatever field you choose, feel free to add the shortcomings of your work. This is mainly because, early on, you do not have many years of experience writing scientific papers or completing complex work, so the depth and scope of your discussions may be compromised compared to those of academics with more expertise. Include specific points from your limitations of research and use them as suggestions for future work.

Have you ever faced the challenge of writing the limitations of a research study in your paper? How did you overcome it? What approaches did you follow? Were they beneficial? Let us know in the comments below!

Frequently Asked Questions

Stating the limitations of a study helps to clarify the outcomes drawn from the research and enhances understanding of the subject. Moreover, it shows that the author has investigated all the weaknesses in the study.

Scope refers to the range of a research project and the boundaries set to define what it covers. Limitations are the impacts on the overall study caused by constraints on the research design.

A limitation in research is the impact of a constraint on the research design on the overall study. Limitations are the flaws or weaknesses in the study that may influence the outcome of the research.

Limitations in research can be written as follows:
1. Formulate your goals and objectives.
2. Analyze the chosen data collection method and the sample sizes.
3. Identify your limitations of research and explain their importance.
4. Provide the necessary depth, explain their nature, and justify your study choices.
5. Write how you are suggesting that it is possible to overcome them in the future.
