• USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

  • 7. The Results

The results section is where you report the findings of your study based upon the methodology [or methodologies] you applied to gather information. The results section should state the findings of the research arranged in a logical sequence without bias or interpretation. A section describing results should be particularly detailed if your paper includes data generated from your own research.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070.

Importance of a Good Results Section

When formulating the results section, it's important to remember that the results of a study do not prove anything. Findings can only confirm or reject the hypothesis underpinning your study. However, the act of articulating the results helps you to understand the problem from within, to break it into pieces, and to view the research problem from various perspectives.

The page length of this section is determined by the amount and types of data to be reported. Be concise. Use non-textual elements appropriately, such as figures and tables, to present findings more effectively. In deciding what data to describe in your results section, you must clearly distinguish information that would normally be included in a research paper from any raw data or other content that could be included as an appendix. In general, raw data that has not been summarized should not be included in the main text of your paper unless your professor requests that you do so.

Avoid providing data that is not critical to answering the research question. The background information you described in the introduction section should provide the reader with any additional context or explanation needed to understand the results. A good strategy is to always re-read the background section of your paper after you have written up your results to ensure that the reader has enough context to understand the results [and, later, how you interpreted the results in the discussion section of your paper that follows].

Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Brett, Paul. "A Genre Analysis of the Results Section of Sociology Articles." English for Specific Purposes 13 (1994): 47-59; Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit; "Reporting Findings." In Making Sense of Social Research, Malcolm Williams, editor. (London: SAGE Publications, 2003), pp. 188-207.

Structure and Writing Style

I.  Organization and Approach

For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results. Both are appropriate ways to report your findings, but choose only one approach.

  • Present a synopsis of the results followed by an explanation of key findings . This approach can be used to highlight important findings. For example, you may have noticed an unusual correlation between two variables during the analysis of your findings. It is appropriate to highlight this finding in the results section. However, speculating as to why this correlation exists and offering a hypothesis about what may be happening belongs in the discussion section of your paper.
  • Present a result and then explain it, before presenting the next result then explaining it, and so on, then end with an overall synopsis. This is the preferred approach if you have multiple results of equal significance. It is more common in longer papers because it helps the reader to better understand each finding. In this model, it is helpful to provide a brief conclusion that ties each of the findings together and provides a narrative bridge to the discussion section of your paper.

NOTE: Just as the literature review should be arranged under conceptual categories rather than systematically describing each source, you should also organize your findings under key themes related to addressing the research problem. This can be done under either format noted above [i.e., a thorough explanation of the key results or a sequential, thematic description and explanation of each finding].

II.  Content

In general, the content of your results section should include the following:

  • Introductory context for understanding the results by restating the research problem underpinning your study. This is useful in re-orienting the reader's focus back to the research problem after having read a review of the literature and your explanation of the methods used for gathering and analyzing information.
  • Inclusion of non-textual elements, such as figures, charts, photos, maps, and tables, to further illustrate key findings, if appropriate. Rather than relying entirely on descriptive text, consider how your findings can be presented visually. This is a helpful way of condensing a lot of data into one place that can then be referred to in the text. Consider referring to appendices if there are many non-textual elements.
  • A systematic description of your results, highlighting for the reader observations that are most relevant to the topic under investigation. Not all results that emerge from the methodology used to gather information may be related to answering the "So What?" question. Do not confuse observations with interpretations; observations in this context refer to highlighting important findings you discovered through a process of reviewing prior literature and gathering data.
  • The page length of your results section is guided by the amount and types of data to be reported. However, focus on findings that are important and related to addressing the research problem. It is not uncommon to have unanticipated results that are not relevant to answering the research question. This is not to say that you shouldn't acknowledge tangential findings; in fact, they can be referred to as areas for further research in the conclusion of your paper. However, spending time in the results section describing tangential findings clutters your overall results section and distracts the reader.
  • A short paragraph that concludes the results section by synthesizing the key findings of the study. Highlight the most important findings you want readers to remember as they transition into the discussion section. This is particularly important if, for example, there are many results to report, the findings are complicated or unanticipated, or they are impactful or actionable in some way [i.e., able to be pursued in a feasible way applied to practice].

NOTE: Always use the past tense when referring to your study's findings. Reference to findings should always be described as having already happened because the method used to gather the information has been completed.

III.  Problems to Avoid

When writing the results section, avoid doing the following:

  • Discussing or interpreting your results. Save this for the discussion section of your paper, although where appropriate, you should compare or contrast specific results to those found in other studies [e.g., "Similar to the work of Smith [1990], one of the findings of this study is the strong correlation between motivation and academic achievement...."].
  • Reporting background information or attempting to explain your findings. This should have been done in your introduction section, but don't panic! Often the results of a study point to the need for additional background information or to explain the topic further, so don't think you did something wrong. Writing up research is rarely a linear process. Always revise your introduction as needed.
  • Ignoring negative results. A negative result generally refers to a finding that does not support the underlying assumptions of your study. Do not ignore them. Document these findings and then state in your discussion section why you believe a negative result emerged from your study. Note that negative results, and how you handle them, can give you an opportunity to write a more engaging discussion section; therefore, don't be hesitant to highlight them.
  • Including raw data or intermediate calculations. Ask your professor if you need to include any raw data generated by your study, such as transcripts from interviews or data files. If raw data is to be included, place it in an appendix or set of appendices that are referred to in the text.
  • Using vague or non-specific language. Be as factual and concise as possible in reporting your findings. Do not use phrases such as "appeared to be greater than other variables..." or "demonstrates promising trends that...." Subjective modifiers should be explained in the discussion section of the paper [i.e., why did one variable appear greater? How does the finding demonstrate a promising trend?].
  • Presenting the same data or repeating the same information more than once. If you want to highlight a particular finding, it is appropriate to do so in the results section. However, you should emphasize its significance in relation to addressing the research problem in the discussion section. Do not repeat it in your results section because you can do that in the conclusion of your paper.
  • Confusing figures with tables. Be sure to properly label any non-textual elements in your paper. Don't call a chart an illustration or a figure a table. If you are not sure, consult a style guide for your discipline.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070; Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Caprette, David R. Writing Research Papers. Experimental Biosciences Resources. Rice University; Hancock, Dawson R. and Bob Algozzine. Doing Case Study Research: A Practical Guide for Beginning Researchers. 2nd ed. New York: Teachers College Press, 2011; Introduction to Nursing Research: Reporting Research Findings. Nursing Research: Open Access Nursing Research and Review Articles. (January 4, 2012); Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit; Ng, K. H. and W. C. Peh. "Writing the Results." Singapore Medical Journal 49 (2008): 967-968; Reporting Research Findings. Wilder Research, in partnership with the Minnesota Department of Human Services. (February 2009); Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Schafer, Mickey S. Writing the Results. Thesis Writing in the Sciences. Course Syllabus. University of Florida.

Writing Tip

Why Don't I Just Combine the Results Section with the Discussion Section?

It's not unusual to find articles in scholarly social science journals where the author(s) have combined a description of the findings with a discussion of their significance and implications. You could do this. However, if you are inexperienced at writing research papers, consider creating two distinct sections as a way to better organize your thoughts and, by extension, your paper. Think of the results section as the place where you report what your study found; think of the discussion section as the place where you interpret the information and answer the "So What?" question. As you become more skilled at writing research papers, you can consider melding the results of your study with a discussion of its implications.

Driscoll, Dana Lynn and Aleksandra Kasztalska. Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

  • Last Updated: Mar 26, 2024 10:40 AM
  • URL: https://libguides.usc.edu/writingguide


How to Write Discussions and Conclusions

The discussion section contains the results and outcomes of a study. An effective discussion informs readers what can be learned from your experiment and provides context for the results.

What makes an effective discussion?

When you’re ready to write your discussion, you’ve already introduced the purpose of your study and provided an in-depth description of the methodology. The discussion informs readers about the larger implications of your study based on the results. Highlighting these implications while not overstating the findings can be challenging, especially when you’re submitting to a journal that selects articles based on novelty or potential impact. Regardless of what journal you are submitting to, the discussion section always serves the same purpose: concluding what your study results actually mean.

A successful discussion section puts your findings in context. It should include:

  • the results of your research,
  • a discussion of related research, and
  • a comparison between your results and initial hypothesis.

Tip: Not all journals share the same naming conventions.

You can apply the advice in this article to the conclusion, results or discussion sections of your manuscript.

Our Early Career Researcher community tells us that the conclusion is often considered the most difficult part of a manuscript to write. To help, this guide provides questions to ask yourself, a basic structure on which to model your discussion, and examples from published manuscripts.


Questions to ask yourself:

  • Was my hypothesis correct?
  • If my hypothesis is partially correct or entirely different, what can be learned from the results? 
  • How do the conclusions reshape or add onto the existing knowledge in the field? What does previous research say about the topic? 
  • Why are the results important or relevant to your audience? Do they add further evidence to a scientific consensus or disprove prior studies? 
  • How can future research build on these observations? What are the key experiments that must be done? 
  • What is the “take-home” message you want your reader to leave with?

How to structure a discussion

Trying to fit a complete discussion into a single paragraph can add unnecessary stress to the writing process. If possible, you'll want to give yourself two or three paragraphs to give the reader a comprehensive understanding of your study as a whole. Here's one way to structure an effective discussion:

[Figure: suggested structure for a discussion section]

Writing Tips

While the sections above can help you brainstorm and structure your discussion, there are many common mistakes that writers fall back on when having difficulty with their paper. Writing a discussion can be a delicate balance between summarizing your results, providing proper context for your research, and avoiding introducing new information. Remember that your paper should be both confident and honest about the results!

What to do

  • Read the journal’s guidelines on the discussion and conclusion sections. If possible, learn about the guidelines before writing the discussion to ensure you’re writing to meet their expectations. 
  • Begin with a clear statement of the principal findings. This will reinforce the main take-away for the reader and set up the rest of the discussion. 
  • Explain why the outcomes of your study are important to the reader. Discuss the implications of your findings realistically based on previous literature, highlighting both the strengths and limitations of the research. 
  • State whether the results prove or disprove your hypothesis. If your hypothesis was disproved, what might be the reasons? 
  • Introduce new or expanded ways to think about the research question. Indicate what next steps can be taken to further pursue any unresolved questions. 
  • If dealing with a contemporary or ongoing problem, such as climate change, discuss the possible consequences if the problem is left unaddressed.
  • Be concise. Adding unnecessary detail can distract from the main findings. 

What not to do


  • Rewrite your abstract. Statements with “we investigated” or “we studied” generally do not belong in the discussion. 
  • Include new arguments or evidence not previously discussed. Necessary information and evidence should be introduced in the main body of the paper. 
  • Apologize. Even if your research contains significant limitations, don’t undermine your authority by including statements that doubt your methodology or execution. 
  • Shy away from speaking on limitations or negative results. Including limitations and negative results will give readers a complete understanding of the presented research. Potential limitations include sources of potential bias, threats to internal or external validity, barriers to implementing an intervention and other issues inherent to the study design. 
  • Overstate the importance of your findings. Making grand statements about how a study will fully resolve large questions can lead readers to doubt the success of the research. 

Snippets of Effective Discussions:

Consumer-based actions to reduce plastic pollution in rivers: A multi-criteria decision analysis approach

Identifying reliable indicators of fitness in polar bears



Writing a Research Paper Conclusion | Step-by-Step Guide

Published on October 30, 2022 by Jack Caulfield. Revised on April 13, 2023.

  • Restate the problem statement addressed in the paper
  • Summarize your overall arguments or findings
  • Suggest the key takeaways from your paper

Research paper conclusion

The content of the conclusion varies depending on whether your paper presents the results of original empirical research or constructs an argument through engagement with sources.


Table of contents

  • Step 1: Restate the problem
  • Step 2: Sum up the paper
  • Step 3: Discuss the implications
  • Research paper conclusion examples
  • Frequently asked questions about research paper conclusions

The first task of your conclusion is to remind the reader of your research problem. You will have discussed this problem in depth throughout the body, but now the point is to zoom back out from the details to the bigger picture.

While you are restating a problem you've already introduced, you should avoid phrasing it identically to how it appeared in the introduction. Ideally, you'll find a novel way to circle back to the problem from the more detailed ideas discussed in the body.

For example, an argumentative paper advocating new measures to reduce the environmental impact of agriculture might restate its problem as follows:

Meanwhile, an empirical paper studying the relationship of Instagram use with body image issues might present its problem like this:

“In conclusion …”

Avoid starting your conclusion with phrases like “In conclusion” or “To conclude,” as this can come across as too obvious and make your writing seem unsophisticated. The content and placement of your conclusion should make its function clear without the need for additional signposting.


Having zoomed back out to the problem, it's time to summarize how the body of the paper went about addressing it, and what conclusions this approach led to.

Depending on the nature of your research paper, this might mean restating your thesis and arguments, or summarizing your overall findings.

Argumentative paper: Restate your thesis and arguments

In an argumentative paper, you will have presented a thesis statement in your introduction, expressing the overall claim your paper argues for. In the conclusion, you should restate the thesis and show how it has been developed through the body of the paper.

Briefly summarize the key arguments made in the body, showing how each of them contributes to proving your thesis. You may also mention any counterarguments you addressed, emphasizing why your thesis holds up against them, particularly if your argument is a controversial one.

Don’t go into the details of your evidence or present new ideas; focus on outlining in broad strokes the argument you have made.

Empirical paper: Summarize your findings

In an empirical paper, this is the time to summarize your key findings. Don’t go into great detail here (you will have presented your in-depth results and discussion already), but do clearly express the answers to the research questions you investigated.

Describe your main findings, even if they weren’t necessarily the ones you expected or hoped for, and explain the overall conclusion they led you to.

Having summed up your key arguments or findings, the conclusion ends by considering the broader implications of your research. This means expressing the key takeaways, practical or theoretical, from your paper—often in the form of a call for action or suggestions for future research.

Argumentative paper: Strong closing statement

An argumentative paper generally ends with a strong closing statement. In the case of a practical argument, make a call for action: What actions do you think should be taken by the people or organizations concerned in response to your argument?

If your topic is more theoretical and unsuitable for a call for action, your closing statement should express the significance of your argument—for example, in proposing a new understanding of a topic or laying the groundwork for future research.

Empirical paper: Future research directions

In a more empirical paper, you can close by either making recommendations for practice (for example, in clinical or policy papers), or suggesting directions for future research.

Whatever the scope of your own research, there will always be room for further investigation of related topics, and you’ll often discover new questions and problems during the research process .

Finish your paper on a forward-looking note by suggesting how you or other researchers might build on this topic in the future and address any limitations of the current paper.

Full examples of research paper conclusions are shown in the tabs below: one for an argumentative paper, the other for an empirical paper.

  • Argumentative paper
  • Empirical paper

While the role of cattle in climate change is by now common knowledge, countries like the Netherlands continually fail to confront this issue with the urgency it deserves. The evidence is clear: To create a truly futureproof agricultural sector, Dutch farmers must be incentivized to transition from livestock farming to sustainable vegetable farming. As well as dramatically lowering emissions, plant-based agriculture, if approached in the right way, can produce more food with less land, providing opportunities for nature regeneration areas that will themselves contribute to climate targets. Although this approach would have economic ramifications, from a long-term perspective, it would represent a significant step towards a more sustainable and resilient national economy. Transitioning to sustainable vegetable farming will make the Netherlands greener and healthier, setting an example for other European governments. Farmers, policymakers, and consumers must focus on the future, not just on their own short-term interests, and work to implement this transition now.

As social media becomes increasingly central to young people’s everyday lives, it is important to understand how different platforms affect their developing self-conception. By testing the effect of daily Instagram use among teenage girls, this study established that highly visual social media does indeed have a significant effect on body image concerns, with a strong correlation between the amount of time spent on the platform and participants’ self-reported dissatisfaction with their appearance. However, the strength of this effect was moderated by pre-test self-esteem ratings: Participants with higher self-esteem were less likely to experience an increase in body image concerns after using Instagram. This suggests that, while Instagram does impact body image, it is also important to consider the wider social and psychological context in which this usage occurs: Teenagers who are already predisposed to self-esteem issues may be at greater risk of experiencing negative effects. Future research into Instagram and other highly visual social media should focus on establishing a clearer picture of how self-esteem and related constructs influence young people’s experiences of these platforms. Furthermore, while this experiment measured Instagram usage in terms of time spent on the platform, observational studies are required to gain more insight into different patterns of usage—to investigate, for instance, whether active posting is associated with different effects than passive consumption of social media content.

If you’re unsure about the conclusion, it can be helpful to ask a friend or fellow student to read your conclusion and summarize the main takeaways.

  • Do they understand from your conclusion what your research was about?
  • Are they able to summarize the implications of your findings?
  • Can they answer your research question based on your conclusion?

You can also have an expert proofread your paper and provide feedback.


The conclusion of a research paper has several key elements you should make sure to include:

  • A restatement of the research problem
  • A summary of your key arguments and/or findings
  • A short discussion of the implications of your research

No, it’s not appropriate to present new arguments or evidence in the conclusion. While you might be tempted to save a striking argument for last, research papers follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically in the results and discussion sections if you are following a scientific structure). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.


Caulfield, J. (2023, April 13). Writing a Research Paper Conclusion | Step-by-Step Guide. Scribbr. Retrieved March 29, 2024, from https://www.scribbr.com/research-paper/research-paper-conclusion/


Research Guide

Chapter 7: Presenting Your Findings

Now that you have worked so hard on your project, how do you ensure that you communicate your findings in an effective and efficient way? In this section, I will introduce a few tips that can help you prepare your slides and get ready for your final presentation.

7.1 Sections of the Presentation

When preparing your slides, you need to ensure that you have a clear roadmap. You have a limited time to explain the context of your study, your results, and the main takeaways. Thus, you need to be organized and efficient when deciding what material will be included in the slides.

You need to ensure that your presentation contains the following sections:

  • Motivation: Why did you choose your topic? What is the bigger question?
  • Research question: Needs to be clear and concise. Include secondary questions, if applicable, but be clear about what your research question is.
  • Literature Review: How does your paper fit into the overall literature? What are your contributions?
  • Context: Give an overview of the issue and the population/countries that you analyzed.
  • Study Characteristics: This section is key, as it needs to include your model, your identification strategy, and an introduction to your data (sources, summary statistics, etc.).
  • Results: In this section, you need to answer your research question(s). Include tables that are readable.
  • Additional analysis: Here, include any additional information that your audience needs to know. For instance, did you try different specifications? Did you find an obstacle (e.g., your data are very noisy, or the sample is very small) that may bias your results or create issues in your analysis? Tell your audience! No research project is perfect, but you need to be clear about the imperfections of your project.
  • Conclusion: Be repetitive! What was your research question? How did you answer it? What did you find? What is next in this topic?

7.2 How to prepare your slides

When preparing your slides, remember that humans have a limited capacity to pay attention. If you want to convey your message in an effective way, you need to ensure that the message is simple and that you keep your audience's attention. Here are some strategies that you may want to follow:

  • Have a clear roadmap at the beginning of the presentation. Tell your audience what to expect.
  • Number your slides. This will help you and your audience to know where you are in your analysis.
  • Ensure that each slide has a purpose.
  • Ensure that each slide is connected to your key point.
  • Make just one argument per slide.
  • State the objective of each slide in the headline.
  • Use bullet points. Do not include more than one sentence per bullet point.
  • Choose a simple background.
  • If you want to direct your audience's attention to a specific point, make it stand out (e.g., by using a different font color).
  • Give each slide a similar structure (going from the general to the particular details).
  • Use images/graphs when possible. Ensure that the axes for the graphs are clear.
  • Use a large font for your tables. Keep them as simple as possible.
  • If you can say it with an image, choose it over a table.
  • Have an Appendix with slides that address potential questions.

7.3 How to prepare your presentation

One of the main constraints of having simple presentations is that you cannot rely on reading them. Instead, you need to have extra notes and memorize them so you can explain things beyond what is on your slides. The following are some suggestions on how to ensure you communicate effectively during your presentation.

  • Practice, practice, practice!
  • Keep the right volume (practice will help you with that)
  • Be journalistic about your presentation. Indicate what you want to say, then say it.
  • Ensure that your audience knows where you are going
  • Avoid passive voice.
  • Be consistent with the terms you are using. You do not want to confuse your audience, even if using synonyms.
  • Face your audience and keep eye contact.
  • Do not read from your slides.
  • Ensure that your audience is focused on what you are presenting and eliminate any distractions that you can control.
  • Do not rush your presentation. Speak calmly and in a controlled manner.
  • Be comprehensive when answering questions. Avoid yes/no answers. Instead, rephrase the question (to ensure you are answering the right one), then give a short answer, then develop it.
  • If you lose track, do not panic. Go back a little or ask your audience for assistance.
  • Again, practice is the secret.

You have worked so hard on your final project, and the presentation is your opportunity to share that work with the rest of the world. Use this opportunity to shine, and enjoy it.


How to Write the Results/Findings Section in Research


What is the research paper Results section and what does it do?

The Results section of a scientific research paper represents the core findings of a study derived from the methods applied to gather and analyze information. It presents these findings in a logical sequence without bias or interpretation from the author, setting up the reader for later interpretation and evaluation in the Discussion section. A major purpose of the Results section is to break down the data into sentences that show its significance to the research question(s).

The Results section appears third in the section sequence in most scientific papers. It follows the presentation of the Methods and Materials and is presented before the Discussion section —although the Results and Discussion are presented together in many journals. This section answers the basic question “What did you find in your research?”

What is included in the Results section?

The Results section should include the findings of your study and ONLY the findings of your study. The findings include:

  • Data presented in tables, charts, graphs, and other figures (may be placed into the text or on separate pages at the end of the manuscript)
  • A contextual analysis of this data explaining its meaning in sentence form
  • All data that corresponds to the central research question(s)
  • All secondary findings (secondary outcomes, subgroup analyses, etc.)

If the scope of the study is broad, if you studied a variety of variables, or if the methodology yields a wide range of different results, present only those results that are most relevant to the research question stated in the Introduction section .

As a general rule, any information that does not present the direct findings or outcome of the study should be left out of this section. Unless the journal requests that authors combine the Results and Discussion sections, explanations and interpretations should be omitted from the Results.

How are the results organized?

The best way to organize your Results section is “logically.” One logical and clear method of organizing research results is to provide them alongside the research questions—within each research question, present the type of data that addresses that research question.

Let’s look at an example. Your research question is based on a survey among patients who were treated at a hospital and received postoperative care. Let’s say your first research question is:


“What do hospital patients over age 55 think about postoperative care?”

This can actually be represented as a heading within your Results section, though it might be presented as a statement rather than a question:

Attitudes towards postoperative care in patients over the age of 55

Now present the results that address this specific research question first; in this case, perhaps a table illustrating data from a survey. Likert-scale items can be included in this example. Tables can also present standard deviations, probabilities, correlation matrices, etc.

Following this, present a content analysis, in words, of one end of the spectrum of the survey or data table. In our example case, start with the POSITIVE survey responses regarding postoperative care, using descriptive phrases. For example:

“Sixty-five percent of patients over 55 responded positively to the question ‘Are you satisfied with your hospital’s postoperative care?’” (Fig. 2)
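A count like this is easy to compute before writing it up. The sketch below uses invented Likert responses, and treating scores of 4 or 5 as "positive" is an assumed coding, not something taken from the study described above:

```python
# Hypothetical Likert responses (1 = very dissatisfied ... 5 = very satisfied)
# from patients over 55; the data and the "4-or-5 counts as positive" coding
# are illustrative assumptions only.
responses = [5, 4, 2, 5, 3, 4, 1, 5, 4, 2, 4, 5, 3, 4, 4, 2, 5, 4, 1, 4]

positive = sum(1 for r in responses if r >= 4)       # number of positive answers
share_positive = 100 * positive / len(responses)     # as a percentage of the sample

print(f"{share_positive:.0f}% responded positively")  # prints "65% responded positively"
```

Reporting the percentage alongside the raw counts (13 of 20 here) lets readers verify the figure against the table.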

Include other results such as subcategory analyses. The amount of textual description used will depend on how much interpretation of tables and figures is necessary and how many examples the reader needs in order to understand the significance of your research findings.

Next, present a content analysis of another part of the spectrum of the same research question, perhaps the NEGATIVE or NEUTRAL responses to the survey. For instance:

  “As Figure 1 shows, 15 out of 60 patients in Group A responded negatively to Question 2.”

After you have assessed the data in one figure and explained it sufficiently, move on to your next research question. For example:

  “How does patient satisfaction correspond to in-hospital improvements made to postoperative care?”


This kind of data may be presented through a figure or set of figures (for instance, a paired T-test table).

Explain the data you present, here in a table, with a concise content analysis:

“The p-value for the comparison between the before and after groups of patients was .03 (Fig. 2), indicating that the greater the dissatisfaction among patients, the more frequent the improvements that were made to postoperative care.”
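A before/after comparison like this one is commonly analyzed with a paired t-test. As a minimal sketch using only the Python standard library, with invented satisfaction scores (the study's actual data are not shown here), the critical value 2.365 is the two-tailed 5% cut-off for 7 degrees of freedom:

```python
import math
import statistics

# Hypothetical satisfaction scores for the same eight patients before and
# after improvements to postoperative care (illustrative numbers only).
before = [3.1, 2.8, 3.5, 2.9, 3.0, 2.7, 3.2, 2.6]
after  = [3.6, 3.0, 3.9, 3.4, 3.3, 3.1, 3.8, 2.9]

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)

# Paired t statistic: mean difference divided by its standard error.
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Compare |t| with the critical value for n - 1 = 7 df at alpha = .05 (2.365).
significant = abs(t) > 2.365
```

In a manuscript you would of course report the exact p-value from statistical software rather than a critical-value comparison; the point of the sketch is only to show where the number in the sentence comes from.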

Let’s examine another example of a Results section from a study on plant tolerance to heavy metal stress . In the Introduction section, the aims of the study are presented as “determining the physiological and morphological responses of Allium cepa L. towards increased cadmium toxicity” and “evaluating its potential to accumulate the metal and its associated environmental consequences.” The Results section presents data showing how these aims are achieved in tables alongside a content analysis, beginning with an overview of the findings:

“Cadmium caused inhibition of root and leaf elongation, with increasing effects at higher exposure doses (Fig. 1a-c).”

The figure containing this data is cited in parentheses. Note that this author has combined three graphs into one single figure. Separating the data into separate graphs focusing on specific aspects makes it easier for the reader to assess the findings, and consolidating this information into one figure saves space and makes it easy to locate the most relevant results.


Following this overall summary, the relevant data in the tables is broken down into greater detail in text form in the Results section.

  • “Results on the bio-accumulation of cadmium were found to be the highest (17.5 mg kg⁻¹) in the bulb, when the concentration of cadmium in the solution was 1×10⁻² M, and lowest (0.11 mg kg⁻¹) in the leaves, when the concentration was 1×10⁻³ M.”

Captioning and Referencing Tables and Figures

Tables and figures are central components of your Results section and you need to carefully think about the most effective way to use graphs and tables to present your findings . Therefore, it is crucial to know how to write strong figure captions and to refer to them within the text of the Results section.

The most important advice one can give here as well as throughout the paper is to check the requirements and standards of the journal to which you are submitting your work. Every journal has its own design and layout standards, which you can find in the author instructions on the target journal’s website. Perusing a journal’s published articles will also give you an idea of the proper number, size, and complexity of your figures.

Regardless of which format you use, the figures should be placed in the order they are referenced in the Results section and be as clear and easy to understand as possible. If there are multiple variables being considered (within one or more research questions), it can be a good idea to split these up into separate figures. Subsequently, these can be referenced and analyzed under separate headings and paragraphs in the text.

To create a caption, consider the research question being asked and change it into a phrase. For instance, if one question is “Which color did participants choose?”, the caption might be “Color choice by participant group.” Or in our last research paper example, where the question was “What is the concentration of cadmium in different parts of the onion after 14 days?” the caption reads:

 “Fig. 1(a-c): Mean concentration of Cd determined in (a) bulbs, (b) leaves, and (c) roots of onions after a 14-day period.”

Steps for Composing the Results Section

Because each study is unique, there is no one-size-fits-all approach when it comes to designing a strategy for structuring and writing the section of a research paper where findings are presented. The content and layout of this section will be determined by the specific area of research, the design of the study and its particular methodologies, and the guidelines of the target journal and its editors. However, the following steps can be used to compose the results of most scientific research studies and are essential for researchers who are new to preparing a manuscript for publication or who need a reminder of how to construct the Results section.

Step 1 : Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study.

  • The guidelines will generally outline specific requirements for the results or findings section, and the published articles will provide sound examples of successful approaches.
  • Note length limitations and restrictions on content. For instance, while many journals require the Results and Discussion sections to be separate, others do not—qualitative research papers often include results and interpretations in the same section (“Results and Discussion”).
  • Reading the aims and scope in the journal’s “ guide for authors ” section and understanding the interests of its readers will be invaluable in preparing to write the Results section.

Step 2 : Consider your research results in relation to the journal’s requirements and catalogue your results.

  • Focus on experimental results and other findings that are especially relevant to your research questions and objectives and include them even if they are unexpected or do not support your ideas and hypotheses.
  • Catalogue your findings—use subheadings to streamline and clarify your report. This will help you avoid excessive and peripheral details as you write and also help your reader understand and remember your findings. Create appendices that might interest specialists but prove too long or distracting for other readers.
  • Decide how you will structure your results. You might match the order of the research questions and hypotheses to your results, or you could arrange them according to the order presented in the Methods section. A chronological order or even a hierarchy of importance or meaningful grouping of main themes or categories might prove effective. Consider your audience, evidence, and most importantly, the objectives of your research when choosing a structure for presenting your findings.

Step 3 : Design figures and tables to present and illustrate your data.

  • Tables and figures should be numbered according to the order in which they are mentioned in the main text of the paper.
  • Information in figures should be relatively self-explanatory (with the aid of captions), and their design should include all definitions and other information necessary for readers to understand the findings without reading all of the text.
  • Use tables and figures as a focal point to tell a clear and informative story about your research and avoid repeating information. But remember that while figures clarify and enhance the text, they cannot replace it.

Step 4 : Draft your Results section using the findings and figures you have organized.

  • The goal is to communicate this complex information as clearly and precisely as possible; precise and compact phrases and sentences are most effective.
  • In the opening paragraph of this section, restate your research questions or aims to focus the reader’s attention on what the results are trying to show. It is also a good idea to summarize key findings at the end of this section to create a logical transition to the interpretation and discussion that follows.
  • Try to write in the past tense and the active voice to relay the findings since the research has already been done and the agent is usually clear. This will ensure that your explanations are also clear and logical.
  • Make sure that any specialized terminology or abbreviation you have used here has been defined and clarified in the  Introduction section .

Step 5 : Review your draft; edit and revise until it reports results exactly as you would like to have them reported to your readers.

  • Double-check the accuracy and consistency of all the data, as well as all of the visual elements included.
  • Read your draft aloud to catch language errors (grammar, spelling, and mechanics), awkward phrases, and missing transitions.
  • Ensure that your results are presented in the best order to focus on objectives and prepare readers for interpretations, valuations, and recommendations in the Discussion section . Look back over the paper’s Introduction and background while anticipating the Discussion and Conclusion sections to ensure that the presentation of your results is consistent and effective.
  • Consider seeking additional guidance on your paper. Find additional readers to look over your Results section and see if it can be improved in any way. Peers, professors, or qualified experts can provide valuable insights.

One excellent option is to use a professional English proofreading and editing service  such as Wordvice, including our paper editing service . With hundreds of qualified editors from dozens of scientific fields, Wordvice has helped thousands of authors revise their manuscripts and get accepted into their target journals. Read more about the  proofreading and editing process  before proceeding with getting academic editing services and manuscript editing services for your manuscript.

As the representation of your study’s data output, the Results section presents the core information in your research paper. By writing with clarity and conciseness and by highlighting and explaining the crucial findings of their study, authors increase the impact and effectiveness of their research manuscripts.

For more articles and videos on writing your research manuscript, visit Wordvice’s Resources page.

Wordvice Resources

  • How to Write a Research Paper Introduction 
  • Which Verb Tenses to Use in a Research Paper
  • How to Write an Abstract for a Research Paper
  • How to Write a Research Paper Title
  • Useful Phrases for Academic Writing
  • Common Transition Terms in Academic Papers
  • Active and Passive Voice in Research Papers
  • 100+ Verbs That Will Make Your Research Writing Amazing
  • Tips for Paraphrasing in Research Papers


Cochrane Training

Chapter 14: Completing ‘Summary of findings’ tables and grading the certainty of the evidence

Holger J Schünemann, Julian PT Higgins, Gunn E Vist, Paul Glasziou, Elie A Akl, Nicole Skoetz, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group (formerly Applicability and Recommendations Methods Group) and the Cochrane Statistical Methods Group

Key Points:

  • A ‘Summary of findings’ table for a given comparison of interventions provides key information concerning the magnitudes of relative and absolute effects of the interventions examined, the amount of available evidence and the certainty (or quality) of available evidence.
  • ‘Summary of findings’ tables include a row for each important outcome (up to a maximum of seven). Accepted formats of ‘Summary of findings’ tables and interactive ‘Summary of findings’ tables can be produced using GRADE’s software GRADEpro GDT.
  • Cochrane has adopted the GRADE approach (Grading of Recommendations Assessment, Development and Evaluation) for assessing certainty (or quality) of a body of evidence.
  • The GRADE approach specifies four levels of the certainty for a body of evidence for a given outcome: high, moderate, low and very low.
  • GRADE assessments of certainty are determined through consideration of five domains: risk of bias, inconsistency, indirectness, imprecision and publication bias. For evidence from non-randomized studies and rarely randomized studies, assessments can then be upgraded through consideration of three further domains.

Cite this chapter as: Schünemann HJ, Higgins JPT, Vist GE, Glasziou P, Akl EA, Skoetz N, Guyatt GH. Chapter 14: Completing ‘Summary of findings’ tables and grading the certainty of the evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

14.1 ‘Summary of findings’ tables

14.1.1 Introduction to ‘Summary of findings’ tables

‘Summary of findings’ tables present the main findings of a review in a transparent, structured and simple tabular format. In particular, they provide key information concerning the certainty or quality of evidence (i.e. the confidence or certainty in the range of an effect estimate or an association), the magnitude of effect of the interventions examined, and the sum of available data on the main outcomes. Cochrane Reviews should incorporate ‘Summary of findings’ tables during planning and publication, and should have at least one key ‘Summary of findings’ table representing the most important comparisons. Some reviews may include more than one ‘Summary of findings’ table, for example if the review addresses more than one major comparison, or includes substantially different populations that require separate tables (e.g. because the effects differ or it is important to show results separately). In the Cochrane Database of Systematic Reviews (CDSR),  all ‘Summary of findings’ tables for a review appear at the beginning, before the Background section.

14.1.2 Selecting outcomes for ‘Summary of findings’ tables

Planning for the ‘Summary of findings’ table starts early in the systematic review, with the selection of the outcomes to be included in: (i) the review; and (ii) the ‘Summary of findings’ table. This is a crucial step, and one that review authors need to address carefully.

To ensure production of optimally useful information, Cochrane Reviews begin by developing a review question and by listing all main outcomes that are important to patients and other decision makers (see Chapter 2 and Chapter 3 ). The GRADE approach to assessing the certainty of the evidence (see Section 14.2 ) defines and operationalizes a rating process that helps separate outcomes into those that are critical, important or not important for decision making. Consultation and feedback on the review protocol, including from consumers and other decision makers, can enhance this process.

Critical outcomes are likely to include clearly important endpoints; typical examples include mortality and major morbidity (such as strokes and myocardial infarction). However, they may also represent frequent minor and rare major side effects, symptoms, quality of life, burdens associated with treatment, and resource issues (costs). Burdens represent the impact of healthcare workload on patient function and well-being, and include the demands of adhering to an intervention that patients or caregivers (e.g. family) may dislike, such as having to undergo more frequent tests, or the restrictions on lifestyle that certain interventions require (Spencer-Bonilla et al 2017).

Frequently, when formulating questions that include all patient-important outcomes for decision making, review authors will confront reports of studies that have not included all these outcomes. This is particularly true for adverse outcomes. For instance, randomized trials might contribute evidence on intended effects, and on frequent, relatively minor side effects, but not report on rare adverse outcomes such as suicide attempts. Chapter 19 discusses strategies for addressing adverse effects. To obtain data for all important outcomes it may be necessary to examine the results of non-randomized studies (see Chapter 24 ). Cochrane, in collaboration with others, has developed guidance for review authors to support their decision about when to look for and include non-randomized studies (Schünemann et al 2013).

If a review includes only randomized trials, these trials may not address all important outcomes and it may therefore not be possible to address these outcomes within the constraints of the review. Review authors should acknowledge these limitations and make them transparent to readers. Review authors are encouraged to include non-randomized studies to examine rare or long-term adverse effects that may not adequately be studied in randomized trials. This raises the possibility that harm outcomes may come from studies in which participants differ from those in studies used in the analysis of benefit. Review authors will then need to consider how much such differences are likely to impact on the findings, and this will influence the certainty of evidence because of concerns about indirectness related to the population (see Section 14.2.2 ).

Non-randomized studies can provide important information not only when randomized trials do not report on an outcome or randomized trials suffer from indirectness, but also when the evidence from randomized trials is rated as very low and non-randomized studies provide evidence of higher certainty. Further discussion of these issues appears also in Chapter 24 .

14.1.3 General template for ‘Summary of findings’ tables

Several alternative standard versions of ‘Summary of findings’ tables have been developed to ensure consistency and ease of use across reviews, inclusion of the most important information needed by decision makers, and optimal presentation (see examples at Figures 14.1.a and 14.1.b ). These formats are supported by research that focused on improved understanding of the information they intend to convey (Carrasco-Labra et al 2016, Langendam et al 2016, Santesso et al 2016). They are available through GRADE’s official software package developed to support the GRADE approach: GRADEpro GDT (www.gradepro.org).

Standard Cochrane ‘Summary of findings’ tables include the following elements using one of the accepted formats. Further guidance on each of these is provided in Section 14.1.6 .

  • A brief description of the population and setting addressed by the available evidence (which may be slightly different to or narrower than those defined by the review question).
  • A brief description of the comparison addressed in the ‘Summary of findings’ table, including both the experimental and comparison interventions.
  • A list of the most critical and/or important health outcomes, both desirable and undesirable, limited to seven or fewer outcomes.
  • A measure of the typical burden of each outcome (e.g. illustrative risk, or illustrative mean, on comparator intervention).
  • The absolute and relative magnitude of effect measured for each (if both are appropriate).
  • The numbers of participants and studies contributing to the analysis of each outcome.
  • A GRADE assessment of the overall certainty of the body of evidence for each outcome (which may vary by outcome).
  • Space for comments.
  • Explanations (formerly known as footnotes).

Ideally, ‘Summary of findings’ tables are supported by more detailed tables (known as ‘evidence profiles’) to which the review may be linked, which provide more detailed explanations. Evidence profiles include the same important health outcomes, and provide greater detail than ‘Summary of findings’ tables of both of the individual considerations feeding into the grading of certainty and of the results of the studies (Guyatt et al 2011a). They ensure that a structured approach is used to rating the certainty of evidence. Although they are rarely published in Cochrane Reviews, evidence profiles are often used, for example, by guideline developers in considering the certainty of the evidence to support guideline recommendations. Review authors will find it easier to develop the ‘Summary of findings’ table by completing the rating of the certainty of evidence in the evidence profile first in GRADEpro GDT. They can then automatically convert this to one of the ‘Summary of findings’ formats in GRADEpro GDT, including an interactive ‘Summary of findings’ for publication.

As a measure of the magnitude of effect for dichotomous outcomes, the ‘Summary of findings’ table should provide a relative measure of effect (e.g. risk ratio, odds ratio, hazard ratio) and measures of absolute risk. For other types of data, an absolute measure alone (such as a difference in means for continuous data) might be sufficient. It is important that the magnitude of effect is presented in a meaningful way, which may require some transformation of the result of a meta-analysis (see also Chapter 15, Section 15.4 and Section 15.5 ). Reviews with more than one main comparison should include a separate ‘Summary of findings’ table for each comparison.
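The transformation mentioned above can be as simple as applying the pooled relative effect to an assumed comparator group risk to obtain the absolute risks per 1000 people typically displayed in these tables. A minimal sketch, where the function name and numbers are illustrative rather than GRADEpro output:

```python
def risks_per_1000(comparator_risk, risk_ratio):
    """Turn a comparator group risk (a proportion) and a pooled risk ratio
    into the absolute risks per 1000 people shown in a
    'Summary of findings' table."""
    comparator = round(comparator_risk * 1000)
    intervention = round(comparator_risk * risk_ratio * 1000)
    return comparator, intervention

# A comparator risk of 1.4% combined with a pooled risk ratio of 0.10:
risks_per_1000(0.014, 0.10)  # (14, 1): 14 vs. 1 per 1000 people
```

Presenting both columns per 1000 makes the absolute difference immediately visible, which is the stated rationale for including a risk difference in the table.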

Figure 14.1.a provides an example of a ‘Summary of findings’ table. Figure 14.1.b provides an alternative format that may further facilitate users’ understanding and interpretation of the review’s findings. Evidence evaluating different formats suggests that the ‘Summary of findings’ table should include a risk difference as a measure of the absolute effect, and authors should preferably use a format that includes a risk difference.

A detailed description of the contents of a ‘Summary of findings’ table appears in Section 14.1.6 .

Figure 14.1.a Example of a ‘Summary of findings’ table

Summary of findings

a All the stockings in the nine studies included in this review were below-knee compression stockings. In four studies the compression strength was 20 mmHg to 30 mmHg at the ankle. It was 10 mmHg to 20 mmHg in the other four studies. Stockings come in different sizes. If a stocking is too tight around the knee it can prevent essential venous return causing the blood to pool around the knee. Compression stockings should be fitted properly. A stocking that is too tight could cut into the skin on a long flight and potentially cause ulceration and increased risk of DVT. Some stockings can be slightly thicker than normal leg covering and can be potentially restrictive with tight foot wear. It is a good idea to wear stockings around the house prior to travel to ensure a good, comfortable fit. Participants put their stockings on two to three hours before the flight in most of the studies. The availability and cost of stockings can vary.

b Two studies recruited high risk participants defined as those with previous episodes of DVT, coagulation disorders, severe obesity, limited mobility due to bone or joint problems, neoplastic disease within the previous two years, large varicose veins or, in one of the studies, participants taller than 190 cm and heavier than 90 kg. The incidence for the seven studies that excluded high risk participants was 1.45% and the incidence for the two studies that recruited high-risk participants (with at least one risk factor) was 2.43%. We have used 10 and 30 per 1000 to express different risk strata, respectively.

c The confidence interval crosses no difference and does not rule out a small increase.

d The measurement of oedema was not validated (indirectness of the outcome) or blinded to the intervention (risk of bias).

e If there are very few or no events and the number of participants is large, judgement about the certainty of evidence (particularly judgements about imprecision) may be based on the absolute effect. Here the certainty rating may be considered ‘high’ if the outcome was appropriately assessed and the event, in fact, did not occur in 2821 studied participants.

f None of the other studies reported adverse effects, apart from four cases of superficial vein thrombosis in varicose veins in the knee region that were compressed by the upper edge of the stocking in one study.

Figure 14.1.b Example of alternative ‘Summary of findings’ table

14.1.4 Producing ‘Summary of findings’ tables

The GRADE Working Group’s software, GRADEpro GDT ( www.gradepro.org ), including GRADE’s interactive handbook, is available to assist review authors in the preparation of ‘Summary of findings’ tables. GRADEpro can use data on the comparator group risk and the effect estimate (entered by the review authors or imported from files generated in RevMan) to produce the relative effects and absolute risks associated with experimental interventions. In addition, it leads the user through the process of a GRADE assessment, and produces a table that can be used as a standalone table in a review (including by direct import into software such as RevMan or integration with RevMan Web), or an interactive ‘Summary of findings’ table (see help resources in GRADEpro).

14.1.5 Statistical considerations in ‘Summary of findings’ tables

14.1.5.1 Dichotomous outcomes

‘Summary of findings’ tables should include both absolute and relative measures of effect for dichotomous outcomes. Risk ratios, odds ratios and risk differences are different ways of comparing two groups with dichotomous outcome data (see Chapter 6, Section 6.4.1 ). Furthermore, there are two distinct risk ratios, depending on which event (e.g. ‘yes’ or ‘no’) is the focus of the analysis (see Chapter 6, Section 6.4.1.5 ). In the presence of a non-zero intervention effect, any variation across studies in the comparator group risks (i.e. variation in the risk of the event occurring without the intervention of interest, for example in different populations) makes it impossible for more than one of these measures to be truly the same in every study.

It has long been assumed in epidemiology that relative measures of effect are more consistent than absolute measures of effect from one scenario to another. There is empirical evidence to support this assumption (Engels et al 2000, Deeks and Altman 2001, Furukawa et al 2002). For this reason, meta-analyses should generally use either a risk ratio or an odds ratio as a measure of effect (see Chapter 10, Section 10.4.3 ). Correspondingly, a single estimate of relative effect is likely to be a more appropriate summary than a single estimate of absolute effect. If a relative effect is indeed consistent across studies, then different comparator group risks will have different implications for absolute benefit. For instance, if the risk ratio is consistently 0.75, then the experimental intervention would reduce a comparator group risk of 80% to 60% in the intervention group (an absolute risk reduction of 20 percentage points), but would also reduce a comparator group risk of 20% to 15% in the intervention group (an absolute risk reduction of 5 percentage points).
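The arithmetic in the example above can be checked with a few lines of code (Python is used here purely for illustration; the figures are those given in the text):

```python
# A consistent relative effect implies different absolute effects at
# different comparator group risks. Figures from the text: RR = 0.75,
# comparator risks of 80% and 20%.
rr = 0.75
results = {}
for comparator_risk in (0.80, 0.20):
    intervention_risk = rr * comparator_risk
    # absolute risk reduction = comparator risk minus intervention risk
    results[comparator_risk] = comparator_risk - intervention_risk
    print(f"{comparator_risk:.0%} -> {intervention_risk:.0%}, "
          f"ARR {results[comparator_risk]:.0%}")
```

The same relative effect yields an absolute risk reduction of 20 percentage points at the higher baseline risk but only 5 percentage points at the lower one.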

‘Summary of findings’ tables are built around the assumption of a consistent relative effect. It is therefore important to consider the implications of this effect for different comparator group risks (these can be derived or estimated from a number of sources, see Section 14.1.6.3 ), which may require an assessment of the certainty of evidence for prognostic evidence (Spencer et al 2012, Iorio et al 2015). For any comparator group risk, it is possible to estimate a corresponding intervention group risk (i.e. the absolute risk with the intervention) from the meta-analytic risk ratio or odds ratio. Note that the numbers provided in the ‘Corresponding risk’ column are specific to the ‘risks’ in the adjacent column.

For the meta-analytic risk ratio (RR) and assumed comparator risk (ACR) the corresponding intervention risk is obtained as:

Intervention risk = RR × ACR

As an example, in Figure 14.1.a , the meta-analytic risk ratio for symptomless deep vein thrombosis (DVT) is RR = 0.10 (95% CI 0.04 to 0.26). Assuming a comparator risk of ACR = 10 per 1000 = 0.01, we obtain:

Intervention risk = 0.10 × 0.01 = 0.001, i.e. 1 per 1000

For the meta-analytic odds ratio (OR) and assumed comparator risk, ACR, the corresponding intervention risk is obtained as:

Intervention risk = (OR × ACR) / (1 − ACR + OR × ACR)

Upper and lower confidence limits for the corresponding intervention risk are obtained by replacing RR or OR by their upper and lower confidence limits, respectively (e.g. replacing 0.10 with 0.04, then with 0.26, in the example). Such confidence intervals do not incorporate uncertainty in the assumed comparator risks.
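These two conversions, and the substitution of confidence limits described above, can be sketched as small helper functions (Python and the function names are our own, purely for illustration):

```python
def risk_from_rr(rr, acr):
    """Corresponding intervention risk from a meta-analytic risk ratio (RR)
    and an assumed comparator risk (ACR): RR * ACR."""
    return rr * acr

def risk_from_or(odds_ratio, acr):
    """Corresponding intervention risk from a meta-analytic odds ratio (OR):
    OR * ACR / (1 - ACR + OR * ACR)."""
    return odds_ratio * acr / (1 - acr + odds_ratio * acr)

# Worked example from the text: RR = 0.10 (95% CI 0.04 to 0.26),
# ACR = 10 per 1000 = 0.01.
acr = 0.01
point = risk_from_rr(0.10, acr)   # 0.001, i.e. 1 per 1000
# Confidence limits are obtained by substituting the limits of the RR:
ci = (risk_from_rr(0.04, acr), risk_from_rr(0.26, acr))
print(point, ci)
```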

When dealing with risk ratios, it is critical that the same definition of ‘event’ is used as was used for the meta-analysis. For example, if the meta-analysis focused on ‘death’ (as opposed to survival) as the event, then corresponding risks in the ‘Summary of findings’ table must also refer to ‘death’.

In (rare) circumstances in which there is clear rationale to assume a consistent risk difference in the meta-analysis, in principle it is possible to present this for relevant ‘assumed risks’ and their corresponding risks, and to present the corresponding (different) relative effects for each assumed risk.

The risk difference expresses the difference between the ACR and the corresponding intervention risk (or the difference between the experimental and the comparator intervention).

For the meta-analytic risk ratio (RR) and assumed comparator risk (ACR) the corresponding risk difference is obtained as (note that risks can also be expressed using percentage or percentage points):

Risk difference = ACR − (RR × ACR) = ACR × (1 − RR)

As an example, in Figure 14.1.b the meta-analytic risk ratio is 0.41 (95% CI 0.29 to 0.55) for diarrhoea in children less than 5 years of age. Assuming a comparator group risk of 22.3% we obtain:

Risk difference = 0.223 × (1 − 0.41) = 0.132, i.e. 13.2 percentage points (132 fewer per 1000)

For the meta-analytic odds ratio (OR) and assumed comparator risk (ACR) the absolute risk difference is obtained as (percentage points):

Risk difference = ACR − (OR × ACR) / (1 − ACR + OR × ACR)

Upper and lower confidence limits for the absolute risk difference are obtained by re-running the calculation above while replacing RR or OR by their upper and lower confidence limits, respectively (e.g. replacing 0.41 with 0.29, then with 0.55, in the example). Such confidence intervals do not incorporate uncertainty in the assumed comparator risks.
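The risk-difference calculations can be sketched in the same way (function names are illustrative; the figures come from the worked example above):

```python
def rd_from_rr(rr, acr):
    """Risk difference from a risk ratio: ACR - RR*ACR = ACR * (1 - RR)."""
    return acr * (1 - rr)

def rd_from_or(odds_ratio, acr):
    """Risk difference from an odds ratio:
    ACR - OR*ACR / (1 - ACR + OR*ACR)."""
    return acr - odds_ratio * acr / (1 - acr + odds_ratio * acr)

# Worked example from the text: RR = 0.41, comparator group risk 22.3%.
rd = rd_from_rr(0.41, 0.223)
print(f"{rd:.3f}")  # about 0.132, i.e. 13.2 percentage points
```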

14.1.5.2 Time-to-event outcomes

Time-to-event outcomes measure whether and when a particular event (e.g. death) occurs (van Dalen et al 2007). The impact of the experimental intervention relative to the comparison group on time-to-event outcomes is usually measured using a hazard ratio (HR) (see Chapter 6, Section 6.8.1 ).

A hazard ratio expresses a relative effect estimate. It may be used in various ways to obtain absolute risks and other interpretable quantities for a specific population. Here we describe how to re-express hazard ratios in terms of: (i) absolute risk of event-free survival within a particular period of time; (ii) absolute risk of an event within a particular period of time; and (iii) median time to the event. All methods are built on an assumption of consistent relative effects (i.e. that the hazard ratio does not vary over time).

(i) Absolute risk of event-free survival within a particular period of time

Event-free survival (e.g. overall survival) is commonly reported by individual studies. To obtain absolute effects for time-to-event outcomes measured as event-free survival, the summary HR can be used in conjunction with an assumed proportion of patients who are event-free in the comparator group (Tierney et al 2007). This proportion of patients will be specific to a period of time of observation. However, it is not strictly necessary to specify this period of time. For instance, a proportion of 50% of event-free patients might apply to patients with a high event rate observed over 1 year, or to patients with a low event rate observed over 2 years.

Intervention group event-free survival = ACR^HR

As an example, suppose the meta-analytic hazard ratio is 0.42 (95% CI 0.25 to 0.72). Assuming a comparator group risk of event-free survival (e.g. for overall survival people being alive) at 2 years of ACR = 900 per 1000 = 0.9 we obtain:

Intervention group event-free survival = 0.9^0.42 ≈ 0.956

so that 956 per 1000 people will be alive with the experimental intervention at 2 years. The derivation of the risk should be explained in a comment or footnote.

(ii) Absolute risk of an event within a particular period of time

To obtain this absolute effect, again the summary HR can be used (Tierney et al 2007):

Intervention risk = 1 − (1 − ACR)^HR

In the example, suppose we assume a comparator group risk of events (e.g. for mortality, people being dead) at 2 years of ACR = 100 per 1000 = 0.1. We obtain:

Intervention risk = 1 − (1 − 0.1)^0.42 = 1 − 0.956 = 0.044

so that 44 per 1000 people will be dead with the experimental intervention at 2 years.

(iii) Median time to the event

Instead of absolute numbers, the time to the event in the intervention and comparison groups can be expressed as median survival time in months or years. To obtain median survival time the pooled HR can be applied to an assumed median survival time in the comparator group (Tierney et al 2007):

Median time to event with intervention = (median time to event with comparator) / HR

In the example, assuming a comparator group median survival time of 80 months, we obtain:

Median survival time = 80 / 0.42 ≈ 190 months

For all three of these options for re-expressing results of time-to-event analyses, upper and lower confidence limits for the corresponding intervention risk are obtained by replacing HR by its upper and lower confidence limits, respectively (e.g. replacing 0.42 with 0.25, then with 0.72, in the example). Again, as for dichotomous outcomes, such confidence intervals do not incorporate uncertainty in the assumed comparator group risks. This is of special concern for long-term survival with a low or moderate mortality rate and a corresponding high number of censored patients (i.e. a low number of patients under risk and a high censoring rate).
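All three time-to-event conversions can be collected in one sketch (function names are our own; the figures come from the worked examples above, all with HR = 0.42):

```python
def event_free_survival(hr, comparator_survival):
    """(i) Event-free survival with the intervention:
    comparator survival raised to the power HR."""
    return comparator_survival ** hr

def event_risk(hr, comparator_risk):
    """(ii) Risk of the event with the intervention: 1 - (1 - ACR) ** HR."""
    return 1 - (1 - comparator_risk) ** hr

def median_time(hr, comparator_median):
    """(iii) Median time to event with the intervention: median / HR."""
    return comparator_median / hr

print(event_free_survival(0.42, 0.9))  # about 0.956 -> 956 per 1000 alive at 2 years
print(event_risk(0.42, 0.1))           # complements the survival figure: roughly 43-44 per 1000
print(median_time(0.42, 80))           # about 190 months

# Confidence limits follow by substituting 0.25 and 0.72 for the HR of 0.42.
```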

14.1.6 Detailed contents of a ‘Summary of findings’ table

14.1.6.1 Table title and header

The title of each ‘Summary of findings’ table should specify the healthcare question, framed in terms of the population and making it clear exactly what comparison of interventions is made. In Figure 14.1.a , the population is people taking long aeroplane flights, the intervention is compression stockings, and the control is no compression stockings.

The first rows of each ‘Summary of findings’ table should provide the following ‘header’ information:

Patients or population This further clarifies the population (and possibly the subpopulations) of interest and ideally the magnitude of risk of the most crucial adverse outcome at which an intervention is directed. For instance, people on a long-haul flight may be at different risks for DVT; those using selective serotonin reuptake inhibitors (SSRIs) might be at different risk for side effects; while those with atrial fibrillation may be at low (< 1%), moderate (1% to 4%) or high (> 4%) yearly risk of stroke.

Setting This should state any specific characteristics of the settings of the healthcare question that might limit the applicability of the summary of findings to other settings (e.g. primary care in Europe and North America).

Intervention The experimental intervention.

Comparison The comparator intervention (including no specific intervention).

14.1.6.2 Outcomes

The rows of a ‘Summary of findings’ table should include all desirable and undesirable health outcomes (listed in order of importance) that are essential for decision making, up to a maximum of seven outcomes. If there are more outcomes in the review, review authors will need to omit the less important outcomes from the table; the decision about which outcomes are critical or important to the review should be made during protocol development (see Chapter 3 ). Review authors should provide time frames for the measurement of the outcomes (e.g. 90 days or 12 months) and the type of instrument scores (e.g. ranging from 0 to 100).

Note that review authors should include the pre-specified critical and important outcomes in the table whether data are available or not. However, they should be alert to the possibility that the importance of an outcome (e.g. a serious adverse effect) may only become known after the protocol was written or the analysis was carried out, and should take appropriate actions to include these in the ‘Summary of findings’ table.

The ‘Summary of findings’ table can include effects in subgroups of the population for different comparator risks and effect sizes separately. For instance, in Figure 14.1.b effects are presented for children younger and older than 5 years separately. Review authors may also opt to produce separate ‘Summary of findings’ tables for different populations.

Review authors should include serious adverse events, but it might be possible to combine minor adverse events as a single outcome, and describe this in an explanatory footnote (note that it is not appropriate to add events together unless they are independent, that is, a participant who has experienced one adverse event has an unaffected chance of experiencing the other adverse event).

Outcomes measured at multiple time points represent a particular problem. In general, to keep the table simple, review authors should present multiple time points only for outcomes critical to decision making, where either the result or the decision made are likely to vary over time. The remainder should be presented at a common time point where possible.

Review authors can present continuous outcome measures in the ‘Summary of findings’ table and should endeavour to make these interpretable to the target audience. This requires that the units are clear and readily interpretable, for example, days of pain, or frequency of headache, and the name and scale of any measurement tools used should be stated (e.g. a Visual Analogue Scale, ranging from 0 to 100). However, many measurement instruments are not readily interpretable by non-specialist clinicians or patients, for example, points on a Beck Depression Inventory or quality of life score. For these, a more interpretable presentation might involve converting a continuous to a dichotomous outcome, such as >50% improvement (see Chapter 15, Section 15.5 ).

14.1.6.3 Best estimate of risk with comparator intervention

Review authors should provide up to three typical risks for participants receiving the comparator intervention. For dichotomous outcomes, we recommend that these be presented in the form of the number of people experiencing the event per 100 or 1000 people (natural frequency) depending on the frequency of the outcome. For continuous outcomes, this would be stated as a mean or median value of the outcome measured.

Estimated or assumed comparator intervention risks could be based on assessments of typical risks in different patient groups derived from the review itself, individual representative studies in the review, or risks derived from a systematic review of prognosis studies or other sources of evidence which may in turn require an assessment of the certainty for the prognostic evidence (Spencer et al 2012, Iorio et al 2015). Ideally, risks would reflect groups that clinicians can easily identify on the basis of their presenting features.

An explanatory footnote should specify the source or rationale for each comparator group risk, including the time period to which it corresponds where appropriate. In Figure 14.1.a , clinicians can easily differentiate individuals with risk factors for deep venous thrombosis from those without. If there is known to be little variation in baseline risk then review authors may use the median comparator group risk across studies. If typical risks are not known, an option is to choose risks from the included studies, using the second highest observed risk for a high-risk population and the second lowest for a low-risk population.

14.1.6.4 Risk with intervention

For dichotomous outcomes, review authors should provide a corresponding absolute risk for each comparator group risk, along with a confidence interval. This absolute risk with the (experimental) intervention will usually be derived from the meta-analysis result presented in the relative effect column (see Section 14.1.6.6 ). Formulae are provided in Section 14.1.5 . Review authors should present the absolute effect in the same format as the risks with comparator intervention (see Section 14.1.6.3 ), for example as the number of people experiencing the event per 1000 people.

For continuous outcomes, a difference in means or standardized difference in means should be presented with its confidence interval. These will typically be obtained directly from a meta-analysis. Explanatory text should be used to clarify the meaning, as in Figures 14.1.a and 14.1.b .

14.1.6.5 Risk difference

For dichotomous outcomes, the risk difference can be provided using one of the ‘Summary of findings’ table formats as an additional option (see Figure 14.1.b ). This risk difference expresses the difference between the experimental and comparator intervention and will usually be derived from the meta-analysis result presented in the relative effect column (see Section 14.1.6.6 ). Formulae are provided in Section 14.1.5 . Review authors should present the risk difference in the same format as assumed and corresponding risks with comparator intervention (see Section 14.1.6.3 ); for example, as the number of people experiencing the event per 1000 people or as percentage points if the assumed and corresponding risks are expressed in percentage.

For continuous outcomes, if the ‘Summary of findings’ table includes this option, the mean difference can be presented here and the ‘corresponding risk’ column left blank (see Figure 14.1.b ).

14.1.6.6 Relative effect (95% CI)

The relative effect will typically be a risk ratio or odds ratio (or occasionally a hazard ratio) with its accompanying 95% confidence interval, obtained from a meta-analysis performed on the basis of the same effect measure. Risk ratios and odds ratios are similar when the comparator intervention risks are low and effects are small, but may differ considerably when comparator group risks increase. The meta-analysis may involve an assumption of either fixed or random effects, depending on what the review authors consider appropriate, and implying that the relative effect is either an estimate of the effect of the intervention, or an estimate of the average effect of the intervention across studies, respectively.
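The divergence between risk ratios and odds ratios at higher comparator risks can be illustrated numerically (the effect size of 0.5 and the comparator risks below are our own illustrative numbers, not from the text):

```python
def risk_via_rr(rr, acr):
    # corresponding intervention risk when the value is read as a risk ratio
    return rr * acr

def risk_via_or(odds_ratio, acr):
    # corresponding intervention risk when the value is read as an odds ratio
    return odds_ratio * acr / (1 - acr + odds_ratio * acr)

# The same numeric value (0.5) read as RR and as OR, at rising comparator risks:
for acr in (0.01, 0.20, 0.50):
    print(acr, round(risk_via_rr(0.5, acr), 3), round(risk_via_or(0.5, acr), 3))
# 0.01 0.005 0.005   -- nearly identical at low comparator risk
# 0.2  0.1   0.111   -- beginning to diverge
# 0.5  0.25  0.333   -- clearly different at high comparator risk
```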

14.1.6.7 Number of participants (studies)

This column should include the number of participants assessed in the included studies for each outcome and the corresponding number of studies that contributed these participants.

14.1.6.8 Certainty of the evidence (GRADE)

Review authors should comment on the certainty of the evidence (also known as quality of the body of evidence or confidence in the effect estimates). Review authors should use the specific evidence grading system developed by the GRADE Working Group (Atkins et al 2004, Guyatt et al 2008, Guyatt et al 2011a), which is described in detail in Section 14.2 . The GRADE approach categorizes the certainty in a body of evidence as ‘high’, ‘moderate’, ‘low’ or ‘very low’ by outcome. This is a result of judgement, but the judgement process operates within a transparent structure. As an example, the certainty would be ‘high’ if the summary were of several randomized trials with low risk of bias, but the rating of certainty becomes lower if there are concerns about risk of bias, inconsistency, indirectness, imprecision or publication bias. Judgements other than of ‘high’ certainty should be made transparent using explanatory footnotes or the ‘Comments’ column in the ‘Summary of findings’ table (see Section 14.1.6.10 ).

14.1.6.9 Comments

The aim of the ‘Comments’ field is to help interpret the information or data identified in the row. For example, comments may address the validity of the outcome measure or the presence of variables that are associated with the magnitude of effect. Important caveats about the results should be flagged here. Not all rows will need comments, and it is best to leave a blank if there is nothing warranting a comment.

14.1.6.10 Explanations

Detailed explanations should be included as footnotes to support the judgements in the ‘Summary of findings’ table, such as the overall GRADE assessment. The explanations should describe the rationale for important aspects of the content. Table 14.1.a lists guidance for useful explanations. Explanations should be concise, informative, relevant, easy to understand and accurate. If explanations cannot be sufficiently described in footnotes, review authors should provide further details of the issues in the Results and Discussion sections of the review.

Table 14.1.a Guidance for providing useful explanations in ‘Summary of findings’ (SoF) tables. Adapted from Santesso et al (2016)

14.2 Assessing the certainty or quality of a body of evidence

14.2.1 The GRADE approach

The Grades of Recommendation, Assessment, Development and Evaluation Working Group (GRADE Working Group) has developed a system for grading the certainty of evidence (Schünemann et al 2003, Atkins et al 2004, Schünemann et al 2006, Guyatt et al 2008, Guyatt et al 2011a). Over 100 organizations including the World Health Organization (WHO), the American College of Physicians, the American Society of Hematology (ASH), the Canadian Agency for Drugs and Technologies in Health (CADTH) and the National Institute for Health and Care Excellence (NICE) in the UK have adopted the GRADE system ( www.gradeworkinggroup.org ).

Cochrane has also formally adopted this approach, and all Cochrane Reviews should use GRADE to evaluate the certainty of evidence for important outcomes (see MECIR Box 14.2.a ).

MECIR Box 14.2.a Relevant expectations for conduct of intervention reviews

For systematic reviews, the GRADE approach defines the certainty of a body of evidence as the extent to which one can be confident that an estimate of effect or association is close to the quantity of specific interest. Assessing the certainty of a body of evidence involves consideration of within- and across-study risk of bias (limitations in study design and execution or methodological quality), inconsistency (or heterogeneity), indirectness of evidence, imprecision of the effect estimates and risk of publication bias (see Section 14.2.2 ), as well as domains that may increase our confidence in the effect estimate (as described in Section 14.2.3 ). The GRADE system entails an assessment of the certainty of a body of evidence for each individual outcome. Judgements about the domains that determine the certainty of evidence should be described in the results or discussion section and as part of the ‘Summary of findings’ table.

The GRADE approach specifies four levels of certainty ( Figure 14.2.a ). For interventions, including diagnostic and other tests that are evaluated as interventions (Schünemann et al 2008b, Schünemann et al 2008a, Balshem et al 2011, Schünemann et al 2012), the starting point for rating the certainty of evidence is categorized into two types:

  • randomized trials; and
  • non-randomized studies of interventions (NRSI), including observational studies such as cohort studies, case-control studies, cross-sectional studies, case series and case reports (although not all of these designs are usually included in Cochrane Reviews).

There are many instances in which review authors rely on information from NRSI, in particular to evaluate potential harms (see Chapter 24 ). In addition, review authors can obtain relevant data from both randomized trials and NRSI, with each type of evidence complementing the other (Schünemann et al 2013).

In GRADE, a body of evidence from randomized trials begins with a high-certainty rating while a body of evidence from NRSI begins with a low-certainty rating. The lower rating with NRSI is the result of the potential bias induced by the lack of randomization (i.e. confounding and selection bias).

However, when using the new Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool (Sterne et al 2016), an assessment tool that covers the risk of bias due to lack of randomization, all studies may start as high certainty of the evidence (Schünemann et al 2018). The approach of starting all study designs (including NRSI) as high certainty does not conflict with the initial GRADE approach of starting the rating of NRSI as low certainty evidence. This is because a body of evidence from NRSI should generally be downgraded by two levels due to the inherent risk of bias associated with the lack of randomization, namely confounding and selection bias. Not downgrading NRSI from high to low certainty needs transparent and detailed justification for what mitigates concerns about confounding and selection bias (Schünemann et al 2018). Very few examples of where not rating down by two levels is appropriate currently exist.

The highest certainty rating applies to a body of evidence when there are no concerns in any of the GRADE factors listed in Figure 14.2.a . Review authors often downgrade evidence to moderate, low or even very low certainty evidence, depending on the presence of the five factors in Figure 14.2.a . Usually, the certainty rating will fall by one level for each factor, up to a maximum of three levels for all factors. If there are very severe problems for any one domain (e.g. when assessing risk of bias, all studies were unconcealed, unblinded and lost over 50% of their patients to follow-up), evidence may fall by two levels due to that factor alone. It is not possible to rate lower than ‘very low certainty’ evidence.

Review authors will generally grade evidence from sound non-randomized studies as low certainty, even if ROBINS-I is used. If, however, such studies yield large effects and there is no obvious bias explaining those effects, review authors may rate the evidence as moderate or – if the effect is large enough – even as high certainty ( Figure 14.2.a ). The very low certainty level is appropriate for, but is not limited to, studies with critical problems and unsystematic clinical observations (e.g. case series or case reports).

Figure 14.2.a Levels of the certainty of a body of evidence in the GRADE approach. *Upgrading criteria are usually applicable to non-randomized studies only (but exceptions exist).

14.2.2 Domains that can lead to decreasing the certainty level of a body of evidence   

We now describe in more detail the five reasons (or domains) for downgrading the certainty of a body of evidence for a specific outcome. In each case, if no reason is found for downgrading the evidence, it should be classified as ‘no limitation or not serious’ (not important enough to warrant downgrading). If a reason is found for downgrading the evidence, it should be classified as ‘serious’ (downgrading the certainty rating by one level) or ‘very serious’ (downgrading the certainty rating by two levels). For non-randomized studies assessed with ROBINS-I, rating down by three levels should be classified as ‘extremely serious’.

(1) Risk of bias or limitations in the detailed design and implementation

Our confidence in an estimate of effect decreases if studies suffer from major limitations that are likely to result in a biased assessment of the intervention effect. For randomized trials, these methodological limitations include failure to generate a random sequence, lack of allocation sequence concealment, lack of blinding (particularly with subjective outcomes that are highly susceptible to biased assessment), a large loss to follow-up or selective reporting of outcomes. Chapter 8 provides a discussion of study-level assessments of risk of bias in the context of a Cochrane Review, and proposes an approach to assessing the risk of bias for an outcome across studies as ‘Low’ risk of bias, ‘Some concerns’ and ‘High’ risk of bias for randomized trials. Levels of ‘Low’, ‘Moderate’, ‘Serious’ and ‘Critical’ risk of bias arise for non-randomized studies assessed with ROBINS-I ( Chapter 25 ). These assessments should feed directly into this GRADE domain. In particular, ‘Low’ risk of bias would indicate ‘no limitation’; ‘Some concerns’ would indicate either ‘no limitation’ or ‘serious limitation’; and ‘High’ risk of bias would indicate either ‘serious limitation’ or ‘very serious limitation’. ‘Critical’ risk of bias on ROBINS-I would indicate extremely serious limitations in GRADE. Review authors should use their judgement to decide between alternative categories, depending on the likely magnitude of the potential biases.

Every study addressing a particular outcome will differ, to some degree, in the risk of bias. Review authors should make an overall judgement on whether the certainty of evidence for an outcome warrants downgrading on the basis of study limitations. The assessment of study limitations should apply to the studies contributing to the results in the ‘Summary of findings’ table, rather than to all studies that could potentially be included in the analysis. We have argued in Chapter 7, Section 7.6.2 , that the primary analysis should be restricted to studies at low (or low and unclear) risk of bias where possible.

Table 14.2.a presents the judgements that must be made in going from assessments of the risk of bias to judgements about study limitations for each outcome included in a ‘Summary of findings’ table. A rating of high certainty evidence can be achieved only when most evidence comes from studies that met the criteria for low risk of bias. For example, of the 22 studies addressing the impact of beta-blockers on mortality in patients with heart failure, most probably or certainly used concealed allocation of the sequence, all blinded at least some key groups and follow-up of randomized patients was almost complete (Brophy et al 2001). The certainty of evidence might be downgraded by one level when most of the evidence comes from individual studies either with a crucial limitation for one item, or with some limitations for multiple items. An example of very serious limitations, warranting downgrading by two levels, is provided by evidence on surgery versus conservative treatment in the management of patients with lumbar disc prolapse (Gibson and Waddell 2007). We are uncertain of the benefit of surgery in reducing symptoms after one year or longer, because the one study included in the analysis had inadequate concealment of the allocation sequence and the outcome was assessed using a crude rating by the surgeon without blinding.

(2) Unexplained heterogeneity or inconsistency of results

When studies yield widely differing estimates of effect (heterogeneity or variability in results), investigators should look for robust explanations for that heterogeneity. For instance, drugs may have larger relative effects in sicker populations or when given in larger doses. A detailed discussion of heterogeneity and its investigation is provided in Chapter 10, Section 10.10 and Section 10.11 . If an important modifier exists, with good evidence that important outcomes are different in different subgroups (which would ideally be pre-specified), then a separate ‘Summary of findings’ table may be considered for a separate population. For instance, a separate ‘Summary of findings’ table would be used for carotid endarterectomy in symptomatic patients with high grade stenosis (70% to 99%) in which the intervention is, in the hands of the right surgeons, beneficial, and another (if review authors considered it relevant) for asymptomatic patients with low grade stenosis (less than 30%) in which surgery appears harmful (Orrapin and Rerkasem 2017). When heterogeneity exists and affects the interpretation of results, but review authors are unable to identify a plausible explanation with the data available, the certainty of the evidence decreases.

(3) Indirectness of evidence

Two types of indirectness are relevant. First, a review comparing the effectiveness of alternative interventions (say A and B) may find that randomized trials are available, but they have compared A with placebo and B with placebo. Thus, the evidence is restricted to indirect comparisons between A and B. Where indirect comparisons are undertaken within a network meta-analysis context, GRADE for network meta-analysis should be used (see Chapter 11, Section 11.5 ).

Second, a review may find randomized trials that meet eligibility criteria but address a restricted version of the main review question in terms of population, intervention, comparator or outcomes. For example, suppose that in a review addressing an intervention for secondary prevention of coronary heart disease, most identified studies happened to be in people who also had diabetes. Then the evidence may be regarded as indirect in relation to the broader question of interest because the population consisted primarily of people with diabetes. The opposite scenario can equally apply: a review addressing the effect of a preventive strategy for coronary heart disease in people with diabetes may consider studies in people without diabetes to provide relevant, albeit indirect, evidence. This would be particularly likely if investigators had conducted few if any randomized trials in the target population (e.g. people with diabetes).

Other sources of indirectness may arise from the interventions studied (e.g. if in all included studies a technical intervention was implemented by expert, highly trained specialists in specialist centres, then evidence on the effects of the intervention outside these centres may be indirect), the comparators used (e.g. if the comparator groups received an intervention that is less effective than standard treatment in most settings) and the outcomes assessed (e.g. indirectness due to surrogate outcomes when data on patient-important outcomes are not available, or when investigators seek data on quality of life but only symptoms are reported). Review authors should make their judgements transparent when they believe downgrading is justified, based on differences in anticipated effects in the group of primary interest. Review authors may be aided, and may increase the transparency of their judgements about indirectness, if they use Table 14.2.b available in the GRADEpro GDT software (Schünemann et al 2013).

(4) Imprecision of results

When studies include few participants or few events, and thus have wide confidence intervals, review authors can lower their rating of the certainty of the evidence. The confidence intervals included in the ‘Summary of findings’ table will provide readers with information that allows them to make, to some extent, their own rating of precision. Review authors can use a calculation of the optimal information size (OIS) or review information size (RIS), similar to sample size calculations, to make judgements about imprecision (Guyatt et al 2011b, Schünemann 2016). The OIS or RIS is calculated on the basis of the number of participants required for an adequately powered individual study. If the 95% confidence interval excludes a risk ratio (RR) of 1.0, and the total number of events or patients exceeds the OIS criterion, precision is adequate. If the 95% CI includes appreciable benefit or harm (an RR of under 0.75 or over 1.25 is often suggested as a very rough guide) downgrading for imprecision may be appropriate even if OIS criteria are met (Guyatt et al 2011b, Schünemann 2016).
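The imprecision judgement described above is essentially a small decision rule and can be sketched in code. This is a hypothetical helper, not part of GRADE or GRADEpro GDT; the 0.75/1.25 thresholds are the "very rough guide" quoted in the text, and the final judgement always remains qualitative:

```python
def downgrade_for_imprecision(ci_low, ci_high, total_events, ois,
                              lower=0.75, upper=1.25):
    """Rough sketch of the imprecision judgement for a risk ratio (RR).

    ci_low, ci_high: bounds of the 95% CI for the pooled RR.
    total_events:    total events (or participants) across studies.
    ois:             optimal information size from a power calculation.
    Returns True when downgrading for imprecision may be warranted.
    """
    crosses_null = ci_low <= 1.0 <= ci_high
    spans_appreciable_effect = ci_low < lower or ci_high > upper
    meets_ois = total_events >= ois

    # Precision is adequate when the CI excludes RR = 1.0 and the OIS
    # criterion is met -- unless the CI still spans appreciable benefit
    # or harm, in which case downgrading may still be appropriate.
    return crosses_null or not meets_ois or spans_appreciable_effect
```

For example, a pooled RR with 95% CI 0.80 to 0.95 based on 5000 events against an OIS of 2000 would not trigger downgrading under this sketch, whereas a CI of 0.90 to 1.10 would.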

(5) High probability of publication bias

The certainty of evidence level may be downgraded if investigators fail to report studies on the basis of results (typically those that show no effect: publication bias) or outcomes (typically those that may be harmful or for which no effect was observed: selective outcome non-reporting bias). Selective reporting of outcomes from among multiple outcomes measured is assessed at the study level as part of the assessment of risk of bias (see Chapter 8, Section 8.7), so for the studies contributing to the outcome in the ‘Summary of findings’ table this is addressed by domain 1 above (limitations in the design and implementation). If a large number of studies included in the review do not contribute to an outcome, or if there is evidence of publication bias, the certainty of the evidence may be downgraded. Chapter 13 provides a detailed discussion of reporting biases, including publication bias, and how it may be tackled in a Cochrane Review. A prototypical situation that may elicit suspicion of publication bias is when published evidence includes a number of small studies, all of which are industry-funded (Bhandari et al 2004). For example, 14 studies of flavonoids in patients with haemorrhoids have shown apparent large benefits, but enrolled a total of only 1432 patients (i.e. each study enrolled relatively few patients) (Alonso-Coello et al 2006). The heavy involvement of sponsors in most of these studies raises questions of whether unpublished studies that suggest no benefit exist (publication bias).

A particular body of evidence can suffer from problems associated with more than one of the five factors listed here, and the greater the problems, the lower the certainty of evidence rating that should result. One could imagine a situation in which randomized trials were available, but all or virtually all of these limitations would be present, and in serious form. A very low certainty of evidence rating would result.

Table 14.2.a Further guidelines for domain 1 (of 5) in a GRADE assessment: going from assessments of risk of bias in studies to judgements about study limitations for main outcomes across studies

Table 14.2.b Judgements about indirectness by outcome (available in GRADEpro GDT)

  • Intervention:
  • Comparator:
  • Direct comparison:
  • Final judgement about indirectness across domains:

14.2.3 Domains that may lead to increasing the certainty level of a body of evidence

Although NRSI and downgraded randomized trials will generally yield a low rating for certainty of evidence, there will be unusual circumstances in which review authors could ‘upgrade’ such evidence to moderate or even high certainty (Table 14.3.a).

  • Large effects: On rare occasions when methodologically well-done observational studies yield large, consistent and precise estimates of the magnitude of an intervention effect, one may be particularly confident in the results. A large estimated effect (e.g. RR >2 or RR <0.5) in the absence of plausible confounders, or a very large effect (e.g. RR >5 or RR <0.2) in studies with no major threats to validity, might qualify for this. In these situations, while the NRSI may possibly have provided an over-estimate of the true effect, the weak study design may not explain all of the apparent observed benefit. Thus, despite reservations based on the observational study design, review authors are confident that the effect exists. The magnitude of the effect in these studies may move the assigned certainty of evidence from low to moderate (if the effect is large in the absence of other methodological limitations). For example, a meta-analysis of observational studies showed that bicycle helmets reduce the risk of head injuries in cyclists by a large margin (odds ratio (OR) 0.31, 95% CI 0.26 to 0.37) (Thompson et al 2000). This large effect, in the absence of obvious bias that could create the association, suggests a rating of moderate-certainty evidence. Note: GRADE guidance suggests the possibility of rating up one level for a large effect if the relative effect is greater than 2.0. However, if the point estimate of the relative effect is greater than 2.0, but the confidence interval is appreciably below 2.0, then some hesitation would be appropriate in the decision to rate up for a large effect. Another situation allows inference of a strong association without a formal comparative study. Consider the question of the impact of routine colonoscopy versus no screening for colon cancer on the rate of perforation associated with colonoscopy.
Here, a large series of representative patients undergoing colonoscopy may provide high certainty evidence about the risk of perforation associated with colonoscopy. When the risk of the event among patients receiving the relevant comparator is known to be near 0 (i.e. we are certain that the incidence of spontaneous colon perforation in patients not undergoing colonoscopy is extremely low), case series or cohort studies of representative patients can provide high certainty evidence of adverse effects associated with an intervention, thereby allowing us to infer a strong association from even a limited number of events.
  • Dose-response: The presence of a dose-response gradient may increase our confidence in the findings of observational studies and thereby enhance the assigned certainty of evidence. For example, our confidence in the result of observational studies that show an increased risk of bleeding in patients who have supratherapeutic anticoagulation levels is increased by the observation that there is a dose-response gradient between the length of time needed for blood to clot (as measured by the international normalized ratio (INR)) and an increased risk of bleeding (Levine et al 2004). A systematic review of NRSI investigating the effect of cyclooxygenase-2 inhibitors on cardiovascular events found that the summary estimate (RR) with rofecoxib was 1.33 (95% CI 1.00 to 1.79) with doses less than 25 mg/d, and 2.19 (95% CI 1.64 to 2.91) with doses more than 25 mg/d. Although residual confounding is likely to exist in the NRSI that address this issue, the existence of a dose-response gradient and the large apparent effect of higher doses of rofecoxib markedly increase our strength of inference that the association cannot be explained by residual confounding, and is therefore likely to be both causal and, at high levels of exposure, substantial. Note: GRADE guidance suggests the possibility of rating up one level for a large effect if the relative effect is greater than 2.0. Here, the fact that the point estimate of the relative effect is greater than 2.0, but the confidence interval is appreciably below 2.0, might make some hesitate in the decision to rate up for a large effect.
  • Plausible confounding: On occasion, all plausible biases from randomized or non-randomized studies may be working to under-estimate an apparent intervention effect. For example, if only sicker patients receive an experimental intervention or exposure, yet they still fare better, it is likely that the actual intervention or exposure effect is larger than the data suggest. For instance, a rigorous systematic review of observational studies including a total of 38 million patients demonstrated higher death rates in private for-profit versus private not-for-profit hospitals (Devereaux et al 2002). One possible bias relates to different disease severity in patients in the two hospital types. It is likely, however, that patients in the not-for-profit hospitals were sicker than those in the for-profit hospitals. Thus, to the extent that residual confounding existed, it would bias results against the not-for-profit hospitals. The second likely bias was the possibility that higher numbers of patients with excellent private insurance coverage could lead to a hospital having more resources and a spill-over effect that would benefit those without such coverage. Since for-profit hospitals are likely to admit a larger proportion of such well-insured patients than not-for-profit hospitals, the bias is once again against the not-for-profit hospitals. Since the plausible biases would all diminish the demonstrated intervention effect, one might consider the evidence from these observational studies as moderate rather than low certainty. A parallel situation exists when observational studies have failed to demonstrate an association, but all plausible biases would have increased an intervention effect. This situation will usually arise in the exploration of apparent harmful effects. For example, because the hypoglycaemic drug phenformin causes lactic acidosis, the related agent metformin was under suspicion for the same toxicity.
Nevertheless, very large observational studies have failed to demonstrate an association (Salpeter et al 2007). Given the likelihood that clinicians would be more alert to lactic acidosis in the presence of the agent and over-report its occurrence, one might consider this moderate, or even high certainty, evidence refuting a causal relationship between typical therapeutic doses of metformin and lactic acidosis.
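The numeric thresholds quoted in the "Large effects" bullet above (RR >2 or <0.5 for one level; RR >5 or <0.2 for two), together with the Note about confidence intervals reaching below 2.0, can be sketched as a rough screening aid. This is a hypothetical helper, not part of GRADE or any Cochrane tooling; it only flags candidates, and rating up still requires the qualitative judgements about confounding and validity described above:

```python
def candidate_upgrade(rr, ci_low, ci_high):
    """Return (levels, caution) for rating up a body of evidence for a
    large effect, using the rough thresholds quoted in the text.

    levels:  candidate number of levels to rate up (0, 1 or 2), assuming
             no plausible confounders and no major threats to validity.
    caution: True when the point estimate exceeds 2.0 but the CI reaches
             appreciably below 2.0 (the hesitation flagged in the Note).
    """
    if rr < 1.0:  # express protective effects (RR < 1) on the same scale
        rr, ci_low, ci_high = 1.0 / rr, 1.0 / ci_high, 1.0 / ci_low
    if rr > 5.0:
        return 2, False   # very large effect
    if rr > 2.0:
        return 1, ci_low < 2.0
    return 0, False
```

Applied to the examples above: the helmet meta-analysis (OR 0.31, 95% CI 0.26 to 0.37) yields a clean one-level candidate, while high-dose rofecoxib (RR 2.19, 95% CI 1.64 to 2.91) yields one level with the caution flag set.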

14.3 Describing the assessment of the certainty of a body of evidence using the GRADE framework

Review authors should report the grading of the certainty of evidence in the Results section for each outcome for which this has been performed, providing the rationale for downgrading or upgrading the evidence, and referring to the ‘Summary of findings’ table where applicable.

Table 14.3.a provides a framework and examples for how review authors can justify their judgements about the certainty of evidence in each domain. These justifications should also be included in explanatory notes to the ‘Summary of findings’ table (see Section 14.1.6.10).

Chapter 15, Section 15.6, describes in more detail how the overall GRADE assessment across all domains can be used to draw conclusions about the effects of the intervention, as well as providing implications for future research.

Table 14.3.a Framework for describing the certainty of evidence and justifying downgrading or upgrading

14.4 Chapter information

Authors: Holger J Schünemann, Julian PT Higgins, Gunn E Vist, Paul Glasziou, Elie A Akl, Nicole Skoetz, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group (formerly Applicability and Recommendations Methods Group) and the Cochrane Statistical Methods Group

Acknowledgements: Andrew D Oxman contributed to earlier versions. Professor Penny Hawe contributed to the text on adverse effects in earlier versions. Jon Deeks provided helpful contributions on an earlier version of this chapter. For details of previous authors and editors of the Handbook , please refer to the Preface.

Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health.

14.5 References

Alonso-Coello P, Zhou Q, Martinez-Zapata MJ, Mills E, Heels-Ansdell D, Johanson JF, Guyatt G. Meta-analysis of flavonoids for the treatment of haemorrhoids. British Journal of Surgery 2006; 93 : 909-920.

Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, Guyatt GH, Harbour RT, Haugh MC, Henry D, Hill S, Jaeschke R, Leng G, Liberati A, Magrini N, Mason J, Middleton P, Mrukowicz J, O'Connell D, Oxman AD, Phillips B, Schünemann HJ, Edejer TT, Varonen H, Vist GE, Williams JW, Jr., Zaza S. Grading quality of evidence and strength of recommendations. BMJ 2004; 328 : 1490.

Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, Guyatt GH. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology 2011; 64 : 401-406.

Bhandari M, Busse JW, Jackowski D, Montori VM, Schünemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. Canadian Medical Association Journal 2004; 170 : 477-480.

Brophy JM, Joseph L, Rouleau JL. Beta-blockers in congestive heart failure. A Bayesian meta-analysis. Annals of Internal Medicine 2001; 134 : 550-560.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P, Meerpohl JJ, Vandvik PO, Brozek JL, Akl EA, Bossuyt P, Churchill R, Glenton C, Rosenbaum S, Tugwell P, Welch V, Garner P, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary of findings tables with a new format. Journal of Clinical Epidemiology 2016; 74 : 7-18.

Deeks JJ, Altman DG. Effect measures for meta-analysis of trials with binary outcomes. In: Egger M, Davey Smith G, Altman DG, editors. Systematic Reviews in Health Care: Meta-analysis in Context . 2nd ed. London (UK): BMJ Publication Group; 2001. p. 313-335.

Devereaux PJ, Choi PT, Lacchetti C, Weaver B, Schünemann HJ, Haines T, Lavis JN, Grant BJ, Haslam DR, Bhandari M, Sullivan T, Cook DJ, Walter SD, Meade M, Khan H, Bhatnagar N, Guyatt GH. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Canadian Medical Association Journal 2002; 166 : 1399-1406.

Engels EA, Schmid CH, Terrin N, Olkin I, Lau J. Heterogeneity and statistical significance in meta-analysis: an empirical study of 125 meta-analyses. Statistics in Medicine 2000; 19 : 1707-1728.

Furukawa TA, Guyatt GH, Griffith LE. Can we individualize the 'number needed to treat'? An empirical study of summary effect measures in meta-analyses. International Journal of Epidemiology 2002; 31 : 72-76.

Gibson JN, Waddell G. Surgical interventions for lumbar disc prolapse: updated Cochrane Review. Spine 2007; 32 : 1735-1747.

Guyatt G, Oxman A, Vist G, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann H. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336 : 3.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, Jaeschke R, Rind D, Meerpohl J, Dahm P, Schünemann HJ. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 2011a; 64 : 383-394.

Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, Devereaux PJ, Montori VM, Freyschuss B, Vist G, Jaeschke R, Williams JW, Jr., Murad MH, Sinclair D, Falck-Ytter Y, Meerpohl J, Whittington C, Thorlund K, Andrews J, Schünemann HJ. GRADE guidelines 6. Rating the quality of evidence--imprecision. Journal of Clinical Epidemiology 2011b; 64 : 1283-1293.

Iorio A, Spencer FA, Falavigna M, Alba C, Lang E, Burnand B, McGinn T, Hayden J, Williams K, Shea B, Wolff R, Kujpers T, Perel P, Vandvik PO, Glasziou P, Schünemann H, Guyatt G. Use of GRADE for assessment of evidence about prognosis: rating confidence in estimates of event rates in broad categories of patients. BMJ 2015; 350 : h870.

Langendam M, Carrasco-Labra A, Santesso N, Mustafa RA, Brignardello-Petersen R, Ventresca M, Heus P, Lasserson T, Moustgaard R, Brozek J, Schünemann HJ. Improving GRADE evidence tables part 2: a systematic survey of explanatory notes shows more guidance is needed. Journal of Clinical Epidemiology 2016; 74 : 19-27.

Levine MN, Raskob G, Landefeld S, Kearon C, Schulman S. Hemorrhagic complications of anticoagulant treatment: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest 2004; 126 : 287S-310S.

Orrapin S, Rerkasem K. Carotid endarterectomy for symptomatic carotid stenosis. Cochrane Database of Systematic Reviews 2017; 6 : CD001081.

Salpeter S, Greyber E, Pasternak G, Salpeter E. Risk of fatal and nonfatal lactic acidosis with metformin use in type 2 diabetes mellitus. Cochrane Database of Systematic Reviews 2007; 4 : CD002967.

Santesso N, Carrasco-Labra A, Langendam M, Brignardello-Petersen R, Mustafa RA, Heus P, Lasserson T, Opiyo N, Kunnamo I, Sinclair D, Garner P, Treweek S, Tovey D, Akl EA, Tugwell P, Brozek JL, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 3: detailed guidance for explanatory footnotes supports creating and understanding GRADE certainty in the evidence judgments. Journal of Clinical Epidemiology 2016; 74 : 28-39.

Schünemann HJ, Best D, Vist G, Oxman AD, Group GW. Letters, numbers, symbols and words: how to communicate grades of evidence and recommendations. Canadian Medical Association Journal 2003; 169 : 677-680.

Schünemann HJ, Jaeschke R, Cook DJ, Bria WF, El-Solh AA, Ernst A, Fahy BF, Gould MK, Horan KL, Krishnan JA, Manthous CA, Maurer JR, McNicholas WT, Oxman AD, Rubenfeld G, Turino GM, Guyatt G. An official ATS statement: grading the quality of evidence and strength of recommendations in ATS guidelines and recommendations. American Journal of Respiratory and Critical Care Medicine 2006; 174 : 605-614.

Schünemann HJ, Oxman AD, Brozek J, Glasziou P, Jaeschke R, Vist GE, Williams JW, Jr., Kunz R, Craig J, Montori VM, Bossuyt P, Guyatt GH. Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ 2008a; 336 : 1106-1110.

Schünemann HJ, Oxman AD, Brozek J, Glasziou P, Bossuyt P, Chang S, Muti P, Jaeschke R, Guyatt GH. GRADE: assessing the quality of evidence for diagnostic recommendations. ACP Journal Club 2008b; 149 : 2.

Schünemann HJ, Mustafa R, Brozek J. [Diagnostic accuracy and linked evidence--testing the chain]. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 2012; 106 : 153-160.

Schünemann HJ, Tugwell P, Reeves BC, Akl EA, Santesso N, Spencer FA, Shea B, Wells G, Helfand M. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods 2013; 4 : 49-62.

Schünemann HJ. Interpreting GRADE's levels of certainty or quality of the evidence: GRADE for statisticians, considering review information size or less emphasis on imprecision? Journal of Clinical Epidemiology 2016; 75 : 6-15.

Schünemann HJ, Cuello C, Akl EA, Mustafa RA, Meerpohl JJ, Thayer K, Morgan RL, Gartlehner G, Kunz R, Katikireddi SV, Sterne J, Higgins JPT, Guyatt G, Group GW. GRADE guidelines: 18. How ROBINS-I and other tools to assess risk of bias in nonrandomized studies should be used to rate the certainty of a body of evidence. Journal of Clinical Epidemiology 2018.

Spencer-Bonilla G, Quinones AR, Montori VM, International Minimally Disruptive Medicine W. Assessing the Burden of Treatment. Journal of General Internal Medicine 2017; 32 : 1141-1145.

Spencer FA, Iorio A, You J, Murad MH, Schünemann HJ, Vandvik PO, Crowther MA, Pottie K, Lang ES, Meerpohl JJ, Falck-Ytter Y, Alonso-Coello P, Guyatt GH. Uncertainties in baseline risk estimates and confidence in treatment effects. BMJ 2012; 345 : e7401.

Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, Carpenter JR, Chan AW, Churchill R, Deeks JJ, Hróbjartsson A, Kirkham J, Jüni P, Loke YK, Pigott TD, Ramsay CR, Regidor D, Rothstein HR, Sandhu L, Santaguida PL, Schünemann HJ, Shea B, Shrier I, Tugwell P, Turner L, Valentine JC, Waddington H, Waters E, Wells GA, Whiting PF, Higgins JPT. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016; 355 : i4919.

Thompson DC, Rivara FP, Thompson R. Helmets for preventing head and facial injuries in bicyclists. Cochrane Database of Systematic Reviews 2000; 2 : CD001855.

Tierney JF, Stewart LA, Ghersi D, Burdett S, Sydes MR. Practical methods for incorporating summary time-to-event data into meta-analysis. Trials 2007; 8 .

van Dalen EC, Tierney JF, Kremer LCM. Tips and tricks for understanding and using SR results. No. 7: time‐to‐event data. Evidence-Based Child Health 2007; 2 : 1089-1090.


How to Write a Conclusion for Research Papers (with Examples)

The conclusion of a research paper is a crucial section that plays a significant role in the overall impact and effectiveness of your research paper. However, it typically receives less attention than the introduction and the body of the paper. The conclusion serves to provide a concise summary of the key findings, their significance, and their implications, and to give the study a sense of closure. Discussing how the findings can be applied in real-world scenarios or inform policy, practice, or decision-making is especially valuable to practitioners and policymakers. The research paper conclusion also gives other researchers clear insights and valuable information for their own work, which they can build on to contribute to the advancement of knowledge in the field.

The research paper conclusion should explain the significance of your findings within the broader context of your field. It shows how your results contribute to the existing body of knowledge and whether they confirm or challenge existing theories or hypotheses. Identifying unanswered questions or areas requiring further investigation also demonstrates your awareness of the broader research landscape.

Remember to tailor the research paper conclusion to the specific needs and interests of your intended audience, which may include researchers, practitioners, policymakers, or a combination of these.

Table of Contents

  • What is a conclusion in a research paper?
  • Summarizing conclusion
  • Editorial conclusion
  • Externalizing conclusion
  • Importance of a good research paper conclusion
  • How to write a conclusion for your research paper
  • Research paper conclusion examples

  • How to write a research paper conclusion with Paperpal? 

Frequently Asked Questions

A conclusion in a research paper is the final section where you summarize and wrap up your research, presenting the key findings and insights derived from your study. The research paper conclusion is not the place to introduce new information or data that was not discussed in the main body of the paper. When working on how to conclude a research paper, remember to stick to summarizing and interpreting existing content. The research paper conclusion serves the following purposes: 1

  • Warn readers of the possible consequences of not attending to the problem.
  • Recommend specific course(s) of action.
  • Restate key ideas to drive home the ultimate point of your research paper.
  • Provide a “take-home” message that you want the readers to remember about your study.


Types of conclusions for research papers

In research papers, the conclusion provides closure to the reader. The type of research paper conclusion you choose depends on the nature of your study, your goals, and your target audience. Below are three common types of conclusions:

A summarizing conclusion is the most common type of conclusion in research papers. It involves summarizing the main points, reiterating the research question, and restating the significance of the findings. This common type of research paper conclusion is used across different disciplines.

An editorial conclusion is less common but can be used in research papers that are focused on proposing or advocating for a particular viewpoint or policy. It involves presenting a strong editorial or opinion based on the research findings and offering recommendations or calls to action.

An externalizing conclusion is a type of conclusion that extends the research beyond the scope of the paper by suggesting potential future research directions or discussing the broader implications of the findings. This type of conclusion is often used in more theoretical or exploratory research papers.


The conclusion in a research paper serves several important purposes:

  • Offers Implications and Recommendations : Your research paper conclusion is an excellent place to discuss the broader implications of your research and suggest potential areas for further study. It’s also an opportunity to offer practical recommendations based on your findings.
  • Provides Closure : A good research paper conclusion provides a sense of closure to your paper. It should leave the reader with a feeling that they have reached the end of a well-structured and thought-provoking research project.
  • Leaves a Lasting Impression : Writing a well-crafted research paper conclusion leaves a lasting impression on your readers. It’s your final opportunity to leave them with a new idea, a call to action, or a memorable quote.


Writing a strong conclusion for your research paper is essential to leave a lasting impression on your readers. Here’s a step-by-step process to help you decide what to put in the conclusion of a research paper: 2

  • Research Statement : Begin your research paper conclusion by restating your research statement. This reminds the reader of the main point you’ve been trying to prove throughout your paper. Keep it concise and clear.
  • Key Points : Summarize the main arguments and key points you’ve made in your paper. Avoid introducing new information in the research paper conclusion. Instead, provide a concise overview of what you’ve discussed in the body of your paper.
  • Address the Research Questions : If your research paper is based on specific research questions or hypotheses, briefly address whether you’ve answered them or achieved your research goals. Discuss the significance of your findings in this context.
  • Significance : Highlight the importance of your research and its relevance in the broader context. Explain why your findings matter and how they contribute to the existing knowledge in your field.
  • Implications : Explore the practical or theoretical implications of your research. How might your findings impact future research, policy, or real-world applications? Consider the “so what?” question.
  • Future Research : Offer suggestions for future research in your area. What questions or aspects remain unanswered or warrant further investigation? This shows that your work opens the door for future exploration.
  • Closing Thought : Conclude your research paper conclusion with a thought-provoking or memorable statement. This can leave a lasting impression on your readers and wrap up your paper effectively. Avoid introducing new information or arguments here.
  • Proofread and Revise : Carefully proofread your conclusion for grammar, spelling, and clarity. Ensure that your ideas flow smoothly and that your conclusion is coherent and well-structured.


Remember that a well-crafted research paper conclusion is a reflection of the strength of your research and your ability to communicate its significance effectively. It should leave a lasting impression on your readers and tie together all the threads of your paper. Now that you know how to start the conclusion of a research paper and what elements to include to make it impactful, let’s look at a research paper conclusion sample.


How to write a research paper conclusion with Paperpal?

A research paper conclusion is not just a summary of your study, but a synthesis of the key findings that ties the research together and places it in a broader context. A research paper conclusion should be concise, typically around one paragraph in length. However, some complex topics may require a longer conclusion to ensure the reader is left with a clear understanding of the study’s significance. Paperpal, an AI writing assistant trusted by over 800,000 academics globally, can help you write a well-structured conclusion for your research paper. 

  • Sign Up or Log In: Create a new Paperpal account or log in with your details.  
  • Navigate to Features: Once logged in, head over to the side navigation pane. Click on Templates and you’ll find a suite of generative AI features to help you write better, faster.  
  • Generate an outline: Under Templates, select ‘Outlines’. Choose ‘Research article’ as your document type.  
  • Select your section: Since you’re focusing on the conclusion, select this section when prompted.  
  • Choose your field of study: Identifying your field of study allows Paperpal to provide more targeted suggestions, ensuring the relevance of your conclusion to your specific area of research. 
  • Provide a brief description of your study: Enter details about your research topic and findings. This information helps Paperpal generate a tailored outline that aligns with your paper’s content. 
  • Generate the conclusion outline: After entering all necessary details, click on ‘generate’. Paperpal will then create a structured outline for your conclusion, to help you start writing and build upon the outline.  
  • Write your conclusion: Use the generated outline to build your conclusion. The outline serves as a guide, ensuring you cover all critical aspects of a strong conclusion, from summarizing key findings to highlighting the research’s implications. 
  • Refine and enhance: Paperpal’s ‘Make Academic’ feature can be particularly useful in the final stages. Select any paragraph of your conclusion and use this feature to elevate the academic tone, ensuring your writing is aligned to the academic journal standards. 

By following these steps, Paperpal not only simplifies the process of writing a research paper conclusion but also helps ensure it is impactful, concise, and aligned with academic standards.

The research paper conclusion is a crucial part of your paper as it provides the final opportunity to leave a strong impression on your readers. In the research paper conclusion, summarize the main points of your research paper by restating your research statement, highlighting the most important findings, addressing the research questions or objectives, explaining the broader context of the study, discussing the significance of your findings, providing recommendations if applicable, and emphasizing the takeaway message. The main purpose of the conclusion is to remind the reader of the main point or argument of your paper and to provide a clear and concise summary of the key findings and their implications. All these elements should feature on your list of what to put in the conclusion of a research paper to create a strong final statement for your work.

A strong conclusion is a critical component of a research paper, as it provides an opportunity to wrap up your arguments, reiterate your main points, and leave a lasting impression on your readers. Here are the key elements of a strong research paper conclusion:

1. Conciseness: A research paper conclusion should be concise and to the point. It should not introduce new information or ideas that were not discussed in the body of the paper.
2. Summarization: The research paper conclusion should be comprehensive enough to give the reader a clear understanding of the research’s main contributions.
3. Relevance: Ensure that the information included in the research paper conclusion is directly relevant to the research paper’s main topic and objectives; avoid unnecessary details.
4. Connection to the Introduction: A well-structured research paper conclusion often revisits the key points made in the introduction and shows how the research has addressed the initial questions or objectives.
5. Emphasis: Highlight the significance and implications of your research. Why is your study important? What are the broader implications or applications of your findings?
6. Call to Action: Include a call to action or a recommendation for future research or action based on your findings.

The length of a research paper conclusion can vary depending on several factors, including the overall length of the paper, the complexity of the research, and the specific journal requirements. While there is no strict rule for the length of a conclusion, it’s generally advisable to keep it relatively short. A typical research paper conclusion might be around 5-10% of the paper’s total length. For example, if your paper is 10 pages long, the conclusion might be roughly half a page to one page in length.
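The 5-10% guideline above is simple arithmetic; as a minimal sketch, it can be expressed as a small function (`conclusion_length_range` is a hypothetical helper for illustration, not part of any style guide or tool):

```python
def conclusion_length_range(total_pages: float) -> tuple:
    """Return (low, high) suggested conclusion length in pages.

    Illustrates the rough 5-10% rule of thumb; individual journals
    may specify their own requirements.
    """
    return (0.05 * total_pages, 0.10 * total_pages)

# A 10-page paper suggests a conclusion of roughly half a page to one page.
low, high = conclusion_length_range(10)
```

This is only a rule of thumb; complex studies may justifiably exceed the upper bound.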

In general, you do not need to include citations in the research paper conclusion. Citations are typically reserved for the body of the paper to support your arguments and provide evidence for your claims. However, there may be some exceptions to this rule:

1. If you are drawing a direct quote or paraphrasing a specific source in your research paper conclusion, you should include a citation to give proper credit to the original author.
2. If your conclusion refers to or discusses specific research, data, or sources that are crucial to the overall argument, citations can be included to reinforce your conclusion’s validity.

The conclusion of a research paper serves several important purposes:

1. Summarize the Key Points
2. Reinforce the Main Argument
3. Provide Closure
4. Offer Insights or Implications
5. Engage the Reader
6. Reflect on Limitations

Remember that the primary purpose of the research paper conclusion is to leave a lasting impression on the reader, reinforcing the key points and providing closure to your research. It’s often the last part of the paper that the reader will see, so it should be strong and well-crafted.




Research Paper Conclusion – Writing Guide and Examples

Definition:

A research paper conclusion is the final section of a research paper that summarizes the key findings, significance, and implications of the research. It is the writer’s opportunity to synthesize the information presented in the paper, draw conclusions, and make recommendations for future research or actions.

The conclusion should provide a clear and concise summary of the research paper, reiterating the research question or problem, the main results, and the significance of the findings. It should also discuss the limitations of the study and suggest areas for further research.

Parts of Research Paper Conclusion

The parts of a research paper conclusion typically include:

Restatement of the Thesis

The conclusion should begin by restating the thesis statement from the introduction in a different way. This helps to remind the reader of the main argument or purpose of the research.

Summary of Key Findings

The conclusion should summarize the main findings of the research, highlighting the most important results and conclusions. This section should be brief and to the point.

Implications and Significance

In this section, the researcher should explain the implications and significance of the research findings. This may include discussing the potential impact on the field or industry, highlighting new insights or knowledge gained, or pointing out areas for future research.

Limitations and Recommendations

It is important to acknowledge any limitations or weaknesses of the research and to make recommendations for how these could be addressed in future studies. This shows that the researcher is aware of the potential limitations of their work and is committed to improving the quality of research in their field.

Concluding Statement

The conclusion should end with a strong concluding statement that leaves a lasting impression on the reader. This could be a call to action, a recommendation for further research, or a final thought on the topic.

How to Write Research Paper Conclusion

Here are some steps you can follow to write an effective research paper conclusion:

  • Restate the research problem or question: Begin by restating the research problem or question that you aimed to answer in your research. This will remind the reader of the purpose of your study.
  • Summarize the main points: Summarize the key findings and results of your research. This can be done by highlighting the most important aspects of your research and the evidence that supports them.
  • Discuss the implications: Discuss the implications of your findings for the research area and any potential applications of your research. You should also mention any limitations of your research that may affect the interpretation of your findings.
  • Provide a conclusion : Provide a concise conclusion that summarizes the main points of your paper and emphasizes the significance of your research. This should be a strong and clear statement that leaves a lasting impression on the reader.
  • Offer suggestions for future research: Lastly, offer suggestions for future research that could build on your findings and contribute to further advancements in the field.

Remember that the conclusion should be brief and to the point, while still effectively summarizing the key findings and implications of your research.

Example of Research Paper Conclusion

Here’s an example of a research paper conclusion:

Conclusion :

In conclusion, our study aimed to investigate the relationship between social media use and mental health among college students. Our findings suggest that there is a significant association between social media use and increased levels of anxiety and depression among college students. This highlights the need for increased awareness and education about the potential negative effects of social media use on mental health, particularly among college students.

Despite the limitations of our study, such as the small sample size and self-reported data, our findings have important implications for future research and practice. Future studies should aim to replicate our findings in larger, more diverse samples, and investigate the potential mechanisms underlying the association between social media use and mental health. In addition, interventions should be developed to promote healthy social media use among college students, such as mindfulness-based approaches and social media detox programs.

Overall, our study contributes to the growing body of research on the impact of social media on mental health, and highlights the importance of addressing this issue in the context of higher education. By raising awareness and promoting healthy social media use among college students, we can help to reduce the negative impact of social media on mental health and improve the well-being of young adults.

Purpose of Research Paper Conclusion

The purpose of a research paper conclusion is to provide a summary and synthesis of the key findings, significance, and implications of the research presented in the paper. The conclusion serves as the final opportunity for the writer to convey their message and leave a lasting impression on the reader.

The conclusion should restate the research problem or question, summarize the main results of the research, and explain their significance. It should also acknowledge the limitations of the study and suggest areas for future research or action.

Overall, the purpose of the conclusion is to provide a sense of closure to the research paper and to emphasize the importance of the research and its potential impact. It should leave the reader with a clear understanding of the main findings and why they matter. The conclusion serves as the writer’s opportunity to showcase their contribution to the field and to inspire further research and action.

When to Write Research Paper Conclusion

The conclusion of a research paper should be written after the body of the paper has been completed. It should not be written until the writer has thoroughly analyzed and interpreted their findings and has written a complete and cohesive discussion of the research.

Before writing the conclusion, the writer should review their research paper and consider the key points that they want to convey to the reader. They should also review the research question, hypotheses, and methodology to ensure that they have addressed all of the necessary components of the research.

Once the writer has a clear understanding of the main findings and their significance, they can begin writing the conclusion. The conclusion should be written in a clear and concise manner, and should reiterate the main points of the research while also providing insights and recommendations for future research or action.

Characteristics of Research Paper Conclusion

The characteristics of a research paper conclusion include:

  • Clear and concise: The conclusion should be written in a clear and concise manner, summarizing the key findings and their significance.
  • Comprehensive: The conclusion should address all of the main points of the research paper, including the research question or problem, the methodology, the main results, and their implications.
  • Future-oriented : The conclusion should provide insights and recommendations for future research or action, based on the findings of the research.
  • Impressive : The conclusion should leave a lasting impression on the reader, emphasizing the importance of the research and its potential impact.
  • Objective : The conclusion should be based on the evidence presented in the research paper, and should avoid personal biases or opinions.
  • Unique : The conclusion should be unique to the research paper and should not simply repeat information from the introduction or body of the paper.

Advantages of Research Paper Conclusion

The advantages of a research paper conclusion include:

  • Summarizing the key findings : The conclusion provides a summary of the main findings of the research, making it easier for the reader to understand the key points of the study.
  • Emphasizing the significance of the research: The conclusion emphasizes the importance of the research and its potential impact, making it more likely that readers will take the research seriously and consider its implications.
  • Providing recommendations for future research or action : The conclusion suggests practical recommendations for future research or action, based on the findings of the study.
  • Providing closure to the research paper : The conclusion provides a sense of closure to the research paper, tying together the different sections of the paper and leaving a lasting impression on the reader.
  • Demonstrating the writer’s contribution to the field : The conclusion provides the writer with an opportunity to showcase their contribution to the field and to inspire further research and action.

Limitations of Research Paper Conclusion

While the conclusion of a research paper has many advantages, it also has some limitations that should be considered, including:

  • Inability to address all aspects of the research: Due to the limited space available in the conclusion, it may not be possible to address all aspects of the research in detail.
  • Subjectivity : While the conclusion should be objective, it may be influenced by the writer’s personal biases or opinions.
  • Lack of new information: The conclusion should not introduce new information that has not been discussed in the body of the research paper.
  • Lack of generalizability: The conclusions drawn from the research may not be applicable to other contexts or populations, limiting the generalizability of the study.
  • Misinterpretation by the reader: The reader may misinterpret the conclusions drawn from the research, leading to a misunderstanding of the findings.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


National Academies Press: OpenBook

Integrating Social and Behavioral Sciences Within the Weather Enterprise (2018)

Chapter 7: Summary of Key Findings and Recommendations

Of the many thoughts and suggestions raised in the preceding chapters, the Committee highlights the following findings and recommendations:

7.1 FINDINGS

  • While efforts to advance meteorological research and numerical weather prediction must continue, realizing the greatest return on investment from such efforts requires fully engaging the social and behavioral sciences (SBS)—both to expand the frontiers of knowledge within social and behavioral science disciplines, and to foster more extensive application of these sciences across the weather enterprise.
  • SBS research offers great potential not just for improving communications of hazardous weather warnings, but also for improving preparedness and mitigation for weather risks, for hazard monitoring, assessment, and forecasting processes, for emergency management and response, and for long-term recovery efforts.
  • The past few decades have seen a variety of innovative research projects and activities bring social and behavioral sciences within the weather enterprise; these efforts have made demonstrable contributions both to the social and behavioral sciences and to meteorology. However, the accumulation of knowledge has been hampered by the relatively small scale, intermittency, and inconsistency of investment in these sorts of efforts.
  • As current research activities demonstrate, exciting opportunities exist for advancing weather-related research that addresses important societal needs, both within the social and behavioral sciences, and across social and physical sciences. A variety of research advances are providing transformative new opportunities for expanding these contributions to the weather enterprise. For instance, new tools and models are making it possible to collect, analyze, interpret, and apply data and information both at smaller scales—for example, eye-tracking of the use of visual information—and at larger scales—for example, through social media analyses of the spread and influence of information across social networks, and the application of big data, data analytics, and cognitive computing to this context.

  • Existing federal agency data collection activities by the National Oceanic and Atmospheric Administration (NOAA), the Federal Emergency Management Agency (FEMA), and the Centers for Disease Control and Prevention (CDC) could, with modest additions and greater interagency coordination, significantly expand our understanding of the social context of hazardous weather.
  • Meteorologists and others in the weather enterprise could benefit from a more realistic understanding of the diverse disciplines, theories, and research methodologies used within the social and behavioral sciences; of the time and resources needed for robust SBS research; and of the inherent limitations in providing simple, universally applicable answers to complex social science questions.
  • Organizations across the weather enterprise—including several federal agencies, private-sector weather companies, academic institutions, and professional societies—have shared motivations for actively contributing to the integration of SBS within the weather enterprise, through a variety of practical roles that are discussed herein.
  • Numerous previous reports going back many years have highlighted needs and challenges similar to those noted here—yet many of the same challenges remain today. Recent history demonstrates that overcoming these challenges and making progress is not idea-limited but resource-limited.

7.2 RECOMMENDATIONS

Social and behavioral scientific research focused on weather applications is advancing during a time of accelerating social and technological change, both within the weather enterprise and across society at large. In this context, the Committee offers a broad-based framework for action, which leverages leadership to build awareness and demand for increased capacity, and identifies key knowledge gaps to target with that increased capacity. The Committee advocates that all sectors of the weather enterprise attend to these three main areas:

Invest in Leadership to Build Awareness

Effectively integrating social and behavioral sciences into organizations that have historically been rooted in the physical sciences requires leadership at the highest levels.

Across the weather enterprise, leaders themselves need to invest time in understanding and in spreading awareness to key constituencies and stakeholders about the many ways that social and behavioral sciences can help advance their organization’s goals related to preparedness and mitigation for weather hazards; hazard monitoring, assessment, and forecasting processes; emergency management and response; and long-term recovery. To aid these efforts, federal agencies, private companies, and leading academic programs within the weather enterprise need to augment their leadership teams to include executives and managers with strong and diverse social science backgrounds.

Recommendation: Leaders of the weather enterprise should take steps to accelerate this paradigm shift by underscoring the importance of social and behavioral science (SBS) contributions in fulfilling their organizational missions and achieving operational and research goals, bringing SBS expertise into their leadership teams, and establishing relevant policies and goals to effect necessary organizational changes.

Build Capacity Throughout the Weather Enterprise

Building SBS research capacity is an enterprise-wide concern and responsibility. However, NOAA will need to play a central role in driving forward this research in order to achieve the agency’s goals of improving the nation’s weather readiness. Building capacity to support and implement SBS research depends on more sustained funding and increased intellectual resources (i.e., professional staff trained and experienced in SBS research and its effective application). Several possible mechanisms for NOAA to advance SBS capacity are described in this report, such as innovative public–private partnerships for interdisciplinary weather research, the development of an SBS-focused NOAA Cooperative Institute, or creation of SBS-focused programs within existing Cooperative Institutes. New sustained efforts by other key federal agencies, in particular the National Science Foundation (NSF), the Department of Homeland Security (DHS), and the Federal Highway Administration (FHWA), will also be critical for expanding capacity to support research and operations at the SBS-weather interface.

Just as important as the mechanisms for supporting research are the research assessment and agenda-setting activities, community-building programs, and information sharing venues that help build a professional community working at the SBS-weather interface. Some existing platforms for sustained dialogue and strategic planning among public-sector, private-sector, and academic representatives could provide an effective base for SBS-related strategic planning as well. Interagency cooperation and collaboration could be pursued through mechanisms the federal government currently employs, such as interagency working groups or university-based research centers supported by multiple agencies.

Targeted training programs can help researchers from the social, physical, and engineering sciences better understand each other’s research methodologies, capacities, and limitations. Viable approaches include interdisciplinary or joint degree programs, training at multi- or transdisciplinary centers in team science, building on NOAA’s currently developing SBS training efforts, and utilizing existing training platforms such as FEMA’s Emergency Management Institute and the University Corporation for Atmospheric Research (UCAR) COMET program.

Recommendation: Federal agencies and private-sector weather companies should, together with leading social and behavioral science (SBS) scholars with diverse expertise, immediately begin a planning process to identify specific investments and activities that collectively advance research at the SBS-weather interface. This planning process should also address critical supporting activities for research assessment, agenda setting, community building, and information sharing, and the development of methods to collectively track funding support for this suite of research activities at the SBS-weather interface.

In addition, the National Oceanic and Atmospheric Administration should build more sustainable institutional capacity for research and operations at the SBS-weather interface and should advance cooperative planning to expand SBS research among other federal agencies that play critical roles in weather-related research operations. In particular, this should include leadership from:

  • The National Science Foundation for a strong standing program that supports interdisciplinary research at the SBS-weather interface;
  • The Federal Highway Administration for research related to weather impacts on driver choices and behaviors; and
  • The Federal Emergency Management Agency for research on the social and human factors that affect weather readiness, including decisions and actions by individuals, communities and emergency management to prepare for, prevent, respond to, mitigate, and recover from weather hazards.

All parties in the weather enterprise should continue to develop and implement training programs for current and next generation workforces in order to expand capacity for SBS-weather research and applications in the weather enterprise.

Focus on Critical Knowledge Gaps

Building scientific understanding of weather-related actions, behaviors, and decisions will require investing wisely in research that addresses specific knowledge gaps and will help accelerate the maturation of the field overall. The Committee identified a series of key near-term research questions that span the different stages of weather communication and decision support shown in Figure S.1. The research questions, which are detailed in this report, can be broadly grouped into the topical areas listed below.

Recommendation: The weather enterprise should support research efforts in the following areas:

  • Weather enterprise system-focused research. To address this gap requires system-level studies of weather information production, dissemination, and evaluation; studies of how forecasters, broadcast media, emergency and transportation managers, and private weather companies create information, interact, and communicate among themselves; studies of forecaster decision making, such as what observational platforms and numerical weather prediction guidance forecasters use and how they use them; studies of how to assess the economic value of weather services; and studies of team performance and organizational behavior within weather forecast offices and other parts of the weather enterprise.
  • Risk assessments and responses, and factors influencing these processes. This includes research on how to better reach and inform special-interest populations that have unique needs, such as vehicle drivers and others vulnerable to hazardous weather due to their location, resources, and capabilities. It also includes research on how people’s interest in, access to, and interpretation of weather information, as well as their decisions and actions in response, are affected by their specific social or physical context, prior experiences, cultural background, and personal values.
  • Message design, delivery, interpretation, and use. Persistent challenges include understanding how communicating forecast uncertainties in different formats influences understanding and action; how to balance consistency in messaging with needs for flexibility to suit different geographical, cultural, and use contexts, including warning specificity and impact-based warnings; and how new communication and information technologies—including the proliferation of different sources, content, and channels of weather information—interact with message design and are changing people’s weather information access, interpretations, preparedness, and response.


Our ability to observe and forecast severe weather events has improved markedly over the past few decades. Forecasts of snow and ice storms, hurricanes and storm surge, extreme heat, and other severe weather events are made with greater accuracy, geographic specificity, and lead time to allow people and communities to take appropriate protective measures. Yet hazardous weather continues to cause loss of life and result in other preventable social costs.

There is growing recognition that a host of social and behavioral factors affect how we prepare for, observe, predict, respond to, and are impacted by weather hazards. For example, an individual's response to a severe weather event may depend on their understanding of the forecast, prior experience with severe weather, concerns about their other family members or property, their capacity to take the recommended protective actions, and numerous other factors. Indeed, it is these factors that can determine whether or not a potential hazard becomes an actual disaster. Thus, it is essential to bring to bear expertise in the social and behavioral sciences (SBS)—including disciplines such as anthropology, communication, demography, economics, geography, political science, psychology, and sociology—to understand how people's knowledge, experiences, perceptions, and attitudes shape their responses to weather risks and to understand how human cognitive and social dynamics affect the forecast process itself.

Integrating Social and Behavioral Sciences Within the Weather Enterprise explores and provides guidance on the challenges of integrating social and behavioral sciences within the weather enterprise. It assesses current SBS activities, describes the potential value of improved integration of SBS and barriers that impede this integration, develops a research agenda, and identifies infrastructural and institutional arrangements for successfully pursuing SBS-weather research and the transfer of relevant findings to operational settings.

Am J Pharm Educ. 2010 Oct 11;74(8).

Presenting and Evaluating Qualitative Research

The purpose of this paper is to help authors to think about ways to present qualitative research papers in the American Journal of Pharmaceutical Education . It also discusses methods for reviewers to assess the rigour, quality, and usefulness of qualitative research. Examples of different ways to present data from interviews, observations, and focus groups are included. The paper concludes with guidance for publishing qualitative research and a checklist for authors and reviewers.

INTRODUCTION

Policy and practice decisions, including those in education, increasingly are informed by findings from qualitative as well as quantitative research. Qualitative research is useful to policymakers because it often describes the settings in which policies will be implemented. Qualitative research is also useful to both pharmacy practitioners and pharmacy academics who are involved in researching educational issues in both universities and practice and in developing teaching and learning.

Qualitative research involves the collection, analysis, and interpretation of data that are not easily reduced to numbers. These data relate to the social world and the concepts and behaviors of people within it. Qualitative research can be found in all social sciences and in the applied fields that derive from them, for example, research in health services, nursing, and pharmacy. 1 It asks "how does X vary in different circumstances?" rather than "how big is X?" or "how many Xs are there?" 2 Textbooks often subdivide research into qualitative and quantitative approaches, furthering the common assumption that there are fundamental differences between the 2 approaches. Pharmacy educators who have been trained in the natural and clinical sciences often tend to embrace quantitative research, perhaps because of its familiarity. However, a consensus is emerging that both qualitative and quantitative approaches are useful for answering research questions and understanding the world. Increasingly, mixed-methods research is being carried out, in which the researcher explicitly combines the quantitative and qualitative aspects of the study. 3 , 4

Like healthcare, education involves complex human interactions that can rarely be studied or explained in simple terms. Complex educational situations demand complex understanding; thus, the scope of educational research can be extended by the use of qualitative methods. Qualitative research can sometimes provide a better understanding of the nature of educational problems and thus add to insights into teaching and learning in a number of contexts. For example, at the University of Nottingham, we conducted in-depth interviews with pharmacists to determine their perceptions of continuing professional development and who had influenced their learning. We also have used a case study approach using observation of practice and in-depth interviews to explore physiotherapists' views of influences on their learning in practice. We have conducted in-depth interviews with a variety of stakeholders in Malawi, Africa, to explore the issues surrounding pharmacy academic capacity building. A colleague has interviewed and conducted focus groups with students to explore cultural issues as part of a joint Nottingham-Malaysia pharmacy degree program. Another colleague has interviewed pharmacists and patients regarding their expectations before and after clinic appointments and then observed pharmacist-patient communication in clinics and assessed it using the Calgary Cambridge model in order to develop recommendations for communication skills training. 5 We have also performed documentary analysis on curriculum data to compare pharmacist and nurse supplementary prescribing courses in the United Kingdom.

It is important to choose the most appropriate methods for what is being investigated. Qualitative research is not appropriate for answering every research question, and researchers need to think carefully about their objectives. Do they wish to study a particular phenomenon in depth (eg, students' perceptions of studying in a different culture)? Or are they more interested in making standardized comparisons and accounting for variance (eg, examining differences in examination grades after changing the way the content of a module is taught)? Clearly a quantitative approach would be more appropriate in the latter example. As with any research project, a clear research objective has to be identified to know which methods should be applied.

Types of qualitative data include:

  • Audio recordings and transcripts from in-depth or semi-structured interviews
  • Structured interview questionnaires containing a substantial number of responses to open-comment items
  • Audio recordings and transcripts from focus group sessions
  • Field notes (notes taken by the researcher while in the field [setting] being studied)
  • Video recordings (eg, lecture delivery, class assignments, laboratory performance)
  • Case study notes
  • Documents (reports, meeting minutes, e-mails)
  • Diaries, video diaries
  • Observation notes
  • Press clippings
  • Photographs

RIGOUR IN QUALITATIVE RESEARCH

Qualitative research is often criticized as biased, small scale, anecdotal, and/or lacking rigor; however, when it is carried out properly it is unbiased, in-depth, valid, reliable, credible, and rigorous. In qualitative research, there needs to be a way of assessing the “extent to which claims are supported by convincing evidence.” 1 Although the terms reliability and validity traditionally have been associated with quantitative research, increasingly they are seen as important concepts in qualitative research as well. Examining the data for reliability and validity assesses both the objectivity and credibility of the research. Validity relates to the honesty and genuineness of the research data, while reliability relates to the reproducibility and stability of the data.

The validity of research findings refers to the extent to which the findings are an accurate representation of the phenomena they are intended to represent. The reliability of a study refers to the reproducibility of the findings. Validity can be substantiated by a number of techniques including triangulation, use of contradictory evidence, respondent validation, and constant comparison. Triangulation is using 2 or more methods to study the same phenomenon. Contradictory evidence, often known as deviant cases, must be sought out, examined, and accounted for in the analysis to ensure that researcher bias does not interfere with or alter the researchers' perception of the data and any insights offered. Respondent validation, which is allowing participants to read through the data and analyses and provide feedback on the researchers' interpretations of their responses, provides researchers with a method of checking for inconsistencies, challenges the researchers' assumptions, and provides them with an opportunity to re-analyze their data. The use of constant comparison means that one piece of data (for example, an interview) is compared with previous data and not considered on its own, enabling researchers to treat the data as a whole rather than fragmenting it. Constant comparison also enables the researcher to identify emerging/unanticipated themes within the research project.

STRENGTHS AND LIMITATIONS OF QUALITATIVE RESEARCH

Qualitative researchers have been criticized for overusing interviews and focus groups at the expense of other methods such as ethnography, observation, documentary analysis, case studies, and conversational analysis. Qualitative research has numerous strengths when properly conducted.

Strengths of Qualitative Research

  • Issues can be examined in detail and in depth.
  • Interviews are not restricted to specific questions and can be guided/redirected by the researcher in real time.
  • The research framework and direction can be quickly revised as new information emerges.
  • The data obtained are based on human experience, which can make them more powerful and compelling than quantitative data.
  • Subtleties and complexities about the research subjects and/or topic are discovered that are often missed by more positivistic enquiries.
  • Data usually are collected from a few cases or individuals, so findings cannot be generalized to a larger population; they can, however, be transferable to another setting.

Limitations of Qualitative Research

  • Research quality is heavily dependent on the individual skills of the researcher and more easily influenced by the researcher's personal biases and idiosyncrasies.
  • Rigor is more difficult to maintain, assess, and demonstrate.
  • The volume of data makes analysis and interpretation time consuming.
  • It is sometimes not as well understood and accepted as quantitative research within the scientific community.
  • The researcher's presence during data gathering, which is often unavoidable in qualitative research, can affect the subjects' responses.
  • Issues of anonymity and confidentiality can present problems when presenting findings.
  • Findings can be more difficult and time consuming to characterize in a visual way.

PRESENTATION OF QUALITATIVE RESEARCH FINDINGS

The following extracts are examples of how qualitative data might be presented:

Data From an Interview.

The following is an example of how to present and discuss a quote from an interview.

The researcher should select quotes that are poignant and/or most representative of the research findings. Including large portions of an interview in a research paper is not necessary and often tedious for the reader. The setting and speakers should be established in the text at the end of the quote.

The student describes how he had used deep learning in a dispensing module. He was able to draw on learning from a previous module, “I found that while using the e learning programme I was able to apply the knowledge and skills that I had gained in last year's diseases and goals of treatment module.” (interviewee 22, male)

This is an excerpt from an article on curriculum reform that used interviews 5 :

The first question was, “Without the accreditation mandate, how much of this curriculum reform would have been attempted?” According to respondents, accreditation played a significant role in prompting the broad-based curricular change, and their comments revealed a nuanced view. Most indicated that the change would likely have occurred even without the mandate from the accreditation process: “It reflects where the profession wants to be … training a professional who wants to take on more responsibility.” However, they also commented that “if it were not mandated, it could have been a very difficult road.” Or it “would have happened, but much later.” The change would more likely have been incremental, “evolutionary,” or far more limited in its scope. “Accreditation tipped the balance” was the way one person phrased it. “Nobody got serious until the accrediting body said it would no longer accredit programs that did not change.”

Data From Observations

The following example presents data taken from observation of pharmacist-patient consultations using the Calgary Cambridge guide. 6 , 7 The data are first presented and a discussion follows:

Pharmacist: We will soon be starting a stop smoking clinic.
Patient: Is the interview over now?
Pharmacist: No this is part of it. (Laughs) You can't tell me to bog off (sic) yet. (pause) We will be starting a stop smoking service here,
Patient: Yes.
Pharmacist: with one-to-one and we will be able to help you or try to help you. If you want it.

In this example, the pharmacist has picked up from the patient's reaction to the stop smoking clinic that she is not receptive to advice about giving up smoking at this time; in fact she would rather end the consultation. The pharmacist draws on his prior relationship with the patient and makes use of a joke to lighten the tone. He feels his message is important enough to persevere but he presents the information in a succinct and non-pressurised way. His final comment of “If you want it” is important as this makes it clear that he is not putting any pressure on the patient to take up this offer. This extract shows that some patient cues were picked up, and appropriately dealt with, but this was not the case in all examples.

Data From Focus Groups

This excerpt from a study involving 11 focus groups illustrates how findings are presented using representative quotes from focus group participants. 8

Those pharmacists who were initially familiar with CPD endorsed the model for their peers, and suggested it had made a meaningful difference in the way they viewed their own practice. In virtually all focus groups sessions, pharmacists familiar with and supportive of the CPD paradigm had worked in collaborative practice environments such as hospital pharmacy practice. For these pharmacists, the major advantage of CPD was the linking of workplace learning with continuous education. One pharmacist stated, “It's amazing how much I have to learn every day, when I work as a pharmacist. With [the learning portfolio] it helps to show how much learning we all do, every day. It's kind of satisfying to look it over and see how much you accomplish.” Within many of the learning portfolio-sharing sessions, debates emerged regarding the true value of traditional continuing education and its outcome in changing an individual's practice. While participants appreciated the opportunity for social and professional networking inherent in some forms of traditional CE, most eventually conceded that the academic value of most CE programming was limited by the lack of a systematic process for following-up and implementing new learning in the workplace. “Well it's nice to go to these [continuing education] events, but really, I don't know how useful they are. You go, you sit, you listen, but then, well I at least forget.”

The following is an extract from a focus group (conducted by the author) with first-year pharmacy students about community placements. It illustrates how focus groups provide a chance for participants to discuss issues on which they might disagree.

Interviewer: So you are saying that you would prefer health related placements?
Student 1: Not exactly so long as I could be developing my communication skill.
Student 2: Yes but I still think the more health related the placement is the more I'll gain from it.
Student 3: I disagree because other people related skills are useful and you may learn those from taking part in a community project like building a garden.
Interviewer: So would you prefer a mixture of health and non health related community placements?

GUIDANCE FOR PUBLISHING QUALITATIVE RESEARCH

Qualitative research is becoming increasingly accepted and published in pharmacy and medical journals. Some journals and publishers have guidelines for presenting qualitative research, for example, the British Medical Journal 9 and BioMed Central. 10 Medical Education published a useful series of articles on qualitative research. 11 Some of the important issues that should be considered by authors, reviewers, and editors when publishing qualitative research are discussed below.

Introduction.

A good introduction provides a brief overview of the manuscript, including the research question and a statement justifying the research question and the reasons for using qualitative research methods. This section also should provide background information, including relevant literature from pharmacy, medicine, and other health professions, as well as literature from the field of education that addresses similar issues. Any specific educational or research terminology used in the manuscript should be defined in the introduction.

Methods.

The methods section should clearly state and justify why the particular method, for example, face-to-face semistructured interviews, was chosen. The method should be outlined and illustrated with examples such as the interview questions, focusing exercises, observation criteria, etc. The criteria for selecting the study participants should then be explained and justified. The way in which the participants were recruited and by whom also must be stated. A brief description should be included of those who were invited to participate but chose not to. It is important to consider “fair dealing,” ie, whether the research design explicitly incorporates a wide range of different perspectives so that the viewpoint of 1 group is never presented as if it represents the sole truth about any situation. The process by which ethical and or research/institutional governance approval was obtained should be described and cited.

The study sample and the research setting should be described. Sampling differs between qualitative and quantitative studies. In quantitative survey studies, it is important to select probability samples so that statistics can be used to provide generalizations to the population from which the sample was drawn. Qualitative research necessitates having a small sample because of the detailed and intensive work required for the study, so sample sizes are not calculated using mathematical rules and probability statistics are not applied. Instead, qualitative researchers should describe their sample in terms of characteristics and relevance to the wider population. Purposive sampling is common in qualitative research: particular individuals are chosen because they have characteristics relevant to the study and are thought likely to be the most informative. Purposive sampling also may be used to produce maximum variation within a sample, with participants chosen based, for example, on year of study, gender, or place of work. Representative samples also may be used, for example, 20 students from each of 6 schools of pharmacy. Convenience samples involve the researcher choosing those who are either most accessible or most willing to take part. This may be fine for exploratory studies; however, this form of sampling may be biased and unrepresentative of the population in question. Theoretical sampling uses insights gained from previous research to inform sample selection for a new study. The method for gaining informed consent from the participants should be described, as well as how anonymity and confidentiality of subjects were guaranteed. The method of recording, eg, audio or video recording, should be noted, along with procedures used for transcribing the data.
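The maximum-variation strategy described above can be made concrete with a short sketch. This is an illustrative example only; the participant records and field names are invented, not drawn from the paper:

```python
# Hypothetical participant records: (id, year_of_study, gender, place_of_work)
participants = [
    ("P01", 1, "F", "hospital"),
    ("P02", 1, "M", "community"),
    ("P03", 2, "F", "community"),
    ("P04", 1, "F", "hospital"),   # same profile as P01, so skipped
    ("P05", 2, "M", "hospital"),
]

def maximum_variation_sample(records):
    """Pick one participant per unique combination of characteristics,
    so the sample spans as many distinct profiles as possible."""
    seen = set()
    sample = []
    for pid, *profile in records:
        key = tuple(profile)
        if key not in seen:
            seen.add(key)
            sample.append(pid)
    return sample

print(maximum_variation_sample(participants))  # ['P01', 'P02', 'P03', 'P05']
```

Selecting one participant per unique profile ensures the sample spans as many combinations of characteristics as the pool allows; a real study would also weigh practical factors such as willingness to participate.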

Data Analysis.

A description of how the data were analyzed also should be included. Was computer-aided qualitative data analysis software such as NVivo (QSR International, Cambridge, MA) used? Arrival at “data saturation” or the end of data collection should then be described and justified. A good rule when considering how much information to include is that readers should have been given enough information to be able to carry out similar research themselves.

One of the strengths of qualitative research is the recognition that data must always be understood in relation to the context of their production. 1 The analytical approach taken should be described in detail and theoretically justified in light of the research question. If the analysis was repeated by more than 1 researcher to ensure reliability or trustworthiness, this should be stated and methods of resolving any disagreements clearly described. Some researchers ask participants to check the data. If this was done, it should be fully discussed in the paper.

An adequate account of how the findings were produced should be included. A description of how the themes and concepts were derived from the data also should be included. Was an inductive or deductive process used? The analysis should not be limited to those issues the researcher anticipated to be important (anticipated themes) but should also consider issues that participants raised (ie, emergent themes). Qualitative researchers must be open regarding the data analysis and provide evidence of their thinking, for example, were alternative explanations for the data considered and dismissed, and if so, why were they dismissed? It also is important to present outlying or negative/deviant cases that did not fit with the central interpretation.

The interpretation should usually be grounded in interviewees or respondents' contributions and may be semi-quantified, if this is possible or appropriate, for example, “Half of the respondents said …” “The majority said …” “Three said…” Readers should be presented with data that enable them to “see what the researcher is talking about.” 1 Sufficient data should be presented to allow the reader to clearly see the relationship between the data and the interpretation of the data. Qualitative data conventionally are presented by using illustrative quotes. Quotes are “raw data” and should be compiled and analyzed, not just listed. There should be an explanation of how the quotes were chosen and how they are labeled. For example, have pseudonyms been given to each respondent or are the respondents identified using codes, and if so, how? It is important for the reader to be able to see that a range of participants have contributed to the data and that not all the quotes are drawn from 1 or 2 individuals. There is a tendency for authors to overuse quotes and for papers to be dominated by a series of long quotes with little analysis or discussion. This should be avoided.
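A semi-quantified summary of the kind described above (“Half of the respondents said …”) can be generated mechanically from coded transcripts. A minimal sketch, assuming invented respondent codes and theme labels:

```python
from collections import Counter

# Hypothetical coding: respondent ID -> themes identified in their interview
coded = {
    "R01": {"workload", "CPD value"},
    "R02": {"workload"},
    "R03": {"CPD value", "peer support"},
    "R04": {"workload", "peer support"},
}

# Count how many respondents mentioned each theme
counts = Counter(theme for themes in coded.values() for theme in themes)
n = len(coded)

for theme, k in counts.most_common():
    if k == n:
        phrase = "All respondents"
    elif k * 2 == n:
        phrase = "Half of the respondents"
    elif k * 2 > n:
        phrase = "The majority of respondents"
    else:
        phrase = f"{k} of {n} respondents"
    print(f"{phrase} mentioned '{theme}'.")
```

Such counts support, but do not replace, the interpretive work: the quotes chosen to illustrate each theme still carry the analysis.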

Participants do not always state the truth and may say what they think the interviewer wishes to hear. A good qualitative researcher should not only examine what people say but also consider how they structured their responses and how they talked about the subject being discussed, for example, the person's emotions, tone, nonverbal communication, etc. If the research was triangulated with other qualitative or quantitative data, this should be discussed.

Discussion.

The findings should be presented in the context of any similar previous research and or theories. A discussion of the existing literature and how this present research contributes to the area should be included. A consideration must also be made about how transferable the research would be to other settings. Any particular strengths and limitations of the research also should be discussed. It is common practice to include some discussion within the results section of qualitative research and follow with a concluding discussion.

The author also should reflect on their own influence on the data, including a consideration of how the researcher(s) may have introduced bias to the results. The researcher should critically examine their own influence on the design and development of the research, as well as on data collection and interpretation of the data, eg, were they an experienced teacher who researched teaching methods? If so, they should discuss how this might have influenced their interpretation of the results.

Conclusion.

The conclusion should summarize the main findings from the study and emphasize what the study adds to knowledge in the area being studied. Mays and Pope suggest the researcher ask the following 3 questions to determine whether the conclusions of a qualitative study are valid 12 : How well does this analysis explain why people behave in the way they do? How comprehensible would this explanation be to a thoughtful participant in the setting? How well does the explanation cohere with what we already know?

CHECKLIST FOR QUALITATIVE PAPERS

This paper establishes criteria for judging the quality of qualitative research. It provides guidance for authors and reviewers to prepare and review qualitative research papers for the American Journal of Pharmaceutical Education . A checklist is provided in Appendix 1 to assist both authors and reviewers of qualitative data.

ACKNOWLEDGEMENTS

Thank you to the 3 reviewers whose ideas helped me to shape this paper.

Appendix 1. Checklist for authors and reviewers of qualitative research.

Introduction

  • □ Research question is clearly stated.
  • □ Research question is justified and related to the existing knowledge base (empirical research, theory, policy).
  • □ Any specific research or educational terminology used later in manuscript is defined.
  • □ The process by which ethical and or research/institutional governance approval was obtained is described and cited.
  • □ Reason for choosing particular research method is stated.
  • □ Criteria for selecting study participants are explained and justified.
  • □ Recruitment methods are explicitly stated.
  • □ Details of who chose not to participate and why are given.
  • □ Study sample and research setting used are described.
  • □ Method for gaining informed consent from the participants is described.
  • □ Maintenance/Preservation of subject anonymity and confidentiality is described.
  • □ Method of recording data (eg, audio or video recording) and procedures for transcribing data are described.
  • □ Methods are outlined and examples given (eg, interview guide).
  • □ Decision to stop data collection is described and justified.
  • □ Data analysis and verification are described, including by whom they were performed.
  • □ Methods for identifying/extrapolating themes and concepts from the data are discussed.
  • □ Sufficient data are presented to allow a reader to assess whether or not the interpretation is supported by the data.
  • □ Outlying or negative/deviant cases that do not fit with the central interpretation are presented.
  • □ Transferability of research findings to other settings is discussed.
  • □ Findings are presented in the context of any similar previous research and social theories.
  • □ Discussion often is incorporated into the results in qualitative papers.
  • □ A discussion of the existing literature and how this present research contributes to the area is included.
  • □ Any particular strengths and limitations of the research are discussed.
  • □ Reflection of the influence of the researcher(s) on the data, including a consideration of how the researcher(s) may have introduced bias to the results is included.

Conclusions

  • □ The conclusion states the main findings of the study and emphasizes what the study adds to knowledge in the subject area.

How to Write an “Implications of Research” Section


24th October 2022

When writing research papers, theses, journal articles, or dissertations, you cannot ignore the importance of research. You're not only the writer of your paper but also the researcher! It's not just about researching your topic, filling your paper with abundant citations, and topping it off with a reference list. You need to dig deep into your research and provide related literature on your topic. You must also discuss the implications of your research.

Interested in learning more about implications of research? Read on! This post will define these implications, explain why they're essential, and, most importantly, show how to write them. If you're a visual learner, you might enjoy this video.

What Are Implications of Research?

Implications are potential questions from your research that justify further exploration. They state how your research findings could affect policies, theories, and/or practices.

Implications can either be practical or theoretical. The former is the direct impact of your findings on related practices, whereas the latter is the impact on the theories you have chosen in your study.

Example of a practical implication: If you’re researching a teaching method, the implication would be how teachers can use that method based on your findings.

Example of a theoretical implication: You added a new variable to Theory A so that it could cover a broader perspective.

Finally, implications aren’t the same as recommendations, and it’s important to know the difference between them .

Questions you should consider when developing the implications section:

●  What is the significance of your findings?

●  How do the findings of your study fit with or contradict existing research on this topic?

●  Do your results support or challenge existing theories? If they support them, what new information do they contribute? If they challenge them, why do you think that is?

Why Are Implications Important?

You need implications for the following reasons:

● To reflect on what you set out to accomplish in the first place

● To see if there’s a change to the initial perspective, now that you’ve collected the data

● To inform your audience, who might be curious about the impact of your research

How to Write an Implications Section

Usually, you write your research implications in the discussion section of your paper. This is the section before the conclusion, where you discuss all the hard work you did. Additionally, you'll write the implications section before making recommendations for future research.

Implications should begin with what you discovered in your study and how it differs from what previous studies found; you can then discuss the implications of your findings.

Your implications need to be specific, meaning you should show the exact contributions of your research and why they’re essential. They should also begin with a specific sentence structure.

Examples of starting implication sentences:

●  These results build on existing evidence of…

●  These findings suggest that…

●  These results should be considered when…

●  While previous research has focused on x , these results show that y …


You should write your implications after you’ve stated the results of your research. In other words, summarize your findings and put them into context.

Example 1

The result: One study found that young learners enjoy short activities when learning a foreign language.

The implications: This result suggests that foreign language teachers should use short activities when teaching young learners, as they positively affect learning.

Example 2

The result: One study found that people who listen to calming music just before going to bed sleep better than those who watch TV.

The implications: These findings suggest that listening to calming music aids sleep quality, whereas watching TV does not.

To summarize, remember these key pointers:

●  Implications are the impact of your findings on the field of study.

●  They serve as a reflection of the research you’ve conducted.              

●  They show the specific contributions of your findings and why the audience should care.

●  They can be practical or theoretical.

●  They aren’t the same as recommendations.

●  You write them in the discussion section of the paper.

●  State the results first, and then state their implications.

  • Open access
  • Published: 22 March 2024

Who are vaccine champions and what implementation strategies do they use to improve adolescent HPV vaccination? Findings from a national survey of primary care professionals

  • Micaela K. Brewington (ORCID: 0000-0002-3404-3987)
  • Tara L. Queen
  • Jennifer Heisler-MacKinnon
  • William A. Calo
  • Sandra Weaver
  • Chris Barry
  • Wei Yi Kong
  • Kathryn L. Kennedy
  • Christopher M. Shea
  • Melissa B. Gilkey

Implementation Science Communications, volume 5, article number 28 (2024)

Implementation science researchers often cite clinical champions as critical to overcoming organizational resistance and other barriers to the implementation of evidence-based health services, yet relatively little is known about who champions are or how they effect change. To inform future efforts to identify and engage champions to support HPV vaccination, we sought to describe the key characteristics and strategies of vaccine champions working in adolescent primary care.

In 2022, we conducted a national survey with a web-based panel of 2527 primary care professionals (PCPs) with a role in adolescent HPV vaccination (57% response rate). Our sample consisted of pediatricians (26%), family medicine physicians (22%), advanced practice providers (24%), and nursing staff (28%). Our survey assessed PCPs’ experience with vaccine champions, defined as health care professionals “known for helping their colleagues improve vaccination rates.”

Overall, 85% of PCPs reported currently working with one or more vaccine champions. Among these 2144 PCPs, most identified the champion with whom they worked most closely as being a physician (40%) or nurse (40%). Almost all identified champions worked to improve vaccination rates for vaccines in general (45%) or HPV vaccine specifically (49%). PCPs commonly reported that champion implementation strategies included sharing information (79%), encouragement (62%), and vaccination data (59%) with colleagues, but less than half reported that champions led quality improvement projects (39%). Most PCPs perceived their closest champion as being moderately to extremely effective at improving vaccination rates (91%). PCPs who did versus did not work with champions more often recommended HPV vaccination at the earliest opportunity of ages 9–10 rather than later ages (44% vs. 33%, p < 0.001).

Conclusions

Findings of our national study suggest that vaccine champions are common in adolescent primary care, but only a minority lead quality improvement projects. Interventionists seeking to identify champions to improve HPV vaccination rates can expect to find them among both physicians and nurses, but should be prepared to offer support to more fully engage them in implementing interventions.

Contributions to the literature

We surveyed 2527 US primary care professionals (PCPs) to describe key characteristics and strategies of vaccine champions in adolescent primary care.

Most PCPs (85%) worked with vaccine champions, with similar proportions identifying a physician or nurse as their closest champion.

PCPs commonly reported that champions provided information (79%), encouragement (62%), and vaccination data (59%) to colleagues, but less than half reported that champions led quality improvement projects (39%).

Working with a champion correlated with more positive HPV vaccine recommendation practices and clinic performance perceptions.

Findings suggest vaccine champions are common, but may need more support to be quality improvement leaders.

Introduction

Implementation science research emphasizes the importance of clinical champions in scaling up the implementation of evidence-based health services. According to the Expert Recommendations for Implementing Change (ERIC), champions are “individuals who dedicate themselves to supporting, marketing, and driving through an implementation, overcoming indifference or resistance that the intervention may provoke in an organization” [ 1 ]. Champions are characterized by their persistence, enthusiasm, and conviction in pushing implementations forward, even when it means putting their reputations on the line [ 2 ]. They differ from related concepts, such as “opinion leaders,” who more passively exert an influence on the flow of information within networks [ 3 ]. In this way, champions constitute an implementation strategy in and of themselves [ 1 ], while also having robust potential to effectively deliver training and other support to improve the provision of evidence-based services within clinics and larger health care systems. Perhaps not surprisingly, interventions in clinical settings commonly feature a champion component [ 2 , 4 , 5 , 6 ].

Human papillomavirus (HPV) vaccination is a useful case study for investigating the role of champions. Widespread HPV vaccination could prevent over 90% of the nearly 36,500 HPV cancers diagnosed in the United States each year [ 7 ]. Unfortunately, despite national recommendations for adolescents to receive the two-dose HPV vaccine series between ages 9 and 12, only 50% of 13-year-olds were fully vaccinated in 2021, with consistently lower coverage in rural areas [ 8 , 9 ]. Importantly, younger age at initiation of the HPV vaccine series is associated with higher rates of on-time series completion [ 10 ]. The reasons for low uptake are complex, but one key factor is primary care professionals’ (PCPs’) infrequent and ineffective recommendation of HPV vaccination [ 11 , 12 , 13 ]. Evidence-based implementation strategies that combine provider communication training, assessment and feedback, and other techniques are emerging to improve HPV vaccination within health care settings [ 14 , 15 , 16 ]. Given their role as change agents, training champions to use these implementation strategies could help address challenges with scaling routine HPV vaccination across health care systems.

Although the implementation research literature consistently emphasizes the critical importance of champions, relatively little work has provided insight into how to identify and engage champions to best meet implementation needs. For example, no prior studies have examined the extent to which champion relationships are characterized by homophily in clinical role such that physicians look to physicians as champions, while nurses look to other nurses. Further, prior work has not specifically explored champions in the context of HPV vaccination, though the presence of a champion has been positively associated with HPV vaccination performance in primary care [ 17 ]. Thus, we conducted a national survey of adolescent PCPs to evaluate how common vaccine champions are, their roles and attributes, and implementation strategies they use to promote adolescent vaccination, including HPV vaccination. Our findings may guide future efforts to identify, engage, and train champions to deliver evidence-based interventions to support HPV vaccination within clinical settings.

Participants and procedures

We conducted a web-based survey in May–July 2022 to assess PCPs’ perceptions of and experiences working with vaccine champions in adolescent primary care. Eligible PCPs were physicians, advanced practice providers (i.e., physician assistants and advanced practice nurses), and nursing staff (registered nurses, licensed practical/vocational nurses, medical assistants, and certified nursing assistants). Additionally, eligible respondents (1) were certified to practice in the US; (2) worked in pediatrics or family medicine and general practice (hereafter “family medicine”); and (3) had one or more roles in HPV vaccination for children ages 9–12. Roles in HPV vaccination were specified as assessing children’s vaccination status, notifying parents when children are due for the vaccine, recommending the vaccine, addressing parent questions and concerns, or administering the vaccine.

We contracted with a survey company, WebMD Market Research, to recruit PCPs through the Medscape Network, which provides web-based information, continuing education, and research participation opportunities to the medical community. About 60% of US physicians are members of the network, and Medscape verifies physicians’ and advanced practice providers’ licenses upon registration. In the pre-recruitment phase, the survey company constructed a survey panel by emailing members with the appropriate medical training (i.e., physicians, advanced practice providers, and nursing staff) to assess their interest in survey participation and to filter out inactive members. Members who responded affirmatively were eligible to join the study.

In the recruitment phase, the survey company emailed 6278 panel members a link to the web-based survey, followed by up to four reminders for members who did not respond. We used quotas to ensure balance in our sample by medical training. More specifically, we aimed to include roughly equal proportions of pediatricians, family physicians, advanced practice providers, and nursing staff. Because of rural-urban disparities in HPV vaccination, we oversampled PCPs practicing in clinics located in rural counties, as defined by USDA Rural-Urban Continuum Codes (RUCC) 4-9 [ 9 , 18 ].

Respondents who clicked the survey link began by completing a 4-item screener that ensured they met eligibility criteria (Supplemental Table 1 ). A total of 2527 PCPs were eligible, provided informed consent, and completed the survey, yielding a response rate of 57% (Response rate 3, [ 19 ]). Respondents in our sample compared favorably to those in the general population on key demographic characteristics (Supplemental Table 2 ). The median completion time for our survey was 19 min, and respondents received an incentive of up to $45 depending on local market rates for survey research participation. The University of North Carolina Institutional Review Board approved the study protocol. We used the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) cross-sectional study guidelines to develop this manuscript [ 20 ].

Our survey began by defining “vaccine champion” with the following statement:

Some health care professionals are known for helping their colleagues improve vaccination rates. They are passionate about sharing vaccine-related information, data, tools, and encouragement with others in their clinic. We will call them vaccine champions.

Respondents next reported how many champions they currently work with, using nine response options that ranged from “0 champions” to “8 or more champions.” This item instructed respondents to “Consider anyone who goes above and beyond to help you or others in your clinic improve vaccination rates. You can count physicians, nursing staff, administrators, quality improvement staff, and others” (Supplemental Table 1 ). We re-categorized responses as working with any vaccine champion (≥ 1 champion) versus none (0 champions) in order to compare these two groups on their characteristics to understand which PCPs may lack this resource.
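The re-categorization described above can be sketched in a few lines. The snippet below is only an illustration of the recoding rule (the variable names and sample responses are assumptions, not the study’s actual code or data):

```python
# Number of champions each PCP reported working with; "8 or more
# champions" is captured here as 8 (illustrative responses only).
responses = [0, 3, 1, 0, 8, 2]

# Collapse the nine-point scale into a binary indicator, as described
# in the survey methods: any champion (>= 1) versus none (0).
any_champion = ["≥ 1 champion" if n >= 1 else "0 champions" for n in responses]

print(any_champion)
```

The binary variable is what the later analyses compare groups on, since the goal was to understand which PCPs may lack this resource rather than how many champions they have.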

For PCPs who worked with any champions, the survey used seven closed-ended items to characterize the champion with whom the respondent worked most closely. One of these items assessed how closely the respondent worked with the champion, using a 5-point response scale to rate the tie as “extremely” to “not at all” close. Another three items used prespecified lists to assess the champion’s medical training, clinical role, and how their role as a vaccine champion is recognized in the clinic. The remaining three items used prespecified lists to assess strategies the champion uses to improve vaccination rates, the kind of vaccination rates they work to improve, and the qualities that best describe them.

We used three survey items to assess champion effectiveness. One of these items assessed perceived effectiveness; respondents used a 5-point response scale to rate their closest champion on effectiveness at improving vaccination rates (“Not at all effective” [1] to “extremely effective” [5]). One closed-ended item assessed respondents’ own HPV vaccine-related behavior in terms of the age at which they begin routinely recommending HPV vaccination for their patients; we recategorized responses as 9–10 years, 11–12 years, ≥ 13 years, or never. We used a skip pattern to offer this item only to respondents who indicated having a role in HPV vaccine recommendations. One closed-ended item assessed respondents’ perception of their own clinic’s HPV vaccination rates in terms of whether those rates were at or above their state’s average versus below it. For this item, the survey displayed their state’s HPV vaccination rates for their reference.

Our survey assessed the characteristics of respondents and the clinics in which they worked. Demographic and professional characteristics included PCPs’ gender, race/ethnicity, number of years in practice, and number of patients, ages 9–12, that they see in a typical week. Clinical characteristics included practice type and whether the clinic was part of a healthcare system or network. Two items collected the county and state of the PCP’s primary clinic, which we used to categorize clinics as rural (RUCC 4–9) or nonrural (RUCC 1–3) [ 18 ].

Prior to fielding our survey, we cognitively tested subsets of survey items with 16 PCPs recruited for that purpose, as well as with seven additional PCPs who made up our study’s clinical advisory board. These PCPs included physicians, advanced practice providers, nurses, and medical assistants who worked in primary care and were not survey participants. Cognitive interviews used “think aloud” activities to assess whether participants interpreted concepts such as “vaccine champion” as intended by the research team. Their feedback helped the study team to define champions in a way that better distinguished the role of “helping colleagues improve” from more general vaccine promotion with patients and their families. PCPs also provided feedback on the comprehensibility of survey items, including the appropriateness of item wording and response options [ 21 ].

Statistical analysis

We used bivariate logistic regression to identify correlates of working with any vaccine champions, modeling the outcome as yes (“≥1 champion”) versus no (“0 champions”). We then entered statistically significant correlates into a multivariable model. We used chi-square tests to assess the association between working with any vaccine champions and each of two effectiveness measures: the age at which respondents delivered routine HPV vaccine recommendations and respondents’ perception of their clinics’ HPV vaccination rates. We conducted analyses using SAS (v 9.4). Statistical tests were two-tailed with a critical alpha of .05.
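As a rough illustration of the chi-square comparisons described above, the sketch below computes a Pearson chi-square statistic by hand for a hypothetical 2×2 table (works with a champion yes/no × recommends HPV vaccination at ages 9–10 yes/no). The cell counts are invented for illustration and only loosely mirror the reported 44% vs. 33% split among n = 2294; the study’s actual analyses used SAS and more response categories:

```python
# Hypothetical 2x2 contingency table (counts are illustrative, not
# the study's data): rows = works with >= 1 champion / 0 champions,
# columns = recommends HPV vaccination at ages 9-10 yes / no.
table = [[858, 1092],
         [114, 230]]

row_totals = [sum(row) for row in table]        # totals per group
col_totals = [sum(col) for col in zip(*table)]  # totals per outcome
grand_total = sum(row_totals)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# where E = row_total * col_total / grand_total.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (observed - expected) ** 2 / expected

dof = (len(table) - 1) * (len(table[0]) - 1)  # (rows-1)*(cols-1) = 1
print(f"chi2 = {chi2:.2f} with {dof} degree(s) of freedom")
```

In practice one would compare the statistic against the chi-square distribution (e.g., via `scipy.stats.chi2_contingency`) to obtain the p-value at the study’s two-tailed critical alpha of .05.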

Participant characteristics

Our sample comprised pediatricians (26%), family physicians (22%), advanced practice providers (24%), and nursing staff (28%, Table 1 ). Over two-thirds of PCPs were women (72%). Most respondents identified as White (66%), Asian (14%), Black (5%), or Hispanic (4%). Our sample included PCPs with a range of practice experience, from low (0–9 years, 37%) to medium (10–19 years, 29%) to high (≥ 20 years, 33%).

Correlates of working with a vaccine champion

Overall, 85% of respondents reported that they currently work with one or more vaccine champions, with 3 champions being the median response for the sample overall. In the multivariable analysis, working with a champion was more common among family physicians, advanced practice providers, and nursing staff compared to pediatricians (81%, 87%, and 90% vs. 80%, p < .05, Table 2 ). Working with a champion was also more common among PCPs who saw medium and high versus lower volumes of 9- to 12-year-old patients (86% and 90% vs. 78%, p < .05), as well as among those who did versus did not work in a healthcare system (86% vs. 82%, p < .05). Working with a champion was less common among PCPs working in the South and the West versus the Northeast (84% and 83% vs. 88%, p < .05). Although PCP female gender correlated with working with a champion in bivariate analyses, this association did not retain statistical significance in the multivariable model.

Champion attributes

PCPs who worked with at least one champion reported on attributes of the champion with whom they worked most closely (Table 3 ). Among these 2144 PCPs, most reported that they worked very (41%) to extremely (19%) closely with this champion versus moderately closely or less. Champions identified by PCPs most often worked as patient care team members (80%), and about half of PCPs reported that their closest champions’ role was recognized in their job description (38%) and/or job title (19%).

Two-fifths of PCPs identified their closest champion as a physician (40%) and another two-fifths a nursing staff member (40%), while the remainder identified an advanced practice provider (17%) or other role (2%, Table 3 ). With respect to homophily, physician respondents (n = 983) identified similar proportions of physicians and non-physicians as their closest champion (49% vs. 49%, Fig. 1). Less than half of nursing staff respondents (n = 634) identified another nurse as their closest champion, compared to over half who identified a non-nurse (41% vs. 56%). Only about one-fourth of advanced practice providers (n = 527) identified another advanced practice provider as their closest champion, compared to almost three-quarters who identified a physician or nurse (28% vs. 71%).

Figure 1. Training of PCPs’ closest champion (n = 2144)

Most PCPs described their closest champion as being knowledgeable about vaccines (91%), trusted by patients and families (84%), an effective communicator (83%), knowledgeable about their clinic (77%), and highly respected by colleagues (74%). Regarding the strategies used to improve vaccination rates, PCPs most often reported that their closest champion communicates effectively with patients and families (85%), shares information with colleagues (79%), encourages colleagues to improve (62%), and shares data on vaccination rates (59%); only a minority of PCPs reported their closest champion leads quality improvement projects (39%). Nearly half of respondents reported that their closest champion works to improve vaccination rates for all vaccines (45%) versus select vaccines such as HPV (49%), seasonal influenza (47%), or COVID-19 vaccines (36%).

Champion effectiveness

Most PCPs perceived their closest vaccine champion to be moderately to extremely effective at improving vaccination rates (91%, Table 3 ). Furthermore, working with a vaccine champion was associated with HPV vaccine recommendation timing (χ2 = 18.07, p < .001, Fig. 2). More specifically, among the 2294 PCPs who reported recommending HPV vaccine, those who did versus did not work with champions more often reported beginning routine HPV vaccine recommendations at the earliest opportunity of ages 9–10 (44% vs. 33%) and less often reported recommending HPV vaccine later or never. Finally, working with vaccine champions was associated with higher perceived HPV vaccination rates (χ2 = 31.78, p < .001, Fig. 3); PCPs working with champions more often perceived that their clinic’s vaccination rates were at or above their state’s average (68%) compared to those who did not work with a champion (54%).

Figure 2. Timing of PCPs’ HPV vaccine recommendations (n = 2294). Bars show standard error

Figure 3. PCPs’ perceptions of their clinic’s HPV vaccination rates (n = 2527). Bars show standard error

Our study is among the first to detail the roles and characteristics of vaccine champions. Our findings suggest that such champions are common in adolescent primary care, with over four-fifths of PCPs in our national sample reporting that they currently work with one or more champions. Most PCPs characterized the tie to their closest champion as very or extremely close and endorsed that person as having broadly positive qualities. Common champion implementation strategies included encouraging colleagues and sharing information and vaccination data, although only a minority of PCPs reported that champions led quality improvement projects. In this way, champions appear to be a pervasive but potentially underused resource. Champions may require additional training and support if they are to engage their colleagues in more formal initiatives to improve vaccination rates [ 6 ]. Future research should explore barriers and facilitators to champions conducting such work, including champion motivation and willingness, as well as opportunities to support them in selecting the most appropriate implementation strategies for meeting their goals.

In addition to underscoring the importance of vaccine champions for improving vaccination rates in general, our study suggests that champions influence HPV vaccination specifically. PCPs perceived champions as effective in improving vaccination rates and most often identified HPV vaccination as the vaccine on which they focused their efforts as a champion. Furthermore, PCPs who worked with champions reported more positive HPV vaccine recommendation practices and perceptions of their clinic’s HPV vaccination rates. Taken together, these findings suggest that champions may be effective at increasing HPV vaccination, although our study’s cross-sectional design and reliance on self-reported data preclude our ability to establish causality. In prior research, several quasi-experimental and observational studies have identified positive associations between vaccine champions and influenza vaccination, the use of vaccine reminder and recall messages in pediatric and public clinics, and the presence of standing order programs in primary care [ 22 , 23 , 24 ]. In contrast, several cluster-randomized trials assessing multimodal interventions, including the designation of a champion, found no or modest effects on several vaccines in obstetrics and gynecology clinics, but these studies were not designed to evaluate the impact of champions specifically [ 25 , 26 , 27 ]. Thus, while vaccine champions are a highly promising implementation strategy, further randomized studies will be needed to provide higher quality evidence of their effectiveness for changing their colleagues’ practices and perceptions and improving HPV vaccination rates.

Towards that end, our findings provide several points of guidance for researchers and quality improvement leaders who seek to engage vaccine champions. First, our finding that champions are highly prevalent suggests that interventionists can expect to consistently find champions in adolescent primary care, although more targeted efforts to identify them may be needed in lower-volume practices that are not part of healthcare systems or that are located in the South or West, where champions were less common. Second, we found that champions came from diverse backgrounds in terms of training, which suggests that interventionists should consider physicians, advanced practice providers, and nurses in the champion role. In fact, given the diversity in PCPs’ relationships to champions, multidisciplinary teams of champions may be the ideal. Such an approach would be consistent with prior studies which have found that engaging multiple champions is preferable to having champions serve alone and could also offer potential relief for over-burdened physicians with limited time for additional duties [ 2 ]. Finally, when asked to identify their closest champion, PCPs were equally likely to identify a colleague who was or was not recognized for being a champion in their professional title or formal job description. For this reason, interventionists should consider both institutionally-recognized champions as well as champions who may take on the role more informally, based on their own interest and dedication.

Strengths of this study include data from a large, national sample of PCPs with multidisciplinary representation across physicians, advanced practice providers, and nursing staff in adolescent primary care. Our cross-sectional study design allowed us to collect novel data on champions’ attributes and strategies, but also constitutes a limitation insofar as we cannot establish whether associations, such as that between knowing a champion and positive HPV vaccine recommendations, are causal in nature. Another limitation to our study is the challenge of defining a vaccine champion to PCPs working in adolescent primary care, a field in which support for vaccination services is the norm. We conducted extensive cognitive testing to define vaccine champions as those who help their colleagues improve vaccination rates, as opposed to more general promotion of vaccines to patients and their families. Nevertheless, this concept is vulnerable to misinterpretation, which could lead to overestimation of champion prevalence. Similarly, though we asked PCPs about various champion implementation strategies and whether champions led quality improvement projects, it is possible these champions contribute in ways or through strategies not captured by our survey. Finally, we note that our findings are based on PCPs’ perceptions and self-report. Results describing PCPs’ outlook on their performance and the performance of their clinics are subject to biases, including social desirability, but are nonetheless valuable in providing data to inform future intervention research to establish the champions’ impact on vaccination rates.

While the implementation science literature frequently invokes champions, studies directly assessing their role in improving clinical outcomes like vaccination are scarce. Champions are highlighted for their potential to successfully implement clinic-based interventions, but overcoming the status quo and other organizational resistance is inherently challenging, and a more detailed understanding of champions will better inform efforts to deliver and sustain health services. To this end, our study finds that vaccine champions are widespread but underutilized in quality improvement projects in adolescent primary care, include PCPs of various training backgrounds, and may or may not have a formal title. The relatively low proportion of champions who participate in quality improvement efforts may indicate the need for training and support for champions to lead more formal initiatives. Future research should explore barriers and facilitators to champions’ work in guiding implementation of health services and promoting adolescent vaccines. Importantly, we find an intriguing association between working with a champion and more positive HPV vaccination behaviors and perceptions, which warrants further evaluation in randomized controlled trials.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available upon request following study completion.

Abbreviations

ERIC: Expert Recommendations for Implementing Change

HPV: Human papillomavirus

PCP: Primary care professional

US: United States

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21. https://doi.org/10.1186/s13012-015-0209-1 .

Miech EJ, Rattray NA, Flanagan ME, Damschroder L, Schmid AA, Damush TM. Inside help: an integrative review of champions in healthcare-related implementation. SAGE Open Med. 2018;6:2050312118773261. https://doi.org/10.1177/2050312118773261 .

Rogers EM. Diffusion of innovations. Simon and Schuster; 2010.

Damschroder LJ, Banaszak-Holl J, Kowalski CP, Forman J, Saint S, Krein SL. The role of the champion in infection prevention: results from a multisite qualitative study. Qual Saf Health Care. 2009;18(6):434–40. https://doi.org/10.1136/qshc.2009.034199 .

Santos WJ, Graham ID, Lalonde M, Varin MD, Squires JE. The effectiveness of champions in implementing innovations in health care: a systematic review. Implement Sci Commun. 2022;3:80. https://doi.org/10.1186/s43058-022-00315-0 .

Shea CM. A conceptual model to guide research on the activities and effects of innovation champions. Implement Res Pract. 2021;2:2633489521990443. https://doi.org/10.1177/2633489521990443 .

Centers for Disease Control and Prevention. HPV cancers are preventable. Centers for Disease Control and Prevention. 2021. Retrieved June 15, 2023, from https://www.cdc.gov/hpv/hcp/protecting-patients.html .

Meites E, Kempe A, Markowitz LE. Use of a 2-dose schedule for human papillomavirus vaccination — updated recommendations of the Advisory Committee on Immunization Practices. MMWR Morb Mortal Wkly Rep 2016. 2016;65(49):1405–8. https://doi.org/10.15585/mmwr.mm6549a5 .

Pingali C, Yankey D, Elam-Evans LD, Markowitz LE, Valier MR, Fredua B, Crowe SJ, DeSisto CL, Stokley S, Singleton JA. Vaccination coverage among adolescents aged 13–17 years—National Immunization Survey-Teen, United States, 2022. MMWR Morb Mortal Wkly Rep. 2023;72(34):912–9. https://doi.org/10.15585/mmwr.mm7135a1 .

St Sauver JL, Rutten LJF, Ebbert JO, Jacobson DJ, McGree ME, Jacobson RM. Younger age at initiation of the human papillomavirus (HPV) vaccination series is associated with higher rates of on-time completion. Prev Med. 2016;89:327–33. https://doi.org/10.1016/j.ypmed.2016.02.039 .

Gilkey MB, Calo WA, Moss JL, Shah PD, Marciniak MW, Brewer NT. Provider communication and HPV vaccination: the impact of recommendation quality. Vaccine. 2016;34(9):1187–92. https://doi.org/10.1016/j.vaccine.2016.01.023 .

Gilkey MB, Malo TL, Shah PD, Hall ME, Brewer NT. Quality of physician communication about human papillomavirus vaccine: findings from a national survey. Cancer Epidemiol Biomarkers Prev : Publ Am Assoc Cancer Res Cosponsored Am Soc Prev Oncol. 2015;24(11):1673–9. https://doi.org/10.1158/1055-9965.EPI-15-0326 .

Gilkey MB, McRee AL. Provider communication about HPV vaccination: a systematic review. Hum Vaccin Immunother. 2016;12(6):1454–68. https://doi.org/10.1080/21645515.2015.1129090 .

Brewer NT, Hall ME, Malo TL, Gilkey MB, Quinn B, Lathren C. Announcements versus conversations to improve HPV vaccination coverage: a randomized trial. Pediatrics. 2017;139(1):e20161764. https://doi.org/10.1542/peds.2016-1764 .

Gilkey MB, Heisler-MacKinnon J, Boynton MH, Calo WA, Moss JL, Brewer NT. Impact of brief quality improvement coaching on adolescent HPV vaccination coverage: a pragmatic cluster randomized trial. In: Cancer epidemiology, biomarkers & prevention: a publication of the American Association for Cancer Research, cosponsored by the American Society of Preventive Oncology, EPI-22-0866. Advance online publication; 2022. https://doi.org/10.1158/1055-9965.EPI-22-0866 .

Chapter   Google Scholar  

Perkins RB, Legler A, Jansen E, Bernstein J, Pierre-Joseph N, Eun TJ, Biancarelli DL, Schuch TJ, Leschly K, Fenton ATHR, Adams WG, Clark JA, Drainoni ML, Hanchate A. Improving HPV vaccination rates: a stepped-wedge randomized trial. Pediatrics. 2020;146(1):e20192737. https://doi.org/10.1542/peds.2019-2737 .

Article   PubMed   Google Scholar  

Lollier A, Rodriguez EM, Saad-Harfouche FG, Widman CA, Mahoney MC. HPV vaccination: pilot study assessing characteristics of high and low performing primary care offices. Prev Med Rep. 2018;10:157–61. https://doi.org/10.1016/j.pmedr.2018.03.002 .

US Department of Agriculture. USDA Economic Research Service—Rural-Urban Continuum Codes. USDA Economic Research Service; 2020. Retrieved May 3, 2023, from https://www.ers.usda.gov/data-products/rural-urban-continuum-codes/ .

American Association for Public Opinion Research. Standard definitions: final dispositions of case codes and outcome rates for surveys. 9th ed. AAPOR; 2016.

von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, Initiative STROBE. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. J Clin Epidemiol. 2008;61(4):344–9. https://doi.org/10.1016/j.jclinepi.2007.11.008 .

Terwee CB, Prinsen CAC, Chiarotto A, De Vet HCW, Westerman MJ, Patrick DL, Alonso J, Bouter LM, Mokkink LB. COSMIN standards and criteria for evaluating the content validity of health-related Patient-Reported Outcome Measures: a Delphi study. Qual Life Res in press; 2017.

Albert SM, Nowalk MP, Yonas MA, Zimmerman RK, Ahmed F. Standing orders for influenza and pneumococcal polysaccharide vaccination: correlates identified in a national survey of U.S. Primary care physicians. BMC Fam Pract. 2012;13(1):22. https://doi.org/10.1186/1471-2296-13-22 .

Slaunwhite JM, Smith SM, Fleming MT, Strang R, Lockhart C. Increasing vaccination rates among health care workers using unit “champions” as a motivator. Can J Infect Control: Off J Community Hospital Infect Control Assoc-Can = Revue Can Prev Infect. 2009;24(3):159–64.

Tierney CD, Yusuf H, McMahon SR, Rusinak D, O’Brien MA, Massoudi MS, Lieu TA. Adoption of reminder and recall messages for immunizations by pediatricians and public health clinics. Pediatrics. 2003;112(5):1076–82. https://doi.org/10.1542/peds.112.5.1076 .

Chamberlain AT, Seib K, Ault KA, Rosenberg ES, Frew PM, Cortés M, Whitney EAS, Berkelman RL, Orenstein WA, Omer SB. Improving influenza and Tdap vaccination during pregnancy: a cluster-randomized trial of a multi-component antenatal vaccine promotion package in late influenza season. Vaccine. 2015;33(30):3571–9. https://doi.org/10.1016/j.vaccine.2015.05.048 .

Mazzoni SE, Brewer SE, Pyrzanowski JL, Durfee MJ, Dickinson LM, Barnard JG, Dempsey AF, O’Leary ST. Effect of a multi-modal intervention on immunization rates in obstetrics and gynecology clinics. Am J Obstet Gynecol. 2016;214(5):617.e1-617.e7. https://doi.org/10.1016/j.ajog.2015.11.018 .

O’Leary ST, Pyrzanowski J, Brewer SE, Sevick C, Miriam Dickinson L, Dempsey AF. Effectiveness of a multimodal intervention to increase vaccination in obstetrics/gynecology settings. Vaccine. 2019;37(26):3409–18. https://doi.org/10.1016/j.vaccine.2019.05.034 .

Download references

Acknowledgements

Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number P01CA250989. The content is solely the responsibility of the authors and does not necessarily represent the official view of the National Institutes of Health.

Author information

Authors and Affiliations

Department of Health Behavior, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA

Micaela K. Brewington, Tara L. Queen, Jennifer Heisler-MacKinnon, Wei Yi Kong, Kathryn L. Kennedy & Melissa B. Gilkey

Department of Public Health Sciences, Penn State College of Medicine, Hershey, PA, USA

William A. Calo

UNC Family Medicine and Pediatrics, UNC Health, Chapel Hill, NC, USA

Sandra Weaver

JMA Pediatrics, Raleigh, NC, USA

Chris Barry

Department of Health Policy and Management, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA

Christopher M. Shea

Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, NC, USA

Melissa B. Gilkey

Contributions

MG, JHM, and WC conceptualized and designed the study. TQ led data collection and analysis. MB, TQ, and MG wrote the initial draft of the manuscript. SW, CB, WYK, KK, and CS reviewed and provided feedback on the draft manuscript. SW and CB also served as clinical advisory board members who guided and provided feedback on the study design and measures. All authors contributed to the interpretation of data, critically reviewed and approved the submitted version of the manuscript, and have agreed to be personally accountable for the work.

Corresponding author

Correspondence to Micaela K. Brewington .

Ethics declarations

Ethics approval and consent to participate

Ethics approval for this study was provided by the University of North Carolina Institutional Review Board (IRB).

Consent for publication

Competing interests

The authors declare that they have no conflicts of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Table 1. Survey items.

Additional file 2: Supplementary Table 2. Characteristics of PCPs in our sample versus those in the Current Population Survey.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Brewington, M.K., Queen, T.L., Heisler-MacKinnon, J. et al. Who are vaccine champions and what implementation strategies do they use to improve adolescent HPV vaccination? Findings from a national survey of primary care professionals. Implement Sci Commun 5 , 28 (2024). https://doi.org/10.1186/s43058-024-00557-0

Received : 25 August 2023

Accepted : 12 February 2024

Published : 22 March 2024

DOI : https://doi.org/10.1186/s43058-024-00557-0

Keywords

  • HPV vaccines
  • Immunizations
  • Primary care
  • Implementation strategy
  • Evidence-based practice
  • Adolescent health services
  • Health communication

Implementation Science Communications

ISSN: 2662-2211

