
About Systematic Reviews

Are Systematic Reviews Qualitative or Quantitative?


A systematic review is designed to be transparent and replicable; for this reason, systematic reviews are considered reliable tools in scientific research and clinical practice. They synthesize the results of multiple primary studies using strategies that minimize bias and random error. Depending on the research question and the objectives of the research, a review can be either qualitative or quantitative. Qualitative reviews deal with understanding concepts, thoughts, or experiences, whereas quantitative reviews are employed when researchers want to test or confirm a hypothesis or theory. Let’s look at some of the differences between these two types of reviews.

To learn more about how long it takes to do a systematic review, you can check out our full article on the topic.

Differences between Qualitative and Quantitative Reviews

The differences lie in the scope of the research, the methodology followed, and the type of questions they attempt to answer. Some of these differences include:

Research Questions

As mentioned earlier, qualitative reviews attempt to answer open-ended research questions to understand or formulate hypotheses. This type of research is used to gather in-depth insights into new topics. Quantitative reviews, on the other hand, test or confirm existing hypotheses. This type of research is used to establish generalizable facts about a topic.

Type of Sample Data

The data collected for both types of research differ significantly. For qualitative research, data is collected as words using observations, interviews, and interactions with study subjects or from literature reviews. Quantitative studies collect data as numbers, usually from a larger sample size.

Data Collection Methods

To collect data as words for a qualitative study, researchers can employ tools such as interviews, recorded observations, focus groups, and videos, or they can collect literature reviews on the same subject. For quantitative studies, data from primary sources is collected as numbers using rating scales and counting frequencies. The data for these studies can also be collected as measurements of variables from a well-designed experiment carried out under pre-defined, monitored conditions.

Data Analysis Methods

Data by itself cannot prove or demonstrate anything unless it is analyzed. Qualitative data is more challenging to analyze than quantitative data. A few different approaches to analyzing qualitative data include content analysis, thematic analysis, and discourse analysis. The goal of all of these approaches is to carefully analyze textual data to identify patterns, themes, and the meaning of words or phrases.
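As a crude sketch of the counting that underlies some content-analysis workflows (real qualitative coding is interpretive and iterative; the codes, keywords, and transcript snippets below are invented for illustration):

```python
from collections import Counter
import re

# Hypothetical coding frame: each code (theme) maps to keyword stems.
codes = {"access": ["afford", "cost", "insurance"],
         "trust": ["trust", "confide", "believe"]}

# Invented interview excerpts standing in for real transcripts.
transcripts = [
    "I could not afford the visit and my insurance refused to pay.",
    "I trust my doctor, but the cost keeps me away.",
]

# Count how often each code's keywords appear across the transcripts.
counts = Counter()
for text in transcripts:
    words = re.findall(r"[a-z]+", text.lower())
    for code, keywords in codes.items():
        counts[code] += sum(any(w.startswith(k) for k in keywords) for w in words)

print(dict(counts))  # {'access': 3, 'trust': 1}
```

In practice a researcher would read each passage in context rather than trust keyword matches, but tallies like this are often how theme frequencies end up summarized.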

Quantitative data, since it is in the form of numbers, is analyzed using simple math or statistical methods. There are several software programs that can be used for mathematical and statistical analysis of numerical data.

Presentation of Results


Final Takeaway – Qualitative or Quantitative?


Quantitative vs. Qualitative Research

Research can be quantitative or qualitative, or both:

  • A quantitative systematic review will include studies that have numerical data.
  • A qualitative systematic review derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants. It may include focus groups, interviews, observations and diaries.

Video source: UniversityNow: Quantitative vs. Qualitative Research

For more information on searching for qualitative evidence see:

Booth, A. (2016). Searching for qualitative research for inclusion in systematic reviews: A structured methodological review. Systematic Reviews, 5(1), 1–23. https://doi.org/10.1186/s13643-016-0249-x

Source: Adelphi University Libraries Systematic Reviews guide, https://libguides.adelphi.edu/Systematic_Reviews (last updated January 26, 2024).
Systematic Reviews

What Makes a Systematic Review Different from Other Types of Reviews?

Reproduced from Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26, 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Source: UCLA Library Research Guides, Systematic Reviews, https://guides.library.ucla.edu/systematicreviews (last updated February 20, 2024).


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review to answer the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
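As a rough illustration of how a meta-analysis combines studies (a minimal fixed-effect sketch, not the method of any particular review; the effect sizes and variances below are invented):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI.

    Each study is weighted by 1/variance, so precise studies count more.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical effect sizes (e.g., log risk ratios) from three studies
pooled, ci = fixed_effect_pool([-0.10, -0.25, 0.05], [0.04, 0.09, 0.02])
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
# -0.032 (-0.243, 0.179)
```

Real meta-analyses also assess heterogeneity between studies and often use a random-effects model instead; this sketch only shows the core weighted-average idea.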

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the probiotics example, the components of the question were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Boyle and colleagues’ research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
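The PICO template above lends itself to a trivial sketch (a toy helper written for this article, not part of any review toolchain):

```python
def pico_question(population, intervention, comparison, outcome):
    """Fill the PICO template:
    'What is the effectiveness of I versus C for O in P?'"""
    return (f"What is the effectiveness of {intervention} versus {comparison} "
            f"for {outcome} in {population}?")

print(pico_question("patients with eczema",
                    "probiotics",
                    "no treatment, a placebo, or a non-probiotic treatment",
                    "reducing eczema symptoms and improving quality of life"))
```

Keeping the four (or five, with study design) components explicit like this makes it easy to check that each one is actually answerable from the literature.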

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics
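The database-search advice above — group synonyms with OR and join concepts with AND — can be sketched as a small query builder. The terms are illustrative only, not a validated search strategy:

```python
# Sketch: assembling a Boolean database query from synonym groups.
# Synonyms within a concept are ORed; concept groups are ANDed.
concepts = [
    ["probiotic*", "lactobacillus", "bifidobacterium"],  # intervention terms
    ["eczema", "atopic dermatitis"],                     # population/problem terms
]

def build_query(concept_groups):
    """Join synonyms with OR and concepts with AND, quoting phrases."""
    def quote(term):
        return f'"{term}"' if " " in term else term
    groups = ["(" + " OR ".join(quote(t) for t in g) + ")" for g in concept_groups]
    return " AND ".join(groups)

query = build_query(concepts)
print(query)
# (probiotic* OR lactobacillus OR bifidobacterium) AND (eczema OR "atopic dermatitis")
```

Each database has its own field tags and truncation syntax, so a real strategy is usually written and peer-reviewed per database; the grouping logic, however, is the same everywhere.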

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
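Inter-rater agreement between the two screeners is commonly quantified with Cohen’s kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch; the include/exclude decisions below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions on eight abstracts
a = ["include", "exclude", "include", "include",
     "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "include",
     "exclude", "exclude", "include", "include"]
print(round(cohens_kappa(a, b), 2))  # 0.5
```

A kappa well below 1 after a pilot round is a signal to revisit and clarify the selection criteria before screening the full set.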

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.
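One way to keep such a record is a simple structured log whose per-stage counts later feed a PRISMA flow diagram. A minimal sketch; the record IDs, stages, and exclusion reasons are invented:

```python
from collections import Counter

# Hypothetical screening log: one row per decision, recording the stage
# and, for exclusions, the reason.
screening_log = [
    {"id": "rec001", "stage": "title_abstract", "decision": "exclude", "reason": "wrong population"},
    {"id": "rec002", "stage": "title_abstract", "decision": "include", "reason": None},
    {"id": "rec002", "stage": "full_text", "decision": "exclude", "reason": "no control group"},
    {"id": "rec003", "stage": "title_abstract", "decision": "include", "reason": None},
    {"id": "rec003", "stage": "full_text", "decision": "include", "reason": None},
]

# Tally decisions per stage for the flow diagram
tally = Counter((row["stage"], row["decision"]) for row in screening_log)
for (stage, decision), n in sorted(tally.items()):
    print(f"{stage}: {decision} = {n}")
```

Spreadsheets or dedicated screening tools do the same job; the point is that every excluded record keeps its reason, so the flow diagram can be reconstructed and audited.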

In the probiotics review, Boyle and colleagues found the full texts for each of the studies remaining after title and abstract screening. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in the Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved February 19, 2024, from https://www.scribbr.com/methodology/systematic-review/


Systematic & scoping reviews

Systematic reviews

From Munn et al. (2018): “Systematic reviews can be broadly defined as a type of research synthesis that are conducted by review groups with specialized skills, who set out to identify and retrieve international evidence that is relevant to a particular question or questions and to appraise and synthesize the results of this search to inform practice, policy and in some cases, further research. … Systematic reviews follow a structured and pre-defined process that requires rigorous methods to ensure that the results are both reliable and meaningful to end users. … A systematic review may be undertaken to confirm or refute whether or not current practice is based on relevant evidence, to establish the quality of that evidence, and to address any uncertainty or variation in practice that may be occurring. … Conducting a systematic review may also identify gaps, deficiencies, and trends in the current evidence and can help underpin and inform future research in the area. … Indications for systematic reviews are:

  • Uncover the international evidence
  • Confirm current practice/ address any variation/ identify new practices
  • Identify and inform areas for future research
  • Identify and investigate conflicting results
  • Produce statements to guide decision-making”

Scoping reviews

From Munn et al. (2018): “Scoping reviews are an ideal tool to determine the scope or coverage of a body of literature on a given topic and give clear indication of the volume of literature and studies available as well as an overview (broad or detailed) of its focus. Scoping reviews are useful for examining emerging evidence when it is still unclear what other, more specific questions can be posed and valuably addressed by a more precise systematic review. They can report on the types of evidence that address and inform practice in the field and the way the research has been conducted. The general purpose for conducting scoping reviews is to identify and map the available evidence. Purposes for conducting a scoping review:

  • To identify the types of available evidence in a given field
  • To clarify key concepts/ definitions in the literature
  • To examine how research is conducted on a certain topic or field
  • To identify key characteristics or factors related to a concept
  • As a precursor to a systematic review
  • To identify and analyse knowledge gaps”

Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x

Reviews can be quantitative or qualitative

A quantitative review will include studies that have numerical data. A qualitative review derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants. It will include focus groups, interviews, observations and diaries. See the qualitative research section for more information.

PRISMA Statement

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses.

The PRISMA 2020 statement was published in 2021 and comprises a 27-item checklist addressing the introduction, methods, results and discussion sections of a systematic review report. It is intended to be accompanied by the PRISMA 2020 Explanation and Elaboration document.

The PRISMA extension for scoping reviews (PRISMA-ScR) was published in 2018. The checklist contains 20 essential reporting items and 2 optional items to include when completing a scoping review.

Steps in a systematic review

A systematic review involves the following steps:

  • Check for existing reviews/protocols. If a systematic review answering your question has been conducted, or is being undertaken, you may need to amend or refine your question
  • Formulate a specific research question that is clear and focused. Use the PICO tool (for quantitative reviews) or PICo (for qualitative reviews)
  • Develop and register your protocol, including the rationale for the review, and eligibility criteria
  • Design a robust search strategy that is explicit and reproducible
  • Conduct a comprehensive search of the literature by searching the relevant databases and other sources
  • Select and critically appraise the quality of included studies
  • Extract relevant data from individual studies and use established methods to synthesise the data
  • Interpret your results and prepare a comprehensive report on all aspects of your systematic review. Present your findings so that they can be translated into clinical practice.


Comparison of different types of reviews

This table outlines the differences between a systematic review and a literature review:

Adapted from: University of Newcastle Australia Library

This table outlines the differences between a systematic review and a scoping review:

Adapted from: University of South Australia

References:

Pollock, D., Davies, E. L., Peters, M. D. J., et al. (2021). Undertaking a scoping review: A practical guide for nursing and midwifery students, clinicians, researchers, and academics. J Adv Nurs, 77, 2102-2113. https://doi.org/10.1111/jan.14743

“Rapid reviews have emerged as a streamlined approach to synthesizing evidence – typically for informing emergent decisions faced by decision makers in health care settings”.

Source: Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: the evolution of a rapid review approach. Systematic Reviews, 1, 10. https://doi.org/10.1186/2046-4053-1-10

Examples of different types of reviews:

Literature review: A Literature review of mentorship programs in academic nursing https://doi.org/10.1016/j.profnurs.2017.02.007

Narrative review: A silent burden—prolapse, incontinence, and infertility in Australian Aboriginal and Torres Strait Islander women: A systematic search and narrative review https://doi.org/10.1002/ijgo.13920

Rapid review: Blended foods for tube-fed children: a safe and realistic option? A rapid review of the evidence https://doi.org/10.1136/archdischild-2016-311030

Scoping review: How do patients experience caring? Scoping review https://doi.org/10.1016/j.pec.2017.03.029

Systematic review: Barriers and facilitators to health screening in men: A systematic review https://doi.org/10.1016/j.socscimed.2016.07.023

A typology of reviews: an analysis of 14 review types and associated methodologies (2009) https://doi.org/10.1111/j.1471-1842.2009.00848.x

Qualitative and mixed methods in systematic reviews

David Gough

Systematic Reviews, volume 4, article number 181 (2015). Open access; published 15 December 2015.

Expanding the range of methods of systematic review

The logic of systematic reviews is very simple. We use transparent rigorous approaches to undertake primary research, and so we should do the same in bringing together studies to describe what has been studied (a research map) or to integrate the findings of the different studies to answer a research question (a research synthesis). We should not really need to use the term ‘systematic’ as it should be assumed that researchers are using and reporting systematic methods in all of their research, whether primary or secondary. Despite the universality of this logic, systematic reviews (maps and syntheses) are much better known in health research and for answering questions of the effectiveness of interventions (what works). Systematic reviews addressing other sorts of questions have been around for many years, as in, for example, meta-ethnography [1] and other forms of conceptual synthesis [2], but only recently has there been a major increase in the use of systematic review approaches to answer other sorts of research questions.

There are probably several reasons for this broadening of approach. One may be that the increased awareness of systematic reviews has made people consider the possibilities for all areas of research. A second related factor may be that more training and funding resources have become available and increased the capacity to undertake such varied review work.

A third reason could be that some of the initial anxieties about systematic reviews have subsided. Initially, there were concerns that their use was being promoted by a new managerialism where reviews, particularly effectiveness reviews, were being used to promote particular ideological and theoretical assumptions and to indirectly control research agendas. However, others like me believe that explicit methods should be used to enable transparency of perspectives driving research and to open up access to and participation in research agendas and priority setting [ 3 ] as illustrated, for example, by the James Lind Alliance (see http://www.jla.nihr.ac.uk/ ).

A fourth possible reason for the development of new approaches is that effectiveness reviews have themselves broadened. Some ‘what works’ reviews can be open to criticism for only testing a ‘black box’ hypothesis of what works with little theorizing or any logic model about why any such hypothesis should be true and the mechanisms involved in such processes. There is now more concern to develop theory and to test how variables combine and interact. In primary research, qualitative strategies are advised prior to undertaking experimental trials [ 4 , 5 ] and similar approaches are being advocated to address complexity in reviews [ 6 ], in order to ask questions and use methods that address theories and processes that enable an understanding of both impact and context.

This Special Issue of Systematic Reviews Journal is providing a focus for these new methods of review whether these use qualitative review methods on their own or mixed together with more quantitative approaches. We are linking together with the sister journal Trials for this Special Issue as there is a similar interest in what qualitative approaches can and should contribute to primary research using experimentally controlled trials (see Trials Special Issue editorial by Claire Snowdon).

Dimensions of difference in reviews

Developing the range of methods to address different questions for review creates a challenge in describing and understanding such methods. There are many names and brands for the new methods which may or may not withstand the changes of historical time, but another way to comprehend the changes and new developments is to consider the dimensions on which the approaches to review differ [ 7 , 8 ].

One important distinction is the research question being asked and the associated paradigm underlying the method used to address this question. Research assumes a particular theoretical position and then gathers data within this conceptual lens. In some cases, this is a very specific hypothesis that is then tested empirically, and sometimes, the research is more exploratory and iterative with concepts being emergent and constructed during the research process. This distinction is often labelled as quantitative or positivist versus qualitative or constructionist. However, this can be confusing as much research taking a ‘quantitative’ perspective does not have the necessary numeric data to analyse. Even if it does have such data, this might be explored for emergent properties. Similarly, research taking a ‘qualitative’ perspective may include implicit quantitative themes in terms of the extent of different qualitative findings reported by a study.

Sandelowski and colleagues’ solution is to consider the analytic activity and whether this aggregates (adds up) or configures (arranges) the data [ 9 ]. In a randomized controlled trial and an effectiveness review of such studies, the main analysis is the aggregation of data using a priori non-emergent strategies with little iteration. However, there may also be post hoc analysis that is more exploratory in arranging (configuring) data to identify patterns as in, for example, meta regression or qualitative comparative analysis aiming to identify the active ingredients of effective interventions [ 10 ]. Similarly, qualitative primary research or reviews of such research are predominantly exploring emergent patterns and developing concepts iteratively, yet there may be some aggregation of data to make statements of generalizations of extent.

Even where the analysis is predominantly configuration, there can be a wide variation in the dimensions of difference of iteration of theories and concepts. In thematic synthesis [ 11 ], there may be few presumptions about the concepts that will be configured. In meta ethnography which can be richer in theory, there may be theoretical assumptions underlying the review question framing the analysis. In framework synthesis, there is an explicit conceptual framework that is iteratively developed and changed through the review process [ 12 , 13 ].

In addition to the variation in question, degree of configuration, complexity of theory, and iteration are many other dimensions of difference between reviews. Some of these differences follow on from the research questions being asked and the research paradigm being used such as in the approach to searching (exhaustive or based on exploration or saturation) and the appraisal of the quality and relevance of included studies (based more on risk of bias or more on meaning). Others include the extent that reviews have a broad question, depth of analysis, and the extent of resultant ‘work done’ in terms of progressing a field of inquiry [ 7 , 8 ].

Mixed methods reviews

As one reason for the growth in qualitative synthesis is what they can add to quantitative reviews, it is not surprising that there is also growing interest in mixed methods reviews. This reflects similar developments in primary research in mixing methods to examine the relationship between theory and empirical data which is of course the cornerstone of much research. But, both primary and secondary mixed methods research also face similar challenges in examining complex questions at different levels of analysis and of combining research findings investigated in different ways and may be based on very different epistemological assumptions [ 14 , 15 ].

Some mixed methods approaches are convergent in that they integrate different data and methods of analysis together at the same time [ 16 , 17 ]. Convergent systematic reviews could be described as having broad inclusion criteria (or two or more different sets of criteria) for methods of primary studies and have special methods for the synthesis of the resultant variation in data. Other reviews (and also primary mixed methods studies) are sequences of sub-reviews in that one sub-study using one research paradigm is followed by another sub-study with a different research paradigm. In other words, a qualitative synthesis might be used to explore the findings of a prior quantitative synthesis or vice versa [ 16 , 17 ].

An example of a predominantly aggregative sub-review followed by a configuring sub-review is the EPPI-Centre’s mixed methods review of barriers to healthy eating [ 18 ]. A sub-review on the effectiveness of public health interventions showed a modest effect size. A configuring review of studies of children and young people’s understanding and views about eating provided evidence that the public health interventions did not take good account of such user views research, and that the interventions most closely aligned to the user views were the most effective. The already mentioned qualitative comparative analysis to identify the active ingredients within interventions leading to impact could also be considered a qualitative configuring investigation of an existing quantitative aggregative review [ 10 ].

An example of a predominantly configurative review followed by an aggregative review is realist synthesis. Realist reviews examine the evidence in support of mid-range theories [ 19 ] with a first stage of a configuring review of what is proposed by the theory or proposal (what would need to be in place and what causal pathways would have to be effective for the outcomes proposed by the theory to be supported?) and a second stage searching for empirical evidence to test for those necessary conditions and effectiveness of the pathways. The empirical testing does not, however, use a standard a priori ‘what works’ methods approach but rather a more iterative seeking out of evidence that confirms or undermines the theory being evaluated [ 20 ].

Although sequential mixed methods approaches are considered to be sub-parts of one larger study, they could be separate studies as part of a long-term strategic approach to studying an issue. We tend to see both primary studies and reviews as one-off events, yet reviews are a way of examining what we know and what more we want to know as a strategic approach to studying an issue over time. If we are in favour of mixing paradigms of research to enable multiple levels and perspectives and mixing of theory development and empirical evaluation, then we are really seeking mixed methods research strategies rather than simply mixed methods studies and reviews.

Noblit G, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park, CA: Sage Publications; 1988.


Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9:59.


Gough D, Elbourne D. Systematic research synthesis to inform policy, practice and democratic debate. Soc Pol Soc. 2002;2002:1.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance 2015. BMJ. 2015;350:h1258

Candy B, Jones L, King M, Oliver S. Using qualitative evidence to help understand complex palliative care interventions: a novel evidence synthesis approach. BMJ Support Palliat Care. 2014;4(Suppl):A41–A42.


Noyes J, Gough D, Lewin S, Mayhew A, Michie S, Pantoja T, et al. A research and development agenda for systematic reviews that ask complex questions about complex interventions. J Clin Epidemiol. 2013;66:11.

Gough D, Oliver S, Thomas J. Introduction to systematic reviews. London: Sage; 2012.

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1:28.

Sandelowski M, Voils CJ, Leeman J, Crandell JL. Mapping the mixed methods-mixed research synthesis terrain. J Mix Methods Res. 2012;6:4.

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3:67.

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45.

Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley AR, Gabbay J, et al. A multidimensional conceptual framework for analysing public involvement in health services research. Health Exp. 2008;11:72–84.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Qual Saf. 2015. 2014-003642.

Brannen J. Mixed methods research: a discussion paper. NCRM Methods Review Papers, 2006. NCRM/005.

Creswell J. Mapping the developing landscape of mixed methods research. In: Teddlie C, Tashakkori A, editors. SAGE handbook of mixed methods in social & behavioral research. New York: Sage; 2011.

Morse JM. Principles of mixed method and multi-method research design. In: Teddlie C, Tashakkori A, editors. Handbook of mixed methods in social and behavioural research. London: Sage; 2003.

Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health. 2014;35:29–45.

Harden A, Thomas J. Mixed methods and systematic reviews: examples and emerging issues. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in the social and behavioral sciences. 2nd ed. London: Sage; 2010. p. 749–74.


Pawson R. Evidenced-based policy: a realist perspective. London: Sage; 2006.


Gough D. Meta-narrative and realist reviews: guidance, rules, publication standards and quality appraisal. BMC Med. 2013;11:22.


Author information

Authors and affiliations

EPPI-Centre, Social Science Research Unit, University College London, London, WC1H 0NR, UK

David Gough


Corresponding author

Correspondence to David Gough .

Additional information

Competing interests

The author is a writer and researcher in this area. The author declares that he has no other competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Gough, D. Qualitative and mixed methods in systematic reviews. Syst Rev 4, 181 (2015). https://doi.org/10.1186/s13643-015-0151-y

Received: 13 October 2015

Accepted: 29 October 2015

Published: 15 December 2015

DOI: https://doi.org/10.1186/s13643-015-0151-y


Systematic Reviews

ISSN: 2046-4053




Evidence Syntheses and Systematic Reviews: Overview


What Is Evidence Synthesis?

Evidence synthesis: a general term used to refer to any method of identifying, selecting, and combining results from multiple studies. Several types of reviews fall under this term, including systematic, scoping, rapid, and narrative reviews.

Types of Reviews

General Steps for Conducting Systematic Reviews

The number of steps for conducting evidence synthesis varies a little, depending on the source that one consults. However, the following steps are generally accepted in how systematic reviews are done:

  1. Identify a gap in the literature and form a well-developed, answerable research question, which will form the basis of your search.
  2. Select a framework that will help guide the type of study you are undertaking.
  3. Write a protocol: a detailed plan for the project, documented and reported following whichever guideline you select, and registered with an appropriate registry before the review is conducted.
  4. Select databases and grey literature sources. For steps 3 and 4, it is advisable to consult a librarian, who can recommend databases and other sources and help design complex searches.
  5. Search the databases and other sources. Not all databases use the same search syntax, so when searching multiple databases, adapt your search to the syntax of each one. Use a citation management tool to store and organize your citations during the review process; it is a great help when de-duplicating your results.
  6. Screen the results: the inclusion and exclusion criteria you have already developed help you remove articles that are not relevant to your topic.
  7. Assess the quality of the included studies, checking for bias in either the design of the study or in the results and conclusions (a step generally not done outside of systematic reviews).

Extract and Synthesize

  • Extract the data from the studies that remain after screening; extraction tools are used to capture the data from individual studies that will be analyzed or summarized.
  • Synthesize the main findings of your research.

Report Findings

Report the results using a statistical approach (such as a meta-analysis) or in narrative form.
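The de-duplication mentioned in the search step can be sketched in a few lines of Python. This is a hypothetical illustration only: the record structure, the `normalize` helper, and the sample records are all invented here, and real citation managers such as EndNote or Zotero perform this matching with far more sophisticated heuristics.

```python
# Hypothetical sketch: de-duplicating citation records retrieved from
# multiple databases, using a normalized DOI (or, failing that, title)
# as the key. Invented record format, for illustration only.

def normalize(value: str) -> str:
    """Lowercase and drop non-alphanumeric characters so near-identical
    records exported by different databases compare equal."""
    return "".join(ch for ch in value.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first occurrence of each unique record."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec.get("doi") or rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1186/s13643-015-0151-y",
     "title": "Qualitative and mixed methods in systematic reviews"},
    {"doi": "10.1186/S13643-015-0151-Y",  # same DOI, different casing
     "title": "Qualitative and mixed methods in systematic reviews."},
    {"title": "Barriers and facilitators to health screening in men"},
]
print(len(deduplicate(records)))  # the two DOI variants collapse to one record
```

In practice the fallback key would combine several fields (title, year, first author), since titles alone collide across databases; the DOI-first design above reflects that DOIs are the most reliable cross-database identifier.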

Need More Help?

Librarians can:

  • Provide guidance on which methodology best suits your goals
  • Recommend databases and other information sources for searching
  • Design and implement comprehensive and reproducible database-specific search strategies 
  • Recommend software for article screening
  • Assist with the use of citation management
  • Offer best practices on documentation of searches

Related Guides

  • Literature Reviews
  • Choose a Citation Manager
  • Project Management


  • Last Updated: Feb 16, 2024 5:40 PM
  • URL: https://guides.smu.edu/evidencesyntheses


Systematic Reviews

  • Who is this guide for and what can be found in it
  • What are systematic reviews
  • How do systematic reviews differ from narrative literature reviews

  • Types of Systematic Reviews
  • Reading Systematic Reviews
  • Resources for Conducting Systematic Reviews
  • Getting Help with Systematic Reviews from the Library
  • History of Systematic Reviews
  • Acknowledgements


This guide aims to support all OHSU members' systematic review education and activities, orienting OHSU members who are new to systematic reviews and facilitating the quality, rigor, and reproducibility of systematic reviews produced by OHSU members.

In it you will find:

  • A definition of what systematic reviews are, how they compare to other evidence, and how they differ from narrative literature reviews
  • Descriptions of the different types of systematic reviews , with links to resources on methods, protocols, reporting, additional information, and selecting the right type of systematic review for your research question
  • Guidance on how to read and evaluate systematic reviews for strength, quality, and potential for bias
  • A high-level overview of how systematic reviews are conducted , including team size and roles, standards, and processes
  • Links to resources and tools for conducting systematic reviews
  • Information about how to get assistance with conducting a systematic review from the OHSU Library
  • A history of systematic reviews to provide contextual understanding of how they have developed over time
"A systematic review is a summary of the medical literature that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize on a specific issue. It synthesizes the results of multiple primary studies related to each other by using strategies that reduce biases and random errors."

Gopalakrishnan S, Ganeshkumar P. Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare . J Family Med Prim Care . 2013;2(1):9-14. doi:10.4103/2249-4863.109934

Systematic Reviews are a vital resource used in the pursuit of Evidence-Based Practice (EBP):

  • These studies can be found near the top of the Evidence Pyramid , which ranks sources of information and study designs by the level of evidence contained within them
  • This ranking is based on the level of scientific rigor employed in their methods and the quality and reliability of the evidence contained within these sources
  • A higher ranking means that we can be more confident that their conclusions are accurate and have taken measures to limit bias

Research design and evidence , by CFCF , CC BY-SA 4.0 , via Wikimedia Commons

Things to know about systematic reviews:

  • Systematic reviews are a type of research study
  • Systematic reviews aim to provide a comprehensive and unbiased summary of the existing evidence on a particular research question
  • There are many types of systematic reviews , each designed to address a specific type of research purpose and with their own strengths and weaknesses
  • The choice of what type of review to produce typically will depend on the nature of the research question and the resources that are available on the topic

The practice of producing systematic reviews is sometimes referred to by other names such as:

  • Evidence Synthesis
  • Knowledge Synthesis
  • Research Synthesis

This guide tries to stick with the term "Systematic Reviews" unless a specific type of systematic review is being discussed.

While all reviews combat information overload in the health sciences by summarizing the literature on a topic, different types of reviews have different approaches. The term systematic review is often conflated with narrative literature reviews , which can lead to confusion and misunderstandings when seeking help with conducting them. This table helps clarify the differences.

  • Last Updated: Feb 12, 2024 5:59 PM
  • URL: https://libguides.ohsu.edu/systematic-reviews


World Conference on Qualitative Research

WCQR 2022: Computer Supported Qualitative Research, pp. 194–210

How to Operate Literature Review Through Qualitative and Quantitative Analysis Integration?

  • Eduardo Amadeu Dutra Moresi ORCID: orcid.org/0000-0001-6058-3883
  • Isabel Pinho ORCID: orcid.org/0000-0003-1714-8979
  • António Pedro Costa ORCID: orcid.org/0000-0002-4644-5879

  • Conference paper
  • First Online: 05 May 2022


Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 466).

Usually, a literature review takes time and is a demanding step in any research project. The proposal presented in this article intends to structure this work in a way that is organised and transparent for all project participants and supports the structured elaboration of its report. Integrating qualitative and quantitative analysis provides opportunities to carry out a solid, practical, and in-depth literature review. The purpose of this article is to present a guide that explores the potential of integrating qualitative and quantitative analysis to develop a solid and replicable literature review. The paper proposes an integrative approach comprising six steps: 1) research design; 2) data collection for bibliometric analysis; 3) search string refinement; 4) bibliometric analysis; 5) qualitative analysis; and 6) reporting and dissemination of research results. These guidelines can facilitate the bibliographic analysis process and the selection of a sample of relevant articles. Once the sample of publications is defined, it is possible to conduct a deep analysis through content analysis. Software tools such as R Bibliometrix, VOSviewer, Gephi, yEd, and webQDA can support the collection, analysis, and reporting processes. Interpreting the bibliometric results makes it easier to select a sample of relevant literature from a large amount of data. Specifying the methodology allows the literature review to be replicated and updated in an interactive, systematic, and collaborative way, giving a more transparent and organised approach to the review.
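As a minimal illustration of the bibliometric side of such an approach, co-word analysis at its simplest reduces to counting how often pairs of author keywords co-occur across publications. The sketch below uses invented records; tools such as VOSviewer and the R Bibliometrix package build and visualize these co-occurrence networks at scale.

```python
# Toy sketch of co-word analysis: count how often each pair of author
# keywords co-occurs across a set of publication records. The records
# are invented for illustration; real analyses start from database
# exports (e.g., Web of Science or Scopus).
from collections import Counter
from itertools import combinations

publications = [
    {"keywords": ["bibliometrics", "science mapping", "co-word analysis"]},
    {"keywords": ["bibliometrics", "science mapping"]},
    {"keywords": ["qualitative analysis", "content analysis"]},
]

cooccurrence = Counter()
for pub in publications:
    # sorted() makes each pair order-independent before counting
    for pair in combinations(sorted(pub["keywords"]), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("bibliometrics", "science mapping")])  # prints 2
```

The resulting counter is effectively a weighted edge list, which is why the output of this step feeds naturally into network tools such as Gephi or yEd mentioned in the abstract.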

  • Quantitative analysis
  • Qualitative analysis
  • Bibliometric analysis
  • Science mapping



Download references

Author information

Authors and Affiliations

Catholic University of Brasília, Brasília, DF, 71966-700, Brazil

Eduardo Amadeu Dutra Moresi

University of Aveiro, 3810-193, Aveiro, Portugal

Isabel Pinho & António Pedro Costa


Corresponding author

Correspondence to Eduardo Amadeu Dutra Moresi .

Editor information

Editors and Affiliations

Department of Education and Psychology, University of Aveiro, Aveiro, Portugal

António Pedro Costa

António Moreira

Department of Didactics, Organization and Research Methods, University of Salamanca, Salamanca, Spain

Maria Cruz Sánchez‑Gómez

Adventist University of Africa, Nairobi, Kenya

Safary Wa-Mbaleka


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Moresi, E.A.D., Pinho, I., Costa, A.P. (2022). How to Operate Literature Review Through Qualitative and Quantitative Analysis Integration?. In: Costa, A.P., Moreira, A., Sánchez‑Gómez, M.C., Wa-Mbaleka, S. (eds) Computer Supported Qualitative Research. WCQR 2022. Lecture Notes in Networks and Systems, vol 466. Springer, Cham. https://doi.org/10.1007/978-3-031-04680-3_13


DOI: https://doi.org/10.1007/978-3-031-04680-3_13

Published: 05 May 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-04679-7

Online ISBN: 978-3-031-04680-3

eBook Packages: Intelligent Technologies and Robotics (R0)


Understanding the impact of children's and young people's self-harm on parental well-being: a systematic literature review of qualitative and quantitative findings

Affiliations

  • 1 School of Psychology, Cardiff University, Cardiff, UK.
  • 2 Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK.
  • 3 School of Applied Sciences, University of the West of England, Bristol, UK.
  • 4 Oxford Health NHS Foundation Trust, Oxford, UK.
  • 5 School of Health Sciences, University of Surrey, Guildford, UK.
  • 6 School of Social Sciences, University of the West of England, Bristol, UK.
  • PMID: 38362819
  • DOI: 10.1111/camh.12692

Background: Self-harm in children and young people is increasing. Parents are vital in supporting young people; however, parents may experience distress linked to the self-harm. Previous reviews have highlighted the emotional impact and the need for information and support, but have not elucidated the relationships between these themes, nor examined the quantitative data on parents' well-being.

Methods: We conducted a mixed methods review, with qualitative meta-synthesis focusing on links between themes and quantitative synthesis of parental well-being findings, including pooled means. PsycInfo, Medline, EMBASE, AMED, CINHAL and Web of Science were searched to identify relevant records. References of included studies were also searched. Every abstract was screened by two authors. Data were extracted by one author and checked by another.

Results: We identified 39 reports of 32 studies: 16 with qualitative data and 17 with quantitative data (one had both). Qualitative findings showed how parents' emotions were associated with their knowledge and beliefs about self-harm. Parents' emotions often signalled a need for self-care, but feelings of guilt reduced engagement in self-care. How parents supported their young person was linked to their knowledge and the management of their own emotions, and influenced whether they could engage in self-care. Quantitative findings were mixed but suggested poor general mental health amongst these parents.

Conclusions: Further good quality quantitative studies are needed, with measurement of psychological mechanisms that may underpin parental distress. Current evidence supports peer-support and interventions that go beyond information provision to address the connected factors of knowledge, emotion, self-care, and parenting behaviours.

Keywords: Systematic review; children; parents; self injury; self-harm; young people.

© 2024 Association for Child and Adolescent Mental Health.


Grants and funding

  • ES/S004726/2/Emerging Minds Network and the UKRI Grant

  • Review Article
  • Open access
  • Published: 20 February 2024

What is the power of a genomic multidisciplinary team approach? A systematic review of implementation and sustainability

  • Alan Ma (ORCID: orcid.org/0000-0002-9293-4753) 1,2,3,
  • Rosie O’Shea 1,
  • Laura Wedd 2,3,
  • Claire Wong (ORCID: orcid.org/0000-0002-3129-3438) 1,2,
  • Robyn V Jamieson (ORCID: orcid.org/0000-0002-7285-0253) 1,2,3 &
  • Nicole Rankin 4,5

European Journal of Human Genetics (2024)


  • Genetic testing
  • Medical genomics

Due to the increasing complexity of genomic data interpretation, and the need for close collaboration between clinical, laboratory, and research expertise, genomics often requires a multidisciplinary team (MDT) approach. This systematic review aims to establish the evidence for effectiveness of the genomic multidisciplinary team, and the implementation components of this model that can inform precision care. MEDLINE, Embase and PsycINFO databases were searched in 2022 and 2023. We included qualitative and quantitative studies of the genomic MDT, including observational and cohort studies, for diagnosis and management, and implementation outcomes of effectiveness, adoption, efficiency, safety, and acceptability. A narrative synthesis was mapped against the Genomic Medicine Integrative Research framework. 1530 studies were screened, and 17 papers met selection criteria. All studies pointed towards the effectiveness of the genomic MDT approach, with a 10-78% diagnostic yield depending on clinical context, and an increased yield of 6-25% attributed to the MDT. The genomic MDT was found to be highly efficient in interpreting variants of uncertain significance, timely in delivering rapid results, impactful on management, and acceptable for adoption by a wide variety of subspecialists. Only one study utilized an implementation science-based approach. The genomic MDT approach appears to be highly effective and efficient, facilitating higher diagnostic rates and improved patient management. However, key gaps remain in health systems readiness for this collaborative model, and there is a lack of implementation science-based research, especially addressing cost, sustainability, scale-up, and equity of access.

Introduction

The genomic era has expanded the availability of diagnostic testing, management and options for precision care for many genetic conditions. Diagnostic yields have improved by 40-80% depending on the condition, unlocking new genetic diagnoses, family planning options and access to advanced therapeutics, including gene therapies [ 1 ]. The publication of the American College of Medical Genetics (ACMG) guidelines on variant interpretation [ 2 ] brought about increased uniformity in laboratory reporting of results, and efforts to enhance pathogenicity calling and reduce the burden of variants of uncertain significance (VUS).

Much of the current challenge lies in the interpretation of novel variants and genes, assigning pathogenicity and clinical correlation across a multitude of scenarios, often requiring interdisciplinary expertise in molecular, clinical and functional genomics, as well as in organ-specific areas such as oncology, cardiology, nephrology, ophthalmology, and neurology. In addition, several cutting-edge applications of genomics, such as advanced therapeutics, prenatal and acute care genomics, require additional expertise and close collaboration between clinical and laboratory staff, as well as researchers, in order to maximise the benefits of genomic testing and diagnosis. Due to this complexity and these knowledge-specific requirements, multidisciplinary genomic teams (hereafter ‘MDT’) have been increasingly utilized and recommended in a number of guidelines [ 3 , 4 ]. This follows the example of similar complex team approaches in disciplines such as oncology.

However, despite many recommendations for a genomic MDT approach in guidelines and position statements, there remains a paucity of evidence about ideal approaches to multidisciplinary care in the genomics field. Due to the rapidity of advances in genomic/precision medicine, evidence-based models of the ‘best practice’ approach for the distinct and complex needs of genomic medicine do not yet exist. International studies describe an insufficient genomics workforce to meet demand [ 5 , 6 ], and models that engage highly specialized clinicians and scientists in yet more meetings and discussions raise the question of whether MDTs are the most effective and efficient use of limited time and resources. In addition, few studies have evaluated the characteristics and factors that promote effective genomic MDTs and their impact on patient, health service, and implementation level outcomes, such as acceptability, feasibility, adoption and sustainability [ 7 ].

A wealth of literature exists in non-genetics fields regarding the success and effectiveness of MDT models [ 8 ]. MDTs have demonstrated improved outcomes in increasingly complex health care systems and are widely accepted as the ‘gold standard’ in cancer care delivery worldwide. By harnessing the combined expertise of disciplines including surgeons, oncologists, radiologists, pathologists, nurses and physicians to meet and discuss complex cancer care, optimal management plans and pathways are utilized, and this is now the standard of care in cancer [ 9 ]. Further research into effective cancer MDT practices has highlighted the importance of team relationships, communication, leadership, inclusiveness, and careful consideration of patient and psychosocial issues in team decision-making processes [ 8 ], although key evidence-practice gaps still exist, highlighting the need for more implementation research in this field [ 10 ].

Implementation research is especially needed in the rapidly developing field of genomics [ 1 ] and its application to clinical practice, such as the uptake of MDT approaches. Without an evaluation of the key implementation factors, the evidence for genomic MDTs as an effective intervention, and their potential for adaptability and scalability, is lacking. This review aims to use an implementation science framework to examine the core components of the genomic MDT that achieve a diagnostic rate and inform clinical care for patients undergoing genomic testing. Using implementation frameworks can improve the study of interventions such as MDTs, to understand the health services factors and outcomes that promote uptake and successful implementation [ 7 ]. The Genomic Medicine Integrative Research [GMIR] framework [ 11 ] was designed by the genomics community and the ‘Implementing Genomics into Practice’ [IGNITE] consortium [ 12 ] by adapting the well-used Consolidated Framework for Implementation Research [CFIR] constructs [ 13 , 14 ] to the genomic context. It has been utilized in implementation research as an adaptable tool for evaluating the clinical implementation of genomic programs [ 15 ]. GMIR gathers broad evidence of the context, process, interventions and outcomes of such programs to understand sustainability.

Our primary review question is: How effective is the coordinated genomic multidisciplinary care approach in facilitating genetic diagnostics and precision medicine?
Our secondary review questions are: What are the key implementation components and outcomes of the genomic multidisciplinary care model? What are the evidence gaps and determinants of practice that can inform a model of multidisciplinary care in genomics?

The findings of this study have been reported in line with Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement (Fig.  1 ) [ 16 ]. The study protocol was registered on PROSPERO ( https://www.crd.york.ac.uk/prospero/ ID CRD42022373661) on 17 th November 2022.

Figure 1. Flow diagram demonstrating the screening, removal, and selection of studies in this review, with reasons.

We searched MEDLINE, Embase, PsycINFO databases for papers published after 2010 that were in the English language. Search terms (Supp. Table  1 ) were developed according to the Population, Intervention, Comparison and Outcome (PICO) with MESH and Emtree terms, focussing on the genomic literature with an interdisciplinary/multidisciplinary/shared decision making approach to care and communication. A preliminary search in November 2022 was followed by a repeat search once data extraction was complete in March 2023, to identify any new publications.

This systematic review seeks to evaluate the effectiveness of a coordinated genomic multidisciplinary team approach, which incorporates genetic/genomic expertise, medical subspecialists, and laboratory/scientific involvement for providing genomic diagnostics and medical management of genetic conditions.

The population consisted of both patients undergoing genomic testing in a medical setting for diagnosis, and the clinicians referring these patients for genomic diagnostic opinion and management.

The intervention is a coordinated multidisciplinary care approach to genomics, involving close collaboration (either virtual or in person via meetings) of both:

  • Genetics/genomics expertise (usually clinical geneticists, molecular genetics laboratory scientists and genetic counsellors), identifying genetic diagnoses (genotyping) for patients, and
  • Subspecialist clinicians and subject matter experts (from varied disciplines) working in conjunction with genomics to identify clinical diagnoses (phenotyping) and management implications

The search comparator/control: in comparison to the coordinated multidisciplinary approach to genomic diagnostics, the main alternative is ‘standard care’ via individual clinicians, laboratories and genetics services. Many studies (especially qualitative ones) may not include a comparator or control group.

Main outcomes reported by the studies include:

  • Evidence on the effectiveness of the genomic multidisciplinary care approach in facilitating genetic diagnostics, precision medicine, and management outcomes
  • Clinical service-level outcomes such as efficiency, safety, equity, and timeliness
  • Implementation outcomes including acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, and sustainability

Types of study included

We included qualitative and quantitative studies of the genomic MDT approach, including observational and cohort studies. Purely descriptive studies were included if they were primarily describing a coordinated multidisciplinary team approach to genomic diagnostics and precision care, the focus of this review.

Studies were excluded where they did not demonstrate a focus on the coordinated MDT approach to genomic diagnostics and precision medicine, such as:

  • Studies that focussed mainly on genetic counselling, clinical management or surgical decision-making aspects without addressing genomic diagnostic issues
  • Papers that were conference proceedings, case reports, systematic reviews, editorials or commentaries

Study selection

Covidence ( www.covidence.org ) was utilized to import all studies for abstract screening by three authors against the key inclusion and exclusion criteria.

The criteria were initially piloted by two authors for the first 20 articles, then further refined after discussion of discrepancies, and adjudication with a third author.

Studies that initially met the selection criteria were retained for eligibility checks by two authors and all reasons for excluding were documented in Covidence. Once authors reached consensus on screened abstracts, these proceeded to full text review. Two authors conducted the full text review to ensure consistency according to the above criteria, and final papers were selected for data extraction. Any conflicts were resolved by discussion and cross-checking with a third author.

Data extraction

A data extraction form was developed by one author and refined in consultation with the study team. Data were extracted for the following fields (Supp. Table  2 ): population, intervention (adapted criteria from the TIDieR checklist [ 17 ] and StaRI guidelines [ 18 ]), use of an implementation framework, study design, setting, and intervention outcomes mapped to Proctor et al.’s evaluative framework outcomes at the service and implementation levels [ 7 ].

Proctor’s evaluative framework consists of healthcare service outcomes (efficiency, safety, effectiveness, equity, patient centeredness, timeliness) and implementation outcomes (acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, sustainability). These are well-used constructs in the implementation science literature, appearing since 2011 in over 400 papers as a standard measure of implementation outcomes [ 19 ]. These measures, and their definitions in the genomic MDT context, were further characterized by two authors through GMIR domain mapping prior to extraction, to understand the genomic MDT outcomes (Supp. Table S 2 ).
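Proctor's two-level outcome taxonomy above lends itself to a simple lookup when tagging extracted findings. The following is a purely illustrative sketch (the helper function and its name are ours, not from the paper); the construct lists come directly from the text:

```python
# Proctor et al.'s outcome constructs, split by level as described above.
PROCTOR_OUTCOMES = {
    "service": {"efficiency", "safety", "effectiveness", "equity",
                "patient centeredness", "timeliness"},
    "implementation": {"acceptability", "adoption", "appropriateness", "cost",
                       "feasibility", "fidelity", "penetration", "sustainability"},
}

def outcome_level(construct):
    """Return the level ('service' or 'implementation') a construct belongs to."""
    for level, constructs in PROCTOR_OUTCOMES.items():
        if construct in constructs:
            return level
    raise ValueError(f"Unknown construct: {construct}")

print(outcome_level("timeliness"))       # → service
print(outcome_level("sustainability"))   # → implementation
```

A tagging step like this makes the later narrative synthesis auditable, since every extracted finding carries an explicit construct and level.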

While Proctor et al. provide a comprehensive overview of service and implementation level factors, the GMIR framework was used for its specific application in the genomics field. In particular, it takes into account the contextual (system and clinician), interventional, process, and broader outcomes (health policy and economic utility) factors relevant to genomic medicine.

Two authors independently extracted the data from 20% of studies each, selecting different study designs for consistency across types, and compared notes to minimize bias and improve accuracy, with discrepancies resolved via discussion with a third author. Once consistent extraction was achieved, one author completed data extraction for the remaining 60% of articles.

Risk of bias/quality

As both qualitative and quantitative studies were included, the QualSyst Assessment Tool [ 20 ] was used to assess quality. One author assessed all studies for risk of bias and a second author assessed 50% of the studies; discrepancies in scoring were resolved by discussion, with a third author adjudicating.

Strategy for data synthesis

A narrative synthesis was performed using both the Proctor [ 7 ] and GMIR [ 11 ] frameworks. The first step in the synthesis was based on the Proctor et al. evaluative framework outcomes [ 7 ] at the service and implementation levels, with further subthemes developed from this structure (Tables S 2 and S 3 ). This was performed by one author and checked by a second. Where possible, quantitative data on the effectiveness of the MDT approach were collected. Further analysis of implementation outcomes and evidence gaps was mapped to the GMIR framework [ 11 ], which is based on the Consolidated Framework for Implementation Research (CFIR) [ 13 ]. This was used to facilitate an understanding of the processes and core components of genomic MDTs, as defined at the contextual, intervention, process and outcomes levels (Supp. Table S 3 ). The GMIR provides a simple, clear, comprehensive framework for genomic research and interventions, and for understanding the processes that influence implementation. The narrative synthesis combined the summary and explanation of the intervention characteristics and potential effects.

A total of 2590 studies were imported to Covidence for screening (Fig.  1 ). Duplicates ( n  = 1060) were removed, leaving 1530 studies to be screened, of which 72 were agreed upon for full text review. A total of 55 studies were excluded, and 17 papers met the selection criteria. Of the 17 genomic MDT papers, five were qualitative, two were mixed methodology, and the remaining ten were mainly descriptive cohort studies with quantitative elements (Tables  1 , 2 , and Supplementary Table S 3 ).
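The screening counts above can be cross-checked with a few lines of arithmetic, mirroring the PRISMA flow. This is an illustrative sketch only (the function name is ours); all numbers are taken from the text:

```python
def prisma_flow(imported, duplicates, fulltext, excluded_fulltext):
    """Derive the records screened and the papers included from PRISMA counts."""
    screened = imported - duplicates          # records left after de-duplication
    included = fulltext - excluded_fulltext   # papers surviving full-text review
    return screened, included

# 2590 imported, 1060 duplicates, 72 to full-text review, 55 excluded at full text
screened, included = prisma_flow(2590, 1060, 72, 55)
print(screened, included)  # → 1530 17
```

The result matches the reported 1530 studies screened and 17 papers included, confirming the flow diagram is internally consistent.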

The healthcare contexts (Tables  1 , 2 ) of the papers revealed a predominance of Western, English-speaking developed nations, including the United Kingdom (5 studies), North America (4 studies), Europe (3 studies) and Australia (5 studies). The clinical contexts were varied and included eight main areas: rare diseases (4 studies), prenatal genomics (4 studies), acute paediatric care (3 studies), renal (2 studies), and brain malformations, epilepsy, cardiac, and cancer (1 study each). Most studies were conducted in Western specialised tertiary or national referral centres, with access to funded genomic testing, specialized expertise in subspecialty medicine, and high degrees of genomic literacy. Only one study [ 21 ] utilized an implementation framework, and almost all (except one qualitative study) focused exclusively on healthcare professionals as participants.

The quality scores for all studies were relatively low (average 0.5, range 0.2–0.7) due to a number of factors, especially the lack of a comparator, lack of blinding, and the relatively small sizes of the study cohorts. Some papers were closer to case reports [ 22 ] than research studies, and very few had a structured approach to analytical methods, control for confounding, or assessment of bias.
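As a hedged sketch of how QualSyst summary scores like the 0.2–0.7 range above are derived: in the QualSyst checklist (Kmet et al.), each item is scored 2 (yes), 1 (partial) or 0 (no), "not applicable" items are dropped, and the summary score is the total divided by the maximum possible. The example item scores below are invented for illustration:

```python
def qualsyst_score(item_scores):
    """QualSyst summary score: sum of applicable item scores / (2 * their count).

    Items are scored 2 (yes), 1 (partial), 0 (no); None marks "not applicable".
    """
    applicable = [s for s in item_scores if s is not None]
    return sum(applicable) / (2 * len(applicable))

# A hypothetical study scoring mostly "partial" lands near the review's 0.5 average.
example = [2, 1, 1, 0, 1, None, 1, 1, 0, 1, 1, None, 1, 1]
print(round(qualsyst_score(example), 2))  # → 0.46
```

Scores near 0.5 on this scale reflect the pattern described: many partially met criteria, with comparators and blinding typically absent.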

The MDT’s role in facilitating genetic diagnosis, including variants of uncertain significance

The genomic MDT intervention was found to be an effective and efficient use of resources and expertise, requiring close collaboration and specialist clinical access to maximise diagnostic yield (Fig.  2 , Table  2 ). All 17 studies pointed towards high effectiveness of the genomic MDT approach, utilizing interdisciplinary collaboration and expertise to achieve higher diagnostic yields in complex genomic cases. From the quantitative studies, genomic testing in the MDT context had a diagnostic yield of around 10-78% depending on the clinical context. Studies that specifically examined the MDT’s role claimed an increased yield of around 6-25% over ‘standard’ testing (Table  2 , Fig.  2 ).

Figure 2. Summary of the main findings of the 17 studies, outlining the context, interventions, processes and outcomes of the genomic MDT.

The MDT’s role in efficiently discussing and resolving VUS was a significant theme, as these can be very complex and require detailed, time-consuming scientific and clinical correlation. The MDT approach led to VUS resolution by maximising the clinical-scientific interaction for better variant interpretation [ 23 ], preventing false positives [ 24 ], and reducing the overall number of variants requiring curation, and the time spent on curation, by up to two-thirds [ 25 ]. Others [ 26 ] highlighted the MDT’s role in providing more accurate genotype-phenotype correlation due to expert clinical inclusion; recommended re-examination of patients with possible syndromal features led to a diagnostic ‘uplift’ in an additional 25% of patients in a small hypertrophic cardiomyopathy (HCM) cohort [ 26 ]. The renal genomic MDT was essential for pre-test gene curation, leading to improved reports in which the correct gene was reported and interpreted [ 27 , 28 ].

While some studies examined the MDT’s role, many did not differentiate between having an MDT model in place or not, and therefore did not include a comparator or control group in their design. This was mostly because the MDT was integral to the study process itself, for example where genomic sequencing and interpretation incorporated an integrated MDT. These studies attributed their higher yield to the MDT-integrated approach, which maximised collaboration between subspecialist, laboratory and genetics expertise in order to tackle the issues of triaging testing, careful selection of genes for analysis, timeliness, and clinical correlation of variants (Table  2 ).

The MDT promotes collaboration, improved patient management, and genomic mainstreaming

Apart from the impact on diagnostic yield and efficient resolution of VUS, the MDT was also found to have additional benefits for the healthcare system and clinicians. In qualitative studies with semi-structured interviews in the settings of general genomics [ 29 ], cancer [ 30 ], and acute care genomics [ 31 ], interdisciplinary collaboration was described as a major contributor to the MDT’s high degree of acceptability and adoption across these different clinical contexts (Table  2 ). This was highlighted in the acute care setting [ 31 , 32 ], as well as in the brain malformations study [ 22 ]. Beyond the increased diagnostic yield, the MDT process of clinically focussed discussion between subject matter experts (e.g. clinical, radiology, research, genomics) aided novel gene discovery, diagnostic improvement, and functional genomic options for patients (Fig.  2 , Table  2 ).

The genomic MDT intervention appeared to have an immediate impact on patient management, informing decision making in acute situations, such as in 57% of rapid exome sequencing (rWES) cases for sick neonates in intensive care [ 21 ]. In epilepsy, 92% of diagnoses had an immediate impact on clinical management [ 25 ]. In the renal clinic, an MDT-confirmed clinical diagnosis (34%) or reclassification of diagnosis (39%) aided management in 59% of patients, who avoided additional invasive testing such as biopsies or gained better surveillance and treatment options [ 27 ]. This was quantified in direct cost savings: sequencing cost $AUD2300, and the invasive renal biopsies it avoided cost up to double that amount. In the acute cohort, an estimated total of $AUD534K in savings was made, at a cost of $AUD13,388 per diagnosis [ 21 ].
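The renal cost framing above reduces to simple arithmetic. This is an illustrative sketch only (the helper is ours, not from the paper); the figures mirror the text, where sequencing cost $AUD2300 and an avoided biopsy cost up to double that:

```python
def net_saving_per_patient(test_cost, avoided_procedure_cost):
    """Net saving when a genomic test replaces a more expensive invasive procedure."""
    return avoided_procedure_cost - test_cost

sequencing = 2300            # $AUD, renal study sequencing cost
biopsy = 2 * sequencing      # "up to double the cost" of sequencing
print(net_saving_per_patient(sequencing, biopsy))  # → 2300
```

At the upper bound, each avoided biopsy roughly pays for the sequencing itself, which is how such MDT-driven reclassifications translate into the cohort-level savings reported.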

Overall, the role of the MDT was to achieve better resolution of genetic variants in the correct clinical context, drawing on the combined expertise of clinical and laboratory staff, researchers and clinical subspecialists. One study directly interviewed MDT members about the function of the MDT, asking about decision making and the factors that affected their genomic practice [ 29 ]. It found that the MDT functioned as a triaging/selection process for genomic sequencing and was beneficial especially for shared decision making, genomic outreach to other subspecialties, and education of non-genetics professionals. These are often cited as key factors for successful ‘mainstreaming’ of genomics [ 33 ].

The genomic MDT processes are safe, timely, and acceptable in a variety of clinical contexts

A number of service level outcomes were highlighted, including increased safety and timeliness of processes via the MDT (Table  2 ). Several examples showed how the absence of a genomic MDT affected patient safety. A prenatal study found that using commercial laboratories without a genomic MDT approach yielded an unacceptably high number of VUS, which affected patient management in the time-pressured prenatal scenario, with 33% of tests reporting one or more VUS [ 34 ]. Most of these VUS were considered by MDT review to be unrelated to the actual diagnosis. In a molecular tumour board (MTB) study [ 30 ], patients were not recognised as potentially harbouring germline cancer-predisposing variants due to the lack of genetic expertise and representation on the MTB. Without adequate genetic representation, there was significant confusion about interpretation of results and role delineation, and ultimately poor patient management. It therefore appears that it is not sufficient merely to have multiple members in an MDT; the correct subspecialty expertise must be present for the cases being discussed, or the team risks inadequate interpretation or implementation of outcomes, and patient harm.

The genomic MDT approach also improved the timeliness of results in time-pressured situations such as acute genomics and prenatal scenarios. In the prenatal situation this made a significant difference, as precise and rapid results are needed to guide Foetal Medicine Unit specialists and patient counselling, requiring a team approach for consensus and advice. Interestingly, over a 10-year period a pattern emerged [ 35 ]: more complex cases and results required tertiary-centre MDT review, while ‘simpler’ cases were increasingly handled locally, possibly demonstrating the changing roles of centres with and without MDT access.

In the acute care genomic scenario, the initial barrier of timeliness improved when MDT meetings were changed from every 16 days to weekly, in response to an implementation science-based qualitative study, with additional workforce and laboratory resourcing provided to meet tight deadlines [ 21 ]. This challenged existing models in genetics that were very much based on office hours and required time for detailed genetic counselling [ 31 ]. In addition, the MDT was found to be a facilitator of rapid processes due to the collaborative, communicative nature of the approach, and this became a key component of one acute care service [ 32 ].

Further to these service/patient-level outcomes, the genomic MDT implementation outcomes included high levels of acceptability and adoption, although some potential tensions were raised, especially around the roles of junior vs senior members of the MDT team, and the need for efficiency and high case volume sometimes overriding the quality review and clinical decision-making process. In the rare disease diagnostics context, the genomic MDT was found to have clear benefits for adoption, but there were potential sustainability issues due to the time and resources required for consent and discussion, often outside the time allocated for standard clinical consultations [ 29 ]. Similarly, the genomic MDT model was found to be appropriate in the renal context because it improved the genomic education of nephrologists [ 28 ]. High adoption was reported in the prenatal context due to faster and more accurate diagnostic yield impacting management [ 36 ].

One study quantified the acceptability of genomic adoption by non-genetics healthcare professionals, showing a shift from 66% to 95% confidence in genomic testing (p = 0.004), a statistically and clinically significant result [25]. Qualitatively, the neurologists reported that the MDT approach with genetics, laboratory, and research input improved their education; helped them understand the complexities of genomic testing, variant curation, and interpretation; and facilitated comprehensive plans for the return of results to families and for management. This is significant because the main barriers to mainstreaming of genomics reported by non-genetic clinicians often include education, understanding, and interpretation of genomic results [33].

The genomic MDT approach appears to be highly effective and efficient, facilitating higher diagnostic rates and improved patient management. There were additional flow-on effects of improved acceptability of genomics, facilitating education and mainstreaming into the non-genetics workforce. However, significant evidence gaps remain regarding the actual costs, sustainability, and equity of access of the genomic MDT. A systematic implementation science-based approach to the genomic MDT is also required to ensure its adoption into different health contexts and to inform health policy and practice.

The genomic MDT is a model for interdisciplinary collaboration

The interdisciplinary genomic MDT provides a model for genomic medicine that incorporates interdisciplinary collaboration, with the most effective teams combining the relevant subspecialist, clinical genetics, genetic counsellor, and research/functional genomics expertise to maximise diagnostic yield and minimise VUS. This is particularly important given that some studies report as many as 36% of patients receiving a VUS from genomic testing [37]. VUS have been found to cause higher levels of anxiety and distress in some patients, and even uncertainty over management decisions [38]. In addition, VUS often end up being reclassified, many as benign, but this takes considerable time, both for laboratory technicians reviewing the evidence and data and for research, such as functional genomics and databases, to support reclassification [39]. Efforts to aid VUS interpretation include massive international population genomic databases, collaborative forums to refine variant curation and interpretation, and functional genomic research including machine learning and AI. In contrast, many of the studies included in this review reported a reduction in VUS, without the need for resource- and time-intensive international databases or functional genomics research, simply by incorporating the collaboration of laboratory scientific and clinical subspecialty expertise.

Genomic MDT cost, sustainability, equity and scale up

Although most studies have favourably portrayed the impact of the genomic MDT, there are a number of evidence gaps including the sustainability, costs, equity of access, and scale up of genomic MDTs. This has important implications for precision medicine, as genomics and advanced therapies continue to ‘mainstream’ into standard healthcare [ 40 ].

In terms of costs, only three studies analysed the actual cost or cost-effectiveness of the MDT approach [21, 27, 41]. These studies highlighted cost savings to health systems from reaching rapid diagnoses and reducing the 'diagnostic odyssey', including invasive testing, but these are indirect savings rather than costs attributable directly to the MDT approach. Some issues were also raised, such as whether insurance would cover the cost of testing, raising equity and access barriers. None of these studies accurately quantified the exact costs, resources, and personnel required for the genomic MDT. Such evidence would more accurately reveal the resources required for this approach, recognising that for many services the MDT is not 'normal business' and is therefore not independently funded, unlike genomic testing and clinical services. However, the MDT could play a positive role in educating non-genetics clinicians so that simpler cases could be handled locally without MDT input, saving resources for more complex or time-critical cases. More research is required to fully elucidate the costs of an MDT approach and any potential savings to the healthcare system from this model.

Another evidence gap is sustainability, as MDTs often occurred in time-constrained environments, such as acute paediatrics and prenatal genomics. In these scenarios, the need for a rapid answer sometimes led to tensions and difficulties in feasibility for standard practice, such as the time required to consent and discuss cases with patients and families. While one purpose of the MDT was to educate and provide feedback to clinicians regarding results, and to discuss complex issues, these were often laid aside for the sake of efficiency [29, 41]. There was also a recognised 'blurring' between research and clinical work, and the additional workload of the MDT, often funded by research, was putting constraints on clinical services [29]. The ongoing sustainability of such an approach, combined with cost data, would inform health systems planning and policy around the MDT.

Implementation factors to consider for genomic MDT sustainability

There are some additional considerations for the successful implementation of a genomic MDT. Firstly, these studies were conducted almost exclusively in highly specialised Western tertiary centres, requiring access to genomic testing funding and expertise, including subspecialists and genomics laboratories, and sometimes even functional genomics research (Table 2, Supp. Table S3). This raises issues of equity, diversity, and adaptability to less complex, primary care, and non-tertiary settings. These MDTs were embedded within Western healthcare systems with clear networks of referrals, expertise, and laboratory access, and were mostly English speaking. Most importantly, access to and payment for genomic testing was assumed, whereas this is not always available, depending on insurance coverage, local availability, and funding mechanisms, especially in developing countries. This excludes a large proportion of worldwide healthcare systems and populations, and limits the generalisability of these studies to developing countries and non-English-speaking contexts. It also risks perpetuating pre-existing inequalities of access to genomics and research away from under-served populations, where arguably there is much greater clinical need [42]. In addition, the clinicians involved were highly qualified experts with high genomic literacy and resourcing; where such staff were not available, there were potential issues of safety [30].

Secondly, the genomic MDT is particularly relevant to the genomic 'mainstreaming' models being proposed worldwide, such as Genomics England [43] and Australian Genomics [44]. The MDT, with its concentrated expertise and collaborative model, could be a vital piece in efforts to enable non-genetics healthcare professionals to undertake genomics, by providing an intervention that is genomics informed, collaborative, and educational. This was found in one study in which an adult epilepsy genomic MDT improved clinician confidence to undertake genomics themselves through experiential learning [25]. This model has the potential to facilitate mainstreaming at scale and to address some of the workforce gaps in genetics worldwide. However, the equity, access, and diversity issues must be addressed in order to provide fair and equal access to the benefits of precision medicine for all.

Finally, we identified a further gap in the lack of patient and family data on satisfaction with the MDT approach. No studies directly asked patients and families whether they wanted an MDT approach, with multiple people discussing their case and genetic information. This has also been identified previously as a gap in the cancer MDT literature [10]. Privacy concerns, litigation factors, and consumer involvement are potential areas for future study, as there is no consumer voice in any of these studies, especially regarding the optimal discussion and disclosure of results and the direct impact of the MDT on patients themselves.

More implementation science based research is required in genomics and precision medicine

A major weakness of these studies was that they were mostly descriptive cohorts, without use of a robust framework, model, or theory for implementation. This can be said of the genomic literature in general, where only a small fraction (1.75%) of papers are in the field of implementation, limiting their utility and ability to translate research into clinical practice [45]. While qualitative and descriptive studies have a key role in research and have uncovered many important aspects of the MDT, most studies reported solely on genomic diagnostic yield and provide no evidence on implementing the genomic MDT into 'best practice' in our complex health systems. The quality scores of these studies were also very low, particularly for a few that were mainly case vignettes. Only one implementation science-based study took a systems approach [21] to address barriers to the acute paediatric MDT. Questions that still need answering include the optimal makeup of a genomic MDT for maximum impact, its cost-effectiveness, the MDT processes of discussion, documentation, and follow-up, and the patient/consumer perspective on these. In addition, the secondary outcomes of the MDT could include improved collaboration, research, education, and 'mainstreaming' of genomics, and these need to be studied both in terms of effectiveness and in terms of their scalability and adaptability for precision medicine.

While there are major gaps in the evidence, the studies reviewed all point towards the benefits of the genomic MDT and the need for such an approach for more effective and efficient patient diagnosis and management. The MDT harnesses improved collaboration and discussion of complex clinical scenarios and genomic results (especially VUS) for improved diagnostic rates and patient care. More quality research drawing on implementation science methods is needed to evaluate the full potential of genomic MDTs. Such research could propose how best to adapt interventions for local use, study scalability and sustainability, and examine the health systems factors that can enable scale-up while promoting access and equity.

References

O’Shea R, Ma AS, Jamieson RV, Rankin NM. Precision medicine in Australia: now is the time to get it right. Med J Aust. 2022;217:559–63.


Richards S, Aziz N, Bale S, Bick D, Das S, Gastier-Foster J, et al. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genet Med. 2015;17:405–24.

RANZCO 2020 Guidelines for the assessment and management of patients with inherited retinal degenerations. ( https://ranzco.edu/policies_and_guideli/guidelines-for-the-assessment-and-management-of-patients-with-inherited-retinal-degenerations-ird/ ).

Long JC, Gaff C, Clay C. Transforming the genomics workforce to sustain high value care. Australia: Deeble Institute; 24 March 2022.

Dragojlovic N, Borle K, Kopac N, Ellis U, Birch P, Adam S, et al. The composition and capacity of the clinical genetics workforce in high-income countries: a scoping review. Genet Med. 2020;22:1437–49.


Geneticist Workforce Faces Critical Shortage. Am J Med Genet A. 2021;185:2290-1.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.

Soukup T, Lamb BW, Arora S, Darzi A, Sevdalis N, Green JS. Successful strategies in implementing a multidisciplinary team working in the care of patients with cancer: an overview and synthesis of the available literature. J Multidiscip Health. 2018;11:49–61.


Pillay B, Wootten AC, Crowe H, Corcoran N, Tran B, Bowden P, et al. The impact of multidisciplinary team meetings on patient assessment, management and outcomes in oncology settings: A systematic review of the literature. Cancer Treat Rev. 2016;42:56–72.

Rankin NM, Fradgley EA, Barnes DJ. Implementation of lung cancer multidisciplinary teams: a review of evidence-practice gaps. Transl Lung Cancer Res. 2020;9:1667–79.

Horowitz CR, Orlando LA, Slavotinek AM, Peterson J, Angelo F, Biesecker B, et al. The genomic medicine integrative research framework: a conceptual framework for conducting genomic medicine research. Am J Hum Genet. 2019;104:1088–96.


Orlando LA, Sperber NR, Voils C, Nichols M, Myers RA, Wu RR, et al. Developing a common framework for evaluating the implementation of genomic medicine interventions in clinical care: the IGNITE Network’s Common Measures Working Group. Genet Med. 2018;20:655–63.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated consolidated framework for implementation research based on user feedback. Implement Sci. 2022;17:75.

O’Shea R, Crook A, Jacobs C, Kentwell M, Gleeson M, Tucker KM, et al. A mainstreaming oncogenomics model: improving the identification of Lynch syndrome. Front Oncol. 2023;13:1140135.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Rev Esp Cardiol (Engl Ed). 2021;74:790–9.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for reporting implementation studies (StaRI) statement. BMJ. 2017;356:i6795.

Proctor EK, Bunger AC, Lengnick-Hall R, Gerke DR, Martin JK, Phillips RJ, et al. Ten years of implementation outcomes research: a scoping review. Implement Sci. 2023;18:31.

Kmet L, Lee R. Standard quality assessment criteria for evaluating primary research papers from a variety of fields. HTA Initiative. 2004;2:4–5.

Lunke S, Tan NB, Stapleton R, Kumble S, Phelan DG, Chong B, et al. Meeting the challenges of implementing rapid genomic testing in acute pediatric care. Genet Med. 2018;20:1554–63.

Smits DJ, Dekker J, Brooks AS, van Ham T, Lequin MH, Fornerod M, et al. Multidisciplinary interaction and MCD gene discovery. The perspective of the clinical geneticist. Eur J Paediatr Neurol. 2021;35:27–34.

Lazaridis KN, McAllister TM, Farrugia G, Cousin MA, Babovic-Vuksanovic D, Pichurin PN, et al. Outcome of whole exome sequencing for diagnostic odyssey cases of an individualized medicine clinic: The Mayo clinic experience. Mayo Clin Proc. 2016;91:297–307.

Marinakis NM, Svingou M, Veltra D, Kekou K, Kosma K, Tsoutsou E, et al. Phenotype-driven variant filtration strategy in exome sequencing toward a high diagnostic yield and identification of 85 novel variants in 400 patients with rare Mendelian disorders. Am J Med Genet Part A. 2021;185:2561–71.


Vadlamudi L, Bennett CM, Tom M, Abdulrasool G, Brion K, Lundie B, et al. A multi-disciplinary team approach to genomic testing for drug-resistant epilepsy patients-The GENIE Study. J Clin Med. 2022;11:4238.

Rupp S, Felimban M, Schranz D, Logeswaran T, Neuhauser C, Thul J, et al. Genetic basis of hypertrophic cardiomyopathy in children. Clin Res Cardiol. 2019;108:282–9.

Kerr PG, Ryan J, Jayasinghe K, Wardrop L, Stark Z, Lunke S, et al. Clinical impact of genomic testing in patients with suspected monogenic kidney disease. Genet Med. 2021;23:183–91.

Mallett AJ, McCarthy HJ, Ho G, Holman K, Farnsworth E, Patel C, et al. Massively parallel sequencing and targeted exomes in familial kidney disease can diagnose underlying genetic disorders. Kidney Int. 2017;92:1493–506.

Ormondroyd E, Mackley MP, Blair E, Craft J, Knight JC, Taylor J, et al. Insights from early experience of a Rare Disease Genomic Medicine Multidisciplinary Team: a qualitative study. Eur J Hum Genet. 2017;25:680–6.

Fishler KP, Breese EH, Walters-Sen L, McGowan ML. Experiences of a multidisciplinary genomic tumor board interpreting risk for underlying germline variants in tumor-only sequencing results. JCO Precis Oncol. 2019;3:1–8.

Lynch F, Nisselle A, Gaff CL, McClaren B. Rapid acute care genomics: challenges and opportunities for genetic counselors. J Genet Couns. 2021;30:30–41.

Hill M, Hammond J, Lewis C, Mellis R, Chitty LS, Clement E. Delivering genome sequencing for rapid genetic diagnosis in critically ill children: parent and professional views, experiences and challenges. Eur J Hum Genet. 2020;28:1529–40.

O’Shea R, Taylor N, Crook A, Jacobs C, Jung Kang Y, Lewis S, et al. Health system interventions to integrate genetic testing in routine oncology services: A systematic review. PLoS One. 2021;16:e0250379.

Cornthwaite M, Turner K, Armstrong L, Boerkoel CF, Chang C, Lehman A, et al. Impact of variation in practice in the prenatal reporting of variants of uncertain significance by commercial laboratories: Need for greater adherence to published guidelines. Prenat Diagn. 2022;42:1514–24.

Mone F, O’Connor C, Hamilton S, Quinlan-Jones E, Allen S, Marton T, et al. Evolution of a prenatal genetic clinic-A 10-year cohort study. Prenat Diagn. 2020;40:618–25.

Chandler N, Best S, Hayward J, Faravelli F, Mansour S, Kivuva E, et al. Rapid prenatal diagnosis using targeted exome sequencing: a cohort study to assess feasibility and potential impact on prenatal counseling and pregnancy management. Genet Med. 2018;20:1430–7.

Buys SS, Sandbach JF, Gammon A, Patel G, Kidd J, Brown KL, et al. A study of over 35,000 women with breast cancer tested with a 25-gene panel of hereditary cancer genes. Cancer. 2017;123:1721–30.

Mighton C, Shickh S, Uleryk E, Pechlivanoglou P, Bombard Y. Clinical and psychological outcomes of receiving a variant of uncertain significance from multigene panel testing or genomic sequencing: a systematic review and meta-analysis. Genet Med. 2021;23:22–33.

Burke W, Parens E, Chung WK, Berger SM, Appelbaum PS. The challenge of genetic variants of uncertain clinical significance: a narrative review. Ann Intern Med. 2022;175:994–1000.

Long JC, Gul H, McPherson E, Best S, Augustsson H, Churruca K, et al. A dynamic systems view of clinical genomics: a rich picture of the landscape in Australia using a complexity science lens. BMC Med Genomics. 2021;14:63.

Taylor J, Craft J, Blair E, Wordsworth S, Beeson D, Chandratre S, et al. Implementation of a genomic medicine multi-disciplinary team approach for rare disease in the clinical setting: a prospective exome sequencing case series. Genome Med. 2019;11:46.

Atutornu J, Milne R, Costa A, Patch C, Middleton A. Towards equitable and trustworthy genomics research. EBioMedicine. 2022;76:103879.

Simpson S, Seller A, Bishop M. Using the findings of a national survey to inform the work of England’s genomics education programme. Front Genet. 2019;10:1265.

Stark Z, Boughtwood T, Phillips P, Christodoulou J, Hansen DP, Braithwaite J, et al. Australian genomics: a federated model for integrating genomics into healthcare. Am J Hum Genet. 2019;105:7–14.

Roberts MC, Clyne M, Kennedy AE, Chambers DA, Khoury MJ. The current state of funded NIH grants in implementation science in genomic medicine: a portfolio analysis. Genet Med. 2019;21:1218–23.

Petrovski S, Aggarwal V, Giordano JL, Stosic M, Wou K, Bier L, et al. Whole-exome sequencing in the evaluation of fetal structural anomalies: a prospective cohort study. Lancet. 2019;393:758–67.


AM is the recipient of a Sydney Health Partners’ Research Translation Fellowship ( https://sydneyhealthpartners.org.au/work‐with‐us/research‐translation‐fellowships/ ), which funds part of his clinical time to perform implementation science-based research in genomics. Robyn V Jamieson co-chairs the Sydney Health Partners Clinical Academic Group in Genomics and Precision Medicine, which is focussed on implementation science research to enable genomic mainstreaming and multidisciplinary approaches to precision medicine. Open Access funding enabled and organized by CAUL and its Member Institutions.

Author information

Authors and affiliations

Specialty of Genomic Medicine, University of Sydney, Sydney, NSW, Australia

Alan Ma, Rosie O’Shea, Claire Wong & Robyn V Jamieson

Department of Clinical Genetics, Children’s Hospital at Westmead, The Sydney Children’s Hospitals Network, Sydney, NSW, Australia

Alan Ma, Laura Wedd, Claire Wong & Robyn V Jamieson

Eye Genetics Research Unit, Children’s Medical Research Institute, Sydney, NSW, Australia

Alan Ma, Laura Wedd & Robyn V Jamieson

Melbourne School of Population and Global Health, University of Melbourne, Melbourne, VIC, Australia

Nicole Rankin

Sydney School of Public Health, University of Sydney, Sydney, NSW, Australia


Contributions

AM conceptualised the review, and led the literature retrieval, screening, extraction and analysis, as well as preparation of the initial draft and revisions. ROS, LW, and CW contributed to the conceptualisation of the review, literature screening and analysis, and draft revisions. RJ and NR contributed to the conceptualisation of the review, overall project supervision, and draft revisions.

Corresponding author

Correspondence to Alan Ma .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

PRISMA checklist and supplementary tables.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Ma, A., O’Shea, R., Wedd, L. et al. What is the power of a genomic multidisciplinary team approach? A systematic review of implementation and sustainability. Eur J Hum Genet (2024). https://doi.org/10.1038/s41431-024-01555-5


Received : 13 June 2023

Revised : 07 December 2023

Accepted : 26 January 2024

Published : 20 February 2024

DOI : https://doi.org/10.1038/s41431-024-01555-5


Rapid reviews methods series: guidance on rapid qualitative evidence synthesis

  • Andrew Booth 1,2 ( http://orcid.org/0000-0003-4808-3880 ),
  • Isolde Sommer 3,4,
  • Jane Noyes 2,5,
  • Catherine Houghton 2,6,
  • Fiona Campbell 1,7
  • The Cochrane Rapid Reviews Methods Group and Cochrane Qualitative and Implementation Methods Group (CQIMG)
  • 1 EnSyGN Sheffield Evidence Synthesis Group, University of Sheffield, Sheffield, UK
  • 2 Cochrane Qualitative and Implementation Methods Group (CQIMG), London, UK
  • 3 Department for Evidence-based Medicine and Evaluation, University for Continuing Education Krems, Krems, Austria
  • 4 Cochrane Rapid Reviews Group & Cochrane Austria, Krems, Austria
  • 5 Bangor University, Bangor, UK
  • 6 University of Galway, Galway, Ireland
  • 7 University of Newcastle upon Tyne, Newcastle upon Tyne, UK
  • Correspondence to Professor Andrew Booth, University of Sheffield, Sheffield, UK; a.booth@sheffield.ac.uk

This paper forms part of a series of methodological guidance from the Cochrane Rapid Reviews Methods Group and addresses rapid qualitative evidence syntheses (QESs), which use modified systematic, transparent and reproducible methods to accelerate the synthesis of qualitative evidence when faced with resource constraints. This guidance covers the review process as it relates to synthesis of qualitative research. ‘Rapid’ or ‘resource-constrained’ QES require use of templates and targeted knowledge user involvement. Clear definition of perspectives and decisions on indirect evidence, sampling and use of existing QES help in targeting eligibility criteria. Involvement of an information specialist, especially in prioritising databases, targeting grey literature and planning supplemental searches, can prove invaluable. Use of templates and frameworks in study selection and data extraction can be accompanied by quality assurance procedures targeting areas of likely weakness. Current Cochrane guidance informs selection of tools for quality assessment and of synthesis method. Thematic and framework synthesis facilitate efficient synthesis of large numbers of studies or plentiful data. Finally, judicious use of the Grading of Recommendations Assessment, Development and Evaluation approach for assessing the Confidence of Evidence from Reviews of Qualitative research, and of software as appropriate, helps to achieve a timely and useful review product.

  • Systematic Reviews as Topic
  • Patient Care

Data availability statement

No data are available. Not applicable. All data are from published articles.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjebm-2023-112620


WHAT IS ALREADY KNOWN ON THIS TOPIC

Rapid qualitative evidence synthesis (QES) is a relatively recent innovation in evidence synthesis and few published examples currently exist.

Guidance for authoring a rapid QES is scattered and requires compilation and summary.

WHAT THIS STUDY ADDS

This paper represents the first attempt to compile current guidance, illustrated by the experience of several international review teams.

We identify features of rapid QES methods that could be accelerated or abbreviated and where methods resemble those for conventional QESs.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

This paper offers guidance for researchers when conducting a rapid QES and informs commissioners of research and policy-makers what to expect when commissioning such a review.

Introduction

This paper forms part of a series from the Cochrane Rapid Reviews Methods Group providing methodological guidance for rapid reviews. While other papers in the series 1–4 focus on generic considerations, we aim to provide in-depth recommendations specific to a resource-constrained (or rapid) qualitative evidence synthesis (rQES). 5 This paper is accompanied by recommended resources ( online supplemental appendix A ) and an elaboration with practical considerations ( online supplemental appendix B ).

Supplemental material

The role of qualitative evidence in decision-making is increasingly recognised. 6 This, in turn, has led to appreciation of the value of qualitative evidence syntheses (QESs) that summarise findings across multiple contexts. 7 Recognition of the need for such syntheses to be available at the time most useful to decision-making has, in turn, driven demand for rapid qualitative evidence syntheses. 8 The breadth of potential rQES mirrors the versatility of QES in general (from focused questions to broad overviews) and outputs range from descriptive thematic maps through to theory-informed syntheses (see table 1 ).

Table 1: Glossary of important terms (alphabetically)

As with other resource-constrained reviews, no one size fits all. A team should start by specifying the phenomenon of interest, the review question, 9 the perspectives to be included 9 and the sample to be determined and selected. 10 Subsequently, the team must finalise the appropriate choice of synthesis. 11 Above all, the review team should consider the intended knowledge users, 3 including requirements of the funder.

An rQES team, in particular, cannot afford any extra time or resource requirements that might arise from either a misunderstanding of the review question, an unclear picture of user requirements or an inappropriate choice of methods. The team seeks to align the review question and the requirements of the knowledge user with available time and resources. They also need to ensure that the choice of data and choice of synthesis are appropriate to the intended ‘knowledge claims’ (epistemology) made by the rQES. 11 This involves the team asking ‘what types of data are meaningful for this review question?’, ‘what types of data are trustworthy?’ and ‘is the favoured synthesis method appropriate for this type of data?’. 12 This paper aims to help rQES teams to choose methods that best fit their project while understanding the limitations of those choices. Our recommendations derive from current QES guidance, 5 evidence on modified QES methods, 8 13 and practical experience. 14 15

This paper presents an overview of considerations and recommendations as described in table 2 . Supplemental materials including additional resources details of our recommendations and practical examples are provided in online supplemental appendices A and B .

Table 2: Recommendations for resource-constrained qualitative evidence synthesis (rQES)

Setting the review question and topic refinement

Rapid reviews summarise information from multiple research studies to produce evidence for ‘the public, researchers, policymakers and funders in a systematic, resource-efficient manner’. 16 Involvement of knowledge users is critical. 3 Given time constraints, individual knowledge users could be asked to give feedback only on very specific decisions and tasks or on selected sections of the protocol. Specifically, whenever a QES is abbreviated or accelerated, a team should ensure that the review question is agreed by a minimum number of knowledge users with expertise or experience that reflects all the important review perspectives and with authority to approve the final version 2 5 11 (table 2, item R1).

Involvement of topic experts can ensure that the rQES is responsive to need. 14 17 One Cochrane rQES saved considerable time by agreeing the review topic within a single meeting and one-phase iteration. 9 Decisions on topics to be omitted are also informed by a knowledge of existing QESs. 17

An information specialist can help to manage the quantity and quality of available evidence by setting conceptual boundaries and logistic limits. A structured question format, such as Setting-Perspective-Interest (phenomenon of)-Comparison-Evaluation (SPICE) or Population-Interest (phenomenon of)-Context (PICo), helps in communicating the scope and, subsequently, in operationalising study selection. 9 18

Scoping (of review parameters) and mapping (of key types of evidence and likely richness of data) help when planning the review. 5 19 The option to choose purposive over comprehensive sampling approaches, as offered by standard QES, may be particularly helpful in the context of a rapid QES. 8 Once a team knows the approximate number and distribution of studies (perhaps mapping them against country, age, ethnicity, etc), they can decide whether or not to use purposive sampling. 12 An rQES for the WHO combined purposive with variation sampling. Sampling proceeded in two stages: first reducing the initial number of studies to a more manageable sampling frame, and then sampling approximately a third of the remaining studies from within that frame. 20

Sampling may target richer studies and/or privilege diversity. 8 21 A rich qualitative study typically illustrates findings with verbatim extracts from transcripts from interviews or textual responses from questionnaires. Rich studies are often found in specialist qualitative research or social science journals. In contrast, less rich studies may itemise themes with an occasional indicative text extract and tend to summarise findings. In clinical or biomedical journals, less rich findings may be placed within a single table or box.

No rule exists on an optimal number of studies: too many studies make it challenging to ‘maintain insight’, 22 while too few cannot sustain rigorous analysis. 23 Guidance on sampling is available from the forthcoming Cochrane-Campbell QES Handbook.

A review team can use templates to fast-track writing of a protocol. The protocol should always be publicly available ( table 2 , item R2). 24 25 Formal registration may require that the team has not commenced data extraction, but should be considered if it does not compromise the rQES timeframe. Time pressures may require that methods are left suitably flexible to allow well-justified changes as a detailed picture of the studies and data emerges. 26 The first Cochrane rQES drew heavily on text from a joint protocol/review template previously produced within Cochrane. 24

Setting eligibility criteria

An rQES team may need to limit the number of perspectives, focusing on those most important for decision-making 5 9 27 ( table 2 , item R3). Beyond the patients/clients, each additional perspective (eg, family members, health professionals, other professionals) multiplies the additional effort involved.

A rapid QES may require strict date and setting restrictions 17 and language restrictions that accommodate the specific requirements of the review. Specifically, the team should consider whether changes in context over time or substantive differences between geographical regions could justify a narrower date range or limited coverage of countries and/or languages. The team should also decide if ‘indirect evidence’ is to substitute for the absence of direct evidence. An rQES typically focuses on direct evidence, except when only indirect evidence is available 28 ( table 2 , item R4). Decisions on relevance are challenging: precautions for swine influenza may inform precautions for bird influenza, 28 and a smoking ban may operate similarly to seat belt legislation. A review team should identify where such shared mechanisms might operate. 28 An rQES team must also decide whether to use frameworks or models to focus the review. Theories may be unearthed within the topic search or be already known to team members, for example, the Theory of Planned Behaviour. 29

Options for managing the quantity and quality of studies and data emerge during scoping (see above). In summary, the review team should consider privileging rich qualitative studies, 2 consider a stepwise approach to inclusion of qualitative data and explore the possibility of sampling ( table 2 , item R5). For example, where data are plentiful, an rQES may be limited to qualitative research and/or mixed methods studies. Where data are less plentiful, surveys or other qualitative data sources may need to be included. Where plentiful reviews already exist, a team may decide to conduct a review of reviews 5 by including multiple QES within a mega-synthesis 28 29 ( table 2 , item R6).

Searching for QES merits its own guidance; 21–23 30 this section reinforces important considerations from guidance specific to qualitative research. Generic guidance for rapid reviews in this series broadly applies to rapid QESs. 1

In addition to journal articles, by far the most plentiful source, qualitative research is found in book chapters, theses and in published and unpublished reports. 21 Searches to support an rQES can (a) limit the number of databases searched, deliberately selecting databases from diverse disciplines, (b) use abbreviated study filters to retrieve qualitative designs and (c) employ high yield complementary methods (eg, reference checking, citation searching and Related Articles features). An information specialist (eg, librarian) should be involved in prioritising sources and search methods ( table 2 , item R7). 11 14

According to empirical evidence, optimal database combinations include Scopus plus CINAHL or Scopus plus ProQuest Dissertations and Theses Global (two-database combinations) and Scopus plus CINAHL plus ProQuest Dissertations and Theses Global (three-database combination), with both choices retrieving between 89% and 92% of relevant studies. 30

If resources allow, searches should include one or two specialised databases ( table 2 , item R8) from different disciplines or contexts 21 (eg, social science databases, specialist discipline databases or regional or institutional repositories). Even when resources are limited, the information specialist should factor in time for peer review of at least one search strategy ( table 2 , item R9). 31 Searches for ‘grey literature’ should selectively target appropriate types of grey literature (such as theses or process evaluations) and supplemental searches, including citation chaining or Related Articles features ( table 2 , item R10). 32 The first Cochrane rQES reported that searching reference lists of key papers yielded an extra 30 candidate papers for review. However, the team documented exclusion of grey literature as a limitation of their review. 15

Study selection

Consistency in study selection is achieved by using templates, by gaining a shared team understanding of the audience and purpose, and by ongoing communication within, and beyond, the team. 2 33 Individuals may work in parallel on the same task, as in the first Cochrane rQES, or follow a ‘segmented’ approach where each reviewer is allocated a different task. 14 The use of machine learning in the specific context of rQES remains experimental. However, the possibility of developing qualitative study classifiers comparable to those for randomised controlled trials offers an achievable aspiration. 34

Title and abstract screening

The entire screening team should use pre-prepared, pretested title and abstract templates to limit the scale of piloting, calibration and testing ( table 2 , item R11). 1 14 The first Cochrane rQES team double-screened titles and abstracts within Covidence review software. 14 Disagreements were resolved with reference to a third reviewer, achieving a shared understanding of the eligibility criteria and enhancing familiarity with target studies and insight from data. 14 The team should identify and prioritise risks of either over-zealous inclusion or over-exclusion specific to each rQES ( table 2 , item R12). 14 The team should maximise opportunities to capture divergent views and perspectives within study findings. 35

Full-text screening

Full-text screening similarly benefits from using a pre-prepared pretested standardised template where possible 1 14 ( table 2 , item R11). If a single reviewer undertakes full-text screening, 8 the team should identify likely risks to trustworthiness of findings and focus quality control procedures (eg, use of additional reviewers and percentages for double screening) on specific threats 14 ( table 2 , item R13). The Cochrane rQES team opted for double screening to assist their immersion within the topic. 14

Data extraction

Data extraction of descriptive/contextual data may be facilitated by review management software (eg, EPPI-Reviewer) or home-made approaches using Google Forms or other survey software. 36 Where extraction of qualitative findings requires line-by-line coding with multiple iterations of the data, a qualitative data management analysis package, such as QSR NVivo, reaps dividends. 36 The team must decide if, collectively, they favour extracting data to a template or coding directly within an electronic version of an article.

Quality control must be fit for purpose but not excessive. Published examples typically use a single reviewer for data extraction, 8 with use of two independent reviewers being the exception. The team could limit data extraction to minimal essential items. They may also consider re-using descriptive details and findings already extracted within previous well-conducted QES ( table 2 , item R14). A pre-existing framework, where readily identified, may help to structure the data extraction template. 15 37 The same framework may be used to present the findings. Some organisations may specify a preferred framework, such as an evidence-to-decision-making framework. 38

Assessment of methodological limitations

The QES community assesses ‘methodological limitations’ rather than using ‘risk of bias’ terminology. An rQES team should pick an approach appropriate to their specific review. For example, a thematic map may not require assessment of individual studies; a brief statement of the generic limitations of the set of studies may be sufficient. However, for any synthesis that underpins practice recommendations, 39 assessment of included studies is integral to the credibility of findings. In any decision-making context that involves recommendations or guidelines, an assessment of methodological limitations is mandatory. 40 41

Each review team should work with knowledge users to determine a review-specific approach to quality assessment. 27 While ‘traffic lights’, similar to the outputs from the Cochrane Risk of Bias tool, may facilitate rapid interpretation, accompanying textual notes are invaluable in highlighting specific areas for concern. In particular, the rQES team should demonstrate that they are aware (a) that research designs for qualitative research seek to elicit divergent views, rather than control for variation; (b) that, for qualitative research, the selection of the sample is far more informative than the size of the sample; and (c) that researchers from primary research, and equally reviewers for the qualitative synthesis, need to be thoughtful and reflexive about their possible influences on interpretation of either the primary data or the synthesised findings.

Selection of checklist

Numerous scales and checklists exist for assessing the quality of qualitative studies. In the absence of validated risk of bias tools for qualitative studies, the team should choose a tool according to Cochrane Qualitative and Implementation Methods Group (CQIMG) guidance together with expediency (ease of use, prior familiarity, etc) ( table 2 , item R15). 41 In comparison with the Critical Appraisal Skills Programme checklist, which was never designed for use in synthesis, 42 the Cochrane qualitative tool is similarly easy to use and was designed for QES use. Work is underway to identify an assessment process that is compatible with QESs that support decision-making. 41 For now the choice of a checklist remains determined by interim Cochrane guidance and, beyond this, by personal preference and experience. For an rQES, a team could use a single reviewer to assess methodological limitations, with verification of judgements (and support statements) by a second reviewer ( table 2 , item R16).

The CQIMG endorses three types of synthesis: thematic synthesis, framework synthesis and meta-ethnography ( box 1 ). 43 44 Rapid QES favour descriptive thematic synthesis 45 or framework synthesis, 46 47 except when theory generation (meta-ethnography 48 49 or analytical thematic synthesis) is a priority ( table 2 , item R17).

Choosing a method for rapid qualitative synthesis

Thematic synthesis: first choice method for rQES. 45 For example, in their rapid QES, Crooks and colleagues 44 used a thematic synthesis to understand the experiences of both academic and lived experience coresearchers within palliative and end of life research. 45

Framework synthesis: alternative where a suitable framework can be speedily identified. 46 For example, Bright and colleagues 46 considered ‘best-fit framework synthesis’ as appropriate for mapping study findings to an ‘a priori framework of dimensions measured by prenatal maternal anxiety tools’ within their ‘streamlined and time-limited evidence review’. 47

Less commonly, an adapted meta-ethnographic approach was used for an implementation model of social distancing, where supportive data (29 studies) were plentiful. 48 However, this QES demonstrates several features that subsequently challenge its original identification as ‘rapid’. 49

Abbreviations: QES, qualitative evidence synthesis; rQES, resource-constrained qualitative evidence synthesis.

The team should consider whether a conceptual model, theory or framework offers a rapid way of organising, coding, interpreting and presenting findings ( table 2 , item R18). If the extracted data appear rich enough to sustain further interpretation, data from a thematic or framework synthesis can subsequently be explored within a meta-ethnography. 43 However, this requires a team with substantial interpretative expertise. 11

Assessments of confidence in the evidence 4 are central to any rQES that seeks to support decision-making, and the QES-specific Grading of Recommendations Assessment, Development and Evaluation approach for assessing the Confidence of Evidence from Reviews of Qualitative research (GRADE-CERQual) is designed to assess confidence in qualitative evidence. 50 This can be performed by a single reviewer, confirmed by a second reviewer. 26 Additional reviewers could verify all, or a sample of, assessments. For a rapid assessment, a team must prioritise findings using objective criteria; a WHO rQES focused only on the three ‘highly synthesised findings’. 20 The team could consider reusing GRADE-CERQual assessments from published QESs if findings are relevant and of demonstrable high quality ( table 2 , item R19). 50 No rapid approach to full application of GRADE-CERQual currently exists.

Reporting and record management

Little is written on optimal use of technology. 8 A rapid review is not a good time to learn review management software or qualitative analysis management software. A sounder strategy is to use such software for all general QES processes ( table 2 , item R20) and then harness these skills and tools when under specific resource pressures. Good file labelling and folder management and a ‘develop once, re-use many times’ approach facilitate resource savings.

Reporting requirements include the meta-ethnography reporting guidance (eMERGe) 51 and the Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement. 52 An rQES should describe limitations and their implications for confidence in the evidence even more thoroughly than a regular QES, detailing the consequences of fast-tracking, streamlining or omitting processes altogether. 8 Time spent documenting reflexivity is similarly important. 27 If QES methodology is to remain credible, rapid approaches must be applied with insight and documented with circumspection. 53 54

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Contributors All authors (AB, IS, JN, CH, FC) have made substantial contributions to the conception and design of the guidance document. AB led on drafting the work and revising it critically for important intellectual content. All other authors (IS, JN, CH, FC) contributed to revisions of the document. All authors (AB, IS, JN, CH, FC) have given final approval of the version to be published. As members of the Cochrane Qualitative and Implementation Methods Group and/or the Cochrane Rapid Reviews Methods Group all authors (AB, IS, JN, CH, FC) agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests AB is co-convenor of the Cochrane Qualitative and Implementation Methods Group. In the last 36 months, he received royalties from Systematic Approaches To a Successful Literature Review (Sage 3rd edition), honoraria from the Agency for Healthcare Research and Quality, and travel support from the WHO. JN is lead convenor of the Cochrane Qualitative and Implementation Methods Group. In the last 36 months, she has received honoraria from the Agency for Healthcare Research and Quality and travel support from the WHO. CH is co-convenor of the Cochrane Qualitative and Implementation Methods Group.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; internally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

  • v.9(1); 2015 Feb

Qualitative systematic reviews: their importance for our understanding of research relevant to pain

This article outlines what a qualitative systematic review is and explores what it can contribute to our understanding of pain. Many of us use evidence of effectiveness for various interventions when working with people in pain. A good systematic review can be invaluable in bringing together research evidence to help inform our practice and help us understand what works. In addition to evidence of effectiveness, understanding how people with pain experience both their pain and their care can help us when we are working with them to provide care that meets their needs. A rigorous qualitative systematic review can also uncover new understandings, often helping illuminate ‘why’ and can help build theory. Such a review can answer the question ‘What is it like to have chronic pain?’ This article presents the different stages of meta-ethnography, which is the most common methodology used for qualitative systematic reviews. It presents evidence from four meta-ethnographies relevant to pain to illustrate the types of findings that can emerge from this approach. It shows how new understandings may emerge and gives an example of chronic musculoskeletal pain being experienced as ‘an adversarial struggle’ across many aspects of the person’s life. This article concludes that evidence from qualitative systematic reviews has its place alongside or integrated with evidence from more quantitative approaches.

Many of us use evidence of effectiveness for various interventions when working with people in pain. A good systematic review can be invaluable in bringing together research evidence to help inform our practice and help us understand what works. In addition to evidence of effectiveness, understanding how people with pain experience both their pain and their care can help us when we are working with them to provide care that meets their needs. A high-quality qualitative systematic review can also uncover new understandings, often helping illuminate ‘why’ and can help build theory. A qualitative systematic review could answer the question ‘What is it like to have chronic non-malignant pain?’

The purpose of this article is to outline what a qualitative systematic review is and explore what it can contribute to our understanding of pain. A qualitative systematic review brings together research on a topic, systematically searching for research evidence from primary qualitative studies and drawing the findings together. There is a debate over whether the search needs to be exhaustive. 1 , 2 Methods for systematic reviews of quantitative research are well established and explicit and have been pioneered through the Cochrane Collaboration. Methods for qualitative systematic reviews have been developed more recently and are still evolving. The Cochrane Collaboration now has a Qualitative and Implementation Methods Group, including a register of protocols, illustrating the recognition of the importance of qualitative research within the Cochrane Collaboration. In November 2013, an editorial described the Cochrane Collaboration’s first publication of a qualitative systematic review as ‘a new milestone’ for Cochrane. 3 Other editorials have raised awareness of qualitative systematic reviews in health. 4

Noblit and Hare 5 were pioneers in the area of synthesising qualitative data. They describe such reviews as aggregated or as interpretative. The aggregated review summarises the data, and Hannes and Pearson 6 provide a worked example of an aggregation approach. Interpretative approaches, as the name suggests, interpret the data, and from that interpretation, new understandings can develop that may lead to development of a theory that helps us to understand or predict behaviour. Types of interpretative qualitative systematic reviews include meta-ethnography, critical interpretative synthesis, realist synthesis and narrative synthesis. More details about these and other approaches can be found in other papers and books. 1 , 5 , 7 – 11 This article will describe one approach, meta-ethnography, as it was identified as the most frequently used approach, 1 and there are some examples using meta-ethnography that focus on pain. A meta-ethnographic approach can be used with a variety of qualitative methodologies, not only ethnography. The data for a meta-ethnography are the concepts or themes described by the authors of the primary studies.

Noblit and Hare 5 outlined the seven steps of a meta-ethnography: (1) getting started, (2) deciding what is relevant, (3) reading the studies, (4) determining how studies are related to each other, (5) translating studies into each other, (6) synthesising translations and (7) expressing the synthesis.

The first three might seem relatively straightforward, although Lee et al. 12 emphasised both the importance and nuances of the reading stage, and Toye et al. 13 discuss the complexities of making quality assessments of qualitative papers and searching for this type of study. You need to understand what data to extract from the papers and how you are going to do this.

You first have to identify what is a concept and what is purely descriptive. Toye et al. 2 describe a process for collaboratively identifying concepts. In determining how studies are related to each other and translating them into each other, the meta-ethnographer compares the concepts found in each study with each other and then groups similar concepts into conceptual themes. Translating studies into each other involves looking at where concepts between studies agree (reciprocal synthesis) and where they do not agree (refutational synthesis). Developing conceptual categories can be challenging, as you need to judge the extent to which a concept from one study adequately reflects concepts from other studies and choose one that seems to fit best. This is discussed in more detail in Toye et al. 2 , 13

To synthesise the translations, a line of argument is then developed from the conceptual categories, working out how the concepts group and relate to each other. This provides an overall interpretation of the findings, ensuring it is grounded in the data from the primary studies. You are aiming to explain, and new concepts and understandings may emerge, which can then go on to underpin development of theory. For example, a qualitative systematic review that explored medicine taking found that ‘resistance’ was a new concept, revealed through meta-ethnography, and this helped understanding of lay responses to medicine taking. 1 Hannes and Macaitis, 14 in a review of published papers, reported that over time, authors have become more transparent about searching and critical appraisal, but that the synthesis element of reviews is often not well described. Being transparent about decisions that are interpretative has its own challenges. Working collaboratively to challenge interpretations and assumptions can be helpful. 2 , 12 The next section will use examples of qualitative systematic reviews from the pain field to illuminate what this type of review can contribute to our understanding of pain.

What can a qualitative systematic review contribute to the field of pain – some examples

Toye et al. 2 , 15 undertook a meta-ethnography to look at patients’ experiences of chronic non-malignant musculoskeletal pain. At the time of this research, no other qualitative systematic reviews had been published in this area. Their review included 77 papers reporting 60 individual studies, resulting from searches of six electronic bibliographic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, AMED and HMIC) from inception until February 2012 and hand-searching key journals from 2001 to 2012.

They developed a new concept which they identified as an ‘adversarial struggle’. This struggle took place across five main dimensions: (1) there was a struggle to affirm themselves, where there was a tension between the ‘real me’ (without pain) and ‘not real me’ (me with pain). (2) The present and future were often unpredictable, and construction of time was altered and they struggled to reconstruct themselves in time. (3) People struggled to find an acceptable explanation for their pain and suffering. (4) There was a struggle to negotiate the healthcare system and (5) a struggle for pain to be seen as legitimate, including the need to be believed, and a struggle to know whether to show or hide their pain. Some people were able to move forward with pain. They saw their body as more integrated, they re-defined what was normal, they told people about their pain, they were part of a community of people with pain and they felt more expert on how their pain affected them and what they could do about it.

So, this meta-ethnography highlighted the adversarial nature of having chronic musculoskeletal pain and how this struggle pervaded many different areas of their life. It also illustrated how by showing patients their pain is understood and being alongside the person in pain, they can start to move forward. A short film based on the 77 papers in this meta-ethnography has been made and is available on YouTube. 16 This film was made as an attempt to disseminate the findings of a meta-ethnography in a way that is accessible to a range of people.

Snelgrove and Liossi 17 undertook a meta-ethnography of qualitative research in chronic low back pain (CLBP). They included 33 papers of 28 studies published between 2000 and 2012. They identified three overarching themes of (1) the impact of CLBP on self, (2) relationships with others (health professionals and family and friends) and (3) coping with CLBP. They found that very few successful coping strategies were reported. Like Toye et al., 2 , 15 they also reported disruption to self, distancing their valued self from their painful self, legitimising pain, the struggle to manage daily living and the importance of social relationships alongside negotiation of their care in the health system.

MacNeela et al. 18 also undertook a meta-ethnography of experiences of CLBP. They included 38 articles published between 1994 and 2012 representing 28 studies. They identified four themes: (1) the undermining influence of pain, (2) the disempowering impact on all levels, (3) unsatisfying relationships with healthcare professionals and (4) learning to live with the pain. They reported the findings being dominated by ‘wide-ranging distress and loss’. They discussed the disempowering consequences of pain and a search for help. However, they also highlighted self-determination and resilience and suggested these could offer ‘pathways to endurance’. They emphasised self-management and adaptation, which resonates with the moving forward category reported by Toye et al. 2 , 15

Froud et al. 19 examined the impact of low back pain on people’s lives. They describe their approach as combining meta-ethnographic and meta-narrative methods. They included 49 papers reporting approximately 42 studies, searching from the inception of the databases until July 2011. They derived five themes from ‘participant-level data’: activities, relationships, work, stigma and changing outlook. They described their findings as showing that patients wanted to be believed, and they highlighted the importance of social factors when developing relevant outcome measures. There are other examples of qualitative systematic reviews relevant to pain. 20 – 23

Different qualitative systematic reviews on a similar subject may produce overlapping, but also some differing, findings. This could be, for example, because different search periods or different inclusion criteria are used, so different primary studies may be included in different reviews. In addition, undertaking a qualitative systematic review requires researchers to interpret concepts. This interpretation need not be a limitation. For example, to ensure rigour and transparency, Toye et al. 24 report a process of collaborative interpretation of concepts among a team of experienced qualitative researchers, so that individual interpretations were challenged and remained grounded in the original studies. They also published a detailed audit trail of the processes and decisions made. 2 Campbell et al. 1 argue ‘Meta-ethnography is a highly interpretative method requiring considerable immersion in the individual studies to achieve a synthesis. It places substantial demands upon the synthesiser and requires a high degree of qualitative research skill’. It is important to be able to think conceptually when undertaking a meta-ethnography, and it can be a time-consuming process. However, the ability of a meta-ethnography to synthesise a large number of primary research studies, generate new conceptual understandings and thus increase our understanding of patients’ experiences of pain makes it a very useful resource for evidence-based practice.

The way forward

A register of qualitative systematic reviews would be useful for researchers and clinicians, providing a clear way of identifying existing qualitative reviews and reviews that are planned or underway. The Cochrane Collaboration now has a register for protocols of qualitative systematic reviews being undertaken under the aegis of the Cochrane Qualitative and Implementation Methods Group. It would help those wanting to undertake qualitative systematic reviews if reviews that were underway were registered and described more clearly, for example by using ‘qualitative systematic review’ and the methodological approach (such as meta-ethnography) in the title and/or abstract, to prevent duplication of effort. The Toye et al. 2 protocol 25 was accessible on the National Institutes for Health website from 2010. The Snelgrove and Liossi 17 study was done without external funding, so it would have been difficult to discover that it was underway. The MacNeela et al. 18 study was listed by the Irish Research Council for the Humanities and Social Sciences under its Research Development Initiative 2008–2009, but was described as ‘Motivation and Beliefs among People Experiencing Chronic Low Back Pain’, so it was not clearly identified at that stage as a qualitative systematic review. Finally, the Froud et al. 19 award details 26 do not mention qualitative systematic reviews or meta-ethnography. This highlights the difficulty of finding some of these reviews and the importance of a register of both completed and ongoing reviews.

This article has argued that qualitative systematic reviews have their place alongside, or integrated with, more quantitative approaches. There is an increasing body of evidence from qualitative systematic reviews. They can synthesise primary research, and this can be helpful for the busy practitioner. The methods for these approaches are still developing, and attention to rigour at each stage is crucial. It is important that each stage of the synthesis is reported transparently and that the researchers’ stance is clearly stated. 27 Meta-ethnographies published over the last year 2 , 15 , 17 – 19 have drawn together a wide range of primary studies and shown that people’s lives can be markedly changed by their pain across multiple dimensions of their lives.

Declaration of Conflicting Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.
