What is Comparative Analysis and How to Conduct It? (+ Examples)

Appinio Research · 30.10.2023 · 36 min read


Have you ever faced a complex decision, wondering how to make the best choice among multiple options? In a world filled with data and possibilities, the art of comparative analysis holds the key to unlocking clarity amidst the chaos.

In this guide, we'll demystify the power of comparative analysis, revealing its practical applications, methodologies, and best practices. Whether you're a business leader, researcher, or simply someone seeking to make more informed decisions, join us as we explore the intricacies of comparative analysis and equip you with the tools to chart your course with confidence.

What is Comparative Analysis?

Comparative analysis is a systematic approach used to evaluate and compare two or more entities, variables, or options to identify similarities, differences, and patterns. It involves assessing the strengths, weaknesses, opportunities, and threats associated with each entity or option to make informed decisions.

The primary purpose of comparative analysis is to provide a structured framework for decision-making by:

  • Facilitating Informed Choices: Comparative analysis equips decision-makers with data-driven insights, enabling them to make well-informed choices among multiple options.
  • Identifying Trends and Patterns: It helps identify recurring trends, patterns, and relationships among entities or variables, shedding light on underlying factors influencing outcomes.
  • Supporting Problem Solving: Comparative analysis aids in solving complex problems by systematically breaking them down into manageable components and evaluating potential solutions.
  • Enhancing Transparency: By comparing multiple options, comparative analysis promotes transparency in decision-making processes, allowing stakeholders to understand the rationale behind choices.
  • Mitigating Risks: It helps assess the risks associated with each option, allowing organizations to develop risk mitigation strategies and make risk-aware decisions.
  • Optimizing Resource Allocation: Comparative analysis assists in allocating resources efficiently by identifying areas where resources can be optimized for maximum impact.
  • Driving Continuous Improvement: By comparing current performance with historical data or benchmarks, organizations can identify improvement areas and implement growth strategies.

Importance of Comparative Analysis in Decision-Making

  • Data-Driven Decision-Making: Comparative analysis relies on empirical data and objective evaluation, reducing the influence of biases and subjective judgments in decision-making. It ensures decisions are based on facts and evidence.
  • Objective Assessment: It provides an objective and structured framework for evaluating options, allowing decision-makers to focus on key criteria and avoid making decisions solely based on intuition or preferences.
  • Risk Assessment: Comparative analysis helps assess and quantify risks associated with different options. This risk awareness enables organizations to make proactive risk management decisions.
  • Prioritization: By ranking options based on predefined criteria, comparative analysis enables decision-makers to prioritize actions or investments, directing resources to areas with the most significant impact.
  • Strategic Planning: It is integral to strategic planning, helping organizations align their decisions with overarching goals and objectives. Comparative analysis ensures decisions are consistent with long-term strategies.
  • Resource Allocation: Organizations often have limited resources. Comparative analysis assists in allocating these resources effectively, ensuring they are directed toward initiatives with the highest potential returns.
  • Continuous Improvement: Comparative analysis supports a culture of continuous improvement by identifying areas for enhancement and guiding iterative decision-making processes.
  • Stakeholder Communication: It enhances transparency in decision-making, making it easier to communicate decisions to stakeholders. Stakeholders can better understand the rationale behind choices when supported by comparative analysis.
  • Competitive Advantage: In business and competitive environments, comparative analysis can provide a competitive edge by identifying opportunities to outperform competitors or address weaknesses.
  • Informed Innovation: When evaluating new products, technologies, or strategies, comparative analysis guides the selection of the most promising options, reducing the risk of investing in unsuccessful ventures.

In summary, comparative analysis is a valuable tool that empowers decision-makers across various domains to make informed, data-driven choices, manage risks, allocate resources effectively, and drive continuous improvement. Its structured approach enhances decision quality and transparency, contributing to the success and competitiveness of organizations and research endeavors.

How to Prepare for Comparative Analysis?

1. Define Objectives and Scope

Before you begin your comparative analysis, clearly defining your objectives and the scope of your analysis is essential. This step lays the foundation for the entire process. Here's how to approach it:

  • Identify Your Goals: Start by asking yourself what you aim to achieve with your comparative analysis. Are you trying to choose between two products for your business? Are you evaluating potential investment opportunities? Knowing your objectives will help you stay focused throughout the analysis.
  • Define Scope: Determine the boundaries of your comparison. What will you include, and what will you exclude? For example, if you're analyzing market entry strategies for a new product, specify whether you're looking at a specific geographic region or a particular target audience.
  • Stakeholder Alignment: Ensure that all stakeholders involved in the analysis understand and agree on the objectives and scope. This alignment will prevent misunderstandings and ensure the analysis meets everyone's expectations.

2. Gather Relevant Data and Information

The quality of your comparative analysis heavily depends on the data and information you gather. Here's how to approach this crucial step:

  • Data Sources: Identify where you'll obtain the necessary data. Will you rely on primary sources, such as surveys and interviews, to collect original data? Or will you use secondary sources, like published research and industry reports, to access existing data? Consider the advantages and disadvantages of each source.
  • Data Collection Plan: Develop a plan for collecting data. This should include details about the methods you'll use, the timeline for data collection, and who will be responsible for gathering the data.
  • Data Relevance: Ensure that the data you collect is directly relevant to your objectives. Irrelevant or extraneous data can lead to confusion and distract from the core analysis.

3. Select Appropriate Criteria for Comparison

Choosing the right criteria for comparison is critical to a successful comparative analysis. Here's how to go about it:

  • Relevance to Objectives: Your chosen criteria should align closely with your analysis objectives. For example, if you're comparing job candidates, your criteria might include skills, experience, and cultural fit.
  • Measurability: Consider whether you can quantify the criteria. Measurable criteria are easier to analyze. If you're comparing marketing campaigns, you might measure criteria like click-through rates, conversion rates, and return on investment.
  • Weighting Criteria: Not all criteria are equally important. You'll need to assign weights to each criterion based on its relative importance. Weighting helps ensure that the most critical factors have a more significant impact on the final decision.

4. Establish a Clear Framework

Once you have your objectives, data, and criteria in place, it's time to establish a clear framework for your comparative analysis. This framework will guide your process and ensure consistency. Here's how to do it:

  • Comparative Matrix: Consider using a comparative matrix or spreadsheet to organize your data. Each row in the matrix represents an option or entity you're comparing, and each column corresponds to a criterion. This visual representation makes it easy to compare and contrast data.
  • Timeline: Determine the time frame for your analysis. Is it a one-time comparison, or will you conduct ongoing analyses? Having a defined timeline helps you manage the analysis process efficiently.
  • Define Metrics: Specify the metrics or scoring system you'll use to evaluate each criterion. For example, if you're comparing potential office locations, you might use a scoring system from 1 to 5 for factors like cost, accessibility, and amenities.
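To make the framework concrete, here is a minimal sketch in Python of a comparative matrix with a 1-to-5 scoring system and weighted criteria. The locations, criteria, weights, and scores are illustrative assumptions, not data from this article:

```python
# Illustrative comparative matrix: rows are options, columns are criteria.
# Weights reflect the relative importance of each criterion (they sum to 1).
criteria_weights = {"cost": 0.5, "accessibility": 0.3, "amenities": 0.2}

# Hypothetical scores on a 1-5 scale (higher is better).
scores = {
    "Location A": {"cost": 4, "accessibility": 3, "amenities": 2},
    "Location B": {"cost": 2, "accessibility": 5, "amenities": 5},
    "Location C": {"cost": 3, "accessibility": 4, "amenities": 3},
}

def weighted_total(option_scores, weights):
    """Combine an option's criterion scores into one weighted total."""
    return sum(option_scores[c] * w for c, w in weights.items())

totals = {name: weighted_total(s, criteria_weights) for name, s in scores.items()}
best = max(totals, key=totals.get)  # option with the highest weighted total
```

The same matrix could live in a spreadsheet; the point is that every option is scored against the same columns under the same weights.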

With your objectives, data, criteria, and framework established, you're ready to move on to the next phase of comparative analysis: data collection and organization.

Comparative Analysis Data Collection

Data collection and organization are critical steps in the comparative analysis process. We'll explore how to gather and structure the data you need for a successful analysis.

1. Utilize Primary Data Sources

Primary data sources involve gathering original data directly from the source. This approach offers unique advantages, allowing you to tailor your data collection to your specific research needs.

Some popular primary data sources include:

  • Surveys and Questionnaires: Design surveys or questionnaires and distribute them to collect specific information from individuals or groups. This method is ideal for obtaining firsthand insights, such as customer preferences or employee feedback.
  • Interviews: Conduct structured interviews with relevant stakeholders or experts. Interviews provide an opportunity to delve deeper into subjects and gather qualitative data, making them valuable for in-depth analysis.
  • Observations: Directly observe and record data from real-world events or settings. Observational data can be instrumental in fields like anthropology, ethnography, and environmental studies.
  • Experiments: In controlled environments, experiments allow you to manipulate variables and measure their effects. This method is common in scientific research and product testing.

When using primary data sources, consider factors like sample size, survey design, and data collection methods to ensure the reliability and validity of your data.

2. Harness Secondary Data Sources

Secondary data sources involve using existing data collected by others. These sources can provide a wealth of information and save time and resources compared to primary data collection.

Here are common types of secondary data sources:

  • Public Records: Government publications, census data, and official reports offer valuable information on demographics, economic trends, and public policies. They are often free and readily accessible.
  • Academic Journals: Scholarly articles provide in-depth research findings across various disciplines. They are helpful for accessing peer-reviewed studies and staying current with academic discourse.
  • Industry Reports: Industry-specific reports and market research publications offer insights into market trends, consumer behavior, and competitive landscapes. They are essential for businesses making strategic decisions.
  • Online Databases: Online platforms like Statista, PubMed, and Google Scholar provide a vast repository of data and research articles. They offer search capabilities and access to a wide range of data sets.

When using secondary data sources, critically assess the credibility, relevance, and timeliness of the data. Ensure that it aligns with your research objectives.

3. Ensure and Validate Data Quality

Data quality is paramount in comparative analysis. Poor-quality data can lead to inaccurate conclusions and flawed decision-making. Here's how to ensure data validation and reliability:

  • Cross-Verification: Whenever possible, cross-verify data from multiple sources. Consistency among different sources enhances the reliability of the data.
  • Sample Size: Ensure your sample is large enough to support statistically meaningful conclusions. A small sample may not accurately represent the population.
  • Data Integrity: Check for data integrity issues, such as missing values, outliers, or duplicate entries. Address these issues before analysis to maintain data quality.
  • Data Source Reliability: Assess the reliability and credibility of the data sources themselves. Consider factors like the reputation of the institution or organization providing the data.
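The integrity checks above can be sketched with the standard library alone. The record layout, values, and the median-based outlier rule are illustrative assumptions:

```python
from statistics import median

# Hypothetical records; in practice these would come from your data source.
records = [
    {"id": 1, "revenue": 120.0},
    {"id": 2, "revenue": None},    # missing value
    {"id": 3, "revenue": 130.0},
    {"id": 3, "revenue": 130.0},   # duplicate entry
    {"id": 4, "revenue": 9000.0},  # suspiciously large value
    {"id": 5, "revenue": 110.0},
]

# 1. Missing values.
missing = [r for r in records if r["revenue"] is None]

# 2. Duplicate entries (same id and value seen twice).
seen, duplicates = set(), []
for r in records:
    key = (r["id"], r["revenue"])
    if key in seen:
        duplicates.append(r)
    seen.add(key)

# 3. Outliers via median absolute deviation, which is robust to the
#    extreme value itself (unlike a mean/stdev rule).
values = [r["revenue"] for r in records if r["revenue"] is not None]
med = median(values)
mad = median(abs(v - med) for v in values)
outliers = [v for v in values if abs(v - med) > 5 * mad]
```

Each flagged record should be investigated, not silently dropped: a "duplicate" may be a legitimate repeat, and an "outlier" may be real.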

4. Organize Data Effectively

Structuring your data for comparison is a critical step in the analysis process. Organized data makes it easier to draw insights and make informed decisions. Here's how to structure data effectively:

  • Data Cleaning: Before analysis, clean your data to remove inconsistencies, errors, and irrelevant information. Data cleaning may involve data transformation, imputation of missing values, and removing outliers.
  • Normalization: Standardize data to ensure fair comparisons. Normalization adjusts data to a standard scale, making it possible to compare variables with different units or ranges.
  • Variable Labeling: Clearly label variables and data points for easy identification. Proper labeling enhances the transparency and understandability of your analysis.
  • Data Organization: Organize data into a format that suits your analysis methods. For quantitative analysis, this might mean creating a matrix, while qualitative analysis may involve categorizing data into themes.
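The normalization step above can be sketched as simple min-max rescaling; the revenue and satisfaction figures are invented for illustration:

```python
def min_max_normalize(values):
    """Rescale a list of numbers onto a common 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant column: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Two criteria measured on very different scales.
revenues = [200_000, 450_000, 700_000]   # e.g. dollars
satisfaction = [3.2, 4.8, 4.0]           # e.g. a 1-5 survey scale

norm_rev = min_max_normalize(revenues)
norm_sat = min_max_normalize(satisfaction)
# After rescaling, both columns run from 0 to 1 and can be compared fairly.
```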

By paying careful attention to data collection, validation, and organization, you'll set the stage for a robust and insightful comparative analysis. Next, we'll explore various methodologies you can employ in your analysis, ranging from qualitative approaches to quantitative methods and examples.

Comparative Analysis Methods

When it comes to comparative analysis, various methodologies are available, each suited to different research goals and data types. In this section, we'll explore five prominent methodologies in detail.

Qualitative Comparative Analysis (QCA)

Qualitative Comparative Analysis (QCA) is a methodology often used when dealing with complex, non-linear relationships among variables. It seeks to identify patterns and configurations among factors that lead to specific outcomes.

  • Case-by-Case Analysis: QCA involves evaluating individual cases (e.g., organizations, regions, or events) rather than analyzing aggregate data. Each case's unique characteristics are considered.
  • Boolean Logic: QCA employs Boolean algebra to analyze data. Variables are categorized as either present or absent, allowing for the examination of different combinations and logical relationships.
  • Necessary and Sufficient Conditions: QCA aims to identify necessary and sufficient conditions for a specific outcome to occur. It helps answer questions like, "What conditions are necessary for a successful product launch?"
  • Fuzzy Set Theory: In some cases, QCA may use fuzzy set theory to account for degrees of membership in a category, allowing for more nuanced analysis.
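The Boolean, case-by-case logic behind QCA can be illustrated with a toy example. The cases, condition names, and outcome below are invented, and dedicated QCA software (e.g. the R `QCA` package) does considerably more, but the core idea of testing necessity and sufficiency looks like this:

```python
# Each case records whether a condition is present (True) or absent (False),
# plus the outcome of interest. All data here is hypothetical.
cases = [
    {"strong_team": True,  "funding": True,  "success": True},
    {"strong_team": True,  "funding": False, "success": False},
    {"strong_team": False, "funding": True,  "success": False},
    {"strong_team": True,  "funding": True,  "success": True},
]

conditions = ["strong_team", "funding"]

# A condition is *necessary* if it is present in every successful case.
necessary = [
    c for c in conditions
    if all(case[c] for case in cases if case["success"])
]

# A condition is *sufficient* if every case where it is present succeeds.
sufficient = [
    c for c in conditions
    if all(case["success"] for case in cases if case[c])
]
```

With these toy cases, both conditions are necessary but neither alone is sufficient: success only occurs when they appear in combination, which is exactly the kind of configuration QCA is designed to surface.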

QCA is particularly useful in fields such as sociology, political science, and organizational studies, where understanding complex interactions is essential.

Quantitative Comparative Analysis

Quantitative Comparative Analysis involves the use of numerical data and statistical techniques to compare and analyze variables. It's suitable for situations where data is quantitative, and relationships can be expressed numerically.

  • Statistical Tools: Quantitative comparative analysis relies on statistical methods like regression analysis, correlation, and hypothesis testing. These tools help identify relationships, dependencies, and trends within datasets.
  • Data Measurement: Ensure that variables are measured consistently using appropriate scales (e.g., ordinal, interval, ratio) for meaningful analysis. Variables may include numerical values like revenue, customer satisfaction scores, or product performance metrics.
  • Data Visualization: Create visual representations of data using charts, graphs, and plots. Visualization aids in understanding complex relationships and presenting findings effectively.
  • Statistical Significance: Assess the statistical significance of relationships. Statistical significance indicates whether observed differences or relationships are likely to be real rather than due to chance.

Quantitative comparative analysis is commonly applied in economics, social sciences, and market research to draw empirical conclusions from numerical data.

Case Studies

Case studies involve in-depth examinations of specific instances or cases to gain insights into real-world scenarios. Comparative case studies allow researchers to compare and contrast multiple cases to identify patterns, differences, and lessons.

  • Narrative Analysis: Case studies often involve narrative analysis, where researchers construct detailed narratives of each case, including context, events, and outcomes.
  • Contextual Understanding: In comparative case studies, it's crucial to consider the context within which each case operates. Understanding the context helps interpret findings accurately.
  • Cross-Case Analysis: Researchers conduct cross-case analysis to identify commonalities and differences across cases. This process can lead to the discovery of factors that influence outcomes.
  • Triangulation: To enhance the validity of findings, researchers may use multiple data sources and methods to triangulate information and ensure reliability.

Case studies are prevalent in fields like psychology, business, and sociology, where deep insights into specific situations are valuable.

SWOT Analysis

SWOT Analysis is a strategic tool used to assess the Strengths, Weaknesses, Opportunities, and Threats associated with a particular entity or situation. While it's commonly used in business, it can be adapted for various comparative analyses.

  • Internal and External Factors: SWOT Analysis examines both internal factors (Strengths and Weaknesses), such as organizational capabilities, and external factors (Opportunities and Threats), such as market conditions and competition.
  • Strategic Planning: The insights from SWOT Analysis inform strategic decision-making. By identifying strengths and opportunities, organizations can leverage their advantages. Likewise, addressing weaknesses and threats helps mitigate risks.
  • Visual Representation: SWOT Analysis is often presented as a matrix or a 2x2 grid, making it visually accessible and easy to communicate to stakeholders.
  • Continuous Monitoring: SWOT Analysis is not a one-time exercise. Organizations use it periodically to adapt to changing circumstances and make informed decisions.

SWOT Analysis is versatile and can be applied in business, healthcare, education, and any context where a structured assessment of factors is needed.

Benchmarking

Benchmarking involves comparing an entity's performance, processes, or practices to those of industry leaders or best-in-class organizations. It's a powerful tool for continuous improvement and competitive analysis.

  • Identify Performance Gaps: Benchmarking helps identify areas where an entity lags behind its peers or industry standards. These performance gaps highlight opportunities for improvement.
  • Data Collection: Gather data on key performance metrics from both internal and external sources. This data collection phase is crucial for meaningful comparisons.
  • Comparative Analysis: Compare your organization's performance data with that of benchmark organizations. This analysis can reveal where you excel and where adjustments are needed.
  • Continuous Improvement: Benchmarking is a dynamic process that encourages continuous improvement. Organizations use benchmarking findings to set performance goals and refine their strategies.
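The gap-identification step can be sketched as a direct metric-by-metric comparison. The metric names, values, and benchmark figures are illustrative assumptions:

```python
# Hypothetical performance data vs. a best-in-class benchmark.
our_metrics = {"on_time_delivery": 0.87, "defect_rate": 0.04, "nps": 42}
benchmark   = {"on_time_delivery": 0.95, "defect_rate": 0.02, "nps": 60}

# For defect_rate, lower is better; for the others, higher is better.
lower_is_better = {"defect_rate"}

gaps = {}
for metric, ours in our_metrics.items():
    best = benchmark[metric]
    behind = ours > best if metric in lower_is_better else ours < best
    if behind:
        # Signed distance to the benchmark: positive means raise the
        # metric, negative means lower it.
        gaps[metric] = best - ours
```

Every entry in `gaps` is a candidate improvement area and a natural basis for setting performance goals.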

Benchmarking is widely used in business, manufacturing, healthcare, and customer service to drive excellence and competitiveness.

Each of these methodologies brings a unique perspective to comparative analysis, allowing you to choose the one that best aligns with your research objectives and the nature of your data. The choice between qualitative and quantitative methods, or a combination of both, depends on the complexity of the analysis and the questions you seek to answer.

How to Conduct Comparative Analysis?

Once you've prepared your data and chosen an appropriate methodology, it's time to dive into the process of conducting a comparative analysis. We will guide you through the essential steps to extract meaningful insights from your data.


1. Identify Key Variables and Metrics

Identifying key variables and metrics is the first crucial step in conducting a comparative analysis. These are the factors or indicators you'll use to assess and compare your options.

  • Relevance to Objectives: Ensure the chosen variables and metrics align closely with your analysis objectives. When comparing marketing strategies, relevant metrics might include customer acquisition cost, conversion rate, and retention.
  • Quantitative vs. Qualitative: Decide whether your analysis will focus on quantitative data (numbers) or qualitative data (descriptive information). In some cases, a combination of both may be appropriate.
  • Data Availability: Consider the availability of data. Ensure you can access reliable and up-to-date data for all selected variables and metrics.
  • KPIs: Key Performance Indicators (KPIs) are often used as the primary metrics in comparative analysis. These are metrics that directly relate to your goals and objectives.

2. Visualize Data for Clarity

Data visualization techniques play a vital role in making complex information more accessible and understandable. Effective data visualization allows you to convey insights and patterns to stakeholders. Consider the following approaches:

  • Charts and Graphs: Use various types of charts, such as bar charts, line graphs, and pie charts, to represent data. For example, a line graph can illustrate trends over time, while a bar chart can compare values across categories.
  • Heatmaps: Heatmaps are particularly useful for visualizing large datasets and identifying patterns through color-coding. They can reveal correlations, concentrations, and outliers.
  • Scatter Plots: Scatter plots help visualize relationships between two variables. They are especially useful for identifying trends, clusters, or outliers.
  • Dashboards: Create interactive dashboards that allow users to explore data and customize views. Dashboards are valuable for ongoing analysis and reporting.
  • Infographics: For presentations and reports, consider using infographics to summarize key findings in a visually engaging format.

Effective data visualization not only enhances understanding but also aids in decision-making by providing clear insights at a glance.

3. Establish Clear Comparative Frameworks

A well-structured comparative framework provides a systematic approach to your analysis. It ensures consistency and enables you to make meaningful comparisons. Here's how to create one:

  • Comparison Matrices: Consider using matrices or spreadsheets to organize your data. Each row represents an option or entity, and each column corresponds to a variable or metric. This matrix format allows for side-by-side comparisons.
  • Decision Trees: In complex decision-making scenarios, decision trees help map out possible outcomes based on different criteria and variables. They visualize the decision-making process.
  • Scenario Analysis: Explore different scenarios by altering variables or criteria to understand how changes impact outcomes. Scenario analysis is valuable for risk assessment and planning.
  • Checklists: Develop checklists or scoring sheets to systematically evaluate each option against predefined criteria. Checklists ensure that no essential factors are overlooked.

A well-structured comparative framework simplifies the analysis process, making it easier to draw meaningful conclusions and make informed decisions.

4. Evaluate and Score Criteria

Evaluating and scoring criteria is a critical step in comparative analysis, as it quantifies the performance of each option against the chosen criteria.

  • Scoring System: Define a scoring system that assigns values to each criterion for every option. Common scoring systems include numerical scales, percentage scores, or qualitative ratings (e.g., high, medium, low).
  • Consistency: Ensure consistency in scoring by defining clear guidelines for each score. Provide examples or descriptions to help evaluators understand what each score represents.
  • Data Collection: Collect data or information relevant to each criterion for all options. This may involve quantitative data (e.g., sales figures) or qualitative data (e.g., customer feedback).
  • Aggregation: Aggregate the scores for each option to obtain an overall evaluation. This can be done by summing the individual criterion scores or applying weighted averages.
  • Normalization: If your criteria have different measurement scales or units, consider normalizing the scores to create a level playing field for comparison.

5. Assign Importance to Criteria

Not all criteria are equally important in a comparative analysis. Weighting criteria allows you to reflect their relative significance in the final decision-making process.

  • Relative Importance: Assess the importance of each criterion in achieving your objectives. Criteria directly aligned with your goals may receive higher weights.
  • Weighting Methods: Choose a weighting method that suits your analysis. Common methods include expert judgment, analytic hierarchy process (AHP), or data-driven approaches based on historical performance.
  • Impact Analysis: Consider how changes in the weights assigned to criteria would affect the final outcome. This sensitivity analysis helps you understand the robustness of your decisions.
  • Stakeholder Input: Involve relevant stakeholders or decision-makers in the weighting process. Their input can provide valuable insights and ensure alignment with organizational goals.
  • Transparency: Clearly document the rationale behind the assigned weights to maintain transparency in your analysis.
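The impact (sensitivity) analysis described above can be sketched by perturbing the weights and checking whether the top-ranked option changes. The two options, their scores, and the ±0.1 weight shift are illustrative:

```python
# Hypothetical options scored 1-5 on two criteria.
scores = {
    "Option A": {"quality": 5, "price": 1},
    "Option B": {"quality": 3, "price": 5},
}
base_weights = {"quality": 0.7, "price": 0.3}

def winner(weights):
    """Return the option with the highest weighted total under these weights."""
    totals = {
        name: sum(s[c] * w for c, w in weights.items())
        for name, s in scores.items()
    }
    return max(totals, key=totals.get)

baseline = winner(base_weights)

# Shift 0.1 of weight between the two criteria and re-check the winner.
flips = []
for delta in (-0.1, 0.1):
    perturbed = {
        "quality": base_weights["quality"] + delta,
        "price": base_weights["price"] - delta,
    }
    if winner(perturbed) != baseline:
        flips.append(perturbed)

robust = not flips  # robust only if no perturbation changes the winner
```

Here a modest shift toward price flips the ranking, which signals that the decision hinges on the weighting and that stakeholders should scrutinize those weights before committing.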

By weighting criteria, you ensure that the most critical factors have a more significant influence on the final evaluation, aligning the analysis more closely with your objectives and priorities.

With these steps in place, you're well-prepared to conduct a comprehensive comparative analysis. The next phase involves interpreting your findings, drawing conclusions, and making informed decisions based on the insights you've gained.

Comparative Analysis Interpretation

Interpreting the results of your comparative analysis is a crucial phase that transforms data into actionable insights. We'll delve into various aspects of interpretation and how to make sense of your findings.

  • Contextual Understanding: Before diving into the data, consider the broader context of your analysis. Understand the industry trends, market conditions, and any external factors that may have influenced your results.
  • Drawing Conclusions: Summarize your findings clearly and concisely. Identify trends, patterns, and significant differences among the options or variables you've compared.
  • Quantitative vs. Qualitative Analysis: Depending on the nature of your data and analysis, you may need to balance both quantitative and qualitative interpretations. Qualitative insights can provide context and nuance to quantitative findings.
  • Comparative Visualization: Visual aids such as charts, graphs, and tables can help convey your conclusions effectively. Choose visual representations that align with the nature of your data and the key points you want to emphasize.
  • Outliers and Anomalies: Identify and explain any outliers or anomalies in your data. Understanding these exceptions can provide valuable insights into unusual cases or factors affecting your analysis.
  • Cross-Validation: Validate your conclusions by comparing them with external benchmarks, industry standards, or expert opinions. Cross-validation helps ensure the reliability of your findings.
  • Implications for Decision-Making: Discuss how your analysis informs decision-making. Clearly articulate the practical implications of your findings and their relevance to your initial objectives.
  • Actionable Insights: Emphasize actionable insights that can guide future strategies, policies, or actions. Make recommendations based on your analysis, highlighting the steps needed to capitalize on strengths or address weaknesses.
  • Continuous Improvement: Encourage a culture of continuous improvement by using your analysis as a feedback mechanism. Suggest ways to monitor and adapt strategies over time based on evolving circumstances.

Comparative Analysis Applications

Comparative analysis is a versatile methodology that finds application in various fields and scenarios. Let's explore some of the most common and impactful applications.

Business Decision-Making

Comparative analysis is widely employed in business to inform strategic decisions and drive success. Key applications include:

Market Research and Competitive Analysis

  • Objective: To assess market opportunities and evaluate competitors.
  • Methods: Analyzing market trends, customer preferences, competitor strengths and weaknesses, and market share.
  • Outcome: Informed product development, pricing strategies, and market entry decisions.

Product Comparison and Benchmarking

  • Objective: To compare the performance and features of products or services.
  • Methods: Evaluating product specifications, customer reviews, and pricing.
  • Outcome: Identifying strengths and weaknesses, improving product quality, and setting competitive pricing.

Financial Analysis

  • Objective: To evaluate financial performance and make investment decisions.
  • Methods: Comparing financial statements, ratios, and performance indicators of companies.
  • Outcome: Informed investment choices, risk assessment, and portfolio management.

Healthcare and Medical Research

In the healthcare and medical research fields, comparative analysis is instrumental in understanding diseases, treatment options, and healthcare systems.

Clinical Trials and Drug Development

  • Objective: To compare the effectiveness of different treatments or drugs.
  • Methods: Analyzing clinical trial data, patient outcomes, and side effects.
  • Outcome: Informed decisions about drug approvals, treatment protocols, and patient care.

Health Outcomes Research

  • Objective: To assess the impact of healthcare interventions.
  • Methods: Comparing patient health outcomes before and after treatment or between different treatment approaches.
  • Outcome: Improved healthcare guidelines, cost-effectiveness analysis, and patient care plans.

Healthcare Systems Evaluation

  • Objective: To assess the performance of healthcare systems.
  • Methods: Comparing healthcare delivery models, patient satisfaction, and healthcare costs.
  • Outcome: Informed healthcare policy decisions, resource allocation, and system improvements.

Social Sciences and Policy Analysis

Comparative analysis is a fundamental tool in social sciences and policy analysis, aiding in understanding complex societal issues.

Educational Research

  • Objective: To compare educational systems and practices.
  • Methods: Analyzing student performance, curriculum effectiveness, and teaching methods.
  • Outcome: Informed educational policies, curriculum development, and school improvement strategies.

Political Science

  • Objective: To study political systems, elections, and governance.
  • Methods: Comparing election outcomes, policy impacts, and government structures.
  • Outcome: Insights into political behavior, policy effectiveness, and governance reforms.

Social Welfare and Poverty Analysis

  • Objective: To evaluate the impact of social programs and policies.
  • Methods: Comparing the well-being of individuals or communities with and without access to social assistance.
  • Outcome: Informed policymaking, poverty reduction strategies, and social program improvements.

Environmental Science and Sustainability

Comparative analysis plays a pivotal role in understanding environmental issues and promoting sustainability.

Environmental Impact Assessment

  • Objective: To assess the environmental consequences of projects or policies.
  • Methods: Comparing ecological data, resource use, and pollution levels.
  • Outcome: Informed environmental mitigation strategies, sustainable development plans, and regulatory decisions.

Climate Change Analysis

  • Objective: To study climate patterns and their impacts.
  • Methods: Comparing historical climate data, temperature trends, and greenhouse gas emissions.
  • Outcome: Insights into climate change causes, adaptation strategies, and policy recommendations.

Ecosystem Health Assessment

  • Objective: To evaluate the health and resilience of ecosystems.
  • Methods: Comparing biodiversity, habitat conditions, and ecosystem services.
  • Outcome: Conservation efforts, restoration plans, and ecological sustainability measures.

Technology and Innovation

Comparative analysis is crucial in the fast-paced world of technology and innovation.

Product Development and Innovation

  • Objective: To assess the competitiveness and innovation potential of products or technologies.
  • Methods: Comparing research and development investments, technology features, and market demand.
  • Outcome: Informed innovation strategies, product roadmaps, and patent decisions.

User Experience and Usability Testing

  • Objective: To evaluate the user-friendliness of software applications or digital products.
  • Methods: Comparing user feedback, usability metrics, and user interface designs.
  • Outcome: Improved user experiences, interface redesigns, and product enhancements.

Technology Adoption and Market Entry

  • Objective: To analyze market readiness and risks for new technologies.
  • Methods: Comparing market conditions, regulatory landscapes, and potential barriers.
  • Outcome: Informed market entry strategies, risk assessments, and investment decisions.

These diverse applications of comparative analysis highlight its flexibility and importance in decision-making across various domains. Whether in business, healthcare, social sciences, environmental studies, or technology, comparative analysis empowers researchers and decision-makers to make informed choices and drive positive outcomes.

Comparative Analysis Best Practices

Successful comparative analysis relies on following best practices and avoiding common pitfalls. Implementing these practices enhances the effectiveness and reliability of your analysis.

  • Clearly Defined Objectives: Start with well-defined objectives that outline what you aim to achieve through the analysis. Clear objectives provide focus and direction.
  • Data Quality Assurance: Ensure data quality by validating, cleaning, and normalizing your data. Poor-quality data can lead to inaccurate conclusions.
  • Transparent Methodologies: Clearly explain the methodologies and techniques you've used for analysis. Transparency builds trust and allows others to assess the validity of your approach.
  • Consistent Criteria: Maintain consistency in your criteria and metrics across all options or variables. Inconsistent criteria can lead to biased results.
  • Sensitivity Analysis: Conduct sensitivity analysis by varying key parameters, such as weights or assumptions, to assess the robustness of your conclusions.
  • Stakeholder Involvement: Involve relevant stakeholders throughout the analysis process. Their input can provide valuable perspectives and ensure alignment with organizational goals.
  • Critical Evaluation of Assumptions: Identify and critically evaluate any assumptions made during the analysis. Assumptions should be explicit and justifiable.
  • Holistic View: Take a holistic view of the analysis by considering both short-term and long-term implications. Avoid focusing solely on immediate outcomes.
  • Documentation: Maintain thorough documentation of your analysis, including data sources, calculations, and decision criteria. Documentation supports transparency and facilitates reproducibility.
  • Continuous Learning: Stay updated with the latest analytical techniques, tools, and industry trends. Continuous learning helps you adapt your analysis to changing circumstances.
  • Peer Review: Seek peer review or expert feedback on your analysis. External perspectives can identify blind spots and enhance the quality of your work.
  • Ethical Considerations: Address ethical considerations, such as privacy and data protection, especially when dealing with sensitive or personal data.

By adhering to these best practices, you'll not only improve the rigor of your comparative analysis but also ensure that your findings are reliable, actionable, and aligned with your objectives.
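The sensitivity-analysis practice above can be made concrete with a short sketch: re-score the options under perturbed criterion weights and check whether the preferred option changes. The options, scores, and weights below are hypothetical.

```python
# Sketch of a simple sensitivity analysis over criterion weights.
scores = {"Option A": {"cost": 7, "quality": 9}, "Option B": {"cost": 9, "quality": 5}}

def total(option, weights):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * scores[option][c] for c in weights)

def winner(weights):
    return max(scores, key=lambda o: total(o, weights))

# Vary the cost weight from 0.3 to 0.7 and record the winner each time.
results = {}
for w in (0.3, 0.4, 0.5, 0.6, 0.7):
    weights = {"cost": w, "quality": 1 - w}
    results[w] = winner(weights)

# The conclusion is robust only if the winner never flips.
robust = len(set(results.values())) == 1
```

Here the choice flips once cost is weighted heavily enough, which is exactly the kind of fragility a sensitivity analysis is meant to expose before a decision is finalized.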

Comparative Analysis Examples

To illustrate the practical application and benefits of comparative analysis, let's explore several real-world examples across different domains. These examples showcase how organizations and researchers leverage comparative analysis to make informed decisions, solve complex problems, and drive improvements:

Retail Industry - Price Competitiveness Analysis

Objective: A retail chain aims to assess its price competitiveness against competitors in the same market.

Methodology:

  • Collect pricing data for a range of products offered by the retail chain and its competitors.
  • Organize the data into a comparative framework, categorizing products by type and price range.
  • Calculate price differentials, averages, and percentiles for each product category.
  • Analyze the findings to identify areas where the retail chain's prices are higher or lower than competitors.

Outcome: The analysis reveals that the retail chain's prices are consistently lower in certain product categories but higher in others. This insight informs pricing strategies, allowing the retailer to adjust prices to remain competitive in the market.
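The core of the methodology above can be sketched in a few lines. The categories and prices below are hypothetical; a real analysis would cover many more products and competitors.

```python
import statistics

# Hypothetical prices per product category: our chain vs. one competitor.
our_prices = {"dairy": [1.99, 2.49, 3.10], "snacks": [0.99, 1.49, 2.29]}
competitor_prices = {"dairy": [2.10, 2.60, 3.30], "snacks": [0.89, 1.39, 1.99]}

differentials = {}
for category in our_prices:
    ours = statistics.mean(our_prices[category])
    theirs = statistics.mean(competitor_prices[category])
    # Positive -> we are more expensive than the competitor on average.
    differentials[category] = round(ours - theirs, 2)

# Categories where the chain is already cheaper than the competitor.
cheaper = [c for c, d in differentials.items() if d < 0]
```

The per-category differentials are the raw material for the pricing decision: negative values are categories to defend, positive values are candidates for price adjustments.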

Healthcare - Comparative Effectiveness Research

Objective: Researchers aim to compare the effectiveness of two different treatment methods for a specific medical condition.

Methodology:

  • Recruit patients with the medical condition and randomly assign them to two treatment groups.
  • Collect data on treatment outcomes, including symptom relief, side effects, and recovery times.
  • Analyze the data using statistical methods to compare the treatment groups.
  • Consider factors like patient demographics and baseline health status as potential confounding variables.

Outcome: The comparative analysis reveals that one treatment method is statistically more effective than the other in relieving symptoms and has fewer side effects. This information guides medical professionals in recommending the more effective treatment to patients.
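A stripped-down sketch of the statistical comparison step, using invented symptom-relief scores for the two groups. A real trial would use a proper statistics package (for example, SciPy's independent-samples t-test) and report a p-value and confidence interval rather than a bare statistic.

```python
import statistics

# Hypothetical symptom-relief scores (0-10) from two randomized groups.
group_a = [7, 8, 6, 9, 7, 8, 7, 9]
group_b = [5, 6, 6, 7, 5, 6, 4, 6]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Welch's t statistic (does not assume equal variances).
t_stat = (mean_a - mean_b) / ((var_a / n_a + var_b / n_b) ** 0.5)
```

A large t statistic suggests the difference in means is unlikely to be chance alone, but the confounder checks mentioned above (demographics, baseline health) are what make the comparison trustworthy.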

Environmental Science - Carbon Emission Analysis

Objective: An environmental organization seeks to compare carbon emissions from various transportation modes in a metropolitan area.

Methodology:

  • Collect data on the number of vehicles, their types (e.g., cars, buses, bicycles), and fuel consumption for each mode of transportation.
  • Calculate the total carbon emissions for each mode based on fuel consumption and emission factors.
  • Create visualizations such as bar charts and pie charts to represent the emissions from each transportation mode.
  • Consider factors like travel distance, occupancy rates, and the availability of alternative fuels.

Outcome: The comparative analysis reveals that public transportation generates significantly lower carbon emissions per passenger mile compared to individual car travel. This information supports advocacy for increased public transit usage to reduce carbon footprint.
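The per-mode calculation can be sketched as follows. The activity figures are invented, and the single emission factor is an approximation for petrol; a real study would use mode-specific fuels and factors.

```python
# Approximate emission factor for petrol, in kg CO2 per litre (assumption).
EMISSION_FACTOR = 2.31

# Hypothetical annual activity data per transport mode.
modes = {
    "car": {"fuel_litres": 1_000_000, "passenger_miles": 9_000_000},
    "bus": {"fuel_litres": 400_000, "passenger_miles": 12_000_000},
}

emissions_per_mile = {}
for mode, data in modes.items():
    total_kg = data["fuel_litres"] * EMISSION_FACTOR
    # Normalizing by passenger-miles makes the modes directly comparable.
    emissions_per_mile[mode] = total_kg / data["passenger_miles"]

greener = min(emissions_per_mile, key=emissions_per_mile.get)
```

Normalizing by passenger-miles rather than vehicle-miles is the key comparative choice: it credits high-occupancy modes for spreading their emissions across many travelers.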

Technology Industry - Feature Comparison for Software Development Tools

Objective: A software development team needs to choose the most suitable development tool for an upcoming project.

Methodology:

  • Create a list of essential features and capabilities required for the project.
  • Research and compile information on available development tools in the market.
  • Develop a comparative matrix or scoring system to evaluate each tool's features against the project requirements.
  • Assign weights to features based on their importance to the project.

Outcome: The comparative analysis highlights that Tool A excels in essential features critical to the project, such as version control integration and debugging capabilities. The development team selects Tool A as the preferred choice for the project.
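The weighted comparative matrix described above reduces to a weighted sum per tool. The tool names, feature scores (1 to 5), and weights below are hypothetical.

```python
# Criterion weights reflect each feature's importance to the project.
weights = {"version_control": 0.40, "debugging": 0.35, "price": 0.25}

# Hypothetical feature scores for two candidate tools.
tools = {
    "Tool A": {"version_control": 5, "debugging": 4, "price": 3},
    "Tool B": {"version_control": 3, "debugging": 3, "price": 5},
}

def weighted_score(features):
    """Sum of score x weight across all evaluated features."""
    return sum(weights[f] * score for f, score in features.items())

scores = {name: round(weighted_score(f), 2) for name, f in tools.items()}
best = max(scores, key=scores.get)
```

The weights are where judgment enters the analysis, which is exactly why the sensitivity checks discussed earlier matter: a different weighting can produce a different "best" tool.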

Educational Research - Comparative Study of Teaching Methods

Objective: A school district aims to improve student performance by comparing the effectiveness of traditional classroom teaching with online learning.

Methodology:

  • Randomly assign students to two groups: one taught using traditional methods and the other through online courses.
  • Administer pre- and post-course assessments to measure knowledge gain.
  • Collect feedback from students and teachers on the learning experiences.
  • Analyze assessment scores and feedback to compare the effectiveness and satisfaction levels of both teaching methods.

Outcome: The comparative analysis reveals that online learning leads to similar knowledge gains as traditional classroom teaching. However, students report higher satisfaction and flexibility with the online approach. The school district considers incorporating online elements into its curriculum.
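The pre/post comparison at the heart of this design can be sketched with invented assessment scores; a real study would also test whether the difference in gains is statistically significant.

```python
import statistics

# Hypothetical pre- and post-course assessment scores for each group.
traditional = {"pre": [55, 60, 52, 58], "post": [70, 74, 68, 72]}
online = {"pre": [54, 61, 53, 57], "post": [69, 75, 67, 71]}

def mean_gain(group):
    """Average per-student knowledge gain (post minus pre)."""
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return statistics.mean(gains)

gain_traditional = mean_gain(traditional)
gain_online = mean_gain(online)

# A small difference in mean gain suggests comparable effectiveness.
difference = abs(gain_traditional - gain_online)
```

Measuring the gain per student, rather than comparing raw post-test scores, controls for differences in where each group started.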

These examples illustrate the diverse applications of comparative analysis across industries and research domains. Whether optimizing pricing strategies in retail, evaluating treatment effectiveness in healthcare, assessing environmental impacts, choosing the right software tool, or improving educational methods, comparative analysis empowers decision-makers with valuable insights for informed choices and positive outcomes.

Conclusion for Comparative Analysis

Comparative analysis is your compass in the world of decision-making. It helps you see the bigger picture, spot opportunities, and navigate challenges. By defining your objectives, gathering data, applying methodologies, and following best practices, you can harness the power of comparative analysis to make informed choices and drive positive outcomes.

Remember, comparative analysis is not just a tool; it's a mindset that empowers you to transform data into insights and uncertainty into clarity. So, whether you're steering a business, conducting research, or facing life's choices, embrace comparative analysis as your trusted guide on the journey to better decisions. With it, you can chart your course, make impactful choices, and set sail toward success.

How to Conduct Comparative Analysis in Minutes?

Are you ready to revolutionize your approach to market research and comparative analysis? Appinio, a real-time market research platform, empowers you to harness the power of real-time consumer insights for swift, data-driven decisions. Here's why you should choose Appinio:

  • Speedy Insights: Get from questions to insights in minutes, enabling you to conduct comparative analysis without delay.
  • User-Friendly: No need for a PhD in research – our intuitive platform is designed for everyone, making it easy to collect and analyze data.
  • Global Reach: With access to over 90 countries and the ability to define your target group from 1200+ characteristics, Appinio provides a worldwide perspective for your comparative analysis.


What is comparative analysis? A complete guide

Last updated 18 April 2023. Reviewed by Jean Kaluza.

Comparative analysis is a valuable tool for acquiring deep insights into your organization’s processes, products, and services so you can continuously improve them. 

Similarly, if you want to streamline operations, price appropriately, and ultimately become a market leader, you’ll likely need to draw on comparative analyses quite often.

When faced with multiple options or solutions to a given problem, a thorough comparative analysis can help you compare and contrast your options and make a clear, informed decision.

If you want to get up to speed on conducting a comparative analysis or need a refresher, here’s your guide.


  • What exactly is comparative analysis?

A comparative analysis is a side-by-side comparison that systematically compares two or more things to pinpoint their similarities and differences. The focus of the investigation might be conceptual—a particular problem, idea, or theory—or perhaps something more tangible, like two different data sets.

For instance, you could use comparative analysis to investigate how your product features measure up to the competition.

After a successful comparative analysis, you should be able to identify strengths and weaknesses and clearly understand which product is more effective.

You could also use comparative analysis to examine different methods of producing that product and determine which way is most efficient and profitable.

The potential applications for using comparative analysis in everyday business are almost unlimited. That said, a comparative analysis is most commonly used to examine:

  • Emerging trends and opportunities (new technologies, marketing)
  • Competitor strategies
  • Financial health
  • Effects of trends on a target audience

  • Why is comparative analysis so important? 

Comparative analysis can help narrow your focus so your business pursues the most meaningful opportunities rather than attempting dozens of improvements simultaneously.

A comparative approach also helps frame data to illuminate interrelationships. For example, comparative research might reveal nuanced relationships or critical contexts behind specific processes or dependencies that wouldn’t be well understood without the research.

For instance, if your business compares the cost of producing several existing products relative to which ones have historically sold well, that should provide helpful information once you’re ready to look at developing new products or features.

  • Comparative vs. competitive analysis—what’s the difference?

Comparative analysis is generally divided into three subtypes, using quantitative or qualitative data and then extending the findings to a larger group. These include:

  • Pattern analysis — identifying patterns or recurrences of trends and behavior across large data sets.
  • Data filtering — analyzing large data sets to extract an underlying subset of information. It may involve rearranging, excluding, and apportioning comparative data to fit different criteria.
  • Decision tree — flowcharting to visually map and assess potential outcomes, costs, and consequences.

In contrast, competitive analysis is a type of comparative analysis in which you deeply research one or more of your industry competitors. In this case, you’re using qualitative research to explore what the competition is up to across one or more dimensions.

For example:

  • Service delivery — metrics like Net Promoter Score indicate customer satisfaction levels.
  • Market position — the share of the market that the competition has captured.
  • Brand reputation — how well-known or recognized your competitors are within their target market.

  • Tips for optimizing your comparative analysis

Conduct original research

Thorough, independent research is a significant asset when doing comparative analysis. It provides evidence to support your findings and may present a perspective or angle not considered previously. 

Make analysis routine

To get the maximum benefit from comparative research, make it a regular practice, and establish a cadence you can realistically stick to. Some business areas you could plan to analyze regularly include:

  • Profitability
  • Competition

Experiment with controlled and uncontrolled variables

In addition to simply comparing and contrasting, explore how different variables might affect your outcomes.

For example, a controllable variable would be offering a seasonal feature like a shopping bot to assist in holiday shopping, or raising or lowering the selling price of a product.

Uncontrollable variables include weather, changing regulations, the current political climate, or global pandemics.

Put equal effort into each point of comparison

Most people enter into comparative research with a particular idea or hypothesis already in mind to validate. For instance, you might be trying to prove the value of launching a new service. So, you may be disappointed if your analysis results don’t support your plan.

However, in any comparative analysis, try to maintain an unbiased approach by spending equal time debating the merits and drawbacks of any decision. Ultimately, this will be a practical, more long-term sustainable approach for your business than focusing only on the evidence that favors pursuing your argument or strategy.

Writing a comparative analysis in five steps

To put together a coherent, insightful analysis that goes beyond a list of pros and cons or similarities and differences, try organizing the information into these five components:

1. Frame of reference

Here is where you provide context. First, what driving idea or problem is your research anchored in? Then, for added substance, cite existing research or insights from a subject matter expert, such as a thought leader in marketing, startup growth, or investment.

2. Grounds for comparison

Why have you chosen to examine the two things you’re analyzing instead of focusing on two entirely different things? What are you hoping to accomplish?

3. Thesis

What argument or choice are you advocating for? What will be the before and after effects of going with either decision? What do you anticipate happening with and without this approach?

For example, “If we release an AI feature for our shopping cart, we will have an edge over the rest of the market before the holiday season.” The finished comparative analysis will weigh all the pros and cons of building the new, expensive AI feature, including variables like how “intelligent” it will be, what it “pushes” customers to use, and how much work it takes off the customer service team’s plate.

Ultimately, you will gauge whether building an AI feature is the right plan for your e-commerce shop.

4. Organize the scheme

Typically, there are two ways to organize a comparative analysis report. First, you can discuss everything about comparison point “A” and then go into everything about aspect “B.” Or, you can alternate back and forth between points “A” and “B,” sometimes referred to as point-by-point analysis.

Using the AI feature as an example again, you could cover all the pros and cons of building the AI feature, then discuss the benefits and drawbacks of maintaining it. Or you could compare and contrast each aspect one at a time: for example, a side-by-side comparison of shopping with the AI feature versus without it, before moving on to the next point of differentiation.

5. Connect the dots

Tie it all together in a way that either confirms or disproves your hypothesis.

For instance, “Building the AI bot would allow our customer service team to save 12% on returns in Q3 while offering optimizations and savings in future strategies. However, it would also increase the product development budget by 43% in both Q1 and Q2. Our budget for product development won’t increase again until series 3 of funding is reached, so despite its potential, we will hold off building the bot until funding is secured and more opportunities and benefits can be proved effective.”



Qualitative Comparative Analysis (QCA)

By Axel Marx. Last reviewed: 28 November 2016. Last modified: 28 November 2016. DOI: 10.1093/obo/9780199756384-0188

The social sciences use a wide range of research methods and techniques, ranging from experiments to techniques that analyze observational data, such as statistical techniques, qualitative text-analytic techniques, ethnographies, and many others. In the 1980s a new technique emerged, named Qualitative Comparative Analysis (QCA), which aimed to provide a formalized way to systematically compare a small number (5<N<75) of case studies. John Gerring, in the 2001 version of his introduction to the social sciences, identified QCA as one of the only genuine methodological innovations of the last few decades. In recent years, QCA has also been applied to large-N studies (Glaesser 2015, cited under Applications of QCA; Ragin 2008, cited under The Essential Features of QCA), and the application of QCA to large-N analysis is in full development. This annotated bibliography aims to provide an overview of the main contributions of QCA as a research technique, as well as an introduction to some specific issues and to QCA applications. The contribution starts by sketching the emergence of QCA and situating the method in the debate between “qualitative” and “quantitative” methods. This contextualization is important to understand and appreciate that QCA is in essence a qualitative, case-based research technique and not a quantitative, variable-oriented technique. Next, the article discusses some key features of QCA and identifies some of the main books and handbooks on QCA, as well as some of the criticism. In a third section, the overview focuses attention on the importance of cases and case selection in QCA. The fourth section introduces the way in which QCA builds explanatory models and presents the key contributions on the selection of explanatory factors, model specification, and testing. The fifth section canvasses the applications of QCA in the social sciences and identifies some interesting examples.
Finally, since QCA is a formalized data-analytic technique based on algorithms, the overview concludes with the main software packages that can assist in applying QCA.
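To make the logic of QCA concrete, here is a minimal, hypothetical sketch of its first step: collapsing binary-coded cases into a truth table of condition configurations and checking whether each configuration is consistently linked to the outcome. The cases and condition names are invented for illustration; real QCA uses dedicated software and goes on to minimize the consistent configurations.

```python
from collections import defaultdict

# Invented cases: two binary conditions and a binary outcome per case.
cases = [
    {"strong_unions": 1, "left_government": 1, "outcome": 1},
    {"strong_unions": 1, "left_government": 0, "outcome": 1},
    {"strong_unions": 0, "left_government": 1, "outcome": 0},
    {"strong_unions": 1, "left_government": 1, "outcome": 1},
    {"strong_unions": 0, "left_government": 0, "outcome": 0},
    {"strong_unions": 1, "left_government": 0, "outcome": 0},  # contradicts row (1, 0)
]

conditions = ("strong_unions", "left_government")

# Group cases by their configuration of conditions.
rows = defaultdict(list)
for case in cases:
    config = tuple(case[c] for c in conditions)
    rows[config].append(case["outcome"])

# A configuration is consistent if all its cases share the same outcome.
truth_table = {
    config: {
        "n": len(outcomes),
        "consistent": len(set(outcomes)) == 1,
        "outcome": outcomes[0] if len(set(outcomes)) == 1 else None,
    }
    for config, outcomes in rows.items()
}
```

Contradictory rows, like the (1, 0) configuration above, are exactly what sends the researcher back to the cases for closer within-case analysis before any minimization is attempted.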

Qualitative Case-Based Research in the Social Sciences

This section grounds Qualitative Comparative Analysis (QCA) in the tradition of qualitative case-based methods. As a research approach, QCA mainly focuses on the systematic comparison of cases in order to find patterns of difference and similarity between cases. The initial intention of Ragin 1987 (cited under The Essential Features of QCA) was to develop an original “synthetic strategy” as a middle way between the case-oriented (or “qualitative”) and the variable-oriented (or “quantitative”) approaches, which would “integrate the best features of the case-oriented approach with the best features of the variable-oriented approach” (Ragin 1987, p. 84). However, instead of grounding qualitative research on the premises of quantitative research, as King, et al. 1994 did, Ragin aimed to develop a method that is firmly rooted in a case-based qualitative approach (see Ragin and Becker 1992; Ragin 1997 for a systematic discussion of the differences between QCA and the approach advocated by King, et al. 1994). In recent years the fundamental differences between case-based and variable-oriented approaches have been further elaborated in terms of the selection of units of observation or cases, approaches to explanation, causal analysis, measurement of concepts, and external validity (scope and generalization). Many researchers, including Charles Ragin, Andrew Bennett (George and Bennett 2005), John Gerring (Gerring 2007, Gerring 2012), David Collier (Brady and Collier 2004), and James Mahoney (Mahoney and Rueschemeyer 2003), have contributed significantly to identifying the key ontological, epistemological, and logical differences between the two approaches. Goertz and Mahoney 2012 brings this literature together and shows the distinct differences between quantitative and qualitative research. The authors refer to two “cultures” of conducting social-scientific research. In this distinction QCA falls firmly in the “camp” of qualitative research. The overview below identifies some key texts which discuss these differences in more depth.

Brady, H., and D. Collier, eds. 2004. Rethinking social inquiry: Diverse tools, shared standards. Lanham, MD: Rowman and Littlefield.

This edited volume goes into a detailed discussion with King, et al. 1994 and shows the distinctive strengths of different approaches with a strong emphasis on the distinctive strengths of qualitative case-based methods. Book also introduces the idea of process-tracing for within-case analysis. Reprint 2010.

George, A., and A. Bennett. 2005. Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Very extensive treatment of how case-based research focusing on longitudinal analysis and process-tracing can contribute to both theory development and theory testing. Discusses many examples from empirical political science research.

Gerring, J. 2007. Case study research: Principles and practice. Cambridge, UK: Cambridge Univ. Press.

Very good introduction to what a case study is and what analytic and descriptive purposes it serves in social science research.

Gerring, J. 2012. Social science methodology: A unified framework. Cambridge, UK: Cambridge Univ. Press.

An update of the 2001 volume which provides a concise introduction to different research approaches and techniques in the social sciences. Clearly shows the added value of different approaches and aims to overcome “the one versus the other” approaches.

Goertz, G., and J. Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton, NJ: Princeton Univ. Press.

The book elaborates the differences between qualitative and quantitative research in terms of (1) approaches to explanation, (2) conceptions of causation, (3) approaches toward multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement.

King, G., R. Keohane, and S. Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton Univ. Press.

A much-quoted and highly influential book on research design for the social sciences. The book aimed to discuss and assess qualitative research, arguing that qualitative research should be benchmarked against standards used in quantitative research: never select cases on the dependent variable, always have more observations than variables, maximize variation, and so on.

Mahoney, J., and D. Rueschemeyer, eds. 2003. Comparative historical analysis in the social sciences . Cambridge, UK: Cambridge Univ. Press.

This is a very impressive volume, with chapters written by the best researchers in macro-sociological research and comparative politics. It shows the key strengths of comparative historical research for explaining key social phenomena such as revolutions, social provisions, and democracy. In addition, it masterfully combines substantive discussions with their methodological implications and challenges, and in this way shows how case-based research contributes fundamentally to understanding social change.

Poteete, A., M. Janssen, and E. Ostrom. 2010. Working together: Collective action, the commons and multiple methods in practice . Princeton, NJ: Princeton Univ. Press.

The study of Common Pool Resources (CPRs) has been one of the most theoretically advanced subjects in social sciences. This excellent book introduces different research designs to analyze questions related to the governance of CPRs and situates QCA nicely in the universe of different research designs and strategies.

Ragin, C. C. 1997. Turning the tables: How case-oriented methods challenge variable-oriented methods. Comparative Social Research 16:27–42.

Engages directly with the work of King, et al. 1994 and fundamentally disagrees with its authors. Ragin argues that qualitative case-based research is based on different standards and that this type of research should be assessed on the basis of those standards.

Ragin, C. C., and H. Becker. 1992. What is a case? Exploring the foundations of social inquiry . Cambridge, UK: Cambridge Univ. Press.

Brings together leading researchers to discuss the deceptively easy question “what is a case?” and shows the many different approaches toward case-study research. One red line going through the contributions is the emphasis on thinking hard about the question “what is my case a case of?” in theoretical terms.

Sociology Group: Welcome to Social Sciences Blog

How to Do Comparative Analysis in Research (Examples)

Comparative analysis is a method widely used in social science . It compares two or more items with the aim of uncovering and discovering new ideas about them. It often compares and contrasts social structures and processes around the world to grasp general patterns. Comparative analysis seeks to understand and explain every element of the data being compared.

Comparative Analysis in Social Science Research

We often compare and contrast in our daily life, so it is natural to compare and contrast cultures and human societies. We often hear that ‘our culture is better than theirs’ or ‘their lifestyle is better than ours’. In social science, social scientists compare primitive, barbarian, civilized, and modern societies. They do this to understand and trace the evolutionary changes that happen to a society and its people. Comparison is used not only to understand evolutionary processes but also to identify the differences, changes, and connections between societies.

Most social scientists are involved in comparative analysis. As Macfarlane observed, historical comparisons are usually made across time, whereas in the other social sciences they are made predominantly across space. The historian takes their own society, compares it with a past society, and analyzes how far the two differ from each other.

The comparative method of social research is a product of 19th-century sociology and social anthropology. Sociologists like Émile Durkheim, Herbert Spencer, and Max Weber used comparative analysis in their works. For example, Max Weber compared the Protestants of Europe with Catholics, and also with other religions like Islam, Hinduism, and Confucianism.

To make a systematic comparison, we need to attend to the different elements of the method.

1. Methods of comparison

In social science, we can make comparisons in different ways, depending on the topic and the field of study. Émile Durkheim, for example, compared societies in terms of organic and mechanical solidarity. Durkheim provides us with three different approaches to the comparative method, which are:

  • The first approach is to select one particular society in a fixed period and, within it, identify the relationships, connections, and differences that exist in that society alone: its religious practices, traditions, laws, norms, etc.
  • The second approach is to draw together various societies that share common or similar characteristics but vary in some ways. We can select societies from a specific period, or from different periods, that have common characteristics yet differ in some respects. For example, we can take European and American societies (which have broadly similar characteristics) in the 20th century and compare and contrast them in terms of law, custom, tradition, etc.
  • The third approach he envisaged is to take societies from different times that may share some similar characteristics or may show revolutionary changes. For example, we can compare modern and primitive societies, which reveal revolutionary social changes.

2. The unit of comparison

We cannot compare every aspect of society; there are many things that simply cannot be compared. The success of the comparative method rests on the unit or element selected for comparison. We can only compare things that have some attributes in common. For example, we can compare the existing family system in America with the existing family system in Europe, but we cannot compare food habits in China with the divorce rate in America. So the next thing to remember is the unit of comparison: you have to select it with utmost care.
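This point can be illustrated with a small sketch. Assuming two hypothetical family-system profiles (the attribute names and values below are invented for illustration), a comparison is only meaningful over the attributes both units share:

```python
# Hypothetical attribute profiles for two units of comparison.
us_family = {"household_size": 3.1, "divorce_rate": 0.45, "extended_kin": False}
eu_family = {"household_size": 2.9, "extended_kin": True, "cohabitation_rate": 0.22}

# Only attributes present in both units can ground a comparison.
shared = sorted(us_family.keys() & eu_family.keys())
for attr in shared:
    print(f"{attr}: US = {us_family[attr]}, EU = {eu_family[attr]}")

# Attributes found in only one unit are not comparable.
unshared = sorted(us_family.keys() ^ eu_family.keys())
```

Here `shared` contains only `extended_kin` and `household_size`, the attributes on which the two units can legitimately be compared; `divorce_rate` and `cohabitation_rate` drop out because each appears in only one unit.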

3. The motive of comparison

Comparative analysis is one method of study among many available to the social scientist. The researcher who uses the comparative method must know on what grounds they are adopting it, and must consider its strengths, limitations, and weaknesses. They must know how to do the analysis.

Steps of the comparative method

1. Setting up of a unit of comparison

As mentioned earlier, the first step is to determine the unit of comparison for your study, considering all of its dimensions. This is where you place the two things you need to compare in order to analyze them properly. It is not an easy step; it must be done systematically and scientifically, with proper methods and techniques. You have to set your objectives and variables, make assumptions, ask yourself what you need to study, or form a hypothesis for your analysis.

The best frames of reference are built from explicit sources rather than your own musings or perceptions. To do that, you can select some attributes of the society, such as marriage, law, customs, and norms; by doing this you can easily compare and contrast the two societies selected for your study. You can set questions such as: Are the marriage practices of Catholics different from those of Protestants? Do men and women get an equal voice in their choice of partner? You can set as many questions as you want, because they will uncover the truth about that particular topic. A comparative analysis must have such attributes to study. A social scientist who wishes to compare must develop the research questions that come to mind; a study without them will not be fruitful.

2. Grounds of comparison

The grounds of comparison should be understandable to the reader. You must explain why you selected these units for your comparison. It is natural for someone to ask: why did you choose this one and not another? What is the reason behind choosing this particular society? If a social scientist chooses a primitive Asian society and a primitive Australian society for comparison, they must make the grounds of comparison clear to the readers. The comparison in your work must be self-explanatory, without complications.

If you choose two particular societies for your comparative analysis, you must convey to the reader what you intend by choosing them and the reason for including each society in your analysis.

3. Report or thesis

The main element of the comparative analysis is the thesis or report. The report is the most important part, as it must contain your whole frame of reference: your research questions, the objectives of your topic, the characteristics of your two units of comparison, the variables in your study, and, last but not least, your findings and conclusion. The findings must be self-explanatory, because the reader must understand to what extent the units are connected and what their differences are. For example, in Émile Durkheim’s theory of the division of labour, he distinguished organic solidarity from mechanical solidarity, characterizing primitive society as mechanical solidarity and modern society as organic solidarity. In the same way, you have to state your findings in the thesis.

4. Relationship and linking one to another

Your paper must link each point in the argument; without this, the reader cannot follow the logical and rational progression of your analysis. In a comparative analysis you need to relate the ‘x’ and ‘y’ in your paper (x and y being the two units or things in your comparison). To do that, you can use connectives such as ‘likewise’, ‘similarly’, and ‘on the contrary’. For example, in a comparison between primitive society and modern society we can say: ‘In primitive society the division of labour is based on gender and age; in modern society, on the contrary, the division of labour is based on a person’s skill and knowledge.’

Demerits of comparison

Comparative analysis is not always successful; it has some limitations. The broad use of comparative analysis can easily create the impression that this technique is a firmly established, smooth, and unproblematic mode of investigation which, thanks to its apparent logical status, can produce reliable knowledge once certain technical preconditions are met acceptably.

Perhaps the most fundamental issue here concerns the independence of the units chosen for comparison. As different types of entities come to be analyzed, there is frequently an unexamined and implicit assumption of their independence, and a quiet tendency to disregard the mutual influences and reciprocal effects among the units.

Another basic issue with broad ramifications concerns the choice of the units being analyzed. The main point is that, far from being an innocent or simple task, the choice of comparison units is a critical and precarious matter. The problem is that in such investigations the descriptions of the cases chosen for comparison with the principal one tend to become unduly simplified, shallow, and stylised, with distorted arguments and conclusions as a consequence.

However, comparative analysis remains a strategy with exceptional benefits, above all its capacity to make us perceive the limits of our own minds and to guard against the weaknesses and harmful consequences of localism and provincialism. We may nevertheless have something to learn from historians’ hesitation to use comparison, and from their respect for the uniqueness of contexts and the histories of peoples. Above all, by making comparisons we discover truths: the underlying, undiscovered connections and differences that exist in society.

Also Read: How to write a Sociology Analysis? Explained with Examples


Comparative Analysis: What It Is & How to Conduct It

Comparative analysis compares your site or tool to those of your competitors. It's better to know what your competitors have to offer.

When a business wants to start a marketing campaign or grow, a comparative analysis can give them information that helps them make crucial decisions. This analysis gathers different data sets to compare different options so a business can make good decisions for its customers and itself. If you or your business want to make good decisions, learning about comparative analyses could be helpful. 

In this article, we’ll explain the comparative analysis and its importance. We’ll also learn how to do a good in-depth analysis .

What is comparative analysis?

Comparative analysis is a way to look at two or more similar things to see how they are different and what they have in common. 

It is used in many ways and fields to help people understand the similarities and differences between products better. It can help businesses make good decisions about key issues.

One meaningful way it’s used is when applied to scientific data. Scientific data is information that has been gathered through scientific research and will be used for a certain purpose.

When it is used on scientific data, it determines how consistent and reliable the data is. It also helps scientists make sure their data is accurate and valid.

Importance of comparative analysis 

Comparative analyses are important if you want to understand a problem better or find answers to important questions. Here are the main goals businesses want to reach through comparative analysis.

  • It is a part of the diagnostic phase of business analytics. It can answer many of the most important questions a company may have and help you figure out how to fix problems at the company’s core to improve performance and even make more money.
  • It encourages a deep understanding of the opportunities that apply to specific processes, departments, or business units. This analysis also ensures that we’re addressing the real reasons for performance gaps.
  • It is used a lot because it helps people understand the challenges an organization has faced in the past and the ones it faces now. This method gives objective, fact-based information about performance and ways to improve it.

How to successfully conduct it

Consider using the advice below to carry out a successful comparative analysis:

Conduct research

Before doing an analysis, it’s important to do a lot of research. Research not only gives you evidence to back up your conclusions, but it might also show you something you hadn’t thought of before.

Research could also tell you how your competitors might handle a problem.

Make a list of what’s different and what’s the same.

When comparing two things in a comparative analysis, you need to make a detailed list of the similarities and differences.

Try to figure out how a change to one thing might affect another, such as how increasing the number of vacation days affects sales, production, or costs.

A comparative analysis can also help you find outside causes, such as economic conditions or environmental problems.
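As a rough sketch of such a list, assuming invented feature sets for your offering and a competitor’s, simple set operations separate the similarities from the differences:

```python
# Invented feature sets for illustration.
ours = {"free_tier", "api_access", "sso", "mobile_app"}
competitor = {"free_tier", "api_access", "white_label"}

same = sorted(ours & competitor)         # similarities
only_ours = sorted(ours - competitor)    # our differentiators
only_theirs = sorted(competitor - ours)  # potential gaps to close

print("Same:", same)
print("Only ours:", only_ours)
print("Only theirs:", only_theirs)
```

The three lists give you a starting structure for the analysis: what is shared, where you lead, and where you lag.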

Describe both sides

Comparative analysis may try to show that one argument or idea is better, but it must cover both sides equally, presenting the main arguments and claims of each side.

For example, to compare the benefits and drawbacks of starting a recycling program, one might examine both the positive effects, such as corporate responsibility, and the potential negative effects, such as high implementation costs, in order to make wise, practical decisions or come up with alternative solutions.

Include variables

A thorough comparative analysis is usually more than just a list of pros and cons, because it considers the variables that affect both sides.

Variables can be both things that can’t be changed, like how the weather in the summer affects shipping speeds, and things that can be changed, like when to work with a local shipper.

Do analyses regularly

Comparative analyses are important for any business practice. Consider the different areas and factors that a comparative analysis looks at:

  • Competitors
  • Stock performance
  • Financial position
  • Profitability
  • Dividends and revenue
  • Development and research

Because a comparative analysis can help more than one department in a company, doing them often can help you keep up with market changes and stay relevant.

We’ve talked about how good a comparative analysis is for your business. But things always have two sides. It is a good workaround, but still do your own user interviews or user tests if you can. 

We hope you have fun doing comparative analyses! Comparative analysis is a method you will always want to use, and the point of learning from competitors is to add your own ideas. That way, you are not just following, but learning and creating.

QuestionPro can help you with your analysis process, create and design a survey to meet your goals, and analyze data for your business’s comparative analysis.

At QuestionPro, we give researchers tools for collecting data, like our survey software and a library of insights for all kinds of long-term research. If you want to book a demo or learn more about our platform, just click here.

Open access · Published: 07 May 2021

The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions

Benjamin Hanckel, Mark Petticrew, James Thomas & Judith Green

BMC Public Health, volume 21, article number 877 (2021)


Qualitative Comparative Analysis (QCA) is a method for identifying the configurations of conditions that lead to specific outcomes. Given its potential for providing evidence of causality in complex systems, QCA is increasingly used in evaluative research to examine the uptake or impacts of public health interventions. We map this emerging field, assessing the strengths and weaknesses of QCA approaches identified in published studies, and identify implications for future research and reporting.

PubMed, Scopus and Web of Science were systematically searched for peer-reviewed studies published in English up to December 2019 that had used QCA methods to identify the conditions associated with the uptake and/or effectiveness of interventions for public health. Data relating to the interventions studied (settings/level of intervention/populations), methods (type of QCA, case level, source of data, other methods used) and reported strengths and weaknesses of QCA were extracted and synthesised narratively.

The search identified 1384 papers, of which 27 (describing 26 studies) met the inclusion criteria. Interventions evaluated ranged across: nutrition/obesity (n = 8); physical activity (n = 4); health inequalities (n = 3); mental health (n = 2); community engagement (n = 3); chronic condition management (n = 3); vaccine adoption or implementation (n = 2); programme implementation (n = 3); breastfeeding (n = 2); and general population health (n = 1). The majority of studies (n = 24) were of interventions solely or predominantly in high income countries. Key strengths reported were that QCA provides a method for addressing causal complexity; and that it provides a systematic approach for understanding the mechanisms at work in implementation across contexts. Weaknesses reported related to data availability limitations, especially on ineffective interventions. The majority of papers demonstrated good knowledge of cases, and justification of case selection, but other criteria of methodological quality were less comprehensively met.

QCA is a promising approach for addressing the role of context in complex interventions, and for identifying causal configurations of conditions that predict implementation and/or outcomes when there is sufficiently detailed understanding of a series of comparable cases. As the use of QCA in evaluative health research increases, there may be a need to develop advice for public health researchers and journals on minimum criteria for quality and reporting.


Interest in the use of Qualitative Comparative Analysis (QCA) arises in part from growing recognition of the need to broaden methodological capacity to address causality in complex systems [ 1 , 2 , 3 ]. Guidance for researchers for evaluating complex interventions suggests process evaluations [ 4 , 5 ] can provide evidence on the mechanisms of change, and the ways in which context affects outcomes. However, this does not address the more fundamental problems with trial and quasi-experimental designs arising from system complexity [ 6 ]. As Byrne notes, the key characteristic of complex systems is ‘emergence’ [ 7 ]: that is, effects may accrue from combinations of components, in contingent ways, which cannot be reduced to any one level. Asking about ‘what works’ in complex systems is not to ask a simple question about whether an intervention has particular effects, but rather to ask: “how the intervention works in relation to all existing components of the system and to other systems and their sub-systems that intersect with the system of interest” [ 7 ]. Public health interventions are typically attempts to effect change in systems that are themselves dynamic; approaches to evaluation are needed that can deal with emergence [ 8 ]. In short, understanding the uptake and impact of interventions requires methods that can account for the complex interplay of intervention conditions and system contexts.

To build a useful evidence base for public health, evaluations thus need to assess not just whether a particular intervention (or component) causes specific change in one variable, in controlled circumstances, but whether those interventions shift systems, and how specific conditions of interventions and setting contexts interact to lead to anticipated outcomes. There have been a number of calls for the development of methods in intervention research to address these issues of complex causation [ 9 , 10 , 11 ], including calls for the greater use of case studies to provide evidence on the important elements of context [ 12 , 13 ]. One approach for addressing causality in complex systems is Qualitative Comparative Analysis (QCA): a systematic way of comparing the outcomes of different combinations of system components and elements of context (‘conditions’) across a series of cases.

The potential of qualitative comparative analysis

QCA is an approach developed by Charles Ragin [ 14 , 15 ], originating in comparative politics and macrosociology to address questions of comparative historical development. Using set theory, QCA methods explore the relationships between ‘conditions’ and ‘outcomes’ by identifying configurations of necessary and sufficient conditions for an outcome. The underlying logic is different from probabilistic reasoning, as the causal relationships identified are not inferred from the (statistical) likelihood of them being found by chance, but rather from comparing sets of conditions and their relationship to outcomes. It is thus more akin to the generative conceptualisations of causality in realist evaluation approaches [ 16 ]. QCA is a non-additive and non-linear method that emphasises diversity, acknowledging that different paths can lead to the same outcome. For evaluative research in complex systems [ 17 ], QCA therefore offers a number of benefits, including: that QCA can identify more than one causal pathway to an outcome (equifinality); that it accounts for conjunctural causation (where the presence or absence of conditions in relation to other conditions might be key); and that it is asymmetric with respect to the success or failure of outcomes. That is, that specific factors explain success does not imply that their absence leads to failure (causal asymmetry).

QCA was designed, and is typically used, to compare data from a medium N (10–50) series of cases that include those with and those without the (dichotomised) outcome. Conditions can be dichotomised in ‘crisp sets’ (csQCA) or represented in ‘fuzzy sets’ (fsQCA), where set membership is calibrated (either continuously or with cut-offs) between two extremes representing fully in (1) or fully out (0) of the set. A third version, multi-value QCA (mvQCA), infrequently used, represents conditions as ‘multi-value sets’, with multinomial membership [ 18 ]. In calibrating set membership, the researcher specifies the critical qualitative anchors that capture differences in kind (full membership and full non-membership), as well as differences in degree in fuzzy sets (partial membership) [ 15 , 19 ]. Data on outcomes and conditions can come from primary or secondary qualitative and/or quantitative sources. Once data are assembled and coded, truth tables are constructed which “list the logically possible combinations of causal conditions” [ 15 ], collating the number of cases where those configurations occur to see if they share the same outcome. Analysis of these truth tables assesses first whether any conditions are individually necessary or sufficient to predict the outcome, and then whether any configurations of conditions are necessary or sufficient. Necessary conditions are assessed by examining causal conditions shared by cases with the same outcome, whilst identifying sufficient conditions (or combinations of conditions) requires examining cases with the same causal conditions to identify if they have the same outcome [ 15 ]. However, as Legewie argues, the presence of a condition, or a combination of conditions, in actual datasets is likely to be “‘quasi-necessary’ or ‘quasi-sufficient’ in that the causal relation holds in a great majority of cases, but some cases deviate from this pattern” [ 20 ].
Following reduction of the complexity of the model, the final model is tested for coverage (the degree to which a configuration accounts for instances of an outcome in the empirical cases; the proportion of cases belonging to a particular configuration) and consistency (the degree to which the cases sharing a combination of conditions align with a proposed subset relation). The result is an analysis of complex causation, “defined as a situation in which an outcome may follow from several different combinations of causal conditions” [ 15 ] illuminating the ‘causal recipes’, the causally relevant conditions or configuration of conditions that produce the outcome of interest.
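The truth-table construction and the consistency and coverage measures described above can be sketched in a few lines. The sketch below uses invented crisp-set data and condition names (A, B, C); published analyses use dedicated QCA software rather than hand-rolled code, so this is purely illustrative of the logic:

```python
from collections import defaultdict

# Hypothetical crisp-set data: each case is scored 1/0 on three
# conditions (A, B, C) and on the outcome. Cases and condition names
# are invented for illustration.
cases = {
    "case1": ({"A": 1, "B": 1, "C": 0}, 1),
    "case2": ({"A": 1, "B": 1, "C": 0}, 1),
    "case3": ({"A": 0, "B": 1, "C": 1}, 0),
    "case4": ({"A": 1, "B": 0, "C": 1}, 1),
    "case5": ({"A": 0, "B": 0, "C": 1}, 0),
    "case6": ({"A": 0, "B": 1, "C": 0}, 1),
}

def truth_table(cases):
    """Group cases by configuration; a row with mixed outcomes is a
    'contradiction' that must be resolved before minimisation."""
    rows = defaultdict(lambda: {"cases": [], "outcomes": set()})
    for name, (conds, outcome) in cases.items():
        key = tuple(sorted(conds.items()))
        rows[key]["cases"].append(name)
        rows[key]["outcomes"].add(outcome)
    return rows

def consistency(in_config, has_outcome):
    """Sufficiency consistency: share of configuration members that
    also show the outcome."""
    members = [c for c in in_config if in_config[c]]
    if not members:
        return 0.0
    return sum(has_outcome[c] for c in members) / len(members)

def coverage(in_config, has_outcome):
    """Coverage: share of outcome cases accounted for by the
    configuration."""
    with_outcome = [c for c in has_outcome if has_outcome[c]]
    if not with_outcome:
        return 0.0
    return sum(in_config[c] for c in with_outcome) / len(with_outcome)

table = truth_table(cases)

# Evaluate a single candidate condition, A, against the outcome Y.
in_A = {name: conds["A"] for name, (conds, _) in cases.items()}
out = {name: y for name, (_, y) in cases.items()}
print(f"consistency(A -> Y) = {consistency(in_A, out):.2f}")
print(f"coverage(A -> Y)    = {coverage(in_A, out):.2f}")
```

In this toy data, A is perfectly consistent with the outcome (every A-case shows Y) but covers only three of the four outcome cases, which is the kind of pattern the ‘quasi-sufficient’ language above describes.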

QCA, then, has promise for addressing questions of complex causation, and recent calls for the greater use of QCA methods have come from a range of fields related to public health, including health research [ 17 ], studies of social interventions [ 7 ], and policy evaluation [ 21 , 22 ]. In making arguments for the use of QCA across these fields, researchers have also indicated some of the considerations that must be taken into account to ensure robust and credible analyses. There is a need, for instance, to ensure that ‘contradictions’, where cases with the same configurations show different outcomes, are resolved and reported [ 15 , 23 , 24 ]. Additionally, researchers must consider the ratio of cases to conditions, and limit the number of conditions to cases to ensure the validity of models [ 25 ]. Marx and Dusa, examining crisp set QCA, have provided some guidance to the ‘ceiling’ number of conditions which can be included relative to the number of cases to increase the probability of models being valid (that is, with a low probability of being generated through random data) [ 26 ].

There is now a growing body of published research in public health and related fields drawing on QCA methods. This is therefore a timely point to map the field and assess the potential of QCA as a method for contributing to the evidence base for what works in improving public health. To inform future methodological development of robust methods for addressing complexity in the evaluation of public health interventions, we undertook a systematic review to map existing evidence, identify gaps in, and strengths and weakness of, the QCA literature to date, and identify the implications of these for conducting and reporting future QCA studies for public health evaluation. We aimed to address the following specific questions [ 27 ]:

1. How is QCA used for public health evaluation? What populations, settings, methods used in source case studies, unit/s and level of analysis (‘cases’), and ‘conditions’ have been included in QCA studies?

2. What strengths and weaknesses have been identified by researchers who have used QCA to understand complex causation in public health evaluation research?

3. What are the existing gaps in, and strengths and weakness of, the QCA literature in public health evaluation, and what implications do these have for future research and reporting of QCA studies for public health?

This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 29 April 2019 ( CRD42019131910 ). A protocol was prepared in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) 2015 statement [ 28 ], and published in 2019 [ 27 ], where the methods are explained in detail. EPPI-Reviewer 4 was used to manage the process and undertake screening of abstracts [ 29 ].

Search strategy

We searched for peer-reviewed published papers in English, which used QCA methods to examine causal complexity in evaluating the implementation, uptake and/or effects of a public health intervention, in any region of the world, for any population. ‘Public health interventions’ were defined as those which aim to promote or protect health, or prevent ill health, in the population. No date exclusions were made, and papers published up to December 2019 were included.

Search strategies used the following phrases “Qualitative Comparative Analysis” and “QCA”, which were combined with the keywords “health”, “public health”, “intervention”, and “wellbeing”. See Additional file  1 for an example. Searches were undertaken on the following databases: PubMed, Web of Science, and Scopus. Additional searches were undertaken on Microsoft Academic and Google Scholar in December 2019, where the first pages of results were checked for studies that may have been missed in the initial search. No additional studies were identified. The list of included studies was sent to experts in QCA methods in health and related fields, including authors of included studies and/or those who had published on QCA methodology. This generated no additional studies within scope, but a suggestion to check the COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) database; this was searched, identifying one further study that met the inclusion criteria [ 30 ]. COMPASSS ( https://compasss.org/ ) collates publications of studies using comparative case analysis.

We excluded studies where no intervention was evaluated, which included studies that used QCA to examine public health infrastructure (e.g. staff training) without a specific health outcome, and papers that report on prevalence of health issues (e.g. prevalence of child mortality). We also excluded studies of health systems or services interventions where there was no public health outcome.

After retrieval, and removal of duplicates, titles and abstracts were screened by one of two authors (BH or JG). Double screening of all records was assisted by EPPI Reviewer 4’s machine learning function. Of the 1384 papers identified after duplicates were removed, we excluded 820 after review of titles and abstracts (Fig.  1 ). The excluded studies included: a large number of papers relating to ‘quantitative coronary angioplasty’ and some which referred to the Queensland Criminal Code (both of which are also abbreviated to ‘QCA’); papers that reported methodological issues but not empirical studies; protocols; and papers that used the phrase ‘qualitative comparative analysis’ to refer to qualitative studies that compared different sub-populations or cases within the study, but did not include formal QCA methods.

Fig. 1 Flow Diagram

Full texts of the 51 remaining studies were screened by BH and JG for inclusion; 10 papers were double-coded by both authors, with complete agreement. Uncertain inclusions were checked by the third author (MP). Of the full texts, 24 were excluded because: they did not report a public health intervention ( n  = 18); had used a methodology inspired by QCA, but had not undertaken a QCA ( n  = 2); were protocols or methodological papers only ( n  = 2); or were not published in peer-reviewed journals ( n  = 2) (see Fig.  1 ).

Data were extracted manually from the 27 remaining full texts by BH and JG. Two papers relating to the same research question and dataset were combined, such that analysis was by study ( n  = 26) not by paper. We retrieved data relating to: publication (journal, first author country affiliation, funding reported); the study setting (country/region setting, population targeted by the intervention(s)); intervention(s) studied; methods (aims, rationale for using QCA, crisp or fuzzy set QCA, other analysis methods used); data sources drawn on for cases (source [primary data, secondary data, published analyses], qualitative/quantitative data, level of analysis, number of cases, final causal conditions included in the analysis); outcome explained; and claims made about strengths and weaknesses of using QCA (see Table  1 ). Data were synthesised narratively, using thematic synthesis methods [ 31 , 32 ], with interventions categorised by public health domain and level of intervention.

Quality assessment

There are no reporting guidelines for QCA studies in public health, but there are a number of discussions of best practice in the methodological literature [ 25 , 26 , 33 , 34 ]. These discussions suggest several criteria for strengthening QCA methods that we used as indicators of methodological and/or reporting quality: evidence of familiarity with cases; justification for selection of cases; discussion and justification of set membership score calibration; reporting of truth tables; reporting and justification of solution formula; and reporting of consistency and coverage measures. For studies using csQCA, and claiming an explanatory analysis, we additionally identified whether the number of cases was sufficient for the number of conditions included in the model, using a pragmatic cut-off in line with Marx & Dusa’s guideline thresholds, which indicate how many cases are sufficient for given numbers of conditions to reject a 10% probability that models could be generated with random data [ 26 ].

Overview of scope of QCA research in public health

Twenty-seven papers reporting 26 studies were included in the review (Table  1 ). The earliest was published in 2005, and 17 were published after 2015. The majority ( n  = 19) were published in public health/health promotion journals, with the remainder published in other health science ( n  = 3) or in social science/management journals ( n  = 4). The public health domain(s) addressed by each study were broadly coded by the main area of focus. They included nutrition/obesity ( n  = 8); physical activity (PA) (n = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation (n = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2); or general population health ( n  = 1). The majority ( n  = 24) of studies were conducted solely or predominantly in high-income countries (systematic reviews in general searched global sources, but commented that the overwhelming majority of studies were from high-income countries). Country settings included: any ( n  = 6); OECD countries ( n  = 3); USA ( n  = 6); UK ( n  = 6) and one each from Nepal, Austria, Belgium, Netherlands and Africa. These largely reflected the first author’s country affiliations in the UK ( n  = 13); USA ( n  = 9); and one each from South Africa, Austria, Belgium, and the Netherlands. All three studies primarily addressing health inequalities [ 35 , 36 , 37 ] were from the UK.

Eight of the interventions evaluated were individual-level behaviour change interventions (e.g. weight management interventions, case management, self-management for chronic conditions); eight evaluated policy/funding interventions; five explored settings-based health promotion/behaviour change interventions (e.g. schools-based physical activity intervention, store-based food choice interventions); three evaluated community empowerment/engagement interventions, and two studies evaluated networks and their impact on health outcomes.

Methods and data sets used

Fifteen studies used crisp sets (csQCA), 11 used fuzzy sets (fsQCA). No study used mvQCA. Eleven studies included additional analyses of the datasets drawn on for the QCA, including six that used qualitative approaches (narrative synthesis, case comparisons), typically to identify cases or conditions for populating the QCA; and four reporting additional statistical analyses (meta-regression, linear regression) to either identify differences overall between cases prior to conducting a QCA (e.g. [ 38 ]) or to explore correlations in more detail (e.g. [ 39 ]). One study used an additional Boolean configurational technique to reduce the number of conditions in the QCA analysis [ 40 ]. No studies reported aiming to compare the findings from the QCA with those from other techniques for evaluating the uptake or effectiveness of interventions, although some [ 41 , 42 ] were explicitly using the study to showcase the possibilities of QCA compared with other approaches in general. Twelve studies drew on primary data collected specifically for the study, with five of those additionally drawing on secondary data sets; five drew only on secondary data sets, and nine used data from systematic reviews of published research. Seven studies drew primarily on qualitative data, generally derived from interviews or observations.

Many studies were undertaken in the context of one or more trials, which provided evidence of effect. Within single trials, this was generally for a process evaluation, with cases being trial sites. Fernald et al’s study, for instance, was in the context of a trial of a programme to support primary care teams in identifying and implementing self-management support tools for their patients, which measured patient and health care provider level outcomes [ 43 ]. The QCA reported here used qualitative data from the trial to identify a set of necessary conditions for health care provider practices to implement the tools successfully. In studies drawing on data from systematic reviews, cases were always at the level of intervention or intervention component, with data included from multiple trials. Harris et al., for instance, undertook a mixed-methods systematic review of school-based self-management interventions for asthma, using meta-analysis methods to identify effective interventions and QCA methods to identify which intervention features were aligned with success [ 44 ].

The largest number of studies ( n  = 10), including all the systematic reviews, analysed cases at the level of the intervention, or a component of the intervention; seven analysed organisational level cases (e.g. school class, network, primary care practice); five analysed sub-national region level cases (e.g. state, local authority area), and two each analysed country or individual level cases. Sample sizes ranged from 10 to 131, with no study having small N (< 10) sample sizes, four having large N (> 50) sample sizes, and the majority (22) being medium N studies (in the range 10–50).

Rationale for using QCA

Most papers reported a rationale for using QCA that mentioned ‘complexity’ or ‘context’, including: noting that QCA is appropriate for addressing causal complexity or multiple pathways to outcome [ 37 , 43 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]; noting the appropriateness of the method for providing evidence on how context impacts on interventions [ 41 , 50 ]; or the need for a method that addressed causal asymmetry [ 52 ]. Three stated that the QCA was an ‘exploratory’ analysis [ 53 , 54 , 55 ]. In addition to the empirical aims, several papers (e.g. [ 42 , 48 ]) sought to demonstrate the utility of QCA, or to develop QCA methods for health research (e.g. [ 47 ]).

Reported strengths and weaknesses of approach

There was general agreement about the strengths of QCA. Specifically, that it was a useful tool to address complex causality, providing a systematic approach to understand the mechanisms at work in implementation across contexts [ 38 , 39 , 43 , 45 , 46 , 47 , 55 , 56 , 57 ], particularly as they relate to (in)effective intervention implementation [ 44 , 51 ] and the evaluation of interventions [ 58 ], or “where it is not possible to identify linearity between variables of interest and outcomes” [ 49 ]. Authors highlighted the strengths of QCA as providing possibilities for examining complex policy problems [ 37 , 59 ]; for testing existing as well as new theory [ 52 ]; and for identifying aspects of interventions which had not been previously perceived as critical [ 41 ] or which may have been missed when drawing on statistical methods that use, for instance, linear additive models [ 42 ]. The strengths of QCA in terms of providing useful evidence for policy were flagged in a number of studies, particularly where the causal recipes suggested that conventional assumptions about effectiveness were not confirmed. Blackman et al., for instance, in a series of studies exploring why unequal health outcomes had narrowed in some areas of the UK and not others, identified poorer outcomes in settings with ‘better’ contracting [ 35 , 36 , 37 ]; Harting found, contrary to theoretical assumptions about the necessary conditions for successful implementation of public health interventions, that a multisectoral network was not a necessary condition [ 30 ].

Weaknesses reported included the limitations of QCA in general for addressing complexity, as well as specific limitations with either the csQCA or the fsQCA methods employed. One general concern discussed across a number of studies was the problem of limited empirical diversity, which resulted in: limitations in the possible number of conditions included in each study, particularly with small N studies [ 58 ]; missing data on important conditions [ 43 ]; or limited reported diversity (where, for instance, data were drawn from systematic reviews, reflecting publication biases which limit reporting of ineffective interventions) [ 41 ]. Reported methodological limitations in small and intermediate N studies included concerns about the potential that case selection could bias findings [ 37 ].

In terms of potential for addressing causal complexity, the limitations of QCA for identifying unintended consequences, tipping points, and/or feedback loops in complex adaptive systems were noted [ 60 ], as were the potential limitations (especially in csQCA studies) of reducing complex conditions, drawn from detailed qualitative understanding, to binary conditions [ 35 ]. The impossibility of doing this was a rationale for using fsQCA in one study [ 57 ], where detailed knowledge of conditions is needed to make theoretically justified calibration decisions. However, others [ 47 ] make the case that csQCA provides more appropriate findings for policy: dichotomisation forces a focus on meaningful distinctions, including those related to decisions that practitioners/policy makers can action. There is, then, a potential trade-off in providing ‘interpretable results’, but ones which preclude potential for utilising more detailed information [ 45 ]. That QCA does not deal with probabilistic causation was noted [ 47 ].

Quality of published studies

Assessment of ‘familiarity with cases’ was made subjectively on the basis of study authors’ reports of their knowledge of the settings (empirical or theoretical) and the descriptions they provided in the published paper: overall, 14 were judged as sufficient, and 12 less than sufficient. Studies which included primary data were more likely to be judged as demonstrating familiarity ( n  = 10) than those drawing on secondary sources or systematic reviews, of which only two were judged as demonstrating familiarity. All studies justified how the selection of cases had been made; for those not using the full available population of cases, this was in general (appropriately) done theoretically: following previous research [ 52 ]; purposively to include a range of positive and negative outcomes [ 41 ]; or to include a diversity of cases [ 58 ]. In identifying conditions leading to effective/not effective interventions, one purposive strategy was to include a specified percentage or number of the most effective and least effective interventions (e.g. [ 36 , 40 , 51 , 52 ]). Discussion of calibration of set membership scores was judged adequate in 15 cases, and inadequate in 11; 10 reported raw data matrices in the paper or supplementary material; 21 reported truth tables in the paper or supplementary material. The majority ( n  = 21) reported at least some detail on the coverage (the number of cases with a particular configuration) and consistency (the percentage of similar causal configurations which result in the same outcome). The majority ( n  = 21) included truth tables (or explicitly provided details of how to obtain them); fewer ( n  = 10) included raw data. Only five studies met all six of these quality criteria (evidence of familiarity with cases, justification of case selection, discussion of calibration, reporting truth tables, reporting raw data matrices, reporting coverage and consistency); a further six met at least five of them.

Of the csQCA studies which were not reporting an exploratory analysis, four appeared to have insufficient cases for the large number of conditions entered into at least one of the models reported, with a consequent risk to the validity of the QCA models [ 26 ].

QCA has been widely used in public health research over the last decade to advance understanding of causal inference in complex systems. In this review of published evidence to date, we have identified studies using QCA to examine the configurations of conditions that lead to particular outcomes across contexts. As noted by most study authors, QCA methods promise advantages over probabilistic statistical techniques for examining causation where systems and/or interventions are complex, providing public health researchers with a method to test the multiple pathways (configurations of conditions), and necessary and sufficient conditions, that lead to desired health outcomes.

The origins of QCA approaches are in comparative policy studies. Rihoux et al’s review of peer-reviewed journal articles using QCA methods published up to 2011 found the majority of published examples were from political science and sociology, with fewer than 5% of the 313 studies they identified coming from health sciences [ 61 ]. They also reported few examples of the method being used in policy evaluation and implementation studies [ 62 ]. In the decade since their review of the field [ 61 ], there has been an emerging body of evaluative work in health: we identified 26 studies in the field of public health alone, with the majority published in public health journals. Across these studies, QCA has been used for evaluative questions in a range of settings and public health domains to identify the conditions under which interventions are implemented and/or have evidence of effect for improving population health. All studies included a series of cases that included some with and some without the outcome of interest (such as behaviour change, successful programme implementation, or good vaccination uptake). The dominance of high-income countries in both intervention settings and author affiliations is disappointing, but reflects the disproportionate location of public health research in the global north more generally [ 63 ].

The largest single group of studies included were systematic reviews, using QCA to compare interventions (or intervention components) to identify successful (and non-successful) configurations of conditions across contexts. Here, the value of QCA lies in its potential for synthesis with quantitative meta-synthesis methods to identify the particular conditions or contexts in which interventions or components are effective. As Parrott et al. note, for instance, their meta-analysis could identify probabilistic effects of weight management programmes, and the QCA analysis enabled them to address the “role that the context of the [paediatric weight management] intervention has in influencing how, when, and for whom an intervention mix will be successful” [ 50 ]. However, using QCA to identify configurations of conditions that lead to effective or non-effective interventions across particular areas of population health is an application that does move away in some significant respects from the origins of the method. First, researchers drawing on evidence from systematic reviews for their data are reliant largely on published evidence for information on conditions (such as the organisational contexts in which interventions were implemented, or the types of behaviour change theory utilised). Although guidance for describing interventions [ 64 ] advises key aspects of context are included in reports, this may not include data on the full range of conditions that might be causally important, and review research teams may have limited knowledge of these ‘cases’ themselves. Second, less successful interventions are less likely to be published, potentially limiting the diversity of cases, particularly of cases with unsuccessful outcomes. A strength of QCA is the separate analysis of conditions leading to positive and negative outcomes: this is precluded where there is insufficient evidence on negative outcomes [ 50 ].
Third, when including a range of types of intervention, it can be unclear whether the cases included are truly comparable. A QCA study requires a high degree of theoretical and pragmatic case knowledge on the part of the researcher to calibrate conditions to qualitative anchors: it is reliant on deep understanding of complex contexts, and a familiarity with how conditions interact within and across contexts. Perhaps surprising is that only seven of the studies included here clearly drew on qualitative data, given that QCA is primarily seen as a method that requires thick, detailed knowledge of cases, particularly when the aim is to understand complex causation [ 8 ]. Whilst research teams conducting QCA in the context of systematic reviews may have detailed understanding in general of interventions within their spheres of expertise, they are unlikely to have this for the whole range of cases, particularly where a diverse set of contexts (countries, organisational settings) are included. Making a theoretical case for the valid comparability of such a case series is crucial. There may, then, be limitations in the portability of QCA methods for conducting studies entirely reliant on data from published evidence.

QCA was developed for small and medium N series of cases, and (as in the field more broadly, [ 61 ]), the samples in our studies predominantly had between 10 and 50 cases. However, there is increasing interest in the method as an alternative or complementary technique to regression-oriented statistical methods for larger samples [ 65 ], such as from surveys, where detailed knowledge of cases is likely to be replaced by theoretical knowledge of relationships between conditions (see [ 23 ]). The two larger N (> 100 cases) studies in our sample were an individual level analysis of survey data [ 46 , 47 ] and an analysis of intervention arms from a systematic review [ 50 ]. Larger sample sizes allow more conditions to be included in the analysis [ 23 , 26 ], although for evaluative research, where the aim is developing a causal explanation, rather than simply exploring patterns, there remains a limit to the number of conditions that can be included. As the number of conditions included increases, so too does the number of possible configurations, increasing the chance of unique combinations and of generating spurious solutions with a high level of consistency. As a rule of thumb, once the number of conditions exceeds 6–8 (with up to 50 cases) or 10 (for larger samples), the credibility of solutions may be severely compromised [ 23 ].
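The arithmetic behind this rule of thumb is simple combinatorics: each added dichotomous condition doubles the number of logically possible truth-table rows, so the property space quickly outruns a medium-N case series. A hypothetical illustration (the 50-case figure is just an example of a comparatively large QCA sample):

```python
# k dichotomous conditions define 2**k logically possible
# configurations; with a fixed number of cases, most rows become
# unobservable 'logical remainders' as k grows (limited diversity).
def n_configurations(k):
    return 2 ** k

n_cases = 50  # a comparatively large sample by QCA standards
for k in range(4, 11):
    rows = n_configurations(k)
    print(f"{k:2d} conditions: {rows:5d} possible rows "
          f"(at most {min(n_cases, rows)} observable with {n_cases} cases)")
```

At 6 conditions there are already 64 possible rows, more than a 50-case study can populate even with one case per row, which is why solutions built on many conditions risk resting on sparse or empty configurations.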

Strengths and weaknesses of the study

A systematic review has the potential advantages of transparency and rigour and, if not exhaustive, our search is likely to be representative of the body of research using QCA for evaluative public health research up to 2020. However, a limitation is the inevitable difficulty in operationalising a ‘public health’ intervention. Exclusions on scope are not straightforward, given that most social, environmental and political conditions impact on public health, and arguably a greater range of policy and social interventions (such as fiscal or trade policies) that have been the subject of QCA analyses could have been included, or a greater range of more clinical interventions. However, to enable a manageable number of papers to review, and restrict our focus to those papers that were most directly applicable to (and likely to be read by) those in public health policy and practice, we operationalised ‘public health interventions’ as those which were likely to be directly impacting on population health outcomes, or on behaviours (such as increased physical activity) where there was good evidence for causal relationships with public health outcomes, and where the primary research question of the study examined the conditions leading to those outcomes. This review has, of necessity, therefore excluded a considerable body of evidence likely to be useful for public health practice in terms of planning interventions, such as studies on how to better target smoking cessation [ 66 ] or foster social networks [ 67 ] where the primary research question was on conditions leading to these outcomes, rather than on conditions for outcomes of specific interventions. 
Similarly, there is a growing number of descriptive epidemiological studies using QCA to explore factors predicting outcomes across such diverse areas as lupus and quality of life [ 68 ]; length of hospital stay [ 69 ]; constellations of factors predicting injury [ 70 ]; or the role of austerity, crisis and recession in predicting public health outcomes [ 71 ]. Whilst there is undoubtedly useful information to be derived from studying the conditions that lead to particular public health problems, these studies were not directly evaluating interventions, so they were also excluded.

Restricting our search to publications in English and to peer reviewed publications may have missed bodies of work from many regions, and has excluded research from non-governmental organisations using QCA methods in evaluation. As this is a rapidly evolving field, with relatively recent uptake in public health (all our included studies were after 2005), our studies may not reflect the most recent advances in the area.

Implications for conducting and reporting QCA studies

This systematic review has reviewed studies that deployed an emergent methodology, which has no reporting guidelines and has had, to date, a relatively low level of awareness among many potential evidence users in public health. For this reason, many of the studies reviewed were relatively detailed on the methods used, and the rationale for utilising QCA.

We did not assess quality directly, but used indicators of good practice discussed in QCA methodological literature, largely written for policy studies scholars, and often post-dating the publication dates of studies included in this review. It is also worth noting that, given the relatively recent development of QCA methods, methodological debate is still thriving on issues such as the reliability of causal inferences [ 72 ], alongside more general critiques of the usefulness of the method for policy decisions (see, for instance, [ 73 ]). The authors of studies included in this review also commented directly on methodological development: for instance, Thomas et al. suggests that QCA may benefit from methods development for sensitivity analyses around calibration decisions [ 42 ].

However, we selected quality criteria that, we argue, are relevant for public health research: justifying the selection of cases, discussing and justifying the calibration of set membership, making data sets available, and reporting truth tables, consistency and coverage are all good practice in line with the usual requirements of transparency and credibility in methods. When QCA studies aim to provide explanation of outcomes (rather than exploring configurations), it is also vital that they are reported in ways that enhance the credibility of claims made, including justifying the number of conditions included relative to cases. Few of the studies published to date met all these criteria, at least in the papers included here (although additional material may have been provided in other publications). To improve the future discoverability and uptake of QCA methods in public health, and to strengthen the credibility of findings from these methods, we therefore suggest the following criteria should be considered by authors and reviewers for reporting QCA studies which aim to provide causal evidence about the configurations of conditions that lead to implementation or outcomes:

  • The paper title and abstract state the QCA design;
  • The sampling unit for the ‘case’ is clearly defined (e.g.: patient, specified geographical population, ward, hospital, network, policy, country);
  • The population from which the cases have been selected is defined (e.g.: all patients in a country with X condition, districts in X country, tertiary hospitals, all hospitals in X country, all health promotion networks in X province, European policies on smoking in outdoor places, OECD countries);
  • The rationale for selection of cases from the population is justified (e.g.: whole population, random selection, purposive sample);
  • There are sufficient cases to provide credible coverage across the number of conditions included in the model, and the rationale for the number of conditions included is stated;
  • Cases are comparable;
  • There is a clear justification for how choices of relevant conditions (or ‘aspects of context’) have been made;
  • There is sufficient transparency for replicability: in line with open science expectations, datasets should be available where possible; truth tables should be reported in publications, and reports of coverage and consistency provided.
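For readers unfamiliar with the reporting terms above, consistency and coverage are simple set-theoretic proportions. The following minimal Python sketch (with entirely hypothetical cases and condition names) illustrates how they are computed for one crisp-set configuration:

```python
# Crisp-set QCA: each case records 0/1 membership in conditions and the outcome.
# These rows and condition names are illustrative only, not from any study.
cases = [
    {"funding": 1, "engagement": 1, "outcome": 1},
    {"funding": 1, "engagement": 1, "outcome": 1},
    {"funding": 1, "engagement": 0, "outcome": 0},
    {"funding": 0, "engagement": 1, "outcome": 1},
    {"funding": 1, "engagement": 1, "outcome": 0},
]

def consistency(cases, config):
    """Share of cases matching the configuration that also show the outcome."""
    matching = [c for c in cases if all(c[k] == v for k, v in config.items())]
    if not matching:
        return None
    return sum(c["outcome"] for c in matching) / len(matching)

def coverage(cases, config):
    """Share of outcome cases that are accounted for by the configuration."""
    outcome_cases = [c for c in cases if c["outcome"] == 1]
    if not outcome_cases:
        return None
    covered = [c for c in outcome_cases if all(c[k] == v for k, v in config.items())]
    return len(covered) / len(outcome_cases)

config = {"funding": 1, "engagement": 1}
print(consistency(cases, config))  # 2 of the 3 matching cases show the outcome
print(coverage(cases, config))     # 2 of the 3 outcome cases are covered
```

Reporting these two proportions for each configuration, alongside the truth table itself, is what allows readers to judge how well a claimed causal recipe fits the cases.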

Implications for future research

In reviewing methods for evaluating natural experiments, Craig et al. focus on statistical techniques for enhancing causal inference, noting only that what they call ‘qualitative’ techniques (the cited references for these are all QCA studies) require “further studies … to establish their validity and usefulness” [ 2 ]. The studies included in this review have demonstrated that QCA is a feasible method when there are sufficient (comparable) cases for identifying configurations of conditions under which interventions are effective (or not), or are implemented (or not). Given ongoing concerns in public health about how best to evaluate interventions across complex contexts and systems, this is promising. This review has also demonstrated the value of adding QCA methods to the tool box of techniques for evaluating interventions such as public policies, health promotion programmes, and organisational changes - whether they are implemented in a randomised way or not. Many of the studies in this review have clearly generated useful evidence: whether this evidence has had more or less impact, in terms of influencing practice and policy, or is more valid, than evidence generated by other methods is not known. Validating the findings of a QCA study is perhaps as challenging as validating the findings from any other design, given the absence of any gold standard comparators. Comparisons of the findings of QCA with those from other methods are also typically constrained by the rather different research questions asked, and the different purposes of the analysis. In our review, QCA was typically used alongside other methods to address different questions, rather than to compare methods. However, as the field develops, follow up studies, which evaluate outcomes of interventions designed in line with conditions identified as causal in prior QCAs, might be useful for contributing to validation.

This review was limited to public health evaluation research: other domains that would be useful to map include health systems/services interventions and studies used to design or target interventions. There is also an opportunity to broaden the scope of the field, particularly for addressing some of the more intractable challenges for public health research. Given the limitations in the evidence base on what works to address inequalities in health, for instance [ 74 ], QCA has potential here, to help identify the conditions under which interventions do or do not exacerbate unequal outcomes, or the conditions that lead to differential uptake or impacts across sub-population groups. It is perhaps surprising that relatively few of the studies in this review included cases at the level of country or region, the traditional level for QCA studies. There may be scope for developing international comparisons for public health policy, and using QCA methods at the case level (nation, sub-national region) of classic policy studies in the field. In the light of debate around COVID-19 pandemic response effectiveness, comparative studies across jurisdictions might shed light on issues such as differential population responses to vaccine uptake or mask use, for example, and these might in turn be considered as conditions in causal configurations leading to differential morbidity or mortality outcomes.

When should QCA be considered?

Public health evaluations typically assess the efficacy, effectiveness or cost-effectiveness of interventions and the processes and mechanisms through which they effect change. There is no perfect evaluation design for achieving these aims. As in other fields, the choice of design will in part depend on the availability of counterfactuals, the extent to which the investigator can control the intervention, and the range of potential cases and contexts [ 75 ], as well as political considerations, such as the credibility of the approach with key stakeholders [ 76 ]. There are inevitably ‘horses for courses’ [ 77 ]. The evidence from this review suggests that QCA evaluation approaches are feasible when there is a sufficient number of comparable cases with and without the outcome of interest, and when the investigators have, or can generate, sufficiently in-depth understanding of those cases to make sense of connections between conditions, and to make credible decisions about the calibration of set membership. QCA may be particularly relevant for understanding multiple causation (that is, where different configurations might lead to the same outcome), and for understanding the conditions associated with both lack of effect and effect. As a stand-alone approach, QCA might be particularly valuable for national and regional comparative studies of the impact of policies on public health outcomes. Alongside cluster randomised trials of interventions, or alongside systematic reviews, QCA approaches are especially useful for identifying core combinations of causal conditions for success and lack of success in implementation and outcome.
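The "calibration of set membership" referred to above is the step that turns raw case data into degrees of membership in a set. As a sketch of one common approach — the direct calibration method associated with Ragin, using three analyst-chosen anchors and a logistic transform; the variable, anchor values and threshold choices here are purely illustrative:

```python
import math

def calibrate(x, full_non, crossover, full_mem):
    """Direct calibration: map a raw value x to fuzzy-set membership in [0, 1]
    using three substantively justified anchors: full non-membership (0.05),
    the crossover point (0.5), and full membership (0.95). The anchors encode
    the analyst's qualitative judgement about the cases, not features of the data."""
    if x >= crossover:
        scalar = math.log(0.95 / 0.05) / (full_mem - crossover)
    else:
        scalar = math.log(0.05 / 0.95) / (full_non - crossover)
    return 1.0 / (1.0 + math.exp(-(x - crossover) * scalar))

# e.g. calibrating a hypothetical '% of eligible population reached' measure
# into the set 'high-reach intervention' (anchor values are illustrative):
for reach in (10, 50, 90):
    print(round(calibrate(reach, full_non=20, crossover=50, full_mem=80), 2))
# prints 0.02, then 0.5, then 0.98
```

Because the resulting memberships depend directly on where the anchors are placed, reporting and justifying those anchor choices — and, ideally, exploring their sensitivity — is central to the credibility of a QCA.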

Conclusions

QCA is a relatively new approach for public health research, with promise for contributing to much-needed methodological development for addressing causation in complex systems. This review has demonstrated the large range of evaluation questions that have been addressed to date using QCA, including contributions to process evaluations of trials and for exploring the conditions leading to effectiveness (or not) in systematic reviews of interventions. There is potential for QCA to be more widely used in evaluative research, to identify the conditions under which interventions across contexts are implemented or not, and the configurations of conditions associated with effect or lack of evidence of effect. However, QCA will not be appropriate for all evaluations, and cannot be the only answer to addressing complex causality. For explanatory questions, the approach is most appropriate when there are enough comparable cases with and without the outcome of interest, and where the researchers have detailed understanding of those cases and conditions. To improve the credibility of findings from QCA for public health evidence users, we recommend that studies are reported with the usual attention to methodological transparency and data availability, with key details that allow readers to judge the credibility of causal configurations reported. If the use of QCA continues to expand, it may be useful to develop more comprehensive consensus guidelines for conduct and reporting.

Availability of data and materials

Full search strategies and extraction forms are available by request from the first author.

Abbreviations

COMPASSS: Comparative Methods for Systematic Cross-Case Analysis

csQCA: crisp set QCA

fsQCA: fuzzy set QCA

mvQCA: multi-value QCA

MRC: Medical Research Council

QCA: Qualitative Comparative Analysis

RCT: randomised control trial

PA: Physical Activity

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406. https://doi.org/10.1177/1356389015605205 .

Craig P, Katikireddi SV, Leyland A, Popham F. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38(1):39–56. https://doi.org/10.1146/annurev-publhealth-031816-044327 .

Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3. https://doi.org/10.1136/bmj.39569.510521.AD .

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350(mar19 6):h1258. https://doi.org/10.1136/bmj.h1258 .

Pattyn V, Álamos-Concha P, Cambré B, Rihoux B, Schalembier B. Policy effectiveness through Configurational and mechanistic lenses: lessons for concept development. J Comp Policy Anal Res Pract. 2020;0:1–18.

Byrne D. Evaluating complex social interventions in a complex world. Evaluation. 2013;19(3):217–28. https://doi.org/10.1177/1356389013495617 .

Gerrits L, Pagliarin S. Social and causal complexity in qualitative comparative analysis (QCA): strategies to account for emergence. Int J Soc Res Methodol 2020;0:1–14, doi: https://doi.org/10.1080/13645579.2020.1799636 .

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32. https://doi.org/10.1080/09581596.2017.1282603 .

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4. https://doi.org/10.1016/S0140-6736(17)31267-9 .

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95. https://doi.org/10.1186/s12916-018-1089-4 .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E and White M, on behalf of the Canadian Institutes of Health Research (CIHR)–National Institute for Health Research (NIHR) Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301. https://doi.org/10.1186/s12916-020-01777-6 .

Ragin. The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press; 1987.

Ragin C. Redesigning social inquiry: fuzzy sets and beyond. Chicago: The University of Chicago Press; 2008. https://doi.org/10.7208/chicago/9780226702797.001.0001 .

Befani B, Ledermann S, Sager F. Realistic evaluation and QCA: conceptual parallels and an empirical application. Evaluation. 2007;13(2):171–92. https://doi.org/10.1177/1356389007075222 .

Kane H, Lewis MA, Williams PA, Kahwati LC. Using qualitative comparative analysis to understand and quantify translation and implementation. Transl Behav Med. 2014;4(2):201–8. https://doi.org/10.1007/s13142-014-0251-6 .

Cronqvist L, Berg-Schlosser D. Chapter 4: Multi-Value QCA (mvQCA). In: Rihoux B, Ragin C, editors. Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Thousand Oaks: SAGE Publications, Inc.; 2009. p. 69–86. https://doi.org/10.4135/9781452226569 .

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225–39.

Legewie N. An introduction to applied data analysis with qualitative comparative analysis (QCA). Forum Qual Soc Res. 2013;14.  https://doi.org/10.17169/fqs-14.3.1961 .

Varone F, Rihoux B, Marx A. A new method for policy evaluation? In: Rihoux B, Grimm H, editors. Innovative comparative methods for policy analysis: beyond the quantitative-qualitative divide. Boston: Springer US; 2006. p. 213–36. https://doi.org/10.1007/0-387-28829-5_10 .

Gerrits L, Verweij S. The evaluation of complex infrastructure projects: a guide to qualitative comparative analysis. Cheltenham: Edward Elgar Pub; 2018. https://doi.org/10.4337/9781783478422 .

Greckhamer T, Misangyi VF, Fiss PC. The two QCAs: from a small-N to a large-N set theoretic approach. In: Configurational Theory and Methods in Organizational Research. Emerald Group Publishing Ltd.; 2013. p. 49–75. https://pennstate.pure.elsevier.com/en/publications/the-two-qcas-from-a-small-n-to-a-large-n-set-theoretic-approach . Accessed 16 Apr 2021.

Rihoux B, Ragin CC. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. SAGE; 2009, doi: https://doi.org/10.4135/9781452226569 .

Marx A. Crisp-set qualitative comparative analysis (csQCA) and model specification: benchmarks for future csQCA applications. Int J Mult Res Approaches. 2010;4(2):138–58. https://doi.org/10.5172/mra.2010.4.2.138 .

Marx A, Dusa A. Crisp-set qualitative comparative analysis (csQCA), contradictions and consistency benchmarks for model specification. Methodol Innov Online. 2011;6(2):103–48. https://doi.org/10.4256/mio.2010.0037 .

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252. https://doi.org/10.1186/s13643-019-1159-5 .

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349(1):g7647. https://doi.org/10.1136/bmj.g7647 .

EPPI-Reviewer 4.0: Software for research synthesis. UK: University College London; 2010.

Harting J, Peters D, Grêaux K, van Assema P, Verweij S, Stronks K, et al. Implementing multiple intervention strategies in Dutch public health-related policy networks. Health Promot Int. 2019;34(2):193–203. https://doi.org/10.1093/heapro/dax067 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1):45. https://doi.org/10.1186/1471-2288-8-45 .

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC methods Programme. 2006.

Wagemann C, Schneider CQ. Qualitative comparative analysis (QCA) and fuzzy-sets: agenda for a research approach and a data analysis technique. Comp Sociol. 2010;9:376–96.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. https://doi.org/10.1017/CBO9781139004244 .

Blackman T, Dunstan K. Qualitative comparative analysis and health inequalities: investigating reasons for differential Progress with narrowing local gaps in mortality. J Soc Policy. 2010;39(3):359–73. https://doi.org/10.1017/S0047279409990675 .

Blackman T, Wistow J, Byrne D. A Qualitative Comparative Analysis of factors associated with trends in narrowing health inequalities in England. Soc Sci Med 1982. 2011;72:1965–74.

Blackman T, Wistow J, Byrne D. Using qualitative comparative analysis to understand complex policy problems. Evaluation. 2013;19(2):126–40. https://doi.org/10.1177/1356389013484203 .

Glatman-Freedman A, Cohen M-L, Nichols KA, Porges RF, Saludes IR, Steffens K, et al. Factors affecting the introduction of new vaccines to poor nations: a comparative study of the haemophilus influenzae type B and hepatitis B vaccines. PLoS One. 2010;5(11):e13802. https://doi.org/10.1371/journal.pone.0013802 .

Ford EW, Duncan WJ, Ginter PM. Health departments’ implementation of public health’s core functions: an assessment of health impacts. Public Health. 2005;119(1):11–21. https://doi.org/10.1016/j.puhe.2004.03.002 .

Lucidarme S, Cardon G, Willem A. A comparative study of health promotion networks: configurations of determinants for network effectiveness. Public Manag Rev. 2016;18(8):1163–217. https://doi.org/10.1080/14719037.2015.1088567 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: re-analysis of a systematic review to identify pathways to effectiveness. Health Expect Int J Public Particip Health Care Health Policy. 2018;21:574–84.

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3(1):67. https://doi.org/10.1186/2046-4053-3-67 .

Fernald DH, Simpson MJ, Nease DE, Hahn DL, Hoffmann AE, Michaels LC, et al. Implementing community-created self-management support tools in primary care practices: multimethod analysis from the INSTTEPP study. J Patient-Centered Res Rev. 2018;5(4):267–75. https://doi.org/10.17294/2330-0698.1634 .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, et al. Best practices in the veterans health Administration’s MOVE! Weight management program. Am J Prev Med. 2011;41(5):457–64. https://doi.org/10.1016/j.amepre.2011.06.047 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) to evaluate a public health policy initiative in the north east of England. Polic Soc. 2013;32(4):289–301. https://doi.org/10.1016/j.polsoc.2013.10.002 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) in public health: a case study of a health improvement service for long-term incapacity benefit recipients. J Public Health. 2014;36(1):126–33. https://doi.org/10.1093/pubmed/fdt047 .

Brunton G, O’Mara-Eves A, Thomas J. The “active ingredients” for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis. J Adv Nurs. 2014;70(12):2847–60. https://doi.org/10.1111/jan.12441 .

McGowan VJ, Wistow J, Lewis SJ, Popay J, Bambra C. Pathways to mental health improvement in a community-led area-based empowerment initiative: evidence from the big local ‘communities in control’ study. England J Public Health. 2019;41(4):850–7. https://doi.org/10.1093/pubmed/fdy192 .

Parrott JS, Henry B, Thompson KL, Ziegler J, Handu D. Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management. J Acad Nutr Diet. 2018;118:1526–1542.e3.

Kien C, Grillich L, Nussbaumer-Streit B, Schoberberger R. Pathways leading to success and non-success: a process evaluation of a cluster randomized physical activity health promotion program applying fuzzy-set qualitative comparative analysis. BMC Public Health. 2018;18(1):1386. https://doi.org/10.1186/s12889-018-6284-x .

Lubold AM. The effect of family policies and public health initiatives on breastfeeding initiation among 18 high-income countries: a qualitative comparative analysis research design. Int Breastfeed J. 2017;12(1):34. https://doi.org/10.1186/s13006-017-0122-0 .

Bianchi F, Garnett E, Dorsel C, Aveyard P, Jebb SA. Restructuring physical micro-environments to reduce the demand for meat: a systematic review and qualitative comparative analysis. Lancet Planet Health. 2018;2(9):e384–97. https://doi.org/10.1016/S2542-5196(18)30188-8 .

Bianchi F, Dorsel C, Garnett E, Aveyard P, Jebb SA. Interventions targeting conscious determinants of human behaviour to reduce the demand for meat: a systematic review with qualitative comparative analysis. Int J Behav Nutr Phys Act. 2018;15(1):102. https://doi.org/10.1186/s12966-018-0729-6 .

Hartmann-Boyce J, Bianchi F, Piernas C, Payne Riches S, Frie K, Nourse R, et al. Grocery store interventions to change food purchasing behaviors: a systematic review of randomized controlled trials. Am J Clin Nutr. 2018;107(6):1004–16. https://doi.org/10.1093/ajcn/nqy045 .

Burchett HED, Sutcliffe K, Melendez-Torres GJ, Rees R, Thomas J. Lifestyle weight management programmes for children: a systematic review using qualitative comparative analysis to identify critical pathways to effectiveness. Prev Med. 2018;106:1–12. https://doi.org/10.1016/j.ypmed.2017.08.025 .

Chiappone A. Technical assistance and changes in nutrition and physical activity practices in the National Early Care and education learning Collaboratives project, 2015–2016. Prev Chronic Dis. 2018;15. https://doi.org/10.5888/pcd15.170239 .

Kane H, Hinnant L, Day K, Council M, Tzeng J, Soler R, et al. Pathways to program success: a qualitative comparative analysis (QCA) of communities putting prevention to work case study programs. J Public Health Manag Pract JPHMP. 2017;23(2):104–11. https://doi.org/10.1097/PHH.0000000000000449 .

Roberts MC, Murphy T, Moss JL, Wheldon CW, Psek W. A qualitative comparative analysis of combined state health policies related to human papillomavirus vaccine uptake in the United States. Am J Public Health. 2018;108(4):493–9. https://doi.org/10.2105/AJPH.2017.304263 .

Breuer E, Subba P, Luitel N, Jordans M, Silva MD, Marchal B, et al. Using qualitative comparative analysis and theory of change to unravel the effects of a mental health intervention on service utilisation in Nepal. BMJ Glob Health. 2018;3(6):e001023. https://doi.org/10.1136/bmjgh-2018-001023 .

Rihoux B, Álamos-Concha P, Bol D, Marx A, Rezsöhazy I. From niche to mainstream method? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011. Polit Res Q. 2013;66:175–84.

Rihoux B, Rezsöhazy I, Bol D. Qualitative comparative analysis (QCA) in public policy analysis: an extensive review. Ger Policy Stud. 2011;7:9–82.

Plancikova D, Duric P, O’May F. High-income countries remain overrepresented in highly ranked public health journals: a descriptive analysis of research settings and authorship affiliations. Crit Public Health 2020;0:1–7, DOI: https://doi.org/10.1080/09581596.2020.1722313 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348(mar07 3):g1687. https://doi.org/10.1136/bmj.g1687 .

Fiss PC, Sharapov D, Cronqvist L. Opposites attract? Opportunities and challenges for integrating large-N QCA and econometric analysis. Polit Res Q. 2013;66:191–8.

Blackman T. Can smoking cessation services be better targeted to tackle health inequalities? Evidence from a cross-sectional study. Health Educ J. 2008;67(2):91–101. https://doi.org/10.1177/0017896908089388 .

Haynes P, Banks L, Hill M. Social networks amongst older people in OECD countries: a qualitative comparative analysis. J Int Comp Soc Policy. 2013;29(1):15–27. https://doi.org/10.1080/21699763.2013.802988 .

Rioja EC, Valero-Moreno S, Giménez-Espert M del C, Prado-Gascó V. The relations of quality of life in patients with lupus erythematosus: regression models versus qualitative comparative analysis. J Adv Nurs. 2019;75(7):1484–92. https://doi.org/10.1111/jan.13957 .

Dy SM, Garg P, Nyberg D, Dawson PB, Pronovost PJ, Morlock L, et al. Critical pathway effectiveness: assessing the impact of patient, hospital care, and pathway characteristics using qualitative comparative analysis. Health Serv Res. 2005;40(2):499–516. https://doi.org/10.1111/j.1475-6773.2005.0r370.x .

Melinder KA, Andersson R. The impact of structural factors on the injury rate in different European countries. Eur J Pub Health. 2001;11(3):301–8. https://doi.org/10.1093/eurpub/11.3.301 .

Saltkjel T, Holm Ingelsrud M, Dahl E, Halvorsen K. A fuzzy set approach to economic crisis, austerity and public health. Part II: How are configurations of crisis and austerity related to changes in population health across Europe? Scand J Public Health. 2017;45(18_suppl):48–55.

Baumgartner M, Thiem A. Often trusted but never (properly) tested: evaluating qualitative comparative analysis. Sociol Methods Res. 2020;49(2):279–311. https://doi.org/10.1177/0049124117701487 .

Tanner S. QCA is of questionable value for policy research. Polic Soc. 2014;33(3):287–98. https://doi.org/10.1016/j.polsoc.2014.08.003 .

Mackenbach JP. Tackling inequalities in health: the need for building a systematic evidence base. J Epidemiol Community Health. 2003;57(3):162. https://doi.org/10.1136/jech.57.3.162 .

Stern E, Stame N, Mayne J, Forss K, Davies R, Befani B. Broadening the range of designs and methods for impact evaluations. Technical report. London: DfiD; 2012.

Pattyn V. Towards appropriate impact evaluation methods. Eur J Dev Res. 2019;31(2):174–9. https://doi.org/10.1057/s41287-019-00202-w .

Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003;57(7):527–9. https://doi.org/10.1136/jech.57.7.527 .

Acknowledgements

The authors would like to thank and acknowledge the support of Sara Shaw, PI of MR/S014632/1 and the rest of the Triple C project team, the experts who were consulted on the final list of included studies, and the reviewers who provided helpful feedback on the original submission.

This study was funded by MRC: MR/S014632/1 ‘Case study, context and complex interventions (Triple C): development of guidance and publication standards to support case study research’. The funder played no part in the conduct or reporting of the study. JG is supported by a Wellcome Trust Centre grant 203109/Z/16/Z.

Author information

Authors and Affiliations

Institute for Culture and Society, Western Sydney University, Sydney, Australia

Benjamin Hanckel

Department of Public Health, Environments and Society, LSHTM, London, UK

Mark Petticrew

UCL Institute of Education, University College London, London, UK

James Thomas

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

Contributions

BH - research design, data acquisition, data extraction and coding, data interpretation, paper drafting; JT – research design, data interpretation, contributing to paper; MP – funding acquisition, research design, data interpretation, contributing to paper; JG – funding acquisition, research design, data extraction and coding, data interpretation, paper drafting. All authors approved the final version.

Corresponding author

Correspondence to Judith Green .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

All authors declare they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Example search strategy.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hanckel, B., Petticrew, M., Thomas, J. et al. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health 21 , 877 (2021). https://doi.org/10.1186/s12889-021-10926-2

Received : 03 February 2021

Accepted : 22 April 2021

Published : 07 May 2021

DOI : https://doi.org/10.1186/s12889-021-10926-2

  • Public health
  • Intervention
  • Systematic review

BMC Public Health

ISSN: 1471-2458

is comparative analysis a methodology

  • Utility Menu

University Logo

GA4 Tracking Code

Gen Ed Writes: Writing Across the Disciplines at Harvard College

Comparative Analysis

What It Is and Why It's Useful

Comparative analysis asks writers to make an argument about the relationship between two or more texts. Beyond that, there's a lot of variation, but three overarching kinds of comparative analysis stand out:

  • Coordinate (A ↔ B): In this kind of analysis, two (or more) texts are read against each other in terms of a shared element, e.g., a memoir and a novel, both by Jesmyn Ward; two sets of data for the same experiment; a few op-ed responses to the same event; two YA books written in Chicago in the 2000s; a film adaptation of a play; etc.
  • Subordinate (A → B) or (B → A): Using a theoretical text as a "lens" to explain a case study or work of art (e.g., how Anthony Jack's The Privileged Poor can help explain divergent experiences among students at elite four-year private colleges who are coming from similar socio-economic backgrounds), or using a work of art or case study as a "test" of a theory's usefulness or limitations (e.g., using coverage of recent incidents of gun violence or legislation in the U.S. to confirm or question the currency of Carol Anderson's The Second).
  • Hybrid [A → (B ↔ C)] or [(B ↔ C) → A], i.e., using coordinate and subordinate analysis together. For example, using Jack to compare or contrast the experiences of students at elite four-year institutions with students at state universities and/or community colleges; or looking at gun culture in other countries and/or other timeframes to contextualize or generalize Anderson's main points about the role of the Second Amendment in U.S. history.

"In the wild," these three kinds of comparative analysis represent increasingly complex—and scholarly—modes of comparison. Students can of course compare two poems in terms of imagery or two data sets in terms of methods, but in each case the analysis will eventually be richer if the students have had a chance to encounter other people's ideas about how imagery or methods work. At that point, we're getting into a hybrid kind of reading (or even into research essays), especially if we start introducing different approaches to imagery or methods that are themselves being compared along with a couple (or few) poems or data sets.

Why It's Useful

In the context of a particular course, each kind of comparative analysis has its place and can be a useful step up from single-source analysis. Intellectually, comparative analysis helps overcome the "n of 1" problem that can face single-source analysis. That is, a writer drawing broad conclusions about the influence of the Iranian New Wave based on one film is relying entirely—and almost certainly too much—on that film to support those findings. In the context of even just one more film, though, the analysis suddenly becomes more likely to achieve one of the best features of any comparative approach: both films will be more richly experienced than they would have been in isolation, and the themes or questions in terms of which they're being explored (here, the general question of the influence of the Iranian New Wave) will arrive at conclusions that are less at risk of oversimplification.

For scholars working in comparative fields or through comparative approaches, these features of comparative analysis animate their work. To borrow from a stock example in Western epistemology, our concept of "green" isn't based on a single encounter with something we intuit or are told is "green." Not at all. Our concept of "green" is derived from a complex set of experiences of what others say is green or what's labeled green or what seems to be something that's neither blue nor yellow but kind of both, etc. Comparative analysis essays offer us the chance to engage with that process—even if only enough to help us see where a more in-depth exploration with a higher and/or more diverse "n" might lead—and in that sense, from the standpoint of the subject matter students are exploring through writing as well as the complexity of the genre of writing they're using to explore it, comparative analysis forms a bridge of sorts between single-source analysis and research essays.

Typical learning objectives for single-source essays: formulate analytical questions and an arguable thesis, establish stakes of an argument, summarize sources accurately, choose evidence effectively, analyze evidence effectively, define key terms, organize argument logically, acknowledge and respond to counterargument, cite sources properly, and present ideas in clear prose.

Common types of comparative analysis essays and related types: two works in the same genre, two works from the same period (but in different places or in different cultures), a work adapted into a different genre or medium, two theories treating the same topic; a theory and a case study or other object, etc.

How to Teach It: Framing + Practice

Framing multi-source writing assignments (comparative analysis, research essays, multi-modal projects) is likely to overlap a great deal with "Why It's Useful" (see above), because the range of reasons why we might use these kinds of writing in academic or non-academic settings is itself the reason why they so often appear later in courses. In many courses, they're the best vehicles for exploring the complex questions that arise once we've been introduced to the course's main themes, core content, leading protagonists, and central debates.

For comparative analysis in particular, it's helpful to frame the assignment's process and how it will help students successfully navigate the challenges and pitfalls presented by the genre. Ideally, this will mean students have time to identify what each text seems to be doing, take note of apparent points of connection between different texts, and start to imagine how those points of connection (or the absence thereof)

  • complicates or upends their own expectations or assumptions about the texts
  • complicates or refutes the expectations or assumptions about the texts presented by a scholar
  • confirms and/or nuances expectations and assumptions they themselves hold or scholars have presented
  • presents entirely unforeseen ways of understanding the texts

—and all with implications for the texts themselves or for the axes along which the comparative analysis took place. If students know that this is where their ideas will be heading, they'll be ready to develop those ideas and engage with the challenges that comparative analysis presents in terms of structure (See "Tips" and "Common Pitfalls" below for more on these elements of framing).

Like single-source analyses, comparative essays have several moving parts, and giving students practice here means adapting the sample sequence laid out at the " Formative Writing Assignments " page. Three areas that have already been mentioned above are worth noting:

  • Gathering evidence : Depending on what your assignment is asking students to compare (or in terms of what), students will benefit greatly from structured opportunities to create inventories or data sets of the motifs, examples, trajectories, etc., shared (or not shared) by the texts they'll be comparing. See the sample exercises below for a basic example of what this might look like.
  • Why It Matters: Moving beyond "x is like y but also different" or even "x is more like y than we might think at first" is what moves an essay from being "compare/contrast" to being a comparative analysis. It's also a move that can be hard to make and that will often evolve over the course of an assignment. A great way to get feedback from students about where they are on this front? Ask them to start considering early on why their argument "matters" to different kinds of imagined audiences (while they're just gathering evidence), and again as they develop their thesis, and again as they're drafting their essays. (Cover letters, for example, are a great place to ask writers to imagine how a reader might be affected by reading their argument.)
  • Structure: Having two texts on stage at the same time can suddenly feel a lot more complicated for any writer who's used to having just one at a time. Giving students a sense of what the most common patterns (AAA / BBB, ABABAB, etc.) are likely to be can help them imagine, even if provisionally, how their argument might unfold over a series of pages. See "Tips" and "Common Pitfalls" below for more information on this front.

Tips

  • Try to keep students from thinking of a proposed thesis as a commitment. Instead, help them see it as more of a hypothesis that has emerged out of readings, discussion, and analytical questions, and that they'll now test through an experiment, namely, writing their essay. When students see writing as part of the process of inquiry—rather than just the result—and when that process is committed to acknowledging and adapting itself to evidence, it makes writing assignments more scientific, more ethical, and more authentic.
  • Have students create an inventory of touch points between the two texts early in the process.
  • Ask students to make the case—early on and at points throughout the process—for the significance of the claim they're making about the relationship between the texts they're comparing.

Common Pitfalls

  • For coordinate kinds of comparative analysis, a common pitfall is tied to thesis and evidence. Basically, it's a thesis that tells the reader that there are "similarities and differences" between two texts, without telling the reader why it matters that these two texts have or don't have these particular features in common. This kind of thesis is stuck at the level of description or positivism, and it's not uncommon when a writer is grappling with the complexity that can in fact accompany the "taking inventory" stage of comparative analysis. The solution is to make the "taking inventory" stage part of the process of the assignment. When this stage comes before students have formulated a thesis, that formulation is then able to emerge out of a comparative data set, rather than the data set emerging in terms of their thesis (which can lead to confirmation bias, or frequency illusion, or—just for the sake of streamlining the process of gathering evidence—cherry picking).
  • For subordinate kinds of comparative analysis, a common pitfall is tied to how much weight is given to each source. Having students apply a theory (in a "lens" essay) or weigh the pros and cons of a theory against case studies (in a "test a theory" essay) can be a great way to help them explore the assumptions, implications, and real-world usefulness of theoretical approaches. The pitfall of these approaches is that they can quickly lead to the same biases we saw above. Making sure that students know they should engage with counterevidence and counterargument, and that "lens" / "test a theory" approaches often balance each other out in any real-world application of theory, is a good way to get out in front of this pitfall.
  • For any kind of comparative analysis, a common pitfall is structure. Every comparative analysis asks writers to move back and forth between texts, and that can pose a number of challenges, including what pattern the back and forth should follow and how to use transitions and other signposting to make sure readers can follow the overarching argument as the back and forth is taking place. Here's some advice from an experienced writing instructor to students about how to think about these considerations:

a quick note on STRUCTURE

     Most of us have encountered the question of whether to adopt what we might term the “A→A→A→B→B→B” structure or the “A→B→A→B→A→B” structure. Do we make all of our points about text A before moving on to text B? Or do we go back and forth between A and B as the essay proceeds? As always, the answers to our questions about structure depend on our goals in the essay as a whole. In a “similarities in spite of differences” essay, for instance, readers will need to encounter the differences between A and B before we offer them the similarities (A_d → B_d → A_s → B_s). If, rather than subordinating differences to similarities, you are subordinating text A to text B (using A as a point of comparison that reveals B’s originality, say), you may be well served by the “A→A→A→B→B→B” structure.

     Ultimately, you need to ask yourself how many “A→B” moves you have in you.  Is each one identical?  If so, you may wish to make the transition from A to B only once (“A→A→A→B→B→B”), because if each “A→B” move is identical, the “A→B→A→B→A→B” structure will appear to involve nothing more than directionless oscillation and repetition.  If each is increasingly complex, however—if each AB pair yields a new and progressively more complex idea about your subject—you may be well served by the “A→B→A→B→A→B” structure, because in this case it will be visible to readers as a progressively developing argument.

Advice on Timing

As we discussed in "Advice on Timing" on the page on single-source analysis, that timeline itself roughly follows the "Sample Sequence of Formative Assignments for a 'Typical' Essay" outlined under "Formative Writing Assignments," and it spans about 5–6 steps or 2–4 weeks.

Comparative analysis assignments have a lot of the same DNA as single-source essays, but they potentially bring more reading into play and ask students to engage in more complicated acts of analysis and synthesis during the drafting stages. With that in mind, closer to 4 weeks is probably a good baseline for many comparative analysis assignments. For sections that meet once per week, the timeline will probably need to expand a little past the 4-week mark, or some of the steps will need to be combined or done asynchronously.

What It Can Build Up To

Comparative analyses can build up to other kinds of writing in a number of ways. For example:

  • They can build toward other kinds of comparative analysis, e.g., students can be asked to choose an additional source to complicate their conclusions from a previous analysis, or they can be asked to revisit an analysis using a different axis of comparison, such as race instead of class. (These approaches are akin to moving from a coordinate or subordinate analysis to more of a hybrid approach.)
  • They can scaffold up to research essays, which in many instances are an extension of a "hybrid comparative analysis."
  • Like single-source analysis, in a course where students will take a "deep dive" into a source or topic for their capstone, they can allow students to "try on" a theoretical approach or genre or time period to see if it's indeed something they want to research more fully.

Open access · Published: 14 May 2024

Comparative analysis of GoPro and digital cameras in head and neck flap harvesting surgery video documentation: an innovative and efficient method for surgical education

Xin-Yue Huang, Zhe Shao, Nian-Nian Zhong, Yuan-Hao Wen, Tian-Fu Wu, Bing Liu, Si-Rui Ma & Lin-Lin Bu

BMC Medical Education, volume 24, Article number: 531 (2024)


An urgent need exists for innovative surgical video recording techniques in head and neck reconstructive surgeries, particularly in low- and middle-income countries where a surge in surgical procedures necessitates more skilled surgeons. This demand, significantly intensified by the COVID-19 pandemic, highlights the critical role of surgical videos in medical education. We aimed to identify a straightforward, high-quality approach to recording surgical videos at a low economic cost in the operating room, thereby contributing to enhanced patient care.

The recordings comprised six head and neck flap harvesting surgeries, captured using a GoPro and two types of digital cameras. Data were extracted from the recorded videos and their subsequent editing process. Some of the participants were subsequently interviewed.

Both cameras, set at 4 K resolution and 30 frames per second (fps), produced satisfactory results. The GoPro, worn on the surgeon’s head, moves in sync with the surgeon, offering a unique first-person perspective of the operation without needing an additional assistant. Though cost-effective and efficient, it lacks a zoom feature essential for close-up views. In contrast, while requiring occasional repositioning, the digital camera captures finer anatomical details due to its superior image quality and zoom capabilities.

Merging these two systems could significantly advance the field of surgical video recording. This innovation holds promise for enhancing technical communication and bolstering video-based medical education, potentially addressing the global shortage of specialized surgeons.

1) The GoPro camera offers stable vision and does not require an assistant for operation.

2) A digital camera provides images of higher quality and better anatomical detail.

3) Combining the two could result in a highly efficient and innovative method.

4) Trainees provided positive feedback on the educational impact of the videos.


Introduction

Innovation often occurs when knowledge from different disciplines converges and new ideas emerge or merge to foster progress [ 1 ]. Technological advancements have introduced innovations and tools that have entered head and neck surgical practice, ranging from the operating microscope and robotic, imaging-based navigation to computer-assisted design and perfusion monitoring technologies, providing precision care and better patient prognoses [ 1 , 2 , 3 , 4 ]. The combination of video recording and streaming with head and neck reconstructive surgery enables recording the surgeon’s view, allowing others to see exactly what the surgeon observes and does. Video recording technology can also be beneficial in various areas, such as technical communication, research, case data backup, and clinical education. As the saying goes, “A picture is worth a thousand words,” but video holds more convincing power than pictures alone. In the field of head and neck surgery, medical students and junior surgical trainees often do not acquire the full range of surgical skills during their operating room clerkships [ 5 ]. Simultaneously, the global shortage and uneven distribution of the surgical workforce are gaining recognition, with low- and middle-income countries (LMICs) in dire need of skilled surgeons [ 6 ]. There is significant demand for surgical videos in surgical education and surgeon training, especially as COVID-19 ravaged the world, affecting many residents’ clinical practice schedules to varying degrees [ 7 , 8 ]. Consequently, teaching surgical skills has become more challenging.

Digital video capture during surgical procedures is an essential technology in modern-day surgical education [ 9 , 10 , 11 , 12 ]. The advent of fifth-generation mobile technology (5G) has facilitated the distribution of video formats, making it as effortless as sharing text and picture formats in the past, no longer constrained by mobile devices or network bandwidth. Recording surgeries in video format is employed across various domains, such as open surgery, microsurgery, laryngoscopy, and laparoscopy, yielding excellent outcomes regarding video quality and educational purposes [ 13 , 14 , 15 , 16 , 17 ]. Its benefits include 1) assisting medical students in their training, 2) enhancing comprehension of the surgical procedure and the patient’s clinical condition, 3) visualizing crucial routine manual operations, such as flap harvesting, 4) aiding in the preservation of legal evidence, and 5) providing a more precise anatomical description of body regions [ 18 ]. These critical aspects are challenging to convey effectively through descriptions, even with the support of photographs and other media.

The high construction costs associated with a dedicated medical recording system in the operating room can be prohibitive for some hospitals and medical institutions, in LMICs and developed countries alike. Fortunately, due to the rapid advancement of technological innovation in recent years, personal digital video technologies have become more affordable and offer good image quality. Previous studies have also demonstrated that these technologies, when applied to surgical video recording, can yield positive results [ 19 , 20 ]. However, few studies have compared different types of camera systems for surgical recordings.

Our study compared the GoPro (Hero 8 Black), a low-cost commercially available action camera, with two higher-priced commercial digital cameras (Canon EOS R5 and EOS 850D), and we preliminarily explored other types of surgical video recording (Figure S1). Because flap harvesting is a crucial operation in head and neck reconstructive surgery with significant teaching value, our research focused on comparing the video recording outcomes of these two camera systems during flap harvesting procedures. This study aimed to identify a straightforward, high-quality approach to recording surgical videos at a low economic cost in the operating room, thereby contributing to enhanced patient care.

Materials and methods

The recordings were taken in the Department of Oral & Maxillofacial—Head Neck Oncology at the Hospital of Stomatology, Wuhan University, from November to December 2021. A total of six operations were prospectively recorded. All patients signed informed consent forms before surgery, and the recordings did not involve any parts of the patients’ bodies outside the operative areas.

GoPro is a brand of action camera that can be attached to the body with simple accessories, enabling hands-free recording and first-person perspectives, especially in extreme sports. The GoPro HERO 8 Black (GoPro Inc, San Mateo, CA), used in this study, is currently a widely recognized product. This camera is exceptionally compact and portable, measuring 62 × 33.7 × 44.6 mm and weighing 126 g. The GoPro 8 supports stabilized 4 K video recording at 30 or 60 frames per second (fps) and slow-motion 1080P video at 240 fps. It is equipped with the HyperSmooth system, which stabilizes the video image without the need for external stabilizers, even when the surgeon wearing the device is moving. It can also connect to a smart device via a wireless network during filming to monitor the shot or even broadcast live using the GoPro Quik app. The fixed focus setting on this device maintains consistent focus, regardless of whether the subject gets closer or moves further away within a certain distance.

Generally, the term “digital camera” may also refer to the camera systems integrated into smartphones (such as an iPhone). However, surgical videos require precise documentation of operations on delicate anatomical structures, and our previous pilot study found that the images captured by smartphones (iPhone X) did not meet the requirements for teaching or technical communication. Therefore, the “digital camera” referenced in this article pertains specifically to professional digital cameras. We utilized two relatively recent models on the market, the EOS R5, Canon’s flagship product, and the EOS 850D, its entry-level counterpart.

A total of six operations were prospectively studied, involving three surgeons, seven circulating nurses, and ten surgical residents.

The surgeon wore the GoPro 8 camera attached to a unique headband (Fig.  1 ), with no additional loupes or head-mounted lighting systems to physically interfere with the camera. An iPad, connected to the GoPro and equipped with the GoPro Quik app, served as a viewfinder and remote control for recording the six operations.

Figure 1. The surgeon with the head-mounted camera in place to record the surgery

The digital cameras were mounted on an external tripod for recording and were set to manual mode with manual focus. During the first three recordings, we observed that surgical team members occasionally obscured the surgical area. Therefore, the tripod setup was modified in the subsequent three recordings, drawing on previous studies' methods to attain a better field of view (FOV) (Fig. 2) [ 21 ]. The sixth surgery was recorded using an EOS 850D, while the others were documented with an EOS R5.

Figure 2. Post-assembly view of the modified tripod

Characterization of videos

Data for this study were extracted from the recorded videos and their subsequent editing process. Selected variables were entered into a Microsoft Excel database. The evaluated variables included: 1) the total recording time for each device during all surgeries; 2) the duration of all surgical procedures; 3) the duration of unavailable video (including obscured surgical areas, inaccurate focus, and overexposure) and their percentage of the entire surgical procedure; and 4) the file size of the original videos.
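The authors do not publish their spreadsheet formulas, but the third variable (the share of each procedure covered by unusable footage) can be sketched as follows. The function name and the segment times are hypothetical, purely for illustration:

```python
# Illustrative sketch, not the authors' actual spreadsheet: computing the
# percentage of unavailable footage from logged time ranges.

def unavailable_percentage(procedure_minutes, unavailable_segments):
    """Return the share of the procedure covered by unusable footage.

    procedure_minutes: total duration of the surgical procedure, in minutes.
    unavailable_segments: list of (start, end) minute pairs where the video
    was unusable (obscured area, inaccurate focus, or overexposure).
    """
    unusable = sum(end - start for start, end in unavailable_segments)
    return round(100 * unusable / procedure_minutes, 1)

# Hypothetical 90-minute procedure with three unusable stretches:
segments = [(12.0, 18.5), (40.0, 42.0), (70.0, 73.5)]
print(unavailable_percentage(90, segments))  # → 13.3
```

Percentages like the 11.3% and 37.55% figures reported later are simply this ratio averaged over the recordings in each condition.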

Educational impact analysis

To assess the quality of the recorded surgical videos and their applicability in teaching, we designed a questionnaire to gather opinions on the edited videos from three surgeons and nine resident physicians (trainees). The surgeons' questionnaire primarily assessed whether the videos clearly conveyed the surgeon's operational concepts and specific details. Trainees were asked whether they could clearly view the procedures and derive educational benefit from them.

To further analyze the educational impact of the surgical videos, we established an Expert Review Panel (ERP) comprising five experts with over ten years of experience in clinical surgery and medical student education. We also created an assessment table for evaluating surgical education videos (Table 1). The ERP reviewed six edited surgical videos and evaluated their instructional quality, clarity, stability, and effectiveness in conveying surgical techniques. Subsequently, the table was completed to categorize the overall quality of each video.

Results

It took approximately 10 min to prepare the video equipment before surgery, including setting up the tripod, determining the recording settings, and adjusting the camera's position. All surgeons reported that the head-mounted recording device did not interfere with the operation. The operations involved different types of flap harvesting, including ilium flap harvesting ( n  = 3), fibula flap harvesting ( n  = 1), anterolateral thigh flap harvesting ( n  = 1), and forearm flap harvesting ( n  = 1). The average duration of the operations was one and a half hours. Six surgical procedures were recorded simultaneously using both the GoPro and digital cameras. The characteristics of each surgical video are shown in Table 2. The technical details of the two types of cameras used in the study (GoPro HERO 8 Black, EOS R5, and EOS 850D) are summarized in Table 3. A sample video can be found in the Supplementary Video.

Video settings

While filming surgeries, we consulted previous studies for camera settings. Graves et al. conducted research using a GoPro camera in the operating room [ 20 ]. They selected an earlier type—the GoPro HERO 3 + Black—and concluded that with a narrow FOV, automatic white balance, 1080P resolution, and 48 fps, one could achieve high-quality, low-cost video recordings of surgical procedures. In our study, we initially tried a 1080P resolution with a narrow field and obtained relatively good results (Fig. 3 A). However, a 1080P resolution HD video consists of two million pixels (1920 × 1080), whereas a 4 K (Ultra HD) video comprises over eight million pixels (3840 × 2160). Thus, 4 K video produces a sharper image with four times the resolution of 1080P. Given that the GoPro does not support narrow-field shooting at 4 K resolution, we discovered that setting it to a “linear” FOV with 4 K resolution provided more precise and crisper imagery (Fig. 3 B). In most of our recordings, we used 4 K resolution and a frame rate of 30 fps for both the GoPro and digital cameras.
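The resolution comparison above is simple arithmetic and can be checked directly:

```python
# Verifying the pixel counts quoted above for 1080P (Full HD) vs 4K (Ultra HD).
hd = 1920 * 1080   # 2,073,600 pixels, about two million
uhd = 3840 * 2160  # 8,294,400 pixels, over eight million
print(uhd // hd)   # → 4: a 4K frame carries four times the pixels of a 1080P frame
```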

Figure 3. Image quality comparison of video screenshots at 500% magnification obtained from (A) the 1080P narrow field and (B) the 4K linear field. (C) When set to automatic metering, the intraoperative area was overexposed; (D) the area was normally exposed after the exposure was locked. (E) In automatic servo focusing mode, the focal plane falls not on the surgical area but on the operator's hand. (F) Manual focus keeps the focus on the area, even when the operator's hand covers it.

Light metering involves determining the necessary exposure based on environmental conditions, which can be manually adjusted by the photographer with different exposure settings or automatically by the camera’s program. The light environment of the operating room is complex, as the surgical field illuminated by a shadowless lamp is typically brighter than the surrounding area. When recording the surgical area with a digital camera that permits manual operation, it is recommended to use a smaller aperture (an opening that allows light to reach a lens) to reduce light intake and increase the depth of field (DOF), which is the range within which objects appear sharp. Although the GoPro was set to automatic metering mode due to the difficulty of manual operation, its FOV shifted with the surgeon’s head movements. As bright lights continually focused on the surgical area, rapid changes in the FOV easily caused overexposure in the operating area (Fig.  3 C). This issue was later addressed by locking the exposure before recording (Fig.  3 D).
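The aperture trade-off described above can be made concrete with the standard photographic exposure value relation, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds. This is a textbook formula, not one taken from the study; the f-numbers below are illustrative:

```python
import math

# Standard exposure value: higher EV means less light reaches the sensor
# for a given scene brightness.
def exposure_value(f_number, shutter_s):
    return math.log2(f_number ** 2 / shutter_s)

# At a fixed 1/60 s shutter, stopping down from f/4 to f/8 raises EV by
# exactly two stops, i.e., the sensor receives a quarter of the light,
# which is why a smaller aperture both reduces light intake and deepens the DOF.
print(round(exposure_value(4, 1/60), 2))  # 9.91
print(round(exposure_value(8, 1/60), 2))  # 11.91
```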

The first recorded video revealed that the digital camera’s automatic servo focusing caused instability in the focal plane within the operational area due to various instruments and the surgeon’s hands in the surgical field (Fig.  3 E). This issue was addressed by manually focusing and locking the focal plane before recording (Fig.  3 F). However, when the position changed during the procedure, an assistant without surgical hand disinfection was required to adjust the camera promptly.

Quality of videos

The image quality of videos from three cameras was sufficient for depicting static and moving objects. However, the operating room is a unique environment where multiple factors influence the cameras’ effectiveness. These factors include the distance from the operating area, obstruction by surgical team members [ 22 ], the lens’s FOV, light overexposure, and reflection from metal instruments.

To more clearly compare the video quality of the three devices across six different head and neck reconstructive procedures, we extracted images from the video files of all devices and assessed their clarity at magnifications of 100% and 300%. Fig.  4 A-C show 100% images alongside detailed 300% magnified images captured from videos recorded by the GoPro8, EOS 850D, and EOS R5, respectively. All three devices provided precise and reliable output under various circumstances and lighting conditions.

Figure 4. 100% images from uncompressed video files, each compared with a 300% magnified detail: (A) GoPro8, (B) EOS 850D, (C) EOS R5.

Positioning in the operative room

Digital cameras capture high-definition images during surgery with accurate focus. However, there were instances when the lens was obscured by the bodies of surgical team members or instruments, missing critical moments. As shown in Table 2, videos recorded with a digital camera placed on an unmodified tripod had a higher rate of unavailable video duration, primarily due to obscuration. The tripod was modified to position the camera's FOV more perpendicular to the surgical area. Consequently, the average proportion of unusable video recorded by the digital camera fell to 11.3% with the modified tripod, a significant decrease from the 37.55% average without modification.

Figure 5 depicts the positioning of the modified tripod, head-mounted camera, and surgical field used for the recording.

figure 5

The position used for the recording with the surgical field and surgeon. A Ilium flap harvesting with digital camera. B Ilium flap harvesting with GoPro

Field of view

In the recordings of Anterolateral thigh flap harvesting, Ilium flap harvesting 2, and Fibula flap harvesting surgeries, the use of the GoPro resulted in a lower duration of unavailable video (1.9%, 2.5%, and 2.0%, respectively) compared to the digital camera (29.5%, 14.7%, and 5.5%), even though the tripod had been modified for the latter two recordings. This outcome is primarily because the surgeon's hand often blocked the digital camera, which was positioned for a third-person perspective. In contrast, the GoPro, attached to the surgeon's head, offered a viewpoint close to the surgeon's own eyes and thereby captured a better visual field. The surgeon's perspective is arguably the most advantageous, as corroborated by many previous studies that placed the camera on the surgeon's forehead for procedural recording [ 13 , 20 , 23 ]. Chaves et al. reported that the head camera position was well received by volunteers [ 24 ]. Though we obtained valuable images this way, there were limitations. The angle from the eye to the target point varies with the surgical technique. A digital camera can easily shift its view by adjusting the tripod's position and angle to keep the surgical area in frame; the GoPro's FOV and focus, however, are fixed upon installation, allowing only horizontal adjustments, which occasionally resulted in the surgical area going out of frame. Nevertheless, the GoPro's wide FOV in both "wide" and "linear" modes generally ensured that the area remained within the shot without continuous monitoring.

Based on the images extracted from the videos, the digital camera, with its zoom capabilities, achieved a more detailed view of the surgical area than the GoPro with its wider FOV. Although the GoPro's images reveal clear anatomical structures upon magnification, they are not as sharp as those from the digital camera. This wider, less magnified view, however, had an unexpected benefit: it captured the surgeon's hand movements between the patient's tissues and the instruments, providing insights into surgical hand positioning and instrument ergonomics that are crucial for training but often overlooked [ 23 ]. Experienced surgeons efficiently organize their workspace, holding the instruments currently in use while preparing others for subsequent steps. On-site trainees, focusing primarily on the operative site, may miss these subtle ergonomic maneuvers. When used in education, surgical recordings that simultaneously display the operative site and hand positioning can offer learners vital insights previously unnoticed [ 25 ].

Connectivity

All three devices possess the capability for wireless connectivity via Wi-Fi or Bluetooth systems. Video captured by these devices can be streamed in real-time to nearby mobile devices or monitors and can even be broadcast online. This feature forms the foundation for remote tele-proctoring and education purposes in surgery, a method proven to be innovative for enhancing surgical education in high-resource settings [ 26 ]. Fig.  6 illustrates the connectivity scheme, which includes a wireless link between the cameras and mobile devices through Wi-Fi or Bluetooth, facilitating further dissemination by these devices.

figure 6

The camera can be connected to a mobile phone or laptop via Wi-Fi or Bluetooth, or even broadcast live via the Internet for broader purposes

In addition, the GoPro itself comes equipped with a livestream function at full 1080p HD. However, the video quality of the webcast is not as high as that of the recordings, owing to the limits of wireless connection speed and bandwidth. The "choppy" video presentation during streaming can be mitigated by using a direct cable for live broadcasts: streaming directly onto a monitor, for local presentation or for broader live broadcasts, offers quality superior to the Wi-Fi or Bluetooth options. The downside is the cumbersome nature of the required cables.
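The "choppy" behavior comes down to arithmetic: a live stream stutters when the required video bitrate (plus protocol overhead) exceeds the sustained uplink throughput. A minimal sketch, with bitrate figures that are assumptions rather than measurements from this study:

```python
# Illustrative check of whether a live stream will stutter; all numbers are assumed.

def stream_is_choppy(video_bitrate_mbps, uplink_mbps, overhead=1.25):
    """A stream needs roughly bitrate x protocol overhead of sustained throughput."""
    return video_bitrate_mbps * overhead > uplink_mbps

# An ~8 Mbit/s 1080p stream over a weak wireless link vs. a wired link
print(stream_is_choppy(8, 6))    # -> True  (choppy)
print(stream_is_choppy(8, 100))  # -> False (smooth)
```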

Editing of video

Benefiting from the high resolution of 4K video, image definition is maintained even after the original clip is magnified. Structural details are well preserved, and the clarity of the operation remains evident in the magnified version, which can be further saved or shared. GoPro Quik, an application developed by GoPro, facilitates customized video editing: it can be used to edit original clips shot by the GoPro camera, re-frame the field of interest, and conveniently export the video in the appropriate format. High-resolution video has its pros and cons. The large volume of data involved makes storing and editing raw video files challenging. Future technologies should enable surgeons to record the area of interest in real time, allowing for more manageable data acquisition without the need for zooming or cropping after capture.
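The storage burden of raw 4K footage can be estimated from bitrate and duration. The 100 Mbit/s figure below is an assumed value typical of high-bitrate 4K recording, not one reported in this study:

```python
# Back-of-the-envelope storage estimate for raw surgical video; bitrate is assumed.

def file_size_gb(bitrate_mbps, duration_min):
    """Approximate file size: bitrate (Mbit/s) x duration, converted to GB."""
    return bitrate_mbps * duration_min * 60 / 8 / 1000  # Mbit -> MByte -> GB

# A 100 Mbit/s 4K stream over a 3-hour flap harvest
print(f"{file_size_gb(100, 180):.0f} GB")  # -> 135 GB
```

At such sizes, a single procedure can exceed the capacity of common memory cards, which is why the text above flags storage and editing of raw files as a practical obstacle.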

Videos for education

The results of the questionnaire were as follows. In the surgeons' group, 100% (n = 3) confirmed that the videos represented the details of their operations well. In the students' group, 66.7% of respondents (n = 6) rated the image quality of the GoPro as excellent and 33.3% (n = 3) as fine, while for the digital camera, 88.9% (n = 8) rated it as excellent and 11.1% (n = 1) as fine (Fig. 7). All respondents (n = 9) affirmed that they could learn professional skills from the videos. In the evaluation conducted by the Expert Review Panel, four of the six videos were considered suitable for clinical teaching applications, one was suitable but a better recording was recommended to replace it, and one was deemed unsuitable for clinical teaching. However, these results must be interpreted with caution due to the small sample size.

figure 7

Trainees' reported degree of satisfaction with the surgical videos, based on whether they could see the procedure clearly and learn from it: comparison of GoPro versus digital camera
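The student proportions reported above follow directly from the counts (n = 9 per device); a quick sanity check:

```python
from collections import Counter

# Reconstructing the reported student ratings (n = 9 per device) from the counts.
gopro = ["excellent"] * 6 + ["fine"] * 3
digital = ["excellent"] * 8 + ["fine"] * 1

def percentages(ratings):
    """Rating distribution as percentages, rounded to one decimal place."""
    n = len(ratings)
    return {k: round(100 * v / n, 1) for k, v in Counter(ratings).items()}

print(percentages(gopro))    # -> {'excellent': 66.7, 'fine': 33.3}
print(percentages(digital))  # -> {'excellent': 88.9, 'fine': 11.1}
```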

With the increasing demand for technical communication, medical teaching, and surgical procedure recording, surgical video has become a popular multimedia mode. It is a powerful medium that can enhance patient safety in several ways: education, real-time consultation, research, process improvement, and workflow coordination [ 27 ]. Operation videos can be transmitted over the internet in real time, providing a platform for communication and cooperation between hospitals. Experienced surgeons can assess trainees' surgical competency in an unbiased fashion through the trainees' intraoperative video [ 28 ]. Individual experienced surgeons may also share their professional knowledge and skills through surgical videos and thereby raise their professional profile. Regarding privacy protection, Turnbull et al. emphasized that video documentation carries significant ethical and legal considerations, as it contains personal information and can infringe on patients' privacy [ 29 ]. The patient's privacy should be carefully considered to avoid the potential ethical and legal conflicts brought about by filming operations.

Pros and cons of two camera systems

The introduction of video technology into surgical procedures is becoming more common, and high-resolution camera technology has been integrated into surgical instrumentation for laparoscopic and minimally invasive procedures [ 30 ]. Although technology continuously evolves, leading to the adoption of many new technologies in intraoperative video recording, there are still limitations in devices for capturing open surgery. Due to economic conditions and space constraints, operating rooms are not routinely equipped with video recording equipment, making personal recording equipment a more viable solution. This study compared two technologies (GoPro and digital camera) used for intraoperative video capture in open surgeries and summarized their advantages and disadvantages (Table 4 ).

GoPro cameras are designed for extreme sports, featuring high resolution, high frame rates, small image sensors, and a lack of complete manual control. They are light and portable enough to be worn on a surgeon's head, providing an image that approximates the natural field of vision without hindering the operation. Their built-in stabilizer function ensures the output image remains stable and visible. Being waterproof, they can be soaked in povidone-iodine for disinfection, facilitating hand-held shooting [ 31 ]. Existing studies confirm that this disinfection method does not compromise asepsis [ 32 ]. The built-in Wi-Fi and Bluetooth allow for remote monitoring and real-time transmission of intraoperative video. The affordability of the GoPro enables doctors to record surgeries cost-effectively, making it accessible to surgeons from LMICs. However, the downsides are clear: users in the operating room typically want a narrow FOV aimed at the surgical area, whereas the GoPro, as an action camera, is designed to capture as comprehensive a panoramic view as possible. Due to the absence of manual controls, it does not adapt well to the frequent brightness changes caused by bright overhead operating room lights. Additionally, the battery lasts approximately 60 min, and the camera may shut down if its body temperature reaches an upper limit during prolonged sessions. For extended recording times, spare batteries are necessary, and attention to the device's temperature is essential.

Digital cameras, with their optimal optical performance and excellent zoom capabilities, can capture specific areas of interest in high quality. They are typically more durable and are generally equipped with larger image sensors, adapting better to unfavorable lighting conditions. Their robust maneuverability makes them suitable for the complex operating room environment. Digital cameras have not been widely used for surgical video recording due to their high cost. However, this study shows that even inexpensive digital cameras, such as the EOS 850D, can produce adequate surgical videos: in the same 4K 30 fps mode, their picture quality is not significantly different from that of the much pricier EOS R5. Of course, more expensive cameras like the EOS R5, which supports 8K video, allow for a better representation of delicate anatomy.

Nevertheless, these cameras’ drawback is that they always require supports like tripods or rocker arms for steady recording. The positioning, height, and relationship with the surgical team determine the final video quality. Furthermore, an additional assistant is needed to adjust camera positions and video settings to maintain the appropriate shooting angle during the procedure. This camera operator might need to direct the surgeon to stop and start at different points throughout the surgery, potentially interfering with the surgical team. The risk of breaking sterility should also be considered when introducing an extra individual into the operating room. This cumbersome and time-consuming shooting method does not lend itself to daily, routine intraoperative videotaping.

Using either a GoPro or a digital camera is a commendable choice. According to our research, the GoPro is a highly efficient option that is better suited for personal recording and can be operated easily without an assistant. Digital cameras, though requiring additional assistance, deliver higher output quality. If the two are innovatively combined, images from different fields of vision can be captured to produce rich, comprehensive, and high-quality videos.

Application of surgical video in education and other aspects

Surgical video holds broad application prospects in medical teaching, technical communication, patient safety, workflow coordination, case data backup, research, real-time consulting, and skill improvement. With the advancement of communication facilities, real-time video recording during surgery presents extensive development prospects akin to digital twin technology [ 33 , 34 ]. Mentoring through this medium can enhance quality and patient safety throughout a medical student’s career. Future developments may involve coaching sessions or honing non-technical skills, such as optimizing teamwork in the operating room to elevate patient care.

Medical students’ journey to becoming surgeons critically requires specific technical feedback while developing foundational skills during their internships. Despite the importance of targeted feedback, medical students often endure inconsistent, fragmented, and stressful experiences in the operating room [ 5 ]. Compounding these challenges, a study on oral and maxillofacial surgery trainees in the United States revealed that the COVID-19 pandemic disrupted the scheduling of non-urgent and elective operations [ 35 ]. With approximately 2.28 million more skilled medical professionals needed to meet the global demand for surgical procedures [ 6 ], training a substantial cohort of future surgeons is a pressing, worldwide challenge.

In addition to traditional book learning and clinical practice, watching surgical videos can help medical students acquire technical details related to surgical operations more precisely, and some critical but fleeting points can be repeated during video playback. Video-based interventions to enhance surgical skills are gaining attention for their educational applications and related research [ 12 ]. The use of video technology in teaching is relatively common in other fields, including sports. In head and neck surgery, some advantages of utilizing high-quality surgical recordings as educational tools are as follows: 1) They provide clear, sharp images that depict fine anatomical structure; 2) Learning through videos offers a more intuitive experience, as viewing surgery footage from a first-person perspective affords residents a more immersive sensation, encouraging them to conceptualize the surgery from the surgeon’s viewpoint; 3) Video recordings of resident physicians’ operations facilitate the assessment of their skill levels, paving the way for enhanced performance; 4) Essential intraoperative findings can be documented and elucidated; 5) The zoom feature enables close-up, detailed recording of surgical procedures and anatomical nuances.

Leveraging the Wi-Fi and Bluetooth capabilities of recording devices, real-time videos can be streamed to mobile phones or laptops or even broadcast live over the internet for tele-proctoring. This emerging technology allows instructors to provide real-time guidance and technical support through audio and video interactions from various geographical locations. This method effectively circumvents the additional logistical costs, time constraints, and challenges posed by distance that are inherent when instructors physically travel to the field [ 36 ]. McCullough et al. [ 26 ] previously explored the feasibility of wearable recording technology in expanding the reach and availability of specialized surgical training in LMICs, using Mozambique as a case study. Their research suggests that this educational model connects surgeons globally and fosters advanced mentoring in regions where surgical trainees have limited opportunities.

Limitations

The findings of this study must be considered within the context of certain limitations. The research was single-centered with a limited number of surgeons involved, and only a single brand of digital camera was selected, which may lead to a lack of diversity and overlook ergonomic differences between types of surgeries and the subtle imaging details between different camera manufacturers. The assessment of the impact of video on teaching also had a small sample size, so potential biases in questionnaire feedback should be considered. Furthermore, there is a persistent need for objective and repeatable metrics to conclusively demonstrate the efficacy of camera technology in clinical education, continuous performance improvement, and quality enhancement initiatives.

Considering that the primary aim of this study was to compare and recommend a high-quality approach for recording surgical videos, future research will focus on conducting multi-centered studies with larger sample sizes and emphasis on the diversity of surgical specialties and camera brands. It is also essential to assess its application more effectively in a learning experience in surgical education, not only in head and neck surgery but also in other surgical areas. Future studies will improve the evaluation of skill levels through practical techniques and written exams, study learning curves in relation to surgical timing, analyze cost-effectiveness, and gather evaluations from the trainer’s perspective.

The field of head and neck surgery has consistently welcomed innovation, embracing the introduction of new techniques into surgical practice. There is a substantial demand and room for development in the domain of open surgical recordings. Surgical video recording serves the purpose of technical communication and accomplishes the objective of medical education through real-time connectivity, addressing the current global shortage of specialized surgeons. The two systems examined in this study, the GoPro and the digital camera, each have distinct features and advantages. The GoPro, an affordable and physician-independent solution, offers a stable and continuous view of the surgical area, though it lacks a medical-specific design and a zoom function. On the other hand, despite requiring periodic repositioning and potentially distracting the surgical team, the digital camera delivers superior visibility of anatomical details and higher image quality.

Availability of data and materials

The data supporting the findings of this study are available within the article and its supplementary materials.

Suchyta M, Mardini S. Innovations and future directions in head and neck microsurgical reconstruction. Clin Plast Surg. 2017;44(2):325–44. https://doi.org/10.1016/j.cps.2016.11.009 .


Tarsitano A, Battaglia S, Ciocca L, Scotti R, Cipriani R, Marchetti C. Surgical reconstruction of maxillary defects using a computer-assisted design/computer-assisted manufacturing-produced titanium mesh supporting a free flap. J Craniomaxillofac Surg. 2016;44(9):1320–6. https://doi.org/10.1016/j.jcms.2016.07.013 .

Rana M, Essig H, Eckardt AM, et al. Advances and innovations in computer-assisted head and neck oncologic surgery. J Craniofac Surg. 2012;23(1):272–8. https://doi.org/10.1097/SCS.0b013e318241bac7 .

Poon H, Li C, Gao W, Ren H, Lim CM. Evolution of robotic systems for transoral head and neck surgery. Oral Oncol. 2018;87:82–8. https://doi.org/10.1016/j.oraloncology.2018.10.020 .

Alameddine MB, Englesbe MJ, Waits SA. A video-based coaching intervention to improve surgical skill in fourth-year medical students. J Surg Educ. 2018;75(6):1475–9. https://doi.org/10.1016/j.jsurg.2018.04.003 .

Meara JG, Leather AJM, Hagander L, et al. Global surgery 2030: evidence and solutions for achieving health, welfare, and economic development. Lancet. 2015;386(9993):569–624. https://doi.org/10.1016/s0140-6736(15)60160-x .

Huntley RE, Ludwig DC, Dillon JK. Early effects of COVID-19 on oral and maxillofacial surgery residency training-results from a national survey. J Oral Maxillofac Surg. 2020;78(8):1257–67. https://doi.org/10.1016/j.joms.2020.05.026 .

Shah JP. The impact of COVID-19 on head and neck surgery, education, and training. Head Neck. 2020;42(6):1344–7. https://doi.org/10.1002/hed.26188 .

Augestad KM, Lindsetmo RO. Overcoming distance: video-conferencing as a clinical and educational tool among surgeons. World J Surg. 2009;33(7):1356–65. https://doi.org/10.1007/s00268-009-0036-0 .

Hu YY, Mazer LM, Yule SJ, et al. Complementing operating room teaching with video-based coaching. JAMA Surg. 2017;152(4):318–25. https://doi.org/10.1001/jamasurg.2016.4619 .

Augestad KM, Butt K, Ignjatovic D, Keller DS, Kiran R. Video-based coaching in surgical education: a systematic review and meta-analysis. Surg Endosc. 2020;34(2):521–35. https://doi.org/10.1007/s00464-019-07265-0 .

Greenberg CC, Dombrowski J, Dimick JB. Video-based surgical coaching: an emerging approach to performance improvement. JAMA Surg. 2016;151(3):282–3. https://doi.org/10.1001/jamasurg.2015.4442 .

Bizzotto N, Sandri A, Lavini F, Dall’Oca C, Regis D. Video in operating room: GoPro HERO3 camera on surgeon’s head to film operations–a test. Surg Innov. 2014;21(3):338–40. https://doi.org/10.1177/1553350613513514 .

Matsumoto S, Sekine K, Yamazaki M, et al. Digital video recording in trauma surgery using commercially available equipment. Scand J Trauma Resusc Emerg Med. 2013;21:27. https://doi.org/10.1186/1757-7241-21-27 .

Knight H, Gajendragadkar P, Bokhari A. Wearable technology: using Google Glass as a teaching tool. BMJ Case Rep. 2015;2015:bcr2014208768. https://doi.org/10.1136/bcr-2014-208768 .

Liao CH, Ooyang CH, Chen CC, et al. Video coaching improving contemporary technical and nontechnical ability in laparoscopic education. J Surg Educ. 2020;77(3):652–60. https://doi.org/10.1016/j.jsurg.2019.11.012 .

Volz S, Stevens TP, Dadiz R. A randomized controlled trial: does coaching using video during direct laryngoscopy improve residents’ success in neonatal intubations? J Perinatol. 2018;38(8):1074–80. https://doi.org/10.1038/s41372-018-0134-7 .

Giusto G, Caramello V, Comino F, Gandini M. The surgeon’s view: comparison of two digital video recording systems in veterinary surgery. J Vet Med Educ. 2015;42(2):161–5. https://doi.org/10.3138/jvme.0814-088R1 .

Silberthau KR, Chao TN, Newman JG. Innovating surgical education using video in the otolaryngology operating room. JAMA Otolaryngol Head Neck Surg. 2020;146(4):321–2. https://doi.org/10.1001/jamaoto.2019.4862 .

Graves SN, Shenaq DS, Langerman AJ, Song DH. Video capture of plastic surgery procedures using the GoPro HERO 3+. Plast Reconstr Surg Glob Open. 2015;3(2):e312. https://doi.org/10.1097/gox.0000000000000242 .

Kapi E. Surgeon-manipulated live surgery video recording apparatuses: personal experience and review of literature. Aesthetic Plast Surg. 2017;41(3):738–46. https://doi.org/10.1007/s00266-017-0826-y .

Kajita H, Takatsume Y, Shimizu T, Saito H, Kishi K. Overhead multiview camera system for recording open surgery. Plastic and reconstructive surgery Global open. 2020;8(4):e2765. https://doi.org/10.1097/GOX.0000000000002765 .

Warrian KJ, Ashenhurst M, Gooi A, Gooi P. A novel combination point-of-view (POV) action camera recording to capture the surgical field and instrument ergonomics in oculoplastic surgery. Ophthalmic Plast Reconstr Surg. 2015;31(4):321–2. https://doi.org/10.1097/iop.0000000000000465 .

Chaves RO, de Oliveira PAV, Rocha LC, et al. An innovative streaming video system with a point-of-view head camera transmission of surgeries to smartphones and tablets: an educational utility. Surg Innov. 2017;24(5):462–70. https://doi.org/10.1177/1553350617715162 .

Wentzell D, Dort J, Gooi A, Gooi P, Warrian K. Surgeon and assistant point of view simultaneous video recording. Studies in health technology and informatics. 2019;257:489–93.


McCullough MC, Kulber L, Sammons P, Santos P, Kulber DA. Google glass for remote surgical tele-proctoring in low- and middle-income countries: a feasibility study from Mozambique. Plastic and reconstructive surgery Global open. 2018;6(12):e1999. https://doi.org/10.1097/GOX.0000000000001999 .

Xiao Y, Schimpff S, Mackenzie C, et al. Video technology to advance safety in the operating room and perioperative environment. Surg Innov. 2007;14(1):52–61. https://doi.org/10.1177/1553350607299777 .

Berger AJ, Gaster RS, Lee GK. Development of an affordable system for personalized video-documented surgical skill analysis for surgical residency training. Ann Plast Surg. 2013;70(4):442–6. https://doi.org/10.1097/SAP.0b013e31827e513c .

Turnbull AM, Emsley ES. Video recording of ophthalmic surgery–ethical and legal considerations. Surv Ophthalmol. 2014;59(5):553–8. https://doi.org/10.1016/j.survophthal.2014.01.006 .

Rassweiler JJ, Teber D. Advances in laparoscopic surgery in urology. Nat Rev Urol. 2016;13(7):387–99. https://doi.org/10.1038/nrurol.2016.70 .

Navia A, Parada L, Urbina G, Vidal C, Morovic CG. Optimizing intraoral surgery video recording for residents’ training during the COVID-19 pandemic: Comparison of 3 point of views using a GoPro. J Plast Reconstr Aesthet Surg. 2021;74(5):1101–60. https://doi.org/10.1016/j.bjps.2020.10.068 .

Purnell CA, Alkureishi LWT, Koranda C, Patel PK. Use of a waterproof camera immersed in povidone-iodine to improve intraoperative photography. Plastic and reconstructive surgery. 2019;143(3):962–5. https://doi.org/10.1097/prs.0000000000005327 .

Ahmed H, Devoto L. The potential of a digital twin in surgery. Surg Innov. 2021;28(4):509–10. https://doi.org/10.1177/1553350620975896 .

Laaki H, Miche Y, Tammi K. Prototyping a digital twin for real time remote control over mobile networks: application of remote surgery. IEEE Access. 2019;7:20325–36. https://doi.org/10.1109/access.2019.2897018 .

Hope C, Reilly JJ, Griffiths G, Lund J, Humes D. The impact of COVID-19 on surgical training: a systematic review. Tech Coloproctol. 2021;25(5):505–20. https://doi.org/10.1007/s10151-020-02404-5 .

Ereso AQ, Garcia P, Tseng E, et al. Live transference of surgical subspecialty skills using telerobotic proctoring to remote general surgeons. J Am Coll Surg. 2010;211(3):400–11. https://doi.org/10.1016/j.jamcollsurg.2010.05.014 .


Acknowledgements

The authors express their gratitude to the surgeons and students who participated in this study. Special thanks are also extended to Dr. Jun Jia and Dr. Kun Lv from the Department of Oral and Maxillofacial Surgery at the School & Hospital of Stomatology, Wuhan University, for their assistance with the surgical cases.

This study was supported by the Wuhan University Undergraduate Education Quality Improvement and Comprehensive Reform Project (1607–413200072), the Fundamental Research Funds for the Central Universities (Wuhan University, Clinical Medicine + X) (2042024YXB017), the Postdoctoral Science Foundation of China (2018M630883 & 2019T120688), the Hubei Province Chinese Medicine Research Project (ZY20230015), the Natural Science Foundation of Hubei Province (2023AFB665), the Medical Young Talents Program of Hubei Province, and the Wuhan Young Medical Talents Training Project to L.-L. Bu.

Author information

Xin-Yue Huang and Zhe Shao contributed equally to this work.

Authors and Affiliations

State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China

Xin-Yue Huang, Zhe Shao, Nian-Nian Zhong, Yuan-Hao Wen, Tian-Fu Wu, Bing Liu, Si-Rui Ma & Lin-Lin Bu

Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan, China

Zhe Shao, Tian-Fu Wu, Bing Liu, Si-Rui Ma & Lin-Lin Bu


Contributions

XYH: Methodology, Investigation, Visualization, Writing—Original Draft, Writing—Review & Editing. ZS: Conceptualization, Investigation, Writing—Original Draft. NNZ: Methodology, Investigation, Writing—Review & Editing. YHW: Visualization, Writing—Review & Editing. TFW: Investigation, Writing—Review & Editing. BL: Writing—Review & Editing, Supervision, Funding acquisition. SRM: Conceptualization, Writing—Review & Editing, Supervision. LLB: Conceptualization, Methodology, Writing—Review & Editing, Supervision, Funding acquisition. All authors have reviewed and approved the final version of this manuscript for publication. Each author agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding authors

Correspondence to Si-Rui Ma or Lin-Lin Bu .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Ethics Committee of the School of Stomatology of Wuhan University (Approval No. 2022B11) and followed the guidelines of the Declaration of Helsinki of the World Medical Association. Informed consent was obtained from all subjects involved in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Huang, XY., Shao, Z., Zhong, NN. et al. Comparative analysis of GoPro and digital cameras in head and neck flap harvesting surgery video documentation: an innovative and efficient method for surgical education. BMC Med Educ 24, 531 (2024). https://doi.org/10.1186/s12909-024-05510-2


Received: 18 November 2023

Accepted: 02 May 2024

Published: 14 May 2024

DOI: https://doi.org/10.1186/s12909-024-05510-2


  • Head and neck surgery
  • Surgery video recording
  • Video-based education
  • Medical education

BMC Medical Education

ISSN: 1472-6920


Analytical Methods

Chemical characterization and comparative analysis of different parts of Cocculus orbiculatus through UHPLC-Q-TOF-MS†


* Corresponding authors

a Guizhou Engineering Research Center of Industrial Key-technology for Dendrobium Nobile, Zunyi Medical University, Zunyi, Guizhou 563000, China E-mail: [email protected] , [email protected]

b Guiyang Xintian Pharmaceutical Co., Ltd, Guiyang 550000, China

c School of Pharmacy, Shanghai Jiao Tong University, Shanghai 200240, China

Cocculus orbiculatus (L.) DC. (C. orbiculatus) is a medicinal herb valued for its dried roots, which have anti-inflammatory, analgesic, diuretic, and other therapeutic properties. Despite its traditional applications, chemical investigations into C. orbiculatus remain limited, focusing predominantly on alkaloids and flavonoids. Furthermore, the therapeutic use of C. orbiculatus predominantly involves the roots, leaving the stems, a significant portion of the plant, underutilized. This study employed ultra-high performance liquid chromatography quadrupole time-of-flight mass spectrometry (UHPLC-Q-TOF-MS/MS) with in-house and online databases for comprehensive identification of components in various plant parts. Subsequently, untargeted metabolomics was employed to analyze differences in components across different harvest periods and plant sections of C. orbiculatus, aiming to screen for distinct components in different parts of the plant. Finally, metabolomic analysis of the roots and stems, which contribute significantly to the plant's weight, was conducted using chemometrics, including principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), orthogonal partial least squares discriminant analysis (OPLS-DA), and heatmaps. A total of 113 components, including alkaloids, flavonoids, and organic acids, were annotated across the root, stem, leaf, flower, and fruit, along with numerous previously unreported compounds. Metabolomic analyses revealed substantial differences in components between the root and stem compared to the leaf, flower, and fruit during the same harvest period. PLS-DA and OPLS-DA annotated 10 differentiating components (VIP > 1.5, P < 0.05, FC > 2 or FC < 0.67), with 5 unique to the root and stem, exhibiting lower mass spectrometric responses. This study provided the first characterization of 113 chemical constituents in different parts of C. orbiculatus, laying the groundwork for pharmacological research and advocating for the enhanced utilization of its stem.
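The screening thresholds quoted above (VIP > 1.5, P < 0.05, FC > 2 or FC < 0.67) are easy to illustrate for the fold-change step. A minimal Python sketch, with invented component names and peak areas rather than the study's data:

```python
# Hypothetical peak areas (mass spectrometric responses) for two components
# in root vs leaf samples; names and values are illustrative only.
root = {"trilobine": [8.2e5, 7.9e5, 8.5e5], "quercitrin": [1.1e4, 1.3e4, 1.2e4]}
leaf = {"trilobine": [2.1e5, 1.8e5, 2.4e5], "quercitrin": [9.8e4, 1.1e5, 1.0e5]}

def mean(xs):
    return sum(xs) / len(xs)

def fold_change(a, b):
    """Ratio of mean responses, group a over group b."""
    return mean(a) / mean(b)

# Apply the fold-change criterion used in the study: FC > 2 or FC < 0.67
differential = [(name, round(fold_change(root[name], leaf[name]), 2))
                for name in root
                if not 0.67 <= fold_change(root[name], leaf[name]) <= 2]
print(differential)
```

In the full workflow this filter would be combined with the VIP scores from PLS-DA/OPLS-DA and a significance test before a component is called differential.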

Graphical abstract: Chemical characterization and comparative analysis of different parts of Cocculus orbiculatus through UHPLC-Q-TOF-MS

Supplementary files

  • Supplementary information PDF (2410K)

Article information



Chemical characterization and comparative analysis of different parts of Cocculus orbiculatus through UHPLC-Q-TOF-MS

X. Wang, M. Wei, L. Qin, D. Tan, F. Wu, J. Xie, D. Wu, A. Liu, J. Wu, X. Wu and Y. He, Anal. Methods , 2024, Advance Article , DOI: 10.1039/D3AY02251J



Scientia Africana, Vol. 23 No. 2 (2024)


Open Access



This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License .


Comparative analysis of three methods for screening for colistin-resistant <i>Escherichia coli</i>, by K. Otokunefor and O.N. Akomah-Abadaike

The usage of colistin, regarded as a drug of last resort, has increased tremendously in recent years. This increase has been followed by a rise in the development of colistin-resistant bacteria. Due to the large size of colistin and its ability to adhere to plastic, the broth microdilution method using a special medium is the recommended testing method. Resource-limited settings struggle with this method and employ alternate methods. This study therefore set out to determine colistin resistance in a group of Escherichia coli using three different methods: the colistin agar spot test (COL-AS), the colistin drop test (COL-DT) and a disc diffusion method. A total of 51 Escherichia coli isolated from wound samples were screened for colistin resistance using the COL-AS, COL-DT and colistin disc diffusion methods. Results showed a combined resistance rate of 96.1% among test isolates. Actual resistance rates varied between testing methods, with values of 37.3%, 66.0% and 88.2% for COL-DT, the colistin disc diffusion method and COL-AS, respectively. An assessment of test performance showed categorical agreement/very major error values of 57.1%/36.7% for COL-DT and 63.3%/8.2% for COL-AS. Results of this study show a high-level occurrence of colistin resistance among clinical Escherichia coli isolates. Furthermore, they demonstrate the superiority of the colistin agar spot test over the colistin drop test. They also point to a need to use higher concentrations of colistin in screening tests.
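The categorical agreement and very major error figures quoted above are derived from per-isolate comparisons against a reference method. A hedged Python sketch with invented data (ten hypothetical isolates, not the study's 51; the reference column stands in for broth microdilution):

```python
# Hypothetical per-isolate results: R = resistant, S = susceptible.
reference = ["R", "R", "R", "S", "R", "S", "R", "R", "S", "R"]
test      = ["R", "S", "R", "S", "R", "S", "S", "R", "S", "R"]

def categorical_agreement(ref, tst):
    """Fraction of isolates where the test method matches the reference."""
    return sum(r == t for r, t in zip(ref, tst)) / len(ref)

def very_major_error(ref, tst):
    """Fraction of reference-resistant isolates reported susceptible by the test."""
    test_calls = [t for r, t in zip(ref, tst) if r == "R"]
    return test_calls.count("S") / len(test_calls)

print(f"CA  = {categorical_agreement(reference, test):.1%}")
print(f"VME = {very_major_error(reference, test):.1%}")
```

Very major errors (false susceptibility) are the clinically dangerous direction, which is why they are reported separately from overall agreement.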

AJOL is a Non Profit Organisation that cannot function without donations. AJOL and the millions of African and international researchers who rely on our free services are deeply grateful for your contribution. AJOL is annually audited and was also independently assessed in 2019 by E&Y.

Your donation is guaranteed to directly contribute to Africans sharing their research output with a global readership.

  • For annual AJOL Supporter contributions, please view our Supporters page.

Journal Identifiers



  • Open access
  • Published: 15 May 2024

Comparative efficacy and safety of alpha-blockers as monotherapy for benign prostatic hyperplasia: a systematic review and network meta-analysis

  • Beema T Yoosuf   ORCID: orcid.org/0009-0001-3584-6212 1 ,
  • Abhilash Kumar Panda 1 ,
  • Muhammed Favas KT   ORCID: orcid.org/0000-0001-8068-6839 1 ,
  • Saroj Kundan Bharti   ORCID: orcid.org/0000-0003-4221-0025 1 ,
  • Sudheer Kumar Devana 2 &
  • Dipika Bansal   ORCID: orcid.org/0000-0003-4520-3293 1  

Scientific Reports, volume 14, Article number: 11116 (2024)


  • Health care
  • Medical research

Despite the availability of various drugs for benign prostatic hyperplasia (BPH), alpha(α)-blockers are the preferred first-line treatment. However, there remains a scarcity of direct comparisons among the various α-blockers. Therefore, this network meta-analysis (NMA) of randomized controlled trials (RCTs) aimed to evaluate the efficacy and safety of α-blockers in the management of BPH. A comprehensive electronic search covered PubMed, Embase, Ovid MEDLINE, and the Cochrane Library until August 2023. The primary endpoints comprised the international prostate symptom score (IPSS), maximum flow rate (Qmax), quality of life (QoL), and post-void residual volume (PVR), while treatment-emergent adverse events (TEAEs) were considered secondary endpoints. This NMA synthesized evidence from 22 studies covering 3371 patients across six kinds of α-blockers with 12 dose categories. IPSS was considerably improved by tamsulosin 0.4 mg, naftopidil 50 mg and silodosin 8 mg compared with placebo. Based on the p-score, tamsulosin 0.4 mg had the highest probability of ranking first for IPSS, PVR, and Qmax, whereas doxazosin 8 mg had the highest probability of improving QoL. A total of 297 adverse events were reported among all the α-blockers, with silodosin accounting for a notable number of TEAEs. Current evidence supports that α-blockers are effective in reducing IPSS and are generally safe. Larger, longer-term studies are needed to refine estimates of IPSS, QoL, PVR, and Qmax outcomes in α-blocker users.


Introduction

Benign prostatic hyperplasia (BPH) is a ubiquitous urological disease of older men, occurring in up to 50% of men aged 50 to 60 years and rising to 90% by age 80; its prevalence increases further with age 1, 2, 3. BPH results from noncancerous prostate gland enlargement induced by cellular hyperplasia of both glandular and stromal components 4. Numerous sources of evidence reveal that, in addition to ageing and family history, modifiable risk factors such as an enlarged prostate, dyslipidemia, hypertension, hormonal imbalance, obesity, metabolic syndrome, diet, alcohol use, and smoking can collectively contribute to BPH 5, 6. Many individuals with BPH experience lower urinary tract symptoms (LUTS) in the form of irritative (frequency, nocturia and urgency) and obstructive urinary symptoms (hesitancy, intermittency, weak stream, incomplete bladder emptying and acute urinary retention (AUR)) 1. LUTS correlated with BPH drastically compromise quality of life (QoL), primarily by disrupting sleep and daily activities 7. Accordingly, the intent of BPH treatment is to alleviate these troublesome and irritating symptoms 1.

Pharmacological management of LUTS correlated with BPH has emerged over the last 25 years 6 . Existing medical therapy for BPH includes alpha-adrenergic receptor antagonists (α-blockers), anticholinergics, 5-alpha reductase inhibitors (5-ARIs) and phosphodiesterase inhibitors (PDE5-Is). Medical therapy is generally considered the initial treatment option for patients with moderate to severe LUTS while surgical approaches like transurethral resection of the prostate (TURP) are recommended for patients who had poor response to medical therapy or those with specific indications like refractory urinary retention, recurrent hematuria and those with severe bladder outlet obstruction leading to hydroureteronephrosis 4 . α-blockers are considered as the first-line drugs for treating BPH. Long-acting α-blockers, such as doxazosin, terazosin, tamsulosin, alfuzosin and silodosin, have been approved by the Food and Drug Administration (FDA) for the treatment of BPH 8 . They can mitigate symptoms by blocking endogenously secreted noradrenaline on smooth muscle cells in the prostate gland, thus reducing prostate tone and bladder outlet obstruction 9 .

Despite the fact that a large number of drugs are now available to treat BPH, α-blockers have a significant impact on improvement in the International Prostate Symptom Score (IPSS), maximum flow rate (Qmax), post-void residual (PVR) and QoL 8, 10, 11. Even though several clinical trials have explored the effectiveness of α-blockers for BPH, direct comparisons among these drugs are still lacking, and conflicting information has emerged from meta-analyses 12, 13, 14. For instance, a network meta-analysis (NMA) of drug therapies for BPH assessed the effectiveness of multiple drug classes rather than individual agents 15. Furthermore, the most recently published NMA evaluated only IPSS, peak urine flow rate (PUF), and adverse events (AEs) among mono-drug therapies for LUTS related to BPH 16. At present, no NMA has extensively evaluated the efficacy of these agents within the class in terms of the majority of outcomes as well as treatment-emergent adverse events (TEAEs). Therefore, the aim of the present study is to address this knowledge gap surrounding the comparative effectiveness of α-blockers for BPH based on available randomised controlled trials (RCTs) and to rank these agents for clinical consideration.

This network meta-analysis (NMA) was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension statement for NMA. We applied a frequentist network meta-analysis for the simplicity of its model formulation 17. The protocol was registered in the Prospective Register of Systematic Reviews (CRD42022365398).

Literature searches

A comprehensive electronic search of PubMed, Ovid MEDLINE, EMBASE and the Cochrane Library was carried out to identify eligible studies. Additionally, a manual search in Google Scholar was performed. The initial search strategy was developed in the PubMed database, and the search strings consisted of combinations of keywords and medical subject headings (MeSH) terms such as “alpha-blockers”, “Alfuzosin”, “Tamsulosin”, “Doxazosin”, “Terazosin”, “Silodosin”, “Naftopidil”, “Benign prostatic hyperplasia” and “Randomised controlled trial”. A methodological search filter was adopted to identify RCTs, and the search was limited to English-language publications. This search strategy served as a template for search algorithms customized to the other databases, namely EMBASE, Ovid MEDLINE, and the Cochrane Library. In addition, the reference lists of the selected studies and review articles were hand-searched for additional potentially pertinent studies.

Study eligibility

This systematic review and NMA sought studies that met the PICO (P—population, I—intervention, C—comparator, O—outcome) framework. RCTs investigating the efficacy and safety of α-blockers in men aged 45 and above with LUTS related to BPH were included. Only monotherapy with α-blockers was eligible, including selective (i.e., terazosin and doxazosin) and uroselective (tamsulosin, silodosin, alfuzosin and naftopidil) agents, with no restrictions on α-blocker dosage 18. As the research question also covered placebo-controlled trials, placebo served as the comparator. The key outcomes of interest were IPSS, QoL, PVR and Qmax. TEAEs were also evaluated in order to provide a comprehensive overview of these drugs. Reviews, editorials, case reports, conference abstracts, studies that deviated from the aimed outcomes or had incomplete results, and articles published in languages other than English were excluded.

Study screening

Two reviewers (BY and AP) worked independently to screen citations and evaluate full-text records for eligibility. Initially, only the title and abstract were screened, and the full texts of presumably pertinent articles were subsequently assessed for final inclusion. A cross-check was performed at both stages to ensure full compliance with the eligibility requirements. Disputes regarding the full-text articles were resolved through discussion with a third reviewer (DB).

Data extraction

Two reviewers (BY and AP) individually extracted the following information into a spreadsheet: study characteristics (Title, first author, publication year, country, duration of treatment), population (study setting, sample size, baseline demographics), characterization of interventions (drug name and dose), and outcomes (reduction in IPSS and PVR, improvement in QOL, Qmax). Disagreements among reviewers were resolved by discussion or, if necessary, communicating with a third reviewer (DB). If any imperative information about study outcomes was missing or unclear in the published studies, the authors were contacted to seek clarification or additional data.

Risk of bias

The methodological quality of each included RCT was critically appraised using the revised Cochrane Risk of Bias Tool (ROB 2.0) 19. This tool captures six main sources of bias, comprising random sequence generation, allocation concealment, missing outcome data, blinding, selective reporting, and other sources of bias. Each domain was scored as low, moderate or high risk.

Statistical analysis

To account for methodological and clinical heterogeneity across studies, and to achieve optimal generalizability of the meta-analytical treatment effects, we adopted a random-effects model 20. As all the efficacy outcomes are continuous, effect sizes were computed as standardised mean differences (SMD) with 95% confidence intervals (CI), and the outcome data were compiled from direct and indirect evidence using a frequentist approach.
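As a rough illustration of the effect-size computation (the study itself used the netmeta package in R; the group means, SDs and sample sizes below are invented), an SMD with an approximate 95% CI can be sketched in Python as:

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with pooled SD and a normal-approximation 95% CI."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - 1.96 * se, d + 1.96 * se

# Hypothetical IPSS change scores: treatment arm vs placebo arm
d, lo, hi = smd_with_ci(m1=-7.5, sd1=4.0, n1=60, m2=-3.0, sd2=4.2, n2=58)
print(f"SMD = {d:.2f} [{lo:.2f}; {hi:.2f}]")
```

A negative SMD here favours the treatment, since a larger reduction in IPSS is the desirable direction.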

Statistical analysis was carried out using the “netmeta” package of R Studio, and data were analysed following the intention-to-treat approach. A network plot of interventions was used to visualise the evidence gathered and offered a succinct overview of its characteristics. Direct evidence was gathered through pairwise meta-analysis, while indirect evidence was obtained through indirect comparisons. The treatments were ranked using p-scores derived from the surface under the cumulative ranking curve (SUCRA); higher p-scores indicate a higher probability of being the most effective treatment 21. In order to evaluate inconsistency, both global and local approaches were utilized. Under the presumption of a full design-by-treatment interaction random effects model 22, the Q test and the I 2 statistic were adopted to evaluate consistency 23. The local approach separates indirect from direct evidence (SIDE) using the back-calculation method. A comparison-adjusted funnel plot was utilized to evaluate small-study effects for each outcome with ≥ 10 studies, where the overall treatment effect for every comparison was estimated employing a random-effects meta-analysis model 24. All eligible drugs were ordered from oldest to newest according to their international market authorisation dates. Furthermore, Grading of Recommendations Assessment, Development, and Evaluation (GRADE) ratings were deployed to assess the certainty of evidence in the networks, employing the Confidence In Network Meta‐Analysis (CINeMA) framework 25.
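The p-score ranking and the I 2 statistic can both be approximated in a few lines. In the sketch below, the treatment effects are the SMDs vs placebo for IPSS reported later in the text, the standard errors are invented, and the normal-approximation p-score is only a simplified stand-in for netmeta's implementation:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_scores(effects, ses, lower_is_better=True):
    """P-score: mean probability that a treatment beats each competitor,
    from normal approximations of the pairwise contrasts."""
    scores = {}
    for i in effects:
        probs = []
        for j in effects:
            if i == j:
                continue
            diff = effects[i] - effects[j]
            se = math.sqrt(ses[i] ** 2 + ses[j] ** 2)
            probs.append(norm_cdf(-diff / se) if lower_is_better else norm_cdf(diff / se))
        scores[i] = sum(probs) / len(probs)
    return scores

def i_squared(q, df):
    """I²: percentage of variability due to heterogeneity rather than chance."""
    return max(0.0, (q - df) / q) * 100

# SMDs vs placebo for IPSS (from the results section); SEs are hypothetical
effects = {"tamsulosin 0.4 mg": -6.10, "naftopidil 50 mg": -5.09,
           "silodosin 8 mg": -3.63, "placebo": 0.0}
ses = {k: 1.4 for k in effects}

ranking = p_scores(effects, ses)  # lower IPSS change = better
print(max(ranking, key=ranking.get))  # highest-ranked treatment
print(f"I² = {i_squared(145, 21):.1f}%")
```

The Q and df values passed to i_squared are illustrative, chosen so the result lands near the 85.5% reported for the IPSS comparison.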

Study selection

The literature search across multiple databases yielded a total of 3019 potentially relevant citations (Table S9). After duplicate removal, 2164 articles remained. Of these, 2022 articles were excluded after the initial title and abstract screening, and 142 articles were retrieved for full-text review. Finally, 22 RCTs (3271 participants) published from 2000 to 2023 were included (Fig. 1) 12, 13, 14, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44.
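The screening arithmetic above can be checked in a few lines; the counts are those reported in the text, and the implied number of duplicates falls out directly:

```python
# PRISMA-style screening-flow counts as reported in the text
identified = 3019          # citations from all database searches
after_dedup = 2164         # remaining after duplicate removal
excluded_screening = 2022  # removed at title/abstract screening
included = 22              # RCTs in the final analysis

duplicates = identified - after_dedup
full_text = after_dedup - excluded_screening
print(duplicates, full_text, included)
```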

Figure 1

PRISMA flow chart of literature searches and results. (PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses).

Study characteristics

The included RCTs comprised the six α-blockers currently in use, with different dose categories: naftopidil 25 mg, 50 mg and 75 mg; silodosin 8 mg; tamsulosin 0.2 mg and 0.4 mg; alfuzosin 2.5 mg and 10 mg; doxazosin 2 mg, 4 mg and 8 mg; and terazosin 0.5 mg, with a total of 3,371 participants. Among the 22 included studies, 10 were multi-centric 26, 29, 31, 32, 34, 36, 39, 40, 41, 42, while 12 were single-centric 12, 13, 14, 27, 28, 30, 33, 35, 37, 38, 43, 44. Nine trials were conducted in Japan 26, 30, 31, 32, 33, 36, 39, 40, 43, five in India 12, 13, 14, 37, 44, two in Korea 41, 42 and one each in China 34, Indonesia 29, Europe 27, the Philippines 28, Egypt 35 and Turkey 38. Most studies (91%) were published after 2005, and half of the studies (50%) involved more than 100 patients. A majority of trials (72.73%) had treatment durations of more than 4 weeks. The mean (SD) age of the patients was 65.3 (6.7) years (Table 1). According to IPSS, the symptoms of patients in the included trials varied from moderate to severe, with a baseline mean (SD) of 18.1 (4.6). The baseline mean (SD) value of QoL was 4.2 (0.8), Qmax (ml/s) 10.2 (3.4), and PVR (ml) 49.0 (34.2).

In terms of study quality, 15 trials (68.18%) exhibited a low risk of bias, three trials (13.64%) had a moderate risk of bias, and four trials (18.18%) had a high risk of bias (Table 2 ).

Efficacy outcome

International prostate symptom score (IPSS)

The NMA on IPSS included 22 RCTs with 6 interventions across 13 dose categories and 3271 participants (Fig. 2a). The base-case estimates of the efficacy of α-blocker regimens in reducing IPSS are listed in Table 2. Twenty-three comparisons estimated the treatment effect from direct evidence, 86 from indirect evidence and 18 from mixed evidence. Compared to placebo, the NMA found that three drugs had a significant effect on the reduction in IPSS: tamsulosin 0.4 mg (SMD: − 6.10; 95% CI: [− 8.74; − 3.47]), followed by naftopidil 50 mg (SMD: − 5.09; 95% CI: [− 8.29; − 1.89]) and silodosin 8 mg (SMD: − 3.63; 95% CI: [− 6.31; − 0.95]) (Fig. 3a). The relative effectiveness is depicted in the league table (Table S1); all included α-blockers significantly reduced IPSS compared to placebo. Based on the p-score, the highest-ranked treatment was tamsulosin 0.4 mg (0.89) and the lowest-ranked was doxazosin 2 mg (0.22) (Table 3). Furthermore, the Q test of consistency showed substantial heterogeneity for this comparison (I 2, 85.5%) (Appendix S1).
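A contrast like those above is read as statistically significant when its 95% CI excludes zero. A trivial check, using the three estimates vs placebo quoted in this paragraph:

```python
def significant(lo, hi):
    """True when a 95% CI excludes zero (significance at the 5% level)."""
    return not (lo <= 0 <= hi)

# SMD [95% CI] vs placebo for IPSS, as reported in the text
results = {
    "tamsulosin 0.4 mg": (-6.10, -8.74, -3.47),
    "naftopidil 50 mg": (-5.09, -8.29, -1.89),
    "silodosin 8 mg": (-3.63, -6.31, -0.95),
}
for drug, (smd, lo, hi) in results.items():
    label = "significant" if significant(lo, hi) else "not significant"
    print(f"{drug}: SMD {smd} is {label}")
```

The same rule explains why the QoL, PVR and Qmax contrasts reported below, whose intervals all straddle zero, are described as non-significant.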

Figure 2

Network plot comparing individual α-blockers on international prostate symptom score (IPSS), quality of life (QoL), post-void residual volume (PVR) and maximum flow rate (Q max). The width of the edge is proportional to the number of trials comparing the two drugs, and the node represents the type of treatment. Tam = tamsulosin, Alfu = alfuzosin, Naf = naftopidil, Tera = terazosin, Dox = doxazosin, Sil = silodosin.

Figure 3

Forest plot of interventions as measured by the international prostate symptom score (IPSS), quality of life (QoL), post-void residual volume (PVR) and maximum flow rate (Q max). Tam = tamsulosin, Alfu = alfuzosin, Naf = naftopidil, Tera = terazosin, Dox = doxazosin, Sil = silodosin.

Quality of life (QoL)

13 RCTs including 6 interventions in 12 dose categories with 2,783 participants contributed to the comparison of the improvement in QoL (Fig. 2b). Fourteen comparisons estimated the treatment effect from direct evidence, 58 from indirect evidence and 7 from mixed evidence. Compared to placebo, none of the comparisons reached statistical significance in improving QoL (Fig. 3b). Doxazosin 8 mg had the highest probability of improving QoL, although the results were imprecise (Table S3). According to the pairwise comparisons, doxazosin 8 mg (− 1.35 [− 4.68; 1.98]) improved QoL compared to placebo (Table S2). Additionally, the Q consistency test showed substantial heterogeneity for this evaluation (I 2, 83.04%) (Appendix S1).

Post-void residual volume (PVR)

15 RCTs including 6 interventions in 10 dose categories with 2,761 participants contributed to the comparison of the reduction in PVR (Fig. 2c). Fifteen comparisons estimated the treatment effect from direct evidence, 51 from indirect evidence and 11 from mixed evidence. Compared to placebo, none of the comparisons showed statistical significance in reducing PVR (Fig. 3c). Tamsulosin 0.4 mg and naftopidil 50 mg had the highest probability of improving PVR, with a p-score of 0.89; however, the results were imprecise (Table S5). According to the pairwise comparisons, tamsulosin 0.4 mg (− 15.99 [− 3.15; 35.12]) reduced PVR compared to placebo, followed by naftopidil 50 mg (− 15.88 [− 34.73; 2.97]), doxazosin 2 mg (− 12.44 [− 36.96; 12.07]) and doxazosin 4 mg (− 6.34 [− 28.27; 15.58]) (Table S4). Additionally, the Q consistency test showed no heterogeneity for this evaluation (I 2, 0%) (Appendix S1).

Maximum urinary flow rate (Qmax)

16 RCTs including 6 interventions in 13 dose categories with 3,114 participants contributed to the comparison of the improvement in Qmax (Fig. 2d). Twenty comparisons estimated the treatment effect from direct evidence, 60 from indirect evidence and 15 from mixed evidence. Compared to placebo, none of the comparisons showed statistical significance in improving Qmax (Fig. 3d). Tamsulosin 0.4 mg had the highest probability of improving Qmax, with a p-score of 0.75 (Table S7). According to the pairwise comparisons, tamsulosin 0.4 mg (− 4.30 [− 9.37; 0.76]) improved Qmax compared to placebo, followed by terazosin 1 mg (− 3.99 [− 9.86; 1.89]), doxazosin 4 mg (− 3.39 [− 2.08; 8.86]) and naftopidil 75 mg (− 3.53 [− 3.03; 10.09]) (Table S6). Moreover, the Q consistency test showed considerable heterogeneity for this evaluation (I 2, 65.87%) (Appendix S1).

Safety outcomes

A total of 297 AEs were reported among the α-blockers (events/participants = 297/3009); silodosin (190/739) dominated with a notable number of AEs, followed by tamsulosin (32/966), doxazosin (27/313), naftopidil (25/544), alfuzosin (20/416) and terazosin (3/31). The most prominent AEs included ejaculation dysfunction, dizziness and hypotension. The AEs associated with α-blockers are listed in Table S8.

Evaluation of evidence quality

The degree of certainty of evidence for each outcome is depicted in Figures S6, S7, S8 and S9. About half of the comparisons received a moderate to low confidence rating for IPSS vs placebo; it was low for all other comparisons owing to imprecision and incoherence. The results of the local and global approaches showed inconsistency for IPSS, while all the other outcomes were found consistent. The quality scoring of the included studies is illustrated in Figure S5. Furthermore, visual inspection of the comparison-adjusted funnel plots found evidence of small-study effects for all outcomes (asymmetrical funnel plots), which indicates potential publication bias (Figs. S1, S2, S3, S4).

Although there are several therapeutic options for BPH at present, pharmacological therapy has become standard care and is widely recommended by clinical guidelines 7. American Urological Association (AUA) and Canadian Urological Association (CUA) guidelines recommend α-blockers as the first-line drug for BPH 45, 46. Given their rapid onset of action, efficacy, and the modest frequency and intensity of adverse effects, α-blockers are considered an excellent choice of therapy for BPH-associated LUTS. The underlying mechanism of α-blockers is to inhibit the effect of endogenously produced norepinephrine on smooth muscle cells of the prostate, thereby reducing prostatic tone and, consequently, urethral obstruction 47, 48. Several α-blockers have been approved by the FDA for the treatment of BPH, including terazosin, alfuzosin, doxazosin, tamsulosin and silodosin, whereas naftopidil is only approved in Japan 49, 50, 51.

Various clinical trials have investigated the effectiveness of α-blockers for BPH; however, direct comparisons among many drugs are still lacking 12, 13, 14. At present, no NMA has extensively evaluated the efficacy of these agents within the class in terms of the majority of outcomes (IPSS, QoL, PVR, Qmax) as well as TEAEs. This NMA focused on 22 RCTs, which included 3271 patients randomly assigned to 6 kinds of α-blockers or placebo with 12 dose categories. Our study revealed that among all the α-blocker monotherapies, tamsulosin 0.4 mg is more effective in improving IPSS, PVR and Qmax compared to placebo, and is the highest-ranked treatment option for these outcomes based on the rank test. Silodosin is considered to have the highest selectivity for α1A adrenoreceptors among the α-blockers: in-vitro studies have shown that the affinity of silodosin and tamsulosin for α1A over α1B adrenoreceptors was 580-fold and 55-fold, respectively, and several clinical trials have accordingly shown that silodosin has greater or comparable efficacy to tamsulosin. However, our NMA contradicts these observations 13, 44 and suggests that the so-called highly selective α-blocker silodosin is not superior to tamsulosin in terms of clinical outcomes. This will help urologists better counsel BPH patients with regard to the efficacy of different α-blockers. All the α-blockers included in our study showed a promising effect in reducing IPSS. On the other hand, α-blockers did not significantly improve QoL, although they showed numerically better results. Nevertheless, the pairwise comparison showed that doxazosin 8 mg improves QoL more than the other α-blockers, and it was the highest-ranked treatment choice in the rank test.

Most guidelines routinely recommend using a symptom questionnaire to evaluate the patient's symptoms. The IPSS is the most commonly used scoring system and is based on the American Urological Association Symptom Index (AUA-SI) 15, 16. It comprises eight questions, seven exploring urinary symptoms and one the overall quality of life 52. All of the included α-blockers significantly reduced IPSS within the first 2 weeks of treatment. Controlled studies suggest that α-blockers often lower the IPSS by 30–40% 47. In addition to their remarkable efficacy, α-blockers are the least expensive and best tolerated of the drugs used to treat LUTS 16, 53.

The included studies validated the overall safety profile, with most AEs ranging from mild to moderate. The most commonly reported AEs were ejaculation disorder, dizziness, diarrhoea, nasal congestion, drowsiness and postural hypotension; dizziness was reported for each of the aforementioned α-blockers. Wang et al. observed similar findings, stating that the most commonly reported AEs with α-blockers were ejaculation disorders, nasopharyngitis, and vasodilation effects such as asthenia, dizziness, headache and hypotension 15. Compared to the other α-blockers, silodosin elicited a notable number of AEs, followed by tamsulosin and doxazosin, and the most predominant adverse effects were ejaculation dysfunction, dizziness, and hypotension. In addition to corroborating our findings, investigations of the most recent drug treatments for LUTS also concurred that silodosin has a higher AE profile than the other therapies, exhibiting a higher rate of ejaculation dysfunction 54, 55. However, α-blocker monotherapies are generally safe with relatively few AEs.

This is the first robust network meta-analysis purely focused on α-blockers, considering the majority of outcomes (IPSS, QoL, PVR, Qmax) along with TEAEs. In 2015, Yuan et al. performed an NMA of RCTs evaluating the comparative effectiveness of mono-drug therapies in BPH 16; however, outcomes such as PVR and QoL were not considered. Moreover, numerous studies were published after 2015 (36.4%), making this an up-to-date comparison of interventions. Studies conducted by Lepor et al. found that when comparing different α-blockers, it is imperative to consider that efficacy and safety are dose-dependent; as a result, observed differences in efficacy and toxicity may be related to the level of α1-blockade achieved rather than inherent pharmacological advantages of the specific drug 8. We therefore compared α-blockers in a dose-dependent way to capture comparative efficacy and safety at different dose levels. Furthermore, the selected studies had similar study designs, selection criteria, and patient characteristics, with few exceptions (duration of treatment), thus supporting exchangeability. Exchangeability across the trials was conceptually considered and the NMA findings were interpreted accordingly. These factors enhance the credibility of the comparisons generated. Besides, the overall quality of the selected studies was found satisfactory.

Although we performed a comprehensive systematic review and NMA of α-blockers, there are still constraints to consider when interpreting the findings. First, this review focused on four outcomes, but limited data were available for QoL, PVR, and Qmax compared to IPSS. The majority of comparisons for outcomes such as PVR and QoL exhibited low certainty of evidence in the CINeMA framework, predominantly owing to the risk of bias from open-label trials and imprecision from a relatively small number of trials. Secondly, α-blockers can minimize both storage and voiding LUTS, but they have no effect on prostate size in short-term studies (≤ 1 year) 56, 57, and the conventional clinical treatment for larger prostate size requires a prolonged treatment period 15. The limited duration of the included RCTs (50% of studies were ≤ 8 weeks and 45% ≤ 12 weeks) therefore impedes the estimation of the long-term effects of α-blockers. Furthermore, this study assessed the efficacy and safety of six different kinds of α-blockers, including five drugs approved by the US FDA (terazosin, alfuzosin, doxazosin, silodosin, and tamsulosin) for BPH, while naftopidil is only approved in Japan; as a result, the findings for naftopidil cannot be generalised. In addition, the majority of studies were conducted in Asian countries, which could limit the broader applicability of the results. The safety of the different kinds of α-blockers was not evaluated using NMA due to a lack of information and the diversity of TEAEs. When interpreting the outcomes of this study, it is imperative to consider the imprecision, heterogeneity and incoherence inherent in the effect estimates.

All the included α-blockers showed a reduction in IPSS, whereas tamsulosin 0.4 mg outperformed the other α-blocker monotherapies in terms of improving IPSS, PVR, and Qmax. Moreover, larger sample sizes along with longer-term studies are required to refine our estimates of IPSS, QoL, PVR, and Qmax among α-blocker users. Silodosin elicited a notable number of AEs; however, dizziness was a common AE observed for all α-blockers. Despite the advancing volume of evidence on α-blockers, there remains a paucity of evidence demonstrating comparative safety in terms of serious and unexpected outcomes. Even though the results provide a pragmatic evaluation of six different types of α-blockers that can aid treatment decisions, direct head-to-head comparisons are required to validate these findings.

Data availability

The datasets gathered in the present study are available from the corresponding author upon reasonable request.

References

Park, T. & Choi, J. Y. Efficacy and safety of dutasteride for the treatment of symptomatic benign prostatic hyperplasia (BPH): A systematic review and meta-analysis. World J. Urol. 32 (4), 1093–1105 (2014).

Kim, J. H. et al. Efficacy and safety of 5 alpha-reductase inhibitor monotherapy in patients with benign prostatic hyperplasia: A meta-analysis. PLoS ONE. 13 (10), e0203479 (2018).

Zitoun, O. A. et al. Management of benign prostate hyperplasia (BPH) by combinatorial approach using alpha-1-adrenergic antagonists and 5-alpha-reductase inhibitors. Eur. J. Pharmacol. 883 , 173301 (2020).

Wu, Y. J., Dong, Q., Liu, L. R. & Wei, Q. A meta-analysis of efficacy and safety of the new α1A-adrenoceptor-selective antagonist silodosin for treating lower urinary tract symptoms associated with BPH. Prostate Cancer Prostatic Dis. 16 (1), 79–84 (2013).

Calogero, A. E., Burgio, G., Condorelli, R. A., Cannarella, R. & La Vignera, S. Epidemiology and risk factors of lower urinary tract symptoms/benign prostatic hyperplasia and erectile dysfunction. Aging Male. 22 (1), 12–19 (2019).

MacDonald, R. et al. Efficacy of newer medications for lower urinary tract symptoms attributed to benign prostatic hyperplasia: A systematic review. Aging Male. 22 (1), 1–11 (2019).

Sun, X. et al. Efficacy and safety of PDE5-Is and α-1 blockers for treating lower ureteric stones or LUTS: A meta-analysis of RCTs. BMC Urol. 18 (1), 30 (2018).

Lepor, H., Kazzazi, A. & Djavan, B. α-Blockers for benign prostatic hyperplasia: The new era. Curr. Opin. Urol. 22 (1), 7–15 (2012).

Zhang, J. et al. Alpha-blockers with or without phosphodiesterase type 5 inhibitor for treatment of lower urinary tract symptoms secondary to benign prostatic hyperplasia: A systematic review and meta-analysis. World J. Urol. 37 (1), 143–153 (2019).

Wang, X. H. et al. Systematic review and meta-analysis on phosphodiesterase 5 inhibitors and α-adrenoceptor antagonists used alone or combined for treatment of LUTS due to BPH. Asian J. Androl. 17 (6), 1022–1032 (2015).

Fusco, F. et al. Alpha-1 adrenergic antagonists, 5-alpha reductase inhibitors, phosphodiesterase type 5 inhibitors, and phytotherapic compounds in men with lower urinary tract symptoms suggestive of benign prostatic obstruction: A systematic review and meta-analysis of urodynamic studies. Neurourol. Urodyn. 37 (6), 1865–1874 (2018).

Perumal, C., Chowdhury, P. S., Ananthakrishnan, N., Nayak, P. & Gurumurthy, S. A comparison of the efficacy of naftopidil and tamsulosin hydrochloride in medical treatment of benign prostatic enlargement. Urol. Ann. 7 (1), 74–78 (2015).

Manohar, C. M. S. et al. Safety and efficacy of tamsulosin, alfuzosin or silodosin as monotherapy for LUTS in BPH—A double-blind randomized trial. Cent. Eur. J. Urol. 70 (2), 148–153 (2017).

Patil, S. B., Ranka, K., Kundargi, V. S. & Guru, N. Comparison of tamsulosin and silodosin in the management of acute urinary retention secondary to benign prostatic hyperplasia in patients planned for trial without catheter. A prospective randomized study. Cent. Eur. J. Urol. 70 (3), 259–263 (2017).

Wang, X. et al. Comparative effectiveness of oral drug therapies for lower urinary tract symptoms due to benign prostatic hyperplasia: A systematic review and network meta-analysis. PLoS ONE. 9 (9), e107593 (2014).

Yuan, J. Q. et al. Comparative effectiveness and safety of monodrug therapies for lower urinary tract symptoms associated with benign prostatic hyperplasia: A network meta-analysis. Medicine 94 (27), e974 (2015).

Shim, S. R., Kim, S. J., Lee, J. & Rücker, G. Network meta-analysis: Application and practice using R software. Epidemiol. Health. 41 , e2019013 (2019).

Capogrosso, P., Salonia, A. & Montorsi, F. Evaluation and Nonsurgical Management of Benign Prostatic Hyperplasia. Campbell-Walsh-Wein Urology 12th edn. (Elsevier, 2021).

Higgins, J. P. et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 343 , d5928 (2011).

Furukawa, T. A., Guyatt, G. H. & Griffith, L. E. Can we individualize the “number needed to treat”? An empirical study of summary effect measures in meta-analyses. Int. J. Epidemiol. 31 (1), 72–76 (2002).

Rücker, G. & Schwarzer, G. Ranking treatments in frequentist network meta-analysis works without resampling methods. BMC Med. Res. Methodol. 15 , 58 (2015).

Higgins, J. P. et al. Consistency and inconsistency in network meta-analysis: Concepts and models for multi-arm studies. Res. Synth. Methods. 3 (2), 98–110 (2012).

Higgins, J. P., Thompson, S. G., Deeks, J. J. & Altman, D. G. Measuring inconsistency in meta-analyses. BMJ. 327 (7414), 557–560 (2003).

Chaimani, A., Higgins, J. P., Mavridis, D., Spyridonos, P. & Salanti, G. Graphical tools for network meta-analysis in STATA. PLoS ONE. 8 (10), e76654 (2013).

Papakonstantinou, T., Nikolakopoulou, A., Higgins, J. P., Egger, M. & Salanti, G. Cinema: Software for semiautomated assessment of the confidence in the results of network meta-analysis. Campbell Syst. Rev. 16 (1), e1080 (2020).

Okada, H. et al. A comparative study of terazosin and tamsulosin for symptomatic benign prostatic hyperplasia in Japanese patients. BJU Int. 85 (6), 676–681 (2000).

van Kerrebroeck, P., Jardin, A., Laval, K. U. & van Cangh, P. Efficacy and safety of a new prolonged release formulation of alfuzosin 10 mg once daily versus alfuzosin 25 mg thrice daily and placebo in patients with symptomatic benign prostatic hyperplasia. ALFORTI Study Group. Eur. Urol. 37 (3), 306–313 (2000).

Lapitan, M. C., Acepcion, V. & Mangubat, J. A comparative study on the safety and efficacy of tamsulosin and alfuzosin in the management of symptomatic benign prostatic hyperplasia: A randomized controlled clinical trial. J. Int. Med. Res. 33 (5), 562–573 (2005).

Rahardjo, D. et al. Efficacy and safety of tamsulosin hydrochloride compared to doxazosin in the treatment of Indonesian patients with lower urinary tract symptoms due to benign prostatic hyperplasia. Int. J. Urol. 13 (11), 1405–1409 (2006).

Yokoyama, T., Kumon, H., Nasu, Y., Takamoto, H. & Watanabe, T. Comparison of 25 and 75 mg/day naftopidil for lower urinary tract symptoms associated with benign prostatic hyperplasia: A prospective, randomized controlled study. Int. J. Urol. 13 (7), 932–938 (2006).

Ukimura, O. et al. Naftopidil versus tamsulosin hydrochloride for lower urinary tract symptoms associated with benign prostatic hyperplasia with special reference to the storage symptom: A prospective randomized controlled study. Int. J. Urol. 15 (12), 1049–1054 (2008).

Masumori, N. et al. Ejaculatory disorders caused by alpha-1 blockers for patients with lower urinary tract symptoms suggestive of benign prostatic hyperplasia: Comparison of naftopidil and tamsulosin in a randomized multicenter study. Urol. Int. 83 (1), 49–54 (2009).

Yokoyama, T. et al. Effects of three types of alpha-1 adrenoceptor blocker on lower urinary tract symptoms and sexual function in males with benign prostatic hyperplasia. Int. J. Urol. 18 (3), 225–230 (2011).

Zhang, K. et al. Effect of doxazosin gastrointestinal therapeutic system 4 mg vs tamsulosin 0.2 mg on nocturia in Chinese men with lower urinary tract symptoms: A prospective, multicenter, randomized, open, parallel study. Urology. 78 (3), 636–640 (2011).

Shelbaia, A., Elsaied, W. M., Elghamrawy, H., Abdullah, A. & Salaheldin, M. Effect of selective alpha-blocker tamsulosin on erectile function in patients with lower urinary tract symptoms due to benign prostatic hyperplasia. Urology. 82 (1), 130–135 (2013).

Yamaguchi, K. et al. Silodosin versus naftopidil for the treatment of benign prostatic hyperplasia: A multicenter randomized trial. Int. J. Urol. 20 (12), 1234–1238 (2013).

Kumar, S., Tiwari, D. P., Ganesamoni, R. & Singh, S. K. Prospective randomized placebo-controlled study to assess the safety and efficacy of silodosin in the management of acute urinary retention. Urology. 82 (1), 171–175 (2013).

Keten, T. et al. Determination of the efficiency of 8 mg doxazosin XL treatment in patients with an inadequate response to 4 mg doxazosin XL treatment for benign prostatic hyperplasia. Urology. 85 (1), 189–194 (2015).

Seki, N. et al. Non-inferiority of silodosin 4 mg once daily to twice daily for storage symptoms score evaluated by the International Prostate Symptom Score in Japanese patients with benign prostatic hyperplasia: A multicenter, randomized, parallel-group study. Int. J. Urol. 22 (3), 311–316 (2015).

Matsukawa, Y. et al. Comparison of silodosin and naftopidil for efficacy in the treatment of benign prostatic enlargement complicated by overactive bladder: A randomized, prospective study (SNIPER study). J. Urol. 197 (2), 452–458 (2017).

Chung, J. H. et al. Efficacy and safety of tamsulosin 0.4 mg single pills for treatment of Asian patients with symptomatic benign prostatic hyperplasia with lower urinary tract symptoms: A randomized, double-blind, phase 3 trial. Curr. Med. Res. Opin. 34 (10), 1793–1801 (2018).

Kwon, S. Y. et al. Comparison of the effect of naftopidil 75 mg and tamsulosin 0.2 mg on the bladder storage symptom with benign prostatic hyperplasia: Prospective, multi-institutional study. Urology 111 , 145–150 (2018).

Matsumoto, S., Kasamo, S. & Hashizume, K. Influence of alpha-adrenoceptor antagonists therapy on stool form in patients with lower urinary tract symptoms suggestive of benign prostatic hyperplasia. Low Urin. Tract. Symptoms. 12 (1), 86–91 (2020).

Pande, S., Hazra, A. & Kundu, A. K. Evaluation of silodosin in comparison to tamsulosin in benign prostatic hyperplasia: A randomized controlled trial. Indian J. Pharmacol. 46 (6), 601–607 (2014).

Lerner, L. B. et al. Management of lower urinary tract symptoms attributed to benign prostatic hyperplasia: AUA GUIDELINE PART I-initial work-up and medical management. J. Urol. 206 (4), 806–817 (2021).

Nickel, J. C., Méndez-Probst, C. E., Whelan, T. F., Paterson, R. F. & Razvi, H. 2010 update: Guidelines for the management of benign prostatic hyperplasia. Can. Urol. Assoc. J. 4 (5), 310–316 (2010).

La Vignera, S. et al. Pharmacological treatment of lower urinary tract symptoms in benign prostatic hyperplasia: Consequences on sexual function and possible endocrine effects. Expert Opin. Pharmacother. 22 (2), 179–189 (2021).

Yassin, A. et al. Alpha-adrenoceptors are a common denominator in the pathophysiology of erectile function and BPH/LUTS–implications for clinical practice. Andrologia. 38 (1), 1–12 (2006).

Florent, R., Poulain, L. & N’Diaye, M. Drug repositioning of the α(1)-adrenergic receptor antagonist naftopidil: A potential new anti-cancer drug?. Int. J. Mol. Sci. 21 (15), 5339 (2020).

Masumori, N. Naftopidil for the treatment of urinary symptoms in patients with benign prostatic hyperplasia. Ther. Clin. Risk Manag. 7 , 227–238 (2011).

Yu, Z. J. et al. Efficacy and side effects of drugs commonly used for the treatment of lower urinary tract symptoms associated with benign prostatic hyperplasia. Front. Pharmacol. 11 , 658 (2020).

de la Rosette, J. J. et al. EAU Guidelines on benign prostatic hyperplasia (BPH). Eur. Urol. 40 (3), 256–263 (2001) ( discussion 64 ).

Lepor, H. Alpha-blockers for the treatment of benign prostatic hyperplasia. Urol. Clin. North Am. 43 (3), 311–323 (2016).

Dahm, P. et al. Comparative effectiveness of newer medications for lower urinary tract symptoms attributed to benign prostatic hyperplasia: A systematic review and meta-analysis. Eur. Urol. 71 (4), 570–581 (2017).

Hwang, E. C., Gandhi, S. & Jung, J. H. New alpha blockers to treat male lower urinary tract symptoms. Curr. Opin. Urol. 28 (3), 273–276 (2018).

McVary, K. T. et al. Update on AUA guideline on the management of benign prostatic hyperplasia. J. Urol. 185 (5), 1793–1803 (2011).

Oelke, M. et al. EAU guidelines on the treatment and follow-up of non-neurogenic male lower urinary tract symptoms including benign prostatic obstruction. Eur. Urol. 64 (1), 118–140 (2013).

Author information

Authors and affiliations

Department of Pharmacy Practice, National Institute of Pharmaceutical Education and Research (NIPER), S.A.S. Nagar, Mohali, Punjab, India

Beema T Yoosuf, Abhilash Kumar Panda, Muhammed Favas KT, Saroj Kundan Bharti & Dipika Bansal

Department of Urology, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Sudheer Kumar Devana

Contributions

BTY contributed to the study concept and design, acquisition and interpretation of data, statistical analysis, drafting of the manuscript, and final revision and editing. AKP contributed to the acquisition and interpretation of data. MFKT contributed to the statistical analysis and interpretation of data. SKB cross-checked the information and revised the drafted manuscript. Dr. SKD contributed to the critical revision of the manuscript for important intellectual content and supervision. Dr. DB contributed to the study concept, critical revision of the manuscript for important intellectual content, and supervision. All authors participated in the manuscript review process and endorsed the final version.

Corresponding author

Correspondence to Dipika Bansal .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Yoosuf, B.T., Panda, A.K., KT, M.F. et al. Comparative efficacy and safety of alpha-blockers as monotherapy for benign prostatic hyperplasia: a systematic review and network meta-analysis. Sci Rep 14 , 11116 (2024). https://doi.org/10.1038/s41598-024-61977-5

Download citation

Received : 27 February 2024

Accepted : 13 May 2024

Published : 15 May 2024

DOI : https://doi.org/10.1038/s41598-024-61977-5

Keywords

  • Benign prostatic hyperplasia
  • International prostate symptom score
  • Network meta-analysis
  • Quality of life

The “qualitative” in qualitative comparative analysis (QCA): research moves, case-intimacy and face-to-face interviews

  • Open access
  • Published: 26 March 2022
  • Volume 57 , pages 489–507, ( 2023 )

  • Sofia Pagliarin   ORCID: orcid.org/0000-0003-4846-6072 3 , 4 ,
  • Salvatore La Mendola 2 &
  • Barbara Vis 1  

Abstract

Qualitative Comparative Analysis (QCA) includes two main components: QCA “as a research approach” and QCA “as a method”. In this study, we focus on the former and, by means of the “interpretive spiral”, we critically look at the research process of QCA. We show how QCA as a research approach is composed of (1) an “analytical move”, where cases, conditions and outcome(s) are conceptualised in terms of sets, and (2) a “membership move”, where set membership values are qualitatively assigned by the researcher (i.e. calibration). Moreover, we show that QCA scholars have not sufficiently acknowledged the data generation process as a constituent research phase (or “move”) for the performance of QCA. This is particularly relevant when qualitative data–e.g. interviews, focus groups, documents–are used for subsequent analysis and calibration (i.e. analytical and membership moves). We call the qualitative data collection process “relational move” because, for data gathering, researchers establish the social relation “interview” with the study participants. By using examples from our own research, we show how a dialogical interviewing style can help researchers gain the in-depth knowledge necessary to meaningfully represent qualitative data into set membership values for QCA, hence improving our ability to account for the “qualitative” in QCA.

1 Introduction

Qualitative Comparative Analysis (QCA) is a configurational comparative research approach and method for the social sciences based on set theory. It was introduced in crisp-set form by Ragin ( 1987 ) and later expanded to fuzzy sets (Ragin 2000 ; 2008a ; Rihoux and Ragin 2009 ; Schneider and Wagemann 2012 ). QCA is a diversity-oriented approach extending “the single-case study to multiple cases with an eye toward configurations of similarities and differences” (Ragin 2000 :22). QCA aims at finding a balance between complexity and generalizability by identifying data patterns that can exhibit or approach set-theoretic connections (Ragin 2014 :88).

As a research approach, QCA researchers first conceptualise cases as elements belonging, in kind and/or degree, to a selection of conditions and outcome(s) that are conceived as sets. They then assign cases’ set membership values to conditions and outcome(s) (i.e. calibration). Populations are constructed for outcome-oriented investigations, and causation is conceived to be conjunctural and heterogeneous (Ragin 2000 : 39ff). As a method, QCA is the systematic and formalised analysis of the calibrated dataset for cross-case comparison through Boolean algebra operations. Combinations of conditions (i.e. configurations) represent both the characterising features of cases and the multiple paths towards the outcome (Byrne 2005 ).
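As a rough sketch of the set-theoretic machinery involved, the following Python fragment computes a case's membership in a configuration via fuzzy-set intersection (the minimum rule) and the standard consistency measure for a sufficiency claim. The condition names and calibrated values are hypothetical, purely for illustration:

```python
def configuration_membership(case, conditions):
    """Fuzzy-set AND (intersection): a case's membership in a
    configuration is the minimum of its memberships in the conditions."""
    return min(case[c] for c in conditions)

def sufficiency_consistency(cases, conditions, outcome):
    """Consistency of 'configuration -> outcome':
    sum(min(X_i, Y_i)) / sum(X_i), where X_i is case i's membership
    in the configuration and Y_i its membership in the outcome set."""
    x = [configuration_membership(c, conditions) for c in cases]
    y = [c[outcome] for c in cases]
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Hypothetical calibrated memberships for three cases
cases = [
    {"A": 0.8, "B": 0.6, "OUT": 0.9},
    {"A": 0.4, "B": 0.7, "OUT": 0.3},
    {"A": 0.9, "B": 0.2, "OUT": 0.2},
]
score = sufficiency_consistency(cases, ["A", "B"], "OUT")
```

Scores near 1 indicate that membership in the configuration is consistently matched or exceeded by membership in the outcome, i.e. the configuration approaches a sufficient condition.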

Most critiques of QCA focus on the methodological aspects of “QCA as a method” (e.g. Lucas and Szatrowski 2014 ), although epistemological issues regarding deterministic causality and subjectivity in assigning set membership values are also discussed (e.g. Collier 2014 ). In response to these critiques, Ragin ( 2014 ; see also Ragin 2000 , ch. 11) emphasises the “mindset shift” needed to perform QCA: QCA “as a method” makes sense only if researchers accept “QCA as a research approach”, including its qualitative component.

The qualitative character of QCA emerges when recognising the relevance of case-based knowledge or “case intimacy”. The latter is key to perform calibration (see e.g. Ragin 2000 :53–61; Byrne 2005 ; Ragin 2008a ; Harvey 2009 ; Greckhamer et al. 2013 ; Gerrits and Verweij 2018 :36ff): when associating “meanings” to “numbers”, researchers engage in a “dialogue between ideas and evidence” by using set-membership values as “ interpretive tools ” (Ragin 2000 : 162, original emphasis). The foundations of QCA as a research approach are explicitly rooted in qualitative, case-oriented research approaches in the social sciences, in particular in the understanding of causation as multiple and configurational, in terms of combinations of conditions, and in the conceptualisation of populations as types of cases, which should be refined in the course of an investigation (Ragin 2000 : 30–42).

Arguably, QCA researchers should make ample use of qualitative methods for the social sciences, such as narrative or semi-structured interviews, focus groups, discourse and document analysis, because this will help gain case intimacy and enable the dialogue between theories and data. Furthermore, as many QCA-studies have a small to medium sample size (10–50 cases), qualitative data collection methods appear to be particularly appropriate to reach both goals. However, so far only around 30 published QCA studies use qualitative data (de Block and Vis 2018 ), out of which only a handful employ narrative interviews (see Sect.  2 ).

We argue that this puzzling observation about QCA empirical research is due to two main reasons. First, quantitative data, in particular secondary data available from official databases, are more malleable for calibration. Although QCA researchers should carefully distinguish between measurement and calibration (see e.g. Ragin 2008a , b ; Schneider and Wagemann 2012 , Sect. 1.2), quantitative data are more convenient for establishing the three main qualitative anchors (i.e. the cross-over point of maximum ambiguity, and the lower and upper thresholds for full set membership exclusion or inclusion). Quantitative data thus make it easier for researchers to perform QCA both as a research approach and as a method. QCA scholars are somewhat aware of this when discussing “the two QCAs” (large-n/quantitative data and small-n/more frequent use of qualitative data; Greckhamer et al. 2013 ; see also Thomann and Maggetti 2017 ).
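One widely used way to operationalise these three anchors is the "direct method" of calibration, which maps raw values to log-odds so that the cross-over point receives membership 0.5 and the inclusion/exclusion thresholds receive roughly 0.95 and 0.05. A minimal sketch, with hypothetical anchor values:

```python
import math

def direct_calibrate(x, full_out, crossover, full_in):
    """Map a raw value x to a fuzzy membership score in [0, 1] using
    three qualitative anchors (direct method of calibration):
      full_in   -> log-odds +3 (membership ~0.95, full inclusion)
      crossover -> log-odds  0 (membership  0.50, maximum ambiguity)
      full_out  -> log-odds -3 (membership ~0.05, full exclusion)
    """
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = -3.0 * (x - crossover) / (full_out - crossover)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical anchors: 2 = fully out, 5 = cross-over, 8 = fully in
m_mid = direct_calibrate(5, 2, 5, 8)  # cross-over case -> 0.5
m_in = direct_calibrate(8, 2, 5, 8)   # full-membership anchor
m_out = direct_calibrate(2, 2, 5, 8)  # full-exclusion anchor
```

The arithmetic is trivial; the substantive, qualitative work lies entirely in justifying the three anchor values themselves, which is exactly where case intimacy matters.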

Second, the use of qualitative data for performing QCA requires an additional effort on the part of the researcher, because data collected through, for instance, narrative interviews, focus groups and document analysis come in verbal form. Therefore, QCA researchers using qualitative methods for empirical research have to first collect data and only then move to their analysis and conceptualisation as sets (analytical move) and their calibration into “numbers” (membership move) for their subsequent handling through QCA procedures (QCA as a method).

Because of these two main reasons, we claim that data generation (or data construction) should also be recognised and integrated in the QCA research process. Fully accounting for QCA as a “qualitative” research approach necessarily entails questions about the data generation process, especially when qualitative research methods are used that come in verbal, and not numerical, form.

This study’s contributions are twofold. First, we present the “interpretative spiral” (see Fig.  1 ) or “cycle” (Sandelowski et al. 2009 ) where data gradually transit through changes of state: from meanings, to concepts to numerical values. In limiting our discussion to QCA as a research approach, we identified three main moves composing the interpretative spiral: the (1) relational (data generation through qualitative methods), (2) analytical (set conceptualisation) and (3) membership (calibration) moves. Second, we show how in-depth knowledge for subsequent set conceptualisation and calibration can be more effectively generated if the researcher is open, during data collection, to support the interviewee’s narration and to establish a dialogue—a relation—with him/her (i.e. the relational move). It is the researcher’s openness that can facilitate the development of case intimacy for set conceptualisation and assessment (analytical and membership moves). We hence introduce a “dialogical” interviewing style (La Mendola 2009 ) to show how this approach can be useful for QCA researchers. Although we mainly discuss narrative interviews, a dialogical interviewing style can also adapt to face-to-face semi-structured interviews or questionnaires.

Fig. 1: The interpretative spiral and the relational, analytical and membership moves

Our main aim is to make QCA researchers more aware of “minding their moves” in the interpretative spiral. Additionally, we show how a “dialogical” interviewing style can facilitate the access to the in-depth knowledge of cases useful for calibration. Researchers using narrative interviews who have not yet performed QCA can gain insight into–and potentially see the advantages of–how qualitative data, in particular narrative interviews, can be employed for the performance of QCA (see Gerrits and Verweij 2018 :36ff).

In Sect. 2 we present the interpretative spiral (Fig. 1) and the interconnections between the three moves, and we discuss the limited use of qualitative data in QCA research. In Sect. 3, we examine the use of qualitative data for performing QCA by discussing the relational move and a dialogical interviewing style. In Sect. 4, we examine the analytical and membership moves and discuss how QCA researchers have so far dealt with them when using qualitative data. In Sect. 5, we conclude by putting forward some final remarks.

2 The interpretative spiral and the three moves

Sandelowski et al. ( 2009 ) state that the conversion of qualitative data into quantitative data (“quantitizing”) necessarily involves “qualitazing”, because researchers perform a “continuous cycling between assigning numbers to meaning and meaning to numbers” (p. 213). “Data” are recognised as “the product of a move on the part of researchers” (p. 209, emphasis added) because information has to be conceptualised, understood and interpreted to become “data”. In Fig.  1 , we tailor this “cycling” to the performance of QCA by means of the interpretative spiral.

Through the interpretative spiral, we show both how knowledge for QCA is transformed into data by means of “moves” and how the gathering of qualitative data consists of a move on its own. Our choice for the term “move” is grounded in the need to communicate a sense of movement along the “cycling” between meanings and numbers. Furthermore, the term “move” resonates with the communicative steps that interviewers and interviewee engage in during an interview (see Sect.  3 below).

Although we present these moves as separate, they are in reality interfaces, because they are part of the same interpretative spiral. They can be thought of as moves in a dance; the latter emerges because of the succession of moves and steps as a whole, as we show below.

The analytical and membership moves are intertwined, as shown by the central “vortex” of the spiral in Fig. 1, as they are composed of a number of interrelated steps, in particular case selection, theory-led set conceptualisation, and definition of the most appropriate set membership scales and of the cross-over and upper and lower thresholds (e.g. crisp-set, 4- or 6-scale fuzzy-sets; see Ragin 2000 :166–171; Rihoux and Ragin 2009 ). Calibration is the last move of the dialogue between theory (concepts of the analytical move) and data (cases). In the membership move, fuzzy sets are used as “an interpretative algebra, a language that is half-verbal-conceptual and half-mathematical-analytical” (Ragin 2000 :4). Calibration is hence a type of “quantitizing” and “qualitizing” (Sandelowski et al. 2009 ). In applied QCA, set membership values can be reconceptualised and recalibrated. This will for instance be done to solve true logical contradictions in the truth table and when QCA results are interpreted by “going back to cases”, hence overlapping with the practices related to QCA “as a method”.

The relational move displayed in Fig. 1 expresses the additional interpretative process that researchers engage in when collecting and analysing qualitative data. De Block and Vis ( 2018 ) show that only around 30 published QCA-studies combine qualitative data with QCA, including a range of additional data such as observations, site visits and newspaper articles.

However, a closer look reveals that the majority of the published QCA-studies using qualitative data employ (semi)structured interviews or questionnaires. Footnote 1 For instance, Basurto and Speer ( 2012 ) Footnote 2 proposed a step-wise calibration process based on a frequency-oriented strategy (e.g. number of meetings, amount of available information) to calibrate the information collected through 99 semi-structured interviews. Fischer ( 2015 ) conducted 250 semi-structured interviews by cooperating with four trained researchers using pre-structured questions, where respondents could voluntarily add “qualitative pieces of information” in “an interview protocol” (p. 250). Henik ( 2015 ) structured and carried out 50 interviews on whistle-blowing episodes to ensure subsequent blind coding of a high number of items (almost 1000), arguably making them resemble face-to-face questionnaires.

In turn, only a few QCA-researchers use data from narrative interviews. Footnote 3 For example, Metelits ( 2009 ) conducted narrative interviews during ethnographic fieldwork over the course of several years. Verweij and Gerrits ( 2015 ) carried out 18 “open” interviews, while Chai and Schoon ( 2016 ) conducted “in-depth” interviews. Wang ( 2016 ), in turn, conducted structured interviews through a questionnaire, following a similar approach as in Fischer ( 2015 ); however, during the interviews, Wang’s respondents were asked to reflexively justify the chosen questionnaire's responses, hence moving the structured interviews closer to narrative ones. Tóth et al. ( 2017 ) performed 28 semi-structured interviews with company managers to evaluate the quality and attractiveness of customer-provider relationships for maintaining future business relations. Their empirical strategy was however grounded in initial focus groups and other semi-structured interviews, composed of open questions in the first part and a questionnaire in the second part (Tóth et al. 2015 ).

Although no interview is completely structured or unstructured, it is useful to conceptualise (semi-)structured and less structured (or narrative) interviews as the two ends of a continuum (Brinkmann 2014 ). Albeit still relatively rare as compared to quantitative data, the more popular integration of (semi-)structured interviews into QCA might be due to the advantages that this type of qualitative data holds for calibration. The “structured” portion of face-to-face semi-structured interviews or questionnaires facilitates the calibration of this type of qualitative data, because quantitative anchor points can be more clearly identified to assign set membership values (see e.g. Basurto and Speer 2012 ; Fischer 2015 ; Henik 2015 ).

Hence, when critically looking at the “qualitative” character of QCA as a research approach, applied research shows that qualitative methods fit uneasily with QCA. This is because data collection has not been recognised as an integral part of the QCA research process. In Sect.  3 , we show how qualitative data, and in particular a dialogical interviewing style, can help researchers to develop case intimacy.

3 The relational move

Social data are not self-evident facts; they do not reveal anything in themselves, but researchers must engage in interpretative efforts concerning their meaning (Sandelowski et al. 2009 ; Silverman, 2017 ). Differently stated, quantitising and qualitising characterise both quantitative and qualitative social data, albeit to different degrees (Sandelowski et al. 2009 ). This is an ontological understanding of reality that is variously held by post-positivist, critical realist, critical and constructivist approaches (except by positivist scholars; see Guba and Lincoln, 2005 :193ff). Our position is more akin to critical realism which, in contrast to post-modernist perspectives (Spencer et al. 2014 :85ff), holds that reality exists “out there” and that, epistemologically, our knowledge of it, although imperfect, is possible–for instance through the scientific method (Sayer 1992 ).

The socially constructed, not self-evident character of social data is manifest in the collection and analysis of qualitative data. Access to the field needs to be earned, as do trust and consent from participants, to gradually build and expand a network of participants. More than “collected”, data are “gathered”, because gathering implies cooperation with participants. Data from interviews and observations are heterogeneous, and need to be transcribed and analysed by researchers, who also self-reflectively experience the entire process of data collection. QCA researchers using qualitative data necessarily have to go through this additional research process–or move–to gather and generate data, before QCA as a research approach can even start. As QCA researchers using qualitative data need to interact with participants to collect their data, we call this additional research process the “relational move”.

While we limit our discussion to narrative interviews and select a few references from a vast literature, our claim is that it is the ability of the interviewer to give life to interviews as a distinct type of social interaction that is key for the data collection process (Chase 2005 ; Leech 2002 ; La Mendola 2009 ; Brinkmann 2014 ). The ability of the interviewer to establish a dialogue with the interviewee–also in the case of (semi-)structured interviews–is crucial to gain access to case-based knowledge and thus develop the case intimacy later needed in the analytical and membership moves. The relational move is about a researcher’s ability to handle the intrinsic duality characterising that specific social interaction we define as an interview. Both (or more) partners have to be considered as necessary actors involved in giving shape to the “inter-view” as an ex-change of views.

Qualitative researchers call this ability “rapport” (Leech, 2002 :665), “contract” or “staging” (Legard et al., 2003 :139). In our specific understanding of the relational move through a “dialogical” Footnote 4 interviewing style, during the interview 1) the interviewer and the interviewee become the “listener” and the “narrator” (Chase, 2005 :660) and 2) a true dialogue between listener and narrator can only take place when they engage in an “I-thou” interaction (Buber 1923 /2008), as we will show below when we discuss selected examples from our own research.

As a communicative style, in a dialogical interview not only can the researcher not disappear behind the veil of objectivity (Spencer et al. 2014 ), but she is also aware of the relational duality–or “dialogueness”–inherent to the “inter-view”. Dialogical face-to-face interviewing can be compared to a choreography (Brinkmann 2014 :283; Silverman 2017 :153) or a dance (La Mendola 2009 , ch. 4 and 5) where one of the partners (the researcher) is the porteur (“supporter”) of the interaction. As in a dancing couple, the listener supports, but does not lead, the narrator in the unfolding of her story. The dialogical approach to interviewing is hence non-directive, but supportive. A key characteristic of dialogical interviews is a particular way of “being in the interview” (see example 2 below) because it requires the researcher to consider the interviewee as a true narrator (a “thou”). Footnote 5

In a dialogical approach to interviews, questions can be thought of as frames through which the listener invites the narrator to tell a story in her own terms (Chase 2005 :662). The narrator becomes the “subject of study” who can be disobedient and capable of raising her own questions (Latour 2000 : 116; see also Lund 2014). This is also compatible with a critical realist ontology and epistemology, which holds that researchers inevitably draw artificial (but negotiable) boundaries around the object and subject of analysis (Gerrits and Verweij 2013). The case-based, or data-driven (ib.), character of QCA as a research approach hence takes on a new meaning: in a dialogical interviewing style, although the interviewer/listener proposes a focus of analysis and a frame of meaning, the interviewee/narrator is given the freedom to re-negotiate that frame of meaning (La Mendola 2009 ; see examples 1 and 2 below).

We argue that this is an appropriate way to obtain case intimacy and in-depth knowledge for subsequent QCA, because it is the narrator who proposes meanings that will then be translated by the researcher, in the following moves, into set membership values.

Particularly key for a dialogical interviewing style is question formulation, where the interviewer privileges “how” questions (Becker 1998 ). In this way, “what” and “why” (evaluative) questions are avoided, which ask the interviewee to rationally explain, with hindsight, a process that supposedly developed in a linear way. Typifying questions, through which the interviewer gathers general information (e.g. Can you tell me about the process through which an urban project is typically built? Can you tell me about your typical day as an academic?), are also avoided. Footnote 6 “Dialogical” questions can start with: “I would like to propose that you tell me about…” and are akin to “grand tour questions” (Spradley 1979 ; Leech 2002 ) or questions posed “obliquely” (Roulston 2018 ), because they aim at collecting stories and episodes in a certain situation or context, allowing the interviewee to be relatively free in answering the questions.

An example taken from our own research on a QCA of large-scale urban transformations in Western Europe illustrates the distinct approach characterising dialogical interviewing. One of our aims was to reconstruct the decision-making process concerning why and how a certain urban transformation took place (Pagliarin et al. 2019 ). QCA has previously been used to study urban development and spatial policies because it is sensitive to individual cases, while also accounting for cross-case patterns by means of causal complexity (configurations of conditions), equifinality and causal asymmetry (e.g. Byrne 2005 ; Verweij and Gerrits 2015 ; Gerrits and Verweij 2018 ). A conventional way to formulate this question would be: “In your opinion, why did this urban transformation occur at this specific time?” or “Which were the governance actors that decided its implementation?”. Instead, we formulated the question in a narrative and dialogical way:

Example 1 Listener [L]: Can you tell me how the site identification and materialization of Ørestad came about? Narrator [N]: Yes. I mean there’s always a long background for these projects. (…) it’s an urban area built on partly reclaimed land. It was, until the second world war, a seaport and then they reclaimed it during the second world war, a big area. (…) this is the island called Amager. In the western part here, you can see it differs completely from the rest and that’s because they placed a dam all around like this, so it’s below sea level. (…) [L]: When you say “they”, it’s…? [N]: The municipality of Copenhagen. Footnote 7 (…)

In this example, the posed question (“how… [it]… came about?”) is open and oriented toward collecting the specific story of the narrator about “how” the Ørestad project emerged (Becker 1998 ), starting at the specific time point and angle decided by the interviewee. Here, the interviewee decided to start just after the Second World War (albeit the focus of the research was only from the 1990s) and described the area’s geographical characteristics as a background for the subsequent decision-making processes. It is then up to the researcher to support the narrator in funnelling in the topics and themes of interest for the research. In the above example, the listener asked: “When you say “they”, it’s…?” to signal to the narrator to be more specific about “they”, without, however, presuming to know the answer (“it’s…?”). In this way, the narrator is supported to expand on the role of Copenhagen municipality without being directly asked for it (though directly asking remains an option the interviewer can always seize).

The specific “dialogical” way of the researcher of “being in the interview” is rooted in the epistemological awareness of the discrepancy between the narrator’s representation and the listener’s. During an interview, there are a number of “representation loops”. As discussed in the interpretative spiral (see Sect.  2 ), the analytical and membership moves are characterised by a number of research steps; similarly, in the relational move the researcher engages in representation loops or interpretative steps when interacting with the interviewee. The researcher holds ( a ) an analytical representation of her focus of analysis, ( b ) which will be re-interpreted by the interviewee (Geertz, 1973 ). In a dialogical style of interview, the researcher also embraces ( c ) her representation of the ( b ) interviewee's interpretation of ( a ) her theory-led representation of the focus of analysis. Taken together, ( a )-( b )-( c ) are the structuring steps of a dialogical interview, where the listener’s and narrator’s representations “dance” with one another. In the relational move, the interviewer is aware of the steps from one representation to another.

In the following Example 2 , the narrator re-elaborated (interpretative step b) the frame of meaning of the listener (interpretative step a) by emphasising to the listener two development stages of a certain project (an airport expansion in Barcelona, Spain), which the researcher did not previously think of (interpretative step c):

Example 2 [L]: Could you tell me about how the project identification and realisation of the Barcelona airport come about? [N]: Of the Barcelona airport? Well. The Barcelona airport is I think a good thermometer of something deeper, which has been the inclusion of Barcelona and of its economy in the global economy. So, in the last 30 years El Prat airport has lived through like two impulses of development, because it lived, let´s say, the necessary adaptation to a specific event, that is the Olympic games. There it lived its first expansion, to what we today call Terminal 2. So, at the end of the ´80 and early ´90, El Prat airport experienced its first big jump. (...) Later, in 2009 (...) we did a more important expansion, because we did not expand the original terminal, but we did a new, bigger one, (...) the one we now call Terminal 1. Footnote 8

If the interviewee is considered as a “thou”, and if the researcher is aware of the representation loops (see above), the collected information can also be helpful for constructing the study population in QCA. The population under analysis is oftentimes not given in advance but gradually defined through the process of casing (Ragin 2000 ). This allows the researcher to be open to construct the study population “with the help of others”, like “informants, people in the area, the interlocutors” (Lund 2014:227). For instance, in example 2 above, the selection of which urban transformations will form the dataset can depend on the importance given by the interviewees to the structuring impact of a certain urban transformation on the overall urban structure of an urban region.

In sum, the data collection process is a move of its own in the research process for performing QCA. Especially when the collected data are qualitative, the researcher engages in a relationship with the interlocutor to gather information. A dialogical approach emphasises that the quality of the gathered data depends on the quality of the dialogue between narrator and listener (La Mendola 2009 ). When the listener is open to considering the interviewee as a “thou”, and when she is aware of the interpretative steps occurring in the interview, then meaningful case-based knowledge can be accessed.

Case intimacy is best developed when the researcher is open to integrating her focus of analysis with fieldwork information and when she invites, as in a dance, the narrator to tell his story. However, a dialogical interviewing style is not theory-free, but it is “theory-independent”: the dialogical interviewer supports the narration of the interviewee and does not lead the narrator by imposing her own conceptualisations. We argue that such a dialogical I-thou interaction during interviews fosters in-depth knowledge of cases, because the narrator is treated as a subject who can propose his interpretation of the focus of analysis before the researcher frames it within her analytical and membership moves.

However, in practice, there is a tension between the researcher's need to collect data and the “here-and-now interactional event of the interview” (Rapley, 2001 :310). It is inevitable that the researcher re-elaborates, to a certain degree, her analytical framework during the interviews, because this enables the researcher to get acquainted with the object of analysis and to keep the interview content on target with the research goals (Jopke and Gerrits, 2019 ). But it is this re-interpretation of the interviewee's replies and stories by the listener during the interviews that opens the interviewer’s awareness of the representation loops.

4 The analytical and membership moves

Researchers engage in face-to-face interviews as a strategy for data collection by holding specific analytical frameworks and theories. A researcher seldom begins his or her undertakings, even in the exploratory phase, with a completely open mind (Lund 2014:231). This means that the researcher's representations (a and c, see above) of the narrator's representation(s) (b, see above) are related to the theory-led frames of inquiry that the researcher organises to understand the world. These frames are typically also verbal, as “[t]his framing establishes, and is established through, the language we employ to speak about our concerns” (Lund 2014:226).

In particular for the collection of qualitative data, the analytical move is composed of two main movements: during and after the data collection process. During the data collection process, when adopting a dialogical interviewing style, the researcher should take care to keep the interview theory-independent (see above). First, this means that the interviewee is not asked to rise to the researcher’s analytical level. The use of jargon should be avoided, whether in narrative or semi-structured interviews and questionnaires, because it would confine the narrator's representation(s) (b) within the listener's interpretative frames (a), and hence limit the chance for the researcher to gain in-depth case knowledge (c). Silverman ( 2017 :154) cautions against “flooding” interviewees with “social science categories, assumptions and research agendas”. Footnote 9 In example 1 above, the use of the words “governance actors” may have misled the narrator–even an expert–since its meaning might not be clear or be the same as the interviewer's.

Second, the researcher should neither sympathise with the interviewee nor judge the narrator’s statements, because this would transform the interview into another type of social interaction, such as a conversation, an interrogation or a confession (La Mendola 2009 ). The analytical move requires that the researcher does not confuse the interview as social interaction with his or her analysis of the data, because this is a specific, separate moment after the interview is concluded. Whatever material or stories a researcher receives during the interviews, it is eventually up to him or her to decide which representation(s) will be told (and how) (Stake 2005 :456). It is the job of the researcher to perform the necessary analytical work on the collected data.

After the fieldwork, the second stage of the analytical move is a change of state of the interviewees' replies and stories, so that they can subsequently “feed” into QCA. The researcher begins to qualitatively assess and organise the in-depth knowledge, in the form of replies or stories, received from the interviewees through their narrations. This usually involves the (double-)coding of the qualitative material, manually or through the use of dedicated software. The analysis of the qualitative material organises the in-depth knowledge gained through the relational move and sustains the (re)definition of the outcome and conditions, and their related attributes and sub-dimensions, for performing QCA.

Recognising the difficulty of integrating qualitative (interview) data into QCA procedures, QCA-researchers have developed templates, tables or tree diagrams to structure the analysed qualitative material into set membership scores (Basurto and Speer 2012 ; Legewie 2017 ; Tóth et al. 2017 ; see also online supplementary material). We call these different templates “Supports for Membership Representation” (SMeRs) because they facilitate the passage from conceptualisation (analytical move) to operationalisation into set membership values (membership move). Below, we discuss these templates by placing them along a continuum from “more theory-driven” to “more data-driven” (see Gerrits and Verweij 2018 , ch. 1). Although the studies included below did not use a dialogical approach to interviews, we also examine the SMeRs in terms of their openness towards the collected material. As explained above, we believe it is this openness–at best “dialogical”–that facilitates the development of case intimacy on the side of the researcher. Following the steps characterising both moves (see Sect.  2 above), we differentiate below between the analytical and membership moves.

Basurto and Speer ( 2012 ) were the first to develop and present a preliminary but modifiable list of theoretical dimensions for conditions and outcome. Their interview guideline is purposely designed, prior to the interviews, to obtain responses that identify anchor points and match fuzzy sets. In our perspective, this contravenes the separation between the relational and analytical moves: the researcher deals with interviewees as “objects” whose shared information is fitted to the researcher’s analytical framework. In their analytical move, Basurto and Speer define an ideal and a deviant case–both of them non-observable–to locate, by comparison, their cases and facilitate the assignment of fuzzy-set membership scores (membership move).

Legewie ( 2017 ) proposes a “grid” called Anchored Calibration (AC) by building on Goertz ( 2006 ). In the analytical move, the researcher first structures (sub-)dimensions for each condition and the outcome by means of concept trees. Each concept is then represented by a gradation, which should form conceptual continua (e.g. from low to high) and is organised in a tree diagram to include sub-dimensions of the conditions and outcome. In the membership move, anchor points (i.e. 0, 0.25, 0.75, 1) are assigned to each “graded” concept. The researcher then iteratively matches coded evidence from narrative interviews (analytical move) to the identified anchor points for calibration, thus assigning set membership scores (e.g. 0.33 or 0.67; i.e. membership move). Similar to Basurto and Speer ( 2012 ), the analytical framework of the researcher is given priority and tightly structures the collected data. Key for anchored calibration is the conceptual neatness of the SMeR, which is advantageous for the researcher but which, in our perspective, allows only a limited dialogue with the cases and hence hinders the development of case intimacy.
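To illustrate the mechanics of this membership move, the following sketch shows how graded evidence might be mapped to anchor points. The grade labels, the `calibrate` function and the aggregation by averaging are hypothetical simplifications on our part; Legewie's matching of evidence to anchors is qualitative and iterative, not arithmetic.

```python
# Minimal sketch of anchor-point calibration, loosely inspired by
# Anchored Calibration (Legewie 2017). All names and the averaging
# rule are illustrative assumptions, not the author's procedure.

# Anchor points for one graded concept, from "low" to "high"
ANCHORS = {"low": 0.0, "rather low": 0.25, "rather high": 0.75, "high": 1.0}

def calibrate(coded_evidence):
    """Aggregate the anchor points matched to a case's coded evidence
    into a single fuzzy-set membership score (here: a rounded mean)."""
    matched = [ANCHORS[grade] for grade in coded_evidence]
    return round(sum(matched) / len(matched), 2)

# A case whose interview passages were coded "rather high" twice and "high" once
print(calibrate(["rather high", "rather high", "high"]))  # → 0.83
```

In the actual procedure, intermediate scores such as 0.33 or 0.67 result from the researcher's judgement about where the evidence sits between anchors, not from a formula.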

An alternative route is the one proposed by Tóth et al. ( 2017 ). The authors devise the Generic Membership Evaluation Template (GMET) as a “grid” where qualitative information from the interviews (e.g. quotes) and from the researcher’s interpretative process is included. In the analytical move, their template clearly serves as a “translation support” to translate “meanings” into “numbers”: researchers included information on how they interpreted the evidence (e.g. positive/negative direction/effect on membership of a certain attribute; i.e. analytical move), as well as an explanation of why specific set membership scores were assigned to cases (i.e. membership move). Tóth et al.’s ( 2017 ) SMeR appears more open to the interviewees’ perspective, as the researchers engaged in a mixed-method research process where the moment of data collection–the relational move–is elaborated on (Tóth et al. 2015 ). We find their approach more effective for gaining in-depth knowledge of cases and for supporting the dialogue between theory and data.
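One way to picture a GMET-style entry is as a record coupling evidence, interpretation and the assigned score. The field names below are our illustrative reading of the template, not Tóth et al.'s exact layout; the quote is taken from Example 5 further below.

```python
from dataclasses import dataclass

@dataclass
class GMETRow:
    """One row of a GMET-like grid: qualitative evidence, the
    researcher's interpretation, and the resulting membership score.
    Field names are illustrative, not the authors' template."""
    attribute: str     # attribute of the condition being assessed
    quote: str         # supporting interview quote
    direction: str     # interpreted effect on membership ("+" or "-")
    explanation: str   # why this membership score was assigned
    membership: float  # resulting fuzzy-set membership value

row = GMETRow(
    attribute="external events",
    quote="When the American Army left the site in 1992…",
    direction="+",
    explanation="Fully exogenous event that triggered redevelopment.",
    membership=1.0,
)
print(f"{row.attribute}: {row.membership}")  # → external events: 1.0
```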

Jopke and Gerrits ( 2019 ) discuss routines, concrete procedures and recommendations on how to inductively interpret and code qualitative interview material for subsequent calibration by using a grounded-theory approach. In their analytical move, the authors show how conditions can be constructed from the empirical data collected from interviews; they suggest first performing an open coding of the interview material and then continuing with a theoretical coding (or “closed coding”) that is informed by the categories identified in the previous open coding procedure, before defining set membership scores for cases (i.e. membership move). Similar to Tóth et al. ( 2017 ), Jopke and Gerrits’ ( 2019 ) SMeR engages with the data collection and the gathered qualitative material by being open to what the “data” have to “tell”, hence implementing a strategy for data analysis that is effective to gain in-depth knowledge of cases.
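The open-to-closed coding route can be sketched as a simple tally. The code labels, the frequency threshold and the selection rule are hypothetical illustrations, not Jopke and Gerrits' actual grounded-theory procedure.

```python
from collections import Counter

# Illustrative open codes attached to interview segments during a first,
# inductive coding pass (labels are invented for the example)
open_codes = [
    "external event", "municipal leadership", "external event",
    "real-estate opportunity", "external event", "municipal leadership",
]

# Recurring open codes become candidate categories for the subsequent
# theoretical ("closed") coding, from which conditions are constructed
frequencies = Counter(open_codes)
candidates = [code for code, n in frequencies.most_common() if n >= 2]
print(candidates)  # → ['external event', 'municipal leadership']
```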

Another type of SMeR is the elaboration of summaries of the interview material by unit of analysis (e.g. urban transformations, participation initiatives, interviewees’ individual career paths). Rihoux and Lobe ( 2009 ) propose so-called short case descriptions (SCDs). Footnote 10 As a possible step within the interpretative spiral available to the researcher, SCDs are concise summaries that synthesise the most important information sorted by certain identified dimensions, which will then compose the conditions, and their sub-dimensions, for QCA. As a type of SMeR, the summaries consist of a change of state of the qualitative material, because they provide “intermediate” information on the threshold between the coding of the interview transcripts and the subsequent assignment of membership scores (the membership move, or calibration) for the outcome and each condition. Furthermore, the writing of short summaries appears to be particularly useful for allowing researchers who have already performed narrative interviews to evaluate whether to carry out QCA as a systematic method for comparative analysis. For instance, similar to what Tóth et al. ( 2017 :200) did to reduce interview bias, in our own research interviewees could cover the development of multiple cases, and the use of short summaries helped us compare information for each case across multiple interviewees and spot possible contradictions.
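The cross-checking role of short summaries can be sketched as follows. The case names are taken from our examples, while the summary texts, interviewee labels and the flagging rule are hypothetical.

```python
from collections import defaultdict

# Short case summaries keyed by (case, interviewee); hypothetical content
summaries = {
    ("Ørestad", "interviewee_A"): "Decided jointly by municipality and state.",
    ("Ørestad", "interviewee_B"): "Driven mainly by the municipality alone.",
    ("Part-Dieu", "interviewee_C"): "Part of a national re-balancing policy.",
}

# Group summaries per case to compare information across interviewees
by_case = defaultdict(list)
for (case, interviewee), text in summaries.items():
    by_case[case].append((interviewee, text))

# Cases covered by several interviewees warrant a check for contradictions
flagged = [case for case, entries in by_case.items() if len(entries) > 1]
print(flagged)  # → ['Ørestad']
```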

The overall advantage of SMeRs is helping researchers provide an overview of the quality and “patchiness” of available information about the cases per interview (or document). SMeRs can also help spot inconsistencies and contradictions, thus guiding researchers to judge if their data can provide sufficiently homogeneous information for the conditions and outcome composing their QCA-model. This is particularly relevant in case-based QCA research, where descriptive inferences are drawn from the material collected from the selected cases and depend on the degree of its internal validity (Thomann and Maggetti 2017 :361). Additionally, the issue of the “quality” and “quantity” of the available qualitative data (de Block and Vis 2018 ) can be checked ex-ante, before embarking on QCA.

For the membership move, the GMET, the AC, grounded theory coding and short summaries support the qualitative assignment of set membership values from empirical interview data. SMeRs typically include an explanation of why a certain set membership score has been assigned to each case record, and diagrammatically arrange information about the interpretation path that researchers have followed to attribute values. They are hence a true “interface” between qualitative empirical data (“words/meaning”) and set membership values (“numbers”). Each dimension included in SMeRs can also be coupled with direct quotes from the interviews (Basurto and Speer 2012 ; Tóth et al. 2017 ).

In our own research (Pagliarin et al. 2019 ), after having coded the interview narratives, we developed concepts and conditions first by comparing the gathered information through short summaries—similar to short case descriptions (SCDs), see Rihoux and Lobe ( 2009 )—and then by structuring the conditions and indicators in a grid by adapting the template proposed by Tóth et al. ( 2017 ). One of the goals of our research was to identify “external factors or events” affecting the formulation and development of large-scale urban transformations. External (national and international) events (e.g. a failed/winning bid for the Olympic Games, the fall of the Iron Curtain/Berlin Wall) do not have an effect per se, but they stimulate actors locally to make a certain decision about project implementation. We were able to gain this knowledge because we adopted a dialogical interviewing style (see Example 3 below). As the narrator is invited to tell us about some of the most relevant projects of urban transformation in Greater Copenhagen in the past 25–30 years, he is free to mention the main factors and actors impacting on Ørestad as an urban transformation.

Example 3 [L]: In this interview, I would propose that you tell me about some of the most relevant projects of urban transformation that have been materialized in Greater Copenhagen in the past 25–30 years. I would like you to tell me about their itinerary of development, step by step, and if possible from where the idea of the project emerged. [N]: Okay, I will try to start in the 80’s. In the 80’s, there was a decline in the city of Copenhagen. (…) In the end of the 80’s and the beginning of the 90’s, there was a political trend. They said, “We need to do something about Copenhagen. It is the only big city in Denmark so if we are going to compete with other cities, we have to make something for Copenhagen so it can grow and be one of the cities that can compete with Amsterdam, Hamburg, Stockholm and Berlin”. I think also it was because of the EU and the market so we need to have something that could compete and that was the wall falling in Berlin. (…) The Berlin Wall, yes. So, at that time, there was a commission to sit down with the municipality and the state and they come with a plan or report. They have 20 goals and the 20 goals was to have a bridge to Sweden, expanding of the airport, a metro in Copenhagen, investment in cultural buildings, investment in education. (…) In the next 5 years, from the beginning of the 90’s to the middle of the 90’s, there were all of these projects more or less decided. (…) The state decided to make the airport, to make the bridge to Sweden, to make… the municipality and the city of Copenhagen decides to make Ørestad and the metro together with the state. So, all these projects that were lined up on the report, it was, let’s decide in the next 5 years. [L]: So, there was a report that decided at the end of the 80’s and in the 90’s…? [N]: Yes, ‘89. (…) To make all these projects, yes. (…). [L]: Actually, one of the projects I would like you to tell me about is the Ørestad. [N]: Yes. It is the Ørestad.
The Ørestad was a transformation… (…).

The factors mentioned by the interviewee corresponded to the main topics of interest to the researcher. In this example, we can also highlight the presence of a “prompt” (Leech 2002 ) or “clue” (La Mendola 2009 ). To keep the narrator focused, the researcher “brings back” (the original meaning of rapporter ) the interviewee to the main issues of the inter-view by asking “So, there was a report…”.

Following the question formulation as shown in example 3, below we compare the external event(s) impacting the cases of Lyon Part-Dieu in France (Example 4 ) and Scharnhauserpark in Stuttgart in Germany (Example 5 ).

Example 4 [N]: So, Part-Dieu is a transformation of the 1970s, to equip [Lyon] with a Central Business District like almost all Western cities, following an encompassing regional plan. This is however not local planning, but it is part of a major national policy. (…) To counterbalance the macrocephaly of Paris, 8 big metropolises were identified to re-balance territorial development at the national level in the face of Paris. (…) including Lyon. (…) The genesis of Part-Dieu is, in my opinion, a real-estate opportunity, and the fact to have military barracks in an area (…) 15 min away from the city centre (…) to reconvert in a business district. Footnote 11
Example 5 [N]: When the American Army left the site in 1992, the city of Ostfildern consisted of five villages. They bought the site and they said, “We plan and build a new centre for our village”, because these are five villages and this is in the very centre. It’s perfectly located, and when they started they had 30,000 inhabitants and now that it’s finished, they have 40,000, so a third of the population were added in the last 20 years by this project. For a small municipality like Ostfildern, it was a tremendous effort and they were pretty good at it. Footnote 12

In the examples above, Lyon Part-Dieu and Scharnhauserpark are unique cases that developed into areas with different functions (a business district and a mixed-use area, respectively), but we can identify a similar event: the unforeseen dismantling of military barracks. Both events were considered external factors, precisely identifiable in time, that triggered the redevelopment of the areas. Instead, in the following illustration about the “Confluence” urban renewal in Lyon, the identified external event relates to a global trend regarding post-industrial cities and the “patchwork” replacement of functions in urban areas:

Example 6 [N]: The Confluence district (…) the wholesale market dismantles and opens an opportunity at the south of the Presqu'Île, so an area extremely well located, we are in the city centre, with water all around because of the Saône and Rhône rivers, so offering a great potential for a high quality of life. However, I say “potential” because there is also a highway passing at the boundary of the neighbourhood. Footnote 13

Although our theoretical framework identified a set of exogenous factors affecting large-scale urban transformations locally, we used the empirical material from our interviews to conceptualise the closing of military barracks and the dismantling of the wholesale market as two different, but similar, types of external events, and considered them to be part of the same “external events” condition. In set-theoretic terms, this condition is defined as the “set of projects where external (unforeseen) events or general/international trends had a large impact on project implementation”. The broader set conceptualisation of this condition is possibly not optimal, as it reflects the tension in comparative research to find a balance between capturing cases’ individual histories (case idiosyncrasies) and concepts that are abstract “enough” to account for cross-case patterns (see Gerrits and Verweij 2018 ; Jopke and Gerrits 2019 ). This is a key challenge of the analytical move.

However, the core of the subsequent membership move is precisely to perform a qualitative assessment that captures these differences by assigning different set-membership values. The case of Lyon Confluence, where the closing of the wholesale market did happen as an external event but only had a “general” influence on the area’s redevelopment, was given a set-membership value of 0.33 in this condition. In contrast, the case of Lyon Part-Dieu was given a set-membership score of 0.67 in the condition “external events” because a French military area was dismantled, but this was combined with a national strategy of the French state to redistribute territorial development across France. According to our analysis of the collected qualitative material, the dismantling of the military area was an advantage, but the redevelopment of Part-Dieu would probably have been affected by the overall national territorial strategy anyway. Footnote 14 Finally, the case of Stuttgart Scharnhauserpark was given full membership (1.00) in the condition, because the US army left the area, an indication of a “fully exogenous” event that truly stimulated urban change in Scharnhauserpark. Footnote 15
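As a purely illustrative sketch, the four-value fuzzy-set scheme behind these scores can be written down in code. The case names and membership values come from the examples above; the helper function and its 0.5 crossover point follow standard fuzzy-set QCA conventions, but the code itself is our illustration, not the authors' tooling:

```python
# Illustrative only: the four-value fuzzy-set scheme and the membership
# scores assigned in Examples 4-6. Helper names are ours, not the authors'.
FOUR_VALUE_SCHEME = {0.0, 0.33, 0.67, 1.0}

# Membership in the condition "external events" for the three cases
external_events = {
    "Lyon Confluence": 0.33,            # event happened, only a general influence
    "Lyon Part-Dieu": 0.67,             # event combined with a national strategy
    "Stuttgart Scharnhauserpark": 1.0,  # fully exogenous event (US army left)
}

def is_more_in_than_out(membership: float) -> bool:
    """In fuzzy-set QCA, a case is 'more in than out' of a set when its
    membership exceeds the 0.5 crossover point."""
    return membership > 0.5

assert set(external_events.values()) <= FOUR_VALUE_SCHEME
print([case for case, m in external_events.items() if is_more_in_than_out(m)])
# prints ['Lyon Part-Dieu', 'Stuttgart Scharnhauserpark']
```

The numeric values carry the cross-case comparison; as the discussion above stresses, they do not replace the case-specific meaning recorded in the interview material.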

Our calibration (membership move) of the three cases illustrated in Examples 4, 5 and 6 shows that set-membership values represent a concept, at times a relatively broad one to allow comparison (analytical move), but they do not replace the specific way (or “meaning”) in which the impact of external factors empirically instantiates in each of the cases discussed in the above examples.

In the interpretative spiral (Fig. 1), there is hence, despite our wishes, no perfect correspondence between meanings and numbers (quantitising) and numbers and meanings (qualitising; see Sandelowski et al. 2009). This is a consequence of the constructed nature of social data (see Sect. 2). When using qualitative data, fuzzy sets are “interpretive tools” to operationalise theoretical concepts (Ragin 2000:162, original emphasis) and hence are approximations to reality. In other words, set-membership values are tokens. Here, we agree with Sandelowski et al. (2009), who are critical of “the rhetorical appeal of numbers” (p. 208) and the vagaries of ordinal categories in questionnaires (p. 211ff).

Note that calibration using qualitative data is not blurry or unreliable. On the contrary, its robustness is given by the quality of the dialogue established between researcher and interviewee, and by the acknowledgement that the analytical and membership moves are types of representation, as fourth and fifth representation loops. QCA researchers using qualitative data may hence have a different research experience of QCA as a research approach and method than QCA researchers using quantitative data.

5 Conclusion

In this study, we critically observed that, so far, qualitative data have been used in few QCA studies, and only a handful use narrative interviews (de Block and Vis 2018). This situation is puzzling because qualitative research methods can offer an effective route to gain access to in-depth case knowledge, or case intimacy, considered key to performing QCA.

Besides the higher malleability of quantitative data for set conceptualisation and calibration (here called the “analytical” and “membership” moves), we claimed that the limited use of qualitative data in applied QCA research depends on the failure to recognise that the data collection process is a constituent part of QCA “as a research approach”. Qualitative data, such as interviews, focus groups or documents, come in verbal form, hence less “ready” for calibration than quantitative data, and require a research phase of their own for data collection (here called the “relational move”). The relational, analytical and membership moves form an “interpretative spiral” that hence accounts for the main research phases composing QCA “as a research approach”.

In the relational move, we showed how researchers can gain access to in-depth case-based knowledge, or case intimacy, by adopting a “dialogical” interviewing style (La Mendola 2009). First, researchers should be aware of the discrepancy between the interviewee/narrator’s representation and the interviewer/listener’s. Second, researchers should establish an “I-thou” relationship with the narrator (Buber 1923/2010; La Mendola 2009). As in a dancing couple, the interviewer/listener should accompany, but not lead, the narrator in the unfolding of her story. These are fundamental routes to make the most of QCA’s qualitative potential as a “close dialogue with cases” (Ragin 2014:81).

In the analytical and membership moves, researchers code, structure and interpret their data to assign crisp- and fuzzy-set membership values. We examined the variety of templates, what we call Supports for Membership Representation (SMeRs), designed by QCA researchers to facilitate the assignment of “numbers” to “words” (Rihoux and Lobe 2009; Basurto and Speer 2012; Legewie 2017; Tóth et al. 2015, 2017; Jopke and Gerrits 2019).
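The cited templates differ in their details, but they share the idea of anchoring each membership value to an explicit qualitative criterion. A minimal sketch of what such a support could look like, with hypothetical anchor wording of our own (none of it is taken from the cited templates):

```python
from dataclasses import dataclass

# Hypothetical Support for Membership Representation (SMeR): each fuzzy-set
# value is anchored to an explicit qualitative criterion. The wording of the
# criteria is illustrative only, not drawn from any published template.
@dataclass(frozen=True)
class Anchor:
    membership: float  # fuzzy-set membership value
    criterion: str     # qualitative evidence required for this value

SMER_EXTERNAL_EVENTS = [
    Anchor(1.00, "a fully exogenous event clearly triggered the project"),
    Anchor(0.67, "an external event mattered, but combined with other drivers"),
    Anchor(0.33, "an external event occurred, with only a general influence"),
    Anchor(0.00, "no external event is identifiable in the material"),
]

def lookup(membership: float) -> str:
    """Return the qualitative anchor that justifies a given membership value."""
    for anchor in SMER_EXTERNAL_EVENTS:
        if anchor.membership == membership:
            return anchor.criterion
    raise ValueError(f"no anchor defined for membership {membership}")
```

Making the anchors explicit in this way is what allows other researchers to audit how “words” were turned into “numbers” during calibration.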

Our study did not offer an overarching examination of the research process involved in QCA, but critically focussed on a specific aspect of QCA as a research approach: the “translation” of data collected through qualitative research methods (“words” and “meanings”) into set-membership values (“numbers”). Hence, the discussion of QCA as a method has been limited in this study.

We hope our paper has been a first contribution to identifying and critically examining the “qualitative” character of QCA as a research approach. Further research could identify other relevant moves in QCA as a research approach, especially when non-numerical data are employed and regarding internal and external validity. Other moves and steps could also be identified or clearly labelled in QCA as a method, in particular when assessing limited diversity, skewness (e.g. a “data distribution” step) and the management of true logical contradictions (e.g. a “solving contradictions” move). These are all different mo(ve)ments in the full-fledged application of QCA that allow researchers to make sense of their data and to connect “theory” and “evidence”.

Notes

As also noted by de Block and Vis (2018), QCA researchers are not always clear about what exactly they mean by “in-depth” or “open” interviews and how these informed the calibration process (e.g. Verweij and Gerrits 2015), especially when quantitative data and different coders were also used (e.g. Chai and Schoon 2016).

See online appendix.

We are aware that other studies combining narrative interviews and QCA have been carried out, but here we limit our discussion only to already published articles that we are aware of at the time of writing.

Without going into further detail here, the term “dialogical” explicitly refers to the “dialogical epistemology” discussed by Buber (1923/2010), who distinguishes between an “I-thou” relation and an “I-it” experience. In this perspective, “dialogical” is considered a synonym of “relational” (i.e. an “I-thou” relation).

See footnote 4.

The interviewer avoids posing evaluative and typifying questions to the narrator, but the former naturally works through evaluative and typifying research questions.

Copenhagen, Interview 5, September 1, 2016.

Barcelona, Interview 1, June 27, 2016. Translated from the original Spanish.

We take the risk of quoting Silverman (2017), although in his article he warned about extracting and using quotes to support researchers' arguments.

Gerrits and Verweij ( 2018 ) also emphasise the usefulness of thick case descriptions.

Lyon, Interview 4, October 13, 2016. Translated from the original French.

Stuttgart, Interview 1, July 18, 2016.

Lyon, Interview 1, October 11, 2016. Translated from the original French.

This consideration also relates to the interdependence, and not necessarily independence, of conditions in QCA, which is a topic that is beyond the scope of this study (see e.g. Jopke and Gerrits 2019 ).

For a discussion regarding the “absence” of possible factors from the interviewees' narrations, we refer readers to Sandelowski et al. ( 2009 ) and de Block and Vis ( 2018 ). In general, data triangulation is a good strategy to deal with partial and even contradictory information collected from multiple interviewees. For our own strategy regarding data triangulation, we also used an online questionnaire, additional literature and site visits (Pagliarin et al. 2019 ).

References

Basurto, X., Speer, J.: Structuring the calibration of qualitative data as sets for qualitative comparative analysis (QCA). Field Methods 24, 155–174 (2012)


Becker, H.S.: Tricks of the Trade: How to Think About Your Research While You’re Doing It. University Press, Chicago (1998)


Brinkmann, S.: Unstructured and Semistructured Interviewing. In: Leavy, P. (ed.) The Oxford Handbook of Qualitative Research, pp. 277–300. University Press, Oxford (2014)


Buber, M.: I and Thou. Charles Scribner’s Sons, New York (1923/2010)

Byrne, D.: Complexity configurations and cases. Theory Cult. Soc. 22 (5), 95–111 (2005)

Chai, Y., Schoon, M.: Institutions and government efficiency: decentralized irrigation management in China. Int. J. Commons 10(1), 21–44 (2016)

Chase, S.E.: Narrative inquiry: multiple lenses, approaches, voices. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 631–679. Sage, Thousand Oaks, CA (2005)

Collier, D.: Symposium: The Set-Theoretic Comparative Method—Critical Assessment and the Search for Alternatives. SSRN Scholarly Paper ID 2463329, Social Science Research Network, Rochester, NY (2014). Available at: https://papers-ssrn-com.eur.idm.oclc.org/abstract=2463329 (Accessed 9 March 2021)

de Block, D., Vis, B.: Addressing the challenges related to transforming qualitative into quantitative data in qualitative comparative analysis. J. Mixed Methods Res. 13 , 503–535 (2018). https://doi.org/10.1177/1558689818770061

Fischer, M.: Institutions and coalitions in policy processes: a cross-sectoral comparison. J. Publ. Policy 35 , 245–268 (2015)

Geertz, C.: The Interpretation of Cultures. Basic Books, New York (1973)

Gerrits, L., Verweij, S.: The Evaluation of Complex Infrastructure Projects. A Guide to Qualitative Comparative Analysis. Edward Elgar, Cheltenham UK (2018)

Goertz, G.: Social Science Concepts. A User’s Guide. University Press, Princeton (2006)

Greckhamer, T., Misangyi, V.F., Fiss, P.C.: The two QCAs: from a small-N to a large-N set-theoretic approach. In: Fiss, P.C., Cambré, B., Marx, A. (eds.) Configurational Theory and Methods in Organizational Research (Research in the Sociology of Organizations, Vol. 38), pp. 49–75. Emerald Group Publishing Limited, Bingley (2013). https://doi.org/10.1108/S0733-558X(2013)0000038007

Guba, E.G., Lincoln, Y.S.: Paradigmatic controversies, contradictions and emerging confluences. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 191–215. Sage, Thousand Oaks, CA (2005)

Harvey, D.L.: Complexity and case. In: Byrne, D., Ragin, C.C. (eds.) The SAGE Handbook of Case-Based Methods, pp. 15–38. SAGE Publications Inc, London (2009)


Henik, E.: Understanding whistle-blowing: a set-theoretic approach. J. Bus. Res. 68 , 442–450 (2015)

Jopke, N., Gerrits, L.: Constructing cases and conditions in QCA – lessons from grounded theory. Int. J. Soc. Res. Methodol. 22(6), 599–610 (2019). https://doi.org/10.1080/13645579.2019.1625236

La Mendola, S.: Centrato e Aperto: dare vita a interviste dialogiche [Centred and Open: Give life to dialogical interviews]. UTET Università, Torino (2009)

Latour, B.: When things strike back: a possible contribution of ‘science studies’ to the social sciences. Br. J. Sociol. 51 , 107–123 (2000)

Leech, B.L.: Asking questions: Techniques for semistructured interviews. Polit. Sci. Polit. 35 , 665–668 (2002)

Legard, R., Keegan, J., Ward, K.: In-depth interviews. In: Richie, J., Lewis, J. (eds.) Qualitative Research Practice, pp. 139–168. Sage, London (2003)

Legewie, N.: Anchored Calibration: From qualitative data to fuzzy sets. In: Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 18 (3), 14 (2017). https://doi.org/10.17169/fqs-18.3.2790

Lucas, S.R., Szatrowski, A.: Qualitative comparative analysis in critical perspective. Sociol. Methodol. 44 (1), 1–79 (2014)

Metelits, C.M.: The consequences of rivalry: explaining insurgent violence using fuzzy sets. Polit. Res. q. 62 , 673–684 (2009)

Pagliarin, S., Hersperger, A.M., Rihoux, B.: Implementation pathways of large-scale urban development projects (lsUDPs) in Western Europe: a qualitative comparative analysis (QCA). Eur. Plan. Stud. 28 , 1242–1263 (2019). https://doi.org/10.1080/09654313.2019.1681942

Ragin, C.C.: The Comparative Method. Moving Beyond Qualitative and Quantitative Strategies. University of California Press, Berkeley and Los Angeles (1987)

Ragin, C.C.: Fuzzy-Set Social Science. University Press, Chicago (2000)

Ragin, C.C.: Redesigning Social Inquiry. Fuzzy Sets and Beyond. University Press, Chicago (2008a)

Ragin, C.C.: Fuzzy sets: calibration versus measurement. In: Collier, D., Brady, H., Box-Steffensmeier, J. (eds.) Methodology Volume of Oxford Handbooks of Political Science, pp. 174–198. University Press, Oxford (2008b)

Ragin, C.C.: Comment: Lucas and Szatrowski in Critical Perspective. Sociol. Methodol. 44 (1), 80–94 (2014)

Rapley, T.J.: The art(fulness) of open-ended interviewing: some considerations on analysing interviews. Qual. Res. 1(3), 303–323 (2001)

Rihoux, B., Ragin, C. (eds.): Configurational Comparative Methods. Qualitative Comparative Analysis (QCA) and related Techniques. Sage, Thousand Oaks, CA (2009)

Rihoux, B., Lobe, B.: The case for qualitative comparative analysis (QCA): adding leverage for thick cross-case comparison. In: Byrne, D., Ragin, C.C. (eds.) The SAGE Handbook of Case-Based Methods, pp. 222–242. SAGE Publications Inc, London (2009)

Roulston, K.: Qualitative interviewing and epistemics. Qual. Res. 18 (3), 322–341 (2018)

Sandelowski, M., Voils, C.I., Knafl, G.: On quantitizing. J. Mixed Methods Res. 3 , 208–222 (2009)

Sayer, A.: Method in Social Science. A Realist Approach. Routledge, London (1992)

Schneider, C.Q., Wagemann, C.: Set-Theoretic Methods for the Social Sciences. A Guide to Qualitative Comparative Analysis. University Press, Cambridge (2012)

Silverman, D.: How was it for you? The Interview Society and the irresistible rise of the (poorly analyzed) interview. Qual. Res. 17 (2), 144–158 (2017)

Spencer, R., Pryce, J.M., Walsh, J.: Philosophical approaches to qualitative research. In: Leavy, P. (ed.) The Oxford Handbook of Qualitative Research, pp. 81–98. University Press, Oxford (2014)

Spradley, J.P.: The ethnographic interview. Holt Rinehart and Winston, New York (1979)

Stake, R.E.: Qualitative case studies. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 443–466. Sage, Thousand Oaks, CA (2005)

Thomann, E., Maggetti, M.: Designing research with qualitative comparative analysis (QCA): approaches, challenges, and tools. Sociol. Methods Res. 49 (2), 356–386 (2017)

Tóth, Z., Thiesbrummel, C., Henneberg, S.C., Naudé, P.: Understanding configurations of relational attractiveness of the customer firm using fuzzy set QCA. J. Bus. Res. 68 (3), 723–734 (2015)

Tóth, Z., Henneberg, S.C., Naudé, P.: Addressing the ‘qualitative’ in fuzzy set qualitative comparative analysis: the generic membership evaluation template. Ind. Mark. Manage. 63 , 192–204 (2017)

Verweij, S., Gerrits, L.M.: How satisfaction is achieved in the implementation phase of large transportation infrastructure projects: a qualitative comparative analysis into the A2 tunnel project. Public Works Manag. Policy 20, 5–28 (2015)

Wang, W.: Exploring the determinants of network effectiveness: the case of neighborhood governance networks in Beijing. J. Public Adm. Res. Theory 26 , 375–388 (2016)


Acknowledgements

The authors would like to thank the two reviewers who provided great insights and careful remarks, thus allowing us to improve the quality of the manuscript. During a peer-review process lasting more than two years, we intensely felt the pushes and slowdowns, and at times the impasses, of a fruitful dialogue on the qualitative and quantitative aspects of comparative analysis in the social sciences.

Open Access funding enabled and organized by Projekt DEAL. This research has been partially funded through the Consolidator Grant (ID: BSCGIO 157789), held by Prof. h. c. Dr. Anna M. Hersperger, provided by the Swiss National Science Foundation.

Author information

Authors and Affiliations

Utrecht University School of Governance, Utrecht University, Utrecht, The Netherlands

Barbara Vis

Department of Philosophy, Sociology, Pedagogy and Applied Psychology, Padua University, Padua, Italy

Salvatore La Mendola

Chair for the Governance of Complex and Innovative Technological Systems, Otto-Friedrich-Universität Bamberg, Bamberg, Germany

Sofia Pagliarin

Landscape Ecology Research Unit, CONCUR Project, Swiss Federal Research Institute WSL, Birmensdorf, Zurich, Switzerland


Corresponding author

Correspondence to Sofia Pagliarin .

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest to report.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 21 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Pagliarin, S., La Mendola, S. & Vis, B. The “qualitative” in qualitative comparative analysis (QCA): research moves, case-intimacy and face-to-face interviews. Qual Quant 57 , 489–507 (2023). https://doi.org/10.1007/s11135-022-01358-0


Accepted : 20 February 2022

Published : 26 March 2022

Issue Date : February 2023

DOI : https://doi.org/10.1007/s11135-022-01358-0


Keywords

  • Calibration
  • Data generation
  • Interviewing
  • In-depth knowledge
  • Qualitative data
