
GenEd Writes: Writing Across the Disciplines at Harvard College

Comparative Analysis

What It Is and Why It's Useful

Comparative analysis asks writers to make an argument about the relationship between two or more texts. Beyond that, there's a lot of variation, but three overarching kinds of comparative analysis stand out:

  • Coordinate (A ↔ B): In this kind of analysis, two (or more) texts are read against each other in terms of a shared element, e.g., a memoir and a novel, both by Jesmyn Ward; two sets of data for the same experiment; a few op-ed responses to the same event; two YA books written in Chicago in the 2000s; a film adaptation of a play; etc.
  • Subordinate (A → B) or (B → A): Using a theoretical text as a "lens" to explain a case study or work of art (e.g., how Anthony Jack's The Privileged Poor can help explain divergent experiences among students at elite four-year private colleges who are coming from similar socio-economic backgrounds), or using a work of art or case study as a "test" of a theory's usefulness or limitations (e.g., using coverage of recent incidents of gun violence or legislation in the U.S. to confirm or question the currency of Carol Anderson's The Second).
  • Hybrid [A → (B ↔ C)] or [(B ↔ C) → A], i.e., using coordinate and subordinate analysis together. For example, using Jack to compare or contrast the experiences of students at elite four-year institutions with students at state universities and/or community colleges; or looking at gun culture in other countries and/or other timeframes to contextualize or generalize Anderson's main points about the role of the Second Amendment in U.S. history.

"In the wild," these three kinds of comparative analysis represent increasingly complex—and scholarly—modes of comparison. Students can of course compare two poems in terms of imagery or two data sets in terms of methods, but in each case the analysis will eventually be richer if the students have had a chance to encounter other people's ideas about how imagery or methods work. At that point, we're getting into a hybrid kind of reading (or even into research essays), especially if we start introducing different approaches to imagery or methods that are themselves being compared along with a couple (or few) poems or data sets.

Why It's Useful

In the context of a particular course, each kind of comparative analysis has its place and can be a useful step up from single-source analysis. Intellectually, comparative analysis helps overcome the "n of 1" problem that can face single-source analysis. That is, a writer drawing broad conclusions about the influence of the Iranian New Wave based on one film is relying entirely—and almost certainly too much—on that film to support those findings. In the context of even just one more film, though, the analysis is suddenly more likely to arrive at one of the best features of any comparative approach: both films will be more richly experienced than they would have been in isolation, and the themes or questions through which they're being explored (here the general question of the influence of the Iranian New Wave) will arrive at conclusions that are less at risk of oversimplification.

For scholars working in comparative fields or through comparative approaches, these features of comparative analysis animate their work. To borrow from a stock example in Western epistemology, our concept of "green" isn't based on a single encounter with something we intuit or are told is "green." Not at all. Our concept of "green" is derived from a complex set of experiences of what others say is green or what's labeled green or what seems to be something that's neither blue nor yellow but kind of both, etc. Comparative analysis essays offer us the chance to engage with that process—even if only enough to help us see where a more in-depth exploration with a higher and/or more diverse "n" might lead—and in that sense, from the standpoint of the subject matter students are exploring through writing as well as the complexity of the genre of writing they're using to explore it, comparative analysis forms a bridge of sorts between single-source analysis and research essays.

Typical learning objectives for single-source essays: formulate analytical questions and an arguable thesis, establish stakes of an argument, summarize sources accurately, choose evidence effectively, analyze evidence effectively, define key terms, organize argument logically, acknowledge and respond to counterargument, cite sources properly, and present ideas in clear prose.

Common types of comparative analysis essays and related types: two works in the same genre; two works from the same period (but from different places or cultures); a work adapted into a different genre or medium; two theories treating the same topic; a theory and a case study or other object; etc.

How to Teach It: Framing + Practice

Framing multi-source writing assignments (comparative analysis, research essays, multi-modal projects) is likely to overlap a great deal with "Why It's Useful" (see above), because the range of reasons why we might use these kinds of writing in academic or non-academic settings is itself the reason why they so often appear later in courses. In many courses, they're the best vehicles for exploring the complex questions that arise once we've been introduced to the course's main themes, core content, leading protagonists, and central debates.

For comparative analysis in particular, it's helpful to frame the assignment's process and how it will help students successfully navigate the challenges and pitfalls presented by the genre. Ideally, this will mean students have time to identify what each text seems to be doing, take note of apparent points of connection between different texts, and start to imagine how those points of connection (or the absence thereof)

  • complicates or upends their own expectations or assumptions about the texts
  • complicates or refutes the expectations or assumptions about the texts presented by a scholar
  • confirms and/or nuances expectations and assumptions they themselves hold or scholars have presented
  • presents entirely unforeseen ways of understanding the texts

—and all with implications for the texts themselves or for the axes along which the comparative analysis took place. If students know that this is where their ideas will be heading, they'll be ready to develop those ideas and engage with the challenges that comparative analysis presents in terms of structure (See "Tips" and "Common Pitfalls" below for more on these elements of framing).

Like single-source analyses, comparative essays have several moving parts, and giving students practice here means adapting the sample sequence laid out on the "Formative Writing Assignments" page. Three areas that have already been mentioned above are worth noting:

  • Gathering evidence: Depending on what your assignment is asking students to compare (or in terms of what), students will benefit greatly from structured opportunities to create inventories or data sets of the motifs, examples, trajectories, etc., shared (or not shared) by the texts they'll be comparing. See the sample exercises below for a basic example of what this might look like.
  • Why it Matters: Moving beyond "x is like y but also different," or even "x is more like y than we might think at first," is what moves an essay from being "compare/contrast" to being a comparative analysis. It's also a move that can be hard to make and that will often evolve over the course of an assignment. A great way to get feedback from students about where they are on this front? Ask them to start considering early on why their argument "matters" to different kinds of imagined audiences (while they're just gathering evidence) and again as they develop their thesis and again as they're drafting their essays. (Cover letters, for example, are a great place to ask writers to imagine how a reader might be affected by reading their argument.)
  • Structure: Having two texts on stage at the same time can suddenly feel a lot more complicated for any writer who's used to having just one at a time. Giving students a sense of the most common patterns (AAA/BBB, ABABAB, etc.) can help them imagine, even if provisionally, how their argument might unfold over a series of pages. See "Tips" and "Common Pitfalls" below for more information on this front.

Sample Exercises and Links to Other Resources

  • Common Pitfalls
  • Advice on Timing

Tips

  • Try to keep students from thinking of a proposed thesis as a commitment. Instead, help them see it as more of a hypothesis that has emerged out of readings, discussion, and analytical questions, and that they'll now test through an experiment, namely, writing their essay. When students see writing as part of the process of inquiry—rather than just the result—and when that process is committed to acknowledging and adapting itself to evidence, it makes writing assignments more scientific, more ethical, and more authentic.
  • Have students create an inventory of touch points between the two texts early in the process.
  • Ask students to make the case—early on and at points throughout the process—for the significance of the claim they're making about the relationship between the texts they're comparing.
Common Pitfalls

  • For coordinate kinds of comparative analysis, a common pitfall is tied to thesis and evidence: a thesis that tells the reader there are "similarities and differences" between two texts, without telling the reader why it matters that these two texts have or don't have these particular features in common. This kind of thesis is stuck at the level of description or positivism, and it's not uncommon when a writer is grappling with the complexity that can in fact accompany the "taking inventory" stage of comparative analysis. The solution is to make the "taking inventory" stage part of the process of the assignment. When this stage comes before students have formulated a thesis, the thesis can then emerge out of a comparative data set, rather than the data set being assembled to fit a preexisting thesis (which can lead to confirmation bias, or frequency illusion, or—just for the sake of streamlining the process of gathering evidence—cherry-picking).
  • For subordinate kinds of comparative analysis, a common pitfall is tied to how much weight is given to each source. Having students apply a theory (in a "lens" essay) or weigh the pros and cons of a theory against case studies (in a "test a theory" essay) can be a great way to help them explore the assumptions, implications, and real-world usefulness of theoretical approaches. The pitfall of these approaches is that they can quickly lead to the same biases noted above. Making sure students know they should engage with counterevidence and counterargument, and that "lens" and "test a theory" approaches often balance each other out in any real-world application of theory, is a good way to get out in front of this pitfall.
  • For any kind of comparative analysis, a common pitfall is structure. Every comparative analysis asks writers to move back and forth between texts, and that can pose a number of challenges, including what pattern the back-and-forth should follow and how to use transitions and other signposting to make sure readers can follow the overarching argument as the back-and-forth takes place. Here's some advice from an experienced writing instructor to students about how to think about these considerations:

a quick note on STRUCTURE

     Most of us have encountered the question of whether to adopt what we might term the “A→A→A→B→B→B” structure or the “A→B→A→B→A→B” structure. Do we make all of our points about text A before moving on to text B? Or do we go back and forth between A and B as the essay proceeds? As always, the answers to our questions about structure depend on our goals in the essay as a whole. In a “similarities in spite of differences” essay, for instance, readers will need to encounter the differences between A and B before we offer them the similarities (A_d → B_d → A_s → B_s, where d marks differences and s marks similarities). If, rather than subordinating differences to similarities, you are subordinating text A to text B (using A as a point of comparison that reveals B’s originality, say), you may be well served by the “A→A→A→B→B→B” structure.

     Ultimately, you need to ask yourself how many “A→B” moves you have in you. Is each one identical? If so, you may wish to make the transition from A to B only once (“A→A→A→B→B→B”), because if each “A→B” move is identical, the “A→B→A→B→A→B” structure will appear to involve nothing more than directionless oscillation and repetition. If each is increasingly complex, however—if each AB pair yields a new and progressively more complex idea about your subject—you may be well served by the “A→B→A→B→A→B” structure, because in this case it will be visible to readers as a progressively developing argument.

Advice on Timing

As we discussed in "Advice on Timing" on the page for single-source analysis, that timeline roughly follows the "Sample Sequence of Formative Assignments for a 'Typical' Essay" outlined under "Formative Writing Assignments," and it spans about 5–6 steps or 2–4 weeks.

Comparative analysis assignments have a lot of the same DNA as single-source essays, but they potentially bring more reading into play and ask students to engage in more complicated acts of analysis and synthesis during the drafting stages. With that in mind, closer to 4 weeks is probably a good baseline for many comparative analysis assignments. For sections that meet once per week, the timeline will probably need to expand—ideally—a little past the 4-week mark, or some of the steps will need to be combined or done asynchronously.

What It Can Build Up To

Comparative analyses can build up to other kinds of writing in a number of ways. For example:

  • They can build toward other kinds of comparative analysis, e.g., students can be asked to choose an additional source to complicate their conclusions from a previous analysis, or they can be asked to revisit an analysis using a different axis of comparison, such as race instead of class. (These approaches are akin to moving from a coordinate or subordinate analysis to more of a hybrid approach.)
  • They can scaffold up to research essays, which in many instances are an extension of a "hybrid comparative analysis."
  • Like single-source analysis, in a course where students will take a "deep dive" into a source or topic for their capstone, they can allow students to "try on" a theoretical approach or genre or time period to see if it's indeed something they want to research more fully.

What is comparative analysis? A complete guide

Last updated: 18 April 2023 · Reviewed by Jean Kaluza

Comparative analysis is a valuable tool for acquiring deep insights into your organization’s processes, products, and services so you can continuously improve them. 

Similarly, if you want to streamline, price appropriately, and ultimately be a market leader, you’ll likely need to draw on comparative analyses quite often.

When faced with multiple options or solutions to a given problem, a thorough comparative analysis can help you compare and contrast your options and make a clear, informed decision.

If you want to get up to speed on conducting a comparative analysis or need a refresher, here’s your guide.


What exactly is comparative analysis?

A comparative analysis is a side-by-side comparison that systematically compares two or more things to pinpoint their similarities and differences. The focus of the investigation might be conceptual—a particular problem, idea, or theory—or perhaps something more tangible, like two different data sets.

For instance, you could use comparative analysis to investigate how your product features measure up to the competition.

After a successful comparative analysis, you should be able to identify strengths and weaknesses and clearly understand which product is more effective.

You could also use comparative analysis to examine different methods of producing that product and determine which way is most efficient and profitable.

The potential applications for using comparative analysis in everyday business are almost unlimited. That said, a comparative analysis is most commonly used to examine:

  • Emerging trends and opportunities (new technologies, marketing)
  • Competitor strategies
  • Financial health
  • Effects of trends on a target audience


Why is comparative analysis so important?

Comparative analysis can help narrow your focus so your business pursues the most meaningful opportunities rather than attempting dozens of improvements simultaneously.

A comparative approach also helps frame data to illuminate interrelationships. For example, comparative research might reveal nuanced relationships or critical contexts behind specific processes or dependencies that wouldn't be well understood without the research.

For instance, if your business compares the cost of producing several existing products relative to which ones have historically sold well, that should provide helpful information once you’re ready to look at developing new products or features.

Comparative vs. competitive analysis—what’s the difference?

Comparative analysis is generally divided into three subtypes, using quantitative or qualitative data and then extending the findings to a larger group. These include:

  • Pattern analysis—identifying patterns or recurrences of trends and behavior across large data sets.
  • Data filtering—analyzing large data sets to extract an underlying subset of information. It may involve rearranging, excluding, and apportioning comparative data to fit different criteria.
  • Decision tree—flowcharting to visually map and assess potential outcomes, costs, and consequences.
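
To make "data filtering" concrete, here is a minimal sketch in Python with pandas; the data set and column names are hypothetical, not drawn from any particular product:

```python
# A minimal sketch of data filtering: extracting a comparable subset
# from a larger data set. The data and column names are hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "region": ["north", "south", "north", "west"],
    "channel": ["online", "retail", "online", "online"],
    "revenue": [1200, 800, 1500, 950],
})

# Extract the subset of interest (online sales only) ...
online = sales[sales["channel"] == "online"]

# ... then aggregate it so regions can be compared side by side.
print(online.groupby("region")["revenue"].sum().sort_values(ascending=False))
```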

In contrast, competitive analysis is a type of comparative analysis in which you deeply research one or more of your industry competitors. In this case, you’re using qualitative research to explore what the competition is up to across one or more dimensions.

For example:

  • Service delivery—metrics like Net Promoter Score indicate customer satisfaction levels.
  • Market position—the share of the market that the competition has captured.
  • Brand reputation—how well-known or recognized your competitors are within their target market.

Tips for optimizing your comparative analysis

Conduct original research

Thorough, independent research is a significant asset when doing comparative analysis. It provides evidence to support your findings and may present a perspective or angle not considered previously. 

Make analysis routine

To get the maximum benefit from comparative research, make it a regular practice, and establish a cadence you can realistically stick to. Some business areas you could plan to analyze regularly include:

  • Profitability
  • Competition

Experiment with controlled and uncontrolled variables

In addition to simply comparing and contrasting, explore how different variables might affect your outcomes.

For example, controllable variables include offering a seasonal feature, like a shopping bot to assist with holiday shopping, or raising or lowering a product's selling price.

Uncontrollable variables include weather, changing regulations, the current political climate, or global pandemics.

Put equal effort into each point of comparison

Most people enter into comparative research with a particular idea or hypothesis already in mind to validate. For instance, you might be trying to prove that launching a new service is worthwhile, so you may be disappointed if your analysis results don't support your plan.

However, in any comparative analysis, try to maintain an unbiased approach by spending equal time debating the merits and drawbacks of each decision. Ultimately, this will be a more practical and sustainable long-term approach for your business than focusing only on the evidence that favors your preferred argument or strategy.

Writing a comparative analysis in five steps

To put together a coherent, insightful analysis that goes beyond a list of pros and cons or similarities and differences, try organizing the information into these five components:

1. Frame of reference

Here is where you provide context. First, what driving idea or problem is your research anchored in? Then, for added substance, cite existing research or insights from a subject matter expert, such as a thought leader in marketing, startup growth, or investment.

2. Grounds for comparison

Why have you chosen to examine the two things you’re analyzing instead of focusing on two entirely different things? What are you hoping to accomplish?

3. Thesis

What argument or choice are you advocating for? What will be the before and after effects of going with either decision? What do you anticipate happening with and without this approach?

For example: “If we release an AI feature for our shopping cart, we will have an edge over the rest of the market before the holiday season.” The finished comparative analysis will weigh all the pros and cons of choosing to build the new, expensive AI feature, including variables like how “intelligent” it will be, what it “pushes” customers to use, how much work it takes off the plates of the customer service team, etc.

Ultimately, you will gauge whether building an AI feature is the right plan for your e-commerce shop.

4. Organize the scheme

Typically, there are two ways to organize a comparative analysis report. First, you can discuss everything about comparison point “A” and then go into everything about aspect “B.” Or, you can alternate back and forth between points “A” and “B,” sometimes referred to as point-by-point analysis.

Using the AI feature as an example again, you could first cover all the pros and cons of building the feature, then all the benefits and drawbacks of shopping without it. Or you could compare and contrast each aspect one at a time: a side-by-side comparison of the AI feature versus shopping without it on one point of differentiation, then proceeding to the next.

5. Connect the dots

Tie it all together in a way that either confirms or disproves your hypothesis.

For instance, “Building the AI bot would allow our customer service team to save 12% on returns in Q3 while offering optimizations and savings in future strategies. However, it would also increase the product development budget by 43% in both Q1 and Q2. Our budget for product development won’t increase again until series 3 of funding is reached, so despite its potential, we will hold off building the bot until funding is secured and more opportunities and benefits can be proved effective.”




How to Do Comparative Analysis in Research (Examples)

Comparative analysis is a method widely used in social science: comparing two or more items with the aim of uncovering and discovering new ideas about them. It often compares and contrasts social structures and processes around the world to grasp general patterns, and it tries to understand and explain every element of the data being compared.

Comparative Analysis in Social Science Research

We compare and contrast in our daily lives, so it is natural to compare and contrast cultures and human societies. We often hear that "our culture is better than theirs" or "their lifestyle is better than ours." In social science, researchers compare primitive, barbarian, civilized, and modern societies to understand and trace the evolutionary changes that happen to societies and their people. Comparison is used not only to understand evolutionary processes but also to identify the differences, changes, and connections between societies.

Most social scientists are involved in comparative analysis. As Macfarlane observed, comparisons in history are typically made across time, while in the other social sciences they are made predominantly across space. The historian takes their own society, compares it with past societies, and analyzes how far they differ from each other.

The comparative method of social research is a product of 19th-century sociology and social anthropology. Sociologists like Émile Durkheim, Herbert Spencer, and Max Weber used comparative analysis in their works. For example, Max Weber compared the Protestants of Europe with Catholics, and also compared them with followers of other religions like Islam, Hinduism, and Confucianism.

To do a systematic comparison, we need to attend to the different elements of the method.

1. The method of comparison

In social science, we can make comparisons in different ways, depending on the topic and the field of study. Émile Durkheim, for example, compared societies in terms of organic and mechanical solidarity. Durkheim provides us with three different approaches to the comparative method:

  • The first approach is to identify and select one particular society in a fixed period. By doing so, we can identify and determine the relationships, connections, and differences that exist within that society alone. We can examine its religious practices, traditions, laws, norms, etc.
  • The second approach is to consider various societies that have common or similar characteristics but vary in some ways. We can select societies from a specific period, or from different periods, that share common characteristics yet differ in others. For example, we can take European and American societies in the 20th century (which have broadly similar characteristics) and compare and contrast them in terms of law, custom, tradition, etc.
  • The third approach he envisaged is to take different societies from different times that may share some similar characteristics or may show revolutionary changes. For example, we can compare modern and primitive societies, which show us revolutionary social changes.

2. The unit of comparison

We cannot compare every aspect of society; there are many things that simply cannot be compared. The success of the comparative method depends on the unit or element we select to compare, because we can only compare things that have some attributes in common. For example, we can compare the family system in America with the family system in Europe, but we cannot compare food habits in China with the divorce rate in America. So the next thing to remember is to choose the unit of comparison with utmost care.

3. The motive of comparison

Comparative analysis is one method of study among many available to the social scientist. Researchers who adopt the comparative method must know on what grounds they are doing so; they have to consider its strengths, limitations, and weaknesses, and they must know how to carry out the analysis.

Steps of the comparative method

1. Setting up the unit of comparison

As mentioned earlier, the first step is to determine the unit of comparison for your study and to consider all of its dimensions. This is where you place the two things you need to compare so you can properly analyze them. It is not an easy step; it must be done systematically and scientifically, with proper methods and techniques. You have to define your objectives and variables, and make assumptions or form a hypothesis for your analysis.

The best frames of reference are built from explicit sources rather than your own musings or perceptions. To do that, you can select some attributes of the societies, such as marriage, law, customs, and norms; by doing this you can easily compare and contrast the two societies you selected for your study. You can pose questions like: Are the marriage practices of Catholics different from those of Protestants? Do men and women get an equal voice in their choice of mate? Set as many questions as you want, because they will help uncover the truth about that particular topic. A comparative analysis must have such attributes to study, and a social scientist who wishes to compare must develop the research questions that come to mind; a study without them will not be a fruitful one.

2. Grounds of comparison

The grounds of comparison should be clear to the reader. You must explain why you selected these particular units for your comparison; it is natural for a reader to ask why you chose this society and not another. If a social scientist chooses a primitive Asian society and a primitive Australian society for comparison, they must state the grounds of comparison for the readers. Your comparison must be self-explanatory, without complications.

If you choose two particular societies for your comparative analysis, you must convey to the reader what you intend by choosing them and the reason for their place in your analysis.

3. Report or thesis

The main element of the comparative analysis is the thesis or report. The report is the most important product, and it must contain your full frame of reference. It must include your research questions, the objectives of your topic, the characteristics of your two units of comparison, the variables in your study, and, last but not least, your findings and conclusion. The findings must be self-explanatory, because the reader must understand to what extent the units are connected and how they differ. For example, in his theory of the division of labour, Émile Durkheim distinguished organic solidarity from mechanical solidarity, associating mechanical solidarity with primitive society and organic solidarity with modern society. In the same way, you have to state your findings in the thesis.

4. Relationship and linking one to another

Your paper must link each point in the argument; without that, the reader cannot follow the logical and rational progress of your analysis. In a comparative analysis you need to relate 'x' and 'y' (the two units in your comparison), using connectives such as 'likewise,' 'similarly,' and 'on the contrary.' For example, comparing primitive and modern society, we can say that in primitive society the division of labour is based on gender and age; in modern society, on the contrary, it is based on a person's skill and knowledge.

Demerits of comparison

Comparative analysis is not always successful; it has some limitations. Its broad use can easily create the impression that it is a firmly established, smooth, and unproblematic method of investigation which, thanks to its apparently logical status, can produce reliable knowledge once certain technical preconditions are adequately met.

Perhaps the most fundamental issue concerns the independence of the units chosen for comparison. As different kinds of entities come to be analyzed, there is often an underlying and implicit assumption of their independence, and a quiet tendency to disregard the mutual influences among the units.

Another basic issue with broad ramifications concerns the choice of the units being analyzed. The main point is that, far from being an innocent or simple task, the choice of comparison units is a critical and tricky matter. The problem with this sort of comparison is that in such investigations the descriptions of the cases chosen for comparison with the principal one tend to become overly streamlined, shallow, and stylized, with distorted arguments and conclusions following as a result.

However, comparative analysis remains a strategy with exceptional benefits, chiefly its capacity to make us perceive the limits of our own minds and to guard against the weaknesses and harmful consequences of localism and provincialism. We may nevertheless have something to learn from historians' hesitation to use comparison, and from their respect for the uniqueness of contexts and the stories of peoples. Above all, by comparing we discover the underlying, undiscovered connections and differences that exist in society.



What is Comparative Analysis and How to Conduct It? (+ Examples)

Appinio Research · 30.10.2023 · 36 min read


Have you ever faced a complex decision, wondering how to make the best choice among multiple options? In a world filled with data and possibilities, the art of comparative analysis holds the key to unlocking clarity amidst the chaos.

In this guide, we'll demystify the power of comparative analysis, revealing its practical applications, methodologies, and best practices. Whether you're a business leader, researcher, or simply someone seeking to make more informed decisions, join us as we explore the intricacies of comparative analysis and equip you with the tools to chart your course with confidence.

What is Comparative Analysis?

Comparative analysis is a systematic approach used to evaluate and compare two or more entities, variables, or options to identify similarities, differences, and patterns. It involves assessing the strengths, weaknesses, opportunities, and threats associated with each entity or option to make informed decisions.

The primary purpose of comparative analysis is to provide a structured framework for decision-making by:

  • Facilitating Informed Choices: Comparative analysis equips decision-makers with data-driven insights, enabling them to make well-informed choices among multiple options.
  • Identifying Trends and Patterns: It helps identify recurring trends, patterns, and relationships among entities or variables, shedding light on underlying factors influencing outcomes.
  • Supporting Problem Solving: Comparative analysis aids in solving complex problems by systematically breaking them down into manageable components and evaluating potential solutions.
  • Enhancing Transparency: By comparing multiple options, comparative analysis promotes transparency in decision-making processes, allowing stakeholders to understand the rationale behind choices.
  • Mitigating Risks: It helps assess the risks associated with each option, allowing organizations to develop risk mitigation strategies and make risk-aware decisions.
  • Optimizing Resource Allocation: Comparative analysis assists in allocating resources efficiently by identifying areas where resources can be optimized for maximum impact.
  • Driving Continuous Improvement: By comparing current performance with historical data or benchmarks, organizations can identify improvement areas and implement growth strategies.

Importance of Comparative Analysis in Decision-Making

  • Data-Driven Decision-Making: Comparative analysis relies on empirical data and objective evaluation, reducing the influence of biases and subjective judgments in decision-making. It ensures decisions are based on facts and evidence.
  • Objective Assessment: It provides an objective and structured framework for evaluating options, allowing decision-makers to focus on key criteria and avoid making decisions solely based on intuition or preferences.
  • Risk Assessment: Comparative analysis helps assess and quantify risks associated with different options. This risk awareness enables organizations to make proactive risk management decisions.
  • Prioritization: By ranking options based on predefined criteria, comparative analysis enables decision-makers to prioritize actions or investments, directing resources to areas with the most significant impact.
  • Strategic Planning: It is integral to strategic planning, helping organizations align their decisions with overarching goals and objectives. Comparative analysis ensures decisions are consistent with long-term strategies.
  • Resource Allocation: Organizations often have limited resources. Comparative analysis assists in allocating these resources effectively, ensuring they are directed toward initiatives with the highest potential returns.
  • Continuous Improvement: Comparative analysis supports a culture of continuous improvement by identifying areas for enhancement and guiding iterative decision-making processes.
  • Stakeholder Communication: It enhances transparency in decision-making, making it easier to communicate decisions to stakeholders. Stakeholders can better understand the rationale behind choices when supported by comparative analysis.
  • Competitive Advantage: In business and competitive environments, comparative analysis can provide a competitive edge by identifying opportunities to outperform competitors or address weaknesses.
  • Informed Innovation: When evaluating new products, technologies, or strategies, comparative analysis guides the selection of the most promising options, reducing the risk of investing in unsuccessful ventures.

In summary, comparative analysis is a valuable tool that empowers decision-makers across various domains to make informed, data-driven choices, manage risks, allocate resources effectively, and drive continuous improvement. Its structured approach enhances decision quality and transparency, contributing to the success and competitiveness of organizations and research endeavors.

How to Prepare for Comparative Analysis?

1. Define Objectives and Scope

Before you begin your comparative analysis, clearly defining your objectives and the scope of your analysis is essential. This step lays the foundation for the entire process. Here's how to approach it:

  • Identify Your Goals: Start by asking yourself what you aim to achieve with your comparative analysis. Are you trying to choose between two products for your business? Are you evaluating potential investment opportunities? Knowing your objectives will help you stay focused throughout the analysis.
  • Define Scope: Determine the boundaries of your comparison. What will you include, and what will you exclude? For example, if you're analyzing market entry strategies for a new product, specify whether you're looking at a specific geographic region or a particular target audience.
  • Stakeholder Alignment: Ensure that all stakeholders involved in the analysis understand and agree on the objectives and scope. This alignment will prevent misunderstandings and ensure the analysis meets everyone's expectations.

2. Gather Relevant Data and Information

The quality of your comparative analysis heavily depends on the data and information you gather. Here's how to approach this crucial step:

  • Data Sources: Identify where you'll obtain the necessary data. Will you rely on primary sources, such as surveys and interviews, to collect original data? Or will you use secondary sources, like published research and industry reports, to access existing data? Consider the advantages and disadvantages of each source.
  • Data Collection Plan: Develop a plan for collecting data. This should include details about the methods you'll use, the timeline for data collection, and who will be responsible for gathering the data.
  • Data Relevance: Ensure that the data you collect is directly relevant to your objectives. Irrelevant or extraneous data can lead to confusion and distract from the core analysis.

3. Select Appropriate Criteria for Comparison

Choosing the right criteria for comparison is critical to a successful comparative analysis. Here's how to go about it:

  • Relevance to Objectives: Your chosen criteria should align closely with your analysis objectives. For example, if you're comparing job candidates, your criteria might include skills, experience, and cultural fit.
  • Measurability: Consider whether you can quantify the criteria. Measurable criteria are easier to analyze. If you're comparing marketing campaigns, you might measure criteria like click-through rates, conversion rates, and return on investment.
  • Weighting Criteria: Not all criteria are equally important. You'll need to assign weights to each criterion based on its relative importance. Weighting helps ensure that the most critical factors have a more significant impact on the final decision.

4. Establish a Clear Framework

Once you have your objectives, data, and criteria in place, it's time to establish a clear framework for your comparative analysis. This framework will guide your process and ensure consistency. Here's how to do it:

  • Comparative Matrix: Consider using a comparative matrix or spreadsheet to organize your data. Each row in the matrix represents an option or entity you're comparing, and each column corresponds to a criterion. This visual representation makes it easy to compare and contrast data.
  • Timeline: Determine the time frame for your analysis. Is it a one-time comparison, or will you conduct ongoing analyses? Having a defined timeline helps you manage the analysis process efficiently.
  • Define Metrics: Specify the metrics or scoring system you'll use to evaluate each criterion. For example, if you're comparing potential office locations, you might use a scoring system from 1 to 5 for factors like cost, accessibility, and amenities.
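
To illustrate how weighted criteria and a comparative matrix fit together, here is a minimal sketch in Python with pandas; the options, criteria, weights, and 1-to-5 scores are all hypothetical:

```python
# A minimal sketch of a comparative matrix with weighted criteria.
# The options, criteria, weights, and scores are hypothetical.
import pandas as pd

# Rows are the options being compared; columns are the criteria.
scores = pd.DataFrame(
    {"cost": [4, 2, 3], "accessibility": [3, 5, 4], "amenities": [2, 4, 5]},
    index=["Location A", "Location B", "Location C"],
)

# Weights reflect each criterion's relative importance and sum to 1.0.
weights = pd.Series({"cost": 0.5, "accessibility": 0.3, "amenities": 0.2})

# Weighted total per option: sum of score * weight across criteria.
scores["weighted_total"] = (scores * weights).sum(axis=1)
print(scores.sort_values("weighted_total", ascending=False))
```

Ranking by the weighted total makes the trade-offs explicit: changing a weight and re-running the comparison shows immediately how sensitive the final ordering is to each criterion's importance.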

With your objectives, data, criteria, and framework established, you're ready to move on to the next phase of comparative analysis: data collection and organization.

Comparative Analysis Data Collection

Data collection and organization are critical steps in the comparative analysis process. We'll explore how to gather and structure the data you need for a successful analysis.

1. Utilize Primary Data Sources

Primary data sources involve gathering original data directly from the source. This approach offers unique advantages, allowing you to tailor your data collection to your specific research needs.

Some popular primary data sources include:

  • Surveys and Questionnaires: Design surveys or questionnaires and distribute them to collect specific information from individuals or groups. This method is ideal for obtaining firsthand insights, such as customer preferences or employee feedback.
  • Interviews: Conduct structured interviews with relevant stakeholders or experts. Interviews provide an opportunity to delve deeper into subjects and gather qualitative data, making them valuable for in-depth analysis.
  • Observations: Directly observe and record data from real-world events or settings. Observational data can be instrumental in fields like anthropology, ethnography, and environmental studies.
  • Experiments: In controlled environments, experiments allow you to manipulate variables and measure their effects. This method is common in scientific research and product testing.

When using primary data sources, consider factors like sample size, survey design, and data collection methods to ensure the reliability and validity of your data.

2. Harness Secondary Data Sources

Secondary data sources involve using existing data collected by others. These sources can provide a wealth of information and save time and resources compared to primary data collection.

Here are common types of secondary data sources:

  • Public Records: Government publications, census data, and official reports offer valuable information on demographics, economic trends, and public policies. They are often free and readily accessible.
  • Academic Journals: Scholarly articles provide in-depth research findings across various disciplines. They are helpful for accessing peer-reviewed studies and staying current with academic discourse.
  • Industry Reports: Industry-specific reports and market research publications offer insights into market trends, consumer behavior, and competitive landscapes. They are essential for businesses making strategic decisions.
  • Online Databases: Online platforms like Statista, PubMed, and Google Scholar provide a vast repository of data and research articles. They offer search capabilities and access to a wide range of data sets.

When using secondary data sources, critically assess the credibility, relevance, and timeliness of the data. Ensure that it aligns with your research objectives.

3. Ensure and Validate Data Quality

Data quality is paramount in comparative analysis. Poor-quality data can lead to inaccurate conclusions and flawed decision-making. Here's how to ensure data validation and reliability:

  • Cross-Verification: Whenever possible, cross-verify data from multiple sources. Consistency among different sources enhances the reliability of the data.
  • Sample Size: Ensure that your data sample size is statistically significant for meaningful analysis. A small sample may not accurately represent the population.
  • Data Integrity: Check for data integrity issues, such as missing values, outliers, or duplicate entries. Address these issues before analysis to maintain data quality.
  • Data Source Reliability: Assess the reliability and credibility of the data sources themselves. Consider factors like the reputation of the institution or organization providing the data.
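
As a rough illustration of these checks, the sketch below uses Python and pandas to flag missing values, duplicate entries, and outliers; the data is hypothetical, and real validation would be tailored to your sources:

```python
# A minimal sketch of pre-analysis data-quality checks.
# The DataFrame and thresholds are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "revenue": [120.0, 135.5, None, 98.0, 98.0, 5000.0],
    "region": ["north", "south", "south", "east", "east", "west"],
})

# Missing values: count the gaps in each column.
print(df.isna().sum())

# Duplicate entries: flag fully repeated rows.
print(df[df.duplicated()])

# Outliers: flag revenue values beyond 1.5 * IQR from the quartiles.
q1, q3 = df["revenue"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (df["revenue"] < q1 - 1.5 * iqr) | (df["revenue"] > q3 + 1.5 * iqr)
print(df[outliers])
```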

4. Organize Data Effectively

Structuring your data for comparison is a critical step in the analysis process. Organized data makes it easier to draw insights and make informed decisions. Here's how to structure data effectively:

  • Data Cleaning: Before analysis, clean your data to remove inconsistencies, errors, and irrelevant information. Data cleaning may involve data transformation, imputation of missing values, and removing outliers.
  • Normalization: Standardize data to ensure fair comparisons. Normalization adjusts data to a standard scale, making comparing variables with different units or ranges possible.
  • Variable Labeling: Clearly label variables and data points for easy identification. Proper labeling enhances the transparency and understandability of your analysis.
  • Data Organization: Organize data into a format that suits your analysis methods. For quantitative analysis, this might mean creating a matrix, while qualitative analysis may involve categorizing data into themes.
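
For instance, here is a minimal sketch of min-max normalization in Python with pandas; the figures and column names are hypothetical:

```python
# A minimal sketch of min-max normalization: rescale each column to the
# 0-1 range so criteria with different units become comparable.
import pandas as pd

df = pd.DataFrame({"price_usd": [250, 900, 540], "rating_1_to_5": [4.1, 3.2, 4.8]})

normalized = (df - df.min()) / (df.max() - df.min())
# For "lower is better" criteria such as price, invert after scaling.
normalized["price_usd"] = 1 - normalized["price_usd"]
print(normalized)
```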

By paying careful attention to data collection, validation, and organization, you'll set the stage for a robust and insightful comparative analysis. Next, we'll explore various methodologies you can employ in your analysis, ranging from qualitative approaches to quantitative methods and examples.

Comparative Analysis Methods

When it comes to comparative analysis, various methodologies are available, each suited to different research goals and data types. In this section, we'll explore five prominent methodologies in detail.

Qualitative Comparative Analysis (QCA)

Qualitative Comparative Analysis (QCA) is a methodology often used when dealing with complex, non-linear relationships among variables. It seeks to identify patterns and configurations among factors that lead to specific outcomes.

  • Case-by-Case Analysis: QCA involves evaluating individual cases (e.g., organizations, regions, or events) rather than analyzing aggregate data. Each case's unique characteristics are considered.
  • Boolean Logic: QCA employs Boolean algebra to analyze data. Variables are categorized as either present or absent, allowing for the examination of different combinations and logical relationships.
  • Necessary and Sufficient Conditions: QCA aims to identify necessary and sufficient conditions for a specific outcome to occur. It helps answer questions like, "What conditions are necessary for a successful product launch?"
  • Fuzzy Set Theory: In some cases, QCA may use fuzzy set theory to account for degrees of membership in a category, allowing for more nuanced analysis.

QCA is particularly useful in fields such as sociology, political science, and organizational studies, where understanding complex interactions is essential.
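
As a toy illustration of the Boolean, case-by-case logic behind QCA (dedicated QCA software and fuzzy-set extensions go much further), consider this hypothetical truth table built in Python with pandas:

```python
# A minimal sketch of the case-by-case, Boolean logic behind QCA.
# Cases, conditions, and the outcome are hypothetical.
import pandas as pd

# Each row is a case; conditions and the outcome are coded
# present (1) or absent (0).
cases = pd.DataFrame(
    {"strong_brand": [1, 1, 0, 1],
     "low_price":    [1, 0, 1, 1],
     "success":      [1, 0, 0, 1]},
    index=["launch_a", "launch_b", "launch_c", "launch_d"],
)

# Group cases by their configuration of conditions to build a truth table.
truth_table = cases.groupby(["strong_brand", "low_price"])["success"].mean()
print(truth_table)  # configurations with mean 1.0 are consistent with success
```

In this made-up example, only the configuration where both conditions are present is consistently associated with success, hinting that the combination, rather than either condition alone, may be sufficient.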

Quantitative Comparative Analysis

Quantitative Comparative Analysis involves the use of numerical data and statistical techniques to compare and analyze variables. It's suitable for situations where data is quantitative, and relationships can be expressed numerically.

  • Statistical Tools: Quantitative comparative analysis relies on statistical methods like regression analysis, correlation, and hypothesis testing. These tools help identify relationships, dependencies, and trends within datasets.
  • Data Measurement: Ensure that variables are measured consistently using appropriate scales (e.g., ordinal, interval, ratio) for meaningful analysis. Variables may include numerical values like revenue, customer satisfaction scores, or product performance metrics.
  • Data Visualization: Create visual representations of data using charts, graphs, and plots. Visualization aids in understanding complex relationships and presenting findings effectively.
  • Statistical Significance: Assess the statistical significance of relationships. Statistical significance indicates whether observed differences or relationships are likely to be real rather than due to chance.

Quantitative comparative analysis is commonly applied in economics, social sciences, and market research to draw empirical conclusions from numerical data.
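
As a small illustration, the sketch below computes a Pearson correlation and a simple linear fit with NumPy; the two metric series are hypothetical, and a full analysis would also test statistical significance:

```python
# A minimal sketch of a quantitative comparison between two variables.
# The metric series are hypothetical.
import numpy as np

ad_spend = np.array([10, 20, 30, 40, 50], dtype=float)
revenue = np.array([110, 190, 320, 380, 520], dtype=float)

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(ad_spend, revenue)[0, 1]

# Simple linear fit: revenue ~ slope * ad_spend + intercept.
slope, intercept = np.polyfit(ad_spend, revenue, 1)
print(f"r = {r:.3f}, slope = {slope:.1f}, intercept = {intercept:.1f}")
```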

Case Studies

Case studies involve in-depth examinations of specific instances or cases to gain insights into real-world scenarios. Comparative case studies allow researchers to compare and contrast multiple cases to identify patterns, differences, and lessons.

  • Narrative Analysis: Case studies often involve narrative analysis, where researchers construct detailed narratives of each case, including context, events, and outcomes.
  • Contextual Understanding: In comparative case studies, it's crucial to consider the context within which each case operates. Understanding the context helps interpret findings accurately.
  • Cross-Case Analysis: Researchers conduct cross-case analysis to identify commonalities and differences across cases. This process can lead to the discovery of factors that influence outcomes.
  • Triangulation: To enhance the validity of findings, researchers may use multiple data sources and methods to triangulate information and ensure reliability.

Case studies are prevalent in fields like psychology, business, and sociology, where deep insights into specific situations are valuable.

SWOT Analysis

SWOT Analysis is a strategic tool used to assess the Strengths, Weaknesses, Opportunities, and Threats associated with a particular entity or situation. While it's commonly used in business, it can be adapted for various comparative analyses.

  • Internal and External Factors: SWOT Analysis examines both internal factors (Strengths and Weaknesses), such as organizational capabilities, and external factors (Opportunities and Threats), such as market conditions and competition.
  • Strategic Planning: The insights from SWOT Analysis inform strategic decision-making. By identifying strengths and opportunities, organizations can leverage their advantages. Likewise, addressing weaknesses and threats helps mitigate risks.
  • Visual Representation: SWOT Analysis is often presented as a matrix or a 2x2 grid, making it visually accessible and easy to communicate to stakeholders.
  • Continuous Monitoring: SWOT Analysis is not a one-time exercise. Organizations use it periodically to adapt to changing circumstances and make informed decisions.

SWOT Analysis is versatile and can be applied in business, healthcare, education, and any context where a structured assessment of factors is needed.

Benchmarking

Benchmarking involves comparing an entity's performance, processes, or practices to those of industry leaders or best-in-class organizations. It's a powerful tool for continuous improvement and competitive analysis.

  • Identify Performance Gaps: Benchmarking helps identify areas where an entity lags behind its peers or industry standards. These performance gaps highlight opportunities for improvement.
  • Data Collection: Gather data on key performance metrics from both internal and external sources. This data collection phase is crucial for meaningful comparisons.
  • Comparative Analysis: Compare your organization's performance data with that of benchmark organizations. This analysis can reveal where you excel and where adjustments are needed.
  • Continuous Improvement: Benchmarking is a dynamic process that encourages continuous improvement. Organizations use benchmarking findings to set performance goals and refine their strategies.

Benchmarking is widely used in business, manufacturing, healthcare, and customer service to drive excellence and competitiveness.
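
A minimal sketch of the comparison step, in Python with hypothetical figures, might compute the gap between your metrics and a best-in-class benchmark:

```python
# A minimal sketch of benchmarking: comparing internal metrics against a
# best-in-class reference to surface performance gaps. Figures are hypothetical.
metrics = {
    "on_time_delivery_pct": {"ours": 87.0, "benchmark": 96.0},
    "cost_per_unit_usd":    {"ours": 4.20, "benchmark": 3.60},
    "nps":                  {"ours": 31.0, "benchmark": 55.0},
}

for name, m in metrics.items():
    gap = m["ours"] - m["benchmark"]
    print(f"{name}: ours={m['ours']}, benchmark={m['benchmark']}, gap={gap:+.2f}")
```

The signed gap makes it easy to see at a glance which metrics lag the benchmark and by how much, which in turn helps prioritize improvement goals.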

Each of these methodologies brings a unique perspective to comparative analysis, allowing you to choose the one that best aligns with your research objectives and the nature of your data. The choice between qualitative and quantitative methods, or a combination of both, depends on the complexity of the analysis and the questions you seek to answer.

How to Conduct Comparative Analysis?

Once you've prepared your data and chosen an appropriate methodology, it's time to dive into the process of conducting a comparative analysis. We will guide you through the essential steps to extract meaningful insights from your data.

1. Identify Key Variables and Metrics

Identifying key variables and metrics is the first crucial step in conducting a comparative analysis. These are the factors or indicators you'll use to assess and compare your options.

  • Relevance to Objectives: Ensure the chosen variables and metrics align closely with your analysis objectives. When comparing marketing strategies, relevant metrics might include customer acquisition cost, conversion rate, and retention rate.
  • Quantitative vs. Qualitative: Decide whether your analysis will focus on quantitative data (numbers) or qualitative data (descriptive information). In some cases, a combination of both may be appropriate.
  • Data Availability: Consider the availability of data. Ensure you can access reliable and up-to-date data for all selected variables and metrics.
  • KPIs: Key Performance Indicators (KPIs) are often used as the primary metrics in comparative analysis. These are the metrics that relate most directly to your goals and objectives (see the sketch below).
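
To make this step concrete, here is a minimal sketch (in Python, with entirely hypothetical metrics, units, and sources) of how selected variables and KPIs might be recorded before data collection begins:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str           # what is measured
    unit: str           # measurement unit
    quantitative: bool  # numeric (True) or descriptive (False)
    source: str         # where reliable, up-to-date data will come from

# Hypothetical KPIs for comparing two marketing strategies
metrics = [
    Metric("customer_acquisition_cost", "USD", True, "ad platform reports"),
    Metric("conversion_rate", "%", True, "web analytics"),
    Metric("retention_rate", "%", True, "CRM export"),
    Metric("brand_perception", "1-5 rating", False, "customer survey"),
]

for m in metrics:
    kind = "quantitative" if m.quantitative else "qualitative"
    print(f"{m.name} ({m.unit}): {kind}, from {m.source}")
```

Writing the list down in this structured form forces the data-availability question early: every metric must name a source you can actually access.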

2. Visualize Data for Clarity

Data visualization techniques play a vital role in making complex information more accessible and understandable. Effective data visualization allows you to convey insights and patterns to stakeholders. Consider the following approaches:

  • Charts and Graphs: Use various types of charts, such as bar charts, line graphs, and pie charts, to represent data. For example, a line graph can illustrate trends over time, while a bar chart can compare values across categories.
  • Heatmaps: Heatmaps are particularly useful for visualizing large datasets and identifying patterns through color-coding. They can reveal correlations, concentrations, and outliers.
  • Scatter Plots: Scatter plots help visualize relationships between two variables. They are especially useful for identifying trends, clusters, or outliers.
  • Dashboards: Create interactive dashboards that allow users to explore data and customize views. Dashboards are valuable for ongoing analysis and reporting.
  • Infographics: For presentations and reports, consider using infographics to summarize key findings in a visually engaging format.

Effective data visualization not only enhances understanding but also aids in decision-making by providing clear insights at a glance.
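
As an illustration, the sketch below (Python with matplotlib; all numbers invented) draws the kind of grouped bar chart described above, putting two options side by side across several metrics:

```python
import matplotlib.pyplot as plt
import numpy as np

# Invented comparison data: two strategies scored on three metrics
metrics = ["Acquisition cost", "Conversion rate", "Retention"]
strategy_a = [42, 3.1, 68]
strategy_b = [35, 2.4, 74]

x = np.arange(len(metrics))  # one group of bars per metric
width = 0.35                 # width of each bar

fig, ax = plt.subplots()
ax.bar(x - width / 2, strategy_a, width, label="Strategy A")
ax.bar(x + width / 2, strategy_b, width, label="Strategy B")
ax.set_xticks(x)
ax.set_xticklabels(metrics)
ax.set_ylabel("Value (illustrative, mixed units)")
ax.set_title("Side-by-side comparison of two strategies")
ax.legend()
plt.show()
```

In practice you would plot metrics with different units on separate axes or normalize them first; the point here is only the side-by-side layout.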

3. Establish Clear Comparative Frameworks

A well-structured comparative framework provides a systematic approach to your analysis. It ensures consistency and enables you to make meaningful comparisons. Here's how to create one:

  • Comparison Matrices: Consider using matrices or spreadsheets to organize your data. Each row represents an option or entity, and each column corresponds to a variable or metric. This matrix format allows for side-by-side comparisons.
  • Decision Trees: In complex decision-making scenarios, decision trees help map out possible outcomes based on different criteria and variables. They visualize the decision-making process.
  • Scenario Analysis: Explore different scenarios by altering variables or criteria to understand how changes impact outcomes. Scenario analysis is valuable for risk assessment and planning.
  • Checklists: Develop checklists or scoring sheets to systematically evaluate each option against predefined criteria. Checklists ensure that no essential factors are overlooked.

A well-structured comparative framework simplifies the analysis process, making it easier to draw meaningful conclusions and make informed decisions.
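
A comparison matrix is straightforward to build with pandas; in this sketch the options, metrics, and values are all hypothetical:

```python
import pandas as pd

# Rows are options; columns are variables/metrics (invented values)
matrix = pd.DataFrame(
    {
        "acquisition_cost_usd": [42, 35, 51],
        "conversion_rate_pct": [3.1, 2.4, 3.8],
        "retention_pct": [68, 74, 61],
    },
    index=["Strategy A", "Strategy B", "Strategy C"],
)

print(matrix)

# The matrix format makes single-metric rankings trivial
print(matrix.sort_values("conversion_rate_pct", ascending=False))
```

The same DataFrame can later feed the scoring and weighting steps, so the matrix doubles as the working dataset for the rest of the analysis.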

4. Evaluate and Score Criteria

Evaluating and scoring criteria is a critical step in comparative analysis, as it quantifies the performance of each option against the chosen criteria (a worked sketch follows the list below).

  • Scoring System: Define a scoring system that assigns values to each criterion for every option. Common scoring systems include numerical scales, percentage scores, or qualitative ratings (e.g., high, medium, low).
  • Consistency: Ensure consistency in scoring by defining clear guidelines for each score. Provide examples or descriptions to help evaluators understand what each score represents.
  • Data Collection: Collect data or information relevant to each criterion for all options. This may involve quantitative data (e.g., sales figures) or qualitative data (e.g., customer feedback).
  • Aggregation: Aggregate the scores for each option to obtain an overall evaluation. This can be done by summing the individual criterion scores or applying weighted averages.
  • Normalization: If your criteria have different measurement scales or units, consider normalizing the scores to create a level playing field for comparison.
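
The sketch below implements two of these steps, min-max normalization onto a common 0–1 scale followed by a simple equal-weight aggregation, using the same invented data as above:

```python
import pandas as pd

# Hypothetical raw scores: rows are options, columns are criteria
raw = pd.DataFrame(
    {
        "cost_usd": [42, 35, 51],           # lower is better
        "conversion_pct": [3.1, 2.4, 3.8],  # higher is better
        "retention_pct": [68, 74, 61],      # higher is better
    },
    index=["A", "B", "C"],
)

# Min-max normalize each criterion to [0, 1]
norm = (raw - raw.min()) / (raw.max() - raw.min())

# Invert "lower is better" criteria so that 1 is always best
norm["cost_usd"] = 1 - norm["cost_usd"]

# Equal-weight aggregation: the mean of the normalized scores
norm["overall"] = norm.mean(axis=1)
print(norm.sort_values("overall", ascending=False))
```

Normalization matters here because the raw criteria mix dollars and percentages; without it, the criterion with the largest numbers would silently dominate the aggregate.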

5. Assign Importance to Criteria

Not all criteria are equally important in a comparative analysis. Weighting criteria allows you to reflect their relative significance in the final decision-making process.

  • Relative Importance: Assess the importance of each criterion in achieving your objectives. Criteria directly aligned with your goals may receive higher weights.
  • Weighting Methods: Choose a weighting method that suits your analysis. Common methods include expert judgment, analytic hierarchy process (AHP), or data-driven approaches based on historical performance.
  • Impact Analysis: Consider how changes in the weights assigned to criteria would affect the final outcome. This sensitivity analysis helps you understand the robustness of your decisions.
  • Stakeholder Input: Involve relevant stakeholders or decision-makers in the weighting process. Their input can provide valuable insights and ensure alignment with organizational goals.
  • Transparency: Clearly document the rationale behind the assigned weights to maintain transparency in your analysis.

By weighting criteria, you ensure that the most critical factors have a more significant influence on the final evaluation, aligning the analysis more closely with your objectives and priorities.
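
Continuing the hypothetical example, this sketch applies explicit weights and then jitters them randomly as a crude sensitivity check on whether the top-ranked option is robust:

```python
import numpy as np
import pandas as pd

# Normalized scores from the previous step (invented; 1 = best)
scores = pd.DataFrame(
    {
        "cost": [0.56, 1.0, 0.0],
        "conversion": [0.5, 0.0, 1.0],
        "retention": [0.54, 1.0, 0.0],
    },
    index=["A", "B", "C"],
)

# Weights chosen to reflect relative importance; they sum to 1
weights = pd.Series({"cost": 0.5, "conversion": 0.3, "retention": 0.2})

def rank(w: pd.Series) -> pd.Series:
    """Weighted average score per option, sorted best first."""
    return (scores * w).sum(axis=1).sort_values(ascending=False)

print(rank(weights))

# Sensitivity analysis: perturb weights +/-20% and track the winner
rng = np.random.default_rng(0)
winners = []
for _ in range(1000):
    jittered = weights * rng.uniform(0.8, 1.2, size=len(weights))
    jittered /= jittered.sum()  # re-normalize so weights sum to 1
    winners.append(rank(jittered).index[0])
print(pd.Series(winners).value_counts(normalize=True))
```

If one option wins under nearly all perturbed weightings, the decision is robust; if the winner flips frequently, the weights deserve more stakeholder scrutiny.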

With these steps in place, you're well-prepared to conduct a comprehensive comparative analysis. The next phase involves interpreting your findings, drawing conclusions, and making informed decisions based on the insights you've gained.

Comparative Analysis Interpretation

Interpreting the results of your comparative analysis is a crucial phase that transforms data into actionable insights. We'll delve into various aspects of interpretation and how to make sense of your findings.

  • Contextual Understanding: Before diving into the data, consider the broader context of your analysis. Understand the industry trends, market conditions, and any external factors that may have influenced your results.
  • Drawing Conclusions: Summarize your findings clearly and concisely. Identify trends, patterns, and significant differences among the options or variables you've compared.
  • Quantitative vs. Qualitative Analysis: Depending on the nature of your data and analysis, you may need to balance both quantitative and qualitative interpretations. Qualitative insights can provide context and nuance to quantitative findings.
  • Comparative Visualization: Visual aids such as charts, graphs, and tables can help convey your conclusions effectively. Choose visual representations that align with the nature of your data and the key points you want to emphasize.
  • Outliers and Anomalies: Identify and explain any outliers or anomalies in your data. Understanding these exceptions can provide valuable insights into unusual cases or factors affecting your analysis.
  • Cross-Validation: Validate your conclusions by comparing them with external benchmarks, industry standards, or expert opinions. Cross-validation helps ensure the reliability of your findings.
  • Implications for Decision-Making: Discuss how your analysis informs decision-making. Clearly articulate the practical implications of your findings and their relevance to your initial objectives.
  • Actionable Insights: Emphasize actionable insights that can guide future strategies, policies, or actions. Make recommendations based on your analysis, highlighting the steps needed to capitalize on strengths or address weaknesses.
  • Continuous Improvement: Encourage a culture of continuous improvement by using your analysis as a feedback mechanism. Suggest ways to monitor and adapt strategies over time based on evolving circumstances.

Comparative Analysis Applications

Comparative analysis is a versatile methodology that finds application in various fields and scenarios. Let's explore some of the most common and impactful applications.

Business Decision-Making

Comparative analysis is widely employed in business to inform strategic decisions and drive success. Key applications include:

Market Research and Competitive Analysis

  • Objective: To assess market opportunities and evaluate competitors.
  • Methods: Analyzing market trends, customer preferences, competitor strengths and weaknesses, and market share.
  • Outcome: Informed product development, pricing strategies, and market entry decisions.

Product Comparison and Benchmarking

  • Objective: To compare the performance and features of products or services.
  • Methods: Evaluating product specifications, customer reviews, and pricing.
  • Outcome: Identifying strengths and weaknesses, improving product quality, and setting competitive pricing.

Financial Analysis

  • Objective: To evaluate financial performance and make investment decisions.
  • Methods: Comparing financial statements, ratios, and performance indicators of companies.
  • Outcome: Informed investment choices, risk assessment, and portfolio management.

Healthcare and Medical Research

In the healthcare and medical research fields, comparative analysis is instrumental in understanding diseases, treatment options, and healthcare systems.

Clinical Trials and Drug Development

  • Objective: To compare the effectiveness of different treatments or drugs.
  • Methods: Analyzing clinical trial data, patient outcomes, and side effects.
  • Outcome: Informed decisions about drug approvals, treatment protocols, and patient care.

Health Outcomes Research

  • Objective: To assess the impact of healthcare interventions.
  • Methods: Comparing patient health outcomes before and after treatment or between different treatment approaches.
  • Outcome: Improved healthcare guidelines, cost-effectiveness analysis, and patient care plans.

Healthcare Systems Evaluation

  • Objective: To assess the performance of healthcare systems.
  • Methods: Comparing healthcare delivery models, patient satisfaction, and healthcare costs.
  • Outcome: Informed healthcare policy decisions, resource allocation, and system improvements.

Social Sciences and Policy Analysis

Comparative analysis is a fundamental tool in social sciences and policy analysis, aiding in understanding complex societal issues.

Educational Research

  • Objective: To compare educational systems and practices.
  • Methods: Analyzing student performance, curriculum effectiveness, and teaching methods.
  • Outcome: Informed educational policies, curriculum development, and school improvement strategies.

Political Science

  • Objective: To study political systems, elections, and governance.
  • Methods: Comparing election outcomes, policy impacts, and government structures.
  • Outcome: Insights into political behavior, policy effectiveness, and governance reforms.

Social Welfare and Poverty Analysis

  • Objective: To evaluate the impact of social programs and policies.
  • Methods: Comparing the well-being of individuals or communities with and without access to social assistance.
  • Outcome: Informed policymaking, poverty reduction strategies, and social program improvements.

Environmental Science and Sustainability

Comparative analysis plays a pivotal role in understanding environmental issues and promoting sustainability.

Environmental Impact Assessment

  • Objective: To assess the environmental consequences of projects or policies.
  • Methods: Comparing ecological data, resource use, and pollution levels.
  • Outcome: Informed environmental mitigation strategies, sustainable development plans, and regulatory decisions.

Climate Change Analysis

  • Objective: To study climate patterns and their impacts.
  • Methods: Comparing historical climate data, temperature trends, and greenhouse gas emissions.
  • Outcome: Insights into climate change causes, adaptation strategies, and policy recommendations.

Ecosystem Health Assessment

  • Objective: To evaluate the health and resilience of ecosystems.
  • Methods: Comparing biodiversity, habitat conditions, and ecosystem services.
  • Outcome: Conservation efforts, restoration plans, and ecological sustainability measures.

Technology and Innovation

Comparative analysis is crucial in the fast-paced world of technology and innovation.

Product Development and Innovation

  • Objective: To assess the competitiveness and innovation potential of products or technologies.
  • Methods: Comparing research and development investments, technology features, and market demand.
  • Outcome: Informed innovation strategies, product roadmaps, and patent decisions.

User Experience and Usability Testing

  • Objective: To evaluate the user-friendliness of software applications or digital products.
  • Methods: Comparing user feedback, usability metrics, and user interface designs.
  • Outcome: Improved user experiences, interface redesigns, and product enhancements.

Technology Adoption and Market Entry

  • Objective: To analyze market readiness and risks for new technologies.
  • Methods: Comparing market conditions, regulatory landscapes, and potential barriers.
  • Outcome: Informed market entry strategies, risk assessments, and investment decisions.

These diverse applications of comparative analysis highlight its flexibility and importance in decision-making across various domains. Whether in business, healthcare, social sciences, environmental studies, or technology, comparative analysis empowers researchers and decision-makers to make informed choices and drive positive outcomes.

Comparative Analysis Best Practices

Successful comparative analysis relies on following best practices and avoiding common pitfalls. Implementing these practices enhances the effectiveness and reliability of your analysis.

  • Clearly Defined Objectives: Start with well-defined objectives that outline what you aim to achieve through the analysis. Clear objectives provide focus and direction.
  • Data Quality Assurance: Ensure data quality by validating, cleaning, and normalizing your data. Poor-quality data can lead to inaccurate conclusions.
  • Transparent Methodologies: Clearly explain the methodologies and techniques you've used for analysis. Transparency builds trust and allows others to assess the validity of your approach.
  • Consistent Criteria: Maintain consistency in your criteria and metrics across all options or variables. Inconsistent criteria can lead to biased results.
  • Sensitivity Analysis: Conduct sensitivity analysis by varying key parameters, such as weights or assumptions, to assess the robustness of your conclusions.
  • Stakeholder Involvement: Involve relevant stakeholders throughout the analysis process. Their input can provide valuable perspectives and ensure alignment with organizational goals.
  • Critical Evaluation of Assumptions: Identify and critically evaluate any assumptions made during the analysis. Assumptions should be explicit and justifiable.
  • Holistic View: Take a holistic view of the analysis by considering both short-term and long-term implications. Avoid focusing solely on immediate outcomes.
  • Documentation: Maintain thorough documentation of your analysis, including data sources, calculations, and decision criteria. Documentation supports transparency and facilitates reproducibility.
  • Continuous Learning: Stay updated with the latest analytical techniques, tools, and industry trends. Continuous learning helps you adapt your analysis to changing circumstances.
  • Peer Review: Seek peer review or expert feedback on your analysis. External perspectives can identify blind spots and enhance the quality of your work.
  • Ethical Considerations: Address ethical considerations, such as privacy and data protection, especially when dealing with sensitive or personal data.

By adhering to these best practices, you'll not only improve the rigor of your comparative analysis but also ensure that your findings are reliable, actionable, and aligned with your objectives.

Comparative Analysis Examples

To illustrate the practical application and benefits of comparative analysis, let's explore several real-world examples across different domains. These examples showcase how organizations and researchers leverage comparative analysis to make informed decisions, solve complex problems, and drive improvements:

Retail Industry - Price Competitiveness Analysis

Objective: A retail chain aims to assess its price competitiveness against competitors in the same market.

Methodology:

  • Collect pricing data for a range of products offered by the retail chain and its competitors.
  • Organize the data into a comparative framework, categorizing products by type and price range.
  • Calculate price differentials, averages, and percentiles for each product category.
  • Analyze the findings to identify areas where the retail chain's prices are higher or lower than competitors.

Outcome: The analysis reveals that the retail chain's prices are consistently lower in certain product categories but higher in others. This insight informs pricing strategies, allowing the retailer to adjust prices to remain competitive in the market.
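
A minimal sketch of the calculation step (Python with pandas; the categories, products, and prices are invented):

```python
import pandas as pd

# Invented prices for identical products at our chain vs. a competitor
prices = pd.DataFrame(
    {
        "category": ["dairy", "dairy", "bakery", "bakery", "produce"],
        "ours": [2.49, 3.99, 1.99, 4.49, 0.99],
        "competitor": [2.29, 4.19, 2.19, 4.29, 1.09],
    }
)

# Positive differential = we are more expensive than the competitor
prices["differential_pct"] = (
    (prices["ours"] - prices["competitor"]) / prices["competitor"] * 100
)

# Average differential and spread within each product category
summary = prices.groupby("category")["differential_pct"].describe(
    percentiles=[0.25, 0.75]
)
print(summary[["mean", "25%", "75%"]])
```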

Healthcare - Comparative Effectiveness Research

Objective: Researchers aim to compare the effectiveness of two different treatment methods for a specific medical condition.

Methodology:

  • Recruit patients with the medical condition and randomly assign them to two treatment groups.
  • Collect data on treatment outcomes, including symptom relief, side effects, and recovery times.
  • Analyze the data using statistical methods to compare the treatment groups.
  • Consider factors like patient demographics and baseline health status as potential confounding variables.

Outcome: The comparative analysis reveals that one treatment method is statistically more effective than the other in relieving symptoms and has fewer side effects. This information guides medical professionals in recommending the more effective treatment to patients.
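
As an illustration of the statistical comparison step, the sketch below runs a two-sample (Welch's) t-test on invented outcome scores; a real trial would pre-specify its analysis plan and adjust for the confounders mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented symptom-relief scores (higher = more relief), 80 patients per arm
treatment_a = rng.normal(loc=62, scale=10, size=80)
treatment_b = rng.normal(loc=55, scale=10, size=80)

# Welch's t-test: does mean relief differ between the two groups?
t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean difference = {treatment_a.mean() - treatment_b.mean():.1f} points")
```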

Environmental Science - Carbon Emission Analysis

Objective: An environmental organization seeks to compare carbon emissions from various transportation modes in a metropolitan area.

Methodology:

  • Collect data on the number of vehicles, their types (e.g., cars, buses, bicycles), and fuel consumption for each mode of transportation.
  • Calculate the total carbon emissions for each mode based on fuel consumption and emission factors.
  • Create visualizations such as bar charts and pie charts to represent the emissions from each transportation mode.
  • Consider factors like travel distance, occupancy rates, and the availability of alternative fuels.

Outcome: The comparative analysis reveals that public transportation generates significantly lower carbon emissions per passenger mile compared to individual car travel. This information supports advocacy for increased public transit usage to reduce carbon footprint.
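
The per-passenger arithmetic is simple once occupancy is accounted for; in this sketch every figure is an invented placeholder, not a real emission factor:

```python
# Invented figures: grams of CO2 per vehicle-mile and average occupancy
modes = {
    # mode:    (g CO2 per vehicle-mile, average passengers per vehicle)
    "car":     (400, 1.5),
    "bus":     (2700, 25),
    "bicycle": (0, 1),
}

for mode, (g_per_vehicle_mile, occupancy) in modes.items():
    per_passenger = g_per_vehicle_mile / occupancy
    print(f"{mode}: {per_passenger:.0f} g CO2 per passenger-mile")
```

Even with a much higher per-vehicle figure, the bus comes out far lower per passenger-mile, which is exactly the occupancy effect the analysis is designed to surface.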

Technology Industry - Feature Comparison for Software Development Tools

Objective: A software development team needs to choose the most suitable development tool for an upcoming project.

Methodology:

  • Create a list of essential features and capabilities required for the project.
  • Research and compile information on available development tools in the market.
  • Develop a comparative matrix or scoring system to evaluate each tool's features against the project requirements.
  • Assign weights to features based on their importance to the project.

Outcome: The comparative analysis highlights that Tool A excels in essential features critical to the project, such as version control integration and debugging capabilities. The development team selects Tool A as the preferred choice for the project.

Educational Research - Comparative Study of Teaching Methods

Objective: A school district aims to improve student performance by comparing the effectiveness of traditional classroom teaching with online learning.

Methodology:

  • Randomly assign students to two groups: one taught using traditional methods and the other through online courses.
  • Administer pre- and post-course assessments to measure knowledge gain.
  • Collect feedback from students and teachers on the learning experiences.
  • Analyze assessment scores and feedback to compare the effectiveness and satisfaction levels of both teaching methods.

Outcome: The comparative analysis reveals that online learning leads to similar knowledge gains as traditional classroom teaching. However, students report higher satisfaction and flexibility with the online approach. The school district considers incorporating online elements into its curriculum.

These examples illustrate the diverse applications of comparative analysis across industries and research domains. Whether optimizing pricing strategies in retail, evaluating treatment effectiveness in healthcare, assessing environmental impacts, choosing the right software tool, or improving educational methods, comparative analysis empowers decision-makers with valuable insights for informed choices and positive outcomes.

Conclusion for Comparative Analysis

Comparative analysis is your compass in the world of decision-making. It helps you see the bigger picture, spot opportunities, and navigate challenges. By defining your objectives, gathering quality data, applying a suitable methodology, and following best practices, you can harness comparative analysis to make informed choices and drive positive outcomes.

Remember, comparative analysis is not just a tool; it's a mindset that turns data into insight and uncertainty into clarity. Whether you're steering a business, conducting research, or weighing life's choices, it can serve as a trusted guide on the way to better decisions.

How to Conduct Comparative Analysis in Minutes?

Are you ready to revolutionize your approach to market research and comparative analysis? Appinio, a real-time market research platform, empowers you to harness real-time consumer insights for swift, data-driven decisions. Here's why you should choose Appinio:

  • Speedy Insights: Get from questions to insights in minutes, enabling you to conduct comparative analysis without delay.
  • User-Friendly: No need for a PhD in research; the intuitive platform is designed for everyone, making it easy to collect and analyze data.
  • Global Reach: With access to over 90 countries and the ability to define your target group from 1200+ characteristics, Appinio provides a worldwide perspective for your comparative analysis.


How do I write a comparative analysis?

A comparative analysis is an essay in which two things are compared and contrasted. You may have done a "compare and contrast" paper in your English class, and a comparative analysis is the same general idea, but as a graduate student you are expected to produce a higher level of analysis in your writing. You can follow these guidelines to get started. 

  • Conduct your research. Need help? Ask a Librarian!
  • Brainstorm a list of similarities and differences. The Double Bubble document linked below can be helpful for this step.
  • Write your thesis. This will be based on what you have discovered regarding the weight of similarities and differences between the things you are comparing.
  • Choose an organizational method for your essay:
  • Alternating (point-by-point) method: Find similar points between each subject and alternate writing about each of them.
  • Block (subject-by-subject) method: Discuss all of the first subject and then all of the second.
  • This page from the University of Toronto gives some great examples of when each of these is most effective.
  • Don't forget to cite your sources!

Visvis, V., & Plotnik, J. (n.d.). The comparative essay . University of Toronto. https://advice.writing.utoronto.ca/types-of-writing/comparative-essay/

Walk, K. (1998). How to write a comparative analysis . Harvard University. https://writingcenter.fas.harvard.edu/pages/how-write-comparative-analysis

Links & Files

  • Double_Bubble_Map.docx

The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions

Benjamin Hanckel, Mark Petticrew, James Thomas & Judith Green

BMC Public Health, volume 21, Article number: 877 (2021). Open access; published 7 May 2021.

Background: Qualitative Comparative Analysis (QCA) is a method for identifying the configurations of conditions that lead to specific outcomes. Given its potential for providing evidence of causality in complex systems, QCA is increasingly used in evaluative research to examine the uptake or impacts of public health interventions. We map this emerging field, assessing the strengths and weaknesses of QCA approaches identified in published studies, and identify implications for future research and reporting.

Methods: PubMed, Scopus and Web of Science were systematically searched for peer-reviewed studies published in English up to December 2019 that had used QCA methods to identify the conditions associated with the uptake and/or effectiveness of interventions for public health. Data relating to the interventions studied (settings/level of intervention/populations), methods (type of QCA, case level, source of data, other methods used) and reported strengths and weaknesses of QCA were extracted and synthesised narratively.

Results: The search identified 1384 papers, of which 27 (describing 26 studies) met the inclusion criteria. Interventions evaluated ranged across: nutrition/obesity ( n  = 8); physical activity ( n  = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation ( n  = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2), and general population health ( n  = 1). The majority of studies ( n  = 24) were of interventions solely or predominantly in high income countries. Key strengths reported were that QCA provides a method for addressing causal complexity; and that it provides a systematic approach for understanding the mechanisms at work in implementation across contexts. Weaknesses reported related to data availability limitations, especially on ineffective interventions. The majority of papers demonstrated good knowledge of cases, and justification of case selection, but other criteria of methodological quality were less comprehensively met.

Conclusions: QCA is a promising approach for addressing the role of context in complex interventions, and for identifying causal configurations of conditions that predict implementation and/or outcomes when there is sufficiently detailed understanding of a series of comparable cases. As the use of QCA in evaluative health research increases, there may be a need to develop advice for public health researchers and journals on minimum criteria for quality and reporting.

Background

Interest in the use of Qualitative Comparative Analysis (QCA) arises in part from growing recognition of the need to broaden methodological capacity to address causality in complex systems [ 1 , 2 , 3 ]. Guidance for researchers for evaluating complex interventions suggests process evaluations [ 4 , 5 ] can provide evidence on the mechanisms of change, and the ways in which context affects outcomes. However, this does not address the more fundamental problems with trial and quasi-experimental designs arising from system complexity [ 6 ]. As Byrne notes, the key characteristic of complex systems is ‘emergence’ [ 7 ]: that is, effects may accrue from combinations of components, in contingent ways, which cannot be reduced to any one level. Asking about ‘what works’ in complex systems is not to ask a simple question about whether an intervention has particular effects, but rather to ask: “how the intervention works in relation to all existing components of the system and to other systems and their sub-systems that intersect with the system of interest” [ 7 ]. Public health interventions are typically attempts to effect change in systems that are themselves dynamic; approaches to evaluation are needed that can deal with emergence [ 8 ]. In short, understanding the uptake and impact of interventions requires methods that can account for the complex interplay of intervention conditions and system contexts.

To build a useful evidence base for public health, evaluations thus need to assess not just whether a particular intervention (or component) causes specific change in one variable, in controlled circumstances, but whether those interventions shift systems, and how specific conditions of interventions and setting contexts interact to lead to anticipated outcomes. There have been a number of calls for the development of methods in intervention research to address these issues of complex causation [ 9 , 10 , 11 ], including calls for the greater use of case studies to provide evidence on the important elements of context [ 12 , 13 ]. One approach for addressing causality in complex systems is Qualitative Comparative Analysis (QCA): a systematic way of comparing the outcomes of different combinations of system components and elements of context (‘conditions’) across a series of cases.

The potential of qualitative comparative analysis

QCA is an approach developed by Charles Ragin [ 14 , 15 ], originating in comparative politics and macrosociology to address questions of comparative historical development. Using set theory, QCA methods explore the relationships between ‘conditions’ and ‘outcomes’ by identifying configurations of necessary and sufficient conditions for an outcome. The underlying logic is different from probabilistic reasoning, as the causal relationships identified are not inferred from the (statistical) likelihood of them being found by chance, but rather from comparing sets of conditions and their relationship to outcomes. It is thus more akin to the generative conceptualisations of causality in realist evaluation approaches [ 16 ]. QCA is a non-additive and non-linear method that emphasises diversity, acknowledging that different paths can lead to the same outcome. For evaluative research in complex systems [ 17 ], QCA therefore offers a number of benefits, including: that QCA can identify more than one causal pathway to an outcome (equifinality); that it accounts for conjunctural causation (where the presence or absence of conditions in relation to other conditions might be key); and that it is asymmetric with respect to the success or failure of outcomes. That is, that specific factors explain success does not imply that their absence leads to failure (causal asymmetry).

QCA was designed, and is typically used, to compare data from a medium N (10–50) series of cases that include those with and those without the (dichotomised) outcome. Conditions can be dichotomised in ‘crisp sets’ (csQCA) or represented in ‘fuzzy sets’ (fsQCA), where set membership is calibrated (either continuously or with cut offs) between two extremes representing fully in (1) or fully out (0) of the set. A third version, multi-value QCA (mvQCA), infrequently used, represents conditions as ‘multi-value sets’, with multinomial membership [ 18 ]. In calibrating set membership, the researcher specifies the critical qualitative anchors that capture differences in kind (full membership and full non-membership), as well as differences in degree in fuzzy sets (partial membership) [ 15 , 19 ]. Data on outcomes and conditions can come from primary or secondary qualitative and/or quantitative sources. Once data are assembled and coded, truth tables are constructed which “list the logically possible combinations of causal conditions” [ 15 ], collating the number of cases where those configurations occur to see if they share the same outcome. Analysis of these truth tables assesses first whether any conditions are individually necessary or sufficient to predict the outcome, and then whether any configurations of conditions are necessary or sufficient. Necessary conditions are assessed by examining causal conditions shared by cases with the same outcome, whilst identifying sufficient conditions (or combinations of conditions) requires examining cases with the same causal conditions to identify if they have the same outcome [ 15 ]. However, as Legewie argues, the presence of a condition, or a combination of conditions in actual datasets, are likely to be “‘quasi-necessary’ or ‘quasi-sufficient’ in that the causal relation holds in a great majority of cases, but some cases deviate from this pattern” [ 20 ]. Following reduction of the complexity of the model, the final model is tested for coverage (the degree to which a configuration accounts for instances of an outcome in the empirical cases; the proportion of cases belonging to a particular configuration) and consistency (the degree to which the cases sharing a combination of conditions align with a proposed subset relation). The result is an analysis of complex causation, “defined as a situation in which an outcome may follow from several different combinations of causal conditions” [ 15 ] illuminating the ‘causal recipes’, the causally relevant conditions or configuration of conditions that produce the outcome of interest.
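
To make the coverage and consistency measures concrete, here is a minimal crisp-set sketch in Python (the cases and conditions are invented, not drawn from any study reviewed here). For a configuration X and outcome Y, consistency is |X ∩ Y| / |X| and coverage is |X ∩ Y| / |Y|:

```python
# Each case is coded (condition A, condition B, outcome Y), all 0/1
cases = [
    (1, 1, 1), (1, 1, 1), (1, 0, 0), (0, 1, 0),
    (1, 1, 1), (1, 1, 0), (0, 0, 0), (1, 0, 1),
]

# Configuration of interest: A present AND B present ("A*B")
in_config = [a == 1 and b == 1 for a, b, _ in cases]
has_outcome = [y == 1 for _, _, y in cases]
overlap = sum(c and o for c, o in zip(in_config, has_outcome))

consistency = overlap / sum(in_config)  # how reliably A*B cases show Y
coverage = overlap / sum(has_outcome)   # how much of Y is accounted for by A*B

print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```

Here three of the four A*B cases show the outcome (consistency 0.75), and those three account for three of the four outcome cases (coverage 0.75); QCA software applies the same logic across every row of the truth table.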

QCA, then, has promise for addressing questions of complex causation, and recent calls for the greater use of QCA methods have come from a range of fields related to public health, including health research [ 17 ], studies of social interventions [ 7 ], and policy evaluation [ 21 , 22 ]. In making arguments for the use of QCA across these fields, researchers have also indicated some of the considerations that must be taken into account to ensure robust and credible analyses. There is a need, for instance, to ensure that ‘contradictions’, where cases with the same configurations show different outcomes, are resolved and reported [ 15 , 23 , 24 ]. Additionally, researchers must consider the ratio of cases to conditions, and limit the number of conditions to cases to ensure the validity of models [ 25 ]. Marx and Dusa, examining crisp set QCA, have provided some guidance to the ‘ceiling’ number of conditions which can be included relative to the number of cases to increase the probability of models being valid (that is, with a low probability of being generated through random data) [ 26 ].

There is now a growing body of published research in public health and related fields drawing on QCA methods. This is therefore a timely point to map the field and assess the potential of QCA as a method for contributing to the evidence base for what works in improving public health. To inform future methodological development of robust methods for addressing complexity in the evaluation of public health interventions, we undertook a systematic review to map existing evidence, identify gaps in, and strengths and weakness of, the QCA literature to date, and identify the implications of these for conducting and reporting future QCA studies for public health evaluation. We aimed to address the following specific questions [ 27 ]:

1. How is QCA used for public health evaluation? What populations, settings, methods used in source case studies, unit/s and level of analysis (‘cases’), and ‘conditions’ have been included in QCA studies?

2. What strengths and weaknesses have been identified by researchers who have used QCA to understand complex causation in public health evaluation research?

3. What are the existing gaps in, and strengths and weakness of, the QCA literature in public health evaluation, and what implications do these have for future research and reporting of QCA studies for public health?

Methods

This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 29 April 2019 ( CRD42019131910 ). A protocol was prepared in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) 2015 statement [ 28 ], and published in 2019 [ 27 ], where the methods are explained in detail. EPPI-Reviewer 4 was used to manage the process and undertake screening of abstracts [ 29 ].

Search strategy

We searched for peer-reviewed published papers in English, which used QCA methods to examine causal complexity in evaluating the implementation, uptake and/or effects of a public health intervention, in any region of the world, for any population. ‘Public health interventions’ were defined as those which aim to promote or protect health, or prevent ill health, in the population. No date exclusions were made, and papers published up to December 2019 were included.

Search strategies used the following phrases “Qualitative Comparative Analysis” and “QCA”, which were combined with the keywords “health”, “public health”, “intervention”, and “wellbeing”. See Additional file  1 for an example. Searches were undertaken on the following databases: PubMed, Web of Science, and Scopus. Additional searches were undertaken on Microsoft Academic and Google Scholar in December 2019, where the first pages of results were checked for studies that may have been missed in the initial search. No additional studies were identified. The list of included studies was sent to experts in QCA methods in health and related fields, including authors of included studies and/or those who had published on QCA methodology. This generated no additional studies within scope, but a suggestion to check the COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) database; this was searched, identifying one further study that met the inclusion criteria [ 30 ]. COMPASSS ( https://compasss.org/ ) collates publications of studies using comparative case analysis.

We excluded studies where no intervention was evaluated, which included studies that used QCA to examine public health infrastructure (i.e. staff training) without a specific health outcome, and papers that report on prevalence of health issues (i.e. prevalence of child mortality). We also excluded studies of health systems or services interventions where there was no public health outcome.

After retrieval, and removal of duplicates, titles and abstracts were screened by one of two authors (BH or JG). Double screening of all records was assisted by EPPI Reviewer 4’s machine learning function. Of the 1384 papers identified after duplicates were removed, we excluded 820 after review of titles and abstracts (Fig.  1 ). The excluded studies included: a large number of papers relating to ‘quantitative coronary angioplasty’ and some which referred to the Queensland Criminal Code (both of which are also abbreviated to ‘QCA’); papers that reported methodological issues but not empirical studies; protocols; and papers that used the phrase ‘qualitative comparative analysis’ to refer to qualitative studies that compared different sub-populations or cases within the study, but did not include formal QCA methods.

[Figure 1: Flow Diagram]

Full texts of the 51 remaining studies were screened by BH and JG for inclusion, with 10 papers double coded by both authors, with complete agreement. Uncertain inclusions were checked by the third author (MP). Of the full texts, 24 were excluded because: they did not report a public health intervention ( n  = 18); had used a methodology inspired by QCA, but had not undertaken a QCA ( n  = 2); were protocols or methodological papers only ( n  = 2); or were not published in peer-reviewed journals ( n  = 2) (see Fig.  1 ).

Data were extracted manually from the 27 remaining full texts by BH and JG. Two papers relating to the same research question and dataset were combined, such that analysis was by study ( n  = 26) not by paper. We retrieved data relating to: publication (journal, first author country affiliation, funding reported); the study setting (country/region setting, population targeted by the intervention(s)); intervention(s) studied; methods (aims, rationale for using QCA, crisp or fuzzy set QCA, other analysis methods used); data sources drawn on for cases (source [primary data, secondary data, published analyses], qualitative/quantitative data, level of analysis, number of cases, final causal conditions included in the analysis); outcome explained; and claims made about strengths and weaknesses of using QCA (see Table  1 ). Data were synthesised narratively, using thematic synthesis methods [ 31 , 32 ], with interventions categorised by public health domain and level of intervention.

Quality assessment

There are no reporting guidelines for QCA studies in public health, but there are a number of discussions of best practice in the methodological literature [ 25 , 26 , 33 , 34 ]. These discussions suggest several criteria for strengthening QCA methods that we used as indicators of methodological and/or reporting quality: evidence of familiarity of cases; justification for selection of cases; discussion and justification of set membership score calibration; reporting of truth tables; reporting and justification of solution formula; and reporting of consistency and coverage measures. For studies using csQCA, and claiming an explanatory analysis, we additionally identified whether the number of cases was sufficient for the number of conditions included in the model, using a pragmatic cut-off in line with Marx & Dusa’s guideline thresholds, which indicate how many cases are sufficient for given numbers of conditions to reject a 10% probability that models could be generated with random data [ 26 ].

Results

Overview of scope of QCA research in public health

Twenty-seven papers reporting 26 studies were included in the review (Table  1 ). The earliest was published in 2005, and 17 were published after 2015. The majority ( n  = 19) were published in public health/health promotion journals, with the remainder published in other health science ( n  = 3) or in social science/management journals ( n  = 4). The public health domain(s) addressed by each study were broadly coded by the main area of focus. They included nutrition/obesity ( n  = 8); physical activity (PA) (n = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation (n = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2); or general population health ( n  = 1). The majority ( n  = 24) of studies were conducted solely or predominantly in high-income countries (systematic reviews in general searched global sources, but commented that the overwhelming majority of studies were from high-income countries). Country settings included: any ( n  = 6); OECD countries ( n  = 3); USA ( n  = 6); UK ( n  = 6) and one each from Nepal, Austria, Belgium, Netherlands and Africa. These largely reflected the first author’s country affiliations in the UK ( n  = 13); USA ( n  = 9); and one each from South Africa, Austria, Belgium, and the Netherlands. All three studies primarily addressing health inequalities [ 35 , 36 , 37 ] were from the UK.

Eight of the interventions evaluated were individual-level behaviour change interventions (e.g. weight management interventions, case management, self-management for chronic conditions); eight evaluated policy/funding interventions; five explored settings-based health promotion/behaviour change interventions (e.g. schools-based physical activity intervention, store-based food choice interventions); three evaluated community empowerment/engagement interventions, and two studies evaluated networks and their impact on health outcomes.

Methods and data sets used

Fifteen studies used crisp sets (csQCA), 11 used fuzzy sets (fsQCA). No study used mvQCA. Eleven studies included additional analyses of the datasets drawn on for the QCA, including six that used qualitative approaches (narrative synthesis, case comparisons), typically to identify cases or conditions for populating the QCA; and four reporting additional statistical analyses (meta-regression, linear regression) to either identify differences overall between cases prior to conducting a QCA (e.g. [ 38 ]) or to explore correlations in more detail (e.g. [ 39 ]). One study used an additional Boolean configurational technique to reduce the number of conditions in the QCA analysis [ 40 ]. No studies reported aiming to compare the findings from the QCA with those from other techniques for evaluating the uptake or effectiveness of interventions, although some [ 41 , 42 ] were explicitly using the study to showcase the possibilities of QCA compared with other approaches in general. Twelve studies drew on primary data collected specifically for the study, with five of those additionally drawing on secondary data sets; five drew only on secondary data sets, and nine used data from systematic reviews of published research. Seven studies drew primarily on qualitative data, generally derived from interviews or observations.

Many studies were undertaken in the context of one or more trials, which provided evidence of effect. Within single trials, this was generally for a process evaluation, with cases being trial sites. Fernald et al’s study, for instance, was in the context of a trial of a programme to support primary care teams in identifying and implementing self-management support tools for their patients, which measured patient and health care provider level outcomes [ 43 ]. The QCA reported here used qualitative data from the trial to identify a set of necessary conditions for health care provider practices to implement the tools successfully. In studies drawing on data from systematic reviews, cases were always at the level of intervention or intervention component, with data included from multiple trials. Harris et al., for instance, undertook a mixed-methods systematic review of school-based self-management interventions for asthma, using meta-analysis methods to identify effective interventions and QCA methods to identify which intervention features were aligned with success [ 44 ].

The largest number of studies ( n  = 10), including all the systematic reviews, analysed cases at the level of the intervention, or a component of the intervention; seven analysed organisational level cases (e.g. school class, network, primary care practice); five analysed sub-national region level cases (e.g. state, local authority area), and two each analysed country or individual level cases. Sample sizes ranged from 10 to 131, with no study having small N (< 10) sample sizes, four having large N (> 50) sample sizes, and the majority (22) being medium N studies (in the range 10–50).

Rationale for using QCA

Most papers reported a rationale for using QCA that mentioned ‘complexity’ or ‘context’, including: noting that QCA is appropriate for addressing causal complexity or multiple pathways to outcome [ 37 , 43 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]; noting the appropriateness of the method for providing evidence on how context impacts on interventions [ 41 , 50 ]; or the need for a method that addressed causal asymmetry [ 52 ]. Three stated that the QCA was an ‘exploratory’ analysis [ 53 , 54 , 55 ]. In addition to the empirical aims, several papers (e.g. [ 42 , 48 ]) sought to demonstrate the utility of QCA, or to develop QCA methods for health research (e.g. [ 47 ]).

Reported strengths and weaknesses of approach

There was general agreement about the strengths of QCA. Specifically, that it was a useful tool to address complex causality, providing a systematic approach to understand the mechanisms at work in implementation across contexts [ 38 , 39 , 43 , 45 , 46 , 47 , 55 , 56 , 57 ], particularly as they relate to (in)effective intervention implementation [ 44 , 51 ] and the evaluation of interventions [ 58 ], or “where it is not possible to identify linearity between variables of interest and outcomes” [ 49 ]. Authors highlighted the strengths of QCA as providing possibilities for examining complex policy problems [ 37 , 59 ]; for testing existing as well as new theory [ 52 ]; and for identifying aspects of interventions which had not been previously perceived as critical [ 41 ] or which may have been missed when drawing on statistical methods that use, for instance, linear additive models [ 42 ]. The strengths of QCA in terms of providing useful evidence for policy were flagged in a number of studies, particularly where the causal recipes suggested that conventional assumptions about effectiveness were not confirmed. Blackman et al., for instance, in a series of studies exploring why unequal health outcomes had narrowed in some areas of the UK and not others, identified poorer outcomes in settings with ‘better’ contracting [ 35 , 36 , 37 ]; Harting found, contrary to theoretical assumptions about the necessary conditions for successful implementation of public health interventions, that a multisectoral network was not a necessary condition [ 30 ].

Weaknesses reported included the limitations of QCA in general for addressing complexity, as well as specific limitations with either the csQCA or the fsQCA methods employed. One general concern discussed across a number of studies was the problem of limited empirical diversity, which resulted in: limitations in the possible number of conditions included in each study, particularly with small N studies [ 58 ]; missing data on important conditions [ 43 ]; or limited reported diversity (where, for instance, data were drawn from systematic reviews, reflecting publication biases which limit reporting of ineffective interventions) [ 41 ]. Reported methodological limitations in small and intermediate N studies included concerns about the potential that case selection could bias findings [ 37 ].

In terms of potential for addressing causal complexity, the limitations of QCA for identifying unintended consequences, tipping points, and/or feedback loops in complex adaptive systems were noted [ 60 ], as were the potential limitations (especially in csQCA studies) of reducing complex conditions, drawn from detailed qualitative understanding, to binary conditions [ 35 ]. The impossibility of doing this was a rationale for using fsQCA in one study [ 57 ], where detailed knowledge of conditions is needed to make theoretically justified calibration decisions. However, others [ 47 ] make the case that csQCA provides more appropriate findings for policy: dichotomisation forces a focus on meaningful distinctions, including those related to decisions that practitioners/policy makers can action. There is, then, a potential trade-off in providing ‘interpretable results’, but ones which preclude potential for utilising more detailed information [ 45 ]. That QCA does not deal with probabilistic causation was noted [ 47 ].

Quality of published studies

Assessment of ‘familiarity with cases’ was made subjectively on the basis of study authors’ reports of their knowledge of the settings (empirical or theoretical) and the descriptions they provided in the published paper: overall, 14 were judged as sufficient, and 12 less than sufficient. Studies which included primary data were more likely to be judged as demonstrating familiarity ( n  = 10) than those drawing on secondary sources or systematic reviews, of which only two were judged as demonstrating familiarity. All studies justified how the selection of cases had been made; for those not using the full available population of cases, this was in general (appropriately) done theoretically: following previous research [ 52 ]; purposively to include a range of positive and negative outcomes [ 41 ]; or to include a diversity of cases [ 58 ]. In identifying conditions leading to effective/not effective interventions, one purposive strategy was to include a specified percentage or number of the most effective and least effective interventions (e.g. [ 36 , 40 , 51 , 52 ]). Discussion of calibration of set membership scores was judged adequate in 15 cases, and inadequate in 11. The majority ( n  = 21) included truth tables in the paper or supplementary material (or explicitly provided details of how to obtain them); fewer ( n  = 10) included raw data matrices. The majority ( n  = 21) also reported at least some detail on coverage (the number of cases with a particular configuration) and consistency (the percentage of similar causal configurations which result in the same outcome). Only five studies met all six of these quality criteria (evidence of familiarity with cases, justification of case selection, discussion of calibration, reporting truth tables, reporting raw data matrices, reporting coverage and consistency); a further six met at least five of them.

Of the csQCA studies which were not reporting an exploratory analysis, four appeared to have insufficient cases for the large number of conditions entered into at least one of the models reported, with a consequent risk to the validity of the QCA models [ 26 ].

Discussion

QCA has been widely used in public health research over the last decade to advance understanding of causal inference in complex systems. In this review of published evidence to date, we have identified studies using QCA to examine the configurations of conditions that lead to particular outcomes across contexts. As noted by most study authors, QCA methods promise advantages over probabilistic statistical techniques for examining causation where systems and/or interventions are complex, providing public health researchers with a method to test the multiple pathways (configurations of conditions), and necessary and sufficient conditions that lead to desired health outcomes.

The origins of QCA approaches are in comparative policy studies. Rihoux et al’s review of peer-reviewed journal articles using QCA methods published up to 2011 found the majority of published examples were from political science and sociology, with fewer than 5% of the 313 studies they identified coming from health sciences [ 61 ]. They also reported few examples of the method being used in policy evaluation and implementation studies [ 62 ]. In the decade since their review of the field [ 61 ], there has been an emerging body of evaluative work in health: we identified 26 studies in the field of public health alone, with the majority published in public health journals. Across these studies, QCA has been used for evaluative questions in a range of settings and public health domains to identify the conditions under which interventions are implemented and/or have evidence of effect for improving population health. All studies included a series of cases that included some with and some without the outcome of interest (such as behaviour change, successful programme implementation, or good vaccination uptake). The dominance of high-income countries in both intervention settings and author affiliations is disappointing, but reflects the disproportionate location of public health research in the global north more generally [ 63 ].

The largest single group of studies included were systematic reviews, using QCA to compare interventions (or intervention components) to identify successful (and non-successful) configurations of conditions across contexts. Here, the value of QCA lies in its potential for synthesis with quantitative meta-synthesis methods to identify the particular conditions or contexts in which interventions or components are effective. As Parrott et al. note, for instance, their meta-analysis could identify probabilistic effects of weight management programmes, while the QCA analysis enabled them to address the “role that the context of the [paediatric weight management] intervention has in influencing how, when, and for whom an intervention mix will be successful” [50]. However, using QCA to identify configurations of conditions that lead to effective or non-effective interventions across particular areas of population health is an application that moves away in some significant respects from the origins of the method.

First, researchers drawing on evidence from systematic reviews for their data are reliant largely on published evidence for information on conditions (such as the organisational contexts in which interventions were implemented, or the types of behaviour change theory utilised). Although guidance for describing interventions [64] advises that key aspects of context are included in reports, this may not include data on the full range of conditions that might be causally important, and review research teams may have limited knowledge of these ‘cases’ themselves.

Second, less successful interventions are less likely to be published, potentially limiting the diversity of cases, particularly of cases with unsuccessful outcomes. A strength of QCA is the separate analysis of conditions leading to positive and negative outcomes: this is precluded where there is insufficient evidence on negative outcomes [50].

Third, when including a range of types of intervention, it can be unclear whether the cases included are truly comparable. A QCA study requires a high degree of theoretical and pragmatic case knowledge on the part of the researcher to calibrate conditions to qualitative anchors: it is reliant on deep understanding of complex contexts, and a familiarity with how conditions interact within and across contexts. It is perhaps surprising that only seven of the studies included here clearly drew on qualitative data, given that QCA is primarily seen as a method requiring thick, detailed knowledge of cases, particularly when the aim is to understand complex causation [8]. Whilst research teams conducting QCA in the context of systematic reviews may have detailed understanding in general of interventions within their spheres of expertise, they are unlikely to have this for the whole range of cases, particularly where a diverse set of contexts (countries, organisational settings) is included. Making a theoretical case for the valid comparability of such a case series is crucial. There may, then, be limitations in the portability of QCA methods for studies entirely reliant on data from published evidence.

QCA was developed for small- and medium-N case series and, as in the field more broadly [61], the samples in our studies predominantly had between 10 and 50 cases. However, there is increasing interest in the method as an alternative or complementary technique to regression-oriented statistical methods for larger samples [65], such as from surveys, where detailed knowledge of cases is likely to be replaced by theoretical knowledge of relationships between conditions (see [23]). The two larger-N (> 100 cases) studies in our sample were an individual-level analysis of survey data [46, 47] and an analysis of intervention arms from a systematic review [50]. Larger sample sizes allow more conditions to be included in the analysis [23, 26], although for evaluative research, where the aim is developing a causal explanation rather than simply exploring patterns, there remains a limit to the number of conditions that can be included. As the number of conditions increases, so too does the number of possible configurations, increasing the chance of unique combinations and of generating spurious solutions with a high level of consistency. As a rule of thumb, once the number of conditions exceeds 6–8 (with up to 50 cases) or 10 (for larger samples), the credibility of solutions may be severely compromised [23].
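To make the rule of thumb concrete, the short sketch below (plain Python; the closing comment is our gloss on the limited-diversity problem, not a result from any included study) tabulates how the number of logically possible configurations doubles with each added binary condition.

```python
# With k binary conditions, a truth table has 2**k logically possible rows
# (configurations), so it quickly outgrows any realistic case series.
for k in range(2, 13):
    print(f"{k:2d} conditions -> {2 ** k:5d} possible configurations")

# E.g. with ~30 cases and 8 conditions (256 configurations), most truth-table
# rows are necessarily empty ("limited diversity"), and solutions risk resting
# on unique or spurious combinations despite high consistency scores.
```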

Strengths and weaknesses of the study

A systematic review has the potential advantages of transparency and rigour and, if not exhaustive, our search is likely to be representative of the body of research using QCA for evaluative public health research up to 2020. However, a limitation is the inevitable difficulty in operationalising a ‘public health’ intervention. Exclusions on scope are not straightforward, given that most social, environmental and political conditions impact on public health, and arguably a greater range of policy and social interventions (such as fiscal or trade policies), or of more clinical interventions, that have been the subject of QCA analyses could have been included. However, to keep the number of papers manageable, and to restrict our focus to those papers most directly applicable to (and likely to be read by) those in public health policy and practice, we operationalised ‘public health interventions’ as those likely to impact directly on population health outcomes, or on behaviours (such as increased physical activity) where there was good evidence for causal relationships with public health outcomes, and where the primary research question of the study examined the conditions leading to those outcomes. This review has, of necessity, therefore excluded a considerable body of evidence likely to be useful for public health practice in terms of planning interventions, such as studies on how to better target smoking cessation [66] or foster social networks [67], where the primary research question concerned the conditions leading to these outcomes rather than the outcomes of specific interventions. Similarly, there is a growing number of descriptive epidemiological studies using QCA to explore factors predicting outcomes across such diverse areas as lupus and quality of life [68]; length of hospital stay [69]; constellations of factors predicting injury [70]; or the role of austerity, crisis and recession in predicting public health outcomes [71]. Whilst there is undoubtedly useful information to be derived from studying the conditions that lead to particular public health problems, these studies were not directly evaluating interventions, so they were also excluded.

Restricting our search to publications in English and to peer-reviewed publications may have missed bodies of work from many regions, and has excluded research from non-governmental organisations using QCA methods in evaluation. As this is a rapidly evolving field, with relatively recent uptake in public health (all our included studies were published after 2005), our sample may not reflect the most recent advances in the area.

Implications for conducting and reporting QCA studies

This systematic review covered studies deploying an emergent methodology, one which has no reporting guidelines and has had, to date, a relatively low level of awareness among many potential evidence users in public health. Perhaps for this reason, many of the studies reviewed were relatively detailed on the methods used and on the rationale for utilising QCA.

We did not assess quality directly, but used indicators of good practice discussed in the QCA methodological literature, which is largely written for policy studies scholars and often post-dates the publication of the studies included in this review. It is also worth noting that, given the relatively recent development of QCA methods, methodological debate is still thriving on issues such as the reliability of causal inferences [72], alongside more general critiques of the usefulness of the method for policy decisions (see, for instance, [73]). The authors of studies included in this review also commented directly on methodological development: for instance, Thomas et al. suggest that QCA may benefit from methods development for sensitivity analyses around calibration decisions [42].

However, we selected quality criteria that, we argue, are relevant for public health research. Justifying the selection of cases, discussing and justifying the calibration of set membership, making data sets available, and reporting truth tables, consistency and coverage are all good practice in line with the usual requirements of transparency and credibility in methods. When QCA studies aim to provide explanations of outcomes (rather than exploring configurations), it is also vital that they are reported in ways that enhance the credibility of claims made, including justifying the number of conditions included relative to cases. Few of the studies published to date met all these criteria, at least in the papers included here (although additional material may have been provided in other publications). To improve the future discoverability and uptake of QCA methods in public health, and to strengthen the credibility of findings from these methods, we therefore suggest the following criteria be considered by authors and reviewers when reporting QCA studies that aim to provide causal evidence about the configurations of conditions leading to implementation or outcomes:

The paper title and abstract state the QCA design;

The sampling unit for the ‘case’ is clearly defined (e.g.: patient, specified geographical population, ward, hospital, network, policy, country);

The population from which the cases have been selected is defined (e.g.: all patients in a country with X condition, districts in X country, tertiary hospitals, all hospitals in X country, all health promotion networks in X province, European policies on smoking in outdoor places, OECD countries);

The rationale for selection of cases from the population is justified (e.g.: whole population, random selection, purposive sample);

There are sufficient cases to provide credible coverage across the number of conditions included in the model, and the rationale for the number of conditions included is stated;

Cases are comparable;

There is a clear justification for how choices of relevant conditions (or ‘aspects of context’) have been made;

There is sufficient transparency for replicability: in line with open science expectations, datasets should be available where possible; truth tables should be reported in publications, and reports of coverage and consistency provided.
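To illustrate the final criterion, here is a minimal sketch of the kind of truth table authors could report, written in Python with pandas; the condition names and crisp-set data are hypothetical, not drawn from any included study.

```python
import pandas as pd

# Invented calibrated data: 1 = condition/outcome present, 0 = absent.
df = pd.DataFrame({
    "strong_leadership": [1, 1, 0, 0, 1, 1, 0],
    "adequate_funding":  [1, 0, 1, 0, 1, 1, 1],
    "outcome":           [1, 0, 0, 0, 1, 1, 0],
})

conditions = ["strong_leadership", "adequate_funding"]

# Each truth-table row is one configuration of conditions; n_cases counts the
# cases exhibiting it, and consistency is the share of those with the outcome.
truth_table = (
    df.groupby(conditions)
      .agg(n_cases=("outcome", "size"), consistency=("outcome", "mean"))
      .reset_index()
)
print(truth_table)
```

Reporting a table like this, alongside the raw data matrix, lets readers check which configurations rest on many cases and which on only one or two.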

Implications for future research

In reviewing methods for evaluating natural experiments, Craig et al. focus on statistical techniques for enhancing causal inference, noting only that what they call ‘qualitative’ techniques (the cited references for these are all QCA studies) require “further studies … to establish their validity and usefulness” [2]. The studies included in this review have demonstrated that QCA is a feasible method when there are sufficient comparable cases for identifying the configurations of conditions under which interventions are effective (or not), or are implemented (or not). Given ongoing concerns in public health about how best to evaluate interventions across complex contexts and systems, this is promising. This review has also demonstrated the value of adding QCA methods to the toolbox of techniques for evaluating interventions such as public policies, health promotion programmes and organisational changes, whether or not they are implemented in a randomised way. Many of the studies in this review have clearly generated useful evidence; whether this evidence has had more impact on practice and policy, or is more valid, than evidence generated by other methods is not known. Validating the findings of a QCA study is perhaps as challenging as validating the findings from any other design, given the absence of gold-standard comparators. Comparisons of the findings of QCA with those from other methods are also typically constrained by the rather different research questions asked and the different purposes of the analysis. In our review, QCA was typically used alongside other methods to address different questions, rather than to compare methods. However, as the field develops, follow-up studies, which evaluate outcomes of interventions designed in line with conditions identified as causal in prior QCAs, might usefully contribute to validation.

This review was limited to public health evaluation research: other domains that would be useful to map include health systems/services interventions and studies used to design or target interventions. There is also an opportunity to broaden the scope of the field, particularly for addressing some of the more intractable challenges for public health research. Given the limitations in the evidence base on what works to address inequalities in health, for instance [74], QCA has potential here to help identify the conditions under which interventions do or do not exacerbate unequal outcomes, or the conditions that lead to differential uptake or impacts across sub-population groups. It is perhaps surprising that relatively few of the studies in this review included cases at the level of country or region, the traditional level for QCA studies. There may be scope for developing international comparisons for public health policy, using QCA methods at the case level (nation, sub-national region) of classic policy studies in the field. In the light of debate around the effectiveness of COVID-19 pandemic responses, comparative studies across jurisdictions might shed light on issues such as differential population responses to vaccine uptake or mask use, and these might in turn be considered as conditions in causal configurations leading to differential morbidity or mortality outcomes.

When should QCA be considered?

Public health evaluations typically assess the efficacy, effectiveness or cost-effectiveness of interventions and the processes and mechanisms through which they effect change. There is no perfect evaluation design for achieving these aims. As in other fields, the choice of design will in part depend on the availability of counterfactuals, the extent to which the investigator can control the intervention, and the range of potential cases and contexts [ 75 ], as well as political considerations, such as the credibility of the approach with key stakeholders [ 76 ]. There are inevitably ‘horses for courses’ [ 77 ]. The evidence from this review suggests that QCA evaluation approaches are feasible when there is a sufficient number of comparable cases with and without the outcome of interest, and when the investigators have, or can generate, sufficiently in-depth understanding of those cases to make sense of connections between conditions, and to make credible decisions about the calibration of set membership. QCA may be particularly relevant for understanding multiple causation (that is, where different configurations might lead to the same outcome), and for understanding the conditions associated with both lack of effect and effect. As a stand-alone approach, QCA might be particularly valuable for national and regional comparative studies of the impact of policies on public health outcomes. Alongside cluster randomised trials of interventions, or alongside systematic reviews, QCA approaches are especially useful for identifying core combinations of causal conditions for success and lack of success in implementation and outcome.

Conclusions

QCA is a relatively new approach for public health research, with promise for contributing to much-needed methodological development for addressing causation in complex systems. This review has demonstrated the large range of evaluation questions that have been addressed to date using QCA, including contributions to process evaluations of trials and to exploring the conditions leading to effectiveness (or not) in systematic reviews of interventions. There is potential for QCA to be more widely used in evaluative research, to identify the conditions under which interventions across contexts are implemented or not, and the configurations of conditions associated with effect or lack of evidence of effect. However, QCA will not be appropriate for all evaluations, and cannot be the only answer to addressing complex causality. For explanatory questions, the approach is most appropriate when there are enough comparable cases with and without the outcome of interest, and where the researchers have a detailed understanding of those cases and conditions. To improve the credibility of findings from QCA for public health evidence users, we recommend that studies are reported with the usual attention to methodological transparency and data availability, with the key details that allow readers to judge the credibility of the causal configurations reported. If the use of QCA continues to expand, it may be useful to develop more comprehensive consensus guidelines for its conduct and reporting.

Availability of data and materials

Full search strategies and extraction forms are available by request from the first author.

Abbreviations

COMPASSS: Comparative Methods for Systematic Cross-Case Analysis

csQCA: crisp set QCA

fsQCA: fuzzy set QCA

mvQCA: multi-value QCA

MRC: Medical Research Council

QCA: Qualitative Comparative Analysis

RCT: randomised control trial

PA: physical activity

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406. https://doi.org/10.1177/1356389015605205 .

Craig P, Katikireddi SV, Leyland A, Popham F. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38(1):39–56. https://doi.org/10.1146/annurev-publhealth-031816-044327 .

Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3. https://doi.org/10.1136/bmj.39569.510521.AD .

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. https://doi.org/10.1136/bmj.h1258 .

Pattyn V, Álamos-Concha P, Cambré B, Rihoux B, Schalembier B. Policy effectiveness through configurational and mechanistic lenses: lessons for concept development. J Comp Policy Anal Res Pract. 2020:1–18.

Byrne D. Evaluating complex social interventions in a complex world. Evaluation. 2013;19(3):217–28. https://doi.org/10.1177/1356389013495617 .

Gerrits L, Pagliarin S. Social and causal complexity in qualitative comparative analysis (QCA): strategies to account for emergence. Int J Soc Res Methodol. 2020:1–14. https://doi.org/10.1080/13645579.2020.1799636 .

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32. https://doi.org/10.1080/09581596.2017.1282603 .

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4. https://doi.org/10.1016/S0140-6736(17)31267-9 .

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95. https://doi.org/10.1186/s12916-018-1089-4 .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E and White M, on behalf of the Canadian Institutes of Health Research (CIHR)–National Institute for Health Research (NIHR) Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301. https://doi.org/10.1186/s12916-020-01777-6 .

Ragin CC. The comparative method: moving beyond qualitative and quantitative strategies. Berkeley: University of California Press; 1987.

Ragin CC. Redesigning social inquiry: fuzzy sets and beyond. Chicago: The University of Chicago Press; 2008. https://doi.org/10.7208/chicago/9780226702797.001.0001 .

Befani B, Ledermann S, Sager F. Realistic evaluation and QCA: conceptual parallels and an empirical application. Evaluation. 2007;13(2):171–92. https://doi.org/10.1177/1356389007075222 .

Kane H, Lewis MA, Williams PA, Kahwati LC. Using qualitative comparative analysis to understand and quantify translation and implementation. Transl Behav Med. 2014;4(2):201–8. https://doi.org/10.1007/s13142-014-0251-6 .

Cronqvist L, Berg-Schlosser D. Chapter 4: Multi-value QCA (mvQCA). In: Rihoux B, Ragin C, editors. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. Thousand Oaks: SAGE Publications, Inc.; 2009. p. 69–86. https://doi.org/10.4135/9781452226569 .

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225–39.

Legewie N. An introduction to applied data analysis with qualitative comparative analysis (QCA). Forum Qual Soc Res. 2013;14.  https://doi.org/10.17169/fqs-14.3.1961 .

Varone F, Rihoux B, Marx A. A new method for policy evaluation? In: Rihoux B, Grimm H, editors. Innovative comparative methods for policy analysis: beyond the quantitative-qualitative divide. Boston: Springer US; 2006. p. 213–36. https://doi.org/10.1007/0-387-28829-5_10 .

Gerrits L, Verweij S. The evaluation of complex infrastructure projects: a guide to qualitative comparative analysis. Cheltenham: Edward Elgar Pub; 2018. https://doi.org/10.4337/9781783478422 .

Greckhamer T, Misangyi VF, Fiss PC. The two QCAs: from a small-N to a large-N set theoretic approach. In: Configurational Theory and Methods in Organizational Research. Emerald Group Publishing Ltd.; 2013. p. 49–75. https://pennstate.pure.elsevier.com/en/publications/the-two-qcas-from-a-small-n-to-a-large-n-set-theoretic-approach . Accessed 16 Apr 2021.

Rihoux B, Ragin CC. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. Thousand Oaks: SAGE; 2009. https://doi.org/10.4135/9781452226569 .

Marx A. Crisp-set qualitative comparative analysis (csQCA) and model specification: benchmarks for future csQCA applications. Int J Mult Res Approaches. 2010;4(2):138–58. https://doi.org/10.5172/mra.2010.4.2.138 .

Marx A, Dusa A. Crisp-set qualitative comparative analysis (csQCA), contradictions and consistency benchmarks for model specification. Methodol Innov Online. 2011;6(2):103–48. https://doi.org/10.4256/mio.2010.0037 .

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252. https://doi.org/10.1186/s13643-019-1159-5 .

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647. https://doi.org/10.1136/bmj.g7647 .

EPPI-Reviewer 4.0: Software for research synthesis. UK: University College London; 2010.

Harting J, Peters D, Grêaux K, van Assema P, Verweij S, Stronks K, et al. Implementing multiple intervention strategies in Dutch public health-related policy networks. Health Promot Int. 2019;34(2):193–203. https://doi.org/10.1093/heapro/dax067 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1):45. https://doi.org/10.1186/1471-2288-8-45 .

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC Methods Programme; 2006.

Wagemann C, Schneider CQ. Qualitative comparative analysis (QCA) and fuzzy-sets: agenda for a research approach and a data analysis technique. Comp Sociol. 2010;9:376–96.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. https://doi.org/10.1017/CBO9781139004244 .

Blackman T, Dunstan K. Qualitative comparative analysis and health inequalities: investigating reasons for differential progress with narrowing local gaps in mortality. J Soc Policy. 2010;39(3):359–73. https://doi.org/10.1017/S0047279409990675 .

Blackman T, Wistow J, Byrne D. A qualitative comparative analysis of factors associated with trends in narrowing health inequalities in England. Soc Sci Med. 2011;72:1965–74.

Blackman T, Wistow J, Byrne D. Using qualitative comparative analysis to understand complex policy problems. Evaluation. 2013;19(2):126–40. https://doi.org/10.1177/1356389013484203 .

Glatman-Freedman A, Cohen M-L, Nichols KA, Porges RF, Saludes IR, Steffens K, et al. Factors affecting the introduction of new vaccines to poor nations: a comparative study of the haemophilus influenzae type B and hepatitis B vaccines. PLoS One. 2010;5(11):e13802. https://doi.org/10.1371/journal.pone.0013802 .

Ford EW, Duncan WJ, Ginter PM. Health departments’ implementation of public health’s core functions: an assessment of health impacts. Public Health. 2005;119(1):11–21. https://doi.org/10.1016/j.puhe.2004.03.002 .

Lucidarme S, Cardon G, Willem A. A comparative study of health promotion networks: configurations of determinants for network effectiveness. Public Manag Rev. 2016;18(8):1163–217. https://doi.org/10.1080/14719037.2015.1088567 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: re-analysis of a systematic review to identify pathways to effectiveness. Health Expect. 2018;21:574–84.

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3(1):67. https://doi.org/10.1186/2046-4053-3-67 .

Fernald DH, Simpson MJ, Nease DE, Hahn DL, Hoffmann AE, Michaels LC, et al. Implementing community-created self-management support tools in primary care practices: multimethod analysis from the INSTTEPP study. J Patient-Centered Res Rev. 2018;5(4):267–75. https://doi.org/10.17294/2330-0698.1634 .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, et al. Best practices in the Veterans Health Administration’s MOVE! weight management program. Am J Prev Med. 2011;41(5):457–64. https://doi.org/10.1016/j.amepre.2011.06.047 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) to evaluate a public health policy initiative in the north east of England. Polic Soc. 2013;32(4):289–301. https://doi.org/10.1016/j.polsoc.2013.10.002 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) in public health: a case study of a health improvement service for long-term incapacity benefit recipients. J Public Health. 2014;36(1):126–33. https://doi.org/10.1093/pubmed/fdt047 .

Brunton G, O’Mara-Eves A, Thomas J. The “active ingredients” for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis. J Adv Nurs. 2014;70(12):2847–60. https://doi.org/10.1111/jan.12441 .

McGowan VJ, Wistow J, Lewis SJ, Popay J, Bambra C. Pathways to mental health improvement in a community-led area-based empowerment initiative: evidence from the big local ‘communities in control’ study, England. J Public Health. 2019;41(4):850–7. https://doi.org/10.1093/pubmed/fdy192 .

Parrott JS, Henry B, Thompson KL, Ziegler J, Handu D. Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management. J Acad Nutr Diet. 2018;118:1526–1542.e3.

Kien C, Grillich L, Nussbaumer-Streit B, Schoberberger R. Pathways leading to success and non-success: a process evaluation of a cluster randomized physical activity health promotion program applying fuzzy-set qualitative comparative analysis. BMC Public Health. 2018;18(1):1386. https://doi.org/10.1186/s12889-018-6284-x .

Lubold AM. The effect of family policies and public health initiatives on breastfeeding initiation among 18 high-income countries: a qualitative comparative analysis research design. Int Breastfeed J. 2017;12(1):34. https://doi.org/10.1186/s13006-017-0122-0 .

Bianchi F, Garnett E, Dorsel C, Aveyard P, Jebb SA. Restructuring physical micro-environments to reduce the demand for meat: a systematic review and qualitative comparative analysis. Lancet Planet Health. 2018;2(9):e384–97. https://doi.org/10.1016/S2542-5196(18)30188-8 .

Bianchi F, Dorsel C, Garnett E, Aveyard P, Jebb SA. Interventions targeting conscious determinants of human behaviour to reduce the demand for meat: a systematic review with qualitative comparative analysis. Int J Behav Nutr Phys Act. 2018;15(1):102. https://doi.org/10.1186/s12966-018-0729-6 .

Hartmann-Boyce J, Bianchi F, Piernas C, Payne Riches S, Frie K, Nourse R, et al. Grocery store interventions to change food purchasing behaviors: a systematic review of randomized controlled trials. Am J Clin Nutr. 2018;107(6):1004–16. https://doi.org/10.1093/ajcn/nqy045 .

Burchett HED, Sutcliffe K, Melendez-Torres GJ, Rees R, Thomas J. Lifestyle weight management programmes for children: a systematic review using qualitative comparative analysis to identify critical pathways to effectiveness. Prev Med. 2018;106:1–12. https://doi.org/10.1016/j.ypmed.2017.08.025 .

Chiappone A. Technical assistance and changes in nutrition and physical activity practices in the National Early Care and education learning Collaboratives project, 2015–2016. Prev Chronic Dis. 2018;15. https://doi.org/10.5888/pcd15.170239 .

Kane H, Hinnant L, Day K, Council M, Tzeng J, Soler R, et al. Pathways to program success: a qualitative comparative analysis (QCA) of communities putting prevention to work case study programs. J Public Health Manag Pract JPHMP. 2017;23(2):104–11. https://doi.org/10.1097/PHH.0000000000000449 .

Roberts MC, Murphy T, Moss JL, Wheldon CW, Psek W. A qualitative comparative analysis of combined state health policies related to human papillomavirus vaccine uptake in the United States. Am J Public Health. 2018;108(4):493–9. https://doi.org/10.2105/AJPH.2017.304263 .

Breuer E, Subba P, Luitel N, Jordans M, Silva MD, Marchal B, et al. Using qualitative comparative analysis and theory of change to unravel the effects of a mental health intervention on service utilisation in Nepal. BMJ Glob Health. 2018;3(6):e001023. https://doi.org/10.1136/bmjgh-2018-001023 .

Rihoux B, Álamos-Concha P, Bol D, Marx A, Rezsöhazy I. From niche to mainstream method? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011. Polit Res Q. 2013;66:175–84.

Rihoux B, Rezsöhazy I, Bol D. Qualitative comparative analysis (QCA) in public policy analysis: an extensive review. Ger Policy Stud. 2011;7:9–82.

Plancikova D, Duric P, O’May F. High-income countries remain overrepresented in highly ranked public health journals: a descriptive analysis of research settings and authorship affiliations. Crit Public Health. 2020:1–7. https://doi.org/10.1080/09581596.2020.1722313 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. https://doi.org/10.1136/bmj.g1687 .

Fiss PC, Sharapov D, Cronqvist L. Opposites attract? Opportunities and challenges for integrating large-N QCA and econometric analysis. Polit Res Q. 2013;66:191–8.

Blackman T. Can smoking cessation services be better targeted to tackle health inequalities? Evidence from a cross-sectional study. Health Educ J. 2008;67(2):91–101. https://doi.org/10.1177/0017896908089388 .

Haynes P, Banks L, Hill M. Social networks amongst older people in OECD countries: a qualitative comparative analysis. J Int Comp Soc Policy. 2013;29(1):15–27. https://doi.org/10.1080/21699763.2013.802988 .

Rioja EC, Valero-Moreno S, Giménez-Espert M del C, Prado-Gascó V. The relations of quality of life in patients with lupus erythematosus: regression models versus qualitative comparative analysis. J Adv Nurs. 2019;75(7):1484–92. https://doi.org/10.1111/jan.13957 .

Dy SM, Garg P, Nyberg D, Dawson PB, Pronovost PJ, Morlock L, et al. Critical pathway effectiveness: assessing the impact of patient, hospital care, and pathway characteristics using qualitative comparative analysis. Health Serv Res. 2005;40(2):499–516. https://doi.org/10.1111/j.1475-6773.2005.0r370.x .

Melinder KA, Andersson R. The impact of structural factors on the injury rate in different European countries. Eur J Pub Health. 2001;11(3):301–8. https://doi.org/10.1093/eurpub/11.3.301 .

Saltkjel T, Holm Ingelsrud M, Dahl E, Halvorsen K. A fuzzy set approach to economic crisis, austerity and public health. Part II: How are configurations of crisis and austerity related to changes in population health across Europe? Scand J Public Health. 2017;45(18_suppl):48–55.

Baumgartner M, Thiem A. Often trusted but never (properly) tested: evaluating qualitative comparative analysis. Sociol Methods Res. 2020;49(2):279–311. https://doi.org/10.1177/0049124117701487 .

Tanner S. QCA is of questionable value for policy research. Polic Soc. 2014;33(3):287–98. https://doi.org/10.1016/j.polsoc.2014.08.003 .

Mackenbach JP. Tackling inequalities in health: the need for building a systematic evidence base. J Epidemiol Community Health. 2003;57(3):162. https://doi.org/10.1136/jech.57.3.162 .

Stern E, Stame N, Mayne J, Forss K, Davies R, Befani B. Broadening the range of designs and methods for impact evaluations. Technical report. London: DFID; 2012.

Pattyn V. Towards appropriate impact evaluation methods. Eur J Dev Res. 2019;31(2):174–9. https://doi.org/10.1057/s41287-019-00202-w .

Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003;57(7):527–9. https://doi.org/10.1136/jech.57.7.527 .

Acknowledgements

The authors would like to thank and acknowledge the support of Sara Shaw, PI of MR/S014632/1 and the rest of the Triple C project team, the experts who were consulted on the final list of included studies, and the reviewers who provided helpful feedback on the original submission.

Funding

This study was funded by MRC: MR/S014632/1 ‘Case study, context and complex interventions (Triple C): development of guidance and publication standards to support case study research’. The funder played no part in the conduct or reporting of the study. JG is supported by a Wellcome Trust Centre grant 203109/Z/16/Z.

Author information

Authors and Affiliations

Institute for Culture and Society, Western Sydney University, Sydney, Australia

Benjamin Hanckel

Department of Public Health, Environments and Society, LSHTM, London, UK

Mark Petticrew

UCL Institute of Education, University College London, London, UK

James Thomas

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

Contributions

BH - research design, data acquisition, data extraction and coding, data interpretation, paper drafting; JT – research design, data interpretation, contributing to paper; MP – funding acquisition, research design, data interpretation, contributing to paper; JG – funding acquisition, research design, data extraction and coding, data interpretation, paper drafting. All authors approved the final version.

Corresponding author

Correspondence to Judith Green.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

All authors declare they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Example search strategy.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hanckel, B., Petticrew, M., Thomas, J. et al. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health 21 , 877 (2021). https://doi.org/10.1186/s12889-021-10926-2

Received : 03 February 2021

Accepted : 22 April 2021

Published : 07 May 2021

DOI : https://doi.org/10.1186/s12889-021-10926-2

  • Public health
  • Intervention
  • Systematic review

Original Research Article

A Comparative Analysis of Student Performance in an Online vs. Face-to-Face Environmental Science Course From 2009 to 2016

  • Department of Biology, Fort Valley State University, Fort Valley, GA, United States

A growing number of students are now opting for online classes. They find the traditional classroom modality restrictive, inflexible, and impractical. In this age of technological advancement, schools can now provide effective classroom teaching via the Web. This shift in pedagogical medium is forcing academic institutions to rethink how they want to deliver their course content. The overarching purpose of this research was to determine which teaching method proved more effective over the 8-year period. The scores of 548 students, 401 traditional students and 147 online students, in an environmental science class were used to determine which instructional modality generated better student performance. In addition to the overarching objective, we also examined score variabilities between genders and classifications to determine if teaching modality had a greater impact on specific groups. No significant differences in student performance between online and face-to-face (F2F) learners were found overall, with respect to gender, or with respect to class rank. These data demonstrate the ability to similarly translate environmental science concepts for non-STEM majors in both traditional and online platforms irrespective of gender or class rank. A potential exists for increasing the number of non-STEM majors engaged in citizen science using the flexibility of online learning to teach environmental science core concepts.

Introduction

The advent of online education has made it possible for students with busy lives and limited flexibility to obtain a quality education. As opposed to traditional classroom teaching, Web-based instruction has made it possible to offer classes worldwide through a single Internet connection. Although it boasts several advantages over traditional education, online instruction still has its drawbacks, including limited communal synergies. Still, online education seems to be the path many students are taking to secure a degree.

This study compared the effectiveness of online vs. traditional instruction in an environmental science class. Using a single indicator, we attempted to see if student performance was affected by instructional medium. This study sought to compare online and F2F teaching on three levels: pure modality, gender, and class rank. Through these comparisons, we investigated whether one teaching modality was significantly more effective than the other. Although there were limitations to the study, this examination was conducted to provide us with additional measures to determine if students performed better in one environment over another ( Mozes-Carmel and Gold, 2009 ).

The methods, procedures, and operationalization tools used in this assessment can be expanded upon in future quantitative, qualitative, and mixed method designs to further analyze this topic. Moreover, the results of this study serve as a backbone for future meta-analytical studies.

Origins of Online Education

Computer-assisted instruction is changing the pedagogical landscape as an increasing number of students are seeking online education. Colleges and universities are now touting the efficiencies of Web-based education and are rapidly implementing online classes to meet student needs worldwide. One study reported “increases in the number of online courses given by universities have been quite dramatic over the last couple of years” ( Lundberg et al., 2008 ). Think tanks are also disseminating statistics on Web-based instruction. “In 2010, the Sloan Consortium found a 17% increase in online students from the years before, beating the 12% increase from the previous year” ( Keramidas, 2012 ).

Contrary to popular belief, online education is not a new phenomenon. The first correspondence and distance learning educational programs were initiated in the mid-1800s by the University of London. This model of educational learning was dependent on the postal service and therefore wasn't seen in America until the late nineteenth century. It was in 1873 that what is considered the first official correspondence educational program, known as the “Society to Encourage Home Studies,” was established in Boston, Massachusetts. Since then, non-traditional study has grown into what is today considered a viable online instructional modality. Technological advancement indubitably helped improve the speed and accessibility of distance learning courses; students worldwide can now attend classes from the comfort of their own homes.

Qualities of Online and Traditional Face to Face (F2F) Classroom Education

Online and traditional education share many qualities. Students are still required to attend class, learn the material, submit assignments, and complete group projects, while teachers still have to design curricula, maximize instructional quality, answer class questions, motivate students to learn, and grade assignments. Despite these basic similarities, there are many differences between the two modalities. Traditionally, classroom instruction is known to be teacher-centered and requires passive learning by the student, while online instruction is often student-centered and requires active learning.

In teacher-centered, or passive learning, the instructor usually controls classroom dynamics. The teacher lectures and comments, while students listen, take notes, and ask questions. In student-centered, or active learning, the students usually determine classroom dynamics as they independently analyze the information, construct questions, and ask the instructor for clarification. In this scenario, the teacher, not the student, is listening, formulating, and responding ( Salcedo, 2010 ).

In education, change comes with questions. Despite all current reports championing online education, researchers are still questioning its efficacy. Research is still being conducted on the effectiveness of computer-assisted teaching. Cost-benefit analysis, student experience, and student performance are now being carefully considered when determining whether online education is a viable substitute for classroom teaching. This decision process will most probably carry into the future as technology improves and as students demand better learning experiences.

Thus far, “literature on the efficacy of online courses is expansive and divided” ( Driscoll et al., 2012 ). Some studies favor traditional classroom instruction, stating “online learners will quit more easily” and “online learning can lack feedback for both students and instructors” ( Atchley et al., 2013 ). Because of these shortcomings, student retention, satisfaction, and performance can be compromised. Like traditional teaching, distance learning also has its apologists who aver online education produces students who perform as well or better than their traditional classroom counterparts ( Westhuis et al., 2006 ).

The advantages and disadvantages of both instructional modalities need to be fully fleshed out and examined to truly determine which medium generates better student performance. Both modalities have been proven to be relatively effective, but, as mentioned earlier, the question to be asked is if one is truly better than the other.

Student Need for Online Education

With technological advancement, learners now want quality programs they can access from anywhere and at any time. Because of these demands, online education has become a viable, alluring option to business professionals, stay-at home-parents, and other similar populations. In addition to flexibility and access, multiple other face value benefits, including program choice and time efficiency, have increased the attractiveness of distance learning ( Wladis et al., 2015 ).

First, prospective students want to be able to receive a quality education without having to sacrifice work time, family time, and travel expense. Instead of having to be at a specific location at a specific time, online educational students have the freedom to communicate with instructors, address classmates, study materials, and complete assignments from any Internet-accessible point ( Richardson and Swan, 2003 ). This type of flexibility grants students much-needed mobility and, in turn, helps make the educational process more enticing. According to Lundberg et al. (2008) “the student may prefer to take an online course or a complete online-based degree program as online courses offer more flexible study hours; for example, a student who has a job could attend the virtual class watching instructional film and streaming videos of lectures after working hours.”

Moreover, more study time can lead to better class performance—more chapters read, better quality papers, and more group project time. Studies on the relationship between study time and performance are limited; however, it is often assumed the online student will use any surplus time to improve grades ( Bigelow, 2009 ). It is crucial to mention the link between flexibility and student performance as grades are the lone performance indicator of this research.

Second, online education also offers more program choices. With traditional classroom study, students are forced to take courses only at universities within feasible driving distance or move. Web-based instruction, on the other hand, grants students electronic access to multiple universities and course offerings ( Salcedo, 2010 ). Therefore, students who were once limited to a few colleges within their immediate area can now access several colleges worldwide from a single convenient location.

Third, with online teaching, students who usually don't participate in class may now voice their opinions and concerns. As they are not in a classroom setting, quieter students may feel more comfortable partaking in class dialogue without being recognized or judged. This, in turn, may increase average class scores ( Driscoll et al., 2012 ).

Benefits of Face-to-Face (F2F) Education via Traditional Classroom Instruction

The other modality, classroom teaching, is a well-established instructional medium in which teaching style and structure have been refined over several centuries. Face-to-face instruction has numerous benefits not found in its online counterpart ( Xu and Jaggars, 2016 ).

First and, perhaps most importantly, classroom instruction is extremely dynamic. Traditional classroom teaching provides real-time face-to-face instruction and sparks innovative questions. It also allows for immediate teacher response and more flexible content delivery. Online instruction dampens the learning process because students must limit their questions to blurbs, then grant the teacher and fellow classmates time to respond ( Salcedo, 2010 ). Over time, however, online teaching will probably improve, enhancing classroom dynamics and bringing students face-to-face with their peers/instructors. For now, though, face-to-face instruction provides dynamic learning attributes not found in Web-based teaching ( Kemp and Grieve, 2014 ).

Second, traditional classroom learning is a well-established modality. Some students are opposed to change and view online instruction negatively. These students may be technophobes, more comfortable with sitting in a classroom taking notes than sitting at a computer absorbing data. Other students may value face-to-face interaction, pre- and post-class discussions, communal learning, and organic student-teacher bonding ( Rovai and Jordan, 2004 ). They may see the Internet as an impediment to learning. If not comfortable with the instructional medium, some students may shun classroom activities; their grades might slip and their educational interest might vanish. Students, however, may eventually adapt to online education. With more universities employing computer-based training, students may be forced to take only Web-based courses. True as this may be, it doesn't eliminate the fact that some students prefer classroom intimacy.

Third, face-to-face instruction doesn't rely upon networked systems. In online learning, the student is dependent upon access to an unimpeded Internet connection. If technical problems occur, online students may not be able to communicate, submit assignments, or access study material. This problem, in turn, may frustrate the student, hinder performance, and discourage learning.

Fourth, campus education provides students with both accredited staff and research libraries. Students can rely upon administrators to aid in course selection and provide professorial recommendations. Library technicians can help learners edit their papers, locate valuable study material, and improve study habits. Research libraries may provide materials not accessible by computer. In all, the traditional classroom experience gives students important auxiliary tools to maximize classroom performance.

Fifth, traditional classroom degrees trump online educational degrees in terms of hiring preferences. Many academic and professional organizations do not consider online degrees on par with campus-based degrees ( Columbaro and Monaghan, 2009 ). Often, prospective hiring bodies think Web-based education is a watered-down, simpler means of attaining a degree, often citing poor curriculums, unsupervised exams, and lenient homework assignments as detriments to the learning process.

Finally, research shows online students are more likely to quit class if they do not like the instructor, the format, or the feedback. Because they work independently, relying almost wholly upon self-motivation and self-direction, online learners may be more inclined to withdraw from class if they do not get immediate results.

The classroom setting provides more motivation, encouragement, and direction. Even if a student wanted to quit during the first few weeks of class, he/she may be deterred by the instructor and fellow students. F2F instructors may be able to adjust the structure and teaching style of the class to improve student retention ( Kemp and Grieve, 2014 ). With online teaching, instructors are limited to electronic correspondence and may not pick up on verbal and non-verbal cues.

Both F2F and online teaching have their pros and cons. More studies comparing the two modalities to achieve specific learning outcomes in participating learner populations are required before well-informed decisions can be made. This study examined the two modalities over eight (8) years on three different levels. Based on the aforementioned information, the following research questions resulted.

RQ1: Are there significant differences in academic performance between online and F2F students enrolled in an environmental science course?

RQ2: Are there gender differences between online and F2F student performance in an environmental science course?

RQ3: Are there significant differences between the performance of online and F2F students in an environmental science course with respect to class rank?

The results of this study are intended to edify teachers, administrators, and policymakers on which medium may work best.

Methodology

Participants

The study sample consisted of 548 FVSU students who completed the Environmental Science class between 2009 and 2016. The final course grades of the participants served as the primary comparative factor in assessing performance differences between online and F2F instruction. Of the 548 total participants, 147 were online students while 401 were traditional students. This disparity was considered a limitation of the study. Of the 548 total students, 246 were male, while 302 were female. The study also used students from all four class ranks. There were 187 freshmen, 184 sophomores, 76 juniors, and 101 seniors. This was a convenience, non-probability sample so the composition of the study set was left to the discretion of the instructor. No special preferences or weights were given to students based upon gender or rank. Each student was considered a single, discrete entity or statistic.

All sections of the course were taught by a full-time biology professor at FVSU. The professor had over 10 years of teaching experience in both online and F2F modalities, and was considered an outstanding tenured instructor with strong communication and management skills.

The F2F class met twice weekly in an on-campus classroom, with each class lasting 1 h and 15 min. The online class covered the same material as the F2F class, but was conducted wholly online using the Desire to Learn (D2L) e-learning system. Online students were expected to spend as much time studying as their F2F counterparts; however, no tracking measure was implemented to gauge e-learning study time. The professor combined textbook learning, lecture and class discussion, collaborative projects, and assessment tasks to engage students in the learning process.

This study did not differentiate between part-time and full-time students. Therefore, many part-time students may have been included in this study. This study also did not differentiate between students registered primarily at FVSU or at another institution. Therefore, many students included in this study may have used FVSU as an auxiliary institution to complete their environmental science class requirement.

Test Instruments

In this study, student performance was operationalized by final course grades. The final course grade was derived from test, homework, class participation, and research project scores. The four aforementioned assessments were valid and relevant; they were useful in gauging student ability and generating objective performance measurements. The final grades were converted from numerical scores to traditional letter grades.

Data Collection Procedures

The sample of 548 student grades was obtained from FVSU's Office of Institutional Research, Planning and Effectiveness (OIRPE). The OIRPE released the grades to the instructor with the expectation that the instructor would maintain confidentiality and not disclose said information to third parties. After the data were obtained, the instructor analyzed and processed them through SPSS software to calculate specific values. These computed values were subsequently used to draw conclusions and evaluate the hypotheses.

Summary of the Results: The chi-square analysis showed no significant difference in student performance between online and face-to-face (F2F) learners [χ²(4, N = 548) = 6.531, p > 0.05]. The independent-samples t-test showed no significant difference in student performance between online and F2F learners with respect to gender [t(145) = 1.42, p = 0.122]. The two-way ANOVA showed no significant difference in student performance between online and F2F learners with respect to class rank ( Girard et al., 2016 ).


Research Question 1

The first research question investigated whether there was a difference in student performance between F2F and online learners.

To investigate the first research question, we used a chi-square test of independence. The chi-square test is particularly useful for this type of comparison because it allows us to determine whether the association between teaching modality and performance observed in our sample can be generalized to the larger population. The test provides a numerical result that can be used to determine whether there is a statistically significant difference between the two groups.

Table 1 shows the mean and SD by modality and by gender. It is a general breakdown of the numbers, intended to visually elucidate any differences between scores and deviations. The mean final numerical score is similar for both modalities, with F2F learners scoring 69.35 and online learners scoring 68.64, and both groups had fairly similar SDs. A stronger difference can be seen between the GPAs earned by men and women: men had a 3.23 mean GPA while women had a 2.9 mean GPA, with almost identical SDs. Even though the 0.33 numerical difference may look fairly insignificant, note that a 3.23 is approximately a B+ while a 2.9 is approximately a B. Given a categorical range of only A to F, a plus differential can be considered meaningful.


Table 1. Means and standard deviations for the 8-semester Environmental Science data set.

The mean grade for men in the environmental science classes ( M = 3.23, N = 246, SD = 1.19) was higher than the mean grade for women ( M = 2.9, N = 302, SD = 1.20) (see Table 1 ).

First, a chi-square analysis was performed using SPSS to determine if there was a statistically significant difference in grade distribution between online and F2F students. Students enrolled in the F2F class earned the highest share of A's (63.60%) as compared to online students (36.40%). Table 2 displays grade distribution by course delivery modality. The difference in student performance was not statistically significant, χ²(4, N = 548) = 6.531, p > 0.05. Table 3 shows the gender difference in student performance between online and F2F students.


Table 2. Contingency table for students' academic performance ( N = 548).


Table 3. Gender × performance crosstabulation.

Table 2 shows the performance of online and F2F students by grade category. As can be seen, F2F students recorded the highest counts in each grade category. However, this disparity was mostly due to the higher number of F2F students in the study: there were 401 F2F students as opposed to just 147 online students. When grades are viewed relative to modality size, the percentage differences between the respective learners are smaller ( Tanyel and Griffin, 2014 ). For example, F2F learners earned 28 A's (63.60% of total A's earned) while online learners earned 16 A's (36.40% of total A's earned). However, when the A grade is viewed with respect to the total learners in each modality, 28 of the 401 F2F students (6.9%) earned A's as compared to 16 of 147 (10.9%) online learners. In this case, online learners scored relatively higher in this grade category. The latter measure (grade total as a percent of modality total) is a better reflection of respective performance levels.

Given a critical value of 9.488 (α = 0.05, d.f. = 4), our chi-squared measure of 6.531 fell below the significance threshold. The corresponding p-value of 0.163 was likewise greater than our significance level of 0.05. We therefore failed to reject the null hypothesis: there is no statistically significant difference between the two groups in terms of performance scores.

Research Question 2

The second research question asked whether online and F2F student performance varied with respect to gender. Table 3 shows the gender difference in student performance between online and face-to-face students. We used a chi-square test, with α = 0.05 as the criterion for significance, to determine whether there were differences in online and F2F student performance with respect to gender. The result shows that there is no statistically significant difference between men and women in terms of performance.

Research Question 3

The third research question asked whether online and F2F student performance varied with respect to class rank.

Table 4 shows the mean scores and standard deviations of freshman, sophomore, junior, and senior students for both online and F2F student performance. To test the third hypothesis, we used a two-way ANOVA. The ANOVA is a useful appraisal tool for this hypothesis because it tests the differences between multiple means: instead of testing specific pairwise differences, it generates a much broader picture of average differences. As can be seen in Table 5, the ANOVA for this hypothesis indicates there is no significant difference between online and F2F learners with respect to class rank. Therefore, we failed to reject the null hypothesis.


Table 4. Descriptive analysis of student performance by class rank and gender.

The results of the ANOVA, presented in Table 5, show there is no significant difference in performance between online and F2F students with respect to class rank.


Table 5. Analysis of variance (ANOVA) for online and F2F performance by class rank.


Discussion and Social Implications

The results of the study show there is no significant difference in performance between online and traditional classroom students with respect to modality, gender, or class rank in a science concepts course for non-STEM majors. Although there were sample size issues and study limitations, this assessment shows that online learners and classroom learners perform at the same level. This conclusion indicates teaching modality may not matter as much as other factors. Given the relatively sparse data comparing pedagogical modalities for specific student population characteristics, this study could be considered innovative. In the current literature, we have not found a study of this nature comparing online and F2F non-STEM majors with respect to three separate factors (medium, gender, and class rank) and the ability to learn science concepts and achieve learning outcomes. Previous studies have compared online vs. F2F learning on other factors (including specific courses, costs, and qualitative analysis, but rarely regarding outcomes relevant to the population characteristics of learners in a specific science concepts course over many years) ( Liu, 2005 ).

In a study evaluating the transformation of a graduate-level course for teachers, the academic quality of the online course and its learning outcomes were evaluated. The study assessed the ability of course instructors to design the course for online delivery and develop various interactive multimedia modules at a cost savings to the respective university. The online learning platform proved effective at conveying the material: tested students achieved learning outcomes comparable to those of students taking the F2F course ( Herman and Banister, 2007 ).

Another study evaluated the similarities and differences in F2F and online learning in a non-STEM course, “Foundations of American Education,” and overall course satisfaction among students enrolled in either of the two modalities. F2F and online course satisfaction was analyzed both qualitatively and quantitatively. When quantitative feedback was analyzed, online course satisfaction was lower than F2F satisfaction; when qualitative data were used, course satisfaction was similar between modalities ( Werhner, 2010 ). The course satisfaction data and feedback were used to suggest a number of recommendations for effective online learning in the specific course. The researcher concluded that there was no difference in the learning success of students enrolled in the online vs. F2F course, stating that “in terms of learning, students who apply themselves diligently should be successful in either format” ( Dell et al., 2010 ). This conclusion presumes that the “issues surrounding class size are under control and that the instructor has a course load that makes the intensity of the online course workload feasible”; the authors conclude that the workload for online courses is greater than for F2F courses ( Stern, 2004 ).

In “A Meta-Analysis of Three Types of Interaction Treatments in Distance Education,” Bernard et al. (2009) conducted a meta-analysis evaluating three types of instructional and/or media conditions designed into distance education (DE) courses, known as interaction treatments (ITs): student–student (SS), student–teacher (ST), and student–content (SC) interactions, compared against other DE instructional/interaction treatments. The researchers found a strong association between the integration of these ITs into distance education courses and achievement compared with blended or F2F modalities of learning. The authors speculated that this was due to increased cognitive engagement produced by these three interaction treatments ( Larson and Sung, 2009 ).

Other studies evaluating students' preferences (but not efficacy) for online vs. F2F learning found that students prefer online learning when it is offered, depending on the course topic and the online course technology platform ( Ary and Brune, 2011 ). F2F learning was preferred when courses were offered late morning or early afternoon 2–3 days/week. A significant preference for online learning emerged across all undergraduate course topics (American history and government, humanities, natural sciences, social and behavioral sciences, diversity, and international dimension) except English composition and oral communication. Students also expressed a preference for analytical and quantitative thought courses, though not with statistically significant results ( Mann and Henneberry, 2014 ). In this research study, we examined three hypotheses comparing online and F2F learning. In each case, we failed to reject the null hypothesis. Therefore, at no level of examination did we find a significant difference between online and F2F learners. This finding is important because it suggests traditional-style teaching, with its heavy emphasis on interpersonal classroom dynamics, may one day be replaced by online instruction. According to Daymont and Blau (2008) , online learners, regardless of gender or class rank, learn as much from electronic interaction as they do from personal interaction. Kemp and Grieve (2014) also found that both online and F2F learning for psychology students led to similar academic performance. Given the cost efficiencies and flexibility of online education, Web-based instructional systems may rise rapidly.

A number of studies support the economic benefits of online vs. F2F learning, despite differences in social constructs and educational support provided by governments. In a study by Li and Chen (2012) , higher education institutions in China benefited the most from two of four outputs (research outputs and distance education), with teaching via distance education at both the undergraduate and graduate levels more profitable than F2F teaching. Zhang and Worthington (2017) reported an increasing cost benefit for the use of distance education over F2F instruction at 37 Australian public universities over the 9 years from 2003 to 2012. Maloney et al. (2015) and Kemp and Grieve (2014) also found significant savings in higher education when using online learning platforms vs. F2F learning. In the West, the cost efficiency of online learning has been demonstrated by several research studies ( Craig, 2015 ). Studies by Agasisti and Johnes (2015) and Bartley and Golek (2004) both found the cost benefits of online learning significantly greater than those of F2F learning at U.S. institutions.

Knowing there is no significant difference in student performance between the two mediums, institutions of higher education may make a gradual shift away from traditional instruction and implement Web-based teaching to capture a larger worldwide audience. If administered correctly, this shift to Web-based teaching could lead to a larger student population, more cost efficiencies, and more university revenue.

The social implications of this study should be touted; however, several concerns regarding generalizability need to be taken into account. First, this study focused solely on students from an environmental studies class for non-STEM majors. The ability to effectively prepare students for scientific professions without hands-on experimentation has been contended. Because this course functions to communicate scientific concepts but does not require a laboratory-based component, these results may not translate into similar performance for students in an online STEM course for STEM majors, or in an online course with an online laboratory-based co-requisite, when compared to students taking traditional STEM courses for STEM majors. A few studies suggest the landscape may be changing with the ability to effectively train students in STEM core concepts via online learning. Biel and Brame (2016) reported successfully translating the academic success of F2F undergraduate biology courses to online biology courses. However, of the large-scale course sections analyzed, students in two F2F sections outperformed students in online sections, while three comparisons found no significant difference. A study by Beale et al. (2014) comparing F2F learning with hybrid learning in an embryology course found no difference in overall student performance, and the bottom quartile of students showed no differential effect of the delivery method on examination scores. Further, a study from Lorenzo-Alvarez et al. (2019) found that radiology education in an online learning platform resulted in academic outcomes similar to F2F learning. Larger-scale research is needed to determine the effectiveness of STEM online learning and outcomes assessments, including workforce development results.

In our research study, it is possible the study participants may have been more knowledgeable about environmental science than about other subjects. Therefore, it should be noted this study focused solely on students taking this one particular class. Given the results, this course presents a unique potential for increasing the number of non-STEM majors engaged in citizen science using the flexibility of online learning to teach environmental science core concepts.

Second, the operationalization measure of “grade” or “score” to determine performance level may be lacking in scope and depth. The grades received in a class may not necessarily show actual ability, especially if the weights were adjusted to heavily favor group tasks and writing projects. Other performance indicators may be better suited to properly assess student performance. A single exam containing both multiple-choice and essay questions may be a better operationalization indicator of student performance, as it would provide both a quantitative and a qualitative measure of subject matter comprehension.

Third, the nature of the student sample must be further dissected. It is possible the online students in this study had more time than their counterparts to learn the material and generate better grades ( Summers et al., 2005 ); the inverse may hold true as well. Because this was a convenience, non-probability sample, the chances of getting a fair cross section of the student population were limited. In future studies, greater emphasis must be placed on selecting study participants who truly reflect the population's proportions, types, and skill levels.

This study was relevant because it addressed an important educational topic; it compared two student groups on multiple levels using a single operationalized performance measure. More studies of this nature, however, need to be conducted before truly positing that online and F2F teaching generate the same results. Future studies need to eliminate spurious causal relationships and increase generalizability. This will maximize the chances of generating definitive, untainted results. This scientific inquiry and comparison into online and traditional teaching will undoubtedly garner more attention in the coming years.

Our study compared learning via F2F vs. online modalities in teaching an environmental science course, additionally evaluating the factors of gender and class rank. These data demonstrate that environmental science concepts can be conveyed to non-STEM majors similarly well on both traditional and online platforms, irrespective of gender or class rank. The social implications of this finding are important for advancing access to and learning of scientific concepts by the general population, as many institutions of higher education allow an online course to be taken without enrolling in a degree program. Thus, the potential exists for increasing the number of non-STEM majors engaged in citizen science by using the flexibility of online learning to teach environmental science core concepts.

Limitations of the Study

The limitations of the study centered around the nature of the sample group, student skills/abilities, and student familiarity with online instruction. First, because this was a convenience, non-probability sample, the independent variables were not adjusted for real-world accuracy. Second, student intelligence and skill level were not taken into consideration when separating out comparison groups. The F2F learners in this study may have been more capable than the online students, or vice versa; this limitation also applies to gender and class rank differences ( Friday et al., 2006 ). Finally, there may have been familiarity issues between the two sets of learners. Experienced traditional classroom students taking Web-based courses may be daunted by the technical aspects of the modality. They may not have had the necessary preparation or experience to e-learn efficiently, leading to lowered scores ( Helms, 2014 ). In addition to comparing online and F2F instructional efficacy, future research should also analyze blended teaching methods, evaluating the effectiveness of courses for non-STEM majors at imparting basic STEM concepts and whether the blended style is more effective than either pure style.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Ethics Statement

The studies involving human participants were reviewed and approved by Fort Valley State University Human Subjects Institutional Review Board. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author Contributions

JP provided substantial contributions to the conception of the work, acquisition and analysis of data for the work, and is the corresponding author on this paper who agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. FJ provided substantial contributions to the design of the work, interpretation of the data for the work, and revised it critically for intellectual content.

Funding

This research was supported in part by funding from the National Science Foundation, Awards #1649717, 1842510, Ñ900572, and 1939739 to FJ.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank the reviewers for their detailed comments and feedback that assisted in the revising of our original manuscript.

Agasisti, T., and Johnes, G. (2015). Efficiency, costs, rankings and heterogeneity: the case of US higher education. Stud. High. Educ. 40, 60–82. doi: 10.1080/03075079.2013.818644


Ary, E. J., and Brune, C. W. (2011). A comparison of student learning outcomes in traditional and online personal finance courses. MERLOT J. Online Learn. Teach. 7, 465–474.


Atchley, W., Wingenbach, G., and Akers, C. (2013). Comparison of course completion and student performance through online and traditional courses. Int. Rev. Res. Open Dist. Learn. 14, 104–116. doi: 10.19173/irrodl.v14i4.1461

Bartley, S. J., and Golek, J. H. (2004). Evaluating the cost effectiveness of online and face-to-face instruction. Educ. Technol. Soc. 7, 167–175.

Beale, E. G., Tarwater, P. M., and Lee, V. H. (2014). A retrospective look at replacing face-to-face embryology instruction with online lectures in a human anatomy course. Am. Assoc. Anat. 7, 234–241. doi: 10.1002/ase.1396


Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., et al. (2009). A meta-analysis of three types of interaction treatments in distance education. Rev. Educ. Res. 79, 1243–1289. doi: 10.3102/0034654309333844

Biel, R., and Brame, C. J. (2016). Traditional versus online biology courses: connecting course design and student learning in an online setting. J. Microbiol. Biol. Educ. 17, 417–422. doi: 10.1128/jmbe.v17i3.1157

Bigelow, C. A. (2009). Comparing student performance in an online versus a face to face introductory turfgrass science course: a case study. NACTA J. 53, 1–7.

Columbaro, N. L., and Monaghan, C. H. (2009). Employer perceptions of online degrees: a literature review. Online J. Dist. Learn. Administr. 12.

Craig, R. (2015). A Brief History (and Future) of Online Degrees. Forbes/Education . Available online at: https://www.forbes.com/sites/ryancraig/2015/06/23/a-brief-history-and-future-of-online-degrees/#e41a4448d9a8

Daymont, T., and Blau, G. (2008). Student performance in online and traditional sections of an undergraduate management course. J. Behav. Appl. Manag. 9, 275–294.

Dell, C. A., Low, C., and Wilker, J. F. (2010). Comparing student achievement in online and face-to-face class formats. J. Online Learn. Teach. Long Beach 6, 30–42.

Driscoll, A., Jicha, K., Hunt, A. N., Tichavsky, L., and Thompson, G. (2012). Can online courses deliver in-class results? A comparison of student performance and satisfaction in an online versus a face-to-face introductory sociology course. Am. Sociol. Assoc . 40, 312–313. doi: 10.1177/0092055X12446624

Friday, E., Shawnta, S., Green, A. L., and Hill, A. Y. (2006). A multi-semester comparison of student performance between multiple traditional and online sections of two management courses. J. Behav. Appl. Manag. 8, 66–81.

Girard, J. P., Yerby, J., and Floyd, K. (2016). Knowledge retention in capstone experiences: an analysis of online and face-to-face courses. Knowl. Manag. ELearn. 8, 528–539. doi: 10.34105/j.kmel.2016.08.033

Helms, J. L. (2014). Comparing student performance in online and face-to-face delivery modalities. J. Asynchr. Learn. Netw. 18, 1–14. doi: 10.24059/olj.v18i1.348

Herman, T., and Banister, S. (2007). Face-to-face versus online coursework: a comparison of costs and learning outcomes. Contemp. Issues Technol. Teach. Educ. 7, 318–326.

Kemp, N., and Grieve, R. (2014). Face-to-Face or face-to-screen? Undergraduates' opinions and test performance in classroom vs. online learning. Front. Psychol. 5:1278. doi: 10.3389/fpsyg.2014.01278

Keramidas, C. G. (2012). Are undergraduate students ready for online learning? A comparison of online and face-to-face sections of a course. Rural Special Educ. Q . 31, 25–39. doi: 10.1177/875687051203100405

Larson, D.K., and Sung, C. (2009). Comparing student performance: online versus blended versus face-to-face. J. Asynchr. Learn. Netw. 13, 31–42. doi: 10.24059/olj.v13i1.1675

Li, F., and Chen, X. (2012). Economies of scope in distance education: the case of Chinese Research Universities. Int. Rev. Res. Open Distrib. Learn. 13, 117–131.

Liu, Y. (2005). Effects of online instruction vs. traditional instruction on student's learning. Int. J. Instruct. Technol. Dist. Learn. 2, 57–64.

Lorenzo-Alvarez, R., Rudolphi-Solero, T., Ruiz-Gomez, M. J., and Sendra-Portero, F. (2019). Medical student education for abdominal radiographs in a 3D virtual classroom versus traditional classroom: a randomized controlled trial. Am. J. Roentgenol. 213, 644–650. doi: 10.2214/AJR.19.21131

Lundberg, J., Castillo-Merino, D., and Dahmani, M. (2008). Do online students perform better than face-to-face students? Reflections and a short review of some Empirical Findings. Rev. Univ. Soc. Conocim . 5, 35–44. doi: 10.7238/rusc.v5i1.326

Maloney, S., Nicklen, P., Rivers, G., Foo, J., Ooi, Y. Y., Reeves, S., et al. (2015). Cost-effectiveness analysis of blended versus face-to-face delivery of evidence-based medicine to medical students. J. Med. Internet Res. 17:e182. doi: 10.2196/jmir.4346

Mann, J. T., and Henneberry, S. R. (2014). Online versus face-to-face: students' preferences for college course attributes. J. Agric. Appl. Econ . 46, 1–19. doi: 10.1017/S1074070800000602

Mozes-Carmel, A., and Gold, S. S. (2009). A comparison of online vs proctored final exams in online classes. Imanagers J. Educ. Technol. 6, 76–81. doi: 10.26634/jet.6.1.212

Richardson, J. C., and Swan, K. (2003). Examining social presence in online courses in relation to student's perceived learning and satisfaction. J. Asynchr. Learn. 7, 68–88.

Rovai, A. P., and Jordan, H. M. (2004). Blended learning and sense of community: a comparative analysis with traditional and fully online graduate courses. Int. Rev. Res. Open Dist. Learn. 5. doi: 10.19173/irrodl.v5i2.192

Salcedo, C. S. (2010). Comparative analysis of learning outcomes in face-to-face foreign language classes vs. language lab and online. J. Coll. Teach. Learn. 7, 43–54. doi: 10.19030/tlc.v7i2.88

Stern, B. S. (2004). A comparison of online and face-to-face instruction in an undergraduate foundations of american education course. Contemp. Issues Technol. Teach. Educ. J. 4, 196–213.

Summers, J. J., Waigandt, A., and Whittaker, T. A. (2005). A comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. Innov. High. Educ. 29, 233–250. doi: 10.1007/s10755-005-1938-x

Tanyel, F., and Griffin, J. (2014). A Ten-Year Comparison of Outcomes and Persistence Rates in Online versus Face-to-Face Courses . Retrieved from: https://www.westga.edu/~bquest/2014/onlinecourses2014.pdf

Werhner, M. J. (2010). A comparison of the performance of online versus traditional on-campus earth science students on identical exams. J. Geosci. Educ. 58, 310–312. doi: 10.5408/1.3559697

Westhuis, D., Ouellette, P. M., and Pfahler, C. L. (2006). A comparative analysis of on-line and classroom-based instructional formats for teaching social work research. Adv. Soc. Work 7, 74–88. doi: 10.18060/184

Wladis, C., Conway, K. M., and Hachey, A. C. (2015). The online STEM classroom-who succeeds? An exploration of the impact of ethnicity, gender, and non-traditional student characteristics in the community college context. Commun. Coll. Rev. 43, 142–164. doi: 10.1177/0091552115571729

Xu, D., and Jaggars, S. S. (2016). Performance gaps between online and face-to-face courses: differences across types of students and academic subject areas. J. Higher Educ. 85, 633–659. doi: 10.1353/jhe.2014.0028

Zhang, L.-C., and Worthington, A. C. (2017). Scale and scope economies of distance education in Australian universities. Stud. High. Educ. 42, 1785–1799. doi: 10.1080/03075079.2015.1126817

Keywords: face-to-face (F2F), traditional classroom teaching, web-based instructions, information and communication technology (ICT), online learning, desire to learn (D2L), passive learning, active learning

Citation: Paul J and Jefferson F (2019) A Comparative Analysis of Student Performance in an Online vs. Face-to-Face Environmental Science Course From 2009 to 2016. Front. Comput. Sci. 1:7. doi: 10.3389/fcomp.2019.00007

Received: 15 May 2019; Accepted: 15 October 2019; Published: 12 November 2019.


Copyright © 2019 Paul and Jefferson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jasmine Paul, paulj@fvsu.edu

Comparative Analysis Essay

In school, we are asked to write many different forms of essays: descriptive essays, comparative literature essays, experience essays, and more. Some essays ask you to compare and contrast two different things, terms, figures, or processes. Some things have crucial differences, while others have much in common. Sometimes we are faced with surprisingly unrelated similarities and differences, which makes it difficult to begin with the topic, and we may feel confused about how to start constructing the paper. In this article, you will learn what a comparative analysis essay is all about and how to write one.

10+ Comparative Analysis Essay Examples

1. Comparative Analysis Contrast Essay
2. Comparative Assignment Analysis Essay
3. Comparative Analysis Essay
4. Comparative Analysis Essay Rubric
5. Analytical Exposition Comparative Essay
6. Comparative Source Analysis Essay
7. Comparative Analysis Worksheet Essay
8. Basic Comparative Analysis Essay
9. Comparative Analysis Essay Template
10. Comparative Analysis Essay Writing
11. Student Comparative Analysis Essay

What is a Comparative Analysis Essay?

A comparative analysis essay is a type of essay that involves comparing and contrasting two things, and it calls for a higher level of analysis. In this essay, you may be asked to compare or contrast different positions on a particular issue, or different theories, figures, events, or texts in literature. This requires considering both the similarities and the differences between the things being compared.

When comparing and contrasting two things, make sure that you can come up with a basis for comparison. Once you know the basis for comparison, you should be able to think critically about the things you are comparing.

How to Write a Good Comparative Analysis Essay?

You might have brilliant ideas for your essay but lack knowledge of what comparative means. The first thing you should do is research your topic and learn the principles of compare and contrast. Below are some tips on how to write a comparative analysis essay.

Identify your basis of comparison

First on the list is to develop a basis for the comparison. Usually the question will provide the basis of comparison, but there are times when the author has to develop one. For example, a question may ask you to compare the philosophical principles of Immanuel Kant and John Locke. This already has a very clear objective; you don't need to look for anything else.

Plan for the content of your essay

You should identify the features of your essay that mainly involve comparing and contrasting. List the similarities and differences between the two subjects. This will help you develop an initial plan for your comparative essay . After listing, evaluate the list and make sure that only the main points are carried into your essay.

Come up with a thesis

Every essay needs a good thesis statement that lets the author keep track of where he or she is in the writing process. Before you finalize your thesis statement, assess whether your list has more similarities than differences. Your thesis statement should reflect whether the subjects on your list share more similarities than differences, or vice versa.

Develop a structure for your essay

Every essay has a specific structure to follow, and a comparative essay must be written in an organized way just like any other. That way, it is easier for readers to understand.

What are the methods to be used in making a comparative analysis essay?

You may use any of three methods: the alternating method, the mixed-paragraphs method, or the block method. The alternating method contains each aspect of the comparison in its own paragraph, alternating between the subjects point by point. The mixed-paragraphs method devotes a single paragraph to explaining how both subjects compare on a given point. The block method is the easiest yet the riskiest: it divides the essay into two parts, with the first part devoted to one subject and the second part devoted to the other.

When to use alternating method, mixed paragraphs method and block method?

Use the alternating method for cases where detailed comparison is needed. Use the mixed-paragraphs method when you have to deal with a long comparative analysis essay. Use the block method when you are writing short essays about simple subjects.

Does a comparative analysis still follow the three basic parts of an essay?

Yes. These three basic parts of an essay are the introduction, the body paragraphs, and the conclusion.

A comparative analysis essay's primary purpose is to let you compare two different subjects: people, events, objects, theories, and so on. Its main focus is to identify the similarities and differences. With solid essay writing skills, you are good to go in writing your comparative analysis essay.

