
Comparative Research

Most social sciences recognize a specific comparative research methodology. Its definition often refers to countries and cultures at the same time, because cultural differences between countries can be rather small (e.g., among the Scandinavian countries), whereas very different cultural or ethnic groups may live within one country (e.g., minorities in the United States). Comparative studies face problems at every level of research, from theory through research questions, operationalization, instruments, and sampling to the interpretation of results.

The major problem in comparative research, regardless of the discipline, is that all aspects of the analysis from theory to datasets may vary in definitions and/or categories. As the objects to compare usually belong to different systemic contexts, the establishment of equivalence and comparability is thus a major challenge of comparative research. This is often “operationalized” as functional equivalence, i.e., the functionality of the research objects within the different system contexts must be equivalent. Neither equivalence nor its absence, “bias,” can be presumed. It has to be analyzed and tested for on all the different levels of the research process.

Equivalence And Bias

Equivalence has to be analyzed and established on at least three levels: the construct, the item, and the method (van de Vijver & Tanzer 1997). Whenever a test on any of these levels shows negative results, a cultural bias is likely. Thus, bias on these three levels can be described as the opposite of equivalence. Van de Vijver and Leung (1997) define bias as variance in variables or indicators that is caused by the measurement rather than by the construct itself. For example, a media content analysis could measure the amount of foreign affairs coverage in a single variable, the length of newspaper articles. If, however, newspaper articles in country A are generally longer than those in country B, irrespective of topic, a sum or mean index of foreign affairs coverage would almost inevitably lead to the conclusion that country A covers foreign affairs more extensively than country B. This outcome would be an artifact rather than an answer to the research question, because the amount of foreign affairs coverage, not the national average article length, is what the comparison is meant to capture. To avoid such cultural bias, the results must be standardized or weighted, for example by the mean article length.
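The weighting described above can be sketched in a few lines. This is a hypothetical illustration with invented numbers; `weighted_coverage` is not a function from any cited study:

```python
# Hypothetical illustration of the article-length bias described above.
# All numbers are invented; lengths are in words.

def weighted_coverage(foreign_lengths, all_lengths):
    """Foreign-affairs coverage expressed in units of the national
    mean article length, removing the length bias between countries."""
    mean_length = sum(all_lengths) / len(all_lengths)
    return sum(foreign_lengths) / mean_length

# Country A writes longer articles across the board ...
raw_a = sum([800, 1200])                                    # 2000 words
weighted_a = weighted_coverage([800, 1200], [800, 1200, 1000, 1000])
# ... while country B covers foreign affairs just as much, in shorter articles.
raw_b = sum([400, 600])                                     # 1000 words
weighted_b = weighted_coverage([400, 600], [400, 600, 500, 500])

print(raw_a, raw_b)            # 2000 1000 -> raw sums suggest A > B (biased)
print(weighted_a, weighted_b)  # 2.0 2.0   -> equal coverage after weighting
```

The raw sums would wrongly suggest that country A covers foreign affairs twice as heavily; after weighting by the national mean article length, the two countries are equal.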

To find out whether construct equivalence can be assumed, the researcher will generally require external data and rather complex procedures of culture-specific construct validation. Ideally, this includes analyses of the external structure, i.e., theoretical references to other constructs, as well as an examination of the latent or internal structure. The internal structure consists of the relationships between the construct’s sub-dimensions. It can be tested using confirmatory factor analyses, multidimensional scaling, or item analyses. Equivalence can be assumed if the construct validation has been successful for every culture and if the internal and external structures are identical in every country. However, it is hardly possible to prove construct equivalence beyond any doubt (Wirth & Kolb 2004).
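As a rough, hypothetical sketch of what comparing internal structures involves (a real validation would use confirmatory factor analysis or multidimensional scaling; the function names, tolerance, and data below are invented):

```python
import numpy as np

def internal_structure(scores):
    """Intercorrelations of a construct's sub-dimensions (rows = respondents)."""
    return np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)

def structures_match(scores_a, scores_b, tol=0.2):
    """Crude equivalence check: the two correlation matrices should be
    element-wise close. Real studies would test this inferentially."""
    diff = np.abs(internal_structure(scores_a) - internal_structure(scores_b))
    return bool(np.all(diff <= tol))

# Invented sub-dimension scores for two countries (columns = sub-dimensions).
country_a = [[1, 2], [2, 4], [3, 6], [4, 7]]
country_b = [[2, 1], [4, 2], [6, 3], [8, 4]]
print(structures_match(country_a, country_b))  # True: similar internal structure
```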

Even given construct equivalence, bias can still occur on the item level. The verbalization of items in surveys, and of definitions and categories in content analyses, can cause bias due to culture-specific connotations. Item bias is mostly evoked by bad – in the sense of nonequivalent – translation or by culture-specific questions and categories (van de Vijver & Leung 1997). Compared to the complex procedures discussed for construct equivalence, testing for item bias is rather simple (once construct equivalence has been established): persons from different cultures who occupy the same position or rank on an imaginary construct scale must show the same response behavior toward every item that measures the construct. Statistically, the correlation of each single item with the total (sum) score has to be identical in every culture, as test theory generally uses the total score to estimate the position of an individual on the construct scale. In brief, equivalence on the item level is established whenever the same sub-dimensions or issues can be used to explain the same theoretical construct in every country (Wirth & Kolb 2004).
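The item-total logic can be sketched as follows, with invented response data; in practice the equality of the correlations across cultures would be tested inferentially rather than by inspection:

```python
import numpy as np

def item_total_correlations(responses):
    """Correlation of every item with the total (sum) score.
    Rows are respondents, columns are items of one construct."""
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    return [round(float(np.corrcoef(responses[:, i], total)[0, 1]), 2)
            for i in range(responses.shape[1])]

# Invented answers to a three-item scale in two cultures.
culture_a = [[1, 2, 1], [2, 3, 2], [3, 4, 4], [4, 5, 5]]
culture_b = [[2, 1, 2], [3, 2, 3], [4, 3, 4], [5, 4, 5]]

# Item bias is suspected for any item whose item-total correlation
# differs markedly between the cultures.
print(item_total_correlations(culture_a))
print(item_total_correlations(culture_b))
```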

When the instruments are ready for application, method equivalence comes to the fore. Method equivalence consists of sample equivalence, instrument equivalence, and administration equivalence. Violation of any of these produces a method bias. Sample equivalence refers to an equivalent selection of subjects or units of analysis. Instrument equivalence concerns whether people in every culture are equally willing to take part in the study and equally familiar with the instruments (Lauf & Peter 2001). Finally, bias on the administration level can occur due to culture-specific attitudes of the interviewers that might produce culture-specific answers. Another source of administration bias lies in socio-demographic differences between the various national interviewer teams (van de Vijver & Tanzer 1997).

The Role Of Theory

Theory plays a major role in three dimensions when looking for a comparative research strategy: theoretical diversity, theory-drivenness, and contextual factors (Wirth & Kolb 2004). Swanson (1992) distinguishes three principal strategies for dealing with international theoretical diversity. A common one is the avoidance strategy. Many international comparisons are made by teams that come from one culture or nation only; usually, their research interests are restricted to their own (scientific) socialization. Within this monocultural context, broad approaches cannot be applied and “intertheoretical” questions cannot be answered. This strategy includes atheoretical and unitheoretical (referring to one national theory) studies with or without contextualization (van de Vijver & Leung 2000; Wirth & Kolb 2004).

The pretheoretical strategy tries to avoid cultural and theoretical bias in another way: these studies are undertaken without a strict theoretical background until the results are to be interpreted. The advantage of this strategy lies in exploration, i.e., in developing new theories. However, under the strict principles of critical rationalism, the missing theoretical background means that no theoretically deduced hypotheses can be tested (Popper 1994). Most of the results remain on a descriptive level and never reach theoretical diversity. Besides, the instruments of pretheoretical studies must be almost “holistic” in order to integrate every theoretical construct conceivable for the interpretation. These studies are mostly contextualized and can thus become rather extensive (Swanson 1992).

Finally, when a research team develops a meta-theoretical orientation to build a framework for the basic theories and research questions, the data can be analyzed against different theoretical backgrounds. This meta-theoretical strategy allows the extensive use of all data and contextual factors, producing, however, quite a variety of often very different results, which are not easily summarized in one report (Swanson 1992). Obviously, the higher the level of theoretical diversity, the greater the effort required to establish construct equivalence.

Research Questions

Van de Vijver and Leung (1996, 1997) distinguish two types of research questions: structure-oriented questions are mostly interested in the relationships between certain variables, whereas level-oriented questions focus on parameter values. If, for example, a knowledge gap study analyzes the relationship between the knowledge gained from television news by recipients of high and low socio-economic status (SES) in the UK and the US, the question is structure-oriented, because the focus is on a national relationship between knowledge indices and the mean gain of knowledge is not taken into account. Usually, structure-oriented data require correlation or regression analyses. If the main interest of the study is a comparison of the mean gain of knowledge of people with low SES in the UK and the US, the research question is level-oriented, because the two national knowledge indices are to be compared. In this case, one would most probably use analyses of variance. The risk of cultural bias is the same for both kinds of research questions.
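The two question types map onto two families of analysis. A minimal, stdlib-only sketch with invented knowledge scores (the numbers carry no empirical meaning):

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient, computed with the stdlib only."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Invented knowledge-gain scores (0-10) for low- and high-SES viewers.
uk_low, uk_high = [3, 4, 5, 4], [7, 8, 9, 8]
us_low, us_high = [4, 5, 6, 5], [6, 7, 8, 7]
ses = [0, 0, 0, 0, 1, 1, 1, 1]   # 0 = low SES, 1 = high SES

# Structure-oriented: the SES-knowledge relationship within each country.
uk_gap = pearson(ses, uk_low + uk_high)
us_gap = pearson(ses, us_low + us_high)

# Level-oriented: mean knowledge gain of the low-SES groups across countries.
level_difference = mean(us_low) - mean(uk_low)

print(round(uk_gap, 2), round(us_gap, 2))  # both positive: a knowledge gap exists
print(level_difference)                    # the cross-national level contrast
```

A structure-oriented study compares `uk_gap` with `us_gap`; a level-oriented study interprets `level_difference` (in a real analysis, via ANOVA rather than a raw mean difference).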

Emic And Etic Strategies Of Operationalization

Before the operationalization of an international comparison, the research team has to analyze construct equivalence to establish comparability. In the case of missing internal construct equivalence, the construct cannot be measured equivalently in every country, and the decision whether or not to use the same instruments in every country has no bearing on this problem. An emic approach can solve it: the operationalization for the measurement of the construct(s) is developed nationally, so that the culture-specific adequacy of each national instrument will be high. Comparison on the construct level remains possible, even though the instruments vary culturally, because functional equivalence has been established on the construct level by the culture-specific measurement. This procedure is generally possible even where national instruments already exist.

As measurement differs from culture to culture, the integration of the national results can be very difficult. Strictly speaking, this disadvantage of emic studies only allows for the interpretation of structure-oriented outcomes, and only after a thorny validation process: it has to be proven that measurements with different indicators on different scales really yield data on equivalent constructs. By using external reference data from every culture, complex weighting and standardization procedures can possibly lead to a valid equalization of levels and variance (van de Vijver & Leung 1996, 1997). In research practice, emic measurement and data analysis are often used to cast light on cultural differences.
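One common standardization of this kind is the within-culture z-score; the following is a generic sketch, not a procedure prescribed by the cited authors:

```python
from statistics import mean, pstdev

def z_scores(raw):
    """Within-culture z-standardization: removes the culture-specific
    level (mean) and dispersion, leaving only relative positions."""
    m, s = mean(raw), pstdev(raw)
    return [round((x - m) / s, 3) for x in raw]

# Invented scores measured on different national scales.
culture_a = [10, 20, 30]
culture_b = [100, 200, 300]
print(z_scores(culture_a))  # [-1.225, 0.0, 1.225]
print(z_scores(culture_b))  # identical: relative positions are now comparable
```

After standardization, only respondents' relative positions within their own culture remain, which is exactly why level-oriented conclusions are no longer available from such data.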

If construct equivalence can be assumed after an in-depth analysis, an etic modus operandi can be recommended. In this logic, approaching the different cultures with the same or a slightly adapted instrument is valid because the constructs “function” equally in every culture. Consequently, an emic procedure would most probably arrive at similar instruments in every culture. Conversely, an etic approach must lead to bias and measurement artifacts when applied under conditions of missing construct equivalence.

The advantages of emic procedures are not only the adequate measurement of culture-specific elements, but also the possible inclusion of, e.g., idiographic elements of each culture. Thus, this approach can be seen as a compromise between qualitative and quantitative methodologies. Comparative researchers sometimes suggest analyzing cultural processes in a holistic way without breaking them down into variables, arguing that psychometric, quantitative data collection is suitable for similar cultures only. Against this simplification, one should remember the emic approach’s potential to provide researchers with comparable data, as described above. Holistic analyses, in contrast, produce culture-specific outcomes that will not be comparable; the problem of equivalence and bias has merely been moved to the interpretation of results.

Adaptation Of The Instruments

Difficulties in establishing equivalence are regularly linked to linguistic problems. How can a researcher try to establish functional equivalence without knowledge of every language of the cultures under examination? For a linguistic adaptation of the theoretical background as well as for the instruments, one can again discriminate between “more etic” and “more emic” approaches.

Translation-oriented approaches produce two translated versions of the text: one in the “foreign” language and one retranslated into the original language. The latter version can be compared with the original to evaluate the translation. Note that this method produces etically formed instruments, which can only work where functional equivalence has been established on every superior level. Van de Vijver and Tanzer (1997) call this procedure the application of an instrument in another language. In a “more emic” cultural adaptation, cultural singularities can be included if, e.g., culture-specific connotations are counterbalanced by a different item formulation.

Purely emic approaches develop entirely culture-specific instruments without translation. Two assembly approaches are available (van de Vijver & Tanzer 1997). First, in the committee approach, an international, interdisciplinary group of experts on the cultures, languages, and research field decides whether the instruments are to be formed culture-specifically or whether a cultural adaptation will be sufficient. Second, the dual-focus approach tries to find a compromise between literal, grammatical, syntactical, and construct equivalence: native speakers and/or bilinguals arrange the different language versions together with the research team in a multistep procedure (Erkut et al. 1999).

Usually, researchers select the countries or cultures to study by personal preference and accessibility of data. Forming an atheoretical sample of this kind avoids many problems (but not cultural bias!), yet it also forgoes some advantages. Przeworski and Teune (1970) suggest two systematic and theory-driven approaches. The quasi-experimental most similar systems design tries to stress cultural differences: to minimize the possible causes of the differences, those countries are chosen that are “most similar,” so that the few dissimilarities between them are the most likely reason for the different outcomes. Whenever the hypotheses highlight intercultural similarities, the most different systems design is appropriate. Here, in a kind of reversed quasi-experimental logic, the focus lies on similarities between cultures, even though the cultures differ in the greatest possible way (Kolb 2004; Wirth & Kolb 2004).
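The selection logic of the two designs can be caricatured as a distance calculation over context profiles; the countries, context variables, and values below are entirely invented:

```python
from itertools import combinations

# Invented context profiles (e.g., press freedom, wealth, TV penetration),
# each scaled 0-1; names and numbers are purely illustrative.
profiles = {
    "A": (0.90, 0.80, 0.90),
    "B": (0.85, 0.80, 0.90),
    "C": (0.30, 0.20, 0.50),
}

def distance(p, q):
    """Euclidean distance between two country profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

pairs = list(combinations(profiles, 2))
most_similar = min(pairs, key=lambda pq: distance(profiles[pq[0]], profiles[pq[1]]))
most_different = max(pairs, key=lambda pq: distance(profiles[pq[0]], profiles[pq[1]]))

print(most_similar)    # ('A', 'B'): candidates for a most similar systems design
print(most_different)  # ('A', 'C'): candidates for a most different systems design
```

In a most similar systems design one would then attribute outcome differences between A and B to their few remaining dissimilarities; in a most different systems design, shared outcomes of A and C point to causes that survive maximal contextual variation.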

Random sampling and representativeness play a minor role in international comparisons. The number of states in the world is limited, and a normal distribution of the social factors under examination – a precondition of random sampling – cannot be assumed. Moreover, many statistical methods run into problems when applied to a small number of cases (Hartmann 1995).

Data Analysis And Interpretation Of Results

Given the conceptual and methodological problems of international research presented above, special care must be taken over data analysis and the interpretation of results. As the inclusion of every relevant variable is hardly achievable in international research, the documentation of methods, work process, and data analysis is even more important here than in single-culture studies, and the validation of results must be pursued in additional studies. In any case, the intensive use of different statistical analyses beyond the “general” comparison of arithmetic means can further validate the results and especially their interpretation. Van de Vijver and Leung (1997) present a comprehensive summary of data analysis procedures, including structure- and level-oriented approaches, examples of SPSS syntax, and references.

Following Przeworski and Teune’s (1970) research strategies, the results of comparative research can be classified into differences and similarities between the research objects. For both types, Kohn (1989) introduces two separate ways of interpretation. Intercultural similarities seem easier to interpret at first glance. The difficulties emerge when considering equivalence on the one hand (i.e., there may be covert cultural differences within culturally biased similarities) and the causes of similarities on the other. The causes are especially hard to determine in the case of “most different” countries, as different combinations of different indicators can theoretically produce the same results. Esser (2000) refers to diverse theoretical backgrounds that will lead either to differences (e.g., action-theoretically based micro-research) or to similarities (e.g., system-theoretically oriented macro-approaches). In general, the starting point of Przeworski and Teune (1970) seems the easier way to reach interesting results and interpretations, using the quasi-experimental approach for “most similar systems with different outcomes.” In addition to the advantages for causal interpretation, “most similar” systems are likely to be equivalent from the top level of the construct down to the level of indicators and items. “Controlling” other influences can minimize methodological problems and makes analysis and interpretation more valid.

References:

  • Erkut, S., Alarcón, O., García Coll, C., Tropp, L. R., & Vázquez García, H. A. (1999). The dual-focus approach to creating bilingual measures. Journal of Cross-Cultural Psychology, 30(2), 206–218.
  • Esser, F. (2000). Journalismus vergleichen: Journalismustheorie und komparative Forschung [Comparing journalism: Journalism theory and comparative research]. In M. Löffelholz (ed.), Theorien des Journalismus: Ein diskursives Handbuch [Journalism theories: A discoursal handbook]. Wiesbaden: Westdeutscher, pp. 123–146.
  • Esser, F., & Pfetsch, B. (eds.) (2004). Comparing political communication: Theories, cases, and challenges. Cambridge: Cambridge University Press.
  • Hartmann, J. (1995). Vergleichende Politikwissenschaft: Ein Lehrbuch [Comparative political science: A textbook]. Frankfurt: Campus.
  • Kohn, M. L. (1989). Cross-national research as an analytic strategy. In M. L. Kohn (ed.), Cross-national research in sociology. Newbury Park, CA: Sage, pp. 77–102.
  • Kolb, S. (2004). Voraussetzungen für und Gewinn bringende Anwendung von quasiexperimentellen Ansätzen in der kulturvergleichenden Kommunikationsforschung [Preconditions for and advantageous application of quasi-experimental approaches in comparative communication research]. In W. Wirth, E. Lauf, & A. Fahr (eds.), Forschungslogik und -design in der Kommunikationswissenschaft, vol. 1: Einführung, Problematisierungen und Aspekte der Methodenlogik aus kommunikationswissenschaftlicher Perspektive [Logic of inquiry and research designs in communication research, vol. 1: Introduction, problematization, and aspects of methodology from a communications point of view]. Cologne: Halem, pp. 157–178.
  • Lauf, E., & Peter, J. (2001). Die Codierung verschiedensprachiger Inhalte: Erhebungskonzepte und Gütemaße [Coding of content in different languages: Concepts of inquiry and quality indices]. In E. Lauf & W. Wirth (eds.), Inhaltsanalyse: Perspektiven, Probleme, Potentiale [Content analysis: Perspectives, problems, potentialities]. Cologne: Halem, pp. 199–217.
  • Popper, K. R. (1994). Logik der Forschung [Logic of inquiry], 10th edn. Tübingen: Mohr.
  • Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. Malabar, FL: Krieger.
  • Swanson, D. L. (1992). Managing theoretical diversity in cross-national studies of political communication. In J. G. Blumler, J. M. McLeod, & K. E. Rosengren (eds.), Comparatively speaking: Communication and culture across space and time. Newbury Park, CA: Sage, pp. 19–34.
  • Vijver, F. van de, & Leung, K. (1996). Methods and data analysis of comparative research. In J. W. Berry, Y. H. Poortinga, & J. Pandey (eds.), Handbook of cross-cultural research. Boston, MA: Allyn and Bacon, pp. 257–300.
  • Vijver, F. van de, & Leung, K. (1997). Methods and data analysis of cross-cultural research. Thousand Oaks, CA: Sage.
  • Vijver, F. van de, & Leung, K. (2000). Methodological issues in psychological research on culture. Journal of Cross-Cultural Psychology, 31(1), 33–51.
  • Vijver, F. van de, & Tanzer, N. K. (1997). Bias and equivalence in cross-cultural assessment: An overview. European Review of Applied Psychology, 47(4), 263–279.
  • Wirth, W., & Kolb, S. (2004). Designs and methods of comparative political communication research. In F. Esser, & B. Pfetsch (eds.), Comparing political communication: Theories, cases, and challenges. Cambridge: Cambridge University Press, pp. 87–111.

3. Comparative Research Methods

This chapter examines the ‘art of comparing’ by showing how to relate a theoretically guided research question to a properly founded research answer by developing an adequate research design. It first considers the role of variables in comparative research, before discussing the meaning of ‘cases’ and case selection. It then looks at the ‘core’ of the comparative research method: the use of the logic of comparative inquiry to analyse the relationships between variables (representing theory), and the information contained in the cases (the data). Two logics are distinguished: Method of Difference and Method of Agreement. The chapter concludes with an assessment of some problems common to the use of comparative methods.


Rethinking Comparison

Qualitative comparative methods – and specifically controlled qualitative comparisons – are central to the study of politics. They are not the only kind of comparison, though, that can help us better understand political processes and outcomes. Yet there are few guides for how to conduct non-controlled comparative research. This volume brings together chapters from more than a dozen leading methods scholars from across the discipline of political science, including positivist and interpretivist scholars, qualitative methodologists, mixed-methods researchers, ethnographers, historians, and statisticians. Their work revolutionizes qualitative research design by diversifying the repertoire of comparative methods available to students of politics, offering readers clear suggestions for what kinds of comparisons might be possible, why they are useful, and how to execute them. By systematically thinking through how we engage in qualitative comparisons and the kinds of insights those comparisons produce, these collected essays create new possibilities to advance what we know about politics.

PERAN PEMUDA RELAWAN DEMOKRASI DALAM MENINGKATKAN PARTISIPASI POLITIK MASYARAKAT PADA PEMILIHAN UMUM LEGISLATIF TAHUN 2014 DAN IMPLIKASINYA TERHADAP KETAHANAN POLITIK WILAYAH (STUDI PADA RELAWAN DEMOKRASI BANYUMAS, JAWA TENGAH)

This research was going to described the role of Banyumas Democracy Volunteer ( Relawan Demokrasi Banyumas) in increasing political public partitipation in Banyumas’s legislative election 2014 and its implication to Banyumas’s political resilience. This research used qualitative research design as a research method. Data were collected by in depth review, observation and documentation. This research used purpossive sampling technique with stakeholder sampling variant to pick informants. The research showed that Banyumas Democracy Volunteer had a positive role in developing political resilience in Banyumas. Their role was gave political education and election education to voters in Banyumas. In the other words, Banyumas Democracy Volunteer had a vital role in developing ideal political resilience in Banyumas.Keywords: Banyumas Democracy Volunteer, Democracy, Election, Political Resilience of Region.

Ezer Kenegdo: Eksistensi Perempuan dan Perannya dalam Keluarga

AbstractPurpose of this study was to describe the meaning of ezer kenegdo and to know position and role of women in the family. The research method used is qualitative research methods (library research). The term of “ ezer kenegdo” refer to a helper but her position withoutsuperiority and inferiority. “The patner model” between men and women is uderstood in relation to one another as the same function, where differences are complementary and mutually beneficial in all walks of life and human endeavors.Keywords: Ezer Kenegdo; Women; Family.AbstrakTujuan penulisan artikel ini adalah untuk mendeskripsikan pengertian ezer kenegdo dan mengetahui kedudukan dan peran perempuan dalam keluarga. Metode yang digunakan adalah metode kualitatif library research. Ungkapan “ezer kenegdo” menunjuk pada seorang penolong namun kedudukannya adalah setara tanpa ada superioritas dan inferioritas. “Model kepatneran” antara laki-laki dan perempuan dipahami dengan hubungan satu dengan yang lain sebagai fungsi yang sama, yang mana perbedaan adalah saling melengkapi dan saling menguntungkan dalam semua lapisan kehidupan dan usaha manusia.Kata Kunci: Ezer Kenegdo, Prerempuan, Keluarga.

Commentary on ‘Opportunities and Challenges of Engaged Indigenous Scholarship’ (Van de Ven, Meyer, & Jing, 2018)

The mission ofManagement and Organization Review, founded in 2005, is to publish research about Chinese management and organizations, foreign organizations operating in China, or Chinese firms operating globally. The aspiration is to develop knowledge that is unique to China as well as universal knowledge that may transcend China. Articulated in the first editorial published in the inaugural issue of MOR (2005) and further elaborated in a second editorial (Tsui, 2006), the question of contextualization is framed, discussing the role of context in the choices of the research question, theory, measurement, and research design. The idea of ‘engaged indigenous research’ by Van de Ven, Meyer, and Jing (2018) describes the highest level of contextualization, with the local context serving as the primary factor guiding all the decisions of a research project. Tsui (2007: 1353) refers to it as ‘deep contextualization’.

PERAN DINAS KESEHATAN DALAM MENANGGULANGI GIZI BURUK ANAK DI KECAMATAN NGAMPRAH KABUPATEN BANDUNG BARAT

The title of this research is "The Role of the Health Office in Tackling Child Malnutrition in Ngamprah District, West Bandung Regency". The problem in this research is not yet optimal implementation of the Health Office in Overcoming Malnutrition in Children in Ngamprah District, West Bandung Regency. The research method that researchers use is descriptive research methods with a qualitative approach. The technique of determining the informants used was purposive sampling technique. The results showed that in carrying out its duties and functions, the health office, the Health Office had implemented a sufficiently optimal role seen from the six indicators of success in overcoming malnutrition, namely: All Posyandu carry out weighing operations at least once a month All toddlers are weighed, All cases of malnutrition are referred to the Puskemas Nursing or Home Sick, all cases of malnutrition are treated at the health center. Nursing or hospitalization is handled according to the management of malnutrition. All post-treatment malnourished toddlers receive assistance.

Jazz jem sessions in the aspect of listener perception

The purpose of the article is to identify the characteristic features ofjazz jam sessions as creative and concert events. The research methods arebased on the use of a number of empirical approaches. The historicalmethod has characterized the periodization of the emergence andpopularity of jam session as an artistic phenomenon. The use of themethod of comparison of jazz jam sessions and jazz concert made itpossible to determine the characteristic features of jams. An appeal toaxiological research methods has identified the most strikingimprovisational solos of leading jazz artists. Of particular importance inthe context of the article are the methods of analysis and synthesis,observation and generalization. It is important to pay attention to the use ofa structural-functional scientific-research method that indicates theeffectiveness of technological and execution processes on jams. Scientificinnovation. The article is about discovering the peculiarities of the jamsession phenomenon and defining the role of interaction between theaudience of improviser listeners and musicians throughout the jams. Theprocesses of development of jazz concerts and improvisations at jamsessions are revealed. Conclusions. The scientific research providedconfirms the fact that system of interactions between musicians amongthemselves and the audience, as well as improvisation of the performers atthe jam sessions is immense and infinite. That is why modern jazz singersand the audience will always strive for its development and understanding.This way is worth starting with repeated listening to improvisation, in theimmediate presence of the jam sessions (both participant and listener).

THE ROLE OF TECHNOLOGY INFORMATION SYSTEMS AND APPLICATION OF SAK ETAP ON DEVELOPMENT MODEL FINANCIAL POSITION REPORT

Bina Siswa SMA Plus Cisarua addressing in Jl. colonel canal masturi no. 64. At the time of document making, record-keeping of transaction relating to account real or financial position report account especially, Bina Siswa SMA Plus Cisarua has applied computer that is by using the application of Microsoft Office Word 2007 and Microsoft Excel 2007, in practice of control to relative financial position report account unable to be added with the duration process performed within financial statement making. For the problems then writer takes title: “The Role Of Technology Information Systems And Aplication Of SAK ETAP On Development Model Financial Position Report”. Research type which writer applies is research type academy, data type which writer applies is qualitative data and quantitative data, research design type which writer applies is research design deskriptif-analistis, research method which writer applies is descriptive research method, survey and eksperiment,  data collecting technique which writer applies is field researcher what consisted of interview and observation  library research, system development method which writer applies is methodologies orienting at process, data and output. System development structure applied is Iterasi. Design of information system applied is context diagram, data flow diagram, and flowchart. Design of this financial position report accounting information system according to statement of financial accounting standard SAK ETAP and output  consisted of information of accumulated fixed assets, receivable list, transaction summary of cash, transaction summary of bank and financial position report.

Dilema Hakim Pengadilan Agama dalam Menyelesaikan Perkara Hukum Keluarga Melalui Mediasi

This article aims to determine the role of judges in resolving family law cases through mediation in the Religious Courts, where judges have the position as state officials as regulated in Law Number 43 of 1999 concerning Basic Personnel, can also be a mediator in the judiciary. as regulated in Supreme Court Regulation Number 1 of 2016 concerning Mediation Procedures where judges have the responsibility to seek peace at every level of the trial and are also involved in mediation procedures. The research method used in this article uses normative legal research methods. Whereas until now judges still have a very important role in resolving family law cases in the Religious Courts due to the fact that there are still many negotiating processes with mediation assisted by judges, even though on the one hand the number of non-judge mediators is available, although in each region it is not evenly distributed in terms of number and capacity. non-judge mediator.

The effect of anime on human IQ and behavior in Saudi Arabia

The present study attempted to determine the effects of watching anime: whether it affects the mental and social development of children and other age groups, and whether teenagers and children should watch it. The study used a descriptive, observational research design in which data from direct observations and online questionnaires were used to answer the research question. The findings suggest that anime viewers have a higher level of general knowledge than non-viewers, as well as significantly higher IQ in a specific group; anime can also be used to spread awareness of a culture and plays a role in growing the economy.


Comparison in Scientific Research: Uncovering statistically significant relationships

by Anthony Carpi, Ph.D., Anne E. Egger, Ph.D.


Did you know that when Europeans first saw chimpanzees, they thought the animals were hairy, adult humans with stunted growth? A study of chimpanzees paved the way for comparison to be recognized as an important research method. Later, Charles Darwin and others used this comparative research method in the development of the theory of evolution.

Comparison is used to determine and quantify relationships between two or more variables by observing different groups that either by choice or circumstance are exposed to different treatments.

Comparison includes both retrospective studies, which look at events that have already occurred, and prospective studies, which examine variables from the present forward.

Comparative research is similar to experimentation in that it involves comparing a treatment group to a control, but it differs in that the treatment is observed rather than consciously imposed, either because imposing it would raise ethical concerns or because it is not possible, as in a retrospective study.

Anyone who has stared at a chimpanzee in a zoo (Figure 1) has probably wondered about the animal's similarity to humans. Chimps make facial expressions that resemble humans, use their hands in much the same way we do, are adept at using different objects as tools, and even laugh when they are tickled. It may not be surprising to learn then that when the first captured chimpanzees were brought to Europe in the 17th century, people were confused, labeling the animals "pygmies" and speculating that they were stunted versions of "full-grown" humans. A London physician named Edward Tyson obtained a "pygmie" that had died of an infection shortly after arriving in London, and began a systematic study of the animal that cataloged the differences between chimpanzees and humans, thus helping to establish comparative research as a scientific method.

Figure 1: A chimpanzee


  • A brief history of comparative methods

In 1698, Tyson, a member of the Royal Society of London, began a detailed dissection of the "pygmie" he had obtained and published his findings in the 1699 work: Orang-Outang, sive Homo Sylvestris: or, the Anatomy of a Pygmie Compared with that of a Monkey, an Ape, and a Man. The title of the work further reflects the misconception that existed at the time – Tyson did not use the term Orang-Outang in its modern sense to refer to the orangutan; he used it in its literal translation from the Malay language as "man of the woods," as that is how the chimps were viewed.

Tyson took great care in his dissection. He precisely measured and compared a number of anatomical variables such as brain size of the "pygmie," ape, and human. He recorded his measurements of the "pygmie," even down to the direction in which the animal's hair grew: "The tendency of the Hair of all of the Body was downwards; but only from the Wrists to the Elbows 'twas upwards" (Russell, 1967). Aided by William Cowper, Tyson made drawings of various anatomical structures, taking great care to accurately depict the dimensions of these structures so that they could be compared to those in humans (Figure 2). His systematic comparative study of the dimensions of anatomical structures in the chimp, ape, and human led him to state:

in the Organization of abundance of its Parts, it more approached to the Structure of the same in Men: But where it differs from a Man, there it resembles plainly the Common Ape, more than any other Animal. (Russell, 1967)

Tyson's comparative studies proved exceptionally accurate and his research was used by others, including Thomas Henry Huxley in Evidence as to Man's Place in Nature (1863) and Charles Darwin in The Descent of Man (1871).


Figure 2: Edward Tyson's drawing of the external appearance of a "pygmie" (left) and the animal's skeleton (right) from The Anatomy of a Pygmie Compared with that of a Monkey, an Ape, and a Man from the second edition, London, printed for T. Osborne, 1751.

Tyson's methodical and scientific approach to anatomical dissection contributed to the development of evolutionary theory and helped establish the field of comparative anatomy. Further, Tyson's work helps to highlight the importance of comparison as a scientific research method.

  • Comparison as a scientific research method

Comparative research represents one approach in the spectrum of scientific research methods and in some ways is a hybrid of other methods, drawing on aspects of both experimental science (see our Experimentation in Science module) and descriptive research (see our Description in Science module). Similar to experimentation, comparison seeks to decipher the relationship between two or more variables by documenting observed differences and similarities between two or more subjects or groups. In contrast to experimentation, the comparative researcher does not subject one of those groups to a treatment, but rather observes a group that either by choice or circumstance has been subject to a treatment. Thus comparison involves observation in a more "natural" setting, not subject to experimental confines, and in this way evokes similarities with description.

Importantly, the simple comparison of two variables or objects is not comparative research. Tyson's work would not have been considered scientific research if he had simply noted that "pygmies" looked like humans without measuring bone lengths and hair growth patterns. Instead, comparative research involves the systematic cataloging of the nature and/or behavior of two or more variables, and the quantification of the relationship between them.

Figure 3: Skeleton of the juvenile chimpanzee dissected by Edward Tyson, currently displayed at the Natural History Museum, London.


While the choice of which research method to use is a personal decision based in part on the training of the researchers conducting the study, there are a number of scenarios in which comparative research would likely be the primary choice.

  • The first scenario is one in which the scientist is not trying to measure a response to change, but rather he or she may be trying to understand the similarities and differences between two subjects. For example, Tyson was not observing a change in his "pygmie" in response to an experimental treatment. Instead, his research was a comparison of the unknown "pygmie" to humans and apes in order to determine the relationship between them.
  • A second scenario in which comparative studies are common is when the physical scale or timeline of a question may prevent experimentation. For example, in the field of paleoclimatology, researchers have compared cores taken from sediments deposited millions of years ago in the world's oceans to see if the sedimentary composition is similar across all oceans or differs according to geographic location. Because the sediments in these cores were deposited millions of years ago, it would be impossible to obtain these results through the experimental method. Research designed to look at past events such as sediment cores deposited millions of years ago is referred to as retrospective research.
  • A third common comparative scenario is when the ethical implications of an experimental treatment preclude an experimental design. Researchers who study the toxicity of environmental pollutants or the spread of disease in humans are precluded from purposefully exposing a group of individuals to the toxin or disease for ethical reasons. In these situations, researchers would set up a comparative study by identifying individuals who have been accidentally exposed to the pollutant or disease and comparing their symptoms to those of a control group of people who were not exposed. Research designed to look at events from the present into the future, such as a study looking at the development of symptoms in individuals exposed to a pollutant, is referred to as prospective research.

Comparative science was significantly strengthened in the late 19th and early 20th century with the introduction of modern statistical methods. These were used to quantify the association between variables (see our Statistics in Science module). Today, statistical methods are critical for quantifying the nature of relationships examined in many comparative studies. The outcome of comparative research is often presented in one of the following ways: as a probability, as a statement of statistical significance, or as a declaration of risk. For example, in 2007 Kristensen and Bjerkedal showed that there is a statistically significant relationship (at the 95% confidence level) between birth order and IQ by comparing test scores of first-born children to those of their younger siblings (Kristensen & Bjerkedal, 2007). And numerous studies have contributed to the determination that the risk of developing lung cancer is 30 times greater in smokers than in nonsmokers (NCI, 1997).
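A risk figure like "30 times greater" is a risk ratio: the incidence of disease among the exposed divided by the incidence among the unexposed. The sketch below illustrates the arithmetic only; the function name and the cohort counts are hypothetical, chosen so the ratio comes out to 30.

```python
# Sketch: computing relative risk from a hypothetical 2x2 cohort table.
# The counts are made up for illustration; only the arithmetic mirrors
# how a figure like "30 times greater" is derived.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence among the exposed over incidence among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 150 cancer cases among 10,000 smokers,
# 5 cases among 10,000 nonsmokers.
rr = relative_risk(150, 10_000, 5, 10_000)
print(rr)  # 30.0
```

Note that a ratio of risks is only one way to express the comparison; studies may instead report an odds ratio or an absolute risk difference, depending on the study design.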


  • Comparison in practice: The case of cigarettes

In 1919, Dr. George Dock, chairman of the Department of Medicine at Barnes Hospital in St. Louis, asked all of the third- and fourth-year medical students at the teaching hospital to observe an autopsy of a man with a disease so rare, he claimed, that most of the students would likely never see another case of it in their careers. With the medical students gathered around, the physicians conducting the autopsy observed that the patient's lungs were speckled with large dark masses of cells that had caused extensive damage to the lung tissue and had forced the airways to close and collapse. Dr. Alton Ochsner, one of the students who observed the autopsy, would write years later that "I did not see another case until 1936, seventeen years later, when in a period of six months, I saw nine patients with cancer of the lung. – All the afflicted patients were men who smoked heavily and had smoked since World War I" (Meyer, 1992).

Figure 4: Image from a stereoptic card showing a woman smoking a cigarette circa 1900


The American physician Dr. Isaac Adler was, in fact, the first scientist to propose a link between cigarette smoking and lung cancer in 1912, based on his observation that lung cancer patients often reported that they were smokers. Adler's observations, however, were anecdotal, and provided no scientific evidence toward demonstrating a relationship. The German epidemiologist Franz Müller is credited with the first case-control study of smoking and lung cancer in the 1930s. Müller sent a survey to the relatives of individuals who had died of cancer, and asked them about the smoking habits of the deceased. Based on the responses he received, Müller reported a higher incidence of lung cancer among heavy smokers compared to light smokers. However, the study had a number of problems. First, it relied on the memory of relatives of deceased individuals rather than first-hand observations, and second, no statistical association was made. Soon after this, the tobacco industry began to sponsor research with the biased goal of repudiating negative health claims against cigarettes (see our Scientific Institutions and Societies module for more information on sponsored research).

Beginning in the 1950s, several well-controlled comparative studies were initiated. In 1950, Ernest Wynder and Evarts Graham published a retrospective study comparing the smoking habits of 605 hospital patients with lung cancer to 780 hospital patients with other diseases (Wynder & Graham, 1950). Their study showed that 1.3% of lung cancer patients were nonsmokers while 14.6% of patients with other diseases were nonsmokers. In addition, 51.2% of lung cancer patients were "excessive" smokers while only 19.1% of other patients were excessive smokers. Both of these comparisons proved to be statistically significant differences. The statisticians who analyzed the data concluded:

when the nonsmokers and the total of the high smoking classes of patients with lung cancer are compared with patients who have other diseases, we can reject the null hypothesis that smoking has no effect on the induction of cancer of the lungs.
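The kind of significance test behind that conclusion can be sketched with a chi-square statistic on a 2×2 table. The counts below are reconstructed approximately from the reported percentages (1.3% of 605 lung cancer patients and 14.6% of 780 control patients were nonsmokers); the study's statisticians may have used a different exact procedure.

```python
# Sketch of a Pearson chi-square test on a 2x2 contingency table,
# with counts reconstructed approximately from Wynder & Graham's
# reported percentages.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: lung cancer patients, control patients.
# Columns: nonsmokers, smokers.
chi2 = chi_square_2x2(8, 597, 114, 666)
print(chi2 > 3.84)  # True -> exceeds the 95%-level cutoff for 1 degree of freedom
```

With one degree of freedom, any statistic above 3.84 lets us reject the null hypothesis of no association at the 95% confidence level, which is exactly the form of the statisticians' statement above.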

Wynder and Graham also suggested that there might be a lag of ten years or more between the period of smoking in an individual and the onset of clinical symptoms of cancer. This would present a major challenge to researchers since any study that investigated the relationship between smoking and lung cancer in a prospective fashion would have to last many years.

Richard Doll and Austin Hill published a similar comparative study in 1950 in which they showed that there was a statistically higher incidence of smoking among lung cancer patients compared to patients with other diseases (Doll & Hill, 1950). In their discussion, Doll and Hill raise an interesting point regarding comparative research methods by saying,

This is not necessarily to state that smoking causes carcinoma of the lung. The association would occur if carcinoma of the lung caused people to smoke or if both attributes were end-effects of a common cause.

They go on to assert that because the habit of smoking was seen to develop before the onset of lung cancer, the argument that lung cancer leads to smoking can be rejected. They therefore conclude, "that smoking is a factor, and an important factor, in the production of carcinoma of the lung."

Despite this substantial evidence, both the tobacco industry and unbiased scientists raised objections, claiming that the retrospective research on smoking was "limited, inconclusive, and controversial." The industry stated that the studies published did not demonstrate cause and effect, but rather a spurious association between two variables. Dr. Wilhelm Hueper of the National Cancer Institute, a scientist with a long history of research into occupational causes of cancers, argued that the emphasis on cigarettes as the only cause of lung cancer would compromise research support for other causes of lung cancer. Ronald Fisher, a renowned statistician, also was opposed to the conclusions of Doll and others, purportedly because they promoted a "puritanical" view of smoking.

The tobacco industry mounted an extensive campaign of misinformation, sponsoring and then citing research that showed that smoking did not cause "cardiac pain" as a distraction from the studies that were being published regarding cigarettes and lung cancer. The industry also highlighted studies that showed that individuals who quit smoking suffered from mild depression, and they pointed to the fact that even some doctors themselves smoked cigarettes as evidence that cigarettes were not harmful (Figure 5).

Figure 5: Cigarette advertisement circa 1946.


While the scientific research began to impact health officials and some legislators, the industry's ad campaign was effective. The US Federal Trade Commission banned tobacco companies from making health claims about their products in 1955. However, more significant regulation was averted. An editorial that appeared in the New York Times in 1963 summed up the national sentiment when it stated that the tobacco industry made a "valid point," and the public should refrain from making a decision regarding cigarettes until further reports were issued by the US Surgeon General.

In 1951, Doll and Hill enrolled 40,000 British physicians in a prospective comparative study to examine the association between smoking and the development of lung cancer. In contrast to the retrospective studies that followed patients with lung cancer back in time, the prospective study was designed to follow the group forward in time. In 1952, Drs. E. Cuyler Hammond and Daniel Horn enrolled 187,783 white males in the United States in a similar prospective study. And in 1959, the American Cancer Society (ACS) began the first of two large-scale prospective studies of the association between smoking and the development of lung cancer. The first ACS study, named Cancer Prevention Study I, enrolled more than 1 million individuals and tracked their health, smoking and other lifestyle habits, development of diseases, cause of death, and life expectancy for almost 13 years (Garfinkel, 1985).

All of the studies demonstrated that smokers are at a higher risk of developing and dying from lung cancer than nonsmokers. The ACS study further showed that smokers have elevated rates of other pulmonary diseases, coronary artery disease, stroke, and cardiovascular problems. The two ACS Cancer Prevention Studies would eventually show that 52% of deaths among smokers enrolled in the studies were attributed to cigarettes.

In the second half of the 20th century, evidence from other scientific research methods would contribute multiple lines of evidence to the conclusion that cigarette smoke is a major cause of lung cancer:

  • Descriptive studies of the pathology of lungs of deceased smokers would demonstrate that smoking causes significant physiological damage to the lungs.
  • Experiments that exposed mice, rats, and other laboratory animals to cigarette smoke showed that it caused cancer in these animals (see our Experimentation in Science module for more information).
  • Physiological models would help demonstrate the mechanism by which cigarette smoke causes cancer.

As evidence linking cigarette smoke to lung cancer and other diseases accumulated, the public, the legal community, and regulators slowly responded. In 1957, the US Surgeon General first acknowledged an association between smoking and lung cancer when a report was issued stating, "It is clear that there is an increasing and consistent body of evidence that excessive cigarette smoking is one of the causative factors in lung cancer." In 1965, over objections by the tobacco industry and the American Medical Association, which had just accepted a $10 million grant from the tobacco companies, the US Congress passed the Federal Cigarette Labeling and Advertising Act, which required that cigarette packs carry the warning: "Caution: Cigarette Smoking May Be Hazardous to Your Health." In 1967, the US Surgeon General issued a second report stating that cigarette smoking is the principal cause of lung cancer in the United States. While the tobacco companies found legal means to protect themselves for decades following this, in 1996, Brown and Williamson Tobacco Company was ordered to pay $750,000 in a tobacco liability lawsuit; it became the first liability award paid to an individual by a tobacco company.

  • Comparison across disciplines

Comparative studies are used in a host of scientific disciplines, from anthropology to archaeology, comparative biology, epidemiology, psychology, and even forensic science. DNA fingerprinting, a technique used to incriminate or exonerate a suspect using biological evidence, is based on comparative science. In DNA fingerprinting, segments of DNA are isolated from a suspect and from biological evidence such as blood, semen, or other tissue left at a crime scene. Up to 20 different segments of DNA are compared between that of the suspect and the DNA found at the crime scene. If all of the segments match, the investigator can calculate the statistical probability that the DNA came from the suspect as opposed to someone else. Thus DNA matches are described in terms of a "1 in 1 million" or "1 in 1 billion" chance of error.
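The "1 in 1 billion" arithmetic rests on the product rule: if the compared DNA segments (loci) are inherited independently, the chance that a random person matches on all of them is the product of the per-locus match frequencies. The frequencies in this sketch are hypothetical, not real forensic allele frequencies.

```python
# Sketch of the product rule behind DNA-match probabilities.
# Assumes independent loci; the per-locus frequencies are hypothetical.
from math import prod

per_locus_match_freq = [0.1] * 9   # e.g., 9 loci, each matched by 10% of people
random_match_prob = prod(per_locus_match_freq)
print(random_match_prob)           # about 1e-09, i.e., roughly "1 in 1 billion"
```

In real casework the per-locus frequencies differ by locus and population, and statistical corrections are applied, but the multiplicative logic is the same.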

Comparative methods are also commonly used in studies involving humans due to the ethical limits of experimental treatment. For example, in 2007, Petter Kristensen and Tor Bjerkedal published a study in which they compared the IQ of over 250,000 male Norwegians in the military (Kristensen & Bjerkedal, 2007). The researchers found a significant relationship between birth order and IQ, where the average IQ of first-born male children was approximately three points higher than the average IQ of the second-born male in the same family. The researchers further showed that this relationship was correlated with social rather than biological factors, as second-born males who grew up in families in which the first-born child died had average IQs similar to other first-born children. One might imagine a scenario in which this type of study could be carried out experimentally, for example, purposefully removing first-born male children from certain families, but the ethics of such an experiment preclude it from ever being conducted.

  • Limitations of comparative methods

One of the primary limitations of comparative methods is the control of other variables that might influence a study. For example, as pointed out by Doll and Hill in 1950, the association between smoking and cancer deaths could have meant that: a) smoking caused lung cancer, b) lung cancer caused individuals to take up smoking, or c) a third unknown variable caused lung cancer AND caused individuals to smoke (Doll & Hill, 1950). As a result, comparative researchers often go to great lengths to choose two different study groups that are similar in almost all respects except for the treatment in question. In fact, many comparative studies in humans are carried out on identical twins for this exact reason. For example, in the field of tobacco research, dozens of comparative twin studies have been used to examine everything from the health effects of cigarette smoke to the genetic basis of addiction.

  • Comparison in modern practice


Figure 6: The "Keeling curve," a long-term record of atmospheric CO2 concentration measured at the Mauna Loa Observatory (Keeling et al.). Although the annual oscillations represent natural, seasonal variations, the long-term increase means that concentrations are higher than they have been in 400,000 years. Graphic courtesy of NASA's Earth Observatory.

Despite the lessons learned during the debate that ensued over the possible effects of cigarette smoke, misconceptions still surround comparative science. For example, in the late 1950s, Charles Keeling, an oceanographer at the Scripps Institution of Oceanography, began to publish data he had gathered from a long-term descriptive study of atmospheric carbon dioxide (CO2) levels at the Mauna Loa Observatory in Hawaii (Keeling, 1958). Keeling observed that atmospheric CO2 levels were increasing at a rapid rate (Figure 6). He and other researchers began to suspect that rising CO2 levels were associated with increasing global mean temperatures, and several comparative studies have since correlated rising CO2 levels with rising global temperature (Keeling, 1970). Together with research from modeling studies (see our Modeling in Scientific Research module), this research has provided evidence for an association between global climate change and the burning of fossil fuels (which emits CO2).
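The studies correlating CO2 with temperature quantify the association with a correlation coefficient. The sketch below computes a Pearson correlation by hand; the yearly values are hypothetical stand-ins, not actual Mauna Loa or temperature records.

```python
# Sketch: quantifying an association such as CO2 vs. global temperature
# with a Pearson correlation coefficient. All data values are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

co2_ppm   = [316, 325, 338, 354, 369, 389, 409]     # hypothetical yearly means
temp_anom = [0.00, 0.03, 0.18, 0.32, 0.33, 0.54, 0.76]
r = pearson_r(co2_ppm, temp_anom)
print(r > 0.9)   # True -> strong positive association
```

A high correlation like this establishes association, not causation; as the smoking debate showed, causal claims rest on multiple lines of evidence, including physical models.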

Yet in a move reminiscent of the fight launched by the tobacco companies, the oil and fossil fuel industry launched a major public relations campaign against climate change research. As late as 1989, scientists funded by the oil industry were producing reports that called the research on climate change "noisy junk science" (Roberts, 1989). As with the tobacco issue, challenges to early comparative studies tried to paint the method as less reliable than experimental methods. But the challenges actually strengthened the science by prompting more researchers to launch investigations, thus providing multiple lines of evidence supporting an association between atmospheric CO2 concentrations and climate change. As a result, the culmination of multiple lines of scientific evidence prompted the Intergovernmental Panel on Climate Change organized by the United Nations to issue a report stating that "Warming of the climate system is unequivocal," and "Carbon dioxide is the most important anthropogenic greenhouse gas" (IPCC, 2007).

Comparative studies are a critical part of the spectrum of research methods currently used in science. They allow scientists to apply a treatment-control design in settings that preclude experimentation, and they can provide invaluable information about the relationships between variables. The intense scrutiny that comparison has undergone in the public arena due to cases involving cigarettes and climate change has actually strengthened the method by clarifying its role in science and emphasizing the reliability of data obtained from these studies.


QuestionPro GmbH


Comparative research: what it is and how to conduct it


Comparative research involves comparing elements to better understand the similarities and differences between them, applying rigorous methods and analyzing the results to draw meaningful conclusions. It helps to expand knowledge and provides a basis for informed decisions.

Learn more about its features and how it can be done.

  • 1 What is comparative research?
  • 2 Why comparative research?
  • 3 How to Conduct Comparative Research
  • 3.1 1. Define the goal of comparative research
  • 3.2 2. Select the items to compare
  • 3.3 3. Collect data
  • 3.4 4. Analysis of the data
  • 3.5 5. Interpretation of results
  • 3.6 6. Conclusion(s) from the comparative research
  • 3.7 7. Present the results of the comparative research
  • 3.8 Conclusion

What is comparative research?

Comparative research is research designed to analyse and compare two or more elements or phenomena in order to identify similarities, differences, and patterns between them. It is used in disciplines ranging from the natural sciences to psychology, sociology, and economics.

The main features of comparative research are:

  • Comparison: A direct comparison is made between two or more objects. Similarities and differences in terms of characteristics, behaviors, effects, or other relevant aspects are examined.
  • Clear goals: It has specific and clearly defined goals. An attempt can be made to understand the causes of the differences or similarities observed, to explain the effects of the variables being compared, or to suggest better approaches or solutions.
  • Context: Comparative research is carried out in a specific context. This means taking into account factors such as time, location, culture, and socio-economic environment, which may influence the elements being compared.
  • Different approaches: In comparative research, various methods and techniques can be used to collect data, including case studies, surveys, direct observation, document analysis, and statistical analysis.
  • Analysis and conclusion: Comparative research involves analyzing the data collected and drawing conclusions based on the comparisons made. These conclusions can provide important information about causal relationships, trends, or observed patterns.
  • Generalization: Depending on the scope of the study, the results may allow generalizations about the elements being compared. However, it is important to be aware of the limitations and to consider the validity of the results in different contexts.

Why comparative research?

Comparative research is used in a variety of situations and for a variety of purposes. Here are some examples where you can use this approach:

  • Understanding cultural differences: Comparative research is useful for analyzing and understanding cultural differences between different groups of people. It can help identify particular practices, values, beliefs and behaviors in different societies.
  • Evaluation of policies and programs: It makes it possible to analyse how policies are implemented and what results are achieved in different contexts, thus identifying good practices or areas for improvement.
  • Market research: In business, comparative research is used to analyse and compare the demand for products or services in different markets. This helps companies understand consumer preferences, adapt their marketing strategies and make informed decisions about expanding into new markets.
  • Scientific research: Comparative research is applied in a variety of scientific disciplines. In biology, for example, it can be used to compare species and study their characteristics and behavior. In psychology, it is used to compare groups of people and understand differences in behavior or personality.
  • Analysis of educational policies and systems: Comparative research is used in the field of education to analyse and compare the educational policies and systems of different countries or regions. This helps identify successful practices, common challenges and opportunities for improvement in education.
  • Labor market studies: They help analyse and compare working conditions, wages and other work-related aspects in different industries or countries. This provides information about labor market trends and inequalities.

How to Conduct Comparative Research

Conducting comparative research requires a few basic steps. Here is a simple explanation of how to do it:

1. Define the goal of comparative research

Before you begin, you should be clear about what you want to achieve with comparative research. Clearly define the goal and the research questions you want to answer.

2. Select the items to compare

Determine the elements, phenomena, or groups you want to compare. These can be different countries, cultures, policies, products, groups of people, etc. Make sure they are comparable and that you can get relevant data for each item.

Make sure you have a clear understanding of the elements you want to compare and that they are relevant to your research objective. The selected elements should be comparable to each other. This means that they should have characteristics and properties that can be measured and compared in a meaningful way.

When selecting a sample of items for comparison, ensure that it is a representative sample of the population or group to which you want to generalize the results.
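The sampling advice above can be sketched as a simple random draw without replacement, so every member of the population has the same chance of being compared. The population of schools and the seed are hypothetical, used only to make the illustration reproducible.

```python
# Sketch of drawing a simple random sample of items for comparison.
# The population list is hypothetical.
import random

population = [f"school_{i}" for i in range(200)]
random.seed(42)                        # fixed seed: reproducible illustration
sample = random.sample(population, k=20)
print(len(sample))  # 20 distinct items, drawn without replacement
```

For populations with known subgroups, stratified sampling (drawing separately within each subgroup) is often preferred over a simple random draw, but the principle of giving every member a known chance of selection is the same.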

3. Collect data

Use a variety of sources and methods to collect data about the items being compared. Identify appropriate data sources to collect information about the items being compared. These sources may include surveys, interviews, direct observations, databases, historical records, government reports, academic literature, media, and others.

Perform a quality check on the data collected. This includes checking the consistency, accuracy and completeness of the data. If necessary, perform additional checks or contact participants to clarify any ambiguities or errors in the data.

4. Analyze the data

Review the data collected and conduct a comparative analysis. Identify similarities and differences between the elements being compared. You can use statistical analysis techniques and comparison graphs, or simply compare the data qualitatively.

You can use descriptive statistics to summarize and present quantitative data clearly and concisely. These can include measures of central tendency (such as the mean, median, or mode) and measures of dispersion (such as the standard deviation or range). Descriptive statistics help you understand the main characteristics of the items being compared.
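The sketch below computes these descriptive statistics with Python's standard `statistics` module; the article-length figures are invented for illustration.

```python
import statistics

# Hypothetical data: lengths (in words) of foreign-affairs articles
# in two countries being compared.
country_a = [820, 760, 910, 640, 880, 790]
country_b = [430, 510, 390, 470, 450, 520]

for name, lengths in [("A", country_a), ("B", country_b)]:
    print(
        f"Country {name}: "
        f"mean={statistics.mean(lengths):.1f}, "
        f"median={statistics.median(lengths):.1f}, "
        f"stdev={statistics.stdev(lengths):.1f}, "
        f"range={max(lengths) - min(lengths)}"
    )
```

Summaries like these make the central tendency and spread of each group visible at a glance before moving on to formal comparison.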

5. Interpret the results

Based on the analyses carried out, interpret the results of the investigation. Identify patterns, trends, or causal relationships that emerge from the comparison. Explain the similarities and differences observed and look for possible explanations.

Try to find possible explanations for the results observed in your comparative research. Identify key variables that may influence the similarities and differences identified. Consider whether there are underlying causal factors or mediating variables that could explain the results obtained.

6. Draw conclusions from the comparative research

Draw relevant conclusions based on the interpretation of the results. Summarize the most important results of the comparative research and answer the research questions asked in the first step.

Reflect on the impact of your findings in the broader context. Explore how the results may contribute to existing knowledge on the topic and how they might impact practice. Additionally, identify the limitations of your comparative research, such as possible biases or limitations in the sample or methods used.

7. Present the results of the comparative research

Communicate the results of your comparative research clearly and concisely. You can use written reports, visual presentations, charts, or comparative tables, whichever works best for your audience.

Comparative research is used to analyse and compare elements, phenomena or practices in order to understand differences, identify best practices, evaluate policies or programs and make informed decisions in various fields such as culture, economics, science, education, etc.

Remember that data collection is a crucial phase in comparative research. It is important that it is carried out carefully and accurately in order to obtain reliable and valid information that allows you to make meaningful comparisons between the selected items.


What is comparative analysis? A complete guide

Last updated: 18 April 2023. Reviewed by Jean Kaluza.

Comparative analysis is a valuable tool for acquiring deep insights into your organization’s processes, products, and services so you can continuously improve them. 

Similarly, if you want to streamline, price appropriately, and ultimately be a market leader, you’ll likely need to draw on comparative analyses quite often.

When faced with multiple options or solutions to a given problem, a thorough comparative analysis can help you compare and contrast your options and make a clear, informed decision.

If you want to get up to speed on conducting a comparative analysis or need a refresher, here’s your guide.


  • What exactly is comparative analysis?

A comparative analysis is a side-by-side comparison that systematically compares two or more things to pinpoint their similarities and differences. The focus of the investigation might be conceptual—a particular problem, idea, or theory—or perhaps something more tangible, like two different data sets.

For instance, you could use comparative analysis to investigate how your product features measure up to the competition.

After a successful comparative analysis, you should be able to identify strengths and weaknesses and clearly understand which product is more effective.

You could also use comparative analysis to examine different methods of producing that product and determine which way is most efficient and profitable.

The potential applications for using comparative analysis in everyday business are almost unlimited. That said, a comparative analysis is most commonly used to examine:

  • Emerging trends and opportunities (new technologies, marketing)
  • Competitor strategies
  • Financial health
  • Effects of trends on a target audience

  • Why is comparative analysis so important? 

Comparative analysis can help narrow your focus so your business pursues the most meaningful opportunities rather than attempting dozens of improvements simultaneously.

A comparative approach also helps frame up data to illuminate interrelationships. For example, comparative research might reveal nuanced relationships or critical contexts behind specific processes or dependencies that wouldn’t be well-understood without the research.

For instance, if your business compares the cost of producing several existing products relative to which ones have historically sold well, that should provide helpful information once you’re ready to look at developing new products or features.

  • Comparative vs. competitive analysis—what’s the difference?

Comparative analysis is generally divided into three subtypes, using quantitative or qualitative data and then extending the findings to a larger group. These include:

  • Pattern analysis — identifying patterns or recurrences of trends and behavior across large data sets.
  • Data filtering — analyzing large data sets to extract an underlying subset of information. It may involve rearranging, excluding, and apportioning comparative data to fit different criteria.
  • Decision tree — flowcharting to visually map and assess potential outcomes, costs, and consequences.
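As a small, concrete example of the data-filtering subtype, the Python sketch below extracts the subset of records that fit a criterion and regroups them for side-by-side comparison; the records and field names are hypothetical.

```python
# Hypothetical comparative records.
records = [
    {"country": "A", "topic": "foreign", "length": 820},
    {"country": "A", "topic": "domestic", "length": 430},
    {"country": "B", "topic": "foreign", "length": 510},
    {"country": "B", "topic": "domestic", "length": 390},
]

# Filter: keep only foreign-affairs articles...
foreign = [r for r in records if r["topic"] == "foreign"]

# ...then apportion the filtered data by country for comparison.
by_country = {}
for r in foreign:
    by_country.setdefault(r["country"], []).append(r["length"])

print(by_country)  # {'A': [820], 'B': [510]}
```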

In contrast, competitive analysis is a type of comparative analysis in which you deeply research one or more of your industry competitors. In this case, you’re using qualitative research to explore what the competition is up to across one or more dimensions. For example:

  • Service delivery — metrics like Net Promoter Score indicate customer satisfaction levels.
  • Market position — the share of the market that the competition has captured.
  • Brand reputation — how well-known or recognized your competitors are within their target market.
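The service-delivery metric mentioned above has a standard formula: Net Promoter Score is the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A minimal sketch, with made-up survey ratings:

```python
# Hypothetical 0-10 survey ratings.
ratings = [10, 9, 8, 7, 9, 3, 10, 6, 9, 2]

promoters = sum(1 for r in ratings if r >= 9)   # ratings 9-10
detractors = sum(1 for r in ratings if r <= 6)  # ratings 0-6
nps = 100 * (promoters - detractors) / len(ratings)

print(f"NPS: {nps:+.0f}")  # 5 promoters, 3 detractors -> +20
```

Computing the same metric from a competitor's published scores gives a like-for-like point of comparison.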

  • Tips for optimizing your comparative analysis

Conduct original research

Thorough, independent research is a significant asset when doing comparative analysis. It provides evidence to support your findings and may present a perspective or angle not considered previously. 

Make analysis routine

To get the maximum benefit from comparative research, make it a regular practice, and establish a cadence you can realistically stick to. Some business areas you could plan to analyze regularly include:

  • Profitability
  • Competition

Experiment with controlled and uncontrolled variables

In addition to simply comparing and contrasting, explore how different variables might affect your outcomes.

For example, a controllable variable would be offering a seasonal feature like a shopping bot to assist in holiday shopping or raising or lowering the selling price of a product.

Uncontrollable variables include weather, changing regulations, the current political climate, or global pandemics.

Put equal effort into each point of comparison

Most people enter into comparative research with a particular idea or hypothesis they hope to validate. For instance, you might try to show that launching a new service is worthwhile. So, you may be disappointed if your analysis results don’t support your plan.

However, in any comparative analysis, try to maintain an unbiased approach by spending equal time debating the merits and drawbacks of any decision. Ultimately, this will be a practical, more long-term sustainable approach for your business than focusing only on the evidence that favors pursuing your argument or strategy.

Writing a comparative analysis in five steps

To put together a coherent, insightful analysis that goes beyond a list of pros and cons or similarities and differences, try organizing the information into these five components:

1. Frame of reference

Here is where you provide context. First, what driving idea or problem is your research anchored in? Then, for added substance, cite existing research or insights from a subject matter expert, such as a thought leader in marketing, startup growth, or investment.

2. Grounds for comparison

Why have you chosen to examine the two things you’re analyzing instead of focusing on two entirely different things? What are you hoping to accomplish?

3. Thesis

What argument or choice are you advocating for? What will be the before and after effects of going with either decision? What do you anticipate happening with and without this approach?

For example, “If we release an AI feature for our shopping cart, we will have an edge over the rest of the market before the holiday season.” The finished comparative analysis will weigh all the pros and cons of choosing to build the new, expensive AI feature, including variables like how “intelligent” it will be, what it “pushes” customers to use, and how much work it takes off the plate of the customer service team.

Ultimately, you will gauge whether building an AI feature is the right plan for your e-commerce shop.

4. Organize the scheme

Typically, there are two ways to organize a comparative analysis report. First, you can discuss everything about comparison point “A” and then go into everything about aspect “B.” Or, you alternate back and forth between points “A” and “B,” sometimes referred to as point-by-point analysis.

Using the AI feature as an example again, you could discuss all the pros and cons of building the feature, then all the benefits and drawbacks of maintaining it. Or you could compare and contrast each aspect one at a time: for example, a side-by-side comparison of shopping with the AI feature versus without it, before proceeding to another point of differentiation.

5. Connect the dots

Tie it all together in a way that either confirms or disproves your hypothesis.

For instance, “Building the AI bot would allow our customer service team to save 12% on returns in Q3 while offering optimizations and savings in future strategies. However, it would also increase the product development budget by 43% in both Q1 and Q2. Our budget for product development won’t increase again until series 3 of funding is reached, so despite its potential, we will hold off building the bot until funding is secured and more opportunities and benefits can be proved effective.”


Characteristics of a Comparative Research Design

Hannah Richardson, 28 Jun 2018.

Comparative research essentially compares two groups in an attempt to draw a conclusion about them. Researchers attempt to identify and analyze similarities and differences between groups, and these studies are most often cross-national, comparing two separate people groups. Comparative studies can be used to increase understanding between cultures and societies and create a foundation for compromise and collaboration. These studies contain both quantitative and qualitative research methods.

Explore this article

  • Comparative Quantitative
  • Comparative Qualitative
  • When to Use It
  • When Not to Use It

1 Comparative Quantitative

Quantitative, or experimental, research is characterized by the manipulation of an independent variable to measure and explain its influence on a dependent variable. Because comparative research studies analyze two different groups -- which may have very different social contexts -- it is difficult to establish the parameters of research. Such studies might seek to compare, for example, large amounts of demographic or employment data from different nations that define or measure relevant research elements differently.

However, the methods for statistical analysis of data inherent in quantitative research are still helpful in establishing correlations in comparative studies. Also, the need for a specific research question in quantitative research helps comparative researchers narrow down and establish a more specific comparative research question.
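As a sketch of what "establishing correlations" can look like in practice, the snippet below computes a Pearson correlation coefficient from first principles; the country-level figures are invented for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data for six countries: GDP per capita (thousands of
# dollars) and mean years of schooling.
gdp = [12, 25, 31, 48, 55, 62]
schooling = [6.1, 8.4, 9.0, 11.2, 12.5, 12.9]

print(f"r = {pearson(gdp, schooling):.3f}")
```

A coefficient near +1 indicates a strong positive association across the compared countries, though, as the surrounding text notes, correlation alone cannot establish why the groups differ.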

2 Comparative Qualitative

Qualitative, or nonexperimental, research is characterized by observation and recording of outcomes without manipulation. In comparative research, data are collected primarily by observation, and the goal is to determine similarities and differences that are related to the particular situation or environment of the two groups. These similarities and differences are identified through qualitative observation methods. Additionally, some researchers have favored designing comparative studies around a variety of case studies in which individuals are observed and behaviors are recorded. The results of each case are then compared across people groups.

3 When to Use It

Comparative research studies should be used when comparing two people groups, often cross-nationally. These studies analyze the similarities and differences between these two groups in an attempt to better understand both groups. Comparisons lead to new insights and better understanding of all participants involved. These studies also require collaboration, strong teams, advanced technologies and access to international databases, making them more expensive. Use comparative research design when the necessary funding and resources are available.

4 When Not to Use It

Do not use comparative research design with little funding, limited access to necessary technology and few team members. Because of the larger scale of these studies, they should be conducted only if adequate population samples are available. Additionally, data within these studies require extensive measurement analysis; if the necessary organizational and technological resources are not available, a comparative study should not be used. Do not use a comparative design if data are not able to be measured accurately and analyzed with fidelity and validity.


About the Author

Hannah Richardson has a Master's degree in Special Education from Vanderbilt University and a Bachelor of Arts in English. She has been a writer since 2004 and wrote regularly for the sports and features sections of "The Technician" newspaper, as well as "Coastwach" magazine. Richardson also served as the co-editor-in-chief of "Windhover," an award-winning literary and arts magazine. She is currently teaching at a middle school.


J Clin Oncol

Methods in Comparative Effectiveness Research

Katrina Armstrong

From the Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA.

Comparative effectiveness research (CER) seeks to assist consumers, clinicians, purchasers, and policy makers to make informed decisions to improve health care at both the individual and population levels. CER includes evidence generation and evidence synthesis. Randomized controlled trials are central to CER because of the lack of selection bias, with the recent development of adaptive and pragmatic trials increasing their relevance to real-world decision making. Observational studies comprise a growing proportion of CER because of their efficiency, generalizability to clinical practice, and ability to examine differences in effectiveness across patient subgroups. Concerns about selection bias in observational studies can be mitigated by measuring potential confounders and analytic approaches, including multivariable regression, propensity score analysis, and instrumental variable analysis. Evidence synthesis methods include systematic reviews and decision models. Systematic reviews are a major component of evidence-based medicine and can be adapted to CER by broadening the types of studies included and examining the full range of benefits and harms of alternative interventions. Decision models are particularly suited to CER, because they make quantitative estimates of expected outcomes based on data from a range of sources. These estimates can be tailored to patient characteristics and can include economic outcomes to assess cost effectiveness. The choice of method for CER is driven by the relative weight placed on concerns about selection bias and generalizability, as well as pragmatic concerns related to data availability and timing. Value of information methods can identify priority areas for investigation and inform research methods.

INTRODUCTION

The desire to determine the best treatment for a patient is as old as the medical field itself. However, the methods used to make this determination have changed substantially over time, progressing from the humoral model of disease through the Oslerian application of clinical observation to the paradigm of experimental, evidence-based medicine of the last 40 years. Most recently, the field of comparative effectiveness research (CER) has taken center stage 1 in this arena, driven, at least in part, by the belief that better information about which treatment a patient should receive is part of the answer to addressing the unsustainable growth in health care costs in the United States. 2 , 3

The emergence of CER has galvanized a re-examination of clinical effectiveness research methods, both among researchers and policy organizations. New definitions have been created that emphasize the necessity of answering real-world questions, where patients and their clinicians have to pick from a range of possible options, recognizing that the best choice may vary across patients, settings, and even time periods. 4 The long-standing emphasis on double-blinded, randomized controlled trials (RCTs) is increasingly seen as impractical and irrelevant to many of the questions facing clinicians and policy makers today. The importance of generating information that will “assist consumers, clinicians, purchasers, and policy makers to make informed decisions” 1 (p29) is certainly not a new tenet of clinical effectiveness research, but its primacy in CER definitions has important implications for research methods in this area.

CER encompasses both evidence generation and evidence synthesis. 5 Generation of comparative effectiveness evidence uses experimental and observational methods. Synthesis of evidence uses systematic reviews and decision and cost-effectiveness modeling. Across these methods, CER examines a broad range of interventions to “prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care.” 1 (p29)

EXPERIMENTAL METHODS

RCTs became the gold standard for clinical effectiveness research soon after publication of the first RCT in 1948. 6 An RCT compares outcomes across groups of participants who are randomly assigned to different interventions, often including a placebo or control arm ( Fig 1 ). RCTs are widely revered for their ability to address selection bias, the correlation between the type of intervention received and other factors associated with the outcome of interest. RCTs are fundamental to the evaluation of new therapeutic agents that are not available outside of a trial setting, and phase III RCT evidence is required for US Food and Drug Administration approval. RCTs are also important for evaluating new technology, including imaging and devices. Increasingly, RCTs are also used to shed light on biology through correlative mechanistic studies, particularly in oncology.

Fig 1. Experimental and observational study designs. In a randomized controlled trial, a population of interest is screened for eligibility, randomly assigned to alternative interventions, and observed for outcomes of interest. In an observational study, the population of interest is assigned to alternative interventions based on patient, provider, and system factors and observed for outcomes of interest.

However, traditional approaches to RCTs are increasingly seen as impractical and irrelevant to many of the questions facing clinicians and policy makers today. RCTs have long been recognized as having important limitations in real-world decision making, 7 including: (1) RCTs often have restrictive enrollment criteria, so that the participants do not resemble patients in practice, particularly in clinical characteristics such as comorbidity, age, and medications or in sociodemographic characteristics such as race, ethnicity, and socioeconomic status; (2) RCTs are often not feasible, whether because of expense, ethical concerns, or patient acceptance; and (3) given their expense and enrollment restrictions, RCTs are rarely able to answer questions about how the effect of the intervention may vary across patients or settings.

Despite these limitations, there is little doubt that RCTs will be a major component of CER. 8 Furthermore, their role is likely to grow with new approaches that increase their relevance in clinical practice. 9 Adaptive trials use accumulating evidence from the trial to modify the trial design, increasing efficiency and the probability that trial participants benefit from participation. 10 These adaptations can include changing the end of the trial, changing the interventions or intervention doses, changing the accrual rate, or changing the probability of being randomly assigned to the different arms. One example of an adaptive clinical trial in oncology is the multiarm I-Spy2 trial, which is evaluating multiple agents for neoadjuvant breast cancer treatment. 11 The I-Spy2 trial uses an adaptive approach to assigning patients to treatment arms (where patients with a tumor profile are more likely to be assigned to the arm with the best outcomes for that profile), and data safety monitoring board decisions are guided by Bayesian predicted probabilities of pathologic complete response. 12 , 13 Other examples of adaptive clinical trials in oncology include a randomized trial of four regimens in metastatic prostate cancer, where patients who did not respond to their initial regimen (selected based on randomization) were then randomly assigned to the remaining three regimens, 14 and the CALGB (Cancer and Leukemia Group B) 49907 trial, which used Bayesian predictive probabilities of inferiority to determine the final sample size needed for the comparison of capecitabine and standard chemotherapy in elderly women with early-stage breast cancer. 15 Pragmatic trials relax some of the traditional rules of RCTs to maximize the relevance of the results for clinicians and policy makers. These changes may include expansion of eligibility criteria, flexibility in the application of the intervention and in the management of the control group, and reduction in the intensity of follow-up or procedures for assessing outcomes. 16
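To make the adaptive-randomization idea concrete, here is a toy simulation in the spirit of (but far simpler than) designs like I-Spy2: a two-arm trial using Thompson sampling, where each arm's response rate gets a Beta posterior and new patients are preferentially assigned to the arm that currently looks better. The response rates are invented; this is an illustration, not a trial design.

```python
import random

random.seed(7)  # fixed seed for reproducibility

# Invented "true" response rates, unknown to the trial.
true_rate = {"arm_A": 0.30, "arm_B": 0.55}
successes = {arm: 0 for arm in true_rate}
failures = {arm: 0 for arm in true_rate}
assigned = {arm: 0 for arm in true_rate}

for _patient in range(200):
    # Sample a plausible response rate for each arm from its posterior
    # Beta(successes + 1, failures + 1), then assign to the best draw.
    draws = {arm: random.betavariate(successes[arm] + 1, failures[arm] + 1)
             for arm in true_rate}
    arm = max(draws, key=draws.get)
    assigned[arm] += 1
    # Simulate the patient's (binary) response on the assigned arm.
    if random.random() < true_rate[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(assigned)  # allocation typically concentrates on the better arm
```

The allocation probabilities adapt as evidence accumulates, which is exactly the property that raises both the efficiency of the trial and the expected benefit to participants.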

OBSERVATIONAL METHODS

The emergence of comparative effectiveness has led to a renewed interest in the role of observational studies for assessing the benefits and harms of alternative interventions. Observational studies compare outcomes between patients who receive different interventions through some process other than investigator randomization. Most commonly, this process is the natural variation in clinical care, although observational studies also can take advantage of natural experiments, where higher-level changes in care delivery (eg, changes in state policy or changes in hospital unit structure) lead to changes in intervention exposure between groups. Observational studies can enroll patients by exposure (eg, type of intervention) using a cohort design or outcome using a case-control design. Cohort studies can be performed prospectively, where participants are recruited at the time of exposure, or retrospectively, where the exposure occurred before participants are identified.

The strengths and limitations of observational studies for clinical effectiveness research have been debated for decades. 7 , 17 Because the incremental cost of including an additional participant is generally low, observational studies often have relatively large numbers of participants who are more representative of the general population. Large, diverse study populations make the results more generalizable to real-world practice and enable the examination of variation in effect across patient subgroups. This advantage is particularly important for understanding effectiveness among vulnerable populations, such as racial minorities, who are often underrepresented in RCT participants. Observational studies that take advantage of existing data sets are able to provide results quickly and efficiently, a critical need for most CER. Currently, observational data already play an important role in influencing guidelines in many areas of oncology, particularly around prevention (eg, nutritional guidelines, management of BRCA1/2 mutation carriers) 18 , 19 and the use of diagnostic tests (eg, use of gene expression profiling in women with node-negative, estrogen receptor–positive breast cancer). 20 However, observational studies also have important limitations. Observational studies are only feasible if the intervention of interest is already being used in clinical practice; they are not possible for evaluation of new drugs or devices. Observational studies are subject to bias, including performance bias, detection bias, and selection bias. 17 , 21 Performance bias occurs when the delivery of one type of intervention is associated with generally higher levels of performance by the health care unit (ie, health care quality) than the delivery of a different type of intervention, making it difficult to determine if better outcomes are the result of the intervention or the accompanying higher-quality health care. Detection bias occurs when the outcomes of interest are more easily detected in one group than another, generally because of differential contact with the health care system between groups. Selection bias is the most important concern in the validity of observational studies and occurs when intervention groups differ in characteristics that are associated with the outcome of interest. These differences can occur because a characteristic is part of the decision about which treatment to recommend (ie, disease severity), which is often termed confounding by indication, or because it is correlated with both intervention and outcome for another reason. A particular concern for CER of therapies is that some new agents may be more likely to be used in patients for whom established therapies have failed and who are less likely to be responsive to any therapy.

There are two main approaches for addressing bias in observational studies. First, important potential confounders must be identified and included in the data collection. Measured confounders can be addressed through multivariate and propensity score analysis. A telling example of the importance of adequate assessment of potential confounders was found through examination of the observational studies of hormone replacement therapy (HRT) and coronary heart disease (CHD). Meta-analyses of observational studies had long estimated a substantial reduction in CHD risk with the use of postmenopausal HRT. However, the WHI (Women's Health Initiative) trial, a large, double-blind RCT of postmenopausal HRT, found no difference in CHD risk between women assigned to HRT or placebo. Although this apparent contradiction is often used as general evidence against the validity of observational studies, a re-examination of the observational studies demonstrated that studies that adjusted for measures of socioeconomic status (a clear confounder between HRT use and better health outcomes) had results similar to those of the WHI, whereas studies that did not adjust for socioeconomic status found a protective effect with HRT 22 ( Fig 2 ). The use of administrative data sets for observational studies of comparative effectiveness is likely to become increasingly common as health information technology spreads, and data become more accessible; however, these data sets may be particularly limiting in their ability to include data on potential confounders. In some cases, the characteristics that influence the treatment decision may not be available in the data (eg, performance status, tumor gene expression), making concerns about confounding by indication too high to proceed without adjusting data collection or considering a different question.
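The SES confounding pattern described above can be illustrated numerically. The following is a minimal sketch (all counts are invented for demonstration, not from the WHI or any actual study) of how a crude comparison can suggest an effect that disappears once a confounder is held fixed by stratification:

```python
# Hypothetical illustration of confounding and stratified adjustment.
# Counts are invented; they are not from any real study.

# (treated_events, treated_n, control_events, control_n) per SES stratum
strata = {
    "high_SES": (10, 400, 5, 200),   # event rate 2.5% in both arms
    "low_SES":  (10, 100, 30, 300),  # event rate 10% in both arms
}

def rate(events, n):
    return events / n

# Crude (unadjusted) comparison pools the strata together
t_events = sum(s[0] for s in strata.values())
t_n      = sum(s[1] for s in strata.values())
c_events = sum(s[2] for s in strata.values())
c_n      = sum(s[3] for s in strata.values())
crude_diff = rate(t_events, t_n) - rate(c_events, c_n)

# Stratum-specific differences, weighted by stratum size (standardization)
adj_diff = 0.0
total = t_n + c_n
for te, tn, ce, cn in strata.values():
    weight = (tn + cn) / total
    adj_diff += weight * (rate(te, tn) - rate(ce, cn))

print(f"crude difference:    {crude_diff:+.3f}")   # appears protective
print(f"adjusted difference: {adj_diff:+.3f}")     # no effect once SES is fixed
```

Because the treated group is weighted toward the low-risk stratum, the crude comparison shows a spurious benefit; within each stratum the event rates are identical.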


Fig 2. Meta-analysis of observational studies of hormone replacement therapy (HRT) and coronary artery disease incidence comparing studies that did and did not adjust for socioeconomic status (SES). Data adapted. 22

Second, several analytic approaches can be used to address differences between groups in observational studies. The standard analytic approach involves the use of multivariable adjustment through regression models. Regression allows the estimation of the change in the outcome of interest from the difference in intervention, holding the other variables in the model (covariates) constant. Although regression remains the standard approach to the analysis of observational data, it can be misleading if there is insufficient overlap in the covariates between groups or if the functional forms of the variables are incorrectly specified. 23 Furthermore, the number of covariates that can be included is limited by the number of participants with the outcome of interest in the data set.
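As a sketch of how multivariable adjustment holds covariates constant, the following solves ordinary least squares from the normal equations on invented data; the small `ols` helper and all numbers are hypothetical, constructed so the outcome depends only on the covariate, not on the treatment:

```python
# Sketch of multivariable adjustment via ordinary least squares, solved from
# the normal equations with a small Gaussian elimination (no libraries).
# Data are invented: the outcome depends on the covariate, not the treatment.

def ols(X, y):
    """Solve (X'X) b = X'y by Gaussian elimination. X is a list of rows."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# rows: [intercept, treatment (0/1), covariate]; outcome = 2 * covariate exactly
X = [[1, t, c] for t, c in [(1, 5), (1, 6), (1, 7), (0, 1), (0, 2), (0, 3)]]
y = [2 * row[2] for row in X]

intercept, treatment_effect, covariate_effect = ols(X, y)
print(round(treatment_effect, 6), round(covariate_effect, 6))
```

The crude group means differ sharply (the treated group has larger covariate values), yet the adjusted treatment coefficient is zero once the covariate is in the model.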

Propensity score analysis is another approach to the estimation of an intervention effect in observational data that enables the inclusion of a large number of covariates and a transparent assessment of the balance of covariates after adjustment. 23 – 26 Propensity score analysis uses a two-step process, first estimating the probability of receiving a particular intervention based on the observed covariates (the propensity score) and then estimating the effect of the intervention within groups of patients who had a similar probability of receiving the intervention (often grouped as quintiles of propensity score). The degree to which the propensity score is able to represent the differences in covariates between intervention groups is assessed by examining the balance in covariates across propensity score categories. In an ideal situation, after participants are grouped by their propensity for being treated, those who receive different interventions have similar clinical and sociodemographic characteristics—at least for the characteristics that are measured ( Table 1 ). Rates of the outcomes of interest are then compared between intervention groups within each propensity score category, paying attention to whether the intervention effect differs across patients with a different propensity for receiving the intervention. In addition, the propensity score itself can be included in a regression model estimating the effect of the intervention on the outcome, a method that also allows for additional adjustment for covariates that were not sufficiently balanced across intervention groups within propensity score categories.
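The stratification step described above can be sketched as follows. The propensity scores are assumed to have been estimated already (eg, by logistic regression), and all patient records are invented for illustration:

```python
# Sketch of the second step of a propensity score analysis: stratify patients
# by an already-estimated propensity score and compare outcome rates within
# strata. Scores and outcomes below are invented.

# (propensity_score, received_intervention, had_outcome)
patients = [
    (0.12, 0, 0), (0.18, 0, 0), (0.15, 1, 0), (0.22, 0, 1),
    (0.35, 1, 0), (0.41, 0, 1), (0.38, 1, 1), (0.44, 0, 0),
    (0.55, 1, 0), (0.61, 0, 1), (0.58, 1, 0), (0.65, 1, 1),
    (0.72, 1, 0), (0.78, 0, 1), (0.75, 1, 0), (0.88, 1, 1),
]

def stratified_effect(data, n_strata=4):
    data = sorted(data)                      # order by propensity score
    size = len(data) // n_strata
    diffs, weights = [], []
    for s in range(n_strata):
        stratum = data[s * size:(s + 1) * size]
        treated = [o for _, t, o in stratum if t == 1]
        control = [o for _, t, o in stratum if t == 0]
        if treated and control:              # need both arms to compare
            diffs.append(sum(treated) / len(treated) - sum(control) / len(control))
            weights.append(len(stratum))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

print(f"stratified risk difference: {stratified_effect(patients):+.3f}")
```

With a real data set the same logic would use quintiles of the score and would also report the covariate balance within each stratum before any outcome comparison.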

Table 1. Hypothetic Example of Propensity Score Analysis Comparing Two Intervention Groups, A and B

The use of propensity scores for oncology clinical effectiveness research has become increasingly popular over the last decade, with six articles published in Journal of Clinical Oncology in 2011 alone. 27 – 32 However, propensity score analysis has limitations, the most important of which is that it can only include the variables that are in the available data. If a factor that influences the intervention assignment is not included or measured accurately in the data, it cannot be adequately addressed by a propensity score. For example, in a prior propensity score analysis of the association between active treatment and prostate cancer mortality among elderly men, we were able to include only the variables available in Surveillance, Epidemiology, and End Results–Medicare linked data in our propensity score. 33 The data included some of the factors that influence treatment decisions (eg, age, comorbidities, tumor grade, and size) but not others (eg, functional status, prostate-specific antigen score). Furthermore, the measurement of some of the available factors was imperfect—for example, assessment of comorbidities was based on billing codes, which can underestimate actual comorbidity burden and provide no information about the severity of the comorbidity. Thus, although the final result demonstrating a fairly strong association between active treatment and reduced mortality was quite robust based on the data that were available, it is still possible that the association represents unaddressed selection factors where healthier men underwent active treatment. 34

Instrumental variable methods are a third analytic approach that estimates the effect of an intervention in observational data without requiring the factors that differ between the intervention groups to be available in the data, thereby addressing both measured and unmeasured confounders. 35 The goal underlying instrumental variable analysis is to identify a characteristic (called the instrument) that strongly influences the assignment of patients to intervention but is not associated with the outcomes of interest (except through the intervention). In essence, an instrumental variable approach is an attempt to replicate an RCT, where the instrument is randomization. 36 Common instruments include the patterns of treatment across geographic areas or health care providers, the distance to a health care facility able to provide the intervention of interest, or structural characteristics of the health care system that influence what interventions are used, such as the density of certain types of providers or facilities. The analysis involves two stages: first, the probability of receiving the intervention of interest is estimated as a function of the instrument and other covariates; second, a model is built predicting the outcome of interest based on the instrument-based intervention probability and the residual from the first model.
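In the simplest case of one instrument and one treatment, the two-stage estimate reduces to a ratio of covariances (the Wald estimator). A minimal sketch on invented data, with a hypothetical region indicator standing in for an area-level treatment pattern:

```python
# Single-instrument sketch of the two-stage idea: the instrument z is a
# hypothetical region with a high treatment rate; all numbers are invented.

def mean(xs):
    return sum(xs) / len(xs)

def iv_estimate(z, x, y):
    """cov(z, y) / cov(z, x): the two-stage least-squares estimate
    for one instrument z and one treatment x."""
    mz, mx, my = mean(z), mean(x), mean(y)
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return cov_zy / cov_zx

z = [1, 1, 1, 1, 0, 0, 0, 0]   # region (instrument)
x = [1, 1, 1, 0, 1, 0, 0, 0]   # received treatment
y = [5, 6, 5, 2, 6, 3, 2, 3]   # outcome score
print(f"IV estimate of treatment effect: {iv_estimate(z, x, y):.3f}")
```

The denominator is the first-stage association between instrument and treatment; when it is weak, the estimate becomes unstable, which is the "weak instrument" concern discussed below.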

Instrumental variable analysis is commonly used in economics 37 and has increasingly been applied to health and health care. In oncology, instrumental variable approaches have been used to examine the effectiveness of treatments for lung, prostate, bladder, and breast cancers, with the most common instruments being area-level treatment patterns. 38 – 42 One recent analysis of prostate cancer treatment found that multivariable regression and propensity score methods resulted in essentially the same estimate of effect for radical prostatectomy, but an instrumental variable based on the treatment pattern of the previous year found no benefit from radical prostatectomy, similar to the estimate from a recently published trial. 41 , 43 However, concerns also exist about the validity of instrumental variable results, particularly if the instrument is not strongly associated with the intervention, or if there are other potential pathways by which the instrument may influence the outcome. Although the strength of the association between the instrument and the intervention assignment can be tested in the analysis, alternative pathways by which the instrument may be associated with the outcome are often not identified until after publication. A recent instrumental variable analysis used annual rainfall as the instrument to demonstrate an association between television watching and autism, arguing that annual rainfall is associated with the amount of time children watch television but is not otherwise associated with the risk of autism. 44 The findings generated considerable controversy after publication, with the identification of several other potential links between rainfall and autism. 45 Instrumental variable methods have traditionally been unable to examine differences in effect between patient subgroups, but new approaches may improve their utility in this important component of CER. 46 , 47

SYSTEMATIC REVIEWS

For some decisions faced by clinicians and policy makers, there is insufficient evidence to inform decision making, and new studies to generate evidence are needed. However, for other decisions, evidence exists but is sufficiently complex or controversial that it must be synthesized to inform decision making. Systematic reviews are an important form of evidence synthesis that brings together the available evidence using an organized and evaluative approach. 48 Systematic reviews are frequently used for guideline development and generally include four major steps. 49 First, the clinical decision is identified, and the analytic framework and key questions are determined. Sometimes the decision may be straightforward and involve a single key question (eg, Does drug A reduce the incidence of disease B?), but other times the question may be more complicated (eg, Should gene expression profiling be used in early-stage breast cancer?) and involve multiple key questions. 50 Second, the literature is searched to identify the relevant studies using inclusion and exclusion criteria that may include the timing of the study, the study design, and the location of the study. Third, the identified studies are graded on quality using established criteria such as the CONSORT criteria for RCTs 51 and the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) criteria for observational studies. 52 Studies that do not meet a minimum quality threshold may be excluded because of concern about the validity of the results. Fourth, the results of all the studies are collated in evidence tables, often including key characteristics of the study design or population that might influence the results. Meta-analytic techniques may be used to combine results across studies when there is sufficient homogeneity to make a single-point estimate statistically valid. 
Alternatively, models may be used to identify the study or population factors that are associated with different results.
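The inverse-variance pooling behind a fixed-effect meta-analysis, the simplest of the meta-analytic techniques mentioned above, can be sketched in a few lines. The study estimates below are invented for illustration:

```python
# Sketch of inverse-variance (fixed-effect) pooling of study estimates.
# The (log relative risk, standard error) pairs are invented.

import math

studies = [(-0.22, 0.10), (-0.11, 0.15), (0.05, 0.20)]

weights = [1 / se ** 2 for _, se in studies]          # precision weights
pooled_log_rr = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo = math.exp(pooled_log_rr - 1.96 * pooled_se)
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR {math.exp(pooled_log_rr):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Fixed-effect pooling assumes the studies estimate a single common effect; when that homogeneity assumption fails, a random-effects model or a meta-regression on study characteristics is the usual alternative.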

Although systematic reviews are a key component of evidence-based medicine, their role in CER is still uncertain. The traditional approach to systematic reviews has often excluded observational studies because of concerns about internal validity, but such exclusions may greatly limit the evidence available for many important comparative effectiveness questions. CER is designed to inform real-world decisions between available alternatives, which may include multiple tradeoffs. Inclusion of information about harms in comparative effectiveness systematic reviews is desirable but often challenging because of limited data. Finally, systematic reviews are rarely able to examine differences in intervention effects across patient characteristics, another important step for achieving the goals of CER.

DECISION AND COST-EFFECTIVENESS ANALYSIS

Another evidence synthesis method that is gaining increasing traction in CER is decision modeling. Decision modeling is a quantitative approach to evidence synthesis that brings together data from a range of sources to estimate expected outcomes of different interventions. 53 The first step in a decision model is to lay out the structure of the decision, including the alternative choices and the clinical and economic outcomes of those alternatives. 54 Ensuring that the structure of the model holds true to the clinical scenario of interest without becoming overwhelmed by minor possible variations is critical for the eventual impact of the model. 55 Once the decision structure is determined, a decision tree or simulation model is created that incorporates the probabilities of different outcomes over time and the change in those probabilities from the use of different interventions. 56 , 57 To calculate the expected outcomes, a hypothetic cohort of patients is run through each of the decision alternatives in the model. Estimated outcomes are generally assessed as a count of events in the cohort (eg, deaths, cancers) or as the mean or median life expectancy among the cohort. 58
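The expected-outcome calculation at the heart of a decision tree can be sketched as follows; the two alternatives, their branch probabilities, and the life-year payoffs are all invented for illustration:

```python
# Minimal decision-tree sketch: a hypothetical cohort is "run through" two
# alternatives, and expected life-years are computed as probability-weighted
# averages. All probabilities and payoffs are invented.

# each alternative: list of (probability, life_years) branches
alternatives = {
    "surgery":      [(0.90, 12.0), (0.10, 1.0)],   # cure vs operative death
    "surveillance": [(0.70, 11.0), (0.30, 6.0)],   # no progression vs progression
}

def expected_value(branches):
    # branch probabilities must exhaust the possible outcomes
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

for name, branches in alternatives.items():
    print(f"{name}: {expected_value(branches):.2f} expected life-years")
```

Real models chain many such branches over time (often as Markov state-transition models), but every layer reduces to the same probability-weighted expectation.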

Decision models can also include information about the value placed on each of the outcomes (often referred to as utility) as well as the health care costs incurred by the interventions and the health outcomes. A decision model that includes cost and utility is often referred to as a cost-benefit or cost-effectiveness model and is used in some settings to compare value across interventions. The types of costs that are included depend on the perspective of the model, with a model from the societal perspective including both direct medical and indirect costs (eg, loss of productivity), a model from a payer (ie, insurer) perspective including only direct medical costs, and a model from a patient perspective including the costs experienced by the patient. Future costs are discounted to address the change in monetary value over time. 59 Sensitivity analyses are used to explore the impact of different assumptions on the model results, a critical step for understanding how the results should be used in clinical and policy decisions and for the development of future evidence-generation research. These sensitivity analyses often use a probabilistic approach, where a distribution is entered for each of the inputs and the computer samples from those distributions across a large number of simulations, thereby creating a confidence interval around the estimated outcomes of the alternative choices.
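Two of the mechanics described above, discounting of future costs and probabilistic sensitivity analysis, can be sketched briefly. The 3% annual discount rate is a common convention rather than a requirement, and the one-input "model" here is deliberately trivial:

```python
# Sketch of discounting and probabilistic sensitivity analysis.
# All costs, rates, and distributions are invented.

import random

def discounted_total(costs_by_year, rate=0.03):
    """Present value of a stream of yearly costs; year 0 is undiscounted."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))

print(f"present value: {discounted_total([10000, 5000, 5000]):.0f}")

# Probabilistic sensitivity: sample an uncertain input many times and
# summarize the spread of the model output across simulations.
random.seed(0)
outputs = []
for _ in range(10000):
    effectiveness = random.uniform(0.4, 0.6)     # uncertain input
    outputs.append(effectiveness * 10.0)         # toy model: life-years gained
outputs.sort()
lo, hi = outputs[250], outputs[9750]             # central 95% of simulations
print(f"95% interval for life-years gained: {lo:.2f}-{hi:.2f}")
```

In a full model every uncertain input gets its own distribution and the sampled sets are propagated jointly, so the resulting interval reflects all the input uncertainty at once.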

Decision models have several strengths in CER. They can link multiple sources of information to estimate the effect of different interventions on health outcomes, even when there are no studies that directly assess the effect of interest. Because they can examine the effect of variation in different probability estimates, they are particularly useful for understanding how patient characteristics will affect the expected outcomes of different interventions. Decision models can also estimate the impact of an intervention across a population, including the effect on economic outcomes. Decision and cost-effectiveness analyses have been used frequently in oncology, particularly for decisions with options that include the use of a diagnostic or screening test (eg, bone mineral density testing for management of osteoporosis risk), 60 involve significant tradeoffs (eg, adjuvant chemotherapy), 61 or have only limited empirical evidence (eg, management strategies in BRCA mutation carriers). 62

However, decision models also have several limitations that have restricted their impact on clinical and policy decision making in the United States to date and are likely to constrain their role in future CER. Model results are often highly sensitive to the assumptions of the model, and removing bias from these assumptions is difficult; the potential impact of conflicts of interest is high. Decision models also require data for every input, and for many decisions, data are insufficient for key inputs, requiring the use of educated guesses (ie, expert opinion). The measurement of utility has proven particularly challenging and can lead to counterintuitive results. In the end, decision analysis is similar to other comparative effectiveness methods: useful for the right question as long as results are interpreted with an understanding of the methodologic limitations.

SELECTION OF CER METHODS

The choice of method for a comparative effectiveness study involves the consideration of multiple factors. The Patient-Centered Outcomes Research Institute Methods Committee has identified five intrinsic and three extrinsic factors ( Table 2 ), including internal validity, generalizability, and variation across patient subgroups, as well as feasibility and time urgency. 63 The importance of these factors will vary across the questions being considered. For some questions, the concern about selection bias will be too great for observational studies, particularly if a strong instrument cannot be identified. Many questions about aggressive versus less aggressive treatments may fall into this category, because the decision is often correlated with patient characteristics that predict survival but are rarely found in observational data sets (eg, functional status, social support). For other questions, concern about selection bias will be less pressing than the need for rapid and efficient results. This scenario may be particularly relevant for the comparison of existing therapies that differ in cost or adverse outcomes, where the use of the therapy is largely driven by practice style. In many cases, the choice will be pragmatic, based on what data are available and the feasibility of conducting an RCT. These choices will increasingly be informed by value-of-information methods 64 – 66 that use economic modeling to provide guidance about where and how investment in CER should be made.

Table 2. Factors That Influence Selection of Study Design for Patient-Centered Outcome Research

In reality, the questions of CER are not new but are simply more important than ever. Nearly 50 years ago, Sir Austin Bradford Hill spoke about the importance of a broad portfolio of methods in clinical research, saying “To-day … there are many drugs that work and work potently. We want to know whether this one is more potent than that, what dose is right and proper, for what kind of patient.” 7 (p109) This call has expanded beyond drugs to become the charge for CER. To fulfill this charge, investigators will need to use a range of methods, extending the experience in effectiveness research of the last decades “to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.” 1 (p29)

Supported by Award No. UC2CA148310 from the National Cancer Institute.

The content is solely the responsibility of the author and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.

Author's disclosures of potential conflicts of interest and author contributions are found at the end of this article.

AUTHOR'S DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST

The author(s) indicated no potential conflicts of interest.


Comparative research, in research methodology, takes comparison to the next level by allowing you to compare and analyze two or more subjects across a multitude of disciplines. Thanks to the development of the comparative method in the 19th century, researchers can apply systematic comparisons to draw facts and conclusions in the form of comparative research.






Causal Comparative Research: Definition, Types & Benefits

Causal-comparative research is a methodology used to identify cause-effect relationships between independent and dependent variables.

Within the field of research, there are multiple methodologies and ways to find answers to your needs. In this article, we will address everything you need to know about Causal Comparative Research, a methodology with many advantages and applications.

What Is Causal Comparative Research?


Researchers can study cause and effect in retrospect. This can help determine the consequences or causes of differences already existing among or between different groups of people.

When you think of Causal Comparative Research, it will almost always consist of the following:

  • A method or set of methods to identify cause/effect relationships
  • A set of individuals (or entities) that are NOT selected randomly – they were intended to participate in this specific study
  • Variables are represented in two or more groups (cannot be less than two, otherwise there is no differentiation between them)
  • Non-manipulated independent variables – typically a suggested relationship, since we can't control the independent variable completely

Types of Causal Comparative Research

Causal Comparative Research is broken down into two types:

  • Retrospective Comparative Research
  • Prospective Comparative Research

Retrospective Comparative Research: Involves investigating a particular question after the effects have occurred, in an attempt to see whether a specific variable influences another variable.

Prospective Comparative Research: This type of Causal Comparative Research is initiated by the researcher, starting with the causes and aiming to analyze the effects of a given condition. This type of investigation is much less common than the retrospective type.


Causal Comparative Research vs Correlation Research

The universal rule of statistics… correlation is NOT causation! 

Causal Comparative Research does not rely on relationships. Instead, it compares two groups to find out whether the independent variable affected the outcome of the dependent variable.

When running Causal Comparative Research, none of the variables can be manipulated, and a cause-effect relationship has to be established with a persuasive, logical argument; otherwise, it's a correlation.

Another significant difference between the two methodologies is how the collected data are analyzed. In Causal Comparative Research, results are usually analyzed using cross-break tables and comparisons of the averages obtained, whereas correlation research typically uses scatter charts and correlation coefficients.
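The cross-break table and group-average analysis described above can be built in a few lines of plain Python; the two pre-existing groups and their exam outcomes are invented for illustration:

```python
# Sketch of a cross-break (contingency) table with group averages for two
# pre-existing, non-randomized groups. All records are invented.

from collections import defaultdict

# (group, passed_exam) observations
records = [("tutored", 1), ("tutored", 1), ("tutored", 0),
           ("untutored", 1), ("untutored", 0), ("untutored", 0)]

# cross-break table: counts per (group, outcome) cell
table = defaultdict(int)
for group, outcome in records:
    table[(group, outcome)] += 1

for group in ("tutored", "untutored"):
    n = table[(group, 0)] + table[(group, 1)]
    print(f"{group}: passed={table[(group, 1)]} failed={table[(group, 0)]} "
          f"pass rate={table[(group, 1)] / n:.2f}")
```

The pass-rate comparison is descriptive; because the groups were not randomized, any difference still needs the persuasive, logical argument mentioned above before it can be read as cause and effect.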

Advantages and Disadvantages of Causal Comparative Research

Like any research methodology, causal comparative research has specific uses and limitations to weigh before applying it to your next project. Below we list some of the main advantages and disadvantages.

Advantages

  • It is more efficient, since it allows you to save human and economic resources and to obtain results relatively quickly.
  • It can identify the causes of certain occurrences (or non-occurrences).
  • It relies on descriptive analysis rather than experimental manipulation, so it can be used where experiments are not feasible.

Disadvantages

  • You’re not fully able to manipulate/control an independent variable as well as the lack of randomization
  • Like other methodologies, it tends to be prone to some research bias , the most common type of research is subject- selection bias , so special care must be taken to avoid it so as not to compromise the validity of this type of research.
  • The loss of subjects/location influences / poor attitude of subjects/testing threats….are always a possibility

Finally, it is important to remember that the results of this type of causal research should be interpreted with caution: a common mistake is to assume that because there is a relationship between the two variables analyzed, one variable necessarily influences or is the main factor behind the other.


QuestionPro can be your ally in your next Causal Comparative Research

QuestionPro is one of the platforms most used by the world’s leading research agencies, thanks to its diverse functions and versatility when collecting and analyzing data.

With QuestionPro you will not only be able to collect the necessary data to carry out your causal comparative research, but you will also have access to a series of advanced reports and analyses to obtain valuable insights for your research project.

We invite you to learn more about our Research Suite, schedule a free demo of our main features today, and clarify all your doubts about our solutions.


Author : John Oppenhimer



Comparative Legal Research: A Brief Overview

  • Vellah Kedogo Kigwiru

January 25, 2020

Introduction

Fombad has argued that, legal research on any legal system, legal traditions or topic is either explicitly or implicitly comparative because none is self-contained or self-reliant. Comparative legal research and comparative law is evolving and attracting a lot of attention in the legal scholarly work. Africa , as a region is not an exception.  It is very common in Africa to come across research papers or thesis dissertations at undergraduate, masters and doctorate level, where the authors assert that they are undertaking a comparative study.  In most cases, the comparative legal research will involve countries beyond the African borders, or doctrines that have evolved and are more established in other jurisdictions. But do the outcome of the research reflect a comparative legal research or what should authors consider when selecting a comparative research method? In short, at what point does a researcher conclude that, indeed, a comparative study is relevant for their research project? It was during my own research presentation, when I posited that, my study was a comparative study between the European Union (EU) and Common Market for Eastern and Southern Africa (COMESA), I realized despite having used the term ‘comparative analysis’ on various occasions, it was more than what I had contemplated. The questions that followed left me dumbfounded and I could not answer them clearly and with certainty. The questions asked were: how was my study an actual comparative study?; why a comparative study?; why did I choose the EU and not the US or any other existing regional regime in Africa?; what was the ‘construct equivalence’?; what was I going to compare? and why the comparison?; how was I going to compare the two?; and which method of data collection was I going to use in carrying out the comparative analysis? Finally, what was my research question and how was I going to use both cases to provide an answer? Was a comparative research necessary? 
Would I still be able to answer my research question and meet my objectives without a comparative study? The essence of these questions borders on understanding comparative research methods. They also push a researcher to answer the why, where, what, when and how questions when selecting a research method and design. Having had difficulty answering them, I decided to delve further into what the comparative research method constitutes by attending research methods classes and reviewing the available literature on comparative research methods. I hope this article helps anyone who is seeking to employ comparative legal research.

Selecting a research method and design

Before you select any research method and design, the first step as a researcher is to formulate a clear research question informed by your research topic, aims, interests and theoretical framework. The assumption is that you have selected a research topic that not only interests you but is relevant and contributes to the ongoing conversation. The next step, as Vogt suggests, is to select a research design and method that will provide an answer to the research question. This implies that you must have a solid understanding of research methods and designs, as noted by Cane and Kritzer in their detailed book on empirical legal research. Choosing a research method or design is not easy, and the choice is not exclusive either: a researcher can employ mixed methods, if necessary, to answer the research question and support their argument with evidence in a logical manner.

What is comparative legal research?

As this article focuses on comparative legal research, it is critical to understand what it constitutes before choosing to employ it. Hoecke notes that 'researchers get easily lost when embarking on a comparative legal research. The main reason being that there is no agreement on the kind of methodology to be followed, nor even on the methodologies that could be followed'. According to Paris, the lack of a definition of what comparative law is, or of what the method of comparative law is, has exacerbated the situation. Despite these concerns, comparative legal research emanates from comparative research methods, that is, the study of two or more macro-level units with the aim of explaining the differences and similarities between the units of analysis. The term 'comparative' implies that a researcher seeks to compare one subject with another. At the core of comparative research methods, some authors argue, a degree of similarity, referred to as 'comparability' or 'construct equivalence', should exist. Esser and Vliegenthart assert that 'a key issue in conducting comparative empirical research is to ensure equivalence, that is, the ability to validly collect data that are indeed comparable between different contexts and to avoid biases in measurement, instruments and sampling'. Yet, in real-life scenarios, 'comparability' may not reflect similarity, and explaining equivalence is further complicated by the fact that the meaning of any concept is contextual. Örücü has argued that the concept of comparability, which stipulates that things to be compared must be comparable, is not entirely practical. What a researcher needs to show is why they believe the two units of analysis should be compared, by studying both similarities and diversity and taking the social context into consideration. Understanding the aim and goal of a comparative study is therefore critical. This takes us to the next question: why comparative legal research?
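The equivalence concern Esser and Vliegenthart raise can be made concrete with a toy example. The sketch below uses invented numbers and a hypothetical media-content measure (foreign-affairs coverage measured by article length), not data from any study cited here; it shows how a raw comparison across contexts produces a biased reading that standardizing by a contextual baseline corrects.

```python
# Hypothetical illustration of measurement bias in comparative research.
# All numbers are invented for demonstration purposes only.
country_a = {"foreign_affairs_len": 1200, "mean_article_len": 600}
country_b = {"foreign_affairs_len": 900, "mean_article_len": 300}

raw_a = country_a["foreign_affairs_len"]  # 1200
raw_b = country_b["foreign_affairs_len"]  # 900

# Raw totals suggest country A covers foreign affairs more ...
biased_reading = raw_a > raw_b  # True

# ... but standardizing by mean article length corrects for the fact
# that articles in A are simply longer across the board.
std_a = raw_a / country_a["mean_article_len"]  # 2.0 "average articles" worth
std_b = raw_b / country_b["mean_article_len"]  # 3.0 "average articles" worth
equivalent_reading = std_b > std_a  # True: B actually covers more

print(biased_reading, equivalent_reading)
```

The point is not the arithmetic but the design lesson: without establishing equivalence of the measure across contexts, the comparison answers a different question than the one asked.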
Why comparative legal research?

After understanding what comparative legal research is, you have to justify why you selected it. Paris posits that 'the researcher in comparative law, while going through the different stages of the comparative analysis, has to set her own parameters of research within the theoretical framework provided in the comparative law literature and has to justify the direction she chooses to give as regards her methodological choices. In short, the researcher has to master the art of justifying her choices about why and how she uses comparative law'. In answering the why question, it is prudent to understand the aims and theoretical underpinnings of comparative research methods, which seek to provide conclusions beyond single cases. Mills and others argue that 'the underlying goal of comparative analysis is to search for similarity and variance'. According to Wilson, 'by looking overseas, by looking at the other legal systems, it has been hoped to benefit the national legal system of the observer, offering suggestions for future developments, providing warnings of possible difficulties, giving an opportunity to stand back from one's own national system and look at it more critically, but not to remove it from first place on the agenda'. In the modern globalized world and in multidisciplinary research, comparative legal research is not limited to the analysis of national legal systems, as it was conceptualized in the 19th and 20th centuries. Further, the traditional aims of comparative legal studies, which sought to harmonize laws, especially in Europe, are questionable in the modern age. For instance, after colonization, most African countries adopted laws that were wholesale transplants from their former colonizers, a product of western imperialism that missed the context of African societies. This has led to massive reforms of those laws to reflect the realities of the various African societies.
Scholars such as Shako have called for the need to dismantle the legacies of colonization.

Justifying the case selection

As you seek to justify why you selected comparative legal research methods, another hurdle is justifying the case selection. Case selection and sampling in comparative research methods are closely linked to the concepts of 'comparability' and 'construct equivalence' discussed above. As a researcher you must take a thorough contextual approach. This involves considering the historical and socio-economic context of the subjects under study to provide a better understanding and avoid unnecessary bias. In essence, case selection narrows down to the 'why' question and to understanding the aim of comparative research methods. For instance, you can use comparative analysis where a doctrine originated and is well embedded in one jurisdiction, to inform its application in another jurisdiction where the doctrine is still novel. So, before you indicate that you are carrying out a comparative study between Nigeria and the US on the fight against terrorism, you must justify why you chose the US and not, say, Kenya. For guidance see Eberle, Eser, Fombad, Epstein and Martin, and Cane and Kritzer. Having justified why you selected comparative legal research to answer your research question, and having justified your case selection, the next hurdle is to explain how you are going to use the comparative legal research design. In what way are you going to compare the two cases? Hoecke posits six methods for comparative research: 'the functional method, the structural method, the analytical method, the law-in-context method, the historical method and the common-core method'. To understand these methods and how you can employ each, see Michaels, Karst, Monateri, Leckey, Eberle and Frohlich.

Conclusion

As this blog article is subject to a word limit, we cannot discuss everything related to comparative research methods in legal studies. However, before employing a comparative legal research method, understand what it entails and why you are adopting it, and justify the case selection and sampling. Finally, be very clear about how you are going to carry out the comparative analysis.


Comparative Designs

  • First Online: 18 January 2019


Oddbjørn Bukve, Western Norway University of Applied Sciences, Sogndal, Norway


A comparative design involves studying variation by comparing a limited number of cases without using statistical probability analyses. Such designs are particularly useful for knowledge development when we lack the conditions for control through variable-centred, quasi-experimental designs. Comparative designs often combine different research strategies by using one strategy to analyse properties of a single case and another strategy for comparing cases. A common combination is the use of a type of case design to analyse within the cases, and a variable-centred design to compare cases. Case-oriented approaches can also be used for analysis both within and between cases. Typologies and typological theories play an important role in such a design. In this chapter I discuss the two types separately.


Ragin later developed the method to make it possible to use continuous variables and a probability logic, so-called fuzzy-set logic (Ragin, 2000).

Boolean algebra describes logical relations in much the same way that ordinary algebra describes numeric relations.
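To give a flavour of how Boolean algebra supports case comparison, the sketch below builds a minimal crisp-set truth table in the spirit of Ragin's approach. The cases, the two conditions (A, B) and the outcome are invented for illustration; real qualitative comparative analysis (QCA) relies on dedicated software and careful calibration of set membership.

```python
# A minimal, illustrative crisp-set comparison (invented data).
cases = [
    {"name": "Case1", "A": 1, "B": 1, "outcome": 1},
    {"name": "Case2", "A": 1, "B": 0, "outcome": 1},
    {"name": "Case3", "A": 0, "B": 1, "outcome": 0},
    {"name": "Case4", "A": 0, "B": 0, "outcome": 0},
]

def truth_table(cases, conditions):
    """Group cases into truth-table rows, one row per configuration of conditions."""
    rows = {}
    for c in cases:
        cfg = tuple(c[k] for k in conditions)
        rows.setdefault(cfg, []).append(c["outcome"])
    return rows

rows = truth_table(cases, ["A", "B"])

# Configurations whose member cases consistently show the outcome.
positive = sorted(cfg for cfg, outs in rows.items() if all(outs))
print(positive)  # [(1, 0), (1, 1)], i.e. A·b + A·B, which reduces to A
```

Here the two positive configurations differ only in B, so Boolean minimization (A·b + A·B = A) suggests that condition A alone is linked to the outcome across these cases.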

Bukve, O. (2001). Lokale utviklingsnettverk: ein komparativ analyse av næringsutvikling i åtte kommunar. Sogndal: Høgskulen i Sogn og Fjordane.

Collier, R. B., & Collier, D. (1991). Shaping the political arena: Critical junctures, the labor movement, and regime dynamics in Latin America. Princeton, NJ: Princeton University Press.

Dion, D. (1998). Evidence and inference in the comparative case study. Comparative Politics, 30, 127.

George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Goggin, M. L. (1986). The "too few cases/too many variables" problem in implementation research. The Western Political Quarterly, 39, 328–347.

Landman, T. (2008). Issues and methods in comparative politics: An introduction (3rd ed.). Milton Park, Abingdon, Oxon: Routledge.

Lange, M. (2013). Comparative-historical methods. Los Angeles: Sage.

Luebbert, G. M. (1991). Liberalism, fascism, or social democracy: Social classes and the political origins of regimes in interwar Europe. New York: Oxford University Press.

Matland, R. E. (1995). Synthesizing the implementation literature: The ambiguity-conflict model of policy implementation. Journal of Public Administration Research and Theory, 5(2), 145–174.

Paige, J. (1975). Agrarian revolution: Social movements and export agriculture in the underdeveloped world. New York: Free Press.

Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. New York: Wiley.

Ragin, C. C. (1987). The comparative method. Berkeley, CA: University of California Press.

Ragin, C. C. (2000). Fuzzy-set social science. Chicago, IL: University of Chicago Press.

Ragin, C. C., & Amoroso, L. M. (2011). Constructing social research. Thousand Oaks, CA: Pine Forge Press.

Skocpol, T. (1979). States and social revolutions: A comparative analysis of France, Russia, and China. Cambridge: Cambridge University Press.

Weber, M. (1971). Makt og byråkrati: essays om politikk og klasse, samfunnsforskning og verdier [Power and bureaucracy: Essays on politics and class, social research and values]. Oslo, Norway: Gyldendal.

Wickham-Crowley, T. P. (1992). Guerrillas and revolution in Latin America: A comparative study of insurgents and regimes since 1956. Princeton, NJ: Princeton University Press.


Bukve, O. (2019). Comparative Designs. In: Designing Social Science Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-03979-0_9
