Research Methods

Data Analysis & Interpretation

  • Quantitative Data
  • Qualitative Data
  • Mixed Methods
You will need to tidy, analyse and interpret the data you collected to give meaning to it, and to answer your research question.  Your choice of methodology points the way to the most suitable method of analysing your data.


Quantitative Data

If your data is numeric, you can use a software package such as SPSS, Excel or R to do statistical analysis. You can calculate descriptive statistics such as the mean, median and mode, or test for a causal or correlational relationship between variables.
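As an illustration, the same kinds of summary statistics and a correlation can be computed in a few lines of Python (a free alternative to the packages named above); the hours-studied and exam-score figures below are invented for the sketch:

```python
import statistics

# Hypothetical paired observations: hours studied vs. exam score
hours  = [1, 2, 3, 4, 5, 6]
scores = [52, 60, 71, 79, 88, 94]

mean_score = statistics.mean(scores)      # arithmetic average
median_score = statistics.median(scores)  # middle value

# Pearson correlation coefficient, computed by hand
mx, my = statistics.mean(hours), statistics.mean(scores)
num = sum((x - mx) * (y - my) for x, y in zip(hours, scores))
den = (sum((x - mx) ** 2 for x in hours)
       * sum((y - my) ** 2 for y in scores)) ** 0.5
r = num / den   # close to +1: a strong positive relationship
```

A value of r near +1 or -1 indicates a strong linear relationship; note that correlation alone does not establish causation.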

The University of Connecticut has useful information on statistical analysis.

If your research set out to test a hypothesis your research will either support or refute it, and you will need to explain why this is the case.  You should also highlight and discuss any issues or actions that may have impacted on your results, either positively or negatively.  To fully contribute to the body of knowledge in your area be sure to discuss and interpret your results within the context of your research and the existing literature on the topic.

Qualitative Data

Data analysis for a qualitative study can be complex because of the variety of types of data that can be collected. Qualitative researchers are not attempting to measure observable characteristics; they are often attempting to capture an individual's interpretation of a phenomenon or situation in a particular context or setting. This data could be captured as text from an interview or focus group, a film, images, or documents. Analysis of this type of data is usually done by analysing each artefact against predefined criteria and then applying a coding system. The codes can be developed by the researcher before analysis, or derived from the research data itself. This can be done by hand or with qualitative analysis software such as NVivo.
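A minimal sketch of the coding step in Python, assuming the researcher has already tagged each interview segment with codes; the segments and the coding scheme below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical coded interview segments: each segment is tagged with
# one or more codes from a researcher-defined coding scheme.
coded_segments = [
    {"text": "I never feel listened to at work", "codes": ["voice", "workplace"]},
    {"text": "My manager asks for my input",     "codes": ["voice"]},
    {"text": "The office layout is isolating",   "codes": ["workplace", "isolation"]},
    {"text": "I eat lunch alone most days",      "codes": ["isolation"]},
]

# Count how often each code occurs across all segments
code_counts = Counter(code for seg in coded_segments for code in seg["codes"])

# Treat any code appearing at least twice as a candidate theme
themes = [code for code, n in code_counts.most_common() if n >= 2]
```

Tools such as NVivo automate this bookkeeping at scale, but the underlying logic is the same: tag, count, and group codes into themes.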

Interpretation of qualitative data can be presented as a narrative. The themes identified in the research can be organised and integrated with themes in the existing literature to give further weight and meaning to the research. The interpretation should also state whether the aims and objectives of the research were met. Any shortcomings of the research, or areas for further research, should also be discussed (Creswell, 2009)*.

For further information on analysing and presenting qualitative data, read this article in Nature.

Mixed Methods Data

Data analysis for mixed methods involves aspects of both quantitative and qualitative methods.  However, the sequencing of data collection and analysis is important in terms of the mixed method approach that you are taking.  For example, you could be using a convergent, sequential or transformative model which directly impacts how you use different data to inform, support or direct the course of your study.

The intention in using mixed methods is to produce a synthesis of both quantitative and qualitative information to give a detailed picture of a phenomenon in a particular context or setting. To understand how best to produce this synthesis, it is worth looking at why researchers choose this method. Bergin (2018)** states that researchers choose mixed methods because it allows them to triangulate, illuminate or discover a more diverse set of findings. Therefore, when it comes to interpretation, you will need to return to the purpose of your research and discuss and interpret your data in that context. As with quantitative and qualitative methods, interpretation of data should be discussed within the context of the existing literature.

Bergin’s book is available in the Library to borrow. Bolton LTT collection 519.5 BER

Creswell’s book is available in the Library to borrow.  Bolton LTT collection 300.72 CRE

For more information on data analysis look at Sage Research Methods database on the library website.

*Creswell, J. W. (2009) Research design: qualitative, quantitative, and mixed methods approaches. Sage, Los Angeles, p. 183.

**Bergin, T. (2018) Data analysis: quantitative, qualitative and mixed methods. Sage, Los Angeles, p. 182.

  • Last Updated: Sep 7, 2023 3:09 PM
  • URL: https://tudublin.libguides.com/research_methods
Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large body of data into smaller, meaningful fragments.

Three essential things occur during the data analysis process. The first is data organization. The second is summarization and categorization, which together achieve data reduction and help find patterns and themes in the data for easy identification and linking. The third and last is the analysis itself, which researchers perform in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that data analysis and data interpretation together represent the application of deductive and inductive logic to research data.

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience's vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when the analysis was initiated. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data has the quality of describing things once a specific value is assigned to it. For analysis, you need to organize these values and process and present them in a given context to make them useful. Data can come in different forms; here are the primary data types.

  • Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be categorized, grouped, measured, calculated, or ranked. Example: responses about age, rank, cost, length, weight, scores, etc. all come under this type of data. You can present such data in graphs or charts, or apply statistical analysis methods to it. The OMS (Outcomes Measurement Systems) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups; an item in categorical data cannot belong to more than one group. Example: a person describing their living style, marital status, smoking habit, or drinking habit in a survey provides categorical data. A chi-square test is a standard method used to analyze this data.
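The chi-square test mentioned above can be sketched in pure Python for a hypothetical 2x2 contingency table (smoking habit vs. drinking habit; the counts are invented for illustration):

```python
# Hypothetical 2x2 contingency table:
#                 drinks   does not drink
observed = [[30, 10],   # smokes
            [20, 40]]   # does not smoke

row_totals = [sum(row) for row in observed]        # [40, 60]
col_totals = [sum(col) for col in zip(*observed)]  # [50, 50]
n = sum(row_totals)                                # 100

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

# Critical value for df = 1 at the 5% significance level is 3.841
independent = chi2 < 3.841  # False here: the two habits appear associated
```

In practice a statistics library (e.g. SciPy's chi2_contingency) would also return a p-value, but the statistic itself is just this sum over cells.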


Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is an involved process; hence qualitative data is typically used for exploratory research and data analysis.

Finding patterns in the qualitative data

Although there are several ways to find patterns in textual information, a word-based method is the most widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and find repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
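A word-frequency pass like the one described can be sketched in Python; the excerpts and the stop-word list below are invented for illustration:

```python
from collections import Counter
import re

# Hypothetical interview excerpts (stand-ins for real transcripts)
responses = [
    "Food prices keep rising and hunger is everywhere",
    "We worry about food for the children",
    "Hunger is the first problem, then water",
]

# Small hand-picked stop-word list; real studies use a curated one
stopwords = {"the", "and", "is", "for", "we", "about", "then", "keep", "first"}

words = [w for text in responses
         for w in re.findall(r"[a-z]+", text.lower())
         if w not in stopwords]

top = Counter(words).most_common(3)  # 'food' and 'hunger' surface first
```

The repeated words flagged here ("food", "hunger") would then be highlighted for deeper, context-aware analysis rather than taken as findings on their own.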


Keyword-in-context is another widely used word-based technique. In this method, the researcher tries to understand a concept by analyzing the context in which participants use a particular keyword.

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’
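A minimal keyword-in-context helper in Python, assuming plain-text transcripts (the transcripts and the window size below are hypothetical choices):

```python
import re

def keyword_in_context(texts, keyword, window=3):
    """Return each occurrence of `keyword` with `window` words of
    context on either side, as (left, keyword, right) tuples."""
    hits = []
    for text in texts:
        tokens = re.findall(r"\w+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == keyword:
                left = tokens[max(0, i - window):i]
                right = tokens[i + 1:i + 1 + window]
                hits.append((" ".join(left), tok, " ".join(right)))
    return hits

transcripts = [
    "I was diagnosed with diabetes two years ago",
    "Managing diabetes means planning every meal",
]
hits = keyword_in_context(transcripts, "diabetes")
```

Reading the left and right context for each hit lets the researcher see how respondents frame the keyword, which is the point of the technique.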

The scrutiny-based technique is another highly recommended text analysis method used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, differentiating how one piece of text is similar to or different from another.

For example: to find out the importance of a resident doctor in a company, the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.


There are several techniques to analyze data in qualitative research; here are some commonly used methods:

  • Content Analysis: This is the most widely accepted and most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information in text, images, and sometimes physical items. Whether and where to use this method depends on the research questions.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and surveys. Most of the time, the stories or opinions shared by people are examined for answers to the research questions.
  • Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this particular method considers the social context within which the communication between researcher and respondent takes place. Discourse analysis also considers the respondent's lifestyle and day-to-day environment when drawing conclusions.
  • Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to data about a host of similar cases occurring in different settings. Researchers using this method may alter their explanations or produce new ones until they arrive at a conclusion.


Data analysis in quantitative research

The first stage in quantitative research and data analysis is to prepare the data for analysis, so that raw, nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire.
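The screening and completeness stages above can be sketched as simple filters in Python; the field names, responses, and the over-18 criterion are all hypothetical:

```python
# Hypothetical raw survey responses; None marks a skipped question.
required = ["age", "city", "rating"]
responses = [
    {"age": 34, "city": "Cork",   "rating": 4},
    {"age": 29, "city": None,     "rating": 5},   # incomplete: skipped 'city'
    {"age": 16, "city": "Galway", "rating": 3},   # fails screening: under 18
]

def is_complete(resp):
    """Completeness check: every required field was answered."""
    return all(resp.get(field) is not None for field in required)

def passes_screening(resp):
    """Screening check against a hypothetical research criterion."""
    return resp["age"] is not None and resp["age"] >= 18

valid = [r for r in responses if is_complete(r) and passes_screening(r)]
```

Only the first response survives both checks; fraud and procedure checks would need information beyond the response payload itself (e.g. completion timing or collection logs).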

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors: respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is the process in which researchers confirm that the provided data is free of such errors. They need to conduct the necessary checks, including outlier checks, to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Of the three phases, this is the most critical, as it is associated with grouping and assigning values to survey responses. If a survey is completed with a sample size of 1,000, the researcher might create age brackets to distinguish respondents by age. It then becomes easier to analyze small data buckets than to deal with a massive data pile.
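A data-coding step like the age-bracket grouping described can be sketched in Python; the bracket boundaries below are a hypothetical choice, not a standard:

```python
def age_bracket(age):
    """Code a raw age into a hypothetical analysis bracket."""
    if age < 25:
        return "18-24"
    elif age < 45:
        return "25-44"
    elif age < 65:
        return "45-64"
    return "65+"

ages = [19, 31, 52, 70, 44]
coded = [age_bracket(a) for a in ages]
```

Once every response carries a bracket label, the analysis operates on a handful of groups instead of a thousand distinct ages.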


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers can choose among different research and data analysis methods to derive meaningful insights. Statistical analysis plans are the most favored way to analyze numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: ‘Descriptive Statistics’, used to describe data, and ‘Inferential Statistics’, which help in comparing data and drawing conclusions.

Descriptive statistics

This method is used to describe the basic features of the various types of data in research. It presents data in a meaningful way that helps patterns in the data start making sense. However, descriptive analysis does not support conclusions beyond the data analyzed; any conclusions are still tied to the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to summarize a distribution by its central points.
  • Researchers use this method when they want to showcase the most commonly or averagely indicated response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest scores.
  • Variance and standard deviation measure how far observed scores deviate from the mean.
  • These measures identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is and how strongly that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
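The four families of descriptive measures above can be computed with Python's standard statistics module; the scores below are invented for the sketch:

```python
import statistics

scores = [55, 60, 60, 70, 75, 80, 85, 95]  # hypothetical test scores

# Measures of frequency
freq_60 = scores.count(60)                 # how often 60 occurs

# Measures of central tendency
mean = statistics.mean(scores)
median = statistics.median(scores)
mode = statistics.mode(scores)

# Measures of dispersion
spread = max(scores) - min(scores)         # range
stdev = statistics.stdev(scores)           # sample standard deviation

# Measures of position: the three quartile cut points
quartiles = statistics.quantiles(scores, n=4)
```

Each line corresponds to one of the measure families listed above, so the same eight numbers yield frequency, central tendency, dispersion, and position summaries.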

For quantitative research, descriptive analysis gives absolute numbers, but those numbers alone are rarely sufficient to demonstrate the rationale behind them. It is necessary to choose the analysis method that best suits your survey questionnaire and the story you want to tell. For example, the mean is the best way to demonstrate students' average scores in schools. It is better to rely on descriptive statistics when you intend to keep the research or outcome limited to the provided sample without generalizing it: for example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample representing that population. For example, you can ask a hundred or so audience members at a movie theater if they like the movie they are watching. Researchers then use inferential statistics on the collected sample to reason that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: This takes statistics from the sample research data and uses them to say something about the population parameter.
  • Hypothesis testing: This is about sampling research data to answer the survey research questions. For example, researchers might want to know whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.
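Estimating a population parameter from the movie-theater sample can be sketched in Python using a normal-approximation confidence interval; the count of 85 "liked it" responses is invented:

```python
import math

# Hypothetical sample: 100 viewers asked, 85 said they liked the film
n, liked = 100, 85
p_hat = liked / n                        # point estimate of the proportion

# 95% confidence interval via the normal approximation (z = 1.96)
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
```

The interval (roughly 0.78 to 0.92 here) is the inferential step: it is a statement about the whole audience population, not just the 100 sampled viewers, and it matches the "about 80-90%" reasoning in the example above.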

These are sophisticated analysis methods used to showcase the relationship between different variables rather than to describe a single variable. They are used when researchers want to go beyond absolute numbers and understand the relationships between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the data has age and gender categories presented in rows and columns; a two-dimensional cross-tabulation supports seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: To understand the strength of the relationship between two variables, researchers rarely look beyond the primary and most commonly used method, regression analysis, which is also a type of predictive analysis. In this method you have an essential factor called the dependent variable, along with one or more independent variables, and you try to find out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to be ascertained in an error-free, random manner.
  • Frequency tables: A frequency table records how often each value of a variable occurs, making it easy to spot the most and least common responses.
  • Analysis of variance (ANOVA): This statistical procedure tests the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means the research findings are significant. In many contexts, ANOVA testing and variance analysis are synonymous.
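Simple linear regression with one independent variable can be sketched in pure Python via least squares; the spend/sales figures below are invented for illustration:

```python
# Simple least-squares linear regression: fit y = a + b*x.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx              # effect of the independent variable
    a = mean_y - b * mean_x    # intercept
    return a, b

# Hypothetical data: advertising spend (independent) vs. sales (dependent)
spend = [1, 2, 3, 4, 5]
sales = [3, 5, 7, 9, 11]
a, b = fit_line(spend, sales)  # exact fit here: sales = 1 + 2*spend
```

The slope b is the quantity of interest: it states how much the dependent variable is expected to change per unit change in the independent variable, which is exactly the "impact" the regression bullet above describes.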
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Research and data analytics projects usually differ by scientific discipline; therefore, getting statistical advice at the beginning of the analysis helps design the survey questionnaire, select data collection methods, and choose samples.


  • The primary aim of research data analysis is to derive insights that are unbiased. Any mistake in collecting data, selecting an analysis method, or choosing an audience sample, or any bias brought to these steps, will lead to a biased inference.
  • No amount of sophistication in research data analysis can rectify poorly defined outcome measurements. Whether the design is at fault or the intentions are unclear, the lack of clarity might mislead readers, so avoid that practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find ways to deal with everyday challenges like outliers, missing data, data alteration, data mining, or developing graphical representations.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises that want to survive in a hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.


Chapter 12 Interpretive Research

The last chapter introduced interpretive research, or more specifically, interpretive case research. This chapter will explore other kinds of interpretive research. Recall that positivist or deductive methods, such as laboratory experiments and survey research, are those that are specifically intended for theory (or hypotheses) testing, while interpretive or inductive methods, such as action research and ethnography, are intended for theory building. Unlike a positivist method, where the researcher starts with a theory and tests theoretical postulates using empirical data, in interpretive methods, the researcher starts with data and tries to derive a theory about the phenomenon of interest from the observed data.

The term “interpretive research” is often used loosely and synonymously with “qualitative research”, although the two concepts are quite different. Interpretive research is a research paradigm (see Chapter 3) that is based on the assumption that social reality is not singular or objective, but is rather shaped by human experiences and social contexts (ontology), and is therefore best studied within its socio-historic context by reconciling the subjective interpretations of its various participants (epistemology). Because interpretive researchers view social reality as embedded within and impossible to abstract from its social setting, they “interpret” the reality through a “sense-making” process rather than a hypothesis-testing process. This is in contrast to the positivist or functionalist paradigm, which assumes that reality is relatively independent of context, can be abstracted from its context, and can be studied in a decomposable functional manner using objective techniques such as standardized measures. Whether a researcher should pursue interpretive or positivist research depends on paradigmatic considerations about the nature of the phenomenon under consideration and the best way to study it.

However, qualitative versus quantitative research refers to empirical or data-oriented considerations about the type of data to collect and how to analyze them. Qualitative research relies mostly on non-numeric data, such as interviews and observations, in contrast to quantitative research which employs numeric data such as scores and metrics. Hence, qualitative research is not amenable to statistical procedures such as regression analysis, but is coded using techniques like content analysis. Sometimes, coded qualitative data is tabulated quantitatively as frequencies of codes, but this data is not statistically analyzed. Many puritan interpretive researchers reject this coding approach as a futile effort to seek consensus or objectivity in a social phenomenon which is essentially subjective.

Although interpretive research tends to rely heavily on qualitative data, quantitative data may add more precision and clearer understanding of the phenomenon of interest than qualitative data. For example, Eisenhardt (1989), in her interpretive study of decision making in high-velocity firms (discussed in the previous chapter on case research), collected numeric data on how long it took each firm to make certain strategic decisions (which ranged from 1.5 months to 18 months), how many decision alternatives were considered for each decision, and surveyed her respondents to capture their perceptions of organizational conflict. Such numeric data helped her clearly distinguish the high-speed decision-making firms from the low-speed decision makers, without relying on respondents’ subjective perceptions, which then allowed her to examine the number of decision alternatives considered by and the extent of conflict in high-speed versus low-speed firms. Interpretive research should attempt to collect both qualitative and quantitative data pertaining to the phenomenon of interest, and so should positivist research. Joint use of qualitative and quantitative data, often called “mixed-mode designs”, may lead to unique insights and is highly prized in the scientific community.

Interpretive research has its roots in anthropology, sociology, psychology, linguistics, and semiotics, and has been available since the early 19th century, long before positivist techniques were developed. Many positivist researchers view interpretive research as erroneous and biased, given the subjective nature of the qualitative data collection and interpretation process employed in such research. However, the failure of many positivist techniques to generate interesting insights or new knowledge has resulted in a resurgence of interest in interpretive research since the 1970s, albeit with exacting methods and stringent criteria to ensure the reliability and validity of interpretive inferences.

Distinctions from Positivist Research

In addition to fundamental paradigmatic differences in ontological and epistemological assumptions discussed above, interpretive and positivist research differ in several other ways. First, interpretive research employs a theoretical sampling strategy, where study sites, respondents, or cases are selected based on theoretical considerations such as whether they fit the phenomenon being studied (e.g., sustainable practices can only be studied in organizations that have implemented sustainable practices), whether they possess certain characteristics that make them uniquely suited for the study (e.g., a study of the drivers of firm innovations should include some firms that are high innovators and some that are low innovators, in order to draw contrast between these firms), and so forth. In contrast, positivist research employs random sampling (or a variation of this technique), where cases are chosen randomly from a population, for purposes of generalizability. Hence, convenience samples and small samples are considered acceptable in interpretive research as long as they fit the nature and purpose of the study, but not in positivist research.

Second, the role of the researcher receives critical attention in interpretive research. In some methods such as ethnography, action research, and participant observation, the researcher is considered part of the social phenomenon, and her specific role and involvement in the research process must be made clear during data analysis. In other methods, such as case research, the researcher must take a “neutral” or unbiased stance during the data collection and analysis processes, and ensure that her personal biases or preconceptions do not taint the nature of subjective inferences derived from interpretive research. In positivist research, however, the researcher is considered to be external to and independent of the research context and is not presumed to bias the data collection and analytic procedures.

Third, interpretive analysis is holistic and contextual, rather than reductionist and isolationist. Interpretive analyses tend to focus on language, signs, and meanings from the perspective of the participants involved in the social phenomenon, in contrast to the statistical techniques employed heavily in positivist research. Rigor in interpretive research is viewed in terms of systematic and transparent approaches to data collection and analysis, rather than statistical benchmarks for construct validity or significance testing.

Lastly, data collection and analysis can proceed simultaneously and iteratively in interpretive research. For instance, the researcher may conduct an interview and code it before proceeding to the next interview. Simultaneous analysis helps the researcher correct potential flaws in the interview protocol or adjust it to capture the phenomenon of interest better. The researcher may even change her original research question if she realizes that her original research questions are unlikely to generate new or useful insights. This is a valuable but often understated benefit of interpretive research, and is not available in positivist research, where the research project cannot be modified or changed once the data collection has started without redoing the entire project from the start.

Benefits and Challenges of Interpretive Research

Interpretive research has several unique advantages. First, it is well-suited for exploring hidden reasons behind complex, interrelated, or multifaceted social processes, such as inter-firm relationships or inter-office politics, where quantitative evidence may be biased, inaccurate, or otherwise difficult to obtain. Second, it is often helpful for theory construction in areas with no or insufficient a priori theory. Third, it is appropriate for studying context-specific, unique, or idiosyncratic events or processes. Fourth, interpretive research can also help uncover interesting and relevant research questions and issues for follow-up research.

At the same time, interpretive research also has its own set of challenges. First, this type of research tends to be more time- and resource-intensive than positivist research in data collection and analytic efforts. Too little data can lead to false or premature assumptions, while too much data may not be effectively processed by the researcher. Second, interpretive research requires well-trained researchers who are capable of seeing and interpreting complex social phenomena from the perspectives of the embedded participants and reconciling the diverse perspectives of these participants, without injecting their personal biases or preconceptions into their inferences. Third, all participants or data sources may not be equally credible, unbiased, or knowledgeable about the phenomenon of interest, or may have undisclosed political agendas, which may lead to misleading or false impressions. Inadequate trust between participants and researcher may hinder full and honest self-representation by participants, and such trust building takes time. It is the job of the interpretive researcher to “see through the smoke” (hidden or biased agendas) and understand the true nature of the problem. Fourth, given the heavily contextualized nature of inferences drawn from interpretive research, such inferences do not lend themselves well to replicability or generalizability. Finally, interpretive research may sometimes fail to answer the research questions of interest or predict future behaviors.

Characteristics of Interpretive Research

All interpretive research must adhere to a common set of principles, as described below.

Naturalistic inquiry: Social phenomena must be studied within their natural setting. Because interpretive research assumes that social phenomena are situated within and cannot be isolated from their social context, interpretations of such phenomena must be grounded within their socio-historical context. This implies that contextual variables should be observed and considered in seeking explanations of a phenomenon of interest, even though context sensitivity may limit the generalizability of inferences.

Researcher as instrument: Researchers are often embedded within the social context that they are studying, and are considered part of the data collection instrument in that they must use their observational skills, their trust with the participants, and their ability to extract the correct information. Further, their personal insights, knowledge, and experiences of the social context are critical to accurately interpreting the phenomenon of interest. At the same time, researchers must be fully aware of their personal biases and preconceptions, and not let such biases interfere with their ability to present a fair and accurate portrayal of the phenomenon.

Interpretive analysis: Observations must be interpreted through the eyes of the participants embedded in the social context. Interpretation must occur at two levels. The first level involves viewing or experiencing the phenomenon from the subjective perspectives of the social participants. The second level is to understand the meaning of the participants’ experiences in order to provide a “thick description” or a rich narrative story of the phenomenon of interest that can communicate why participants acted the way they did.

Use of expressive language: Documenting the verbal and non-verbal language of participants and the analysis of such language are integral components of interpretive analysis. The study must ensure that the story is viewed through the eyes of a person, and not a machine, and must depict the emotions and experiences of that person, so that readers can understand and relate to that person. Use of imagery, metaphors, sarcasm, and other figures of speech is very common in interpretive analysis.

Temporal nature: Interpretive research is often not concerned with searching for specific answers, but with understanding or “making sense of” a dynamic social process as it unfolds over time. Hence, such research requires an immersive involvement of the researcher at the study site for an extended period of time in order to capture the entire evolution of the phenomenon of interest.

Hermeneutic circle: The hermeneutic circle refers to the iterative process of moving back and forth from pieces of observations (text) to the entirety of the social phenomenon (context) to reconcile their apparent discord and to construct a theory that is consistent with the diverse subjective viewpoints and experiences of the embedded participants. Such iterations between the understanding/meaning of a phenomenon and observations must continue until “theoretical saturation” is reached, whereby any additional iteration does not yield any more insight into the phenomenon of interest.

Interpretive Data Collection

Data is collected in interpretive research using a variety of techniques. The most frequently used technique is interviews (face-to-face, telephone, or focus groups). Interview types and strategies are discussed in detail in a previous chapter on survey research. A second technique is observation. Observational techniques include direct observation, where the researcher is a neutral and passive external observer and is not involved in the phenomenon of interest (as in case research), and participant observation, where the researcher is an active participant in the phenomenon and her inputs or mere presence influence the phenomenon being studied (as in action research). A third technique is documentation, where external and internal documents, such as memos, electronic mails, annual reports, financial statements, newspaper articles, and websites, may be used to cast further insight into the phenomenon of interest or to corroborate other forms of evidence.

Interpretive Research Designs

Case research. As discussed in the previous chapter, case research is an intensive longitudinal study of a phenomenon at one or more research sites for the purpose of deriving detailed, contextualized inferences and understanding the dynamic process underlying a phenomenon of interest. Case research is a unique research design in that it can be used in an interpretive manner to build theories or in a positivist manner to test theories. The previous chapter on case research discusses both techniques in depth and provides illustrative exemplars. Furthermore, the case researcher is a neutral observer (direct observation) in the social setting rather than an active participant (participant observation). As with any other interpretive approach, drawing meaningful inferences from case research depends heavily on the observational skills and integrative abilities of the researcher.

Action research. Action research is a qualitative but positivist research design aimed at theory testing rather than theory building (included in this chapter for lack of a better place). This is an interactive design that assumes that complex social phenomena are best understood by introducing changes, interventions, or “actions” into those phenomena and observing the outcomes of such actions on the phenomena of interest. In this method, the researcher is usually a consultant or an organizational member embedded into a social context (such as an organization), who initiates an action in response to a social problem, and examines how her action influences the phenomenon while also learning and generating insights about the relationship between the action and the phenomenon. Examples of actions may include organizational change programs, such as the introduction of new organizational processes, procedures, people, or technology or replacement of old ones, initiated with the goal of improving an organization’s performance or profitability in its business environment. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may bring forth the desired social change. The theory is validated by the extent to which the chosen action is successful in remedying the targeted problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from other research methods (which may not involve problem solving) and from consulting (which may not involve insight generation). Hence, action research is an excellent method for bridging research and practice.

There are several variations of the action research method. The most popular of these is participatory action research, designed by Susman and Evered (1978) [13]. This method follows an action research cycle consisting of five phases: (1) diagnosing, (2) action planning, (3) action taking, (4) evaluating, and (5) learning (see Figure 10.1). Diagnosing involves identifying and defining a problem in its social context. Action planning involves identifying and evaluating alternative solutions to the problem, and deciding on a future course of action (based on theoretical rationale). Action taking is the implementation of the planned course of action. The evaluation stage examines the extent to which the initiated action is successful in resolving the original problem, i.e., whether theorized effects are indeed realized in practice. In the learning phase, the experiences and feedback from action evaluation are used to generate insights about the problem and suggest future modifications or improvements to the action. Based on action evaluation and learning, the action may be modified or adjusted to address the problem better, and the action research cycle is repeated with the modified action sequence. It is suggested that the entire action research cycle be traversed at least twice so that learning from the first cycle can be implemented in the second cycle. The primary mode of data collection is participant observation, although other techniques such as interviews and documentary evidence may be used to corroborate the researcher’s observations.


Figure 10.1. Action research cycle.

Ethnography. The ethnographic research method, derived largely from the field of anthropology, emphasizes studying a phenomenon within the context of its culture. The researcher must be deeply immersed in the social culture over an extended period of time (usually 8 months to 2 years) and should engage, observe, and record the daily life of the studied culture and its social participants within their natural setting. The primary mode of data collection is participant observation, and data analysis involves a “sense-making” approach. In addition, the researcher must take extensive field notes, and narrate her experience in descriptive detail so that readers may experience the same culture as the researcher. In this method, the researcher has two roles: to rely on her unique knowledge and engagement to generate insights (theory), and to convince the scientific community of the trans-situational nature of the studied phenomenon.

The classic example of ethnographic research is Jane Goodall’s study of primate behaviors, where she lived with chimpanzees in their natural habitat at Gombe National Park in Tanzania, observed their behaviors, interacted with them, and shared their lives. During that process, she learnt and chronicled how chimpanzees seek food and shelter, how they socialize with each other, their communication patterns, their mating behaviors, and so forth. A more contemporary example of ethnographic research is Myra Bluebond-Langner’s (1996) [14] study of decision making in families with children suffering from life-threatening illnesses, and the physical, psychological, environmental, ethical, legal, and cultural issues that influence such decision-making. The researcher followed the experiences of approximately 80 children with incurable illnesses and their families for a period of over two years. Data collection involved participant observation and formal/informal conversations with children, their parents and relatives, and health care providers to document their lived experience.

Phenomenology. Phenomenology is a research method that emphasizes the study of conscious experiences as a way of understanding the reality around us. It is based on the ideas of the German philosopher Edmund Husserl, who in the early 20th century argued that human experience is the source of all knowledge. Phenomenology is concerned with the systematic reflection and analysis of phenomena associated with conscious experiences, such as human judgment, perceptions, and actions, with the goal of (1) appreciating and describing social reality from the diverse subjective perspectives of the participants involved, and (2) understanding the symbolic meanings (“deep structure”) underlying these subjective experiences. Phenomenological inquiry requires that researchers eliminate any prior assumptions and personal biases, empathize with the participant’s situation, and tune into the existential dimensions of that situation, so that they can fully understand the deep structures that drive the conscious thinking, feeling, and behavior of the studied participants.


Figure 10.2. The existential phenomenological research method.

Some researchers view phenomenology as a philosophy rather than as a research method. In response to this criticism, Giorgi and Giorgi (2003) [15] developed an existential phenomenological research method to guide studies in this area. This method, illustrated in Figure 10.2, can be grouped into data collection and data analysis phases. In the data collection phase, participants embedded in a social phenomenon are interviewed to capture their subjective experiences and perspectives regarding the phenomenon under investigation.

Examples of questions that may be asked include “can you describe a typical day” or “can you describe that particular incident in more detail?” These interviews are recorded and transcribed for further analysis. During data analysis, the researcher reads the transcripts to: (1) get a sense of the whole, and (2) establish “units of significance” that can faithfully represent participants’ subjective experiences. Examples of such units of significance are concepts such as “felt space” and “felt time,” which are then used to document participants’ psychological experiences. For instance, did participants feel safe, free, trapped, or joyous when experiencing a phenomenon (“felt-space”)? Did they feel that their experience was pressured, slow, or discontinuous (“felt-time”)? Phenomenological analysis should take into account the participants’ temporal landscape (i.e., their sense of past, present, and future), and the researcher must transpose herself in an imaginary sense into the participant’s situation (i.e., temporarily live the participant’s life). The participants’ lived experience is described in the form of a narrative or using emergent themes. The analysis then delves into these themes to identify multiple layers of meaning while retaining the fragility and ambiguity of subjects’ lived experiences.

Rigor in Interpretive Research

While positivist research employs a “reductionist” approach by simplifying social reality into parsimonious theories and laws, interpretive research attempts to interpret social reality through the subjective viewpoints of the embedded participants within the context where the reality is situated. These interpretations are heavily contextualized, and are naturally less generalizable to other contexts. However, because interpretive analysis is subjective and sensitive to the experiences and insight of the embedded researcher, it is often considered less rigorous by many positivist (functionalist) researchers. Because interpretive research is based on a different set of ontological and epistemological assumptions about social phenomena than positivist research, the positivist notions of rigor, such as reliability, internal validity, and generalizability, do not apply in a similar manner. However, Lincoln and Guba (1985) [16] provide an alternative set of criteria that can be used to judge the rigor of interpretive research.

Dependability. Interpretive research can be viewed as dependable or authentic if two researchers assessing the same phenomenon using the same set of evidence independently arrive at the same conclusions, or if the same researcher observing the same or a similar phenomenon at different times arrives at similar conclusions. This concept is similar to that of reliability in positivist research, with agreement between two independent researchers being similar to the notion of inter-rater reliability, and agreement between two observations of the same phenomenon by the same researcher akin to test-retest reliability. To ensure dependability, interpretive researchers must provide adequate details about their phenomenon of interest and the social context in which it is embedded so as to allow readers to independently authenticate their interpretive inferences.
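The analogy to inter-rater reliability can be made concrete. When two analysts independently assign one code to each interview excerpt, their agreement beyond chance is commonly summarized with Cohen's kappa. The sketch below is a minimal illustration (the function name and inputs are my own, not from the text):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters assigning one code per item."""
    n = len(codes_a)
    assert n == len(codes_b) and n > 0
    # Observed proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa near 1 suggests the two analysts apply the coding scheme consistently; a value near 0 indicates agreement no better than chance. (The same computation applied to one researcher's codes at two points in time mirrors the test-retest analogy.)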

Credibility. Interpretive research can be considered credible if readers find its inferences to be believable. This concept is akin to that of internal validity in functionalistic research. The credibility of interpretive research can be improved by providing evidence of the researcher’s extended engagement in the field, by demonstrating data triangulation across subjects or data collection techniques, and by maintaining meticulous data management and analytic procedures, such as verbatim transcription of interviews, accurate records of contacts and interviews, and clear notes on theoretical and methodological decisions, that can allow an independent audit of data collection and analysis if needed.

Confirmability. Confirmability refers to the extent to which the findings reported in interpretive research can be independently confirmed by others (typically, participants). This is similar to the notion of objectivity in functionalistic research. Since interpretive research rejects the notion of an objective reality, confirmability is demonstrated in terms of “inter-subjectivity”, i.e., if the study’s participants agree with the inferences derived by the researcher. For instance, if a study’s participants generally agree with the inferences drawn by a researcher about a phenomenon of interest (based on a review of the research paper or report), then the findings can be viewed as confirmable.

Transferability. Transferability in interpretive research refers to the extent to which the findings can be generalized to other settings. This idea is similar to that of external validity in functionalistic research. The researcher must provide rich, detailed descriptions of the research context (“thick description”) and thoroughly describe the structures, assumptions, and processes revealed from the data so that readers can independently assess whether, and to what extent, the reported findings are transferable to other settings.

[13] Susman, G.I. and Evered, R.D. (1978). “An Assessment of the Scientific Merits of Action Research,” Administrative Science Quarterly, (23), 582-603.

[14] Bluebond-Langner, M. (1996). In the Shadow of Illness: Parents and Siblings of the Chronically Ill Child. Princeton, NJ: Princeton University Press.

[15] Giorgi, A. and Giorgi, B. (2003). “Phenomenology,” in J. A. Smith (ed.), Qualitative Psychology: A Practical Guide to Research Methods. London: Sage Publications.

[16] Lincoln, Y. S., and Guba, E. G. (1985). Naturalistic Inquiry . Beverly Hills, CA: Sage Publications.

  • Social Science Research: Principles, Methods, and Practices. Authored by : Anol Bhattacherjee. Provided by : University of South Florida. Located at : http://scholarcommons.usf.edu/oa_textbooks/3/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike
  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461),
  • Daeria O. Lawson,
  • Livia Puljak,
  • David B. Allison &
  • Lehana Thabane

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1.

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
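A year-by-year count like the one behind Fig. 1 can be assembled by issuing one query per publication year. The sketch below only builds the query strings; `[tiab]` (title/abstract) and `[dp]` (date of publication) are standard PubMed field tags, while executing the queries (for example via NCBI's E-utilities) is left out, and the helper's name is my own.

```python
def yearly_queries(terms, start_year, end_year):
    """Build one PubMed query string per publication year for the given phrases."""
    # Quote each phrase and restrict it to title/abstract.
    joined = " OR ".join(f'"{t}"[tiab]' for t in terms)
    # One query per year, restricted by date of publication.
    return {y: f"({joined}) AND {y}[dp]" for y in range(start_year, end_year + 1)}
```

Plotting the resulting hit counts per year would reproduce the kind of trend line shown in the figure.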

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high-impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
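The sampling choices above can be sketched in a few lines of code. The following is an illustrative sketch using a hypothetical sampling frame of article records; the field names, group labels and sample sizes are invented for the example.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: records retrieved by a database search,
# labelled by the grouping variable of interest (here, review type).
frame = [{"id": i, "type": "Cochrane" if i % 10 == 0 else "non-Cochrane"}
         for i in range(1000)]

# Simple random sample: the smaller group (Cochrane, 10% of the frame)
# may end up underrepresented.
simple = random.sample(frame, 60)

# Stratified sample: draw an equal number of records from each group
# so that between-group comparisons are based on equal-sized groups.
def stratified_sample(records, key, n_per_stratum):
    strata = {}
    for record in records:
        strata.setdefault(record[key], []).append(record)
    return {group: random.sample(members, n_per_stratum)
            for group, members in strata.items()}

stratified = stratified_sample(frame, "type", 30)
```

With 30 records per stratum both groups can be compared directly, whereas in the simple random sample only about 6 of the 60 records would be Cochrane reviews on average.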

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can search using the journal names in your database search (or access the journal archives directly from each journal’s website). Even though one could search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters to narrow the search down to a certain period or to study types of interest. Furthermore, individual journals’ websites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal can deposit it in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).

Q: How should I appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
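A confidence-interval-based sample size justification of the kind El Dib et al. used can be sketched with the standard normal-approximation formula for estimating a proportion. This is a generic sketch, not the exact calculation from that study.

```python
import math

def n_for_proportion(p_expected, margin, z=1.96):
    """Number of articles needed to estimate a proportion to within
    +/- margin, using the normal approximation (z = 1.96 for a 95% CI)."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

# E.g. to estimate the proportion of trials reporting a given item to
# within +/- 5 percentage points, assuming p = 0.5 (most conservative):
print(n_for_proportion(0.5, 0.05))  # 385 articles
```

A less conservative assumption (e.g. an expected proportion of 0.2) reduces the required number of articles, which is why the expected value should be justified in the protocol.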

Q: What should I call my study?

A: Other terms that have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
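A quick way to see why clustering matters is the standard design-effect approximation, 1 + (m − 1) × ICC. This is a back-of-the-envelope sketch with invented numbers, not the GEE modelling used by Kosa et al.

```python
# Design effect: variance inflation when articles are sampled within
# journals, with m articles per journal and intra-class correlation icc.
def design_effect(m, icc):
    return 1 + (m - 1) * icc

# Effective sample size after accounting for clustering.
def effective_sample_size(n_total, m, icc):
    return n_total / design_effect(m, icc)

# E.g. 200 articles drawn as 20 per journal with a modest ICC of 0.1:
# the design effect is ~2.9, so the 200 clustered articles carry roughly
# the information of ~69 independent articles.
print(round(effective_sample_size(200, 20, 0.1)))  # 69
```

Analyzing the 200 articles as if they were independent would therefore understate the standard errors, which is the mechanism behind the unduly narrow confidence intervals mentioned above.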

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful for determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value in methodological studies is not obvious. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have found no such difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at the reporting quality of long-term weight loss trials and found that industry-funded studies were better reported [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards there are higher. However, restricting the sample to high impact journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, methodological standards vary.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though much methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to extrapolate to all journals after analyzing only high impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for whether journals endorse reporting guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators, who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In a methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Other methodological studies use statistical adjustment. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
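Stratification as a remedy for confounding can be illustrated with a toy dataset. In the invented counts below, journal endorsement of a guideline is associated with both funding and complete reporting, so the crude funded-versus-unfunded comparison is distorted while the stratified comparison is not; all numbers are fabricated for illustration.

```python
# Each hypothetical record: (funded, journal_endorses_guideline, reported_completely)
records = (
    [(1, 1, 1)] * 60 + [(1, 1, 0)] * 20 +  # funded, endorsing journal
    [(1, 0, 1)] * 10 + [(1, 0, 0)] * 10 +  # funded, non-endorsing journal
    [(0, 1, 1)] * 15 + [(0, 1, 0)] * 5 +   # unfunded, endorsing journal
    [(0, 0, 1)] * 30 + [(0, 0, 0)] * 30    # unfunded, non-endorsing journal
)

def prop_complete(rows):
    return sum(r[2] for r in rows) / len(rows)

funded = [r for r in records if r[0] == 1]
unfunded = [r for r in records if r[0] == 0]

# Crude comparison suggests funded studies report better (0.70 vs 0.56)...
print(prop_complete(funded), prop_complete(unfunded))

# ...but within each endorsement stratum the proportions are identical,
# so the crude difference is entirely due to confounding by endorsement.
for endorses in (1, 0):
    f = prop_complete([r for r in funded if r[1] == endorses])
    u = prop_complete([r for r in unfunded if r[1] == endorses])
    print(endorses, f, u)
```

A regression adjustment (as in Zhang et al.) generalizes this idea to many confounders at once; stratification is simply the most transparent special case.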

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriately justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction is important is a randomized trial with no blinding. Such a study (depending on the nature of the intervention) would be at risk of performance bias; however, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the types of methodological studies previously mentioned, there may exist other types not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical, wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease” [ 89 ]. In the case of methodological studies, all these investigations are possible. For example, Kosa et al. investigated the association between agreement in the primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
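The kind of null-hypothesis test described above (equal proportions of positive results in two groups of reviews) can be sketched as a two-proportion z-test; the counts below are invented for illustration and are not the data from Tricco et al.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 60/100 non-Cochrane vs 40/100 Cochrane reviews
# reporting positive findings; |z| > 1.96 rejects H0 at the 5% level.
z = two_proportion_z(60, 100, 40, 100)
print(round(z, 2))  # 2.83
```

In practice such an analysis would also account for clustering of reviews within journals or review groups, as discussed earlier.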

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Figure 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.


Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.


Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L: A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals Can J Anaesthesia 2018, 65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

CAS   PubMed   Google Scholar  

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Google Scholar  

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

CAS   Google Scholar  

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

CAS   PubMed   PubMed Central   Google Scholar  

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA: Assessing the Quality of Reporting of Harms in Randomized Controlled Trials Published in High Impact Cardiovascular Journals. Eur Heart J Qual Care Clin Outcomes 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Download references

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada


Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received : 27 May 2020

Accepted : 27 August 2020

Published : 07 September 2020

DOI : https://doi.org/10.1186/s12874-020-01107-7


Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


A Guide To The Methods, Benefits & Problems of The Interpretation of Data


Table of Contents

1) What Is Data Interpretation?

2) How To Interpret Data?

3) Why Is Data Interpretation Important?

4) Data Interpretation Skills

5) Data Analysis & Interpretation Problems

6) Data Interpretation Techniques & Methods

7) The Use of Dashboards For Data Interpretation

8) Business Data Interpretation Examples

Data analysis and interpretation have now taken center stage with the advent of the digital age… and the sheer amount of data can be frightening. In fact, a Digital Universe study found that the total data supply in 2012 was 2.8 trillion gigabytes! Based on that amount of data alone, it is clear the calling card of any successful enterprise in today’s global world will be the ability to analyze complex data, produce actionable insights, and adapt to new market needs… all at the speed of thought.

Business dashboards are the digital age tools for big data. Capable of displaying key performance indicators (KPIs) for both quantitative and qualitative data analyses, they are ideal for making the fast-paced and data-driven market decisions that push today’s industry leaders to sustainable success. Through the art of streamlined visual communication, data dashboards permit businesses to engage in real-time and informed decision-making and are key instruments in data interpretation. First of all, let’s find a definition to understand what lies behind this practice.

What Is Data Interpretation?

Data interpretation refers to the process of using diverse analytical methods to review data and arrive at relevant conclusions. The interpretation of data helps researchers to categorize, manipulate, and summarize the information in order to answer critical questions.

The importance of data interpretation is evident, and this is why it needs to be done properly. Data is very likely to arrive from multiple sources and has a tendency to enter the analysis process with haphazard ordering. Data analysis tends to be extremely subjective. That is to say, the nature and goal of interpretation will vary from business to business, likely correlating to the type of data being analyzed. While there are several types of processes that are implemented based on the nature of individual data, the two broadest and most common categories are “quantitative and qualitative analysis.”

Yet, before any serious data interpretation inquiry can begin, a sound decision must be made regarding measurement scales: even the most polished visual presentation of findings is meaningless if the underlying scale was chosen poorly, and that choice will have a long-term impact on the value you get from data interpretation. The varying scales include:

  • Nominal Scale: non-numeric categories that cannot be ranked or compared quantitatively. Categories are mutually exclusive and exhaustive (e.g., blood type or country of residence).
  • Ordinal Scale: mutually exclusive and exhaustive categories with a logical order. Quality ratings and agreement ratings are examples of ordinal scales (i.e., good, very good, fair, etc., OR agree, strongly agree, disagree, etc.).
  • Interval: a measurement scale where data is grouped into categories with orderly and equal distances between the categories. The zero point is arbitrary (e.g., temperature in Celsius), so ratios between values are not meaningful.
  • Ratio: contains the features of all three scales and, in addition, has a true zero point (e.g., weight or revenue), so statements such as “twice as much” are meaningful.

For a more in-depth review of scales of measurement, read our article on data analysis questions . Once measurement scales have been selected, it is time to select which of the two broad interpretation processes will best suit your data needs. Let’s take a closer look at those specific methods and possible data interpretation problems.
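To make the distinction concrete, here is a minimal Python sketch (all values invented) showing which summary statistic is meaningful at each level of measurement:

```python
from statistics import mean, median, mode

# Invented survey data at different measurement scales.
blood_types = ["A", "O", "O", "B", "AB", "O"]            # nominal
ratings = ["good", "fair", "very good", "good", "good"]  # ordinal
temps_c = [21.5, 23.0, 19.8, 22.1]                       # interval (arbitrary zero)
weights_kg = [70.2, 81.5, 64.9, 77.0]                    # ratio (true zero)

# Nominal data only supports counting, so the mode is the right summary.
print(mode(blood_types))

# Ordinal data adds ranking: map categories to ranks, then take the median.
rank = {"poor": 0, "fair": 1, "good": 2, "very good": 3}
print(median(rank[r] for r in ratings))

# Interval and ratio data both support the mean...
print(round(mean(temps_c), 2))

# ...but only ratio data supports statements like "1.16 times heavier",
# because its zero point is not arbitrary.
print(round(weights_kg[1] / weights_kg[0], 2))
```

Computing a mean over the raw ordinal labels would be meaningless; the numeric ranks here are an explicit modeling choice, which is exactly the kind of decision the measurement scale determines.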

How To Interpret Data? Top Methods & Techniques


When interpreting data, an analyst must try to discern the differences between correlation, causation, and coincidence, as well as many other biases – but they also have to consider all the factors involved that may have led to a result. There are various data interpretation types and methods one can use to achieve this.

The interpretation of data is designed to help people make sense of numerical data that has been collected, analyzed, and presented. Having a baseline method for interpreting data will provide your analyst teams with a structure and consistent foundation. Indeed, if several departments have different approaches to interpreting the same data while sharing the same goals, some mismatched objectives can result. Disparate methods will lead to duplicated efforts, inconsistent solutions, wasted energy, and inevitably – time and money. In this part, we will look at the two main methods of interpretation of data: qualitative and quantitative analysis.

Qualitative Data Interpretation

Qualitative data analysis can be summed up in one word – categorical. With this type of analysis, data is not described through numerical values or patterns but through the use of descriptive context (i.e., text). Typically, narrative data is gathered by employing a wide variety of person-to-person techniques. These techniques include:

  • Observations: detailing behavioral patterns that occur within an observation group. These patterns could be the amount of time spent in an activity, the type of activity, and the method of communication employed.
  • Focus groups: Group people and ask them relevant questions to generate a collaborative discussion about a research topic.
  • Secondary Research: much like how patterns of behavior can be observed, various types of documentation resources can be coded and divided based on the type of material they contain.
  • Interviews: one of the best collection methods for narrative data. Inquiry responses can be grouped by theme, topic, or category. The interview approach allows for highly focused data segmentation.

A key difference between qualitative and quantitative analysis is clearly noticeable in the interpretation stage. The first one is widely open to interpretation and must be “coded” so as to facilitate the grouping and labeling of data into identifiable themes. As person-to-person data collection techniques can often result in disputes pertaining to proper analysis, qualitative data analysis is often summarized through three basic principles: notice things, collect things, and think about things.

After qualitative data has been collected through transcripts, questionnaires, audio and video recordings, or the researcher’s notes, it is time to interpret it. For that purpose, there are some common methods used by researchers and analysts.

  • Content analysis : As its name suggests, this is a research method used to identify frequencies and recurring words, subjects, and concepts in image, video, or audio content. It transforms qualitative information into quantitative data to help discover trends and conclusions that will later support important research or business decisions. This method is often used by marketers to understand brand sentiment from the mouths of customers themselves. Through that, they can extract valuable information to improve their products and services. It is recommended to use content analytics tools for this method as manually performing it is very time-consuming and can lead to human error or subjectivity issues. Having a clear goal in mind before diving into it is another great practice for avoiding getting lost in the fog.  
  • Thematic analysis: This method focuses on analyzing qualitative data, such as interview transcripts, survey questions, and others, to identify common patterns and separate the data into different groups according to the similarities or themes found. For example, imagine you want to analyze what customers think about your restaurant. For this purpose, you do a thematic analysis on 1000 reviews and find common themes such as “fresh food”, “cold food”, “small portions”, “friendly staff”, etc. With those recurring themes in hand, you can extract conclusions about what could be improved or enhanced based on your customers’ experiences. Since this technique is more exploratory, be open to changing your research questions or goals as you go. 
  • Narrative analysis: A bit more specific and complicated than the two previous methods, it is used to analyze stories and discover their meaning. These stories can be extracted from testimonials, case studies, and interviews, as these formats give people more space to tell their experiences. Given that collecting this kind of data is harder and more time-consuming, sample sizes for narrative analysis are usually smaller, which makes it harder to reproduce its findings. However, it is still a valuable technique for understanding customers' preferences and mindsets.  
  • Discourse analysis : This method is used to draw the meaning of any type of visual, written, or symbolic language in relation to a social, political, cultural, or historical context. It is used to understand how context can affect how language is carried out and understood. For example, if you are doing research on power dynamics, using discourse analysis to analyze a conversation between a janitor and a CEO and draw conclusions about their responses based on the context and your research questions is a great use case for this technique. That said, like all methods in this section, discourse analytics is time-consuming as the data needs to be analyzed until no new insights emerge.  
  • Grounded theory analysis : The grounded theory approach aims to create or discover a new theory by carefully testing and evaluating the data available. Unlike all other qualitative approaches on this list, grounded theory helps extract conclusions and hypotheses from the data instead of going into the analysis with a defined hypothesis. This method is very popular amongst researchers, analysts, and marketers as the results are completely data-backed, providing a factual explanation of any scenario. It is often used when researching a completely new topic, or one about which little is known, as it gives you the space to start from the ground up. 
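As a toy illustration of coding in practice, the sketch below runs a keyword-based content analysis over the restaurant-review example from above. The reviews and the codebook are invented for illustration; in real qualitative work the codes would be developed from, and checked against, the data itself.

```python
from collections import Counter
import re

# Invented customer reviews (the restaurant example).
reviews = [
    "Fresh food and friendly staff, but small portions.",
    "The food arrived cold. Staff were friendly though.",
    "Small portions for the price; food was fresh.",
]

# A hand-built codebook mapping trigger words to themes.
codebook = {
    "fresh": "fresh food",
    "cold": "cold food",
    "small": "small portions",
    "friendly": "friendly staff",
}

# Count how often each coded theme recurs across the corpus.
theme_counts = Counter()
for review in reviews:
    for word in re.findall(r"[a-z]+", review.lower()):
        if word in codebook:
            theme_counts[codebook[word]] += 1

print(theme_counts.most_common())
```

Dedicated tools such as NVivo or content analytics software do this at scale and with far more nuance (negation, context, inter-rater checks), but the core move is the same: turn descriptive text into countable, comparable themes.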

Quantitative Data Interpretation

If quantitative data interpretation could be summed up in one word (and it really can’t), that word would be “numerical.” There are few certainties when it comes to data analysis, but you can be sure that if the research you are engaging in has no numbers involved, it is not quantitative research, as this analysis refers to a set of processes by which numerical data is analyzed. More often than not, it involves the use of statistical measures such as the standard deviation, mean, and median. Let’s quickly review the most common statistical terms:

  • Mean: A mean represents a numerical average for a set of responses. When dealing with a data set (or multiple data sets), a mean will represent the central value of a specific set of numbers. It is the sum of the values divided by the number of values within the data set. Other terms that can be used to describe the concept are arithmetic mean, average, and mathematical expectation.
  • Standard deviation: This is another statistical term commonly used in quantitative analysis. Standard deviation reveals the distribution of the responses around the mean. It describes the degree of consistency within the responses; together with the mean, it provides insight into data sets.
  • Frequency distribution: This is a measurement gauging the rate at which a response appears within a data set. In a survey, for example, frequency distribution can determine the number of times a specific ordinal scale response appears (i.e., agree, strongly agree, disagree, etc.). Frequency distribution is particularly useful for determining the degree of consensus among data points.
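The three terms above can be computed directly with the Python standard library. A minimal sketch, using a hypothetical set of 1-to-5 survey satisfaction scores:

```python
import statistics
from collections import Counter

# Hypothetical survey: satisfaction scores on a 1-5 ordinal scale.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

mean = statistics.mean(responses)      # central value: sum of values / number of values
stdev = statistics.stdev(responses)    # sample standard deviation: spread around the mean
frequency = Counter(responses)         # how often each response appears

print(f"mean = {mean}")
print(f"standard deviation = {stdev:.2f}")
print(f"frequency distribution = {dict(sorted(frequency.items()))}")
```

Together these numbers tell a small story: the average response (3.9), how consistent respondents were (about 0.99), and which answers dominated (4 appears most often).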

Typically, quantitative data is interpreted by visually presenting correlation tests between two or more variables of significance. Different processes can be used together or separately, and comparisons can be made to ultimately arrive at a conclusion. Other signature interpretation processes of quantitative data include:

  • Regression analysis: Essentially, it uses historical data to understand the relationship between a dependent variable and one or more independent variables. Knowing which variables are related and how they developed in the past allows you to anticipate possible outcomes and make better decisions going forward. For example, if you want to predict your sales for next month, you can use regression to understand what factors will affect them, such as products on sale and the launch of a new campaign, among many others. 
  • Cohort analysis: This method identifies groups of users who share common characteristics during a particular time period. In a business scenario, cohort analysis is commonly used to understand customer behaviors. For example, a cohort could be all users who have signed up for a free trial on a given day. An analysis would be carried out to see how these users behave, what actions they carry out, and how their behavior differs from other user groups.
  • Predictive analysis: As its name suggests, the predictive method aims to predict future developments by analyzing historical and current data. Powered by technologies such as artificial intelligence and machine learning, predictive analytics practices enable businesses to identify patterns or potential issues and plan informed strategies in advance.
  • Prescriptive analysis: Also powered by predictions, the prescriptive method uses techniques such as graph analysis, complex event processing, and neural networks, among others, to try to unravel the effect that future decisions will have in order to adjust them before they are actually made. This helps businesses to develop responsive, practical business strategies.
  • Conjoint analysis: Typically applied to survey analysis, the conjoint approach is used to analyze how individuals value different attributes of a product or service. This helps researchers and businesses to define pricing, product features, packaging, and many other attributes. A common use is menu-based conjoint analysis, in which individuals are given a “menu” of options from which they can build their ideal concept or product. Through this, analysts can understand which attributes they would pick above others and drive conclusions.
  • Cluster analysis: Last but not least, cluster analysis is a method used to group objects into categories. Since there is no target variable when using cluster analysis, it is a useful method to find hidden trends and patterns in the data. In a business context, clustering is used for audience segmentation to create targeted experiences. In market research, it is often used to identify age groups, geographical information, and earnings, among others.
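As a small illustration of the regression idea from the list above, here is an ordinary least-squares fit for one independent variable in plain Python. The ad-spend and sales figures are invented for the example:

```python
import statistics

# Hypothetical monthly data: ad spend (independent) vs. sales (dependent).
ad_spend = [10, 20, 30, 40, 50]
sales = [120, 190, 310, 390, 480]

# Ordinary least squares for one independent variable:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
mean_x = statistics.mean(ad_spend)
mean_y = statistics.mean(sales)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ad_spend, sales))
var_x = sum((x - mean_x) ** 2 for x in ad_spend)
slope = cov_xy / var_x
intercept = mean_y - slope * mean_x

# Use the fitted line to anticipate next month's sales at a planned spend of 60.
forecast = intercept + slope * 60
print(f"sales ≈ {intercept:.1f} + {slope:.2f} * ad_spend")
print(f"forecast at spend 60: {forecast:.0f}")
```

This is exactly the "use historical data to anticipate possible outcomes" idea: the fitted line summarizes past behavior and extrapolates it to a spend level not yet tried.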

Now that we have seen how to interpret data, let's move on and ask ourselves some questions: What are some of the benefits of data interpretation? Why do all industries engage in data research and analysis? These are basic questions, but they often don’t receive adequate attention.


Why Data Interpretation Is Important


The purpose of collection and interpretation is to acquire useful and usable information and to make the most informed decisions possible. From businesses to newlyweds researching their first home, data collection and interpretation provide limitless benefits for a wide range of institutions and individuals.

Data analysis and interpretation, regardless of the method and qualitative/quantitative status, may include the following characteristics:

  • Data identification and explanation
  • Comparing and contrasting data
  • Identification of data outliers
  • Future predictions

Data analysis and interpretation, in the end, help improve processes and identify problems. It is difficult to grow and make dependable improvements without, at the very least, minimal data collection and interpretation. What is the keyword? Dependable. Vague ideas regarding performance enhancement exist within all institutions and industries. Yet, without proper research and analysis, an idea is likely to remain in a stagnant state forever (i.e., minimal growth). So… what are a few of the business benefits of digital age data analysis and interpretation? Let’s take a look!

1) Informed decision-making: A decision is only as good as the knowledge that formed it. Informed data decision-making can potentially set industry leaders apart from the rest of the market pack. Studies have shown that companies in the top third of their industries are, on average, 5% more productive and 6% more profitable when implementing informed data decision-making processes. Most decisive actions will arise only after a problem has been identified or a goal defined. Data analysis should include identification, thesis development, and data collection, followed by data communication.

If institutions only follow that simple order, one that we should all be familiar with from grade school science fairs, then they will be able to solve issues as they emerge in real-time. Informed decision-making has a tendency to be cyclical. This means there is really no end, and eventually, new questions and conditions arise within the process that need to be studied further. The monitoring of data results will inevitably return the process to the start with new data and insights.

3) Anticipating needs with trend identification: data insights provide knowledge, and knowledge is power. The insights obtained from market and consumer data analyses have the ability to set trends for peers within similar market segments. A perfect example of how data analytics can impact trend prediction is evidenced in the music identification application Shazam. The application allows users to capture an audio clip of a song they like but can’t seem to identify. Users make 15 million song identifications a day. With this data, Shazam has been instrumental in predicting future popular artists.

When industry trends are identified, they can then serve a greater industry purpose. For example, the insights from Shazam’s monitoring benefit not only Shazam in understanding how to meet consumer needs but also grant music executives and record label companies an insight into the pop-culture scene of the day. Data gathering and interpretation processes can allow for industry-wide climate prediction and result in greater revenue streams across the market. For this reason, all institutions should follow the basic data cycle of collection, interpretation, decision-making, and monitoring.

3) Cost efficiency: Proper implementation of analytics processes can provide businesses with profound cost advantages within their industries. A recent data study performed by Deloitte vividly demonstrates this in finding that data analysis ROI is driven by efficient cost reductions. Often, this benefit is overlooked because making money is typically viewed as “sexier” than saving money. Yet, sound data analyses have the ability to alert management to cost-reduction opportunities without any significant exertion of effort on the part of human capital.

A great example of the potential for cost efficiency through data analysis is Intel. Prior to 2012, Intel would conduct over 19,000 manufacturing function tests on their chips before they could be deemed acceptable for release. To cut costs and reduce test time, Intel implemented predictive data analyses. By using historical and current data, Intel now avoids testing each chip 19,000 times by focusing on specific and individual chip tests. After its implementation in 2012, Intel saved over $3 million in manufacturing costs. Cost reduction may not be as “sexy” as data profit, but as Intel proves, it is a benefit of data analysis that should not be neglected.

4) Clear foresight: companies that collect and analyze their data gain better knowledge about themselves, their processes, and their performance. They can identify performance challenges when they arise and take action to overcome them. Data interpretation through visual representations lets them process their findings faster and make better-informed decisions on the company's future.

Key Data Interpretation Skills You Should Have

Just like any other process, data interpretation and analysis require researchers or analysts to have some key skills to be able to perform successfully. It is not enough just to apply some methods and tools to the data; the person who is managing it needs to be objective and have a data-driven mind, among other skills. 

It is a common misconception to think that the required skills are mostly number-related. While data interpretation is heavily analytically driven, it also requires communication and narrative skills, as the results of the analysis need to be presented in a way that is easy to understand for all types of audiences. 

Luckily, with the rise of self-service tools and AI-driven technologies, data interpretation is no longer reserved for analysts alone. However, the topic still remains a big challenge for businesses that make big investments in data and the tools to support it, as the interpretation skills required are still lacking. There is little point in putting massive amounts of money into extracting information if you are not able to interpret what that information is telling you. For that reason, below we list the top five data interpretation skills your employees or researchers should have to extract the maximum potential from the data. 

  • Data Literacy: The first and most important skill to have is data literacy. This means having the ability to understand, work, and communicate with data. It involves knowing the types of data sources, methods, and ethical implications of using them. In research, this skill is often a given. However, in a business context, there may be many employees who are not comfortable with data. The issue is that the data team cannot be solely responsible for the interpretation of data, as that is not sustainable in the long run. Experts advise business leaders to carefully assess the literacy level across their workforce and implement training sessions to ensure everyone can interpret their data. 
  • Data Tools: The data interpretation and analysis process involves using various tools to collect, clean, store, and analyze the data. The complexity of the tools varies depending on the type of data and the analysis goals, ranging from simple ones like Excel to more complex ones such as SQL databases or programming languages like R and Python. It also involves visual analytics tools that bring the data to life through graphs and charts. Managing these tools is a fundamental skill, as they make the process faster and more efficient. As mentioned before, most modern solutions are now self-service, enabling less technical users to work with them without problems.
  • Critical Thinking: Another very important skill is to have critical thinking. Data hides a range of conclusions, trends, and patterns that must be discovered. It is not just about comparing numbers; it is about putting a story together based on multiple factors that will lead to a conclusion. Therefore, having the ability to look further from what is right in front of you is an invaluable skill for data interpretation. 
  • Data Ethics: In the information age, being aware of the legal and ethical responsibilities that come with the use of data is of utmost importance. In short, data ethics involves respecting the privacy and confidentiality of data subjects, as well as ensuring accuracy and transparency in data usage. It requires the analyzer or researcher to be completely objective in their interpretation to avoid any biases or discrimination. Many countries have already implemented regulations regarding the use of data, such as the GDPR, and professional guidelines such as the ACM Code of Ethics also apply. Awareness of these regulations and responsibilities is a fundamental skill for anyone working in data interpretation. 
  • Domain Knowledge: Another skill that is considered important when interpreting data is to have domain knowledge. As mentioned before, data hides valuable insights that need to be uncovered. To do so, the analyst needs to know about the industry or domain from which the information is coming and use that knowledge to explore it and put it into a broader context. This is especially valuable in a business context, where most departments are now analyzing data independently with the help of a live dashboard instead of relying on the IT department, which can often overlook some aspects due to a lack of expertise in the topic. 

Common Data Analysis And Interpretation Problems


The oft-repeated mantra of those who fear data advancements in the digital age is “big data equals big trouble.” While that statement is not accurate, it is safe to say that certain data interpretation problems or “pitfalls” exist and can occur when analyzing data, especially at the speed of thought. Let’s identify some of the most common data misinterpretation risks and shed some light on how they can be avoided:

1) Correlation mistaken for causation: our first misinterpretation of data refers to the tendency of data analysts to mix the cause of a phenomenon with correlation. It is the assumption that because two actions occurred together, one caused the other. This is inaccurate, as actions can occur together, absent a cause-and-effect relationship.

  • Digital age example: assuming that increased revenue results from increased social media followers… there might be a definitive correlation between the two, especially with today’s multi-channel purchasing experiences. But that does not mean an increase in followers is the direct cause of increased revenue. There could be both a common cause and an indirect causality.
  • Remedy: attempt to isolate the variable you believe to be causing the phenomenon and test whether the effect persists without it.
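A quick illustration of the pitfall: in the invented data below, temperature drives both ice cream sales and pool visits, so the two series correlate perfectly even though neither causes the other:

```python
import math

# Illustrative, made-up data: monthly temperature drives both series,
# producing a strong correlation between them with no causal link.
temperature = [5, 10, 15, 20, 25, 30]
ice_cream_sales = [2 * t + 3 for t in temperature]   # driven by temperature
pool_visits = [5 * t + 10 for t in temperature]      # also driven by temperature

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(ice_cream_sales, pool_visits)
print(f"correlation = {r:.2f}")  # perfectly correlated, yet neither causes the other
```

The correlation here comes entirely from the common cause (temperature), which is exactly why controlling for the suspected causal variable matters.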

2) Confirmation bias: our second problem is data interpretation bias. It occurs when you have a theory or hypothesis in mind but are intent on only discovering data patterns that support it while rejecting those that do not.

  • Digital age example: your boss asks you to analyze the success of a recent multi-platform social media marketing campaign. While analyzing the potential data variables from the campaign (one that you ran and believe performed well), you see that the share rate for Facebook posts was great, while the share rate for Twitter Tweets was not. Using only Facebook posts to prove your hypothesis that the campaign was successful would be a perfect manifestation of confirmation bias.
  • Remedy: as this pitfall is often based on subjective desires, one remedy would be to analyze data with a team of objective individuals. If this is not possible, another solution is to resist the urge to make a conclusion before data exploration has been completed. Remember to always try to disprove a hypothesis, not prove it.

3) Irrelevant data: the third data misinterpretation pitfall is especially important in the digital age. As large data is no longer centrally stored and as it continues to be analyzed at the speed of thought, it is inevitable that analysts will focus on data that is irrelevant to the problem they are trying to correct.

  • Digital age example: in attempting to gauge the success of an email lead generation campaign, you notice that the number of homepage views directly resulting from the campaign increased, but the number of monthly newsletter subscribers did not. Based on the number of homepage views, you decide the campaign was a success when really it generated zero leads.
  • Remedy: proactively and clearly frame any data analysis variables and KPIs prior to engaging in a data review. If the metric you use to measure the success of a lead generation campaign is newsletter subscribers, there is no need to review the number of homepage visits. Be sure to focus on the data variable that answers your question or solves your problem and not on irrelevant data.

4) Truncating an axis: When creating a graph to start interpreting the results of your analysis, it is important to keep the axes truthful and avoid generating misleading visualizations. Starting an axis at a value that doesn’t portray the actual truth about the data can lead to false conclusions. 

  • Digital age example: In the image below, we can see a graph from Fox News in which the Y-axis starts at 34%, making it seem that the difference between 35% and 39.6% is far larger than it actually is. This could lead to a misinterpretation of the tax rate changes. 

Fox News graph with a truncated Y-axis

Source: www.venngage.com

  • Remedy: Be careful with how your data is visualized. Be honest and realistic with axes to avoid misinterpretation of your data. See below how the Fox News chart looks when using the correct axis values. This chart was created with datapine's modern online data visualization tool.

Fox News graph with the correct axis values

5) (Small) sample size: Another common problem is using a small sample size. Logically, the bigger the sample size, the more accurate and reliable the results. However, this also depends on the size of the effect of the study. For example, the sample size in a survey about the quality of education will not be the same as for one about people doing outdoor sports in a specific area. 

  • Digital age example: Imagine you ask 20 people a question, and 19 answer “yes,” resulting in 95% of the total. Now imagine you ask the same question to 1,000 people, and 950 of them answer “yes,” which is again 95%. While these percentages might look the same, they certainly do not mean the same thing, as a 20-person sample is not large enough to establish a truthful conclusion. 
  • Remedy: Researchers say that in order to determine the correct sample size to get truthful and meaningful results, it is necessary to define a margin of error that will represent the maximum amount they want the results to deviate from the statistical mean. Paired with this, they need to define a confidence level that should be between 90 and 99%. With these two values in hand, researchers can calculate an accurate sample size for their studies.
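The remedy above can be sketched in a few lines of Python using the standard proportion-based sample size formula. The z-scores are the usual values for each confidence level, and p = 0.5 is the conservative worst-case assumption about the true proportion:

```python
import math

# Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2
# z: z-score for the chosen confidence level, e: margin of error,
# p = 0.5 is the conservative (worst-case) assumption about the true proportion.
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(confidence: float, margin_of_error: float, p: float = 0.5) -> int:
    z = Z_SCORES[confidence]
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# 95% confidence with a 5% margin of error gives the often-quoted 385 respondents.
print(sample_size(0.95, 0.05))
```

Tightening either value raises the required sample quickly: at 99% confidence with the same 5% margin, the formula calls for 664 respondents.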

6) Reliability, subjectivity, and generalizability: When performing qualitative analysis, researchers must consider practical and theoretical limitations when interpreting the data. In some cases, this type of research can be considered unreliable because of uncontrolled factors that might or might not affect the results. This is compounded by the fact that the researcher has a primary role in the interpretation process: they decide what is relevant and what is not, and, as we know, interpretations can be very subjective.

Generalizability is also an issue that researchers face when dealing with qualitative analysis. As mentioned in the point about having a small sample size, it is difficult to draw conclusions that are 100% representative because the results might be biased or unrepresentative of a wider population. 

While these factors are mostly present in qualitative research, they can also affect quantitative analysis. For example, when choosing which KPIs to portray and how to portray them, analysts can also be biased and represent them in a way that benefits their analysis.

  • Digital age example: Biased questions in a survey are a great example of reliability and subjectivity issues. Imagine you are sending a survey to your clients to see how satisfied they are with your customer service with this question: “How amazing was your experience with our customer service team?”. Here, we can see that this question clearly influences the response of the individual by including the word “amazing” in it. 
  • Remedy: A solution to avoid these issues is to keep your research honest and neutral. Keep the wording of the questions as objective as possible. For example: “On a scale of 1-10, how satisfied were you with our customer service team?”. This does not lead the respondent to any specific answer, meaning the results of your survey will be reliable. 

Data Interpretation Best Practices & Tips


Data analysis and interpretation are critical to developing sound conclusions and making better-informed decisions. As we have seen with this article, there is an art and science to the interpretation of data. To help you with this purpose, we will list a few relevant techniques, methods, and tricks you can implement for a successful data management process. 

As mentioned at the beginning of this post, the first step to interpreting data successfully is to identify the type of analysis you will perform and apply the methods accordingly. Clearly differentiate between qualitative analysis (observing, documenting, and interviewing, then collecting and reflecting on the material) and quantitative analysis (research based on large amounts of numerical data to be analyzed through various statistical methods). 

1) Ask the right data interpretation questions

The first data interpretation technique is to define a clear baseline for your work. This can be done by answering some critical questions that will serve as a useful guideline to start. Some of them include: what are the goals and objectives of my analysis? What type of data interpretation method will I use? Who will use this data in the future? And most importantly, what general question am I trying to answer?

Once all this information has been defined, you will be ready for the next step: collecting your data. 

2) Collect and assimilate your data

Now that a clear baseline has been established, it is time to collect the information you will use. Always remember that your methods for data collection will vary depending on what type of analysis method you use, which can be qualitative or quantitative. Based on that, relying on professional online data analysis tools to facilitate the process is a great practice in this regard, as manually collecting and assessing raw data is not only very time-consuming and expensive but is also at risk of errors and subjectivity. 

Once your data is collected, you need to carefully assess it to understand whether its quality is appropriate for a study. This means asking: Is the sample size big enough? Were the procedures used to collect the data implemented correctly? Is the date range of the data correct? If it comes from an external source, is it a trusted and objective one? 

With all the needed information in hand, you are ready to start the interpretation process, but first, you need to visualize your data. 

3) Use the right data visualization type 

Data visualizations such as business graphs, charts, and tables are fundamental to successfully interpreting data. This is because data visualization via interactive charts and graphs makes the information more understandable and accessible. As you might be aware, there are different types of visualizations you can use, but not all of them are suitable for any analysis purpose. Using the wrong graph can lead to misinterpretation of your data, so it’s very important to carefully pick the right visual for it. Let’s look at some use cases of common data visualizations. 

  • Bar chart: One of the most used chart types, the bar chart uses rectangular bars to show the relationship between 2 or more variables. There are different types of bar charts for different interpretations, including the horizontal bar chart, column bar chart, and stacked bar chart. 
  • Line chart: Most commonly used to show trends, acceleration or decelerations, and volatility, the line chart aims to show how data changes over a period of time, for example, sales over a year. A few tips to keep this chart ready for interpretation are not using many variables that can overcrowd the graph and keeping your axis scale close to the highest data point to avoid making the information hard to read. 
  • Pie chart: Although it doesn’t do a lot in terms of analysis due to its simple nature, the pie chart is widely used to show the proportional composition of a variable. Visually speaking, showing a percentage in a bar chart is far more complicated than showing it in a pie chart. However, this also depends on the number of variables you are comparing. If your pie chart would need to be divided into 10 portions, then it is better to use a bar chart instead. 
  • Tables: While they are not a specific type of chart, tables are widely used when interpreting data. Tables are especially useful when you want to portray data in its raw format. They give you the freedom to easily look up or compare individual values while also displaying grand totals. 
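As a toy illustration of why bar charts suit categorical comparisons, here is a quick text-based "bar chart" of hypothetical ordinal survey responses, built with the Python standard library:

```python
from collections import Counter

# Hypothetical ordinal survey responses, rendered as a quick text bar chart
# to show how a bar chart makes categorical comparisons easy to read.
responses = ["agree", "agree", "strongly agree", "disagree", "agree",
             "strongly agree", "agree", "neutral"]

counts = Counter(responses)
for label, n in counts.most_common():
    print(f"{label:<16}{'#' * n} {n}")
```

Even in this crude form, the relative lengths of the bars communicate the distribution at a glance, which is exactly what a proper bar chart does for 2 or more variables.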

With the use of data visualizations becoming more and more critical for businesses’ analytical success, many tools have emerged to help users visualize their data in a cohesive and interactive way. One of the most popular ones is the use of BI dashboards. These visual tools provide a centralized view of various graphs and charts that paint a bigger picture of a topic. We will discuss the power of dashboards for an efficient data interpretation practice in the next portion of this post. If you want to learn more about different types of graphs and charts, take a look at our complete guide on the topic. 

4) Start interpreting 

After the tedious preparation part, you can start extracting conclusions from your data. As mentioned many times throughout the post, the way you decide to interpret the data will solely depend on the methods you initially decided to use. If you had initial research questions or hypotheses, then you should look for ways to prove their validity. If you are going into the data with no defined hypothesis, then start looking for relationships and patterns that will allow you to extract valuable conclusions from the information. 

During the process of interpretation, stay curious and creative, dig into the data, and determine if there are any other critical questions that should be asked. If any new questions arise, you need to assess if you have the necessary information to answer them. Being able to identify if you need to dedicate more time and resources to the research is a very important step. No matter if you are studying customer behaviors or a new cancer treatment, the findings from your analysis may dictate important decisions in the future. Therefore, taking the time to really assess the information is key. For that purpose, data interpretation software proves to be very useful.

5) Keep your interpretation objective

As mentioned above, objectivity is one of the most important data interpretation skills but also one of the hardest. Being the person closest to the investigation, it is easy to become subjective when looking for answers in the data. A good way to stay objective is to show the information related to the study to other people, for example, research partners or even the people who will use your findings once they are done. This can help avoid confirmation bias and any reliability issues with your interpretation. 

Remember, using a visualization tool such as a modern dashboard will make the interpretation process way easier and more efficient as the data can be navigated and manipulated in an easy and organized way. And not just that, using a dashboard tool to present your findings to a specific audience will make the information easier to understand and the presentation way more engaging thanks to the visual nature of these tools. 

6) Mark your findings and draw conclusions

Findings are the observations you extracted from your data. They are the facts that will help you drive deeper conclusions about your research. For example, findings can be trends and patterns you found during your interpretation process. To put your findings into perspective, you can compare them with other resources that use similar methods and use them as benchmarks.

Reflect on your own thinking and reasoning and be aware of the many pitfalls data analysis and interpretation carry—correlation versus causation, subjective bias, false information, inaccurate data, etc. Once you are comfortable with interpreting the data, you will be ready to develop conclusions, see if your initial questions were answered, and suggest recommendations based on them.

Interpretation of Data: The Use of Dashboards Bridging The Gap

As we have seen, quantitative and qualitative methods are distinct types of data interpretation and analysis. Both offer a varying degree of return on investment (ROI) regarding data investigation, testing, and decision-making. But how do you mix the two and prevent a data disconnect? The answer is professional data dashboards. 

For a few years now, dashboards have become invaluable tools to visualize and interpret data. These tools offer a centralized and interactive view of data and provide the perfect environment for exploration and extracting valuable conclusions. They bridge the quantitative and qualitative information gap by unifying all the data in one place with the help of stunning visuals. 

Not only that, but these powerful tools offer a large list of benefits, and we will discuss some of them below. 

1) Connecting and blending data. With today’s pace of innovation, it is no longer feasible (nor desirable) to have bulk data centrally located. As businesses continue to globalize and borders continue to dissolve, it will become increasingly important for businesses to possess the capability to run diverse data analyses absent the limitations of location. Data dashboards decentralize data without compromising on the necessary speed of thought while blending both quantitative and qualitative data. Whether you want to measure customer trends or organizational performance, you now have the capability to do both without the need for a singular selection.

2) Mobile Data. Related to the notion of “connected and blended data” is that of mobile data. In today’s digital world, employees are spending less time at their desks and simultaneously increasing production. This is made possible because mobile solutions for analytical tools are no longer standalone. Today, mobile analysis applications seamlessly integrate with everyday business tools. In turn, both quantitative and qualitative data are now available on-demand where they’re needed, when they’re needed, and how they’re needed via interactive online dashboards.

3) Visualization. Data dashboards bridge the gap between qualitative and quantitative data interpretation methods through the science of visualization. Dashboard solutions come “out of the box” well-equipped to create easy-to-understand data presentations. Modern online data visualization tools provide a variety of color and filter patterns, encourage user interaction, and are engineered to help enhance future trend predictability. All of these visual characteristics make for an easy transition among data methods – you only need to find the right types of data visualization to tell your data story in the best way possible.

4) Collaboration. Whether in a business environment or a research project, collaboration is key in data interpretation and analysis. Dashboards are online tools that can be easily shared through a password-protected URL or automated email. Through them, users can collaborate and communicate through the data efficiently, eliminating the need for countless files with lost updates. Tools such as datapine offer real-time updates, meaning your dashboards will refresh on their own as soon as new information is available.

Examples Of Data Interpretation In Business

To give you an idea of how a dashboard can fulfill the need to bridge quantitative and qualitative analysis and help in understanding how to interpret data in research thanks to visualization, below, we will discuss three valuable examples to put their value into perspective.

1. Customer Satisfaction Dashboard 

This market research dashboard brings together both qualitative and quantitative data that are knowledgeably analyzed and visualized in a meaningful way that everyone can understand, thus empowering any viewer to interpret it. Let’s explore it below. 

Data interpretation example on customers' satisfaction with a brand


The value of this template lies in its highly visual nature. As mentioned earlier, visuals make the interpretation process easier and more efficient. Having critical pieces of data represented with colorful and interactive icons and graphs makes it possible to uncover insights at a glance. For example, the green, yellow, and red colors on the charts for the NPS and the customer effort score allow us to conclude at a glance that most respondents are satisfied with this brand. The line chart below adds depth to this conclusion, as we can see both metrics developed positively over the past 6 months.

The bottom part of the template provides visually stunning representations of different satisfaction scores for quality, pricing, design, and service. By looking at these, we can conclude that, overall, customers are satisfied with this company in most areas. 

2. Brand Analysis Dashboard

Next in our list of data interpretation examples, we have a template that shows the answers to a survey on awareness of Brand D. The sample size is listed at the top to put the data into perspective, and the data is represented using interactive charts and graphs.

Data interpretation example using a market research dashboard for brand awareness analysis

When interpreting information, context is key to understanding it correctly. For that reason, the dashboard starts by offering insights into the demographics of the surveyed audience. In general, we can see ages and gender are diverse. Therefore, we can conclude these brands are not targeting customers from a specified demographic, an important aspect to put the surveyed answers into perspective. 

Looking at the awareness portion, we can see that brand B is the most popular one, with brand D coming second on both questions. This means brand D is not doing badly, but there is still room for improvement compared to brand B. To see where brand D could improve, the researcher could go to the bottom part of the dashboard and consult the answers for branding themes and celebrity analysis. These are important as they give clear insight into what people and messages the audience associates with brand D. This is an opportunity to exploit these topics in different ways and achieve growth and success.

3. Product Innovation Dashboard 

Our third and last dashboard example shows the answers to a survey on product innovation for a technology company. Just like the previous templates, the interactive and visual nature of the dashboard makes it the perfect tool to interpret data efficiently and effectively. 

Market research results on product innovation, useful for product development and pricing decisions as an example of data interpretation using dashboards

Starting from right to left, we first get a list of the top 5 products by purchase intention. This information lets us understand if the product being evaluated resembles what the audience already intends to purchase. It is a great starting point to see how customers would respond to the new product. This information can be complemented with other key metrics displayed in the dashboard. For example, the usage and purchase intention track how the market would receive the product and if they would purchase it, respectively. Interpreting these values as positive or negative will depend on the company and its expectations regarding the survey. 

Complementing these metrics, we have the willingness to pay, arguably one of the most important metrics for defining pricing strategies. Here, we can see that most respondents think the suggested price is good value for money. Therefore, we can interpret that the product would sell at that price.

To see more data analysis and interpretation examples for different industries and functions, visit our library of business dashboards.

To Conclude…

As we reach the end of this insightful post about data interpretation and analysis, we hope you have a clear understanding of the topic. We've covered the definition and given some examples and methods to perform a successful interpretation process.

The importance of data interpretation is undeniable. Dashboards not only bridge the information gap between traditional data interpretation methods and technology, but they can help remedy and prevent the major pitfalls of the process. As a digital age solution, they combine the best of the past and the present to allow for informed decision-making with maximum data interpretation ROI.

To start visualizing your insights in a meaningful and actionable way, test our online reporting software for free with our 14-day trial!

What is Data Interpretation? + [Types, Method & Tools]

busayo.longe


Data interpretation and analysis are fast becoming more valuable with the prominence of digital communication, which is responsible for the large amount of data being churned out daily. According to the WEF’s “A Day in Data” report, the accumulated digital universe of data was set to reach 44 zettabytes (ZB) in 2020.

Based on this report, it is clear that for any business to be successful in today’s digital world, the founders need to know or employ people who know how to analyze complex data, produce actionable insights and adapt to new market trends. Also, all these need to be done in milliseconds.

So, what is data interpretation and analysis, and how do you leverage this knowledge to help your business or research? All this and more will be revealed in this article.

What is Data Interpretation?

Data interpretation is the process of reviewing data through some predefined processes which will help assign some meaning to the data and arrive at a relevant conclusion. It involves taking the result of data analysis, making inferences on the relations studied, and using them to conclude.

Therefore, before one can talk about interpreting data, the data must first be analyzed. What, then, is data analysis?

Data analysis is the process of ordering, categorizing, manipulating, and summarizing data to obtain answers to research questions. It is usually the first step taken towards data interpretation.

It is evident that the interpretation of data is very important, and as such needs to be done properly. Therefore, researchers have identified some data interpretation methods to aid this process.

What are Data Interpretation Methods?

Data interpretation methods are the ways analysts help people make sense of data that has been collected, analyzed, and presented. Data, when collected in raw form, may be difficult for the layman to understand, which is why analysts need to break down the information gathered so that others can make sense of it.

For example, when founders are pitching to potential investors, they must interpret data (e.g. market size, growth rate, etc.) for better understanding. There are 2 main methods by which this can be done, namely quantitative methods and qualitative methods.

Qualitative Data Interpretation Method 

The qualitative data interpretation method is used to analyze qualitative data, which is also known as categorical data. This method uses texts rather than numbers or patterns to describe data.

Qualitative data is usually gathered using a wide variety of person-to-person techniques, which can make it more difficult to analyze than data from the quantitative research method.

Unlike quantitative data, which can be analyzed directly after it has been collected and sorted, qualitative data first needs to be coded into numbers before it can be analyzed. This is because texts are usually cumbersome and, if analyzed in their original state, take more time and produce more errors. Coding done by the analyst should also be documented so that it can be reused and checked by others.

There are 2 main types of qualitative data, namely nominal and ordinal data. Both are interpreted using the same method, but ordinal data is somewhat easier to interpret than nominal data.

In most cases, ordinal data is usually labeled with numbers during the process of data collection, and coding may not be required. This is different from nominal data that still needs to be coded for proper interpretation.
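As a minimal sketch of this coding step (all labels and code values below are hypothetical):

```python
# Minimal sketch of coding qualitative data into numbers before analysis.
# All labels and code values below are hypothetical.

# Ordinal data has a natural order that maps directly to numbers:
satisfaction_code = {"Very dissatisfied": 1, "Dissatisfied": 2,
                     "Neutral": 3, "Satisfied": 4, "Very satisfied": 5}

# Nominal data has no inherent order, so the analyst assigns codes
# arbitrarily and documents the mapping for reuse:
colour_code = {"Red": 1, "Green": 2, "Blue": 3}

responses = ["Satisfied", "Neutral", "Very satisfied"]
coded = [satisfaction_code[r] for r in responses]
print(coded)  # [4, 3, 5]
```

Once coded this way, the responses can be analyzed with the same statistical tools used for quantitative data.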

Quantitative Data Interpretation Method

The quantitative data interpretation method is used to analyze quantitative data, which is also known as numerical data . This data type contains numbers and is therefore analyzed with the use of numbers and not texts.

Quantitative data is of 2 main types, namely discrete and continuous data. Continuous data is further divided into interval data and ratio data, with all of these types being numeric.

Because quantitative data already exists as numbers, analysts do not need to apply the coding technique before analysis. The process of analyzing quantitative data involves statistical techniques such as the mean, median, and standard deviation.

Some of the statistical methods used in analyzing quantitative data are highlighted below:

  • Mean

The mean is the numerical average of a set of data, calculated by dividing the sum of the values by the number of values in the dataset. It is used to estimate a characteristic of a large population from a sample of that population.

For example, online job boards in the US use the data collected from a group of registered users to estimate the salary paid to people of a particular profession. The estimate is usually made using the average salary submitted on their platform for each profession.
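A minimal sketch of the job-board calculation, using hypothetical submitted salaries:

```python
from statistics import mean

# Hypothetical salaries (USD/year) submitted on a job board for one profession.
salaries = [18000, 22000, 19500, 21000, 19500]

# The mean: sum of the values divided by the number of values.
average_salary = mean(salaries)
print(average_salary)  # 20000
```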

  • Standard deviation

This technique measures how well responses align with or deviate from the mean. It describes the degree of consistency within the responses; together with the mean, it provides insight into datasets.

In the job board example highlighted above, if the average salary of writers in the US is $20,000 per annum and the standard deviation is, say, $5,000, we can deduce that the salaries are spread far apart. This raises further questions, such as why the salaries deviate from each other that much.

Exploring this question, we might conclude that the sample contains people with few years of experience, which translates to lower salaries, and people with many years of experience, which translates to higher salaries, but few people with mid-level experience.
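A minimal sketch of the calculation, reusing the hypothetical job-board salaries and Python's statistics module:

```python
from statistics import mean, pstdev

# The same hypothetical job-board salaries as in the mean example.
salaries = [18000, 22000, 19500, 21000, 19500]

avg = mean(salaries)
spread = pstdev(salaries)  # population standard deviation around the mean

# A large spread relative to the mean signals widely differing salaries.
print(avg, round(spread, 2))
```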

  • Frequency distribution

This technique is used to assess the demography of the respondents or the number of times a particular response appears in research. It is particularly useful for showing how responses are distributed across categories.
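A minimal sketch, counting hypothetical survey responses with Python's collections.Counter:

```python
from collections import Counter

# Hypothetical answers to a single survey question.
answers = ["Friend", "Ad", "Friend", "Search", "Friend", "Ad"]

# The frequency distribution: how often each response appears.
freq = Counter(answers)
print(freq.most_common())  # [('Friend', 3), ('Ad', 2), ('Search', 1)]
```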

Some other interpretation processes of quantitative data include:

  • Regression analysis
  • Cohort analysis
  • Predictive and prescriptive analysis
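Of these, regression analysis is the most common; a minimal least-squares sketch on hypothetical figures (the data and variable names are illustrative):

```python
# Hypothetical paired data: quarterly ad spend (x) vs. sales (y).
x = [1.0, 2.0, 3.0, 4.0]
y = [3.1, 4.9, 7.2, 8.8]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Least-squares fit: slope = covariance(x, y) / variance(x).
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

print(slope, intercept)  # slope ≈ 1.94, intercept ≈ 1.15
```

The fitted line (here, sales ≈ 1.94 × spend + 1.15) is then interpreted: the slope quantifies how much the dependent variable changes per unit of the independent one.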

Tips for Collecting Accurate Data for Interpretation  

  • Identify the Required Data Type

Researchers need to identify the type of data required for a particular study. Is it nominal, ordinal, interval, or ratio data?

The key to collecting the data required for research is to properly understand the research question. If researchers understand the research question, they can identify the kind of data required to carry out the research.

For example, when collecting customer feedback, the best data type to use is ordinal data. Ordinal data can be used to assess a customer’s feelings about a brand and is also easy to interpret.

  • Avoid Biases

There are different kinds of biases a researcher might encounter when collecting data for analysis. Although biases sometimes come from the researcher, most of the biases encountered during the data collection process are caused by the respondent.

There are 2 main biases that can be caused by the respondent, namely response bias and non-response bias. Researchers may not be able to eliminate these biases, but there are ways to avoid them and reduce them to a minimum.

Response bias occurs when respondents intentionally give wrong answers, while non-response bias occurs when respondents do not answer questions at all. Both are capable of distorting data interpretation.

  • Use Close Ended Surveys

Although open-ended surveys can give detailed information and allow respondents to fully express themselves, they are not the best kind of survey for data interpretation: they require a lot of coding before the data can be analyzed.

Close-ended surveys , on the other hand, restrict the respondents’ answers to some predefined options, while simultaneously eliminating irrelevant data.  This way, researchers can easily analyze and interpret data.

However, close-ended surveys may not be applicable in some cases, like when collecting respondents’ personal information like name, credit card details, phone number, etc.

Visualization Techniques in Data Analysis

One of the best practices of data interpretation is the visualization of the dataset. Visualization makes it easy for a layman to understand the data, and also encourages people to view the data, as it provides a visually appealing summary of the data.

There are different techniques of data visualization, some of which are highlighted below.

Bar graphs represent the relationship between 2 or more variables using rectangular bars. These bars can be drawn either vertically or horizontally, but they are most often drawn vertically.

The graph contains the horizontal axis (x) and the vertical axis (y), with the former representing the independent variable while the latter is the dependent variable. Bar graphs can be grouped into different types, depending on how the rectangular bars are placed on the graph.

Some types of bar graphs are highlighted below:

  • Grouped Bar Graph

The grouped bar graph is used to show more information about variables that are subgroups of the same group with each subgroup bar placed side-by-side like in a histogram.

  • Stacked Bar Graph

A stacked bar graph is a grouped bar graph with its rectangular bars stacked on top of each other rather than placed side by side.

  • Segmented Bar Graph

Segmented bar graphs are stacked bar graphs where each rectangular bar shows 100% of the dependent variable. It is mostly used when there is an intersection between the variable categories.
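The grouped and stacked variants above can be sketched with a plotting library such as matplotlib (an assumed tool; the sales figures are hypothetical):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
product_a = [10, 12, 9, 14]   # hypothetical sales figures
product_b = [7, 11, 13, 8]

x = range(len(quarters))
width = 0.35

fig, (grouped, stacked) = plt.subplots(1, 2)

# Grouped bar graph: subgroup bars placed side by side.
grouped.bar([i - width / 2 for i in x], product_a, width, label="Product A")
grouped.bar([i + width / 2 for i in x], product_b, width, label="Product B")

# Stacked bar graph: Product B's bars sit on top of Product A's.
stacked.bar(x, product_a, width, label="Product A")
stacked.bar(x, product_b, width, bottom=product_a, label="Product B")

for ax in (grouped, stacked):
    ax.set_xticks(list(x))
    ax.set_xticklabels(quarters)
    ax.legend()
```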

Advantages of a Bar Graph

  • It helps to summarize large datasets
  • Estimates of key values can be made at a glance
  • It can be easily understood

Disadvantages of a Bar Graph

  • It may require additional explanation.
  • It can be easily manipulated.
  • It doesn’t reveal details of the underlying dataset.

A pie chart is a circular graph used to represent the percentage of occurrence of a variable using sectors. The size of each sector is dependent on the frequency or percentage of the corresponding variables.

There are different variants of the pie charts, but for the sake of this article, we will be restricting ourselves to only 3. For better illustration of these types, let us consider the following examples.

Pie Chart Example : There are a total of 50 students in a class, and out of them, 10 students like Football, 25 students like snooker, and 15 students like Badminton. 

  • Simple Pie Chart

The simple pie chart is the most basic type of pie chart, depicting each category’s share of the whole as a single sector.

  • Doughnut Pie Chart

The doughnut chart is a variant of the pie chart with a blank center, allowing additional information about the data as a whole to be included.

  • 3D Pie Chart

A 3D pie chart gives the chart a three-dimensional look and is often used for aesthetic purposes. It is usually more difficult to read because the third dimension distorts the perceived proportions of the sectors.
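Using the class example above (50 students: 10 like Football, 25 Snooker, 15 Badminton), each sector's size can be computed directly; a minimal sketch:

```python
# Each pie sector's angle is its category's share of 360 degrees.
preferences = {"Football": 10, "Snooker": 25, "Badminton": 15}
total = sum(preferences.values())  # 50 students

angles = {sport: 360 * count / total for sport, count in preferences.items()}
print(angles)  # {'Football': 72.0, 'Snooker': 180.0, 'Badminton': 108.0}
```

Snooker's sector spans exactly half the circle because 25 of the 50 students chose it.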

Advantages of a Pie Chart 

  • It is visually appealing.
  • Best for comparing small data samples.

Disadvantages of a Pie Chart

  • It can only compare small sample sizes.
  • Unhelpful with observing trends over time.

Tables are used to represent statistical data by placing them in rows and columns. They are one of the most common statistical visualization techniques and are of 2 main types, namely; simple and complex tables.

  • Simple Tables

Simple tables summarize information on a single characteristic and may also be called univariate tables. An example is a simple table showing the number of employed people in a community by age group.

  • Complex Tables

As the name suggests, complex tables summarize complex information and present it in two or more intersecting categories. An example is a complex table showing the number of employed people in a population by both age group and sex.
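A minimal sketch of both table types, built from hypothetical survey records with Python's standard library:

```python
from collections import Counter

# Hypothetical survey records: (age group, sex) for each employed person.
records = [("18-30", "F"), ("18-30", "M"), ("31-50", "F"),
           ("18-30", "F"), ("31-50", "M"), ("31-50", "M")]

# Complex (two-way) table: cross-tabulate both characteristics.
table = Counter(records)
print(table[("18-30", "F")])  # 2

# Simple (univariate) table: summarize a single characteristic.
by_age = Counter(age for age, _ in records)
print(by_age["31-50"])  # 3
```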

Advantages of Tables

  • Can contain large data sets
  • Helpful in comparing 2 or more similar things

Disadvantages of Tables

  • They do not give detailed information.
  • May be time-consuming to read.

Line graphs or charts are a type of graph that displays information as a series of points, usually connected by a straight line. Some of the types of line graphs are highlighted below.

  • Simple Line Graphs

Simple line graphs show the trend of data over time, and may also be used to compare categories. Let us assume we got the sales data of a firm for each quarter and are to visualize it using a line graph to estimate sales for the next year.

  • Line Graphs with Markers

These are similar to line graphs but have visible markers illustrating the data points.

  • Stacked Line Graphs

Stacked line graphs plot several series on top of one another, with each series’ values added to those below it so the lines do not overlap. Consider that we got the quarterly sales data for each product sold by the company and are to visualize it to predict company sales for the next year.
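The stacking itself is simple arithmetic; a minimal sketch with hypothetical quarterly sales for two products:

```python
# Hypothetical quarterly sales for two products. In a stacked line graph
# the second series is drawn on top of the first, i.e. plotted as the
# running elementwise total rather than its raw values.
product_a = [10, 12, 9, 14]
product_b = [7, 11, 13, 8]

stacked_b = [a + b for a, b in zip(product_a, product_b)]
print(stacked_b)  # [17, 23, 22, 22]
```

The top line then traces total company sales per quarter, while the gap between the lines shows each product's contribution.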

Advantages of a Line Graph

  • Great for visualizing trends and changes over time.
  • It is simple to construct and read.

Disadvantage of a Line Graph

  • It cannot compare different variables at a single point in time.
Read: 11 Types of Graphs & Charts + [Examples]

What are the Steps in Interpreting Data?

After data collection, you’d want to know the result of your findings. Ultimately, the findings of your data will be largely dependent on the questions you’ve asked in your survey or your initial study questions. Here are the four steps for accurately interpreting data:

1. Gather the data

The very first step in interpreting data is to assemble all the relevant data. You can do this by visualizing it first, for example in a bar graph or pie chart. The purpose of this step is to analyze the data accurately and without bias.

Now is the time to remember the details of how you conducted the research. Were there any flaws or changes that occurred when gathering this data? Did you keep any observatory notes and indicators?

Once you have your complete data, you can move to the next stage.

2. Develop your findings

This is the summary of your observations. Here, you examine the data thoroughly to find trends, patterns, or behaviors. If you are researching a group of people through a sample population, this is where you analyze behavioral patterns. The purpose of this step is to compare these deductions before drawing any conclusions. You can compare them with each other, with similar datasets from the past, or with general deductions in your industry.

3. Derive Conclusions

Once you’ve developed findings from your datasets, you can draw conclusions based on the trends you’ve discovered. Your conclusions should answer the questions that led you to your research. If they do not, ask why; this may lead to further research or subsequent questions.

4. Give recommendations

For every research conclusion, there should be a recommendation. This is the final step in data interpretation because recommendations summarize your findings and conclusions. A recommendation can go one of two ways: you can either recommend a line of action or recommend that further research be conducted.

How to Collect Data with Surveys or Questionnaires

As a business owner who wants to regularly track the number of sales made in your business, you need to know how to collect data. Follow these 4 easy steps to collect real-time sales data for your business using Formplus.

Step 1 – Register on Formplus

  • Visit Formplus on your PC or mobile device.
  • Click on the Start for Free button to start collecting data for your business.

Step 2 – Start Creating Surveys For Free

  • Go to the Forms tab beside your Dashboard in the Formplus menu.
  • Click on Create Form to start creating your survey
  • Take advantage of the dynamic form fields to add questions to your survey.
  • You can also add payment options that allow you to receive payments using Paypal, Flutterwave, and Stripe.

Step 3 – Customize Your Survey and Start Collecting Data

  • Go to the Customise tab to beautify your survey by adding colours, background images, fonts, or even a custom CSS.
  • You can also add your brand logo, colour and other things to define your brand identity.
  • Preview your form, share, and start collecting data.

Step 4 – Track Responses Real-time

  • Track your sales data in real-time in the Analytics section.

Why Use Formplus to Collect Data?  

The responses to each form can be accessed through the analytics section, which automatically analyzes the responses collected through Formplus forms. This section visualizes the collected data using tables and graphs, allowing analysts to easily arrive at an actionable insight without going through the rigorous process of analyzing the data.

  • 30+ Form Fields

There is no restriction on the kind of data that can be collected by researchers through the available form fields. Researchers can collect both quantitative and qualitative data types simultaneously through a single questionnaire.

  • Data Storage

The data collected through Formplus is safely stored and secured in the Formplus database. You can also choose to store this data in an external storage device.

  • Real-time access

Formplus gives real-time access to information, making sure researchers are always informed of the current trends and changes in data. That way, researchers can easily measure a shift in market trends that inform important decisions.  

  • WordPress Integration

Users can now embed Formplus forms into their WordPress posts and pages using a shortcode. This can be done by installing the Formplus plugin into your WordPress websites.

Advantages and Importance of Data Interpretation  

  • Data interpretation is important because it helps make data-driven decisions.
  • It saves costs by revealing cost-saving opportunities
  • The insights and findings gained from interpretation can be used to spot trends in a sector or industry.

Conclusion   

Data interpretation and analysis are important aspects of working with datasets in any field of research and statistics. They go hand in hand, as the process of data interpretation involves the analysis of data.

The process of data interpretation is usually cumbersome and naturally becomes more difficult with the vast amount of data being churned out daily. However, with accessible data analysis tools and machine learning techniques, analysts are gradually finding it easier to interpret data.

Data interpretation is very important, as it helps extract useful information from a pool of irrelevant data and supports informed decision-making. It is useful for individuals, businesses, and researchers alike.


  • Banking Law
  • Insolvency Law
  • History of Law
  • Human Rights and Immigration
  • Intellectual Property Law
  • Browse content in International Law
  • Private International Law and Conflict of Laws
  • Public International Law
  • IT and Communications Law
  • Jurisprudence and Philosophy of Law
  • Law and Politics
  • Law and Society
  • Browse content in Legal System and Practice
  • Courts and Procedure
  • Legal Skills and Practice
  • Primary Sources of Law
  • Regulation of Legal Profession
  • Medical and Healthcare Law
  • Browse content in Policing
  • Criminal Investigation and Detection
  • Police and Security Services
  • Police Procedure and Law
  • Police Regional Planning
  • Browse content in Property Law
  • Personal Property Law
  • Study and Revision
  • Terrorism and National Security Law
  • Browse content in Trusts Law
  • Wills and Probate or Succession
  • Browse content in Medicine and Health
  • Browse content in Allied Health Professions
  • Arts Therapies
  • Clinical Science
  • Dietetics and Nutrition
  • Occupational Therapy
  • Operating Department Practice
  • Physiotherapy
  • Radiography
  • Speech and Language Therapy
  • Browse content in Anaesthetics
  • General Anaesthesia
  • Neuroanaesthesia
  • Clinical Neuroscience
  • Browse content in Clinical Medicine
  • Acute Medicine
  • Cardiovascular Medicine
  • Clinical Genetics
  • Clinical Pharmacology and Therapeutics
  • Dermatology
  • Endocrinology and Diabetes
  • Gastroenterology
  • Genito-urinary Medicine
  • Geriatric Medicine
  • Infectious Diseases
  • Medical Toxicology
  • Medical Oncology
  • Pain Medicine
  • Palliative Medicine
  • Rehabilitation Medicine
  • Respiratory Medicine and Pulmonology
  • Rheumatology
  • Sleep Medicine
  • Sports and Exercise Medicine
  • Community Medical Services
  • Critical Care
  • Emergency Medicine
  • Forensic Medicine
  • Haematology
  • History of Medicine
  • Browse content in Medical Skills
  • Clinical Skills
  • Communication Skills
  • Nursing Skills
  • Surgical Skills
  • Browse content in Medical Dentistry
  • Oral and Maxillofacial Surgery
  • Paediatric Dentistry
  • Restorative Dentistry and Orthodontics
  • Surgical Dentistry
  • Medical Ethics
  • Medical Statistics and Methodology
  • Browse content in Neurology
  • Clinical Neurophysiology
  • Neuropathology
  • Nursing Studies
  • Browse content in Obstetrics and Gynaecology
  • Gynaecology
  • Occupational Medicine
  • Ophthalmology
  • Otolaryngology (ENT)
  • Browse content in Paediatrics
  • Neonatology
  • Browse content in Pathology
  • Chemical Pathology
  • Clinical Cytogenetics and Molecular Genetics
  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Ethics
  • Business Strategy
  • Business History
  • Business and Technology
  • Business and Government
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic History
  • Economic Systems
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Policy
  • Public Administration
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Qualitative Research (2nd edn)


31 Interpretation In Qualitative Research: What, Why, How

Allen Trent, College of Education, University of Wyoming

Jeasik Cho, Department of Educational Studies, University of Wyoming

  • Published: 02 September 2020

This chapter addresses a wide range of concepts related to interpretation in qualitative research, examines the meaning and importance of interpretation in qualitative inquiry, and explores the ways methodology, data, and the self/researcher as instrument interact and impact interpretive processes. Additionally, the chapter presents a series of strategies for qualitative researchers engaged in the process of interpretation and closes by presenting a framework for qualitative researchers designed to inform their interpretations. The framework includes attention to the key qualitative research concepts of transparency, reflexivity, analysis, validity, evidence, and literature. Four questions frame the chapter: What is interpretation, and why are interpretive strategies important in qualitative research? How do methodology, data, and the researcher/self impact interpretation in qualitative research? How do qualitative researchers engage in the process of interpretation? And, in what ways can a framework for interpretation strategies support qualitative researchers across multiple methodologies and paradigms?

“All human knowledge takes the form of interpretation.” In this seemingly simple statement, the late German philosopher Walter Benjamin asserted that all knowledge is mediated and constructed. In doing so, he situated himself as an interpretivist, one who believes that human subjectivity (individuals’ characteristics, feelings, opinions, and experiential backgrounds) impacts observations, analysis of these observations, and resultant knowledge/truth constructions. Hammersley (2013) noted,

People—unlike atoms … actively interpret or make sense of their environment and of themselves; the ways in which they do this are shaped by the particular cultures in which they live; and these distinctive cultural orientations will strongly influence not only what they believe but also what they do. (p. 26)

Contrast this perspective with positivist claims that knowledge is based exclusively on external facts, objectively observed and recorded. Interpretivists, then, acknowledge that if positivistic notions of knowledge and truth are inadequate to explain social phenomena, then positivist, hard science approaches to research (i.e., the scientific method and its variants) are also inadequate and can even have a detrimental impact. According to Polanyi (1967), “The ideal of exact science would turn out to be fundamentally misleading and possibly a source of devastating fallacies” (as cited in Packer, 2018, p. 71). So, although the literature often contrasts quantitative and qualitative research as largely a difference in the kinds of data employed (numerical vs. linguistic), the primary differentiation is instead in the foundational, paradigmatic assumptions about truth, knowledge, and objectivity.

This chapter is about interpretation and the strategies that qualitative researchers use to interpret a wide variety of “texts.” Knowledge, we assert, is constructed, both individually (constructivism) and socially (constructionism). We accept this as our starting point. Our aim here is to share our perspective on a broad set of concepts associated with the interpretive, or meaning-making, process. Although it may happen at different times and in different ways, interpretation is part of almost all qualitative research.

Qualitative research is an umbrella term that encompasses a wide array of paradigmatic views, goals, and methods. Still, there are key unifying elements that include a generally constructionist epistemological standpoint, attention to primarily linguistic data, and generally accepted protocols or syntax for conducting research. Typically, qualitative researchers begin with a starting point—a curiosity, a problem in need of solutions, a research question, and/or a desire to better understand a situation from the “native” perspectives of the individuals who inhabit that context. This is what anthropologists call the emic, or insider’s, perspective. Olivier de Sardan (2015) wrote, “It evokes the meaning that social facts have for the actors concerned. It is opposed to the term etic, which, at times, designates more external or ‘objective’ data, and, at others, the researcher’s interpretive analysis” (p. 65).

From this starting point, researchers determine the appropriate kinds of data to collect, engage in fieldwork as participant observers to gather these data, organize the data, look for patterns, and attempt to understand the emic perspectives while integrating their own emergent interpretations. Researchers construct meaning from data by synthesizing research “findings,” “assertions,” or “theories” that can be shared so that others may also gain insights from the conducted inquiry. This interpretive process has a long history; hermeneutics, the theory of interpretation, blossomed in the 17th century in the form of biblical exegesis (Packer, 2018).

Although there are commonalities that cut across most forms of qualitative research, this is not to say that there is an accepted, linear, standardized approach. To be sure, there are an infinite number of variations and nuances in the qualitative research process. For example, some forms of inquiry begin with a firm research question; others start without even a clear focus for study. Grounded theorists begin data analysis and interpretation very early in the research process, whereas some case study researchers, for example, may collect data in the field for a period of time before seriously considering the data and its implications. Some ethnographers may be a part of the context (e.g., observing in classrooms), but they may assume more observer-like roles, as opposed to actively participating in the context. Alternatively, action researchers, in studying issues related to their own practice, are necessarily situated toward the participant end of the participant–observer continuum.

Our focus here is on one integrated part of the qualitative research process, interpretation, the hermeneutic process of collective and individual “meaning making.” Like Willig (2017), we believe “interpretation is at the heart of qualitative research because qualitative research is concerned with meaning and the process of meaning-making … qualitative data … needs to be given meaning by the researcher” (p. 276). As we discuss throughout this chapter, researchers take a variety of approaches to interpretation in qualitative work. Four general questions guide our explorations:

What is interpretation, and why are interpretive strategies important in qualitative research?

How do methodology, data, and the researcher/self impact interpretation in qualitative research?

How do qualitative researchers engage in the process of interpretation?

In what ways can a framework for interpretation strategies support qualitative researchers across multiple methodological and paradigmatic views?

We address each of these guiding questions in our attempt to explicate our interpretation of “interpretation” and, as educational researchers, we include examples from our own work to illustrate some key concepts.

What Is Interpretation, and Why Are Interpretive Strategies Important in Qualitative Research?

Qualitative researchers and those writing about qualitative methods often intertwine the terms analysis and interpretation. For example, Hubbard and Power (2003) described data analysis as “bringing order, structure, and meaning to the data” (p. 88). To us, this description combines analysis with interpretation. Although there is nothing wrong with this construction, our understanding aligns more closely with Mills’s (2018) claim that, “put simply, analysis involves summarizing what’s in the data, whereas interpretation involves making sense of—finding meaning in—that data” (p. 176). Hesse-Biber (2017) also separated out the essential process of interpretation. She described the steps in qualitative analysis and interpretation as data preparation, data exploration, and data reduction (all part of Mills’s “analysis” processes), followed by interpretation (pp. 307–328). Willig (2017) elaborated: analysis, she claims, is “sober and systematic,” whereas interpretation is associated with “creativity and the imagination … interpretation is seen as stimulating, it is interesting and it can be illuminating” (p. 276). For the purpose of this chapter, we will adhere to Mills’s distinction, understanding analysis as summarizing and organizing and interpretation as meaning making. Unavoidably, these closely related processes overlap and interact, but our focus will be primarily on the more complex of these endeavors, interpretation. Interpretation, in this sense, is in part translation, but translation is not an objective act. Instead, translation necessarily involves selectivity and the ascribing of meaning. Qualitative researchers “aim beneath manifest behavior to the meaning events have for those who experience them” (Eisner, 1991, p. 35). The presentation of these insider/emic perspectives, coupled with researchers’ own interpretations, is a hallmark of qualitative research.

Qualitative researchers have long borrowed from extant models for fieldwork and interpretation. Approaches from anthropology and the arts have become especially prominent. For example, Eisner’s (1991) form of qualitative inquiry, educational criticism, draws heavily on accepted models of art criticism. T. Barrett (2011), an authority on art criticism, described interpretation as a complex set of processes based on a set of principles. We believe many of these principles apply as readily to qualitative research as they do to critique. The following principles, adapted from T. Barrett’s principles of interpretation (2011), inform our examination:

Qualitative phenomena have “aboutness”: All social phenomena have meaning, but meanings in this context can be multiple, even contradictory.

Interpretations are persuasive arguments: All interpretations are arguments, and qualitative researchers, like critics, strive to build strong arguments grounded in the information, or data, available.

Some interpretations are better than others: Barrett noted that “some interpretations are better argued, better grounded with evidence, and therefore more reasonable, more certain, and more acceptable than others.” This contradicts the argument that “all interpretations are equal,” heard in the common refrain, “Well, that’s just your interpretation.”

There can be different, competing, and contradictory interpretations of the same phenomena: As noted at the beginning of this chapter, we acknowledge that subjectivity matters, and, unavoidably, it impacts one’s interpretations. As Barrett noted, “Interpretations are often based on a worldview.”

Interpretations are not (and cannot be) “right,” but instead, they can be more or less reasonable, convincing, and informative: There is never one “true” interpretation, but some interpretations are more compelling than others.

Interpretations can be judged by coherence, correspondence, and inclusiveness: Does the argument/interpretation make sense (coherence)? Does the interpretation fit the data (correspondence)? Have all data been attended to, including outlier data that do not necessarily support identified themes (inclusiveness)?

Interpretation is ultimately a communal endeavor: Initial interpretations may be incomplete, nearsighted, and/or narrow, but eventually these interpretations become richer, broader, and more inclusive. Feminist revisionist history projects are an exemplary case. Over time, the writing, art, and cultural contributions of countless women, previously ignored, diminished, or distorted, have come to be accepted as prominent contributions worthy of serious consideration.
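The three judgment criteria above can be expressed as a simple reviewer's checklist. The sketch below is only an illustration of that idea; the data structure, function name, and example answers are ours, not Barrett's:

```python
# Barrett's three criteria for judging an interpretation, phrased as the
# questions given in the text. The checklist structure is illustrative.
criteria = {
    "coherence": "Does the argument/interpretation make sense?",
    "correspondence": "Does the interpretation fit the data?",
    "inclusiveness": "Have all data, including outliers, been attended to?",
}

def judge(answers):
    """Return the names of criteria an interpretation fails to satisfy."""
    return [name for name in criteria if not answers.get(name, False)]

# A hypothetical interpretation that is coherent and fits the data,
# but ignores outlier cases:
weak_spots = judge({"coherence": True, "correspondence": True,
                    "inclusiveness": False})
print(weak_spots)  # -> ['inclusiveness']
```

A checklist like this cannot, of course, make the judgment itself; it only keeps all three questions in view while the researcher argues for an interpretation.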

So, meaning is conferred; interpretations are socially constructed arguments; multiple interpretations are to be expected; and some interpretations are better than others. As we discuss later in this chapter, what makes an interpretation “better” often hinges on the purpose/goals of the research in question. Interpretations designed to generate theory, or generalizable rules, will be better for responding to research questions aligned with the aims of more traditional quantitative/positivist research, whereas interpretations designed to construct meanings through social interaction, to generate multiple perspectives, and to represent the context-specific perspectives of the research participants are better for researchers constructing thick, contextually rich descriptions, stories, or narratives. The former relies on more atomistic interpretive strategies, whereas the latter adheres to a more holistic approach (Willis, 2007). Both approaches to analysis/interpretation are addressed in more detail later in this chapter.

At this point, readers might ask, Why does interpretation matter, anyway? Our response to this question involves the distinctive nature of interpretation and the ability of the interpretive process to put unique fingerprints on an otherwise relatively static set of data. Once interview data are collected and transcribed (and we realize that even the process of transcription is, in part, interpretive), documents are collected, and observations are recorded, qualitative researchers could just, in good faith and with fidelity, represent the data in as straightforward ways as possible, allowing readers to “see for themselves” by sharing as much actual data (e.g., the transcribed words of the research participants) as possible. This approach, however, includes analysis, what we have defined as summarizing and organizing data for presentation, but it falls short of what we reference and define as interpretation—attempting to explain the meaning of others’ words and actions. According to Lichtman (2013),

While early efforts at qualitative research might have stopped at description, it is now more generally accepted that a qualitative researcher goes beyond pure description.… Many believe that it is the role of the researcher to bring understanding, interpretation, and meaning. (p. 17)

Because we are fond of the arts and arts-based approaches to qualitative research, an example from the late jazz drummer Buddy Rich seems fitting. Rich explained the importance of having the flexibility to interpret: “I don’t think any arranger should ever write a drum part for a drummer, because if a drummer can’t create his own interpretation of the chart, and he plays everything that’s written, he becomes mechanical; he has no freedom.” The same is true for qualitative researchers: without the freedom to interpret, the researcher merely regurgitates, attempting to share with readers/reviewers exactly what the research subjects shared with him or her. It is only through interpretation that the researcher, as a collaborator with unavoidable subjectivities, is able to construct unique, contextualized meaning. Interpretation, in this sense, is knowledge construction.

In closing this section, we will illustrate the analysis-versus-interpretation distinction with the following transcript excerpt. In this study, the authors (Trent & Zorko, 2006) were studying student teaching from the perspective of K–12 students. This quote comes from a high school student in a focus group interview. She is describing a student teacher she had:

The right-hand column contains codes or labels applied to parts of the transcript text. Coding will be discussed in more depth later in this chapter, but for now, note that the codes are mostly summarizing the main ideas of the text, sometimes using the exact words of the research participant. This type of coding is a part of what we have called analysis—organizing and summarizing the data. It is a way of beginning to say “what is” there. As noted, though, most qualitative researchers go deeper. They want to know more than what is; they also ask, What does it mean? This is a question of interpretation.

Specific to the transcript excerpt, researchers might next begin to cluster the early codes into like groups. For example, the teacher “felt targeted,” “assumed kids were going to behave inappropriately,” and appeared to be “overwhelmed.” A researcher might cluster this group of codes in a category called “teacher feelings and perceptions” and may then cluster the codes “could not control class” and “students off task” into a category called “classroom management.” The researcher then, in taking a fresh look at these categories and the included codes, may begin to conclude that what is going on in this situation is that the student teacher does not have sufficient training in classroom management models and strategies and may also be lacking the skills she needs to build relationships with her students. These then would be interpretations, persuasive arguments connected to the study’s data. In this specific example, the researchers might proceed to write a memo about these emerging interpretations. In this memo, they might more clearly define their early categories and may also look through other data to see if there are other codes or categories that align with or overlap this initial analysis. They may write further about their emergent interpretations and, in doing so, may inform future data collection in ways that will allow them to either support or refute their early interpretations. These researchers will also likely find that the processes of analysis and interpretation are inextricably intertwined. Good interpretations very often depend on thorough and thoughtful analyses.
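Although clustering codes into categories is an interpretive, human act, the bookkeeping involved can be recorded in a simple structure. The following Python sketch uses the codes and the two categories from the student-teaching example above; the code-to-category mapping represents decisions the researcher has already made, not anything a script could derive on its own.

```python
# Minimal sketch of recording how open codes cluster into categories,
# using the codes from the student-teaching excerpt discussed above.

codes = [
    "felt targeted",
    "assumed kids were going to behave inappropriately",
    "overwhelmed",
    "could not control class",
    "students off task",
]

# The researcher-defined mapping from code to category (an interpretive
# decision; the script only records it).
category_of = {
    "felt targeted": "teacher feelings and perceptions",
    "assumed kids were going to behave inappropriately": "teacher feelings and perceptions",
    "overwhelmed": "teacher feelings and perceptions",
    "could not control class": "classroom management",
    "students off task": "classroom management",
}

# Group the codes by their assigned category.
categories = {}
for code in codes:
    categories.setdefault(category_of[code], []).append(code)

for name, members in categories.items():
    print(f"{name}: {members}")
```

A structure like this also makes the next analytic step, looking across categories for other data that align with or contradict them, easier to audit later.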

How Do Methodology, Data, and the Researcher/Self Impact Interpretation in Qualitative Research?

Methodological conventions guide interpretation and the use of interpretive strategies. For example, in grounded theory and in similar methodological traditions, “formal analysis begins early in the study and is nearly completed by the end of data collection” (Bogdan & Biklen, 2007 , p. 73). Alternatively, for researchers from other traditions, for example, case study researchers, “formal analysis and theory development [interpretation] do not occur until after the data collection is near complete” (p. 73).

Researchers subscribing to methodologies that prescribe early data analysis and interpretation may employ methods like analytic induction or the constant comparison method. In using analytic induction, researchers develop a rough definition of the phenomenon under study; collect data to compare to this rough definition; modify the definition as needed, based on cases that both fit and do not fit the definition; and, finally, establish a clear, universal definition (theory) of the phenomenon (Robinson, 1951, cited in Bogdan & Biklen, 2007, p. 73). Generally, those using a constant comparison approach begin data collection immediately; identify key issues, events, and activities related to the study that then become categories of focus; collect data that provide incidents of these categories; write about and describe the categories, accounting for specific incidents and seeking others; discover basic processes and relationships; and, finally, code and write about the categories as theory, “grounded” in the data (Glaser, 1965). Although processes like analytic induction and constant comparison can be listed as steps to follow, in actuality, these are more typically recursive processes in which the researcher repeatedly goes back and forth between the data and emerging analyses and interpretations.
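To make the recursive, compare-and-assign flavor of constant comparison concrete, here is a toy sketch of the loop structure: each new incident is compared against emerging categories and either grouped with the closest one or used to start a new category. The word-overlap similarity measure and the 0.5 threshold are our illustrative assumptions; in actual constant comparison, the comparing is done by the researcher's judgment, not a metric.

```python
# Toy sketch of the compare-and-categorize loop in constant comparison.
# The similarity function and threshold are illustrative assumptions only.

def similarity(a: str, b: str) -> float:
    """Crude word-overlap similarity between two incident descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def constant_comparison(incidents, threshold=0.5):
    categories = []  # each category is a list of related incidents
    for incident in incidents:
        best, best_score = None, 0.0
        for cat in categories:
            score = max(similarity(incident, member) for member in cat)
            if score > best_score:
                best, best_score = cat, score
        if best is not None and best_score >= threshold:
            best.append(incident)          # fits an existing category
        else:
            categories.append([incident])  # start a new category
    return categories

cats = constant_comparison([
    "teacher cannot control class",
    "teacher cannot control students",
    "student feels ignored by teacher",
])
print(cats)
```

The real process is also recursive in a way this linear loop is not: researchers revisit earlier incidents and redefine categories as understanding develops.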

In addition to methodological conventions that prescribe data analysis early (e.g., grounded theory) or later (e.g., case study) in the inquiry process, methodological approaches also impact the general approach to analysis and interpretation. Ellingson ( 2011 ) situated qualitative research methodologies on a continuum spanning “science”-like approaches on one end juxtaposed with “art”-like approaches on the other.

Researchers pursuing a more science-oriented approach seek valid, reliable, generalizable knowledge; believe in neutral, objective researchers; and ultimately claim single, authoritative interpretations. Researchers adhering to these science-focused, postpositivistic approaches may count frequencies, emphasize the validity of the employed coding system, and point to intercoder reliability and random sampling as criteria that bolster the study’s credibility. Researchers at or near the science end of the continuum might employ analysis and interpretation strategies that include “paired comparisons,” “pile sorts,” “word counts,” identifying “key words in context,” and “triad tests” (Bernard, Wutich, & Ryan, 2017, pp. 112, 381, 113, 170). These researchers may ultimately seek to develop taxonomies or other authoritative final products that organize and explain the collected data.
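Two of the strategies just listed, word counts and key words in context (KWIC), are mechanical enough to sketch in code. The sample sentence below is invented for illustration.

```python
# Minimal sketches of two "science-end" strategies: word counts and
# key-words-in-context (KWIC). The sample text is invented.

from collections import Counter

text = ("the student teacher could not control the class "
        "because the student teacher did not know our names")
words = text.split()

# Word counts: frequency of each token.
counts = Counter(words)
print(counts.most_common(3))

# KWIC: every occurrence of a key word, shown with surrounding words.
def kwic(tokens, key, window=2):
    hits = []
    for i, tok in enumerate(tokens):
        if tok == key:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"{left} [{key}] {right}")
    return hits

for line in kwic(words, "teacher"):
    print(line)
```

Frequencies and concordance lines like these are descriptive outputs; on their own they still answer only what is in the data, not what it means.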

For example, in a study we conducted about preservice teachers’ experiences learning to teach second-language learners, the researchers collected larger data sets and used a statistical analysis package to analyze survey data, and the resultant findings included descriptive statistics. These survey results were supported with open-ended, qualitative data. For example, one of the study’s findings was that “a strong majority of candidates (96%) agreed that an immersion approach alone will not guarantee academic or linguistic success for second language learners.” In narrative explanations, one preservice teacher, representative of many others, remarked, “There has to be extra instructional efforts to help their students learn English … they won’t learn English by merely sitting in the classrooms” (Cho, Rios, Trent, & Mayfield, 2012 , p. 75).

Methodologies on the art side of Ellingson’s (2011) continuum, alternatively, “value humanistic, openly subjective knowledge, such as that embodied in stories, poetry, photography, and painting” (p. 599). Analysis and interpretation in these (often more contemporary) methodological approaches do not strive for “social scientific truth,” but instead are formulated to “enable us to learn about ourselves, each other, and the world through encountering the unique lens of a person’s (or a group’s) passionate rendering of a reality into a moving, aesthetic expression of meaning” (p. 599). For these “artistic/interpretivists, truths are multiple, fluctuating and ambiguous” (p. 599). Methodologies taking more subjective approaches to analysis and interpretation include autoethnography, testimonio, performance studies, feminist methodologies, and other related critical forms of qualitative practice. More specifically, arts-based approaches include poetic inquiry, fiction-based research, music as method, and dance and movement as inquiry (Leavy, 2017). Interpretation in these approaches is inherent. For example, “interpretive poetry is understood as a method of merging the participant’s words with the researcher’s perspective” (Leavy, 2017, p. 82).

As an example, one of us engaged in an artistic inquiry with a group of students in an art class for elementary teachers. We called it “Dreams as Data” and, among the project aims, we wanted to gather participants’ “dreams for education in the future” and display these dreams in an accessible, interactive, artistic display (see Trent, 2002). The intent was not to statistically analyze the dreams/data; the aim was broader. We wanted, as Ellingson (2011, p. 599) noted, to use participant responses in ways that “enable us to learn about ourselves, each other, and the world.” The decision was made to leave responses intact and to share the whole/raw data set in the artistic display in ways that allowed viewers to holistically analyze and interpret for themselves. Additionally, the researcher (Trent, 2002) collaborated with his students to construct their own contextually situated interpretations of the data. The following text is an excerpt from one participant’s response:

Almost a century ago, John Dewey eloquently wrote about the need to imagine and create the education that ALL children deserve, not just the richest, the Whitest, or the easiest to teach. At the dawn of this new century, on some mornings, I wake up fearful that we are further away from this ideal than ever.… Collective action, in a critical, hopeful, joyful, anti-racist and pro-justice spirit, is foremost in my mind as I reflect on and act in my daily work.… Although I realize the constraints on teachers and schools in the current political arena, I do believe in the power of teachers to stand next to, encourage, and believe in the students they teach—in short, to change lives. (Trent, 2002 , p. 49)

In sum, researchers whom Ellingson ( 2011 ) characterized as being on the science end of the continuum typically use more detailed or atomistic strategies to analyze and interpret qualitative data, whereas those toward the artistic end most often employ more holistic strategies. Both general approaches to qualitative data analysis and interpretation, atomistic and holistic, will be addressed later in this chapter.

As noted, qualitative researchers attend to data in a wide variety of ways depending on paradigmatic and epistemological beliefs, methodological conventions, and the purpose/aims of the research. These factors impact the kinds of data collected and the ways these data are ultimately analyzed and interpreted. For example, life history or testimonio researchers conduct extensive individual interviews, ethnographers record detailed observational notes, critical theorists may examine documents from pop culture, and ethnomethodologists may collect videotapes of interaction for analysis and interpretation.

In addition to the wide range of data types that are collected by qualitative researchers (and most qualitative researchers collect multiple forms of data), qualitative researchers, again influenced by the factors noted earlier, employ a variety of approaches to analyzing and interpreting data. As mentioned earlier in this chapter, some advocate for a detailed/atomistic, fine-grained approach to data (see, e.g., Bernard et al., 2017 ); others prefer a more broad-based, holistic, “eyeballing” of the data. According to Willis ( 2007 ), “Eyeballers reject the more structured approaches to analysis that break down the data into small units and, from the perspective of the eyeballers, destroy the wholeness and some of the meaningfulness of the data” (p. 298).

Regardless, we assert, as illustrated in Figure 31.1 , that as the process evolves, data collection becomes less prominent later in the process, as interpretation and making sense/meaning of the data becomes more prominent. It is through this emphasis on interpretation that qualitative researchers put their individual imprints on the data, allowing for the emergence of multiple, rich perspectives. This space for interpretation allows researchers the freedom Buddy Rich alluded to in his quote about interpreting musical charts. Without this freedom, Rich noted that the process would simply be “mechanical.” Furthermore, allowing space for multiple interpretations nourishes the perspectives of many others in the community. Writer and theorist Meg Wheatley explained, “Everyone in a complex system has a slightly different interpretation. The more interpretations we gather, the easier it becomes to gain a sense of the whole.” In qualitative research, “there is no ‘getting it right’ because there could be many ‘rights’ ” (as cited in Lichtman, 2013 ).

Figure 31.1 Increasing Role of Interpretation in Data Analysis

In addition to the roles methodology and data play in the interpretive process, perhaps the most important is the role of the self/the researcher. According to Lichtman (2013), “Data are collected, information is gathered, settings are viewed, and realities are constructed through his or her eyes and ears … the qualitative researcher interprets and makes sense of the data” (p. 21). Eisner (1991) supported the notion of the researcher “self as instrument,” noting that expert researchers know not simply what to attend to, but also what to neglect. He described the researcher’s role in the interpretive process as combining sensibility, the ability to observe and ascertain nuances, with schema, a deep understanding or cognitive framework of the phenomena under study.

J. Barrett ( 2007 ) described self/researcher roles as “transformations” (p. 418) at multiple points throughout the inquiry process: early in the process, researchers create representations through data generation, conducting observations and interviews and collecting documents and artifacts. Then,

transformation occurs when the “raw” data generated in the field are shaped into data records by the researcher. These data records are produced through organizing and reconstructing the researcher’s notes and transcribing audio and video recordings in the form of permanent records that serve as the “evidentiary warrants” of the generated data. The researcher strives to capture aspects of the phenomenal world with fidelity by selecting salient aspects to incorporate into the data record. (J. Barrett, 2007 , p. 418)

Transformation continues when the researcher codes, categorizes, and explores patterns in the data (the process we call analysis).

Transformations also involve interpreting what the data mean and relating these interpretations to other sources of insight about the phenomena, including findings from related research, conceptual literature, and common experience.… Data analysis and interpretation are often intertwined and rely upon the researcher’s logic, artistry, imagination, clarity, and knowledge of the field under study. (J. Barrett, 2007 , p. 418)

We mentioned the often-blended roles of participation and observation earlier in this chapter. The role(s) of the self/researcher are often described as points along a participant–observer continuum (see, e.g., Bogdan & Biklen, 2007 ). On the far observer end of this continuum, the researcher situates as detached, tries to be inconspicuous (so as not to impact/disrupt the phenomena under study), and approaches the studied context as if viewing it from behind a one-way mirror. On the opposite, participant end, the researcher is completely immersed and involved in the context. It would be difficult for an outsider to distinguish between researcher and subjects. For example, “some feminist researchers and postmodernists take a political stance and have an agenda that places the researcher in an activist posture. These researchers often become quite involved with the individuals they study and try to improve their human condition” (Lichtman, 2013 , p. 17).

We assert that most researchers fall somewhere between these poles. We believe that complete detachment is both impossible and misguided. In doing so, we, along with many others, acknowledge (and honor) the role of subjectivity, the researcher’s beliefs, opinions, biases, and predispositions. Positivist researchers seeking objective data and accounts either ignore the impact of subjectivity or attempt to drastically diminish/eliminate its impact. Even qualitative researchers have developed methods to avoid researcher subjectivity affecting research data collection, analysis, and interpretation. For example, foundational phenomenologist Husserl ( 1913/1962 ) developed the concept of bracketing , what Lichtman describes as “trying to identify your views on the topic and then putting them aside” (2013, p. 22). Like Slotnick and Janesick ( 2011 ), we ultimately claim “it is impossible to bracket yourself” (p. 1358). Instead, we take a balanced approach, like Eisner, understanding that subjectivity allows researchers to produce the rich, idiosyncratic, insightful, and yet data-based interpretations and accounts of lived experience that accomplish the primary purposes of qualitative inquiry. Eisner ( 1991 ) wrote, “Rather than regarding uniformity and standardization as the summum bonum, educational criticism [Eisner’s form of qualitative research] views unique insight as the higher good” (p. 35). That said, we also claim that, just because we acknowledge and value the role of researcher subjectivity, researchers are still obligated to ground their findings in reasonable interpretations of the data. Eisner ( 1991 ) explained:

This appreciation for personal insight as a source of meaning does not provide a license for freedom. Educational critics must provide evidence and reasons. But they reject the assumption that unique interpretation is a conceptual liability in understanding, and they see the insights secured from multiple views as more attractive than the comforts provided by a single right one. (p. 35)

Connected to this participant–observer continuum is the way the researcher positions him- or herself in relation to the “subjects” of the study. Traditionally, researchers, including early qualitative researchers, anthropologists, and ethnographers, referenced those studied as subjects. More recently, qualitative researchers have come to understand that research should be a reciprocal process from which both the researcher and those who are the focus of the research derive meaningful benefit. Researchers aligned with this thinking frequently use the term participants to describe the groups and individuals included in a study. Going a step further, some researchers view research participants as experts on the studied topic and as equal collaborators in the meaning-making process. In these instances, researchers often use the terms co-researchers or co-investigators.

The qualitative researcher, then, plays significant roles throughout the inquiry process. These roles include transforming data, collaborating with research participants or co-researchers, determining appropriate points to situate along the participant–observer continuum, and ascribing personal insights, meanings, and interpretations that are both unique and justified with data exemplars. Performing these roles unavoidably impacts and changes the researcher. Slotnick and Janesick ( 2011 ) noted, “Since, in qualitative research the individual is the research instrument through which all data are passed, interpreted, and reported, the scholar’s role is constantly evolving as self evolves” (p. 1358).

As we note later, key in all this is for researchers to be transparent about the topics discussed in the preceding section: What methodological conventions have been employed and why? How have data been treated throughout the inquiry to arrive at assertions and findings that may or may not be transferable to other idiosyncratic contexts? And, finally, in what ways has the researcher/self been situated in and impacted the inquiry? Unavoidably, we assert, the self lies at the critical intersection of data and theory, and, as such, two legs of this stool, data and researcher, interact to create the third, theory.

How Do Qualitative Researchers Engage in the Process of Interpretation?

Theorists seem to have a propensity to dichotomize concepts, pulling them apart and placing binary opposites on the far ends of conceptual continuums. Qualitative research theorists are no different, and we have already mentioned some of these continua in this chapter. For example, in the previous section, we discussed the participant–observer continuum. Earlier, we referenced both Willis’s ( 2007 ) conceptualization of atomistic versus holistic approaches to qualitative analysis and interpretation and Ellingson’s ( 2011 ) science–art continuum. Each of these latter two conceptualizations inform how qualitative researchers engage in the process of interpretation.

Willis ( 2007 ) shared that the purpose of a qualitative project might be explained as “what we expect to gain from research” (p. 288). The purpose, or what we expect to gain, then guides and informs the approaches researchers might take to interpretation. Some researchers, typically positivist/postpositivist, conduct studies that aim to test theories about how the world works and/or how people behave. These researchers attempt to discover general laws, truths, or relationships that can be generalized. Others, less confident in the ability of research to attain a single, generalizable law or truth, might seek “local theory.” These researchers still seek truths, but “instead of generalizable laws or rules, they search for truths about the local context … to understand what is really happening and then to communicate the essence of this to others” (Willis, 2007 , p. 291). In both these purposes, researchers employ atomistic strategies in an inductive process in which researchers “break the data down into small units and then build broader and broader generalizations as the data analysis proceeds” (p. 317). The earlier mentioned processes of analytic induction, constant comparison, and grounded theory fit within this conceptualization of atomistic approaches to interpretation. For example, a line-by-line coding of a transcript might begin an atomistic approach to data analysis.

Alternatively, other researchers pursue distinctly different aims. Researchers with an objective description purpose focus on accurately describing the people and context under study. These researchers adhere to standards and practices designed to achieve objectivity, and their approaches to interpretation can be located along the atomistic–holistic continuum described earlier.

The purpose of hermeneutic approaches to research is to “understand the perspectives of humans. And because understanding is situational, hermeneutic research tends to look at the details of the context in which the study occurred. The result is generally rich data reports that include multiple perspectives” (Willis, 2007 , p. 293).

Still other researchers see their purpose as the creation of stories or narratives that utilize “a social process that constructs meaning through interaction … it is an effort to represent in detail the perspectives of participants … whereas description produces one truth about the topic of study, storytelling may generate multiple perspectives, interpretations, and analyses by the researcher and participants” (Willis, 2007 , p. 295).

In these latter purposes (hermeneutic, storytelling, narrative production), researchers typically employ more holistic strategies. According to Willis ( 2007 ), “Holistic approaches tend to leave the data intact and to emphasize that meaning must be derived for a contextual reading of the data rather than the extraction of data segments for detailed analysis” (p. 297). This was the case with the Dreams as Data project mentioned earlier.

We understand the propensity to dichotomize, situate concepts as binary opposites, and create neat continua between these polar descriptors. These sorts of reduction and deconstruction support our understandings and, hopefully, enable us to eventually reconstruct these ideas in meaningful ways. Still, in reality, we realize most of us will, and should, work in the middle of these conceptualizations in fluid ways that allow us to pursue strategies, processes, and theories most appropriate for the research task at hand. As noted, Ellingson ( 2011 ) set up another conceptual continuum, but, like ours, her advice was to “straddle multiple points across the field of qualitative methods” (p. 595). She explained, “I make the case for qualitative methods to be conceptualized as a continuum anchored by art and science, with vast middle spaces that embody infinite possibilities for blending artistic, expository, and social scientific ways of analysis and representation” (p. 595).

We explained at the beginning of this chapter that we view analysis as organizing and summarizing qualitative data and interpretation as constructing meaning. In this sense, analysis allows us to describe the phenomena under study. It enables us to succinctly answer what and how questions and ensures that our descriptions are grounded in the data collected. Descriptions, however, rarely respond to questions of why. Why questions are the domain of interpretation, and, as noted throughout this text, interpretation is complex. Gubrium and Holstein (2000) noted, “Traditionally, qualitative inquiry has concerned itself with what and how questions … qualitative researchers typically approach why questions cautiously, explanation is tricky business” (p. 502). Eisner (1991) described this distinctive nature of interpretation: “It means that inquirers try to account for [interpret] what they have given account of” (p. 35).

Our focus here is on interpretation, but interpretation requires analysis, because without clear understandings of the data and their characteristics, derived through systematic examination and organization (e.g., coding, memoing, categorizing), “interpretations” resulting from inquiry will likely be incomplete, uninformed, and inconsistent with the constructed perspectives of the study participants. Fortunately for qualitative researchers, we have many sources that lead us through analytic processes. We earlier mentioned the accepted processes of analytic induction and the constant comparison method. These detailed processes (see, e.g., Bogdan & Biklen, 2007) combine the inextricably linked activities of analysis and interpretation, with analysis more typically appearing as earlier steps in the process and meaning construction—interpretation—happening later.

A wide variety of resources support researchers engaged in the processes of analysis and interpretation. Saldaña (2011), for example, provided a detailed description of coding types and processes. He showed researchers how to use process coding (uses gerunds, “-ing” words, to capture action), in vivo coding (uses the actual words of the research participants/subjects), descriptive coding (uses nouns to summarize the data topics), versus coding (uses “vs” to identify conflicts and power issues), and values coding (identifies participants’ values, attitudes, and/or beliefs). To exemplify some of these coding strategies, we include an excerpt from a transcript of a meeting of a school improvement committee. In this study, the collaborators were focused on building “school community.” This excerpt illustrates the application of a variety of codes described by Saldaña to this text:

To connect and elaborate the ideas developed in coding, Saldaña (2011) suggested researchers categorize the applied codes, write memos to deepen understandings and illuminate additional questions, and identify emergent themes. To begin the categorization process, Saldaña recommended all codes be “classified into similar clusters … once the codes have been classified, a category label is applied to them” (p. 97). So, continuing with the school community study coded here, the researcher might create a cluster/category called “Value of Collaboration” and in this category might include the codes “relationships,” “building community,” and “effective strategies.”
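As a minimal sketch of this roll-up from codes to a category, the following Python fragment records some coded segments (the segment texts and the fourth code are invented for illustration) and clusters them under the “Value of Collaboration” category and member codes named above.

```python
# Sketch of rolling coded transcript segments up into a category.
# Segment texts and the "logistics" code are invented; the category
# "Value of Collaboration" and its member codes come from the
# school-community example in the text.

coded_segments = [
    ("planning lessons together after school", "building community"),
    ("I trust the teachers on my team", "relationships"),
    ("co-teaching worked well for our students", "effective strategies"),
    ("the bell schedule changed this year", "logistics"),  # outside the category
]

category = {
    "Value of Collaboration": {"relationships", "building community",
                               "effective strategies"},
}

# Collect the segments whose codes were classified into the category.
members = [segment for segment, code in coded_segments
           if code in category["Value of Collaboration"]]
print(len(members), "segments support 'Value of Collaboration'")
```

Keeping segments attached to their codes in this way makes it straightforward to pull the supporting excerpts back out when writing an analytic memo about the category.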

Having coded and categorized a study’s various data forms, a typical next step for researchers is to write memos, or analytic memos. Writing analytic memos allows the researcher(s) to

set in words your interpretation of the data … an analytic memo further articulates your … thinking processes on what things may mean … as the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the report itself. (Saldaña, 2011 , p. 98)

In the study of student teaching from K–12 students’ perspectives (Trent & Zorko, 2006 ), we noticed throughout our analysis a series of focus group interview quotes coded “names.” The following quote from a high school student is representative of many others:

I think that, ah, they [student teachers] should like know your face and your name because, uh, I don’t like it if they don’t and they’ll just like … cause they’ll blow you off a lot easier if they don’t know, like our new principal is here … he is, like, he always, like, tries to make sure to say hi even to the, like, not popular people if you can call it that, you know, and I mean, yah, and the people that don’t usually socialize a lot, I mean he makes an effort to know them and know their name like so they will cooperate better with him.

Although we did not ask the focus groups a specific question about whether student teachers knew the K–12 students’ names, the topic came up in every focus group interview. We coded the above excerpt and the others “knowing names,” and these data were grouped with others under the category “relationships.” In an initial analytic memo about this, the researchers wrote,

STUDENT TEACHING STUDY—MEMO #3 “Knowing Names as Relationship Building” Most groups made unsolicited mentions of student teachers knowing, or not knowing, their names. We haven’t asked students about this, but it must be important to them because it always seems to come up. Students expected student teachers to know their names. When they did, students noticed and seemed pleased. When they didn’t, students seemed disappointed, even annoyed. An elementary student told us that early in the semester, “she knew our names … cause when we rose [sic] our hands, she didn’t have to come and look at our name tags … it made me feel very happy.” A high schooler, expressing displeasure that his student teacher didn’t know students’ names, told us, “They should like know your name because it shows they care about you as a person. I mean, we know their names, so they should take the time to learn ours too.” Another high school student said that even after 3 months, she wasn’t sure the student teacher knew her name. Another student echoed, “Same here.” Each of these students asserted that this (knowing students’ names) had impacted their relationship with the student teacher. This high school student focus group stressed that a good relationship, built early, directly impacts classroom interaction and student learning. A student explained it like this: “If you get to know each other, you can have fun with them … they seem to understand you more, you’re more relaxed, and learning seems easier.”

As noted in these brief examples, coding, categorizing, and writing memos about a study’s data are all accepted processes for data analysis and allow researchers to begin constructing new understandings and forming interpretations of the studied phenomena. We find the qualitative research literature to be particularly strong in offering support and guidance for researchers engaged in these analytic practices. In addition to those already noted in this chapter, we have found that the following resources provide practical, yet theoretically grounded approaches to qualitative data analysis. For more detailed, procedural, or atomistic approaches to data analysis, we direct researchers to Miles and Huberman’s classic 1994 text, Qualitative Data Analysis, and Bernard et al.’s 2017 book, Analyzing Qualitative Data: Systematic Approaches. For analysis and interpretation strategies falling somewhere between the atomistic and holistic poles, we suggest Hesse-Biber and Leavy’s (2011) chapter, “Analysis and Interpretation of Qualitative Data,” in their book, The Practice of Qualitative Research (second edition); Lichtman’s chapter, “Making Meaning From Your Data,” in her 2013 book, Qualitative Research in Education: A User’s Guide (third edition); and “Processing Fieldnotes: Coding and Memoing,” a chapter in Emerson, Fretz, and Shaw’s (1995) book, Writing Ethnographic Fieldnotes. Each of these sources succinctly describes the processes of data preparation, data reduction, coding and categorizing data, and writing memos about emergent ideas and findings. For more holistic approaches, we have found Denzin and Lincoln’s (2007) Collecting and Interpreting Qualitative Materials and Ellis and Bochner’s (2000) chapter, “Autoethnography, Personal Narrative, Reflexivity,” to be very informative. Finally, Leavy’s 2017 book, Method Meets Art: Arts-Based Research Practice, provides support and guidance to researchers engaged in arts-based research.

Even after reviewing the multiple resources for treating data included here, qualitative researchers might still be wondering, But exactly how do we interpret? In the remainder of this section and in the concluding section of this chapter, we more concretely provide responses to this question and, in closing, we propose a framework for researchers to utilize as they engage in the complex, ambiguous, and yet exciting process of constructing meanings and new understandings from qualitative sources.

These meanings and understandings are often presented as theory, but theories in this sense should be viewed more as “guides to perception” as opposed to “devices that lead to the tight control or precise prediction of events” (Eisner, 1991 , p. 95). Perhaps Erickson’s ( 1986 ) concept of assertions is a more appropriate aim for qualitative researchers. He claimed that assertions are declarative statements; they include a summary of the new understandings, and they are supported by evidence/data. These assertions are open to revision and are revised when disconfirming evidence requires modification. Assertions, theories, or other explanations resulting from interpretation in research are typically presented as “findings” in written research reports. Belgrave and Smith ( 2002 ) emphasized the importance of these interpretations (as opposed to descriptions): “The core of the report is not the events reported by the respondent, but rather the subjective meaning of the reported events for the respondent” (p. 248).

Mills ( 2018 ) viewed interpretation as responding to the question, So what? He provided researchers a series of concrete strategies for both analysis and interpretation. Specific to interpretation, Mills (pp. 204–207) suggested a variety of techniques, including the following:

“ Extend the analysis ”: In doing so, researchers ask additional questions about the research. The data appear to say X , but could it be otherwise? In what ways do the data support emergent finding X ? And, in what ways do they not?

“ Connect findings with personal experience ”: Using this technique, researchers share interpretations based on their intimate knowledge of the context, the observed actions of the individuals in the studied context, and the data points that support emerging interpretations, as well as their awareness of discrepant events or outlier data. In a sense, the researcher is saying, “Based on my experiences in conducting this study, this is what I make of it all.”

“ Seek the advice of ‘critical’ friends ”: In doing so, researchers utilize trusted colleagues, fellow researchers, experts in the field of study, and others to offer insights, alternative interpretations, and the application of their own unique lenses to a researcher’s initial findings. We especially like this strategy because we acknowledge that, too often, qualitative interpretation is a “solo” affair.

“ Contextualize findings in the literature ”: This allows researchers to compare their interpretations to those of others writing about and studying the same/similar phenomena. The results of this contextualization may be that the current study’s findings correspond with the findings of other researchers. The results might, alternatively, differ from the findings of other researchers. In either instance, the researcher can highlight his or her unique contributions to our understanding of the topic under study.

“ Turn to theory ”: Mills defined theory as “an analytical and interpretive framework that helps the researcher make sense of ‘what is going on’ in the social setting being studied.” In turning to theory, researchers search for increasing levels of abstraction and move beyond purely descriptive accounts. Connecting to extant or generating new theory enables researchers to link their work to the broader contemporary issues in the field.

Other theorists offer additional advice for researchers engaged in the act of interpretation. Richardson (1995) reminded us to account for the power dynamics in the researcher–researched relationship and noted that, in doing so, we can allow oppressed and marginalized voices to be heard in context. Bogdan and Biklen (2007) suggested that researchers engaged in interpretation revisit foundational writing about qualitative research, read studies related to the current research, ask evaluative questions (e.g., Is what I'm seeing here good or bad?), ask about the implications of particular findings/interpretations, think about the audience for interpretations, look for stories and incidents that illustrate a specific finding/interpretation, and attempt to summarize key interpretations in a succinct paragraph. All these suggestions can be pertinent in certain situations and with particular methodological approaches. In the next and closing section of this chapter, we present a framework for interpretive strategies we believe will support, guide, and be applicable to qualitative researchers across multiple methodologies and paradigms.

In What Ways Can a Framework for Interpretation Strategies Support Qualitative Researchers across Multiple Methodological and Paradigmatic Views?

The process of qualitative research is often compared to a journey, one without a detailed itinerary or fixed ending, but with a general direction and aims, and an open-endedness that adds excitement and thrives on curiosity. Qualitative researchers are travelers. They travel physically to field sites; they travel mentally through various epistemological, theoretical, and methodological grounds; they travel through a series of problem-finding, access, data collection, and data analysis processes; and, finally—the topic of this chapter—they travel through the process of making meaning of all this physical and cognitive travel via interpretation.

Although travel is an appropriate metaphor to describe the journey of qualitative researchers, we will also use “travel” to symbolize a framework for qualitative research interpretation strategies. By design, this framework applies across multiple paradigmatic, epistemological, and methodological traditions. The application of this framework is not formulaic or highly prescriptive; it is also not an anything-goes approach. It falls, and is applicable, between these poles, giving concrete (suggested) direction to qualitative researchers wanting to make the most of the interpretations that result from their research and yet allowing the necessary flexibility for researchers to employ the methods, theories, and approaches they deem most appropriate to the research problem(s) under study.

TRAVEL, a Comprehensive Approach to Qualitative Interpretation

In using the word TRAVEL as a mnemonic device, our aim is to highlight six essential concepts we argue all qualitative researchers should attend to in the interpretive process: transparency, reflexivity, analysis, validity, evidence, and literature. The importance of each is addressed here.

Transparency, as a research concept, seems, well, transparent. But, too often, we read qualitative research reports and are left with many questions: How were research participants and the topic of study selected/excluded? How were the data collected, when, and for how long? Who analyzed and interpreted these data? A single researcher? Multiple? What interpretive strategies were employed? Are there data points that substantiate these interpretations/findings? What analytic procedures were used to organize the data prior to making the presented interpretations? In being transparent about data collection, analysis, and interpretation processes, researchers allow reviewers/readers insight into the research endeavor, and this transparency lends credibility to both the researcher and the researcher's claims. Altheide and Johnson (2011) explained,

There is great diversity of qualitative research.… While these approaches differ, they also share an ethical obligation to make public their claims, to show the reader, audience, or consumer why they should be trusted as faithful accounts of some phenomenon. (p. 584)

This includes, they noted, articulating

what the different sources of data were, how they were interwoven, and … how subsequent interpretations and conclusions are more or less closely tied to the various data … the main concern is that the connection be apparent, and to the extent possible, transparent. (p. 590)

In the Dreams as Data art and research project noted earlier, transparency was addressed in multiple ways. Readers of the project write-up were informed that interpretations resulting from the study, framed as themes , were a result of collaborative analysis that included insights from both students and instructor. Viewers of the art installation/data display had the rare opportunity to see all participant responses. In other words, viewers had access to the entire raw data set (see Trent, 2002 ). More frequently, we encounter only research “findings” already distilled, analyzed, and interpreted in research accounts, often by a single researcher. Allowing research consumers access to the data to interpret for themselves in the Dreams project was an intentional attempt at transparency.

Reflexivity , the second of our concepts for interpretive researcher consideration, has garnered a great deal of attention in qualitative research literature. Some have called this increased attention the reflexive turn (see, e.g., Denzin & Lincoln, 2004 ).

Although you can find many meanings for the term reflexivity, it is usually associated with a critical reflection on the practice and process of research and the role of the researcher. It concerns itself with the impact of the researcher on the system and the system on the researcher. It acknowledges the mutual relationships between the researcher and who and what is studied … by acknowledging the role of the self in qualitative research, the researcher is able to sort through biases and think about how they affect various aspects of the research, especially interpretation of meanings. (Lichtman, 2013 , p. 165)

As with transparency, attending to reflexivity allows researchers to attach credibility to presented findings. Providing a reflexive account of researcher subjectivity and the interactions of this subjectivity within the research process is a way for researchers to communicate openly with their audience. Instead of trying to exhume inherent bias from the process, qualitative researchers share with readers the value of having a specific, idiosyncratic positionality. As a result, situated, contextualized interpretations are viewed as an asset, as opposed to a liability.

LaBanca ( 2011 ), acknowledging the often solitary nature of qualitative research, called for researchers to engage others in the reflexive process. Like many other researchers, LaBanca utilized a researcher journal to chronicle reflexive thoughts, explorations, and understandings, but he took it a step farther. Realizing the value of others’ input, LaBanca posts his reflexive journal entries on a blog (what he calls an online reflexivity blog ) and invites critical friends, other researchers, and interested members of the community to audit his reflexive moves, providing insights, questions, and critique that inform his research and study interpretations.

We agree this is a novel approach worth considering. We, too, understand that multiple interpreters will undoubtedly produce multiple interpretations, one of the riches of qualitative research. So, we suggest researchers consider bringing others in before the production of the report. This could be fruitful at multiple stages of the inquiry process, but especially in the complex, idiosyncratic processes of reflexivity and interpretation. We are both educators and educational researchers. Historically, each of these roles has tended to be constructed as an isolated endeavor: the solitary teacher, the solo researcher/fieldworker. As noted earlier and in the analysis section that follows, introducing collaborative processes to what has often been a solitary activity offers much promise for generating rich interpretations that benefit from multiple perspectives.

Being consciously reflexive throughout our practice as researchers has benefitted us in many ways. In a study of teacher education curricula designed to prepare preservice teachers to support second-language learners, we realized hard truths that caused us to reflect on and adapt our own practices as teacher educators. Reflexivity can inform a researcher at all stages of the inquiry, even the earliest. For example, one of us was beginning a study of instructional practices in an elementary school. The communicated methods of the study indicated that the researcher would be largely an observer. Early fieldwork revealed that the researcher became much more involved as a participant than anticipated. Deep reflection and writing about the classroom interactions allowed the researcher to realize that the initial purpose of the research was not being accomplished, and the researcher believed he was having a negative impact on the classroom culture. Reflexivity in this instance prompted the researcher to leave the field and abandon the project as it was just beginning. Researchers should plan to openly engage in reflexive activities, including writing about their ongoing reflections and subjectivities. Including excerpts of this writing in the research account supports our earlier recommendation of transparency.

Early in this chapter, for the purposes of discussion and examination, we defined analysis as "summarizing and organizing" data in a qualitative study and interpretation as "meaning making." Although our focus has been on interpretation as the primary topic, the importance of good analysis cannot be overstated, because without it, resultant interpretations are likely incomplete and potentially uninformed. Comprehensive analysis puts researchers in a position to be deeply familiar with collected data and to organize these data into forms that lead to rich, unique interpretations, and yet interpretations that are clearly connected to data exemplars. Although we find it advantageous to examine analysis and interpretation as different but related practices, in reality, the lines blur as qualitative researchers engage in these recursive processes.

We earlier noted our affinity for a variety of approaches to analysis (see, e.g., Hesse-Biber & Leavy, 2011 ; Lichtman, 2013 ; or Saldaña, 2011 ). Emerson et al. ( 1995 ) presented a grounded approach to qualitative data analysis: In early stages, researchers engage in a close, line-by-line reading of data/collected text and accompany this reading with open coding , a process of categorizing and labeling the inquiry data. Next, researchers write initial memos to describe and organize the data under analysis. These analytic phases allow the researcher(s) to prepare, organize, summarize, and understand the data, in preparation for the more interpretive processes of focused coding and the writing up of interpretations and themes in the form of integrative memos .
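The open-then-focused coding sequence described above can be illustrated with a small sketch. Everything here is hypothetical: the excerpts are adapted from the student teaching memo earlier in the chapter, and the grouping of open codes into themes stands in for interpretive judgments only a researcher can make; software can organize codes, but it cannot do the interpreting.

```python
from collections import defaultdict

# Hypothetical open-coded segments: each excerpt carries one or more
# researcher-assigned open codes from a line-by-line reading.
segments = [
    ("she knew our names ... it made me feel very happy",
     ["knowing-names", "positive-affect"]),
    ("they should take the time to learn ours too",
     ["knowing-names", "reciprocity"]),
    ("you're more relaxed, and learning seems easier",
     ["relationship", "learning-climate"]),
]

# Focused coding: collapse open codes into broader candidate themes.
# This grouping is itself an interpretive act, encoded here for illustration.
THEME_MAP = {
    "knowing-names":    "names as relationship building",
    "reciprocity":      "names as relationship building",
    "relationship":     "relationships support learning",
    "positive-affect":  "relationships support learning",
    "learning-climate": "relationships support learning",
}

def group_by_theme(segments, theme_map):
    """Return {theme: [excerpts]} so each theme stays tied to its evidence."""
    themes = defaultdict(list)
    for excerpt, codes in segments:
        for theme in {theme_map[c] for c in codes}:  # dedupe per excerpt
            themes[theme].append(excerpt)
    return dict(themes)

themes = group_by_theme(segments, THEME_MAP)
```

Keeping each candidate theme attached to its supporting excerpts mirrors the integrative-memo stage, where emerging themes are written up alongside the data that ground them.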

Similarly, Mills ( 2018 ) provided guidance on the process of analysis for qualitative action researchers. His suggestions for organizing and summarizing data include coding (labeling data and looking for patterns); identifying themes by considering the big picture while looking for recurrent phrases, descriptions, or topics; asking key questions about the study data (who, what, where, when, why, and how); developing concept maps (graphic organizers that show initial organization and relationships in the data); and stating what’s missing by articulating what data are not present (pp. 179–189).
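Of the analysis steps Mills lists, looking for recurrent phrases is one that lends itself to simple automation. The sketch below is a toy illustration, not a substitute for reading the data: the responses are hypothetical, the stop-word list is minimal, and there is no stemming or lemmatization (so "name" and "names" count separately). It merely surfaces candidate patterns for the researcher to interrogate.

```python
import re
from collections import Counter

# Hypothetical responses; in practice these would come from transcripts.
responses = [
    "She knew our names and it made me feel happy",
    "They should know your name because it shows they care",
    "Even after three months she did not know my name",
]

def recurrent_terms(texts, min_count=2):
    """Count content words across responses to surface recurring terms."""
    stopwords = {"she", "they", "it", "and", "our", "your", "my", "me",
                 "the", "a", "because", "did", "not", "should", "after"}
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in stopwords]
    return {w: n for w, n in Counter(words).items() if n >= min_count}

# Terms around "name" recur, hinting at the knowing-names pattern
# noted in the memo; the researcher decides whether it is a theme.
hits = recurrent_terms(responses)
```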

Many theorists, like Emerson et al. ( 1995 ) and Mills ( 2018 ) noted here, provide guidance for individual researchers engaged in individual data collection, analysis, and interpretation; others, however, invite us to consider the benefits of collaboratively engaging in these processes through the use of collaborative research and analysis teams. Paulus, Woodside, and Ziegler ( 2008 ) wrote about their experiences in collaborative qualitative research: “Collaborative research often refers to collaboration among the researcher and the participants. Few studies investigate the collaborative process among researchers themselves” (p. 226).

Paulus et al. ( 2008 ) claimed that the collaborative process “challenged and transformed our assumptions about qualitative research” (p. 226). Engaging in reflexivity, analysis, and interpretation as a collaborative enabled these researchers to reframe their views about the research process, finding that the process was much more recursive, as opposed to following a linear progression. They also found that cooperatively analyzing and interpreting data yielded “collaboratively constructed meanings” as opposed to “individual discoveries.” And finally, instead of the traditional “individual products” resulting from solo research, collaborative interpretation allowed researchers to participate in an “ongoing conversation” (p. 226).

These researchers explained that engaging in collaborative analysis and interpretation of qualitative data challenged their previously held assumptions. They noted,

through collaboration, procedures are likely to be transparent to the group and can, therefore, be made public. Data analysis benefits from an iterative, dialogic, and collaborative process because thinking is made explicit in a way that is difficult to replicate as a single researcher. (Paulus et al., 2008 , p. 236)

They shared that, during the collaborative process, “we constantly checked our interpretation against the text, the context, prior interpretations, and each other’s interpretations” (p. 234).

We, too, have engaged in analysis similar to these described processes, including working on research teams. We encourage other researchers to find processes that fit with the methodology and data of a particular study, use the techniques and strategies most appropriate, and then cite the utilized authority to justify the selected path. We urge traditionally solo researchers to consider trying a collaborative approach. Generally, we suggest researchers be familiar with a wide repertoire of practices. In doing so, they will be in better positions to select and use strategies most appropriate for their studies and data. Succinctly preparing, organizing, categorizing, and summarizing data sets the researcher(s) up to construct meaningful interpretations in the forms of assertions, findings, themes, and theories.

Researchers want their findings to be sound, backed by evidence, and justifiable, and to accurately represent the phenomena under study. In short, researchers seek validity for their work. We assert that qualitative researchers should attend to validity concepts as a part of their interpretive practices. We have previously written and theorized about validity, and, in doing so, we have highlighted and labeled what we consider two distinctly different approaches, transactional and transformational (Cho & Trent, 2006). We define transactional validity in qualitative research as an interactive process occurring among the researcher, the researched, and the collected data, one that is aimed at achieving a relatively higher level of accuracy. Techniques, methods, and/or strategies are employed during the conduct of the inquiry. These techniques, such as member checking and triangulation, are seen as a medium with which to ensure an accurate reflection of reality (or, at least, participants' constructions of reality). Lincoln and Guba's (1985) widely known notion of trustworthiness in "naturalistic inquiry" is grounded in this approach. In seeking trustworthiness, researchers attend to research credibility, transferability, dependability, and confirmability. The validity approaches Maxwell (1992) described as "descriptive" and "interpretive" also rely on transactional processes.

For example, in the write-up of a study on the facilitation of teacher research, one of us (Trent, 2012 ) wrote about the use of transactional processes:

“Member checking is asking the members of the population being studied for their reaction to the findings” (Sagor, 2000 , p. 136). Interpretations and findings of this research, in draft form, were shared with teachers (for member checking) on multiple occasions throughout the study. Additionally, teachers reviewed and provided feedback on the final draft of this article. (p. 44)

This member checking led to changes in some resultant interpretations (called findings in this particular study) and to adaptations of others that shaped these findings in ways that made them both richer and more contextualized.

Alternatively, in transformational approaches, validity is not so much something that can be achieved solely by employing certain techniques. Transformationalists assert that because traditional or positivist inquiry is no longer seen as an absolute means to truth in the realm of human science, alternative notions of validity should be considered to achieve social justice, deeper understandings, broader visions, and other legitimate aims of qualitative research. In this sense, it is the ameliorative aspects of the research that achieve (or do not achieve) its validity. Validity is determined by the resultant actions prompted by the research endeavor.

Lather ( 1993 ), Richardson ( 1997 ), and others (e.g., Lenzo, 1995 ; Scheurich, 1996 ) proposed a transgressive approach to validity that emphasized a higher degree of self-reflexivity. For example, Lather proposed a “catalytic validity” described as “the degree to which the research empowers and emancipates the research subjects” (Scheurich, 1996 , p. 4). Beverley ( 2000 , p. 556) proposed testimonio as a qualitative research strategy. These first-person narratives find their validity in their ability to raise consciousness and thus provoke political action to remedy problems of oppressed peoples (e.g., poverty, marginality, exploitation).

We, too, have pursued research with transformational aims. In the earlier mentioned study of preservice teachers’ experiences learning to teach second-language learners (Cho et al., 2012 ), our aims were to empower faculty members, evolve the curriculum, and, ultimately, better serve preservice teachers so that they might better serve English-language learners in their classrooms. As program curricula and activities have changed as a result, we claim a degree of transformational validity for this research.

Important, then, for qualitative researchers throughout the inquiry, but especially when engaged in the process of interpretation, is to determine the type(s) of validity applicable to the study. What are the aims of the study? Providing an “accurate” account of studied phenomena? Empowering participants to take action for themselves and others? The determination of this purpose will, in turn, inform researchers’ analysis and interpretation of data. Understanding and attending to the appropriate validity criteria will bolster researcher claims to meaningful findings and assertions.

Regardless of purpose or chosen validity considerations, qualitative research depends on evidence . Researchers in different qualitative methodologies rely on different types of evidence to support their claims. Qualitative researchers typically utilize a variety of forms of evidence including texts (written notes, transcripts, images, etc.), audio and video recordings, cultural artifacts, documents related to the inquiry, journal entries, and field notes taken during observations of social contexts and interactions. Schwandt ( 2001 ) wrote,

Evidence is essential to justification, and justification takes the form of an argument about the merit(s) of a given claim. It is generally accepted that no evidence is conclusive or unassailable (and hence, no argument is foolproof). Thus, evidence must often be judged for its credibility, and that typically means examining its source and the procedures by which it was produced [thus the need for transparency discussed earlier]. (p. 82)

Altheide and Johnson ( 2011 ) drew a distinction between evidence and facts:

Qualitative researchers distinguish evidence from facts. Evidence and facts are similar but not identical. We can often agree on facts, e.g., there is a rock, it is harder than cotton candy. Evidence involves an assertion that some facts are relevant to an argument or claim about a relationship. Since a position in an argument is likely tied to an ideological or even epistemological position, evidence is not completely bound by facts, but it is more problematic and subject to disagreement. (p. 586)

Inquirers should make every attempt to link evidence to claims (or findings, interpretations, assertions, conclusions, etc.). There are many strategies for making these connections. Induction involves accumulating multiple data points to infer a general conclusion. Confirmation entails directly linking evidence to resultant interpretations. Testability/falsifiability means illustrating that evidence does not necessarily contradict the claim/interpretation and so increases the credibility of the claim (Schwandt, 2001 ). In the study about learning to teach second-language learners, for example, a study finding (Cho et al., 2012 ) was that “as a moral claim , candidates increasingly [in higher levels of the teacher education program] feel more responsible and committed to … [English language learners]” (p. 77). We supported this finding with a series of data points that included the following preservice teacher response: “It is as much the responsibility of the teacher to help teach second-language learners the English language as it is our responsibility to teach traditional English speakers to read or correctly perform math functions.” Claims supported by evidence allow readers to see for themselves and to both examine researcher assertions in tandem with evidence and form further interpretations of their own.
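The practice of linking each claim to its evidence can be made concrete with a minimal bookkeeping sketch. The findings and excerpts below are hypothetical illustrations (the second is paraphrased from the Cho et al. example above); the point is only that an explicit claims-to-evidence map makes unsupported assertions easy to spot before a report is written.

```python
# Hypothetical findings, each mapped to the data excerpts that support it.
findings = {
    "Knowing students' names builds relationships": [
        "it made me feel very happy",
        "it shows they care about you as a person",
    ],
    "Candidates feel responsible for English-language learners": [
        "It is as much the responsibility of the teacher to help teach "
        "second-language learners ...",
    ],
}

def unsupported(findings):
    """Return claims lacking linked evidence (candidates for revision)."""
    return [claim for claim, evidence in findings.items() if not evidence]

# An empty evidence list flags a claim for further data work or removal.
flagged = unsupported(findings)
```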

Some postmodernists reject the notion that qualitative interpretations are arguments based on evidence. Instead, they argue that qualitative accounts are not intended to faithfully represent experience, but rather are designed to evoke feelings or reactions in the reader of the account (Schwandt, 2001). We argue that, even in these instances where transformational validity concerns take priority over transactional processes, evidence still matters. Did the assertions accomplish the evocative aims? What evidence/arguments were used to evoke these reactions? Does the presented claim correspond with the study's evidence? Is the account inclusive? In other words, does it attend to all evidence or selectively compartmentalize some data while capitalizing on other evidentiary forms?

Researchers, we argue, should be both transparent and reflexive about these questions and, regardless of research methodology or purpose, should share with readers of the account their evidentiary moves and aims. Altheide and Johnson (2011) called this an evidentiary narrative and explained:

Ultimately, evidence is bound up with our identity in a situation.… An “evidentiary narrative” emerges from a reconsideration of how knowledge and belief systems in everyday life are tied to epistemic communities that provide perspectives, scenarios, and scripts that reflect symbolic and social moral orders. An “evidentiary narrative” symbolically joins an actor, an audience, a point of view (definition of a situation), assumptions, and a claim about a relationship between two or more phenomena. If any of these factors are not part of the context of meaning for a claim, it will not be honored, and thus, not seen as evidence. (p. 686)

In sum, readers/consumers of a research account deserve to know how evidence was treated and viewed in an inquiry. They want and should be aware of accounts that aim to evoke versus represent, and then they can apply their own criteria (including the potential transferability to their situated context). Renowned ethnographer and qualitative research theorist Harry Wolcott ( 1990 ) urged researchers to “let readers ‘see’ for themselves” by providing more detail rather than less and by sharing primary data/evidence to support interpretations. In the end, readers do not expect perfection. Writer Eric Liu ( 2010 ) explained, “We don’t expect flawless interpretation. We expect good faith. We demand honesty.”

Last, in this journey through concepts we assert are pertinent to researchers engaged in interpretive processes, we include attention to the literature . In discussing literature, qualitative researchers typically mean publications about the prior research conducted on topics aligned with or related to a study. Most often, this research/literature is reviewed and compiled by researchers in a section of the research report titled “Literature Review.” It is here we find others’ studies, methods, and theories related to our topics of study, and it is here we hope the assertions and theories that result from our studies will someday reside.

We acknowledge the value of being familiar with research related to topics of study. This familiarity can inform multiple phases of the inquiry process. Understanding the extant knowledge base can inform research questions and topic selection, data collection and analysis plans, and the interpretive process. In what ways do the interpretations from this study correspond with other research conducted on this topic? Do findings/interpretations corroborate, expand, or contradict other researchers’ interpretations of similar phenomena? In any of these scenarios (correspondence, expansion, contradiction), new findings and interpretations from a study add to and deepen the knowledge base, or literature, on a topic of investigation.

For example, in our literature review for the study of student teaching, we quickly determined that the knowledge base and extant theories related to the student teaching experience were immense, but also quickly realized that few, if any, studies had examined student teaching from the perspective of the K–12 students who had the student teachers. This focus on the literature related to our topic of student teaching prompted us to embark on a study that would fill a gap in this literature: Most of the knowledge base focused on the experiences and learning of the student teachers themselves. Our study, then, by focusing on the K–12 students’ perspectives, added literature/theories/assertions to a previously untapped area. The “literature” in this area (at least we would like to think) is now more robust as a result.

In another example, a research team (Trent et al., 2003 ) focused on institutional diversity efforts, mined the literature, found an appropriate existing (a priori) set of theories/assertions, and then used the existing theoretical framework from the literature as a framework to analyze data, in this case, a variety of institutional activities related to diversity.

Conducting a literature review to explore extant theories on a topic of study can serve a variety of purposes. As evidenced in these examples, consulting the literature/extant theory can reveal gaps in the literature. A literature review might also lead researchers to existing theoretical frameworks that support analysis and interpretation of their data (as in the use of the a priori framework example). Finally, a review of current theories related to a topic of inquiry might confirm that much theory already exists, but that further study may add to, bolster, and/or elaborate on the current knowledge base.

Guidance for researchers conducting literature reviews is plentiful. Lichtman ( 2013 ) suggested researchers conduct a brief literature review, begin research, and then update and modify the literature review as the inquiry unfolds. She suggested reviewing a wide range of related materials (not just scholarly journals) and additionally suggested that researchers attend to literature on methodology, not just the topic of study. She also encouraged researchers to bracket and write down thoughts on the research topic as they review the literature, and, important for this chapter, that researchers “integrate your literature review throughout your writing rather than using a traditional approach of placing it in a separate chapter” (p. 173).

We agree that the power of a literature review to provide context for a study can be maximized when this information is not compartmentalized apart from a study’s findings. Integrating (or at least revisiting) reviewed literature juxtaposed alongside findings can illustrate how new interpretations add to an evolving story. Eisenhart ( 1998 ) expanded the traditional conception of the literature review and discussed the concept of an interpretive review . By taking this interpretive approach, Eisenhart claimed that reviews, alongside related interpretations/findings on a specific topic, have the potential to allow readers to see the studied phenomena in entirely new ways, through new lenses, revealing heretofore unconsidered perspectives. Reviews that offer surprising and enriching perspectives on meanings and circumstances “shake things up, break down boundaries, and cause things (or thinking) to expand” (p. 394). Coupling reviews of this sort with current interpretations will “give us stories that startle us with what we have failed to notice” (p. 395).

In reviews of research studies, it can certainly be important to evaluate the findings in light of established theories and methods [the sorts of things typically included in literature reviews]. However, it also seems important to ask how well the studies disrupt conventional assumptions and help us to reconfigure new, more inclusive, and more promising perspectives on human views and actions. From an interpretivist perspective, it would be most important to review how well methods and findings permit readers to grasp the sense of unfamiliar perspectives and actions. (Eisenhart, 1998 , p. 397)

Though our interpretation-related journey in this chapter nears an end, we are hopeful it is just the beginning of multiple new conversations among ourselves and in concert with other qualitative researchers. Our aims have been to circumscribe interpretation in qualitative research; emphasize the importance of interpretation in achieving the aims of the qualitative project; discuss the interactions of methodology, data, and the researcher/self as these concepts and theories intertwine with interpretive processes; describe some concrete ways that qualitative inquirers engage the process of interpretation; and, finally, provide a framework of interpretive strategies that may serve as a guide for ourselves and other researchers.

In closing, we note that the TRAVEL framework, construed as a journey to be undertaken by researchers engaged in interpretive processes, is not designed to be rigid or prescriptive, but instead is designed to be a flexible set of concepts that will inform researchers across multiple epistemological, methodological, and theoretical paradigms. We chose the concepts of transparency, reflexivity, analysis, validity, evidence, and literature (TRAVEL) because they are applicable to the infinite journeys undertaken by qualitative researchers who have come before and to those who will come after us. As we journeyed through our interpretations of interpretation, we have discovered new things about ourselves and our work. We hope readers also garner insights that enrich their interpretive excursions. Happy travels!

Altheide, D. , & Johnson, J. M. ( 2011 ). Reflections on interpretive adequacy in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 595–610). Thousand Oaks, CA: Sage.

Barrett, J. ( 2007 ). The researcher as instrument: Learning to conduct qualitative research through analyzing and interpreting a choral rehearsal.   Music Education Research, 9, 417–433.

Barrett, T. ( 2011 ). Criticizing art: Understanding the contemporary (3rd ed.). New York, NY: McGraw–Hill.

Belgrave, L. L. , & Smith, K. J. ( 2002 ). Negotiated validity in collaborative ethnography. In N. K. Denzin & Y. S. Lincoln (Eds.), The qualitative inquiry reader (pp. 233–255). Thousand Oaks, CA: Sage.

Bernard, H. R. , Wutich, A. , & Ryan, G. W. ( 2017 ). Analyzing qualitative data: Systematic approaches (2nd ed.). Thousand Oaks, CA: Sage.

Beverly, J. ( 2000 ). Testimonio, subalternity, and narrative authority. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 555–566). Thousand Oaks, CA: Sage.

Bogdan, R. C. , & Biklen, S. K. ( 2007 ). Qualitative research for education: An introduction to theories and methods (5th ed.). Boston, MA: Allyn & Bacon.

Cho, J. , Rios, F. , Trent, A. , & Mayfield, K. ( 2012 ). Integrating language diversity into teacher education curricula in a rural context: Candidates’ developmental perspectives and understandings.   Teacher Education Quarterly, 39(2), 63–85.

Cho, J. , & Trent, A. ( 2006 ). Validity in qualitative research revisited.   QR—Qualitative Research Journal, 6, 319–340.

Denzin, N. K. , & Lincoln, Y. S. (Eds.). ( 2004 ). Handbook of qualitative research . Newbury Park, CA: Sage.

Denzin, N. K. , & Lincoln, Y. S. ( 2007 ). Collecting and interpreting qualitative materials . Thousand Oaks, CA: Sage.

Eisenhart, M. ( 1998 ). On the subject of interpretive reviews.   Review of Educational Research, 68, 391–399.

Eisner, E. ( 1991 ). The enlightened eye: Qualitative inquiry and the enhancement of educational practice . New York, NY: Macmillan.

Ellingson, L. L. ( 2011 ). Analysis and representation across the continuum. In N. M. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 595–610). Thousand Oaks, CA: Sage.

Ellis, C. , & Bochner, A. P. ( 2000 ). Autoethnography, personal narrative, reflexivity: Researcher as subject. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 733–768). Thousand Oaks, CA: Sage.

Emerson, R. , Fretz, R. , & Shaw, L. ( 1995 ). Writing ethnographic fieldnotes . Chicago, IL: University of Chicago Press.

Erickson, F. ( 1986 ). Qualitative methods in research in teaching and learning. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp 119–161). New York, NY: Macmillan.

Glaser, B. ( 1965 ). The constant comparative method of qualitative analysis.   Social Problems, 12, 436–445.

Gubrium, J. F. , & Holstein, J. A. ( 2000 ). Analyzing interpretive practice. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 487–508). Thousand Oaks, CA: Sage.

Hammersley, M. ( 2013 ). What is qualitative research? London, England: Bloomsbury Academic.

Hesse-Biber, S. N. ( 2017 ). The practice of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Hesse-Biber, S. N. , & Leavy, P. ( 2011 ). The practice of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

Hubbard, R. S. , & Power, B. M. ( 2003 ). The art of classroom inquiry: A handbook for teacher researchers . Portsmouth, NH: Heinemann.

Husserl, E. ( 1913 /1962). Ideas: general introduction to pure phenomenology (W. R. Boyce Gibson, Trans.). London, England: Collier.

LaBanca, F. ( 2011 ). Online dynamic asynchronous audit strategy for reflexivity in the qualitative paradigm.   Qualitative Report, 16, 1160–1171.

Lather, P. ( 1993 ). Fertile obsession: Validity after poststructuralism.   Sociological Quarterly, 34, 673–693.

Leavy, P. ( 2017 ). Method meets art: Arts-based research practice (2nd ed.). New York, NY: Guilford Press.

Lenzo, K. ( 1995 ). Validity and self reflexivity meet poststructuralism: Scientific ethos and the transgressive self.   Educational Researcher, 24(4), 17–23, 45.

Lichtman, M. ( 2013 ). Qualitative research in education: A user’s guide (3rd ed.). Thousand Oaks, CA: Sage.

Lincoln, Y. S. , & Guba, E. G. ( 1985 ). Naturalistic inquiry . Beverly Hills, CA: Sage.

Liu, E. (2010). The real meaning of balls and strikes . Retrieved from http://www.huffingtonpost.com/eric-liu/the-real-meaning-of-balls_b_660915.html

Maxwell, J. ( 1992 ). Understanding and validity in qualitative research.   Harvard Educational Review, 62, 279–300.

Miles, M. B. , & Huberman, A. M. ( 1994 ). Qualitative data analysis . Thousand Oaks, CA: Sage.

Mills, G. E. ( 2018 ). Action research: A guide for the teacher researcher (6th ed.). New York, NY: Pearson.

Olivier de Sardan, J. P. ( 2015 ). Epistemology, fieldwork, and anthropology. New York, NY: Palgrave Macmillan.

Packer, M. J. ( 2018 ). The science of qualitative research (2nd ed.). Cambridge, England: Cambridge University Press.

Paulus, T. , Woodside, M. , & Ziegler, M. ( 2008 ). Extending the conversation: Qualitative research as dialogic collaborative process.   Qualitative Report, 13, 226–243.

Richardson, L. ( 1995 ). Writing stories: Co-authoring the “sea monster,” a writing story.   Qualitative Inquiry, 1, 189–203.

Richardson, L. ( 1997 ). Fields of play: Constructing an academic life . New Brunswick, NJ: Rutgers University Press.

Sagor, R. ( 2000 ). Guiding school improvement with action research . Alexandria, VA: ASCD.

Saldaña, J. ( 2011 ). Fundamentals of qualitative research . New York, NY: Oxford University Press.

Scheurich, J. ( 1996 ). The masks of validity: A deconstructive investigation.   Qualitative Studies in Education, 9, 49–60.

Schwandt, T. A. ( 2001 ). Dictionary of qualitative inquiry . Thousand Oaks, CA: Sage.

Slotnick, R. C. , & Janesick, V. J. ( 2011 ). Conversations on method: Deconstructing policy through the researcher reflective journal.   Qualitative Report, 16, 1352–1360.

Trent, A. ( 2002 ). Dreams as data: Art installation as heady research,   Teacher Education Quarterly, 29(4), 39–51.

Trent, A. ( 2012 ). Action research on action research: A facilitator’s account.   Action Learning and Action Research Journal, 18, 35–67.

Trent, A. , Rios, F. , Antell, J. , Berube, W. , Bialostok, S. , Cardona, D. , … Rush, T. ( 2003 ). Problems and possibilities in the pursuit of diversity: An institutional analysis.   Equity & Excellence, 36, 213–224.

Trent, A. , & Zorko, L. ( 2006 ). Listening to students: “New” perspectives on student teaching.   Teacher Education & Practice, 19, 55–70.

Willig, C. ( 2017 ). Interpretation in qualitative research. In C. Willig & W. Stainton-Rogers (Eds.), The Sage handbook of qualitative research in psychology (2nd ed., pp. 267–290). London, England: Sage.

Willis, J. W. ( 2007 ). Foundations of qualitative research: Interpretive and critical approaches . Thousand Oaks, CA: Sage.

Wolcott, H. ( 1990 ). On seeking-and rejecting-validity in qualitative research. In E. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp. 121–152). New York, NY: Teachers College Press.


8 Types of Data Analysis

Data analysis is an aspect of data science and data analytics concerned with examining data for a variety of purposes. The process involves inspecting, cleaning, transforming and modeling data to draw useful insights from it.

What Are the Different Types of Data Analysis?

  • Descriptive analysis
  • Diagnostic analysis
  • Exploratory analysis
  • Inferential analysis
  • Predictive analysis
  • Causal analysis
  • Mechanistic analysis
  • Prescriptive analysis

With its multiple facets, methodologies and techniques, data analysis is used in a variety of fields, including business, science and social science, among others. As businesses thrive under the influence of technological advancements in data analytics, data analysis plays a huge role in  decision-making , providing a better, faster and more efficacious system that minimizes risks and reduces  human biases .

That said, there are different kinds of data analysis, each geared toward a different goal. We’ll examine each one below.

Two Camps of Data Analysis

Data analysis can be divided into two camps, according to the book  R for Data Science :

  • Hypothesis Generation — This involves looking deeply at the data and combining your domain knowledge to generate hypotheses about why the data behaves the way it does.
  • Hypothesis Confirmation — This involves using a precise mathematical model to generate falsifiable predictions with statistical sophistication to confirm your prior hypotheses.

Types of Data Analysis

Data analysis can be separated and organized into types, arranged in an increasing order of complexity.

1. Descriptive Analysis

The goal of descriptive analysis is to describe or summarize a set of data. Here’s what you need to know:

  • Descriptive analysis is the very first analysis performed in the data analysis process.
  • It generates simple summaries about samples and measurements.
  • It involves common, descriptive statistics like measures of central tendency, variability, frequency and position.

Descriptive Analysis Example

Take the  Covid-19 statistics page on Google, for example. The line graph is a pure summary of the cases/deaths, a presentation and description of the population of a particular country infected by the virus.

Descriptive analysis is the first step in analysis where you summarize and describe the data you have using descriptive statistics, and the result is a simple presentation of your data.
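As a minimal sketch, this kind of summary can be computed with Python’s standard library. The daily counts below are made up for illustration, not real Covid-19 data:

```python
import statistics

# Hypothetical daily case counts for one week (illustrative numbers only)
daily_cases = [120, 135, 128, 160, 155, 149, 142]

# Common descriptive statistics: central tendency, variability and position
summary = {
    "mean": statistics.mean(daily_cases),
    "median": statistics.median(daily_cases),
    "stdev": statistics.stdev(daily_cases),
    "min": min(daily_cases),
    "max": max(daily_cases),
}
```

A table or line graph of these values is often all a descriptive analysis needs to deliver.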

2. Diagnostic Analysis 

Diagnostic analysis seeks to answer the question “Why did this happen?” by taking a more in-depth look at data to uncover subtle patterns. Here’s what you need to know:

  • Diagnostic analysis typically comes after descriptive analysis, taking initial findings and investigating why certain patterns in data happen. 
  • Diagnostic analysis may involve analyzing other related data sources, including past data, to reveal more insights into current data trends.  
  • Diagnostic analysis is ideal for further exploring patterns in data to explain anomalies.  

Diagnostic Analysis Example

A footwear store wants to review its website traffic levels over the previous 12 months. Upon compiling and assessing the data, the company’s marketing team finds that June experienced above-average levels of traffic while July and August witnessed slightly lower levels of traffic. 

To find out why this difference occurred, the marketing team takes a deeper look. Team members break down the data to focus on specific categories of footwear. For the month of June, they discover that pages featuring sandals and other beach-related footwear received a high number of views, while these numbers dropped in July and August. 

Marketers may also review other factors like seasonal changes and company sales events to see if other variables could have contributed to this trend.   
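A diagnostic drill-down like this one can be sketched in a few lines of Python. The traffic figures and category names here are hypothetical:

```python
# Hypothetical monthly page views per footwear category (illustrative only)
views = {
    "sandals":  {"May": 800, "June": 2400, "July": 900, "August": 850},
    "sneakers": {"May": 1500, "June": 1550, "July": 1480, "August": 1520},
}

def category_deviation(monthly):
    """Deviation of each month's views from the category's own average."""
    avg = sum(monthly.values()) / len(monthly)
    return {month: v - avg for month, v in monthly.items()}

sandal_dev = category_deviation(views["sandals"])
# The month sitting furthest above the category average is the anomaly
spike_month = max(sandal_dev, key=sandal_dev.get)
```

In this toy data, the sandals category, not sneakers, explains the June spike, which is exactly the kind of “why” a diagnostic analysis is after.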

3. Exploratory Analysis (EDA)

Exploratory analysis involves examining or exploring data and finding relationships between variables that were previously unknown. Here’s what you need to know:

  • EDA helps you discover relationships between measures in your data, but these relationships are not, on their own, evidence of causation, as the phrase “Correlation doesn’t imply causation” reminds us.
  • It’s useful for discovering new connections and forming hypotheses. It drives design planning and data collection.

Exploratory Analysis Example

Climate change is an increasingly important topic as the global temperature has gradually risen over the years. One example of an exploratory data analysis on climate change involves taking the rise in temperature over the years from 1950 to 2020 and the increase of human activities and industrialization to find relationships from the data. For example, you may increase the number of factories, cars on the road and airplane flights to see how that correlates with the rise in temperature.

Exploratory analysis explores data to find relationships between measures without identifying the cause. It’s most useful when formulating hypotheses.
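Computing a correlation coefficient is a typical first move in EDA. The sketch below implements Pearson’s r over two invented series; the temperature anomalies and factory counts are illustrative, not real measurements:

```python
import math

# Hypothetical yearly figures: global temperature anomaly (degrees C) and
# number of factories (millions), for the same sequence of years
temp_anomaly = [0.10, 0.20, 0.35, 0.50, 0.70, 0.90]
factories    = [1.0,  1.4,  2.1,  2.9,  3.8,  4.9]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(temp_anomaly, factories)
```

A value of r near 1 flags a strong linear association worth investigating; it says nothing about which variable, if either, causes the other.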

4. Inferential Analysis

Inferential analysis involves using a small sample of data to infer information about a larger population of data.

Statistical modeling itself is all about using a small amount of information to extrapolate and generalize to a larger group. Here’s what you need to know:

  • Inferential analysis involves using estimated data that is representative of a population and gives a measure of uncertainty or standard deviation to your estimation.
  • The accuracy of inference depends heavily on your sampling scheme. If the sample isn’t representative of the population, the generalization will be inaccurate, and no statistical result, the central limit theorem included, can compensate for a biased sample.

Inferential Analysis Example

The idea of drawing an inference about a population at large from a smaller sample is intuitive. Many statistics you see in the media and on the internet are inferential: predictions about an event based on a small sample. For example, a psychological study on the benefits of sleep might involve a total of 500 people. At follow-up, the candidates who slept seven to nine hours reported better overall attention spans and well-being, while those who slept less or more than that range suffered from reduced attention spans and energy. A study of 500 people is a tiny portion of the 7 billion people in the world, so its conclusion is an inference about the larger population.

Inferential analysis extrapolates from a smaller sample to generate analysis and predictions about the larger population.
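A simple way to attach uncertainty to a sample estimate is a confidence interval for the mean. This sketch uses invented sleep data and a normal approximation:

```python
import math
import statistics

# Hypothetical nightly sleep hours reported by a sample of 10 participants
sample = [7.5, 8.0, 6.5, 7.0, 8.5, 7.2, 6.8, 7.9, 8.1, 7.0]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# 95% confidence interval using the normal approximation (z = 1.96);
# a t critical value would be more appropriate for a sample this small
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
```

The interval expresses how far the population mean could plausibly sit from the sample mean, which is the “measure of uncertainty” mentioned above.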

5. Predictive Analysis

Predictive analysis involves using historical or current data to find patterns and make predictions about the future. Here’s what you need to know:

  • The accuracy of the predictions depends on the input variables.
  • Accuracy also depends on the types of models. A linear model might work well in some cases, and in other cases it might not.
  • Using a variable to predict another one doesn’t denote a causal relationship.

Predictive Analysis Example

The 2020 US election was a popular topic, and many prediction models were built to predict the winning candidate. FiveThirtyEight did this to forecast the 2016 and 2020 elections. Predictive analysis for an election requires input variables such as historical polling data, trends and current polling data in order to return a good prediction. Something as large as an election wouldn’t rely on a simple linear model, but on a complex model with certain tunings to best serve its purpose.

Predictive analysis takes data from the past and present to make predictions about the future.
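At its simplest, a prediction model is a least-squares line fit to historical data. The polling shares below are invented, and real election forecasts are far more complex, but the mechanics look like this:

```python
# Hypothetical poll shares (%) for a candidate over six consecutive weeks
weeks  = [1, 2, 3, 4, 5, 6]
shares = [44.0, 44.8, 45.1, 45.9, 46.3, 47.0]

# Ordinary least squares for a single predictor: slope and intercept
n = len(weeks)
mx = sum(weeks) / n
my = sum(shares) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(weeks, shares))
         / sum((x - mx) ** 2 for x in weeks))
intercept = my - slope * mx

# Extrapolate one week ahead; reasonable only if the trend holds
predicted_week7 = intercept + slope * 7
```

The quality of `predicted_week7` depends entirely on the input variables and on whether a linear trend is the right model, echoing the caveats above.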

6. Causal Analysis

Causal analysis looks at the cause and effect of relationships between variables and is focused on finding the cause of a correlation. Here’s what you need to know:

  • To find the cause, you have to question whether the observed correlations driving your conclusion are valid. Just looking at the surface data won’t help you discover the hidden mechanisms underlying the correlations.
  • Causal analysis is applied in randomized studies focused on identifying causation.
  • Causal analysis is the gold standard in data analysis and scientific studies where the cause of phenomenon is to be extracted and singled out, like separating wheat from chaff.
  • Good data is hard to find and requires expensive research and studies. These studies are analyzed in aggregate (multiple groups), and the observed relationships are just average effects (mean) of the whole population. This means the results might not apply to everyone.

Causal Analysis Example  

Say you want to test out whether a new drug improves human strength and focus. To do that, you perform randomized control trials for the drug to test its effect. You compare the sample of candidates for your new drug against the candidates receiving a mock control drug through a few tests focused on strength and overall focus and attention. This will allow you to observe how the drug affects the outcome.

Causal analysis is about finding out the causal relationship between variables, and examining how a change in one variable affects another.
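The logic of a randomized trial can be illustrated by simulation. Here we invent a treatment effect of +5 points and check that the difference in group means recovers it; the scores and effect size are made up:

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical strength scores: control group vs. drug group, where the
# drug adds a fixed +5 effect on top of individual noise (illustrative)
control   = [random.gauss(100, 10) for _ in range(200)]
treatment = [random.gauss(105, 10) for _ in range(200)]

effect_estimate = statistics.mean(treatment) - statistics.mean(control)
```

Because assignment to the groups is random, the difference in means estimates the causal effect of the drug, something observational correlations alone cannot guarantee.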

7. Mechanistic Analysis

Mechanistic analysis is used to understand the exact changes in variables that lead to changes in other variables. Here’s what you need to know:

  • It’s applied in the physical and engineering sciences, in situations that require high precision and leave little room for error, where the only noise in the data is measurement error.
  • It’s designed to understand a biological or behavioral process, the pathophysiology of a disease or the mechanism of action of an intervention. 

Mechanistic Analysis Example

Many graduate-level research and complex topics are suitable examples, but to put it in simple terms, let’s say an experiment is done to simulate safe and effective nuclear fusion to power the world. A mechanistic analysis of the study would entail a precise balance of controlling and manipulating variables with highly accurate measures of both variables and the desired outcomes. It’s this intricate and meticulous modus operandi toward these big topics that allows for scientific breakthroughs and advancement of society.

Mechanistic analysis is in some ways a predictive analysis, but modified to tackle studies that require high precision and meticulous methodologies in the physical or engineering sciences.

8. Prescriptive Analysis 

Prescriptive analysis compiles insights from other previous data analyses and determines actions that teams or companies can take to prepare for predicted trends. Here’s what you need to know: 

  • Prescriptive analysis may come right after predictive analysis, but it may involve combining many different data analyses. 
  • Companies need advanced technology and plenty of resources to conduct prescriptive analysis. AI systems that process data and adjust automated tasks are an example of the technology required to perform prescriptive analysis.  

Prescriptive Analysis Example

Prescriptive analysis is pervasive in everyday life, driving the curated content users consume on social media. On platforms like TikTok and Instagram, algorithms can apply prescriptive analysis to review past content a user has engaged with and the kinds of behaviors they exhibited with specific posts. Based on these factors, an algorithm seeks out similar content that is likely to elicit the same response and recommends it on a user’s personal feed. 
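A toy version of such a recommendation rule can be written as a scoring pass over engagement history. The tags, weights and candidate posts here are all hypothetical, not any platform’s actual algorithm:

```python
# Hypothetical engagement log: post tags a user interacted with, weighted
# by how strongly they engaged (1 = view, 2 = like, 3 = share)
engagement = [("cooking", 3), ("travel", 1), ("cooking", 2), ("fitness", 1)]

# Candidate posts in the pool, each described by a single tag
candidates = ["travel", "cooking", "music", "fitness"]

# Accumulate the user's interest per tag, then rank candidates by it
interest = {}
for tag, weight in engagement:
    interest[tag] = interest.get(tag, 0) + weight

ranked = sorted(candidates, key=lambda tag: interest.get(tag, 0), reverse=True)
```

The prescribed action is simply to surface `ranked[0]` first: past analysis (the engagement log) is turned into a concrete next step.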

When to Use the Different Types of Data Analysis 

  • Descriptive analysis summarizes the data at hand and presents your data in a comprehensible way.
  • Diagnostic analysis takes a more detailed look at data to reveal why certain patterns occur, making it a good method for explaining anomalies. 
  • Exploratory data analysis helps you discover correlations and relationships between variables in your data.
  • Inferential analysis is for generalizing about a larger population from a smaller sample of data.
  • Predictive analysis helps you make predictions about the future with data.
  • Causal analysis emphasizes finding the cause of a correlation between variables.
  • Mechanistic analysis is for measuring the exact changes in variables that lead to changes in other variables.
  • Prescriptive analysis combines insights from different data analyses to develop a course of action teams and companies can take to capitalize on predicted outcomes. 

A few important tips to remember about data analysis include:

  • Correlation doesn’t imply causation.
  • EDA helps discover new connections and form hypotheses.
  • Accuracy of inference depends on the sampling scheme.
  • A good prediction depends on the right input variables.
  • A simple linear model with enough data usually does the trick.
  • Using a variable to predict another doesn’t denote causal relationships.
  • Good data is hard to find, and to produce it requires expensive research.
  • Results from studies are aggregated, so reported effects are averages that might not apply to everyone.


  • Open access
  • Published: 03 June 2024

Assessing rates and predictors of cannabis-associated psychotic symptoms across observational, experimental and medical research

  • Tabea Schoeler   ORCID: orcid.org/0000-0003-4846-2741 1 , 2 ,
  • Jessie R. Baldwin 2 , 3 ,
  • Ellen Martin 2 ,
  • Wikus Barkhuizen 2 &
  • Jean-Baptiste Pingault   ORCID: orcid.org/0000-0003-2557-4716 2 , 3  

Nature Mental Health (2024)

  • Outcomes research
  • Risk factors

Cannabis, one of the most widely used psychoactive substances worldwide, can give rise to acute cannabis-associated psychotic symptoms (CAPS). While distinct study designs have been used to examine CAPS, an overarching synthesis of the existing findings has not yet been carried forward. To that end, we quantitatively pooled the evidence on rates and predictors of CAPS ( k  = 162 studies, n  = 210,283 cannabis-exposed individuals) as studied in (1) observational research, (2) experimental tetrahydrocannabinol (THC) studies, and (3) medicinal cannabis research. We found that rates of CAPS varied substantially across the study designs, given the high rates reported by observational and experimental research (19% and 21%, respectively) but not medicinal cannabis studies (2%). CAPS was predicted by THC administration (for example, single dose, Cohen’s d  = 0.7), mental health liabilities (for example, bipolar disorder, d  = 0.8), dopamine activity ( d  = 0.4), younger age ( d  = −0.2), and female gender ( d  = −0.09). Neither candidate genes (for example, COMT , AKT1 ) nor other demographic variables (for example, education) predicted CAPS in meta-analytical models. The results reinforce the need to more closely monitor adverse cannabis-related outcomes in vulnerable individuals as these individuals may benefit most from harm-reduction efforts.


Cannabis, one of the most widely used psychoactive substances in the world, 1 is commonly used as a recreational substance and is increasingly taken for medicinal purposes. 2 , 3 As a recreational substance, cannabis use is particularly prevalent among young people 1 who seek its rewarding acute effects such as relaxation, euphoria, or sociability. 4 When used as a medicinal product, cannabis is typically prescribed to alleviate clinical symptoms in individuals with pre-existing health conditions (for example, epilepsy, multiple sclerosis, chronic pain, nausea. 5 )

Given the widespread use of cannabis, alongside the shifts toward legalization of cannabis for medicinal and recreational purposes, momentum is growing to scrutinize both the potential therapeutic and adverse effects of cannabis on health. From a public health perspective, of particular concern are the increasing rates of cannabis-associated emergency department presentations, 6 the rising levels of THC (tetrahydrocannabinol, the main psychoactive ingredient in cannabis) in street cannabis, 7 the adverse events associated with medicinal cannabis use, 8 and the long-term health hazards associated with cannabis use. 9 In this context, risk of psychosis as a major adverse health outcome related to cannabis use has been studied extensively, suggesting that early-onset and heavy cannabis use constitutes a contributory cause of psychosis. 10 , 11 , 12

More recent research has started to examine the more acute cannabis-associated psychotic symptoms (CAPS) to understand better how individual vulnerabilities and the pharmacological properties of cannabis elicit adverse reactions in individuals exposed to cannabis. Indeed, transient psychosis-like symptoms, including hallucinations or paranoia during cannabis intoxication, are well documented. 5 , 13 , 14 In more rare cases, recreational cannabis users experience severe forms of CAPS, 15 requiring emergency medical treatment as a result of acute CAPS. 16 In addition, acute psychosis following THC administration has been documented in medicinal cannabis trials and experimental studies, 17 , 18 , 19 suggesting that CAPS can also occur in more-controlled environments.

While numerous studies have provided evidence on CAPS in humans, no research has yet synthesized and compared the findings obtained from different study designs and populations. More specifically, three distinct study types have focused on CAPS: (1) observational studies assessing the subjective experiences of cannabis intoxication in recreational cannabis users, (2) experimental challenge studies administering THC in healthy volunteers, and (3) medicinal cannabis studies documenting adverse events when testing medicinal cannabis products in individuals with pre-existing health conditions. As such, the availability of these three distinct lines of evidence provides a unique research opportunity as their findings can be synthesized, be inspected for convergence, and ultimately, contribute to more evidence-based harm-reduction initiatives.

In this work, we therefore aim to perform a quantitative synthesis of all existing evidence examining CAPS to advance our understanding concerning the rates and predictors of CAPS: First, it is currently unknown how common CAPS are among individuals exposed to cannabis. While rates of CAPS are reported by numerous studies, estimates vary substantially (for example, from <1% (ref. 20 ) to 70% (ref. 21 )) and may differ depending on the assessed symptom profile (for example, cannabis-associated hallucinations versus cannabis-associated paranoia), the study design (for example, observational versus experimental research), and the population (for example, healthy volunteers versus medicinal cannabis users). Second, distinct study designs have scrutinized similar questions concerning the risks involved in CAPS. As such, comparisons of the results from one study design (for example, observational studies, assessing self-reported cannabis use in recreational users 22 , 23 ) with another study design (for example, experimental studies administering varying doses of THC 24 , 25 ) can be used to triangulate findings on a given risk factor of interest (for example, potency of cannabis). Finally, studies focusing on predictors of CAPS typically assess hypothesized risk factors in isolation. Pooling all existing evidence across different risk factors therefore provides a more complete picture of the relative magnitude of the individual risk factors involved in CAPS.

In summary, this work sets out to synthesize all of the available evidence on CAPS across three lines of research. In light of the increasingly liberal cannabis policies around the world, alongside the rising levels of THC in cannabis, such efforts are key to informing harm-reduction strategies and future research avenues for public health. Considering that individuals presenting with acute cannabis-induced psychosis are at high risk of converting to a psychotic disorder (for example, rates ranging between 18% (ref. 26 ) and 45% (ref. 27 )), a deeper understanding of the factors predicting CAPS would also shed light on the risk of long-term psychosis in the context of cannabis use.

Of 20,428 published studies identified by the systematic search, 162 were included in this work. The reasons for exclusion are detailed in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram (Fig. 1 ; see Supplementary Fig. 1 for a breakdown of the number of independent participants included in the different analytical models). The PRISMA reporting checklist is included in the Supplementary Results . At the full-text screening stage, the majority of studies were excluded because they did not report data on CAPS (83.88% of all excluded studies). Figure 2 displays the number of published studies included ( k ) and the number of (non-overlapping) study participants ( n ) per study design, highlighting that out of all participants included in this meta-analysis ( n  = 210,283), most took part in observational research ( n  = 174,300; 82.89%), followed by studies assessing medicinal cannabis products ( n  = 33,502; 15.93%), experimental studies administering THC ( n  = 2,009; 0.96%), and quasi-experimental studies ( n  = 472; 0.22%). Screening of 10% of the studies at the full-text stage by an independent researcher (E.M.) did not identify any missed studies.

Figure 1

Flow chart as adapted from the PRISMA flow chart ( http://www.prisma-statement.org/ ). Independent study participants are defined as the maximum number of participants available for an underlying study sample assessed in one or more of the included studies.

Figure 2

Number of included studies per year of publication and study design, including observational research assessing recreational cannabis users, experimental studies administering THC in healthy volunteers, and medicinal studies assessing adverse events in individuals taking cannabis products for medicinal use. Quasi-experimental research tested the effects of THC administration in a naturalistic setting. 23 , 62 k , number of studies; n , number of (non-overlapping) study participants.

Rates of CAPS across the three study designs

A total of 99 studies published between 1971 and 2023 reported data on rates of CAPS and were included in the analysis, comprising 126,430 individuals from independent samples. Convergence of the data extracted by the two researchers (T.S. and W.B.) was high for the pooled rates on CAPS from observational studies (rate DIFF  = −0.01%, where rate DIFF  = rate TS  – rate WB ), experimental studies (rate DIFF  = 0%), and medicinal cannabis studies (rate DIFF  = 0%). More specifically, we included data from 41 observational studies ( n  = 92,888 cannabis users), 19 experimental studies administering THC ( n  = 754), and 79 studies assessing efficacy and tolerability of medicinal cannabis products containing THC ( n  = 32,821). In medicinal trials, the most common conditions treated with THC were pain ( k  = 19 (23.75%)) and cancer ( k  = 16 (20%)) (see Supplementary Table 1 for an overview). The age distribution of the included participants was similar in observational studies (mean age = 24.47 years, ranging from 16.6 to 34.34 years) and experimental studies (mean age = 25.1 years, ranging from 22.47 to 27.3 years). Individuals taking part in medicinal trials were substantially older (mean age = 48.16 years, ranging from 8 to 74.5 years).

As summarized in Fig. 3 and Supplementary Table 3 , substantial rates of CAPS were reported by observational studies (19.4%, 95% confidence interval (CI): 14.2%, 24.6%) and THC-challenge studies (21%, 95% CI: 11.3%, 30.7%), but not medicinal cannabis studies (1.5%, 95% CI: 1.1%, 1.9%). The pooled rates estimated for different symptom profiles of CAPS (CAPS – paranoia, CAPS – hallucinations, CAPS – delusions) are displayed in Supplementary Fig. 2 . All individual study estimates are listed in Supplementary Table 2 .

Figure 3

Pooled rates of CAPS across the three different study designs. Estimates on the y axis are the rates (in %, 95% confidence interval) obtained from models pooling together estimates on rates of CAPS (including psychosis-like symptoms, paranoia, hallucinations, and delusions) per study design.

Most models showed significant levels of heterogeneity (Supplementary Table 3 ), highlighting that rates of CAPS differed as a function of study-specific features. Risk of publication bias was indicated ( P Peters  < 0.05) for one of the meta-analytical models combining all rates of CAPS (see funnel plots, Supplementary Fig. 2 ). Applying the trim-and-fill method slightly reduced the pooled rate of CAPS obtained from medicinal cannabis studies (rate unadjusted  = 1.53%; rate adjusted  = 1.18%). Finally, Fig. 4 summarizes rates of CAPS for a subset of studies where CAPS was defined as the occurrence of a full-blown cannabis-associated psychotic episode (as described in Table 1 ). When combined, the rate of CAPS (full episode) was 0.52% (95% CI: 0.42%, 0.62%) across the three study designs, highlighting that around one in 200 individuals experienced a severe episode of psychosis when exposed to cannabis/THC. Rates of CAPS (full episode) as reported by the individual studies showed high levels of consistency ( I 2  = 8%, P(I 2 ) = 0.45; Fig. 4 ).

Figure 4

Studies reporting rates of cannabis-associated psychosis (full episode). Depicted in violet are the individual study estimates (in %, 95% confidence interval) of studies reporting rates of (full-blown) cannabis-associated psychotic episodes. Included are studies using medicinal cannabis, observational, or experimental samples. The pooled meta-analyzed estimate is colored in blue. The I 2 statistic (scale of 0 to 100) indexes the level of heterogeneity across the estimates included in the meta-analysis.

Predictors of cannabis-associated psychotic symptoms

Assessing predictors of CAPS, we included 103 studies published between 1976 and 2023, corresponding to 80 independent samples ( n  = 170,158 non-overlapping individuals). In total, we extracted 381 Cohen’s d estimates, which were pooled in 44 separate meta-analytical models. A summary of all extracted study estimates is provided in Supplementary Table 4 . Comparing the P values of the individual Cohen’s d to the original P values as reported in the studies revealed a high level of concordance ( r  = 0.96, P  = 1.1 × 10 –79 ), indicating that the conversion of the raw study estimates to a common metric did not result in a substantial loss of information. Comparing the results obtained from the data extracted by two researchers (T.S. and W.B.) identified virtually no inconsistencies when inspecting estimates of Cohen’s d , as obtained for severity of cannabis use on CAPS ( d DIFF  = 0, where d DIFF  =  d TS  –  d WB ), gender ( d DIFF  = 0), administration of (placebo controlled) medicinal cannabis ( d DIFF  = 0.003), psychosis liability ( d DIFF  = 0), and administration of a single dose of THC ( d DIFF  = 0).
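The concordance check above compares P values recovered from the converted effect sizes with the P values originally reported. As an illustration only (not the authors' code, which is available in the study repository), a two-sided P value can be recovered from a between-subject Cohen's d under the large-sample normal approximation:

```python
from math import sqrt
from statistics import NormalDist

def p_from_d(d, n1, n2):
    """Two-sided P value for a between-subject Cohen's d.

    Uses the large-sample normal approximation z = d * sqrt(n1*n2/(n1+n2));
    an exact treatment would use the t distribution with n1+n2-2 degrees
    of freedom. Helper name and approximation are ours, for illustration.
    """
    z = abs(d) * sqrt(n1 * n2 / (n1 + n2))
    return 2 * (1 - NormalDist().cdf(z))
```

For the sample sizes typical of the included observational studies, the normal approximation and the exact t-based P value are nearly indistinguishable; for small experimental samples the approximation is slightly anticonservative.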

Figure 5 summarizes the results obtained from the meta-analytical models. We examined whether CAPS was predicted by the pharmacodynamic properties of cannabis, a person’s cannabis use history, demographic factors, mental health/personality traits, neurotransmitters, genetics, and use of other drugs. With respect to the pharmacodynamic properties of cannabis, the largest effect on CAPS severity was present for a single dose of THC ( d  = 0.7, 95% CI: 0.52, 0.87) as administered in experimental studies, followed by a significant dose–response effect of THC on CAPS ( d  = 0.42, 95% CI: 0.25, 0.59, that is, tested as moderation effects of THC dose in experimental studies). When tested in medicinal randomized controlled trials, cannabis products significantly increased symptoms of CAPS ( d  = 0.14, 95% CI: 0.05, 0.23), albeit by a smaller magnitude. Protective effects were present for low levels of THC-COOH, the inactive metabolite of THC ( d  = −0.22, 95% CI: −0.39, −0.05), but not for the THC/CBD (cannabidiol) ratio ( d  = −0.19, 95% CI: −0.43, 0.05, P  = 0.13).

Figure 5

Summary of pooled Cohen’s d , the corresponding 95% confidence intervals, and P values (two-sided, uncorrected for multiple testing). Positive estimates of Cohen’s d indicate increases in CAPS in response to the assessed predictor. Details regarding the classification and interpretation of each predictor are provided in the Supplementary Information . The reference list of all studies included in this figure is provided in Supplementary Table 4 . NS, neurotransmission.

Less clear were the findings with respect to the cannabis use history of the participants and its effect on CAPS. Here, neither young age of onset of cannabis use, high-frequency use of cannabis, nor the preferred type of cannabis (strains high in THC, strains high in CBD) was associated with CAPS. The only demographic factors that significantly predicted CAPS were age ( d  = −0.17, 95% CI: −0.292, −0.050) and gender ( d  = −0.09, 95% CI: −0.180, −0.001), indicating that younger and female cannabis users report higher levels of CAPS compared with older and male users. With respect to mental health and personality, the strongest predictors for CAPS were diagnosis of bipolar disorder ( d  = 0.8, 95% CI: 0.54, 1.06) and psychosis liability ( d  = 0.49, 95% CI: 0.21, 0.77), followed by mood problems (anxiety d  = 0.44, 95% CI: 0.03, 0.84; depression d  = 0.37, 95% CI: 0.003, 0.740) and addiction liability ( d  = 0.26, 95% CI: 0.14, 0.38). Summarizing the evidence from studies looking at neurotransmitter functioning showed that increased dopamine activity significantly predicted CAPS ( d  = 0.4, 95% CI: 0.16, 0.64) (for example, reduced CAPS following administration of D2 blockers such as olanzapine 28 or haloperidol 29 ). By contrast, alterations in the opioid system did not reduce risk of CAPS. Similarly, none of the assessed candidate genes showed evidence of altering response to cannabis. Finally, out of 11 psychoactive substances with available data, only use histories of MDMA (3,4-methylenedioxymethamphetamine) ( d  = 0.2, 95% CI: 0.03, 0.36), crack ( d  = 0.13, 95% CI: 0.03, 0.23), inhalants ( d  = 0.12, 95% CI: 0.03, 0.22), and sedatives ( d  = 0.12, 95% CI: 0.02, 0.22) were linked to increases in CAPS.

Most of the meta-analytical models showed considerable levels of heterogeneity ( I 2  > 80%; Supplementary Table 5 ), notably when summarizing findings from observational studies (for example, severity of cannabis use: I 2  = 98%, age of onset of cannabis use: I 2  = 98%), highlighting that the individual effect estimates varied substantially across studies. By contrast, lower levels of heterogeneity were present when pooling evidence from experimental and medicinal cannabis studies (for example, effects of medicinal cannabis: I 2  = 18%; THC dose–response effects: I 2  = 37%). While risk of publication bias was indicated for four of the meta-analytical models (Egger’s test P  < 0.05) (Supplementary Fig. 3 ), an inspection of trim-and-fill adjusted estimates did not alter the conclusions for (1) administration of a single dose of THC ( P Egger  < 0.0001, d unadjusted  = 0.7, d trim-and-fill  = 0.49), (2) CBD administration ( P Egger  = 0.0001, d unadjusted  = −0.19, d trim-and-fill  = −0.14, both P  < 0.05), (3) psychosis liability ( P Egger  = 0.025, d unadjusted  = 0.49, d trim-and-fill  = 0.49), and (4) diagnosis of depression ( P Egger  = 0.019, d unadjusted  = 0.37, d trim-and-fill  = 0.54). Outliers were identified for seven meta-analytical models (Supplementary Fig. 4 ). Removing outliers from the models did not substantially alter the conclusions drawn from the models, as indicated for age ( d  = −0.18, d corr  = −0.14, both P  < 0.05), anxiety ( d  = 0.61, d corr  = 0.47, both P  < 0.05), severity of cannabis use ( d  = 0.19, d corr  = 0.25, both P  > 0.05), depression ( d  = 0.41, d corr  = 0.25, both P  > 0.05), gender ( d  = −0.09, d corr  = −0.12, both P  < 0.05), psychosis liability ( d  = 0.49, d corr  = 0.43, both P  < 0.05), and administration of a single dose of THC ( d  = 0.6, d corr  = 0.56, both P  < 0.05).
Sensitivity checks assessing whether Cohen’s d changes as a function of within-subject correlation coefficient highlighted that the results were highly concordant (Supplementary Fig. 6 ). Minor deviations from the main analysis were present for the effects of a single dose of THC ( d r =0.3  = 0.64 versus d r =0.5  = 0.69 versus d r =0.7  = 0.77) and dose–response effects of THC ( d r =0.3  = 0.45 versus d r =0.5  = 0.42 versus d r =0.7  = 0.39), but this did not alter the interpretation of the findings.
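The I² values quoted throughout can be reproduced from the study-level effect estimates and their sampling variances. A minimal sketch of Higgins' I² (our own illustration; the analyses themselves were run with the R package metafor):

```python
def i_squared(estimates, variances):
    """Higgins' I^2 heterogeneity statistic (0-100) from study-level
    effect estimates and their sampling variances."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    pooled = sum(wi * ei for wi, ei in zip(w, estimates)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect pool
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, estimates))
    df = len(estimates) - 1
    return 0.0 if q == 0 else max(0.0, (q - df) / q) * 100
```

An I² of 0 indicates that all variation across studies is compatible with sampling error alone, whereas values above 80, as seen for most observational models here, indicate that the bulk of the variation reflects real between-study differences.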

Finally, we assessed consistency of findings for predictors examined in more than one of the different study designs (observational, experimental, and medicinal cannabis studies), as illustrated for four meta-analytical models in Fig. 6 (see Supplementary Fig. 7 for the complete set of results). Triangulating the results highlighted that consistency with respect to the direction of effects was particularly high for age ( d Experiments  = −0.14 versus d Observational  = −0.19 versus d Quasi-Experimental  = −0.16) and gender ( d Experiments  = −0.09 versus d Observational  = −0.07 versus d Quasi-Experimental  = −0.25) on CAPS. By contrast, little consistency across the different study designs was present with respect to cannabis use histories, notably age of onset of cannabis use ( d Observational  = −0.3 versus d Quasi-Experimental  = 0.24) and use of high-THC cannabis ( d Observational  = 0.12 versus d Quasi-Experimental  = −0.13).

Figure 6

Pooled estimates of Cohen’s d when estimated separately for each of the different study designs. The I 2 statistic (scale of 0 to 100) indexes the level of heterogeneity across the estimates included in the meta-analysis.

In this work, we examined rates and predictors of acute CAPS by synthesizing evidence from three distinct study designs: observational research, experimental studies administering THC, and studies testing medicinal cannabis products. Our results led to a number of key findings regarding the risk of CAPS in individuals exposed to cannabis. First, significant rates of CAPS were reported by all three study designs. This indicates that risk of acute psychosis-like symptoms exists after exposure to cannabis, irrespective of whether it is used recreationally, administered in controlled experiments, or prescribed as a medicinal product. Second, rates of CAPS vary across the different study designs, with substantially higher rates of CAPS in observational and experimental samples than in medicinal cannabis samples. Third, not every individual exposed to cannabis is equally at risk of CAPS, as the interplay between individual differences and the pharmacological properties of cannabis likely plays an important role in modulating risk. In particular, risk appears most amplified in vulnerable individuals (for example, young age, pre-existing mental health problems) and increases with higher doses of THC (as shown in experimental studies).

Rates of cannabis-associated psychotic symptoms

Summarizing the existing evidence on rates of CAPS, we find that cannabis can acutely induce CAPS in a subset of cannabis-exposed individuals, irrespective of whether it is used recreationally, administered in controlled experiments, or prescribed as a medicinal product. Importantly, rates of CAPS varied substantially across the designs. More specifically, similar rates of CAPS were reported by observational and experimental evidence (around 19% and 21% in cannabis-exposed individuals, respectively), while considerably lower rates of CAPS were documented in medicinal cannabis samples (between 1% and 2%).

A number of factors likely contribute to the apparently different rates of CAPS across the three study designs. First, rates of CAPS are not directly comparable as different, design-specific measures were used: in observational/experimental research, CAPS is typically defined as the occurrence of transient cannabis-induced psychosis-like symptoms, whereas medicinal trials screen for CAPS as the occurrence of first-rank psychotic symptoms, often resulting in treatment discontinuation. 20 , 30 , 31 As such, transient CAPS may indeed occur commonly in cannabis-exposed individuals (as evident in the higher rates in observational/experimental research), while risk of severe CAPS requiring medical attention is less frequently reported (resulting in lower reported rates in medicinal cannabis samples). This converges with our meta-analytic results, showing that severe CAPS (full psychotic episode) may occur in about 1 in 200 (0.5%) cannabis users. Another key difference between medicinal trials and experimental/observational research lies in the demographic profile of participants recruited into the studies. For example, individuals taking part in medicinal trials were substantially older (mean age: 48 years) compared with subjects taking part in observational or experimental studies (mean age: 24 and 25 years, respectively). As such, older age may have buffered some of the adverse effects reported by adolescent individuals. Finally, cannabis products used in medicinal trials contain noticeable levels of CBD (for example, Sativex, with a THC/CBD ratio of approximately 1:1), a ratio different from that typically found in street cannabis (for example, >15% THC and <1% CBD 32 ) and in the experimental studies included in our meta-analyses (pure THC). As such, the use of medicinal cannabis (as opposed to street cannabis) may constitute a somewhat safer option. 
However, the potentially protective effects of CBD in this context require further investigation as we did not find a consistent effect of CBD co-administration on THC-induced psychosis-like symptoms. While earlier experimental studies included in our work were suggestive of protective effects of CBD, 33 , 34 , 35 two recent studies did not replicate these findings. 36 , 37

Interestingly, lower but significant rates of CAPS were also observed in placebo groups assessed as part of THC-challenge studies (% THC  = 25% versus % placebo  = 11%) and medicinal cannabis trials (% THC  = 3% versus % placebo  = 1%), highlighting that psychotic symptoms occur not only in the context of cannabis exposure. This is in line with the notion that cannabis use can increase risk of psychosis but appears to be neither a sufficient nor necessary cause for the emergence of psychotic symptoms. 38

Predictors of CAPS

Summarizing evidence on predictors of CAPS, we found that individual vulnerabilities and the pharmacological properties of cannabis both appear to play an important role in modulating risk. Regarding the pharmacological properties of cannabis, evidence from experimental studies showed that the administration of THC increases risk of CAPS, both in a single-dose and dose-dependent manner. Given the nature of the experimental design, these effects are independent of potential confounders that bias estimates obtained from observational studies. More challenging to interpret are therefore findings on individual cannabis use histories (for example, frequency/severity of cannabis use, age of onset of use, preferred cannabis strain) as assessed in observational studies. Contrary to evidence linking high-frequency and early-onset cannabis use to long-term risk of psychosis, 39 none of these factors was associated with CAPS in our study. This discrepancy may indicate that cumulative effects of THC exposure are expressed differently for long-term risk of psychosis and acute CAPS: while users accustomed to cannabis may show a more blunted acute response as a result of tolerance, they are nevertheless at a higher risk of developing the clinical manifestation of psychosis in the long run. 38

We also tested a number of meta-analytical models for predictors tapping into demographic and mental health dimensions. Interestingly, among the assessed demographic factors, only age and gender were associated with CAPS, with younger and female individuals reporting increased levels of CAPS. Other factors often linked to mental health, such as education or socioeconomic status, were not related to CAPS. Concerning predictors indexing mental health, we found converging evidence showing that a predisposition to psychosis increased the risk of experiencing CAPS. In addition, individuals with other pre-existing mental health vulnerabilities (for example, bipolar disorder, depression, anxiety, addiction liability) also showed a higher risk of CAPS, indicating that risk may stem partly from a common vulnerability to mental health problems.

These results align with findings from studies focusing on the biological correlates of CAPS, showing that increased activity of dopamine, a neurotransmitter implicated in the etiology of psychosis, 40 altered sensitivity to cannabis. By contrast, none of the a priori selected candidate genes (chosen mostly to index schizophrenia liability) modulated risk of CAPS. This meta-analytic finding is coherent with results from the largest available genome-wide association study on schizophrenia, 41 where none of the candidate genes reached genome-wide significance ( P  < 5 × 10 −8 ) ( Supplementary Information ). Instead, as for any complex trait, genetic risk underlying CAPS is likely to be more polygenic in nature, possibly converging on pathways as yet to be identified. As such, genetic testing companies that screen for the aforementioned genetic variants to provide their customers with an individualized risk profile (such as the Cannabis Genetic Test offered by Lobo Genetics ( https://www.lobogene.com )) are unlikely to fully capture the genetic risk underlying CAPS. Similarly, genetic counseling programs targeting specifically AKT1 allele carriers in the context of cannabis use 42 may be only of limited use when trying to reduce cannabis-associated harms.

Implications for research on cannabis use and psychosis

This work has a number of implications for future research avenues. First, experimental studies administering THC constitute the most stringent available causal inference method when studying risk of CAPS. Future studies should therefore capitalize on experimental designs to advance our understanding of the acute pharmacological effects of cannabis, building on recent work on standard cannabis units, 43 dose–response risk profiles, 44 and the interplay of different cannabinoids. 44 , 45

Despite the value of experimental studies in causal inference, observational studies are essential to identify predictors of CAPS that cannot be experimentally manipulated (for example, age, long-term/chronic exposure to cannabis) and to strengthen external validity. However, a particular challenge for inference from observational studies results from bias due to confounding and reverse causation. Triangulating and comparing findings across study designs can therefore help to identify potential sources of bias that are specific to the different study designs. 46 For example, despite THC dosing being robustly associated with CAPS in experimental studies, we did not find an association between cannabis use patterns (for example, use of high-THC cannabis strains) and CAPS in observational and quasi-experimental studies. This apparent inconsistency may result from THC effects that are blunted by long-term, early-onset and heavy cannabis use. For other designs, reverse causation may bias the association between cannabis use patterns and CAPS: as individuals may reduce cannabis consumption as a result of adverse acute effects, 47 the interpretation of cross-sectional estimates concerning different cannabis exposure and risk of CAPS is particularly challenging. Future observational studies should therefore exploit more robust causal inference methods (for example, THC administration in naturalistic settings 48 or within-subject comparisons controlling for time-invariant confounds 49 ) to better approximate the experimental design. In particular, innovative designs that can provide a higher temporal resolution on cannabis exposures and related experiences (for example, experience sampling, 50 assessing daily reactivity to cannabis 51 ) are a valuable addition to the causal inference toolbox for cannabis research.
Applying genetically informed causal inferences such as Mendelian randomization analyses 52 can further help to triangulate findings, which would be possible once genome-wide summary results for both different cannabis use patterns and CAPS become available.

With respect to medicinal trials, it is important to note that an assessment of CAPS has not been a primary research focus. Although psychotic events are recognized as a potential adverse reaction to medicinal cannabis, 53 data on CAPS are rarely reported by medicinal trials: only about 20% of medicinal cannabis randomized controlled trials screen for psychosis as a potential adverse effect. 5 As such, trials should systematically monitor CAPS, in addition to longer-term follow-ups assessing the risk of psychosis as a result of medicinal cannabis use. In particular, the use of validated instruments designed to capture more-subtle changes in CAPS should be included in trials to more adequately assess adverse reactions associated with medicinal cannabis products.

Second, with respect to factors associated with risk of CAPS, we find that these are similar to factors associated with onset of psychosis, notably pre-existing mental health vulnerabilities, 54 dose–response effects of cannabis, 55 and young age. 12 The key question deserving further attention is therefore whether CAPS constitutes, per se, a risk marker for long-term psychosis. Preliminary evidence found that, among individuals with recent-onset psychosis, 37% reported having experienced their first psychotic symptoms during cannabis intoxication. 56 Future longitudinal evidence building on this is required to determine whether subclinical cannabis-associated psychotic symptoms can help to identify users at high risk of developing psychosis in the long run. Follow-up research should also examine longitudinal trajectories of adverse cannabis-induced experiences and the distress associated with these experiences, given research suggesting that high levels of distress/persistence may constitute a marker of clinical relevance of psychotic-like experiences. 57 While few studies have explored this question in the context of CAPS, there is, for example, evidence suggesting that the level of distress caused by acute adverse reactions to cannabis may depend on the specific symptom dimension. 58 Here the highest levels of distress resulted from cannabis-associated paranoia and anxiety, rather than cannabis-associated hallucinations or experiences tapping into physical sensations (for example, body humming, numbness). In addition, some evidence highlights the re-occurring nature of CAPS in cannabis-exposed individuals. 22 , 58 Further research focusing on individuals with persisting symptoms of CAPS may therefore help to advance our knowledge concerning individual vulnerabilities underlying the development of long-term psychosis in the context of cannabis use.

Importantly, our synthesizing analysis is not immune to the sources of bias that exist for the different study designs, and our findings should therefore be considered in light of the aforementioned limitations (for example, residual confounding or reverse causation in observational studies, limited external validity in experimental studies). Nevertheless, comparing findings across the different study designs allowed us to pin down areas of inconsistency, which existed mostly with regard to cannabis-related parameters (for example, age of onset, frequency of use) and CAPS. In addition, we observed large levels of heterogeneity among most meta-analysis models, highlighting that study-specific findings may vary as a result of different sample characteristics and study methodologies. Future studies aiming to further discern potential sources of variation such as study design features (for example, treatment length in medicinal trials, route of THC administration in experimental studies), statistical modeling (for example, the type of confounding factors considered in observational research), and sample demographics (for example, age of the participants, previous experience with cannabis) are therefore essential when studying CAPS.

Conclusions

Our results demonstrate that cannabis can induce acute psychotic symptoms in individuals using cannabis for recreational or medicinal purposes. Some individuals appear to be particularly sensitive to the adverse acute effects of cannabis, notably young individuals with pre-existing mental health problems and individuals exposed to high levels of THC. Future studies should therefore monitor more closely adverse cannabis-related outcomes in vulnerable individuals as these individuals may benefit most from harm-reduction efforts.

Systematic search

A systematic literature search was performed in three databases (MEDLINE, EMBASE, and PsycInfo) following the PRISMA guidelines. 59 The final search was conducted on 6 December 2023 using 26 search terms indexing cannabis/THC and 20 terms indexing psychosis-like outcomes or cannabis-intoxication experiences (see Supplementary Information for a complete list of search terms). Search terms were chosen on the basis of terminology used in studies assessing CAPS, including observational studies (self-reported cannabis-induced psychosis-like experiences), THC-challenge studies (testing change in psychosis-like symptoms following THC administration), and medicinal studies testing the efficacy and safety of medicinal cannabis products (adverse events related to medicinal cannabis). Before screening the identified studies for inclusion, we removed non-relevant article types (reviews, case reports, comments, guidelines, editorials, letters, newspaper articles, book chapters, dissertations, conference abstracts) and duplicates using the R package revtools 60 . A senior researcher experienced in meta-analyses on cannabis use (T.S.) then reviewed all titles and abstracts for their relevance before conducting full-text screening. To reduce the risk of wrongful inclusion at the full-text screening stage, 10% of the articles selected for full-text screening were cross-checked for eligibility by a second researcher (E.M.).
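Deduplication of the kind performed here with the R package revtools typically works by matching records on normalized titles. A hypothetical Python equivalent (function names and matching rule are ours, for illustration only):

```python
import re

def normalize_title(title):
    """Collapse case, punctuation and whitespace so that near-identical
    database records (e.g. MEDLINE versus EMBASE exports) match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def drop_duplicates(records):
    """Keep the first record per normalized title; later duplicates are dropped."""
    seen, unique = set(), []
    for rec in records:
        key = normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

Real deduplication tools additionally use fuzzy matching and fields such as DOI, year, and author list; exact title matching is the simplest version of the idea.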

Data extraction

We included all study estimates that could be used to derive rates of CAPS (the proportion of cannabis-exposed individuals reporting CAPS) or effect sizes (Cohen’s d ) for factors predicting CAPS. CAPS was defined as the occurrence of hallucinations, paranoia, and/or delusions during cannabis intoxication. These symptom-level items have been identified as the most reliable self-report measures screening for psychosis when validated against clinical interview measures. 61 Table 1 provides examples of CAPS as measured across the three different study designs. In brief, from observational studies, we extracted data if CAPS was assessed in cannabis-exposed individuals on the basis of self-report measures screening for subjective experiences while under the influence of cannabis. From experimental studies administering THC, CAPS was measured as the degree of psychotic symptom change in response to THC, either estimated from a between-subject (placebo groups versus THC group) or within-subject (pre-THC versus post-THC assessment) comparison. We also included data from natural experiments (referred to as quasi-experimental studies hereafter), where psychosis-like experiences were monitored in recreational cannabis users before and after they consumed their own cannabis products. 23 , 62 Finally, with respect to trials testing the efficacy and/or safety of medicinal cannabis products containing THC, we extracted data on adverse events, including the occurrence of psychosis, hallucinations, delusions, and/or paranoia during treatment with medicinal cannabis products. Medicinal studies that tested the effects of cannabis products not containing THC (for example, CBD only, olorinab, lenabasum) were not included.

For 10% of the included studies, data on rates and predictors of CAPS were extracted by a second researcher (W.B.), and agreement between the two extracted datasets was assessed by comparing the pooled estimates on rates and predictors of CAPS. In addition, following recommendations for improved reproducibility and transparency in meta-analytical works, 63 we provide all extracted data, the corresponding analytical scripts, and transformation information in the study repository.

Statistical analysis

Rates of CAPS

We extracted the raw estimates of rates of CAPS as reported by observational, experimental, and medicinal cannabis studies. Classification of CAPS differs across the three study designs. In observational studies, occurrence of CAPS is typically defined as the experience of psychotic-like symptoms while under the influence of cannabis. In experimental studies administering THC, CAPS is commonly defined as a clinically significant change in psychotic symptom severity (for example, ≥3 points increase in Positive and Negative Syndrome Scale positive scores following THC 33 ). Finally, in medicinal cannabis samples, a binary measure of CAPS indicates whether psychotic symptoms occurred as an adverse event throughout the treatment with medicinal cannabis products. We derived rates of CAPS (R_CAPS = X/N, where X is the count of individuals reporting CAPS and N the sample size) and the corresponding confidence intervals using the function BinomCI and the Clopper–Pearson method as implemented in the R package DescTools. 64 To estimate the pooled proportions, we fitted random-effects models or multilevel random-effects models as implemented in the R package metafor. 65 Multilevel random-effects models were used whenever accounting for non-independent sampling errors was necessary (described further below). Risk of publication bias was assessed using Peters' test 66 and funnel plots and, if indicated (P_Peters < 0.05), corrected using the trim-and-fill method ( Supplementary Methods ).
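The paper computes these intervals with DescTools::BinomCI in R. Purely as an illustration, the same exact (Clopper–Pearson) interval can be obtained from the beta distribution; the sketch below is in Python/SciPy, and the function name caps_rate_ci is mine, not the authors'.

```python
# Illustrative re-implementation of an exact Clopper-Pearson interval
# for a CAPS rate, analogous to the R call
# DescTools::BinomCI(x, n, method = "clopper-pearson").
from scipy.stats import beta

def caps_rate_ci(x, n, conf=0.95):
    """Rate of CAPS (x reporters out of n exposed individuals) with
    an exact Clopper-Pearson confidence interval."""
    alpha = 1.0 - conf
    # Lower/upper bounds are quantiles of beta distributions; the
    # bounds collapse to 0 or 1 at the edges (x = 0 or x = n).
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return x / n, lower, upper

# Example: 5 of 100 cannabis-exposed individuals report CAPS
rate, lo, hi = caps_rate_ci(5, 100)  # rate = 0.05, CI roughly (0.016, 0.113)
```

Unlike the normal approximation, the exact interval never produces bounds outside [0, 1], which matters here because several included studies report very low CAPS counts.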

To derive the pooled effects of factors predicting CAPS, we converted study estimates to the standardized effect size Cohen’s d as a common metric. For studies reporting mean differences, two formulas were used for the conversion. First, for studies reporting mean differences from between-subject comparisons (independent samples), we used the following formula:

d = (M_E − M_C) / SD_P

where M_E and M_C are the mean scores on a continuous scale (severity of CAPS), reported for individuals exposed (M_E) and unexposed (M_C) to a certain risk factor (for example, cannabis users with pre-existing mental health problems versus cannabis users without pre-existing mental health problems). The formulas used to derive the pooled standard deviation, SD_P, and the variance of Cohen’s d are listed in the Supplementary Methods . Second, an extension of the preceding formula was used to derive Cohen’s d from within-subject comparisons, comparing time-point one (M_T1) with time-point two (M_T2). The formula takes into account the dependency between the two groups: 67

d = (M_T2 − M_T1) / (SD_diff / √(2(1 − r)))

where r indexes the correlation between the pairs of observations, such as the correlation between the pre- and post-THC condition in the same set of individuals for a particular outcome measure. The correlation coefficient was set to be r  = 0.5 for all studies included in the meta-analysis, on the basis of previous research. 13 We also assessed whether varying within-person correlation coefficients altered the interpretation of the results by re-estimating the pooled Cohen’s d for predictors of CAPS for two additional coefficients ( r  = 0.3 and r  = 0.7). The results were then compared with the findings obtained from the main analysis ( r  = 0.5).
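The two conversions described above can be sketched as follows. This is an illustrative Python translation of the standard formulas (the authors' actual scripts are in R and their exact derivations are in the Supplementary Methods); the function names are mine, and the within-subject version follows the matched-groups formula of Borenstein et al. (ref. 67), with SD_diff taken to be the standard deviation of the difference scores.

```python
import math

def d_between(m_e, m_c, sd_p):
    """Cohen's d for independent samples: standardized difference
    between exposed (m_e) and unexposed (m_c) group means, scaled
    by the pooled standard deviation sd_p."""
    return (m_e - m_c) / sd_p

def d_within(m_t1, m_t2, sd_t1, sd_t2, r=0.5):
    """Cohen's d for paired (pre-THC vs post-THC) comparisons,
    accounting for the correlation r between paired observations
    (matched-groups formula, Borenstein et al. 2009)."""
    # Standard deviation of the difference scores
    sd_diff = math.sqrt(sd_t1 ** 2 + sd_t2 ** 2 - 2 * r * sd_t1 * sd_t2)
    # Rescale back to raw-score units so d is comparable with d_between
    sd_within = sd_diff / math.sqrt(2 * (1 - r))
    return (m_t2 - m_t1) / sd_within

d1 = d_between(1.0, 0.5, 1.0)          # 0.5
d2 = d_within(10.0, 10.5, 1.0, 1.0)    # 0.5 with the default r = 0.5
```

Note that when the pre and post SDs are equal, sd_within reduces to that common SD, so the point estimate of d does not depend on r; r then matters mainly for the variance of d, which is one reason the sensitivity analysis re-estimates results at r = 0.3 and r = 0.7.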

From experimental studies reporting multiple time points of psychosis-like experiences following THC administration (for example, refs. 68 , 69 , 70 , 71 , 72 ), we selected the most immediate time point following THC administration. Of note, whenever studies reported test statistics instead of means (for example, t -test or F -test statistics), the preceding formula was amended to accommodate these statistics. In addition, to allow for the inclusion of studies reporting metrics other than mean comparisons (for example, regression coefficients, correlation coefficients), we converted the results to Cohen’s d using existing formulas. All formulas used in this study are provided in the Supplementary Information . Whenever studies reported non-significant results without providing sufficient data to estimate Cohen’s d (for example, results reported only as P > 0.05), we used a conservative estimate of P = 1 and the corresponding sample size as the input to derive Cohen’s d . Finally, if studies reported estimates in figures only, we used WebPlotDigitizer ( https://automeris.io/WebPlotDigitizer ) to extract the data. Since the conversion of estimates from one metric to another may result in loss of precision, we also extracted the original P -value estimates (whenever reported as numerical values) and assessed the level of concordance with the P values corresponding to the estimated Cohen’s d .
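The conversion from test statistics or P values to Cohen's d can be illustrated as below. These are common textbook formulas, offered as an assumption about the implementation rather than a copy of it — the authors' exact formulas are in their Supplementary Information. The sketch also shows why the conservative input P = 1 yields a null effect.

```python
import math
from scipy.stats import t as t_dist

def d_from_t(t_stat, n1, n2):
    """Convert an independent-samples t statistic to Cohen's d
    (textbook formula: d = t * sqrt(1/n1 + 1/n2))."""
    return t_stat * math.sqrt(1.0 / n1 + 1.0 / n2)

def d_from_p(p_two_sided, n1, n2):
    """Recover Cohen's d from a two-sided P value via the inverse
    t distribution. The conservative input P = 1 (used for results
    reported only as 'P > 0.05') gives t = 0 and hence d = 0."""
    df = n1 + n2 - 2
    t_stat = t_dist.ppf(1.0 - p_two_sided / 2.0, df)
    return d_from_t(t_stat, n1, n2)

d_sig = d_from_t(2.0, 50, 50)   # 0.4
d_ns = d_from_p(1.0, 50, 50)    # 0.0: the conservative null estimate
```

Because P-to-d conversion discards the sign and some precision, cross-checking the reconstructed P values against the originally reported ones (as the authors do) is a useful safeguard.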

Next, a series of meta-analytical models were fitted, each pooling estimates of Cohen’s d that belonged to the same class of predictors (for example, estimates indexing the effect of dopaminergic function on CAPS; estimates indexing the effect of age on CAPS). A detailed description of the classification of the included predictors is provided in the Supplementary Methods . Cohen’s d estimates were pooled if at least two estimates were available for one predictor class, using one of the following models:

Aggregation models (pooling effect sizes coming from the same underlying sample)

Random-effects models (pooling effect sizes coming from independent samples)

Multilevel random-effects models (pooling effect sizes coming from both independent and non-independent samples)

Predictors that could not meaningfully be grouped were not included in meta-analytical models but are, for completeness, reported as individual study estimates in the Supplementary Information . Levels of heterogeneity for each meta-analytical model were explored using the I 2 statistic, 73 indexing the contribution of study heterogeneity to the total variance. Here, I 2  > 30% represents moderate heterogeneity and I 2  > 50% represents substantial heterogeneity. Risk of publication bias was assessed visually using funnel plots alongside the application of Egger’s test to test for funnel-plot asymmetry. This test was performed for meta-analytical models containing at least six effect estimates. 74 The trim-and-fill 75 method was used whenever risk of publication bias was indicated ( P Egger  < 0.05). To assess whether outliers distorted the conclusions of the meta-analytical models, we applied leave-one-out and outlier analysis 76 as implemented in the R package dmetar, 77 where a pooled estimate was re-calculated after omitting studies that deviated from the pooled estimate. Further details on all applied sensitivity analyses are provided in the Supplementary Methods .
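The paper fits these models with metafor (and runs outlier diagnostics with dmetar) in R. Purely to make the pooled quantities concrete, the sketch below implements a minimal DerSimonian–Laird random-effects pooler with the I^2 statistic; it is an illustration only, not the authors' model, which may use a different variance estimator and a multilevel structure.

```python
import math

def random_effects(yi, vi):
    """Minimal DerSimonian-Laird random-effects meta-analysis of
    effect sizes yi with sampling variances vi; returns the pooled
    estimate, its standard error, tau^2 and I^2 (in %)."""
    k = len(yi)
    w = [1.0 / v for v in vi]                                # fixed-effect weights
    ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))    # Cochran's Q
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                            # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0    # % of total variance
    w_star = [1.0 / (v + tau2) for v in vi]                  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, yi)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2, i2

# Toy predictor class with three homogeneous Cohen's d estimates
pooled, se, tau2, i2 = random_effects([0.2, 0.4, 0.3], [0.04, 0.04, 0.04])
# pooled = 0.30 with I^2 = 0 for this homogeneous toy set
```

With Q below its degrees of freedom, tau^2 truncates to zero and the model collapses to a fixed-effect analysis; by the thresholds used in the paper, I^2 above 30% would flag moderate and above 50% substantial heterogeneity.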

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The data are publicly available via GitHub at github.com/TabeaSchoeler/TS2023_MetaCAPS .

Code availability

All analytical code used to analyze, summarize, and present the data is accessible via GitHub at github.com/TabeaSchoeler/TS2023_MetaCAPS .

World Drug Report 2022 (UNODC, 2022); https://www.unodc.org/unodc/en/data-and-analysis/wdr-2022_booklet-3.html

Turna, J. et al. Overlapping patterns of recreational and medical cannabis use in a large community sample of cannabis users. Compr. Psychiatry 102 , 152188 (2020).


Rhee, T. G. & Rosenheck, R. A. Increasing use of cannabis for medical purposes among US residents, 2013–2020. Am. J. Prev. Med. 65 , 528–533 (2023).

Green, B., Kavanagh, D. & Young, R. Being stoned: a review of self-reported cannabis effects. Drug Alcohol Rev. 22 , 453–460 (2003).

Whiting, P. F. et al. Cannabinoids for medical use. JAMA. 313 , 2456 (2015).

Callaghan, R. C. et al. Associations between Canada’s cannabis legalization and emergency department presentations for transient cannabis-induced psychosis and schizophrenia conditions: Ontario and Alberta, 2015–2019. Can. J. Psychiatry 67 , 616–625 (2022).


Manthey, J., Freeman, T. P., Kilian, C., López-Pelayo, H. & Rehm, J. Public health monitoring of cannabis use in Europe: prevalence of use, cannabis potency, and treatment rates. Lancet Reg. Health Eur. 10 , 100227 (2021).

Pratt, M. et al. Benefits and harms of medical cannabis: a scoping review of systematic reviews. Syst. Rev. 8 , 320 (2019).

McGee, R., Williams, S., Poulton, R. & Moffitt, T. A longitudinal study of cannabis use and mental health from adolescence to early adulthood. Addiction 95 , 491–503 (2000).

Large, M., Sharma, S., Compton, M. T., Slade, T. & Nielssen, O. Cannabis use and earlier onset of psychosis. Arch. Gen. Psychiatry 68 , 555 (2011).

Marconi, A., Di Forti, M., Lewis, C. M., Murray, R. M. & Vassos, E. Meta-analysis of the association between the level of cannabis use and risk of psychosis. Schizophr. Bull. 42 , 1262–1269 (2016).

Hasan, A. et al. Cannabis use and psychosis: a review of reviews. Eur. Arch. Psychiatry Clin. Neurosci. 270 , 403–412 (2020).

Hindley, G. et al. Psychiatric symptoms caused by cannabis constituents: a systematic review and meta-analysis. Lancet Psychiatry 7 , 344–353 (2020).

Sexton, M., Cuttler, C. & Mischley, L. K. A survey of cannabis acute effects and withdrawal symptoms: differential responses across user types and age. J. Altern. Complement. Med. 25 , 326–335 (2019).

Schoeler, T., Ferris, J. & Winstock, A. R. Rates and correlates of cannabis-associated psychotic symptoms in over 230,000 people who use cannabis. Transl. Psychiatry 12 , 369 (2022).

Winstock, A., Lynskey, M., Borschmann, R. & Waldron, J. Risk of emergency medical treatment following consumption of cannabis or synthetic cannabinoids in a large global sample. J. Psychopharmacol. 29 , 698–703 (2015).

Kaufmann, R. M. et al. Acute psychotropic effects of oral cannabis extract with a defined content of Δ9-tetrahydrocannabinol (THC) in healthy volunteers. Pharmacopsychiatry 43 , 24–32 (2010).

Cameron, C., Watson, D. & Robinson, J. Use of a synthetic cannabinoid in a correctional population for posttraumatic stress disorder-related insomnia and nightmares, chronic pain, harm reduction, and other indications. J. Clin. Psychopharmacol. 34 , 559–564 (2014).

Aviram, J. et al. Medical cannabis treatment for chronic pain: outcomes and prediction of response. Eur. J. Pain 25 , 359–374 (2021).

Serpell, M. G., Notcutt, W. & Collin, C. Sativex long-term use: an open-label trial in patients with spasticity due to multiple sclerosis. J. Neurol. 260 , 285–295 (2013).

Colizzi, M. et al. Delta-9-tetrahydrocannabinol increases striatal glutamate levels in healthy individuals: implications for psychosis. Mol. Psychiatry. 25 , 3231–3240 (2020).

Bianconi, F. et al. Differences in cannabis-related experiences between patients with a first episode of psychosis and controls. Psychol. Med. 46 , 995–1003 (2016).

Valerie Curran, H. et al. Which biological and self-report measures of cannabis use predict cannabis dependency and acute psychotic-like effects? Psychol. Med. 49 , 1574–1580 (2019).

Kleinloog, D., Roozen, F., De Winter, W., Freijer, J. & Van Gerven, J. Profiling the subjective effects of Δ9-tetrahydrocannabinol using visual analogue scales. Int. J. Methods Psychiatr. Res. 23 , 245–256 (2014).

Ganesh, S. et al. Psychosis-relevant effects of intravenous delta-9-tetrahydrocannabinol: a mega analysis of individual participant-data from human laboratory studies. Int. J. Neuropsychopharmacol. 23 , 559–570 (2020).

Kendler, K. S., Ohlsson, H., Sundquist, J. & Sundquist, K. Prediction of onset of substance-induced psychotic disorder and its progression to schizophrenia in a Swedish national sample. Am. J. Psychiatry 176 , 711–719 (2019).

Arendt, M., Rosenberg, R., Foldager, L., Perto, G. & Munk-Jørgensen, P. Cannabis-induced psychosis and subsequent schizophrenia-spectrum disorders: follow-up study of 535 incident cases. Br. J. Psychiatry 187 , 510–515 (2005).

Kleinloog, D. et al. Does olanzapine inhibit the psychomimetic effects of Δ9-tetrahydrocannabinol? J. Psychopharmacol. 26 , 1307–1316 (2012).

Liem-Moolenaar, M. et al. Central nervous system effects of haloperidol on THC in healthy male volunteers. J. Psychopharmacol. 24 , 1697–1708 (2010).

Patti, F. et al. Efficacy and safety of cannabinoid oromucosal spray for multiple sclerosis spasticity. J. Neurol. Neurosurg. Psychiatry 87 , 944–951 (2016).

Thaler, A. et al. Single center experience with medical cannabis in Gilles de la Tourette syndrome. Parkinsonism Relat. Disord . 61 , 211–213 (2019).

Chandra, S. et al. New trends in cannabis potency in USA and Europe during the last decade (2008–2017). Eur. Arch. Psychiatry Clin. Neurosci. 269 , 5–15 (2019).

Englund, A. et al. Cannabidiol inhibits THC-elicited paranoid symptoms and hippocampal-dependent memory impairment. J. Psychopharmacol. 27 , 19–27 (2013).

Gibson, L. P. et al. Effects of cannabidiol in cannabis flower: implications for harm reduction. Addict. Biol. 27 , e13092 (2022).

Sainz-Cort, A. et al. The effects of cannabidiol and delta-9-tetrahydrocannabinol in social cognition: a naturalistic controlled study. Cannabis Cannabinoid Res . https://doi.org/10.1089/can.2022.0037 (2022).

Lawn, W. et al. The acute effects of cannabis with and without cannabidiol in adults and adolescents: a randomised, double‐blind, placebo‐controlled, crossover experiment. Addiction 118 , 1282–1294 (2023).

Englund, A. et al. Does cannabidiol make cannabis safer? A randomised, double-blind, cross-over trial of cannabis with four different CBD:THC ratios. Neuropsychopharmacology 48 , 869–876 (2023).

Arseneault, L., Cannon, M., Witton, J. & Murray, R. M. Causal association between cannabis and psychosis: examination of the evidence. Br. J. Psychiatry 184 , 110–117 (2004).

Di Forti, M. et al. The contribution of cannabis use to variation in the incidence of psychotic disorder across Europe (EU-GEI): a multicentre case-control study. Lancet Psychiatry 6 , 427–436 (2019).

McCutcheon, R. A., Abi-Dargham, A. & Howes, O. D. Schizophrenia, dopamine and the striatum: from biology to symptoms. Trends Neurosci. 42 , 205–220 (2019).

Trubetskoy, V. et al. Mapping genomic loci implicates genes and synaptic biology in schizophrenia. Nature 604 , 502–508 (2022).

Zwicker, A. et al. Genetic counselling for the prevention of mental health consequences of cannabis use: a randomized controlled trial‐within‐cohort. Early Interv. Psychiatry 15 , 1306–1314 (2021).

Hindocha, C., Norberg, M. M. & Tomko, R. L. Solving the problem of cannabis quantification. Lancet Psychiatry 5 , e8 (2018).

Englund, A. et al. The effect of five day dosing with THCV on THC-induced cognitive, psychological and physiological effects in healthy male human volunteers: a placebo-controlled, double-blind, crossover pilot trial. J. Psychopharmacol. 30 , 140–151 (2016).

Wall, M. B. et al. Individual and combined effects of cannabidiol and Δ9-tetrahydrocannabinol on striato-cortical connectivity in the human brain. J. Psychopharmacol. 36 , 732–744 (2022).

Hammerton, G. & Munafò, M. R. Causal inference with observational data: the need for triangulation of evidence. Psychol. Med. 51 , 563–578 (2021).

Sami, M., Notley, C., Kouimtsidis, C., Lynskey, M. & Bhattacharyya, S. Psychotic-like experiences with cannabis use predict cannabis cessation and desire to quit: a cannabis discontinuation hypothesis. Psychol. Med. 49 , 103–112 (2019).

Morgan, C. J. A., Schafer, G., Freeman, T. P. & Curran, H. V. Impact of cannabidiol on the acute memory and psychotomimetic effects of smoked cannabis: naturalistic study. Br. J. Psychiatry 197 , 285–290 (2010).

Schoeler, T. et al. Association between continued cannabis use and risk of relapse in first-episode psychosis: a quasi-experimental investigation within an observational study. JAMA Psychiatry 73 , 1173–1179 (2016).

Sznitman, S., Baruch, Y. Ben, Greene, T. & Gelkopf, M. The association between physical pain and cannabis use in daily life: an experience sampling method. Drug Alcohol Depend. 191 , 294–299 (2018).

Henquet, C. et al. Psychosis reactivity to cannabis use in daily life: an experience sampling study. Br. J. Psychiatry 196 , 447–453 (2010).

Pingault, J.-B. et al. Using genetic data to strengthen causal inference in observational research. Nat. Rev. Genet. 19 , 566–580 (2018).

Hill, K. P. Medical cannabis. JAMA 323 , 580 (2020).

Esterberg, M. L., Trotman, H. D., Holtzman, C., Compton, M. T. & Walker, E. F. The impact of a family history of psychosis on age-at-onset and positive and negative symptoms of schizophrenia: a meta-analysis. Schizophr. Res. 120 , 121–130 (2010).

Di Forti, M. et al. Proportion of patients in south London with first-episode psychosis attributable to use of high potency cannabis: a case-control study. Lancet Psychiatry 2 , 233–238 (2015).

Peters, B. D. et al. Subjective effects of cannabis before the first psychotic episode. Aust. N. Z. J. Psychiatry 43 , 1155–1162 (2009).

Karcher, N. R. et al. Persistent and distressing psychotic-like experiences using adolescent brain cognitive development study data. Mol. Psychiatry 27 , 1490–1501 (2022).

LaFrance, E. M., Stueber, A., Glodosky, N. C., Mauzay, D. & Cuttler, C. Overbaked: assessing and predicting acute adverse reactions to cannabis. J. Cannabis Res. 2 , 3 (2020).

Moher, D., Liberati, A., Tetzlaff, J. & Altman, D. G. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Brit. Med. J. 339 , b2535 (2009).

Westgate, M. J. revtools: an R package to support article screening for evidence synthesis. Res. Synth. Methods. 10 , 606–614 (2019).

Kelleher, I., Harley, M., Murtagh, A. & Cannon, M. Are screening instruments valid for psychotic-like experiences? A validation study of screening questions for psychotic-like experiences using in-depth clinical interview. Schizophr. Bull. 37 , 362–369 (2011).

Morgan, C. J. A., Freeman, T. P., Powell, J. & Curran, H. V. AKT1 genotype moderates the acute psychotomimetic effects of naturalistically smoked cannabis in young cannabis smokers. Transl. Psychiatry 6 , e738 (2016).

Ivimey‐Cook, E. R., Noble, D. W. A., Nakagawa, S., Lajeunesse, M. J. & Pick, J. L. Advice for improving the reproducibility of data extraction in meta‐analysis. Res. Synth. Methods. 14 , 911–915 (2023).

Signorell, A. et al. DescTools: Tools for Descriptive Statistics R Package version 0.99 https://cran.r-project.org/web/packages/DescTools/index.html (2019).

Viechtbauer, W. Conducting meta-analyses in R with the metafor package. J. Stat. Softw . https://doi.org/10.18637/jss.v036.i03 (2010).

Peters, J. L. Comparison of two methods to detect publication bias in meta-analysis. JAMA 295 , 676–680 (2006).

Borenstein, M., Hedges, L. V., Higgins, J. P. T. & Rothstein, H. R. in Introduction to Meta-Analysis 225–238 (John Wiley & Sons, 2009); https://doi.org/10.1002/9780470743386.ch24

Mason, O. et al. Acute cannabis use causes increased psychotomimetic experiences in individuals prone to psychosis. Psychol. Med. 39 , 951–956 (2009).

D’Souza, D. C. et al. Delta-9-tetrahydrocannabinol effects in schizophrenia: implications for cognition, psychosis, and addiction. Biol. Psychiatry 57 , 594–608 (2005).

Solowij, N. et al. A randomised controlled trial of vaporised Δ9-tetrahydrocannabinol and cannabidiol alone and in combination in frequent and infrequent cannabis users: acute intoxication effects. Eur. Arch. Psychiatry Clin. Neurosci. 269 , 17–35 (2019).

Vadhan, N. P., Corcoran, C. M., Bedi, G., Keilp, J. G. & Haney, M. Acute effects of smoked marijuana in marijuana smokers at clinical high-risk for psychosis: a preliminary study. Psychiatry Res. 257 , 372–374 (2017).

Radhakrishnan, R. et al. GABA deficits enhance the psychotomimetic effects of Δ9-THC. Neuropsychopharmacology 40 , 2047–2056 (2015).

Higgins, J. P. T. & Thompson, S. G. Quantifying heterogeneity in a meta-analysis. Stat. Med. 21 , 1539–1558 (2002).

Tang, J.-L. & Liu, J. L. Misleading funnel plot for detection of bias in meta-analysis. J. Clin. Epidemiol. 53 , 477–484 (2000).

Duval, S. & Tweedie, R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics 56 , 455–463 (2000).

Viechtbauer, W. & Cheung, M. W.-L. Outlier and influence diagnostics for meta-analysis. Res. Synth. Methods 1 , 112–125 (2010).

Harrer, M., Cuijpers, P., Furukawa, T. & Ebert, D. D. dmetar: Companion R Package for the Guide ’Doing Meta-Analysis in R’ R package version 00.9000 http://dmetar.protectlab.org/ (2019).

Thomas, H. A community survey of adverse effects of cannabis use. Drug Alcohol Depend. 42 , 201–207 (1996).

Olsson, F. et al. An observational study of safety and clinical outcome measures across patient groups in the United Kingdom Medical Cannabis Registry. Expert Rev. Clin. Pharmacol. 16 , 257–266 (2023).

Arendt, M. et al. Testing the self-medication hypothesis of depression and aggression in cannabis-dependent subjects. Psychol. Med. 37 , 935–945 (2007).

Bonn-Miller, M. O. et al. The short-term impact of 3 smoked cannabis preparations versus placebo on PTSD symptoms: a randomized cross-over clinical trial. PLoS ONE 16 , e0246990 (2021).

Stokes, P. R. A., Mehta, M. A., Curran, H. V., Breen, G. & Grasby Paul, R. A. Can recreational doses of THC produce significant dopamine release in the human striatum? Neuroimage 48 , 186–190 (2009).

Zuurman, L. et al. Effect of intrapulmonary tetrahydrocannabinol administration in humans. J. Psychopharmacol. 22 , 707–716 (2008).

Safakish, R. et al. Medical cannabis for the management of pain and quality of life in chronic pain patients: a prospective observational study. Pain Med. 21 , 3073–3086 (2020).

Favrat, B. et al. Two cases of ‘cannabis acute psychosis’ following the administration of oral cannabis. BMC Psychiatry 5 , 17 (2005).

Balash, Y. et al. Medical cannabis in Parkinson disease: real-life patients' experience. Clin. Neuropharmacol. 40 , 268–272 (2017).

Habib, G. & Levinger, U. Characteristics of medical cannabis usage among patients with fibromyalgia. Harefuah 159 , 343–348 (2020).


Beaulieu, P. Effects of nabilone, a synthetic cannabinoid, on postoperative pain. Can J. Anesth. 53 , 769–775 (2006).

Rup, J., Freeman, T. P., Perlman, C. & Hammond, D. Cannabis and mental health: adverse outcomes and self-reported impact of cannabis use by mental health status. Subst. Use Misuse 57 , 719–729 (2022).


Acknowledgments

This research was funded in whole, or in part, by the Wellcome Trust (grant nos. 218641/Z/19/Z (to T.S.) and 215917/Z/19/Z (to J.R.B.)). For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. J.-B.P. is funded by the Medical Research Foundation 2018 Emerging Leaders First Prize in Adolescent Mental Health (MRF-160-0002-ELP-PINGA (to J.-B.P.)). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and affiliations

Department of Computational Biology, University of Lausanne, Lausanne, Switzerland

Tabea Schoeler

Department of Clinical, Educational and Health Psychology, Division of Psychology and Language Sciences, University College London, London, UK

Tabea Schoeler, Jessie R. Baldwin, Ellen Martin, Wikus Barkhuizen & Jean-Baptiste Pingault

Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, UK

Jessie R. Baldwin & Jean-Baptiste Pingault


Contributions

T.S., J.R.B., and J.-B.P. conceived and designed the study. T.S., E.M., and W.B. acquired the data. T.S. analyzed the data and drafted the paper. All authors (T.S., J.R.B., E.M., W.B., and J.-B.P.) reviewed and approved the manuscript.

Corresponding author

Correspondence to Tabea Schoeler .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Mental Health thanks Evangelos Vassos and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary information.

Supplementary Figs. 1–7, Methods (literature search, estimation of Cohen’s d , classification of predictors of CAPS, analysis plan), and references.

Reporting Summary

Supplementary tables

Supplementary Tables 1–5.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Schoeler, T., Baldwin, J.R., Martin, E. et al. Assessing rates and predictors of cannabis-associated psychotic symptoms across observational, experimental and medical research. Nat. Mental Health (2024). https://doi.org/10.1038/s44220-024-00261-x


Received : 06 September 2023

Accepted : 26 April 2024

Published : 03 June 2024

DOI : https://doi.org/10.1038/s44220-024-00261-x



Conditions for approaching shared value creation management in the Japanese rice flour-related business: application of mixed methods research

  • Open access
  • Published: 03 June 2024

Cite this article



  • Lily Kiminami   ORCID: orcid.org/0000-0003-1784-6283 1 ,
  • Shinichi Furuzawa 1 &
  • Akira Kiminami 2  

This is the second paper on creating shared value (CSV) management in Japanese rice flour-related businesses conducted by the same authors. In the first study, the relationships among business philosophies, business strategies and business outcomes of rice flour-related corporates in Japan were clarified using structural equation modeling (SEM) and cognitive mapping of questionnaire survey results. The management philosophy, effective altruism, influences business strategies (potential head market, tail market, organizational learning, and proposals from stakeholders) of rice flour-related corporates, inducing innovation and determining current business performance and future prospects for shared value creation. The business performance reflects their expectations for the rice flour market, and influences the direction of market development. In addition, we showed a need for policy innovations that strengthen effective altruism and create shared value through organizational learning of the stakeholders in rice flour-related businesses. Therefore, the purpose of this study was to clarify conditions for approaching CSV management in domestic rice flour-related businesses by applying mixed methods research (MMR). Specifically, a latent class analysis (LCA) was introduced to classify the management characteristics of rice flour-related businesses with survey results, and a qualitative comparative analysis (QCA) was conducted on the CSV management entities extracted from the LCA to clarify the necessary and sufficient conditions for achieving CSV management. The results revealed that there are very few rice flour-related businesses in Japan that have approached CSV management, and that sufficient conditions for approaching CSV management in rice flour-related businesses are a combination of effective altruism and various management strategies (long tail/organizational learning/innovation/stakeholder proposals).
Therefore, we conclude that to achieve a sustainable regional development of rice flour-related businesses, policy innovations that integrate pull-type and push-type strategies are important.


1 Introduction

The production volume of rice for rice flour (a part of new demand rice) in 2022 reached 45,903 tons in Japan. The top five prefectures of Niigata (12,731 tons; 27.7%), Tochigi (8035 tons; 17.5%), Saitama (4395 tons; 9.6%), Akita (2569 tons; 5.6%), and Ishikawa (2176 tons; 4.7%) account for 65.1% of the whole country (MAFF 2023 ). Production of rice for rice flour is concentrated in specific regions because receiving subsidies requires developing actual users and collaborating with primary processors, secondary processors, and distributors (Kinoshita 2012 ). On the other hand, a survey report on the use of rice flour by food manufacturers conducted by the NPO Domestic Rice Flour Promotion Network ( 2017 ) pointed out that companies selling domestically produced rice flour products target consumers with high health consciousness or allergies who insist on 'domestic production' and 'local production for local consumption', and that the high price of rice flour is the most important issue in expanding its usage.

However, through an interview survey, Takahashi ( 2012 ) revealed that the formation of a self-sustaining industrial cluster in Kumamoto Prefecture is contributing to the development of a rice flour-related business aimed at solving social issues. In particular, by collaborating with producers, distributors, processing companies, and related research institutions, flour milling companies have succeeded in reducing costs and creating demand by introducing high-yield rice.

As an empirical study on the CSV management of rice flour-related corporates in Japan, Kiminami et al. ( 2024 ) clarified the relationships among business philosophy, business strategy, and business outcome of rice flour-related corporates by introducing structural equation modeling (SEM) and cognitive map analysis to the results of a questionnaire survey. The results revealed that the management philosophy (Effective Altruism, and member of the Rice Flour Association) of rice flour-related corporates influences their business strategies (potential head market, tail market, organizational learning, and proposals from stakeholders) which induce innovation and determine business performance (current performance and future prospects for shared value creation), and the business performance reflects their expectations for the rice flour market and influences the direction of market development. Based on the analytical results, the research suggested a policy innovation that strengthens effective altruism and creating shared value through organizational learning of stakeholders in rice flour-related businesses.

Therefore, the purpose of our study is to clarify the conditions for approaching CSV management in domestic rice flour-related businesses, following up on the results of the previous study. The methodology is distinctive in that latent class analysis (LCA) is applied to the survey results to classify the management characteristics of rice flour-related businesses, and qualitative comparative analysis (QCA) is then conducted on the CSV-management entities extracted by the LCA to clarify the necessary and sufficient conditions for approaching CSV management. Based on the empirical results, we derive policy implications for regional science.

2 Literature review

2.1 Creating shared value

Creating shared value (CSV) is the idea that companies create social value by working to solve social needs and problems and, as a result, create economic value (Porter and Kramer 2011). CSV is often criticized as vague in how it differs from corporate social responsibility (CSR). According to Dembek et al. (2016), definitions of CSV roughly divide into those that emphasize conceptual theory and those that emphasize the relationship with real society; the former includes Porter and Kramer, and the latter includes Maltz et al. (2011). These different positions on the definition of CSV are also reflected in different views on the relationship between CSV and sustainability. In addition, as pointed out by Horlings (2015), there are three ways to understand regional value: economic, intentional, and symbolic approaches. In terms of sustainable regional development, companies that introduce CSV can be expected to follow a formation process, in management philosophy and market strategy, that differs from that of companies that do not.

Although there are no studies targeting the rice industry or rice flour, there are some empirical analyses of shared value creation in the agriculture and food sectors. Wiśniewska-Paluszak and Paluszak (2019) found that companies engaged in CSV in Polish agribusiness are gaining new competitive advantages by solving social issues and redefining business models through cooperation with stakeholders. Additionally, Saraswati (2021) pointed out that Indonesian food companies create value by placing the highest priority on consumers, while also creating shared value with society, employees, the environment, and business partners. Furthermore, using mixed methods research, Kiminami et al. (2022) found that social entrepreneurs, as the creative class in Japanese urban agriculture, are approaching shared value creation while generating cognitive innovation through organizational learning with stakeholders.

2.2 Effective altruism

Effective altruism (EA) is an evidence- and theory-based philosophy and movement that seeks to maximize improvement of the world, with particular emphasis on global poverty, existential risk to humanity, and animal welfare (MacAskill 2015). EA is particularly useful for maximizing social impact with limited resources when the problem is large in scale (a solvable social problem), low in visibility (niche), and without other viable alternatives. The idea of EA enables impact assessment and prioritization of projects that solve social problems, and many such efforts are already in practice. Practical initiatives in the agriculture and food sectors include the movement to promote the production of alternative proteins (Good Food Institute), as well as R&D and market entry by private companies. In addition, research has focused on the awareness and behavior of individuals who donate based on EA and has analyzed the factors that promote or inhibit such behavior (e.g., Jaeger and van Vugt 2022). However, there are no examples of empirically analyzing the decision-making and actions of existing businesses and companies from the perspective of EA.

2.3 Social innovation in rice market

Kiminami et al. (2021a) pointed out that the bottleneck in creating innovation in Japan's rice farming to date is that agricultural policies (push policies), including rice policies, have not had the expected effects. Although rice production adjustment has been officially abolished, it persists as a game equilibrium and customary institution, and even where structural reforms occur in the economic realm, cultural belief systems (peasantism) dominate the political and social realms of the institution.

On the other hand, Christensen et al. (2019) classify innovation into three types: sustaining innovation, efficiency innovation, and market-creating innovation, and point out the following. Market-creating innovations create new markets serving people for whom products either did not exist or were not affordable for a variety of reasons, making complex and expensive products far more affordable. Market-creating innovation is a strong foundation for sustained economic prosperity; for markets to be created and sustained, they must be profitable, or at least have the prospect of producing profits in the future. It is also important to create jobs and, most importantly, to change the culture through new markets. At the same time, because a society's institutions reflect the culture and values of its people, a pull strategy is needed rather than simply pushing effective institutions.

Social innovation (SI) is a new solution to meet social needs. It also leads to new or better capabilities and relationships and better utilization of resources and assets, increasing a society's ability to act (Murray et al. 2010). Europe explicitly incorporated SI into its food and agricultural innovation policies in 2010 (European Commission 2010). In 2020, the "Farm to Fork Strategy (F2F)" was launched with the aim of creating a sustainable food system, introducing policies that place particular emphasis on environmental and climate change issues (European Commission 2020). These can be seen as policy innovations that combine push and pull strategies to promote SI in the food and agriculture sector by popularizing CSV-type management.

2.4 Long tail theory and corporate strategy

Traditionally, sales at brick-and-mortar stores focused on best-selling products, which make up roughly 20% of all products, based on the Pareto principle (80% of sales are generated by the top 20% of products or customers). Meanwhile, consumer choice theory in economics has been extended to incorporate considerations such as the cost of information gathering, the incompleteness of information, and the limits of consumers' cognitive abilities in gathering and processing information.
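The Pareto-style concentration described above can be made concrete with a toy calculation; the sales figures below are synthetic (a Zipf-like demand curve), not data from the paper.

```python
# Illustrative sketch (synthetic data, not from the study): how a
# Pareto-like sales distribution concentrates revenue in a small "head".
def head_share(sales, head_fraction=0.2):
    """Share of total sales captured by the top `head_fraction` of products."""
    ranked = sorted(sales, reverse=True)
    k = max(1, int(len(ranked) * head_fraction))
    return sum(ranked[:k]) / sum(ranked)

# Zipf-like demand: the product ranked r sells roughly 1000/r units
sales = [1000 / r for r in range(1, 101)]
share = head_share(sales)  # top 20% of 100 products
```

With this hypothetical demand curve the top 20% of products capture roughly 70% of sales; the remaining 80% of products form the long tail of niche demand.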

As Anderson (2008) argued, e-commerce and other new technologies improve efficiency by facilitating the entry of new producers and innovations, creating a "long tail" of niche products and reducing the market share of previously popular products. However, developing a long-tail market requires not only an efficient distribution system for a wide variety of products and services but also the ability to reach consumers with diverse niche needs and provide products and services that meet those needs. Salvador et al. (2020) found that business success in long-tail markets requires reducing the cost of creating and maintaining large assortments on the supply side and of exploring large assortments on the demand side.

However, the characteristics of a company's long-tail strategy differ depending on the characteristics of the business (e.g., online retailer, brick-and-mortar retailer, manufacturer). Therefore, establishing a long-tail market requires promotions that raise latent demand in the tail, a system that can supply many niche products at low prices, mechanization, and mass customization to efficiently produce a wide variety of products in small quantities (Ministry of Agriculture, Forestry and Fisheries 2019). Furthermore, Kiminami et al. (2021b) explicitly applied the long-tail market concept to the Japanese rice market. The results showed that rice consumption can be expanded through market-creating innovation that expands the head portion by satisfying latent needs for rice and expands the tail portion through businesses that leverage consumers' intrinsic motivation.

2.5 Organizational learning

There are various theories of organizational learning, each with different learning subjects, purposes, processes, and styles. Huber (1991) describes organizational learning as a process by which an organization acquires new information and knowledge, periodically modifies that knowledge, and changes the organization's potential scope of action. Senge (2006) considers an organization that continuously develops its ability to adapt and change to be a learning organization, growing in five disciplines: systems thinking, personal mastery, mental models, building a shared vision, and team learning. According to Iriyama (2019), in business administration innovation is part of organizational learning in a broad sense, and it is common to acquire new knowledge through learning and reflect it in organizational outcomes.

Tippins and Sohi (2003) used structural equation modeling to evaluate the introduction of organizational learning in IT companies using four indicators (knowledge acquisition, knowledge diffusion, knowledge interpretation, and organizational memory) and found that organizational learning affects corporate performance. Attia and Eldin (2018) analyzed the impact of knowledge management capabilities, organizational learning, and supply chain management on the organizational performance of food companies in Saudi Arabia and found that organizational learning improves corporate performance. Furthermore, Kiminami et al. (2024) clarified the relationship between a company's management philosophy, management strategy, and management performance.

3 Analytical framework and methods

3.1 Analytical framework

Based on the literature review above, we frame our research questions as follows: What is the current situation of CSV-type management in rice flour-related businesses in Japan? What are the conditions for approaching CSV management under these circumstances? Is policy innovation required for rice flour-related businesses? The analytical framework for this study is shown in Fig. 1. To achieve the research purpose, we formulate the following hypotheses for verification: latent class analysis (LCA) is applied to verify Hypothesis 1, and qualitative comparative analysis (QCA) is applied to verify Hypothesis 2.

Fig. 1 Analytical framework

Hypothesis 1: There are very few rice flour-related businesses in Japan that have approached CSV-type management.

Hypothesis 2: A sufficient condition for approaching CSV-type management in rice flour-related businesses is the combination of the management philosophy of effective altruism and various management strategies (long tail/organizational learning/innovation/stakeholder proposals).

3.2 Data and analysis methods

3.2.1 Questionnaire survey

The questionnaire was distributed to retailers and stores listed on each Agricultural Administration Bureau website, members of the National Rice Flour Association (only those listed on its website), companies supporting the R10 Project, Footnote 1 and rice flour distributors. The survey was conducted by mail in September 2022; the final number distributed was 974 (excluding those with unknown addresses), and 240 were returned. The main questions covered basic attributes (industry classification, number of employees, sales), effective altruism, market potential, organizational learning, innovation, and management performance. Table 1 summarizes the attributes of the respondents. By industry, "food manufacturing" accounts for more than half, followed by "agriculture, forestry and fisheries", "food distribution", and "restaurant". In terms of sales, "10 million yen to less than 30 million yen" was the most common, followed by "100 million yen to less than 500 million yen" and "less than 10 million yen". Companies with sales of 500 million yen or more make up about 24.3%, so large companies account for a certain share. In terms of number of employees, "0 to 5" was the most common, followed by "6 to 20" and "21 to 50". In the analysis, 231 samples were used, excluding those whose basic attributes were not answered (Appendix 1).

3.2.2 Mixed methods research

Traditional research has adopted either the "qualitative method", which understands phenomena by describing them in detail and accurately, or the "quantitative method", which captures phenomena numerically and understands them through statistical analysis. However, elucidating complex phenomena and problems may require complex data collection and analysis methods. For this reason, the mixed methods approach, which integrates qualitative and quantitative research, is currently attracting attention.

Creswell and Plano Clark (2017) summarized mixed methods approaches into three types: convergent designs, explanatory sequential designs, and exploratory sequential designs. In a convergent design, quantitative and qualitative data are collected and analyzed separately, and the results of the analyses are combined or compared. In an explanatory sequential design, quantitative data are collected and analyzed first, and then qualitative data are collected and analyzed to explain the results. An exploratory sequential design first explores an issue through qualitative data collection and analysis, followed by quantitative data collection and analysis to develop measurements and interventions based on the findings. The mixed methods approach used in this study collects quantitative data from a questionnaire survey to extract CSV-type management through latent class analysis (LCA), and then analyzes the conditions for realizing CSV-type management through qualitative comparative analysis (QCA). It is therefore a mixed methods approach with an explanatory sequential design.

3.2.2.1 Latent class analysis (LCA)

Latent class analysis is a method that identifies the latent types (latent classes) of which a population of respondents consists and reveals the structure of each (Bollen 2002; Magidson et al. 2020). In recent years, studies applying LCA have increased in areas such as consumer knowledge and environmentally conscious behavior in food selection (Peschel et al. 2016), joint-venture decision-making in agriculture (Kragt et al. 2019), and farmers' willingness to pay for animal welfare investments (Peden et al. 2019). It is therefore also effective for analyzing long-tail markets, where the diversity of economic agents' preferences and behavioral patterns matters.
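LCA is normally run with dedicated software (e.g., the poLCA package in R), but the core idea, alternating between estimating each respondent's class membership and re-estimating class shares and item-response probabilities, can be sketched as a toy EM algorithm. Everything below (data, parameters, seed) is illustrative, not the study's estimation procedure.

```python
import random

def lca_em(data, n_classes=2, n_iter=100, seed=1):
    """Toy EM for latent class analysis on binary survey items
    (illustrative only; real studies use dedicated LCA software)."""
    rng = random.Random(seed)
    n, m = len(data), len(data[0])
    pi = [1.0 / n_classes] * n_classes               # class shares
    p = [[rng.uniform(0.3, 0.7) for _ in range(m)]   # P(item j = 1 | class c)
         for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior probability that each respondent belongs to each class
        post = []
        for row in data:
            w = []
            for c in range(n_classes):
                lik = pi[c]
                for j, x in enumerate(row):
                    lik *= p[c][j] if x else 1 - p[c][j]
                w.append(lik)
            s = sum(w) or 1e-300
            post.append([v / s for v in w])
        # M-step: update class shares and item-response probabilities
        for c in range(n_classes):
            wc = sum(r[c] for r in post) or 1e-300
            pi[c] = wc / n
            p[c] = [sum(post[i][c] * data[i][j] for i in range(n)) / wc
                    for j in range(m)]
    return pi, p

# Two clearly separated response patterns should be recovered as two classes
data = [[1, 1, 1, 1]] * 10 + [[0, 0, 0, 0]] * 10
pi, p = lca_em(data)
```

On this synthetic data the recovered class shares approach 0.5 each, with one class's item-response probabilities near 1 and the other's near 0, which is the sense in which LCA "reveals the structure" of each latent type.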

3.2.2.2 Qualitative Comparative Analysis (QCA)

QCA is a standard method for drawing causal inferences from a small number of cases. It is innovative in using set theory and Boolean algebra to systematize formal procedures for inferring causal relationships from case comparisons. Research using QCA has increased rapidly since the 2010s, mainly in business administration, political science, environmental studies, and sociology (Oana et al. 2021). Methodologically, QCA can analyze small datasets and even fuzzy concepts, integrates easily with case studies, and can reveal causal complexity (complex causal paths from cause to effect) from cases.

Because QCA bridges quantitative and qualitative analysis, it is well suited as one of the methods in mixed methods research. For example, in an empirical study combining SEM and QCA, Torres et al. (2021) used SEM and fsQCA to clarify the process and the necessary and sufficient conditions by which gamification influences consumer brand loyalty.

4 Analytical results

4.1 Results of latent class analysis (LCA)

Here, we use LCA to classify rice flour-related businesses. Searching for the optimal number of classes using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), the optimum was two classes by BIC and four by AIC (Appendix 2). Considering interpretability, the number of classes was finally set to four (AIC = 2661.833, BIC = 2947.554, chi-square = 1010.19). Table 2 shows the results with four classes: the proportion of each class and the expected probability for each question item.
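AIC and BIC can disagree because BIC's parameter penalty grows with sample size (ln n versus a constant 2). A minimal computation shows the pattern; the log-likelihoods and parameter counts below are hypothetical, not the study's values.

```python
import math

def aic_bic(log_lik, n_params, n_obs):
    """AIC = -2 ln L + 2k;  BIC = -2 ln L + k ln n (lower is better)."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * math.log(n_obs)
    return aic, bic

# Hypothetical log-likelihoods for 2- vs 4-class models (n = 231 firms):
# BIC penalizes the larger model's extra parameters more, since ln(231) > 2.
aic2, bic2 = aic_bic(-1300.0, 25, 231)
aic4, bic4 = aic_bic(-1255.0, 51, 231)
```

With these hypothetical numbers, AIC prefers the 4-class model while BIC prefers the 2-class model, mirroring the kind of disagreement reported above, which is why interpretability is used as a tie-breaker.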

Class 1 businesses selected many items for "current performance" and mostly answered "strongly agree" to all items of "future prospects", so they can be interpreted as CSV-type managements (attribution probability: 18.3%). Class 2 businesses most often answered "no selection" for "current performance" and "strongly disagree" or "somewhat disagree" for all items of "future prospects", so they can be interpreted as non-CSV type I managements (attribution probability: 13.4%). Class 3 businesses made one or four selections for "current performance" and answered "somewhat agree" to all items of "future prospects", so they can be interpreted as weak CSV- or CSR-type managements (attribution probability: 39.3%). Finally, Class 4 businesses have the second-highest percentage answering "no selection" for "current performance" and most often responded "neither agree nor disagree" to all items of "future prospects", so they can be interpreted as non-CSV type II managements (attribution probability: 29.0%).

Next, we performed a cross-tabulation analysis based on the LCA results and found that CSV-type management is characterized by "secondary processing", membership of the Rice Flour Association, and a management philosophy of effective altruism (Table 3).

Regarding management strategy, it was found that CSV-type management emphasizes "latent demand" (market potential), organizational learning, potential head markets, tail markets, and proposals from stakeholders (Table 4). Furthermore, regarding innovation, CSV-type management emphasizes all four innovation types (Table 5), and regarding management results, it emphasizes both "current performance" and "future prospects" (Table 6).

To summarize the results of the LCA and cross-tabulation analysis, CSV-type rice flour-related businesses have the following characteristics: they engage in secondary processing and sales, are members of related associations, and hold a high level of effective altruism as their management philosophy. This means that CSV-type companies place the highest priority on effectively resolving social issues through rice flour-related businesses and strive for sustainable business development. Their management strategy for realizing this goal is to view the rice flour market as a niche market with high potential and to stimulate multifaceted innovation related to rice flour through organizational learning and collaboration with stakeholders. As a result, their business performance is improving. However, very few companies have approached CSV management in rice flour-related businesses (Class 1: 18.3%). Hypothesis 1 is therefore supported.

4.2 Results of qualitative comparative analysis (QCA)

The QCA in this study follows previous research (Tamura 2015; Yokoyama and Azuma 2022). The flow of analysis is: (1) setting up and coding the causal conditions and the outcome, (2) analysis of necessary conditions, and (3) analysis of sufficient conditions. The analysis used R and the SetMethods package (Oana et al. 2021; Mori 2017, 2018).

Based on the literature review and the results of previous research, we set and coded the following five factors as causal conditions for success in creating shared value in rice flour-related businesses (Table 7). We set "future business prospects" as the outcome. The data matrix after coding is shown in Appendix 3.

First, we conduct a necessary-condition analysis for approaching CSV management in rice flour-related businesses, with "future prospects" as the business outcome; the results are shown in Table 8. In general, the benchmark for consistency in necessary-condition analysis is high, with a value of 0.9 or above taken as the guideline. In our results, no causal condition reaches a consistency of 0.9 or higher, so no factor can be considered a necessary condition for approaching CSV management. Among the positive outcomes, however, the highest consistency is for stakeholders (0.71), followed by innovation (0.70) and effective altruism (0.66).
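The consistency values cited here follow the standard fuzzy-set measure of necessity: the sum of min(x, y) over all cases divided by the sum of y, where x and y are set memberships in the condition and the outcome. A minimal sketch with hypothetical membership scores (not the paper's coded data):

```python
def necessity_consistency(condition, outcome):
    """Fuzzy-set consistency of X as a necessary condition for Y:
    sum(min(x, y)) / sum(y).  A value >= 0.9 is the usual benchmark."""
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(outcome)

# Hypothetical membership scores for four firms (illustrative only)
stakeholders = [0.9, 0.3, 0.8, 0.2]
future_prospects = [0.8, 0.7, 0.6, 0.5]
cons = necessity_consistency(stakeholders, future_prospects)
```

With these toy scores the consistency is about 0.73, comparable to the highest values reported above but still below the 0.9 benchmark, so the condition would not count as necessary.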

Second, we complete a truth table to explore sufficient conditions for CSV success in rice flour-related businesses. Footnote 2 In standard QCA there are three solution types, the "complex (conservative) solution", the "parsimonious solution", and the "intermediate solution", each resting on different assumptions about logical remainders. Since the "intermediate solution" is generally recommended, it is used here as well. The completed truth table is shown in Appendix 4.
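With five causal conditions the complete truth table has 2^5 = 32 rows, one per combination of the presence/absence of the conditions. The enumeration can be sketched as follows; the condition names mirror those used in the paper's solution formulas.

```python
from itertools import product

# The study's five causal conditions (names follow its solution terms)
conditions = ["FuzEfAltru2", "FuzOL", "FuzLongTail", "FuzInno", "FuzStHold"]

# One truth-table row per present(1)/absent(0) combination: 2**5 = 32 rows
rows = [dict(zip(conditions, combo))
        for combo in product([0, 1], repeat=len(conditions))]
```

Each empirical case is then assigned to the row its (dichotomized) memberships match, and each row is assessed for its consistency with the outcome before Boolean minimization.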

Table 9 and Fig. 2 show the results of the "intermediate solution" for future prospects. The logical formulas (I) to (V) below were derived. The consistency threshold was set at 0.8, and all directional expectations for the causal conditions were assumed to be "present". The solution consistency is 0.851 and the solution coverage is 0.857, so the results are generally good; the unique coverage of each term is less than 0.1. There are four relevant combinations: effective altruism and stakeholders; organizational learning and innovation; long tail and innovation; and effective altruism and long tail. An appropriate combination of these causal conditions is thus needed to improve the business prospects of CSV-type companies.
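The reported solution consistency and coverage follow the standard fuzzy-set formulas for sufficiency, where a conjunction such as "effective altruism AND stakeholders" is scored per case with fuzzy AND (the minimum of the memberships). A sketch with hypothetical membership scores (not the paper's data):

```python
def sufficiency_stats(combo, outcome):
    """Fuzzy-set sufficiency: consistency = sum(min)/sum(combo),
    coverage = sum(min)/sum(outcome)."""
    overlap = sum(min(c, y) for c, y in zip(combo, outcome))
    return overlap / sum(combo), overlap / sum(outcome)

# Hypothetical memberships for a conjunction like term (I):
# effective altruism AND stakeholders, scored case by case with min()
ef_altruism  = [0.9, 0.4, 0.7, 0.2]
stakeholders = [0.8, 0.6, 0.5, 0.3]
outcome      = [0.9, 0.3, 0.6, 0.4]
term1 = [min(a, b) for a, b in zip(ef_altruism, stakeholders)]
cons, cov = sufficiency_stats(term1, outcome)
```

With these toy scores the term's consistency is about 0.95 (above the 0.8 threshold used in the study) and its coverage about 0.82, illustrating how a combination can pass the consistency test while explaining only part of the outcome.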

Fig. 2 Venn diagram of the intermediate solution for sufficient conditions

FuzEfAltru2*FuzStHold (I)
+ FuzOL*FuzInno (II)
+ ~FuzLongTail*FuzStHold (III)
+ FuzLongTail*FuzInno (IV)
+ FuzEfAltru2*~FuzOL*FuzLongTail (V)

To summarize the QCA results: no factor qualifies as a necessary condition, though the relative importance of stakeholders, innovation, and effective altruism was confirmed. Regarding sufficient conditions, combinations of the management philosophy of effective altruism with various management strategies (innovation, long tail, stakeholders, and organizational learning) proved important. Companies conducting CSV-type management related to rice flour are generating SI through businesses aimed at solving social issues. Hypothesis 2 is therefore supported.

5 Conclusions and policy implications

In this study, we conducted an analysis using mixed methods research (MMR) with the aim of clarifying the conditions for approaching CSV management in domestic rice flour-related businesses. Based on the above-mentioned analysis, we obtained the following results.

Table 10 shows the types of policies based on the strengths and weaknesses of push-type and pull-type strategies. Japan's rice policy to date has focused on a push-type strategy (Policy B) in the absence of a pull-type strategy, promoting production in a direction that matches neither the latent needs of consumers nor social needs, and lacking a perspective on fostering and expressing producers' entrepreneurship. As a result, market-creating innovations that could increase both economic and social value have not been realized. In general, the development of rice flour-related business is strongly determined by the regional supply structure of raw-material rice, in terms of both price and quality, and is influenced by overall rice policy. Like the overall rice policy, policy for rice flour-related business emphasizes a push-type strategy at the production stage of raw-material rice (Policy B), with some pull-type strategies at the distribution, processing, and sales stages. Under these circumstances, only some rice flour-related businesses engaged in distribution, processing, and sales are approaching CSV management, so CSV-type management in rice flour-related business remains in its infancy. At the regional level, Kumamoto Prefecture shows a strong tendency toward a pull-type strategy (type C), while Niigata Prefecture follows a push-type strategy (type B). Footnote 3

To encourage rice flour-related businesses to approach CSV management in the future, policy innovation that integrates pull-type and push-type strategies is necessary (Policy A). First, clusters of rice flour-related businesses need to be formed across the boundaries of industries and regions; the current push-type strategy emphasizes forming rice flour food systems within regions even where this lacks rationality. Second, it is important not only to respond to consumers' latent needs and increase consumption, but also to change stakeholders' awareness in the form of a cultural shift regarding rice flour (on both the consumer side and the producer/supply side). Third, each rice flour-related business should evolve the management philosophy of effective altruism and constantly review and explore its management strategies through systematic triple-loop learning (Barbat et al. 2011). Finally, rice flour-related businesses are expected to expand from micro-level SI activities to macro-level activities that transcend industries, regions, and national borders and affect social structures. By facing issues that the food system has neglected, such as social needs (food safety and health, etc.) and social values (food security and climate change, etc.), a shared vision can be built through organizational learning.

Data availability

The datasets analyzed during the current study are available from the corresponding author upon reasonable request.

The R10 Project is an initiative implemented in Niigata Prefecture as a movement to replace 10% or more of wheat flour consumption with domestically produced rice flour. Efforts are underway to develop large-scale users, develop demand in new fields, and spread use in household consumption. Companies that support the project are registered as supporting companies.

In a complete truth table with five causal conditions, there are 2^5 = 32 possible combinations of the presence/absence of the causal conditions.

Niigata Prefecture is the region with the highest production of rice as well as raw material rice for flour. Most of the rice flour-related businesses have been a response to the adjustment of rice production. On the other hand, Kumamoto Prefecture is not a major rice-producing prefecture, but its rice flour-related businesses have been promoted through private initiatives. This has led to the development of popular varieties for rice flour such as Mizuho no Chikara. Additionally, a council made up of rice flour-related businesses has become a platform for innovation creation.

Anderson C (2008) The longer long tail: how endless choice is creating unlimited demand. Hyperion

Attia A, Eldin IE (2018) Organizational learning, knowledge management capability and supply chain management practices in the Saudi food industry. J Knowl Manag 22(6):1217–1242. https://doi.org/10.1108/JKM-09-2017-0409

Barbat G, Boigey P, Jehan I (2011) Triple-loop learning: theoretical framework, methodology and illustration (an example from the railway sector). Projectics 8(2):129. https://doi.org/10.3917/proj.008.0129

Bollen KA (2002) Latent variables in psychology and the social sciences. Annu Rev Psychol 53:605–634. https://doi.org/10.1146/annurev.psych.53.100901.135239

Christensen CM, Ojomo E, Dillon K (2019) The prosperity paradox: how innovation can lift nations out of poverty. Harper Business, New York

Creswell JW, Plano Clark V (2017) Designing and conducting mixed methods research, 3rd edn. SAGE Publications, New York

Dembek K, Singh P, Bhakoo V (2016) Literature review of shared value: a theoretical concept or a management buzzword? J Bus Ethics 137:231–267. https://doi.org/10.1007/s10551-015-2554-z

European Commission (2020) Farm to fork strategy: for a fair, healthy and environmentally friendly food system. European Union. https://food.ec.europa.eu/document/download/472acca8-7f7b-4171-98b0-ed76720d68d3_en . Accessed 5 Apr 2024

European Commission (2010) Europe 2020: a strategy for smart, sustainable and inclusive growth. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2010:2020:FIN:en:PDF . Accessed 5 Apr 2024

Horlings LG (2015) Values in place: a value-oriented approach toward sustainable place-shaping. Reg Stud Reg Sci 2(1):257–274. https://doi.org/10.1080/21681376.2015.1014062

Huber GP (1991) Organizational learning: the contributing processes and literatures. Organ Sci 2(1):88–115

Iriyama A (2019) Management theories of the global standard. Diamond Inc, Tokyo ( in Japanese )

Jaeger B, van Vugt M (2022) Psychological barriers to effective altruism: an evolutionary perspective. Curr Opin Psychol 44:130–134. https://doi.org/10.1016/j.copsyc.2021.09.008

Kiminami L, Furuzawa S, Kiminami A (2021a) Transformation of Japan’s rice policy toward innovation creation for a sustainable development. Asia-Pac J Reg Sci 5:351–371. https://doi.org/10.1007/s41685-020-00175-3

Kiminami L, Furuzawa S, Kiminami A (2021b) Rice policies for long-tail market-creating innovations: empirical study on consumers’ cognition and behavior in Japan. Asia-Pac J Reg Sci 5:909–931. https://doi.org/10.1007/s41685-021-00209-4

Kiminami L, Furuzawa S, Kiminami A (2022) Exploring the possibilities of creating shared value in Japan’s urban agriculture: using a mixed methods approach. Asia-Pac J Reg Sci 6:541–569. https://doi.org/10.1007/s41685-022-00233-y

Kiminami L, Furuzawa S, Kiminami A (2024) Empirical study on the rice flour business in Japan: introducing structural equation modeling (SEM) and cognitive mapping. Jpn J Agric Econ 26:1–22

Kinoshita K (2012) Towards the diffusion of rice flour. Bull Toyohashi Sozo Jr Coll 29:31–38 ( in Japanese )

Kragt ME, Lynch B, Llewellyn RS, Umberger WJ (2019) What farmer types are most likely to adopt joint venture farm business structures? Aust J Agric Resour Econ 63:881–896. https://doi.org/10.1111/1467-8489.12332

MacAskill W (2015) Doing good better: a radical new way to make a difference. Guardian Faber Publishing, London

MAFF (2019) Interview with experts in the Japanese food manufacturing industry. Commissioned project survey of comparative analysis for industrial structure in the food manufacturing industry in 2019 ( in Japanese ) https://www.maff.go.jp/j/budget/yosan_kansi/sikkou/tokutei_keihi/R1itaku/R1ippan/attach/pdf/index-43.pdf . Accessed 18 Oct 2023

MAFF (2023) Production volume of newly demanded rice produced in 2022 ( in Japanese ). https://www.maff.go.jp/j/seisan/jyukyu/komeseisaku/ . Accessed 14 Sept 2023

Magidson J, Vermunt JK, Madura JP (2020) Latent class analysis. In: SAGE research methods foundations. SAGE Publications

Maltz E, Thompson F, Ringold DJ (2011) Assessing and maximizing corporate social initiatives: a strategic view of corporate social responsibility. J Public Aff 11(4):344–352. https://doi.org/10.1002/pa.384

Mori D (2017) How to use software for qualitative comparative analysis (QCA): fs/QCA and R (1). Kumamoto Law Rev 140:250–209 ( in Japanese )

Mori D (2018) How to use software for qualitative comparative analysis (QCA): fs/QCA and R (2). Kumamoto Law Rev 141:388–348 ( in Japanese )

Murray R, Caulier-Grice J, Mulgan G (2010) The open book of social innovation. The Young Foundation

NPO Domestic Rice Flour Promotion Network (2017) Survey for companies regarding rice flour ( in Japanese ). https://www.maff.go.jp/j/seisan/keikaku/komeko/attach/pdf/index-74.pdf . Accessed 14 Sept 2023

Oana IE, Schneider CQ, Thomann E (2021) Qualitative comparative analysis using R: a beginner’s guide. Cambridge University Press, Cambridge

Book   Google Scholar  

Peden RSE, Akaichi F, Camerlink I, Simon LAB (2019) Pig farmers’ willingness to pay for management strategies to reduce aggression between pigs. PLoS ONE 14(11):e0224924. https://doi.org/10.1371/journal.pone.0224924

Peschel AO, Grebitus G, Steiner B, Veeman M (2016) How does consumer knowledge affect environmentally sustainable choices? Evidence from a cross-country latent class analysis of food labels. Appetite 106:78–91. https://doi.org/10.1016/j.appet.2016.02.162

Porter ME, Kramer MR (2011) The big idea: creating shared value (How to reinvent capitalism-and unleash a wave of innovation and growth). Harv Bus Rev 89(1–2):62–77

Salvador F, Piller FT, Aggarwal S (2020) Surviving on the long tail: an empirical investigation of business model elements for mass customization. Long Range Plan 53(4):101886. https://doi.org/10.1016/j.lrp.2019.05.006

Saraswati E (2021) Analysis of creating shared value in the food and beverage industry. Jurnal Ilmiah Akuntansi Dan Bisnis 16(1):154–162. https://doi.org/10.24843/JIAB.2021.v16.i01.p10

Senge PM (2006) The fifth discipline: the art and practice of the learning organization. Doubleday, New York

Takahashi M (2012) The development of the Kumamoto food industry cluster. Yokohama Bus Rev 33(1):71–85 ( in Japanese )

Tamura M (2015) Qualitative Comparative Analysis of business cases: exploring cause and effect using small data, Hakuto-shobo, Tokyo ( in Japanese )

Tippins MJ, Sohi RS (2003) IT competency and firm performance: is organizational learning a missing link? Strateg Manag J 24(8):745–761. https://doi.org/10.1002/smj.337

Torres P, Augusto M, Neves C (2021) Value dimensions of gamification and their influence on brand loyalty and word-of-mouth: relationships and combinations with satisfaction and brand love. Psychol Mark 39(1):59–75. https://doi.org/10.1002/mar.21573

Wiśniewska-Paluszak J, Paluszak G (2019) Examples of creating shared value (CSV) in agribusiness in Poland. Ann Pol Assoc Agric Agribus Econ XXI 2:297–306. https://doi.org/10.5604/01.3001.0013.2198

Yokoyama N, Azuma S (2022) Towards formalization of analytical approaches to tackle issues of retail business models: exploring the methodological strengths of process tracing method and qualitative comparative analysis (QCA). Jpn Mark J 41(4):53–64. https://doi.org/10.7222/marketing.2022.021 . ( in Japanese )

Download references

Acknowledgements

This research was supported by JSPS KAKENHI under Grant No. 20K06256 (Analysis of social entrepreneurship in urban agriculture: Creating shared value through social business), No. 20H03089 (Comprehensive study on structure and process of entrepreneurship in agriculture and rural sector), and No. 20K12280 (Analysis on innovation of social enterprise for sustainable agriculture and food systems). The authors wish to express their gratitude for this support.

Author information

Authors and Affiliations

Niigata University, Niigata, Japan

Lily Kiminami & Shinichi Furuzawa

The University of Tokyo, Tokyo, Japan

Akira Kiminami

Corresponding author

Correspondence to Lily Kiminami.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest associated with this manuscript.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Table 11.

See Fig. 3 .

Fig. 3 Scree plot of AIC and BIC versus number of latent classes
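The model-selection logic behind a scree plot like this can be sketched in a few lines: for each candidate number of latent classes, compute AIC and BIC from the fitted model's log-likelihood and parameter count, then look for the minimum (or the "elbow"). The log-likelihood values below are illustrative placeholders, not results from the article, and the parameter count per class and sample size are assumptions.

```python
import math

def aic(log_lik, n_params):
    # Akaike information criterion: 2k - 2 ln(L)
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n_obs):
    # Bayesian information criterion: k ln(n) - 2 ln(L)
    return n_params * math.log(n_obs) - 2 * log_lik

# Hypothetical log-likelihoods for models with 1..5 latent classes
# (illustrative numbers only, not taken from the article).
fits = {1: -1200.0, 2: -1100.0, 3: -1020.0, 4: -1010.0, 5: -1005.0}
n_obs = 300              # assumed sample size
params_per_class = 8     # assumed free parameters added per class

scores = {k: (aic(ll, k * params_per_class),
              bic(ll, k * params_per_class, n_obs))
          for k, ll in fits.items()}

# Pick the class count with the lowest BIC.
best_k = min(scores, key=lambda k: scores[k][1])
print(best_k)  # → 3
```

With these numbers AIC, which penalizes parameters more lightly, bottoms out at four classes while BIC favours three, which is why both curves are usually plotted together. In practice dedicated LCA software (e.g. the poLCA package in R) reports these criteria directly.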

See Table 12.

See Table 13.

Rights and permissions

This article is published under an open access license. See the 'Copyright Information' section of the article for details of the license and what re-use is permitted; for uses beyond the license, contact the Rights and Permissions team.

About this article

Kiminami, L., Furuzawa, S. & Kiminami, A. Conditions for approaching shared value creation management in the Japanese rice flour-related business: application of mixed methods research. Asia-Pac J Reg Sci (2024). https://doi.org/10.1007/s41685-024-00342-w

Received: 18 December 2023

Accepted: 18 May 2024

Published: 03 June 2024

DOI: https://doi.org/10.1007/s41685-024-00342-w

Keywords

  • Rice flour-related business
  • Creating shared value (CSV)
  • Latent class analysis (LCA)
  • Qualitative comparative analysis (QCA)
  • Mixed methods research (MMR)

JEL Classification

SYSTEMATIC REVIEW article

This article is part of the research topic Reviews in Gastroenterology 2023.

Electrogastrography Measurement Systems and Analysis Methods Used in Clinical Practice and Research: Comprehensive Review (provisionally accepted)

  • VSB-Technical University of Ostrava, Czechia

Electrogastrography (EGG) is a non-invasive method with high diagnostic potential for the prevention of gastroenterological pathologies in clinical practice. This paper presents a review of the measurement systems, procedures, and methods of analysis used in electrogastrography. A critical review of the historical and current literature is conducted, focusing on electrode placement, measurement apparatus, measurement procedures, and time- and frequency-domain methods for filtering and analysing the non-invasively measured electrical activity of the stomach. A total of 129 relevant articles, with a primary focus on experimental diet, were reviewed. The Scopus, PubMed and Web of Science databases were searched for English-language articles according to a specific query, following the PRISMA method. The research topic of electrogastrography has grown continuously in popularity since the first measurement by Professor Alvarez 100 years ago, and many researchers and companies are interested in EGG today. Measurement apparatus and procedures are still being developed in both commercial and research settings. Electrode layouts vary widely, ranging from a minimal number of electrodes for ambulatory measurements to very high electrode counts for spatial measurements. Most authors used an anatomically approximated layout with two active electrodes in a bipolar connection and a commercial electrogastrograph with a sampling rate of 2 or 4 Hz. Test subjects were usually healthy adults, and diet was controlled. Evaluation methods, however, are being developed at a slower pace, and signals are usually classified only by their dominant frequency. The main contribution of this review is an overview of the spectrum of measurement systems and procedures for electrogastrography developed by many authors; a firm medical standard, however, has not yet been defined. Therefore, it is not yet possible to use this method in clinical practice for objective diagnosis.
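The dominant-frequency classification described above can be illustrated with a minimal sketch. Assumptions here: a synthetic 3 cycles-per-minute (cpm) slow wave sampled at 4 Hz, and the commonly cited rhythm bands (roughly, below 2 cpm bradygastria, 2–4 cpm normogastria, above 4 cpm tachygastria); exact band limits vary between authors.

```python
import numpy as np

def dominant_frequency_cpm(signal, fs):
    """Dominant frequency of an EGG trace, in cycles per minute (cpm)."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)   # bin frequencies in Hz
    peak = np.argmax(spectrum[1:]) + 1                 # skip the DC bin
    return freqs[peak] * 60.0                          # Hz -> cpm

def classify_rhythm(cpm):
    # Commonly cited EGG bands; exact limits vary between authors.
    if cpm < 2.0:
        return "bradygastria"
    if cpm <= 4.0:
        return "normogastria"
    return "tachygastria"

# Synthetic 5-minute recording at 4 Hz with a 3 cpm gastric slow wave plus noise.
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
rng = np.random.default_rng(0)
egg = np.sin(2 * np.pi * (3.0 / 60.0) * t) + 0.1 * rng.standard_normal(t.size)

cpm = dominant_frequency_cpm(egg, fs)
print(round(cpm, 1), classify_rhythm(cpm))  # → 3.0 normogastria
```

A 5-minute window gives a frequency resolution of 0.2 cpm, which is why EGG studies record over minutes rather than seconds; real analyses typically also band-pass filter the signal before spectral estimation.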

Keywords: electrogastrography, non-invasive method, measurement systems, electrode placement, measurement apparatus, signal processing

Received: 19 Jan 2024; Accepted: 03 Jun 2024.

Copyright: © 2024 Oczka, Augustynek, Penhaker and Kubicek. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Jan Kubicek, VSB-Technical University of Ostrava, Ostrava, 708 33, Moravian-Silesian Region, Czechia
