Data Analysis in Research: Types & Methods

Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process — the first is data organization. The second is data reduction, achieved through summarization and categorization, which helps find patterns and themes in the data for easy identification and linking. The third and last is data analysis, which researchers perform in both top-down and bottom-up fashion.

On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “data analysis and data interpretation is a process representing the application of deductive and inductive logic to the research.”

Why analyze data in research?

Researchers rely heavily on data, as they have a story to tell or research problems to solve. Analysis starts with a question, and data is nothing but an answer to that question. But what if there is no question to ask? It is still possible to explore data without a problem — we call it ‘Data Mining’, and it often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when initiating the analysis. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.

Types of data in research

Every kind of data has the quality of describing things once a specific value is assigned to it. For analysis, you need to organize these values, and process and present them in a given context, to make them useful. Data can come in different forms; here are the primary data types.

  • Qualitative data: When the data presented has words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: questions about age, rank, cost, length, weight, scores, etc. all yield this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. The Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups; an item included in the categorical data cannot belong to more than one group. Example: a person responding to a survey by indicating their lifestyle, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data (see the sketch after this list).
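
To make that last point concrete, here is a minimal sketch of a chi-square test of independence using scipy; the contingency table and its counts are invented for illustration:

```python
# Hedged sketch: chi-square test of independence on invented survey counts.
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: marital status (rows) vs. smoking habit (columns)
observed = [
    [45, 30],  # married: smokers, non-smokers
    [25, 50],  # single:  smokers, non-smokers
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# A small p-value (e.g. < 0.05) suggests the two variables are not independent.
```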

Data analysis in qualitative research

Data analysis in qualitative research works a little differently from the analysis of numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insights from such complicated information is an involved process; hence qualitative data is typically used for exploratory research and data analysis.

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and find repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find “food” and “hunger” to be the most commonly used words and will highlight them for further analysis.
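
A minimal sketch of this word-based method, assuming a handful of invented responses and a hand-picked stopword list:

```python
# Hedged sketch: finding repetitive words in invented open-ended responses.
import re
from collections import Counter

responses = [
    "Food prices keep rising and hunger is widespread",
    "Hunger affects children the most",
    "We need better access to food and clean water",
]

stopwords = {"and", "the", "is", "to", "we", "most"}  # hand-picked for the example
words = [
    w
    for text in responses
    for w in re.findall(r"[a-z]+", text.lower())
    if w not in stopwords
]

print(Counter(words).most_common(3))  # e.g. [('food', 2), ('hunger', 2), ...]
```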

Keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is another highly recommended text analysis method used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, differentiating how a specific text is similar to or different from other texts.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations in enormous datasets.

There are several techniques for analyzing the data in qualitative research, but here are some commonly used methods:

  • Content Analysis: It is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes physical items. Whether and where to use this method depends on the research questions.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observations, and surveys. Most of the time, the stories or opinions shared by people are examined for answers to the research questions.
  • Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this particular method considers the social context within which the communication between the researcher and respondent takes place. Discourse analysis also looks at the respondent’s lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When researchers use this method, they might alter explanations or produce new ones until they arrive at a conclusion.

Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to understand whether the collected data sample meets pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey or, in an interview, that the interviewer asked every question devised in the questionnaire.

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or skip them accidentally. Data editing is a process wherein the researchers confirm that the provided data is free of such errors. They conduct the necessary consistency and outlier checks on the raw data to make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses. If a survey is completed with a sample size of 1,000, the researcher will create age brackets to distinguish the respondents based on their age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile.
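
A minimal sketch of data coding with pandas, where the ages, bin edges, and bracket labels are all invented for the example:

```python
# Hedged sketch: coding respondent ages into brackets with pandas.
import pandas as pd

df = pd.DataFrame({"age": [19, 23, 37, 41, 56, 62, 70]})  # invented sample

# Bin edges and labels are arbitrary choices for the illustration
df["age_bracket"] = pd.cut(
    df["age"],
    bins=[17, 25, 40, 60, 100],
    labels=["18-25", "26-40", "41-60", "60+"],
)

print(df["age_bracket"].value_counts().sort_index())
```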

Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical methods are by far the most favored for analyzing numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: ‘descriptive statistics’, used to describe the data, and ‘inferential statistics’, which help in comparing the data.

Descriptive statistics

This method is used to describe the basic features of versatile types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not support conclusions beyond the data at hand; the conclusions are again based on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to show the central point of a distribution.
  • Researchers use this method when they want to showcase the most common or average response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest scores.
  • Variance and standard deviation measure how far observed scores deviate from the mean.
  • These measures identify the spread of scores by stating intervals.
  • Researchers use them to show how spread out the data is and how strongly that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • These measures rely on standardized scores, helping researchers identify the relationship between different scores.
  • They are often used when researchers want to compare individual scores with the average (see the sketch after this list).
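
The short sketch below computes the measures described above on a small, invented set of scores, using numpy and the standard library:

```python
# Hedged sketch: the descriptive measures above, computed on invented scores.
import numpy as np
from statistics import mode

scores = np.array([56, 67, 67, 71, 74, 78, 82, 82, 82, 95])

print("mean:    ", scores.mean())                    # central tendency
print("median:  ", np.median(scores))
print("mode:    ", mode(scores.tolist()))
print("range:   ", scores.max() - scores.min())      # dispersion
print("variance:", scores.var(ddof=1))               # sample variance
print("std dev: ", scores.std(ddof=1))
print("quartiles:", np.percentile(scores, [25, 50, 75]))  # position
```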

For quantitative research, descriptive analysis often gives absolute numbers, but those numbers alone are not sufficient to demonstrate the rationale behind them. Nevertheless, it is necessary to think of the best method of research and data analysis suiting your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate the students’ average scores in schools. It is better to rely on descriptive statistics when the researchers intend to keep the research or outcome limited to the provided sample without generalizing it. For example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample representing that population. For example, you can ask a hundred or so audience members at a movie theater if they like the movie they are watching. Researchers then use inferential statistics on the collected sample to reason that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and uses them to demonstrate something about the population parameter.
  • Hypothesis testing: It’s about sampling research data to answer the survey research questions. For example, researchers might want to understand whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers want to understand the relationship between two or more variables without conducting experimental or quasi-experimental research, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the provided data has age and gender categories presented in rows and columns; a two-dimensional cross-tabulation makes the analysis seamless by showing the number of males and females in each age category (see the pandas sketch after this list).
  • Regression analysis: To understand the strength of the relationship between two variables, researchers rarely look beyond regression analysis, which is also a type of predictive analysis. In this method, you have a dependent variable, the essential factor you are trying to explain, and one or more independent variables, and you work out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to be ascertained in an error-free, random manner.
  • Frequency tables: The data is summarized in tabular form, showing how often each value or response occurs; this makes dominant responses, outliers, and data-entry problems easy to spot.
  • Analysis of variance (ANOVA): This statistical procedure tests the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means the research findings are significant. In many contexts, ANOVA testing and variance analysis are synonymous.
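
A minimal sketch of cross-tabulation with pandas, assuming an invented survey DataFrame with gender and age-group columns:

```python
# Hedged sketch: a two-dimensional cross-tabulation with pandas.
import pandas as pd

df = pd.DataFrame({  # invented survey responses
    "gender": ["F", "M", "F", "M", "F", "M", "F"],
    "age_group": ["18-25", "18-25", "26-40", "26-40", "26-40", "41-60", "41-60"],
})

# Rows: age category; columns: gender; cells: respondent counts
print(pd.crosstab(df["age_group"], df["gender"]))
```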
Considerations in research data analysis

  • Researchers must have the necessary skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Research and data analytics projects usually differ by scientific discipline; therefore, getting statistical advice at the beginning of analysis helps in designing the survey questionnaire, selecting data collection methods, and choosing samples.

  • The primary aim of research data analysis is to derive insights that are unbiased. Any mistake in, or bias while, collecting data, selecting an analysis method, or choosing an audience sample will lead to a biased inference.
  • No amount of sophistication in the analysis can rectify poorly defined objective outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find ways to deal with everyday challenges like outliers, missing data, data alteration, data mining, and graphical representation.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.

QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them with a medium to collect data by creating appealing surveys.

The 7 Most Useful Data Analysis Methods and Techniques

Data analytics is the process of analyzing raw data to draw out meaningful insights. These insights are then used to determine the best course of action.

When is the best time to roll out that marketing campaign? Is the current team structure as effective as it could be? Which customer segments are most likely to purchase your new product?

Ultimately, data analytics is a crucial driver of any successful business strategy. But how do data analysts actually turn raw data into something useful? There are a range of methods and techniques that data analysts use depending on the type of data in question and the kinds of insights they want to uncover.

You can get a hands-on introduction to data analytics in this free short course.

In this post, we’ll explore some of the most useful data analysis techniques. By the end, you’ll have a much clearer idea of how you can transform meaningless data into business intelligence. We’ll cover:

  • What is data analysis and why is it important?
  • What is the difference between qualitative and quantitative data?
  • Regression analysis
  • Monte Carlo simulation
  • Factor analysis
  • Cohort analysis
  • Cluster analysis
  • Time series analysis
  • Sentiment analysis
  • The data analysis process
  • The best tools for data analysis
  •  Key takeaways

The first six methods listed are used for quantitative data , while the last technique applies to qualitative data. We briefly explain the difference between quantitative and qualitative data in section two, but if you want to skip straight to a particular analysis technique, just use the clickable menu.

1. What is data analysis and why is it important?

Data analysis is, put simply, the process of discovering useful information by evaluating data. This is done through a process of inspecting, cleaning, transforming, and modeling data using analytical and statistical tools, which we will explore in detail further along in this article.

Why is data analysis important? Analyzing data effectively helps organizations make business decisions. Nowadays, data is collected by businesses constantly: through surveys, online tracking, online marketing analytics, collected subscription and registration data (think newsletters), social media monitoring, among other methods.

These data will appear as different structures, including—but not limited to—the following:

Big data

The concept of big data —data that is so large, fast, or complex, that it is difficult or impossible to process using traditional methods—gained momentum in the early 2000s. Then, Doug Laney, an industry analyst, articulated what is now known as the mainstream definition of big data as the three Vs: volume, velocity, and variety.

  • Volume: As mentioned earlier, organizations are collecting data constantly. In the not-too-distant past it would have been a real issue to store, but nowadays storage is cheap and takes up little space.
  • Velocity: Received data needs to be handled in a timely manner. With the growth of the Internet of Things, this can mean these data are coming in constantly, and at an unprecedented speed.
  • Variety: The data being collected and stored by organizations comes in many forms, ranging from structured data—that is, more traditional, numerical data—to unstructured data—think emails, videos, audio, and so on. We’ll cover structured and unstructured data a little further on.

Metadata

This is a form of data that provides information about other data, such as an image. In everyday life you’ll find this by, for example, right-clicking on a file in a folder and selecting “Get Info”, which will show you information such as file size and kind, date of creation, and so on.

Real-time data

This is data that is presented as soon as it is acquired. A good example of this is a stock market ticker, which provides information on the most-active stocks in real time.

Machine data

This is data that is produced wholly by machines, without human instruction. An example of this could be call logs automatically generated by your smartphone.

Quantitative and qualitative data

Quantitative data—otherwise known as structured data— may appear as a “traditional” database—that is, with rows and columns. Qualitative data—otherwise known as unstructured data—are the other types of data that don’t fit into rows and columns, which can include text, images, videos and more. We’ll discuss this further in the next section.

2. What is the difference between quantitative and qualitative data?

How you analyze your data depends on the type of data you’re dealing with— quantitative or qualitative . So what’s the difference?

Quantitative data is anything measurable , comprising specific quantities and numbers. Some examples of quantitative data include sales figures, email click-through rates, number of website visitors, and percentage revenue increase. Quantitative data analysis techniques focus on the statistical, mathematical, or numerical analysis of (usually large) datasets. This includes the manipulation of statistical data using computational techniques and algorithms. Quantitative analysis techniques are often used to explain certain phenomena or to make predictions.

Qualitative data cannot be measured objectively , and is therefore open to more subjective interpretation. Some examples of qualitative data include comments left in response to a survey question, things people have said during interviews, tweets and other social media posts, and the text included in product reviews. With qualitative data analysis, the focus is on making sense of unstructured data (such as written text, or transcripts of spoken conversations). Often, qualitative analysis will organize the data into themes—a process which, fortunately, can be automated.

Data analysts work with both quantitative and qualitative data , so it’s important to be familiar with a variety of analysis methods. Let’s take a look at some of the most useful techniques now.

3. Data analysis techniques

Now we’re familiar with some of the different types of data, let’s focus on the topic at hand: different methods for analyzing data. 

a. Regression analysis

Regression analysis is used to estimate the relationship between a set of variables. When conducting any type of regression analysis , you’re looking to see if there’s a correlation between a dependent variable (that’s the variable or outcome you want to measure or predict) and any number of independent variables (factors which may have an impact on the dependent variable). The aim of regression analysis is to estimate how one or more variables might impact the dependent variable, in order to identify trends and patterns. This is especially useful for making predictions and forecasting future trends.

Let’s imagine you work for an ecommerce company and you want to examine the relationship between: (a) how much money is spent on social media marketing, and (b) sales revenue. In this case, sales revenue is your dependent variable—it’s the factor you’re most interested in predicting and boosting. Social media spend is your independent variable; you want to determine whether or not it has an impact on sales and, ultimately, whether it’s worth increasing, decreasing, or keeping the same.

Using regression analysis, you’d be able to see if there’s a relationship between the two variables. A positive correlation would imply that the more you spend on social media marketing, the more sales revenue you make. No correlation at all might suggest that social media marketing has no bearing on your sales. Understanding the relationship between these two variables would help you to make informed decisions about the social media budget going forward.

However: It’s important to note that, on their own, regressions can only be used to determine whether or not there is a relationship between a set of variables—they don’t tell you anything about cause and effect. So, while a positive correlation between social media spend and sales revenue may suggest that one impacts the other, it’s impossible to draw definitive conclusions based on this analysis alone.

There are many different types of regression analysis, and the model you use depends on the type of data you have for the dependent variable. For example, your dependent variable might be continuous (i.e. something that can be measured on a continuous scale, such as sales revenue in USD), in which case you’d use a different type of regression analysis than if your dependent variable was categorical in nature (i.e. comprising values that can be categorised into a number of distinct groups based on a certain characteristic, such as customer location by continent). You can learn more about different types of dependent variables and how to choose the right regression analysis in this guide .
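
To ground the ecommerce example above, here is a minimal sketch of a simple linear regression with scipy; all spend and revenue figures are invented:

```python
# Hedged sketch: linear regression of invented spend/revenue figures.
from scipy.stats import linregress

spend = [1000, 2000, 3000, 4000, 5000]         # monthly social media spend, USD
revenue = [11000, 13500, 17000, 18500, 22000]  # monthly sales revenue, USD

fit = linregress(spend, revenue)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.0f}, r = {fit.rvalue:.3f}")
# A positive slope with r close to 1 indicates a strong positive correlation;
# as noted above, correlation alone says nothing about cause and effect.
```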

Regression analysis in action: Investigating the relationship between clothing brand Benetton’s advertising expenditure and sales

b. Monte Carlo simulation

When making decisions or taking certain actions, there are a range of different possible outcomes. If you take the bus, you might get stuck in traffic. If you walk, you might get caught in the rain or bump into your chatty neighbor, potentially delaying your journey. In everyday life, we tend to briefly weigh up the pros and cons before deciding which action to take; however, when the stakes are high, it’s essential to calculate, as thoroughly and accurately as possible, all the potential risks and rewards.

Monte Carlo simulation, otherwise known as the Monte Carlo method, is a computerized technique used to generate models of possible outcomes and their probability distributions. It essentially considers a range of possible outcomes and then calculates how likely it is that each particular outcome will be realized. The Monte Carlo method is used by data analysts to conduct advanced risk analysis, allowing them to better forecast what might happen in the future and make decisions accordingly.

So how does Monte Carlo simulation work, and what can it tell us? To run a Monte Carlo simulation, you’ll start with a mathematical model of your data—such as a spreadsheet. Within your spreadsheet, you’ll have one or several outputs that you’re interested in; profit, for example, or number of sales. You’ll also have a number of inputs; these are variables that may impact your output variable. If you’re looking at profit, relevant inputs might include the number of sales, total marketing spend, and employee salaries.

If you knew the exact, definitive values of all your input variables, you’d quite easily be able to calculate what profit you’d be left with at the end. However, when these values are uncertain, a Monte Carlo simulation enables you to calculate all the possible options and their probabilities. What will your profit be if you make 100,000 sales and hire five new employees on a salary of $50,000 each? What is the likelihood of this outcome? What will your profit be if you only make 12,000 sales and hire five new employees? And so on.

It does this by replacing all uncertain values with functions which generate random samples from distributions determined by you, and then running a series of calculations and recalculations to produce models of all the possible outcomes and their probability distributions. The Monte Carlo method is one of the most popular techniques for calculating the effect of unpredictable variables on a specific output variable, making it ideal for risk analysis.
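
A minimal sketch of the profit example above, with every uncertain input replaced by random draws; the distributions and figures are assumptions made up for the illustration:

```python
# Hedged sketch: Monte Carlo simulation of profit with invented distributions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulated scenarios

sales = rng.normal(50_000, 15_000, n)  # units sold: uncertain input
unit_price = rng.uniform(8, 12, n)     # price per unit: uncertain input
fixed_costs = 250_000                  # salaries etc.: assumed known

profit = sales * unit_price - fixed_costs

print(f"mean profit: ${profit.mean():,.0f}")
print(f"probability of a loss: {(profit < 0).mean():.1%}")
```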

Monte Carlo simulation in action: A case study using Monte Carlo simulation for risk analysis

 c. Factor analysis

Factor analysis is a technique used to reduce a large number of variables to a smaller number of factors. It works on the basis that multiple separate, observable variables correlate with each other because they are all associated with an underlying construct. This is useful not only because it condenses large datasets into smaller, more manageable samples, but also because it helps to uncover hidden patterns. This allows you to explore concepts that cannot be easily measured or observed—such as wealth, happiness, fitness, or, for a more business-relevant example, customer loyalty and satisfaction.

Let’s imagine you want to get to know your customers better, so you send out a rather long survey comprising one hundred questions. Some of the questions relate to how they feel about your company and product; for example, “Would you recommend us to a friend?” and “How would you rate the overall customer experience?” Other questions ask things like “What is your yearly household income?” and “How much are you willing to spend on skincare each month?”

Once your survey has been sent out and completed by lots of customers, you end up with a large dataset that essentially tells you one hundred different things about each customer (assuming each customer gives one hundred responses). Instead of looking at each of these responses (or variables) individually, you can use factor analysis to group them into factors that belong together—in other words, to relate them to a single underlying construct. In this example, factor analysis works by finding survey items that are strongly correlated. This is known as covariance.

So, if there’s a strong positive correlation between household income and how much they’re willing to spend on skincare each month (i.e. as one increases, so does the other), these items may be grouped together. Together with other variables (survey responses), you may find that they can be reduced to a single factor such as “consumer purchasing power”. Likewise, if a customer experience rating of 10/10 correlates strongly with “yes” responses regarding how likely they are to recommend your product to a friend, these items may be reduced to a single factor such as “customer satisfaction”.

In the end, you have a smaller number of factors rather than hundreds of individual variables. These factors are then taken forward for further analysis, allowing you to learn more about your customers (or any other area you’re interested in exploring).
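
As a rough sketch of the idea, the example below generates five synthetic survey items driven by two hidden constructs and recovers two factors with scikit-learn; the data and item names are purely illustrative:

```python
# Hedged sketch: recovering two hidden constructs from five synthetic survey items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
purchasing_power = rng.normal(size=200)  # hidden construct 1
satisfaction = rng.normal(size=200)      # hidden construct 2

# Each observed item is one construct plus noise (item names are invented)
X = np.column_stack([
    purchasing_power + rng.normal(0, 0.3, 200),  # household income
    purchasing_power + rng.normal(0, 0.3, 200),  # monthly skincare budget
    satisfaction + rng.normal(0, 0.3, 200),      # customer experience rating
    satisfaction + rng.normal(0, 0.3, 200),      # would recommend to a friend
    satisfaction + rng.normal(0, 0.3, 200),      # repeat purchase intent
])

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)    # one row of factor scores per respondent
print(fa.components_.round(2))  # loadings: which items group into which factor
```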

Factor analysis in action: Using factor analysis to explore customer behavior patterns in Tehran

d. Cohort analysis

Cohort analysis is a data analytics technique that groups users based on a shared characteristic , such as the date they signed up for a service or the product they purchased. Once users are grouped into cohorts, analysts can track their behavior over time to identify trends and patterns.

So what does this mean and why is it useful? Let’s break down the above definition further. A cohort is a group of people who share a common characteristic (or action) during a given time period. Students who enrolled at university in 2020 may be referred to as the 2020 cohort. Customers who purchased something from your online store via the app in the month of December may also be considered a cohort.

With cohort analysis, you’re dividing your customers or users into groups and looking at how these groups behave over time. So, rather than looking at a single, isolated snapshot of all your customers at a given moment in time (with each customer at a different point in their journey), you’re examining your customers’ behavior in the context of the customer lifecycle. As a result, you can start to identify patterns of behavior at various points in the customer journey—say, from their first ever visit to your website, through to email newsletter sign-up, to their first purchase, and so on. As such, cohort analysis is dynamic, allowing you to uncover valuable insights about the customer lifecycle.

This is useful because it allows companies to tailor their service to specific customer segments (or cohorts). Let’s imagine you run a 50% discount campaign in order to attract potential new customers to your website. Once you’ve attracted a group of new customers (a cohort), you’ll want to track whether they actually buy anything and, if they do, whether or not (and how frequently) they make a repeat purchase. With these insights, you’ll start to gain a much better understanding of when this particular cohort might benefit from another discount offer or retargeting ads on social media, for example. Ultimately, cohort analysis allows companies to optimize their service offerings (and marketing) to provide a more targeted, personalized experience. You can learn more about how to run cohort analysis using Google Analytics .
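
A minimal cohort-analysis sketch with pandas, assuming an invented orders table: each customer is assigned to the month of their first order, and activity is then tracked month by month:

```python
# Hedged sketch: cohorts by first-order month, from an invented orders table.
import pandas as pd

orders = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "c", "a", "c"],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-02-11", "2024-01-20",
        "2024-03-02", "2024-02-14", "2024-03-28", "2024-03-30",
    ]),
})

orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("customer")["order_month"].transform("min")

# Unique active customers per month, split by the cohort they joined in
retention = orders.groupby(["cohort", "order_month"])["customer"].nunique().unstack()
print(retention)
```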

Cohort analysis in action: How Ticketmaster used cohort analysis to boost revenue

e. Cluster analysis

Cluster analysis is an exploratory technique that seeks to identify structures within a dataset. The goal of cluster analysis is to sort different data points into groups (or clusters) that are internally homogeneous and externally heterogeneous. This means that data points within a cluster are similar to each other, and dissimilar to data points in another cluster. Clustering is used to gain insight into how data is distributed in a given dataset, or as a preprocessing step for other algorithms.

There are many real-world applications of cluster analysis. In marketing, cluster analysis is commonly used to group a large customer base into distinct segments, allowing for a more targeted approach to advertising and communication. Insurance firms might use cluster analysis to investigate why certain locations are associated with a high number of insurance claims. Another common application is in geology, where experts will use cluster analysis to evaluate which cities are at greatest risk of earthquakes (and thus try to mitigate the risk with protective measures).

It’s important to note that, while cluster analysis may reveal structures within your data, it won’t explain why those structures exist. With that in mind, cluster analysis is a useful starting point for understanding your data and informing further analysis. Clustering algorithms are also used in machine learning—you can learn more about clustering in machine learning in our guide .
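
A minimal segmentation sketch using k-means from scikit-learn on two invented customer features:

```python
# Hedged sketch: k-means segmentation on two invented customer features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Three synthetic groups: [annual spend USD, visits per year]
X = np.vstack([
    rng.normal([200, 2], [50, 1], (50, 2)),     # occasional low spenders
    rng.normal([800, 10], [100, 2], (50, 2)),   # regular mid spenders
    rng.normal([2500, 25], [300, 4], (50, 2)),  # loyal high spenders
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # size of each discovered segment
```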

Cluster analysis in action: Using cluster analysis for customer segmentation—a telecoms case study example

f. Time series analysis

Time series analysis is a statistical technique used to identify trends and cycles over time. Time series data is a sequence of data points which measure the same variable at different points in time (for example, weekly sales figures or monthly email sign-ups). By looking at time-related trends, analysts are able to forecast how the variable of interest may fluctuate in the future.

When conducting time series analysis, the main patterns you’ll be looking out for in your data are:

  • Trends: Stable, linear increases or decreases over an extended time period.
  • Seasonality: Predictable fluctuations in the data due to seasonal factors over a short period of time. For example, you might see a peak in swimwear sales in summer around the same time every year.
  • Cyclic patterns: Unpredictable cycles where the data fluctuates. Cyclical trends are not due to seasonality, but rather, may occur as a result of economic or industry-related conditions.

As you can imagine, the ability to make informed predictions about the future has immense value for business. Time series analysis and forecasting is used across a variety of industries, most commonly for stock market analysis, economic forecasting, and sales forecasting. There are different types of time series models depending on the data you’re using and the outcomes you want to predict. These models are typically classified into three broad types: the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. For an in-depth look at time series analysis, refer to our guide .
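
A minimal sketch of trend extraction, using a 12-month centered rolling mean on a synthetic monthly sales series:

```python
# Hedged sketch: exposing the trend in synthetic monthly sales with a rolling mean.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
months = pd.date_range("2020-01-01", periods=48, freq="MS")

# Invented series: linear trend + yearly seasonality + noise
t = np.arange(48)
sales = 100 + 2 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 48)
series = pd.Series(sales, index=months)

trend = series.rolling(window=12, center=True).mean()  # smooths out the seasonality
print(trend.dropna().head())
```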

Time series analysis in action: Developing a time series model to predict jute yarn demand in Bangladesh

g. Sentiment analysis

When you think of data, your mind probably automatically goes to numbers and spreadsheets.

Many companies overlook the value of qualitative data, but in reality, there are untold insights to be gained from what people (especially customers) write and say about you. So how do you go about analyzing textual data?

One highly useful qualitative technique is sentiment analysis , a technique which belongs to the broader category of text analysis —the (usually automated) process of sorting and understanding textual data.

With sentiment analysis, the goal is to interpret and classify the emotions conveyed within textual data. From a business perspective, this allows you to ascertain how your customers feel about various aspects of your brand, product, or service.

There are several different types of sentiment analysis models, each with a slightly different focus. The three main types include:

Fine-grained sentiment analysis

If you want to focus on opinion polarity (i.e. positive, neutral, or negative) in depth, fine-grained sentiment analysis will allow you to do so.

For example, if you wanted to interpret star ratings given by customers, you might use fine-grained sentiment analysis to categorize the various ratings along a scale ranging from very positive to very negative.

Emotion detection

This model often uses complex machine learning algorithms to pick out various emotions from your textual data.

You might use an emotion detection model to identify words associated with happiness, anger, frustration, and excitement, giving you insight into how your customers feel when writing about you or your product on, say, a product review site.

Aspect-based sentiment analysis

This type of analysis allows you to identify what specific aspects the emotions or opinions relate to, such as a certain product feature or a new ad campaign.

If a customer writes that they “find the new Instagram advert so annoying”, your model should detect not only a negative sentiment, but also the object towards which it’s directed.

In a nutshell, sentiment analysis uses various Natural Language Processing (NLP) algorithms and systems which are trained to associate certain inputs (for example, certain words) with certain outputs.

For example, the input “annoying” would be recognized and tagged as “negative”. Sentiment analysis is crucial to understanding how your customers feel about you and your products, for identifying areas for improvement, and even for averting PR disasters in real-time!
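
As one possible implementation, the sketch below scores two example texts with NLTK’s VADER, a lexicon-based sentiment model; it assumes the vader_lexicon resource has been downloaded once:

```python
# Hedged sketch: lexicon-based sentiment scoring with NLTK's VADER.
# One-time setup: import nltk; nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
reviews = [
    "I find the new Instagram advert so annoying",
    "Absolutely love this product, five stars!",
]

for text in reviews:
    scores = sia.polarity_scores(text)
    # 'compound' runs from -1 (very negative) to +1 (very positive)
    print(f"{scores['compound']:+.2f}  {text}")
```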

Sentiment analysis in action: 5 Real-world sentiment analysis case studies

4. The data analysis process

In order to gain meaningful insights from data, data analysts will perform a rigorous step-by-step process. We go over this in detail in our step by step guide to the data analysis process —but, to briefly summarize, the data analysis process generally consists of the following phases:

Defining the question

The first step for any data analyst will be to define the objective of the analysis, sometimes called a ‘problem statement’. Essentially, you’re asking a question with regards to a business problem you’re trying to solve. Once you’ve defined this, you’ll then need to determine which data sources will help you answer this question.

Collecting the data

Now that you’ve defined your objective, the next step will be to set up a strategy for collecting and aggregating the appropriate data. Will you be using quantitative (numeric) or qualitative (descriptive) data? Do these data fit into first-party, second-party, or third-party data?

Learn more: Quantitative vs. Qualitative Data: What’s the Difference? 

Cleaning the data

Unfortunately, your collected data isn’t automatically ready for analysis—you’ll have to clean it first. As a data analyst, this phase of the process will take up the most time. During the data cleaning process, you will likely be:

  • Removing major errors, duplicates, and outliers
  • Removing unwanted data points
  • Structuring the data—that is, fixing typos, layout issues, etc.
  • Filling in major gaps in data (a short pandas sketch of these steps follows this list)
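
A hedged sketch of those steps on a small, invented pandas DataFrame:

```python
# Hedged sketch: the cleaning steps above, applied to an invented DataFrame.
import pandas as pd

df = pd.DataFrame({
    "email": ["a@x.com", "a@x.com", "b@y.com ", None],
    "age": [29, 29, 212, 41],
})

df = df.drop_duplicates()              # remove duplicate records
df["email"] = df["email"].str.strip()  # fix layout issues (stray whitespace)
df = df[df["age"].between(0, 120)]     # drop an obvious outlier
df = df.dropna(subset=["email"])       # handle missing values in key fields
print(df)
```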

Analyzing the data

Now that we’ve finished cleaning the data, it’s time to analyze it! Many analysis methods have already been described in this article, and it’s up to you to decide which one will best suit the assigned objective. It may fall under one of the following categories:

  • Descriptive analysis , which identifies what has already happened
  • Diagnostic analysis , which focuses on understanding why something has happened
  • Predictive analysis , which identifies future trends based on historical data
  • Prescriptive analysis , which allows you to make recommendations for the future

Visualizing and sharing your findings

We’re almost at the end of the road! Analyses have been made, insights have been gleaned—all that remains to be done is to share this information with others. This is usually done with a data visualization tool, such as Google Charts, or Tableau.

Learn more: 13 of the Most Common Types of Data Visualization

5. The best tools for data analysis

As you can imagine, every phase of the data analysis process requires the data analyst to have a variety of tools under their belt that assist in gaining valuable insights from data. We cover these tools in greater detail in this article , but, in summary, here’s our best-of-the-best list, with links to each product:

The top 9 tools for data analysts

  • Microsoft Excel
  • Jupyter Notebook
  • Apache Spark
  • Microsoft Power BI

6. Key takeaways and further reading

As you can see, there are many different data analysis techniques at your disposal. In order to turn your raw data into actionable insights, it’s important to consider what kind of data you have (is it qualitative or quantitative?) as well as the kinds of insights that will be useful within the given context. In this post, we’ve introduced seven of the most useful data analysis techniques—but there are many more out there to be discovered!

So what now? If you haven’t already, we recommend reading the case studies for each analysis technique discussed in this post (you’ll find a link at the end of each section). For a more hands-on introduction to the kinds of methods and techniques that data analysts use, try out this free introductory data analytics short course. In the meantime, you might also want to read the following:

  • The Best Online Data Analytics Courses for 2024
  • What Is Time Series Data and How Is It Analyzed?
  • What is Spatial Analysis?

Your Modern Business Guide To Data Analysis Methods And Techniques

Table of Contents

1) What Is Data Analysis?

2) Why Is Data Analysis Important?

3) What Is The Data Analysis Process?

4) Types Of Data Analysis Methods

5) Top Data Analysis Techniques To Apply

6) Quality Criteria For Data Analysis

7) Data Analysis Limitations & Barriers

8) Data Analysis Skills

9) Data Analysis In The Big Data Environment

In our data-rich age, understanding how to analyze and extract true meaning from our business’s digital insights is one of the primary drivers of success.

Despite the colossal volume of data we create every day, a mere 0.5% is actually analyzed and used for data discovery , improvement, and intelligence. While that may not seem like much, considering the amount of digital information we have at our fingertips, half a percent still accounts for a vast amount of data.

With so much data and so little time, knowing how to collect, curate, organize, and make sense of all of this potentially business-boosting information can be a minefield – but online data analysis is the solution.

In science, data analysis uses a more complex approach with advanced techniques to explore and experiment with data. On the other hand, in a business context, data is used to make data-driven decisions that will enable the company to improve its overall performance. In this post, we will cover the analysis of data from an organizational point of view while still going through the scientific and statistical foundations that are fundamental to understanding the basics of data analysis. 

To put all of that into perspective, we will answer a host of important analytical questions, explore analytical methods and techniques, while demonstrating how to perform analysis in the real world with a 17-step blueprint for success.

What Is Data Analysis?

Data analysis is the process of collecting, modeling, and analyzing data using various statistical and logical methods and techniques. Businesses rely on analytics processes and tools to extract insights that support strategic and operational decision-making.

All these various methods are largely based on two core areas: quantitative and qualitative research.

Gaining a better understanding of different techniques and methods in quantitative research as well as qualitative insights will give your analyzing efforts a more clearly defined direction, so it’s worth taking the time to allow this particular knowledge to sink in. Additionally, you will be able to create a comprehensive analytical report that will skyrocket your analysis.

Apart from qualitative and quantitative categories, there are also other types of data that you should be aware of before diving into complex data analysis processes. These categories include:

  • Big data: Refers to massive data sets that need to be analyzed using advanced software to reveal patterns and trends. It is considered to be one of the best analytical assets as it provides larger volumes of data at a faster rate. 
  • Metadata: Putting it simply, metadata is data that provides insights about other data. It summarizes key information about specific data that makes it easier to find and reuse for later purposes. 
  • Real time data: As its name suggests, real time data is presented as soon as it is acquired. From an organizational perspective, this is the most valuable data as it can help you make important decisions based on the latest developments. Our guide on real time analytics will tell you more about the topic. 
  • Machine data: This is more complex data that is generated solely by a machine such as phones, computers, or even websites and embedded systems, without previous human interaction.

Why Is Data Analysis Important?

Before we go into detail about the categories of analysis along with its methods and techniques, you must understand the potential that analyzing data can bring to your organization.

  • Informed decision-making : From a management perspective, you can benefit from analyzing your data as it helps you make decisions based on facts and not simple intuition. For instance, you can understand where to invest your capital, detect growth opportunities, predict your income, or tackle uncommon situations before they become problems. Through this, you can extract relevant insights from all areas in your organization, and with the help of dashboard software , present the data in a professional and interactive way to different stakeholders.
  • Reduce costs : Another great benefit is to reduce costs. With the help of advanced technologies such as predictive analytics, businesses can spot improvement opportunities, trends, and patterns in their data and plan their strategies accordingly. In time, this will help you save money and resources on implementing the wrong strategies. And not just that, by predicting different scenarios such as sales and demand you can also anticipate production and supply. 
  • Target customers better : Customers are arguably the most crucial element in any business. By using analytics to get a 360° vision of all aspects related to your customers, you can understand which channels they use to communicate with you, their demographics, interests, habits, purchasing behaviors, and more. In the long run, it will drive success to your marketing strategies, allow you to identify new potential customers, and avoid wasting resources on targeting the wrong people or sending the wrong message. You can also track customer satisfaction by analyzing your client’s reviews or your customer service department’s performance.

What Is The Data Analysis Process?

When we talk about analyzing data there is an order to follow in order to extract the needed conclusions. The analysis process consists of 5 key stages. We will cover each of them more in detail later in the post, but to start providing the needed context to understand what is coming next, here is a rundown of the 5 essential steps of data analysis. 

  • Identify: Before you get your hands dirty with data, you first need to identify why you need it in the first place. The identification is the stage in which you establish the questions you will need to answer. For example, what is the customer's perception of our brand? Or what type of packaging is more engaging to our potential customers? Once the questions are outlined you are ready for the next step. 
  • Collect: As its name suggests, this is the stage where you start collecting the needed data. Here, you define which sources of data you will use and how you will use them. The collection of data can come in different forms such as internal or external sources, surveys, interviews, questionnaires, and focus groups, among others.  An important note here is that the way you collect the data will be different in a quantitative and qualitative scenario. 
  • Clean: Once you have the necessary data it is time to clean it and leave it ready for analysis. Not all the data you collect will be useful, when collecting big amounts of data in different formats it is very likely that you will find yourself with duplicate or badly formatted data. To avoid this, before you start working with your data you need to make sure to erase any white spaces, duplicate records, or formatting errors. This way you avoid hurting your analysis with bad-quality data. 
  • Analyze : With the help of various techniques such as statistical analysis, regressions, neural networks, text analysis, and more, you can start analyzing and manipulating your data to extract relevant conclusions. At this stage, you find trends, correlations, variations, and patterns that can help you answer the questions you first thought of in the identify stage. Various technologies in the market assist researchers and average users with the management of their data. Some of them include business intelligence and visualization software, predictive analytics, and data mining, among others. 
  • Interpret: Last but not least you have one of the most important steps: it is time to interpret your results. This stage is where the researcher comes up with courses of action based on the findings. For example, here you would understand if your clients prefer packaging that is red or green, plastic or paper, etc. Additionally, at this stage, you can also find some limitations and work on them. 

Now that you have a basic understanding of the key data analysis steps, let’s look at the top 17 essential methods.

17 Essential Types Of Data Analysis Methods

Before diving into the 17 essential types of methods, it is important that we quickly go over the main analysis categories. From descriptive up to prescriptive analysis, the complexity and effort of data evaluation increase, but so does the added value for the company.

a) Descriptive analysis - What happened.

The descriptive analysis method is the starting point for any analytic reflection, and it aims to answer the question of what happened? It does this by ordering, manipulating, and interpreting raw data from various sources to turn it into valuable insights for your organization.

Performing descriptive analysis is essential, as it enables us to present our insights in a meaningful way. Although it is relevant to mention that this analysis on its own will not allow you to predict future outcomes or tell you the answer to questions like why something happened, it will leave your data organized and ready to conduct further investigations.

b) Exploratory analysis - How to explore data relationships.

As its name suggests, the main aim of the exploratory analysis is to explore. Prior to it, there is still no notion of the relationship between the data and the variables. Once the data is investigated, exploratory analysis helps you to find connections and generate hypotheses and solutions for specific problems. A typical area of ​​application for it is data mining.

c) Diagnostic analysis - Why it happened.

Diagnostic data analytics empowers analysts and executives by helping them gain a firm contextual understanding of why something happened. If you know why something happened as well as how it happened, you will be able to pinpoint the exact ways of tackling the issue or challenge.

Designed to provide direct and actionable answers to specific questions, this is one of the most important methods in research, and it also serves key organizational functions such as retail analytics.

d) Predictive analysis - What will happen.

The predictive method allows you to look into the future to answer the question: what will happen? In order to do this, it uses the results of the previously mentioned descriptive, exploratory, and diagnostic analysis, in addition to machine learning (ML) and artificial intelligence (AI). Through this, you can uncover future trends, potential problems or inefficiencies, connections, and causal relationships in your data.

With predictive analysis, you can unfold and develop initiatives that will not only enhance your various operational processes but also help you gain an all-important edge over the competition. If you understand why a trend, pattern, or event happened through data, you will be able to develop an informed projection of how things may unfold in particular areas of the business.

e) Prescriptive analysis - What should be done.

Another of the most effective types of analysis methods in research, prescriptive techniques build on predictive analysis by using patterns and trends to develop responsive, practical business strategies.

By drilling down into prescriptive analysis, you will play an active role in the data consumption process by taking well-arranged sets of visual data and using it as a powerful fix to emerging issues in a number of key areas, including marketing, sales, customer experience, HR, fulfillment, finance, logistics analytics , and others.

Top 17 data analysis methods

As mentioned at the beginning of the post, data analysis methods can be divided into two big categories: quantitative and qualitative. Each of these categories holds a powerful analytical value that changes depending on the scenario and type of data you are working with. Below, we will discuss 17 methods that are divided into qualitative and quantitative approaches. 

Without further ado, here are the 17 essential types of data analysis methods with some use cases in the business world: 

A. Quantitative Methods 

To put it simply, quantitative analysis refers to all methods that use numerical data, or data that can be turned into numbers (e.g., categorical variables such as gender or age group), to extract valuable insights. It is used to draw conclusions about relationships and differences and to test hypotheses. Below we discuss some of the key quantitative methods. 

1. Cluster analysis

Cluster analysis groups a set of data elements so that elements in the same group are more similar (in a particular sense) to each other than to those in other groups, hence the term 'cluster.' Since there is no target variable when clustering, the method is often used to find hidden patterns in the data. The approach is also used to provide additional context to a trend or dataset.

Let's look at it from an organizational perspective. In a perfect world, marketers would be able to analyze each customer separately and give them the best-personalized service, but let's face it, with a large customer base, it is practically impossible to do that. That's where clustering comes in. By grouping customers into clusters based on demographics, purchasing behaviors, monetary value, or any other factor that might be relevant for your company, you will be able to immediately optimize your efforts and give your customers the best experience based on their needs.
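For a sense of what this looks like in practice, here is a minimal clustering sketch using scikit-learn's k-means implementation. The customer table and its column names are invented for illustration:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer table; the columns are made up for illustration.
customers = pd.DataFrame({
    "annual_spend":    [120, 950, 870, 60, 1100, 90],
    "visits_per_year": [4, 22, 18, 2, 25, 3],
})

# Scale features so neither dominates the distance metric.
X = StandardScaler().fit_transform(customers)

# Group customers into two clusters (e.g. "occasional" vs "loyal").
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
customers["cluster"] = kmeans.fit_predict(X)
print(customers)
```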

2. Cohort analysis

This type of data analysis approach uses historical data to examine and compare a specific segment of users' behavior, which can then be grouped with others with similar characteristics. By using this methodology, it's possible to gain a wealth of insight into consumer needs or a firm understanding of a broader target group.

Cohort analysis can be really useful for performing analysis in marketing as it will allow you to understand the impact of your campaigns on specific groups of customers. To exemplify, imagine you send an email campaign encouraging customers to sign up for your site. For this, you create two versions of the campaign with different designs, CTAs, and ad content. Later on, you can use cohort analysis to track the performance of the campaign for a longer period of time and understand which type of content is driving your customers to sign up, repurchase, or engage in other ways.  

A useful tool to start performing cohort analysis is Google Analytics. You can learn more about the benefits and limitations of using cohorts in GA in this useful guide . In the image below, you can see an example of how a cohort is visualized in this tool. The segments (device traffic) are divided into date cohorts (usage of devices) and then analyzed week by week to extract insights into performance.

Cohort analysis chart example from google analytics
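You don't need a dedicated tool to get started, though. The following sketch builds a simple retention-style cohort table in pandas; the order log and its fields are hypothetical:

```python
import pandas as pd

# Hypothetical order log; fields are illustrative.
orders = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3, 3],
    "order_date": pd.to_datetime([
        "2023-01-05", "2023-02-10", "2023-01-20",
        "2023-03-01", "2023-02-02", "2023-02-20", "2023-03-15",
    ]),
})

# Assign each user to the cohort of their first purchase month.
orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = (
    orders.groupby("user_id")["order_date"].transform("min").dt.to_period("M")
)

# Count active users per cohort per month to see retention over time.
cohort_counts = (
    orders.groupby(["cohort", "order_month"])["user_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(cohort_counts)
```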

3. Regression analysis

Regression uses historical data to understand how a dependent variable's value is affected when one (linear regression) or more independent variables (multiple regression) change or stay the same. By understanding each variable's relationship and how it developed in the past, you can anticipate possible outcomes and make better decisions in the future.

Let's break it down with an example. Imagine you did a regression analysis of your sales in 2019 and discovered that variables like product quality, store design, customer service, marketing campaigns, and sales channels affected the overall result. Now you want to use regression to analyze which of these variables changed or if any new ones appeared during 2020. For example, you couldn't sell as much in your physical store due to COVID lockdowns. Therefore, your sales could've either dropped in general or increased in your online channels. Through this, you can understand which independent variables affected the overall performance of your dependent variable, annual sales.

If you want to go deeper into this type of analysis, check out this article and learn more about how you can benefit from regression.
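As a quick, hedged illustration, here is a multiple regression fitted with scikit-learn. The variables and figures are invented to mirror the sales example above:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical monthly data; variable names are illustrative only.
data = pd.DataFrame({
    "marketing_spend":         [10, 12, 9, 15, 14, 11, 16, 13],
    "customer_service_score":  [7.1, 7.4, 6.9, 8.0, 7.8, 7.2, 8.2, 7.5],
    "monthly_sales":           [102, 118, 95, 140, 133, 108, 149, 121],
})

X = data[["marketing_spend", "customer_service_score"]]
y = data["monthly_sales"]

model = LinearRegression().fit(X, y)

# Coefficients show how sales respond to each independent variable.
print(dict(zip(X.columns, model.coef_)))
print("R²:", model.score(X, y))
```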

4. Neural networks

The neural network forms the basis for the intelligent algorithms of machine learning. It is a form of analytics that attempts, with minimal intervention, to understand how the human brain would generate insights and predict values. Neural networks learn from each and every data transaction, meaning that they evolve and advance over time.
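If you want to experiment with a small neural network yourself, scikit-learn ships a simple multi-layer perceptron. The sketch below trains one on synthetic data standing in for, say, churn records (an assumption for illustration only):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for real labeled data: 2 classes, 10 numeric features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network with one hidden layer of 32 units.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print("Test accuracy:", net.score(X_test, y_test))
```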

A typical area of application for neural networks is predictive analytics. There are BI reporting tools that have this feature implemented within them, such as the Predictive Analytics Tool from datapine. This tool enables users to quickly and easily generate all kinds of predictions. All you have to do is select the data to be processed based on your KPIs, and the software automatically calculates forecasts based on historical and current data. Thanks to its user-friendly interface, anyone in your organization can manage it; there’s no need to be an advanced scientist. 

Here is an example of how you can use the predictive analysis tool from datapine:

Example on how to use predictive analytics tool from datapine


5. Factor analysis

Factor analysis, also called "dimension reduction," is a type of data analysis used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. The aim here is to uncover independent latent variables, making it an ideal method for streamlining specific segments.

A good way to understand this data analysis method is a customer evaluation of a product. The initial assessment is based on different variables like color, shape, wearability, current trends, materials, comfort, the place where they bought the product, and frequency of usage. The list can be endless, depending on what you want to track. In this case, factor analysis comes into the picture by summarizing all of these variables into homogenous groups, for example, by grouping the variables color, materials, quality, and trends into a broader latent variable of design.

If you want to start analyzing data using factor analysis we recommend you take a look at this practical guide from UCLA.
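For a minimal code-level sketch, scikit-learn includes a FactorAnalysis estimator. The ratings matrix below is random stand-in data, and the two latent factors are only hypothetical labels:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical product ratings: 6 observed variables per respondent.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(200, 6))  # stand-in for real survey data

# Reduce the 6 correlated ratings to 2 latent factors
# (e.g. "design" and "utility", names assumed for illustration).
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)

# Loadings show how strongly each observed variable maps onto each factor.
print(fa.components_)
```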

6. Data mining

A method of data analysis that is the umbrella term for engineering metrics and insights for additional value, direction, and context. By using exploratory statistical evaluation, data mining aims to identify dependencies, relations, patterns, and trends to generate advanced knowledge.  When considering how to analyze data, adopting a data mining mindset is essential to success - as such, it’s an area that is worth exploring in greater detail.

An excellent use case of data mining is datapine intelligent data alerts . With the help of artificial intelligence and machine learning, they provide automated signals based on particular commands or occurrences within a dataset. For example, if you’re monitoring supply chain KPIs , you could set an intelligent alarm to trigger when invalid or low-quality data appears. By doing so, you will be able to drill down deep into the issue and fix it swiftly and effectively.

In the following picture, you can see how the intelligent alarms from datapine work. By setting up ranges on daily orders, sessions, and revenues, the alarms will notify you if the goal was not completed or if it exceeded expectations.

Example on how to use intelligent alerts from datapine

7. Time series analysis

As its name suggests, time series analysis is used to analyze a set of data points collected over a specified period of time. Analysts use this method to monitor data points at regular intervals rather than intermittently, but time series analysis is not just about collecting data over time. Instead, it allows researchers to understand whether variables changed during the course of the study, how the different variables depend on one another, and how the data arrived at its end result. 

In a business context, this method is used to understand the causes of different trends and patterns to extract valuable insights. Another way of using this method is with the help of time series forecasting. Powered by predictive technologies, businesses can analyze various data sets over a period of time and forecast different future events. 

A great use case to put time series analysis into perspective is seasonality effects on sales. By using time series forecasting to analyze sales data of a specific product over time, you can understand if sales rise over a specific period of time (e.g. swimwear during summertime, or candy during Halloween). These insights allow you to predict demand and prepare production accordingly.  
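A common first step is decomposing a series into trend, seasonal, and residual components. Here is a sketch with statsmodels; the monthly swimwear sales figures are invented to reflect the seasonality example above:

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly swimwear sales over three years.
idx = pd.date_range("2020-01-01", periods=36, freq="MS")
sales = pd.Series(
    [20, 22, 30, 45, 70, 95, 100, 90, 55, 35, 25, 21] * 3,
    index=idx,
)

# Split the series into trend, seasonal, and residual components.
result = seasonal_decompose(sales, model="additive", period=12)
print(result.seasonal.head(12))  # the recurring summer peak shows up here
```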

8. Decision Trees 

The decision tree analysis aims to act as a support tool to make smart and strategic decisions. By visually displaying potential outcomes, consequences, and costs in a tree-like model, researchers and company users can easily evaluate all factors involved and choose the best course of action. Decision trees are helpful to analyze quantitative data and they allow for an improved decision-making process by helping you spot improvement opportunities, reduce costs, and enhance operational efficiency and production.

But how does a decision tree actually work? This method works like a flowchart that starts with the main decision that you need to make and branches out based on the different outcomes and consequences of each decision. Each outcome outlines its own consequences, costs, and gains, and at the end of the analysis, you can compare each of them and make the smartest decision. 

Businesses can use them to understand which project is more cost-effective and will bring more earnings in the long run. For example, imagine you need to decide if you want to update your software app or build a new app entirely. Here you would compare the total costs, the time that needs to be invested, potential revenue, and any other factor that might affect your decision. In the end, you would be able to see which of these two options is more realistic and attainable for your company or research.
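Decision trees can also be learned directly from data. The sketch below fits a small tree with scikit-learn and prints its branching rules; the classic iris dataset stands in here for any labeled business data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Classic dataset as a stand-in for any labeled business data.
X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the branching rules, mirroring the flowchart structure described above.
print(export_text(tree, feature_names=[
    "sepal length", "sepal width", "petal length", "petal width",
]))
```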

9. Conjoint analysis 

Last but not least, we have the conjoint analysis. This approach is usually used in surveys to understand how individuals value different attributes of a product or service and it is one of the most effective methods to extract consumer preferences. When it comes to purchasing, some clients might be more price-focused, others more features-focused, and others might have a sustainable focus. Whatever your customer's preferences are, you can find them with conjoint analysis. Through this, companies can define pricing strategies, packaging options, subscription packages, and more. 

A great example of conjoint analysis is in marketing and sales. For instance, a cupcake brand might use conjoint analysis and find that its clients prefer gluten-free options and cupcakes with healthier toppings over super sugary ones. Thus, the cupcake brand can turn these insights into advertisements and promotions to increase sales of this particular type of product. And not just that, conjoint analysis can also help businesses segment their customers based on their interests. This allows them to send different messaging that will bring value to each of the segments. 
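A common (though not the only) way to estimate conjoint "part-worths" is a regression on dummy-coded attributes. In the sketch below, the cupcake profiles, attributes, and ratings are purely hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical ratings of cupcake profiles (attributes are illustrative).
profiles = pd.DataFrame({
    "gluten_free":     [1, 0, 1, 0, 1, 0],
    "healthy_topping": [1, 1, 0, 0, 1, 0],
    "rating":          [8.5, 6.0, 7.0, 4.5, 9.0, 5.0],
})

X = profiles[["gluten_free", "healthy_topping"]]
y = profiles["rating"]

# The coefficients approximate each attribute's "part-worth" utility.
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)))
```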

10. Correspondence Analysis

Also known as reciprocal averaging, correspondence analysis is a method used to analyze the relationship between categorical variables presented within a contingency table. A contingency table is a table that displays two (simple correspondence analysis) or more (multiple correspondence analysis) categorical variables across rows and columns that show the distribution of the data, which is usually answers to a survey or questionnaire on a specific topic. 

This method starts by calculating an "expected value" for each cell, obtained by multiplying its row total by its column total and dividing by the grand total of the table. The "expected value" is then subtracted from the original value, resulting in a "residual" that allows you to extract conclusions about relationships and distribution. The results of this analysis are later displayed using a map that represents the relationship between the different values. The closer two values are on the map, the stronger the relationship. Let's put it into perspective with an example. 

Imagine you are carrying out a market research analysis about outdoor clothing brands and how they are perceived by the public. For this analysis, you ask a group of people to match each brand with a certain attribute which can be durability, innovation, quality materials, etc. When calculating the residual numbers, you can see that brand A has a positive residual for innovation but a negative one for durability. This means that brand A is not positioned as a durable brand in the market, something that competitors could take advantage of. 
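The expected values and residuals described above can be computed in a few lines with SciPy. The contingency table below is an invented brand-by-attribute example:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: brands (rows) x attributes (columns:
# innovation, durability, quality materials).
observed = np.array([
    [30, 10, 20],  # brand A
    [12, 40, 18],  # brand B
    [15, 20, 35],  # brand C
])

# chi2_contingency returns the expected counts under independence.
chi2, p, dof, expected = chi2_contingency(observed)

# Residuals reveal which brand/attribute pairs are over- or under-represented.
residuals = observed - expected
print(np.round(residuals, 1))
```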

11. Multidimensional Scaling (MDS)

MDS is a method used to observe the similarities or disparities between objects, which can be colors, brands, people, geographical coordinates, and more. The objects are plotted on an "MDS map" that positions similar objects together and disparate ones far apart. The (dis)similarities between objects are represented using one or more dimensions that can be observed using a numerical scale. For example, if you want to know how people feel about the COVID-19 vaccine, you can use 1 for "don't believe in the vaccine at all," 10 for "firmly believe in the vaccine," and the values 2 to 9 for responses in between. When analyzing an MDS map, the only thing that matters is the distance between the objects; the orientation of the dimensions is arbitrary and has no meaning at all. 

Multidimensional scaling is a valuable technique for market research, especially when it comes to evaluating product or brand positioning. For instance, if a cupcake brand wants to know how they are positioned compared to competitors, it can define 2-3 dimensions such as taste, ingredients, shopping experience, or more, and do a multidimensional scaling analysis to find improvement opportunities as well as areas in which competitors are currently leading. 

Another business example is in procurement when deciding on different suppliers. Decision makers can generate an MDS map to see how the different prices, delivery times, technical services, and more of the different suppliers differ and pick the one that suits their needs the best. 

A final example comes from a research paper, "An Improved Study of Multilevel Semantic Network Visualization for Analyzing Sentiment Word of Movie Review Data". The researchers picked a two-dimensional MDS map to display the distances and relationships between different sentiments in movie reviews. They used 36 sentiment words and distributed them based on their emotional distance, as we can see in the image below, where the words "outraged" and "sweet" are on opposite sides of the map, marking the distance between the two emotions very clearly.

Example of multidimensional scaling analysis

Aside from being a valuable technique to analyze dissimilarities, MDS also serves as a dimension-reduction technique for large dimensional data. 
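To see the mechanics, here is a minimal MDS sketch with scikit-learn. The pairwise dissimilarity matrix for four unnamed brands is invented for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity matrix between four brands.
dissimilarities = np.array([
    [0.0, 0.3, 0.8, 0.9],
    [0.3, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.2],
    [0.9, 0.8, 0.2, 0.0],
])

# Embed the brands into 2 dimensions; nearby points are similar brands.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)
print(coords)
```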

B. Qualitative Methods

Qualitative data analysis methods are defined as the observation of non-numerical data that is gathered and produced using methods of observation such as interviews, focus groups, questionnaires, and more. As opposed to quantitative methods, qualitative data is more subjective and highly valuable in analyzing customer retention and product development.

12. Text analysis

Text analysis, also known in the industry as text mining, works by taking large sets of textual data and arranging them in a way that makes it easier to manage. By working through this cleansing process in stringent detail, you will be able to extract the data that is truly relevant to your organization and use it to develop actionable insights that will propel you forward.

Modern software accelerates the application of text analytics. Thanks to the combination of machine learning and intelligent algorithms, you can perform advanced analytical processes such as sentiment analysis. This technique allows you to understand the intentions and emotions behind a text, for example, whether it's positive, negative, or neutral, and then give it a score depending on certain factors and categories that are relevant to your brand. Sentiment analysis is often used to monitor brand and product reputation and to understand how successful your customer experience is. To learn more about the topic, check out this insightful article .

By analyzing data from various word-based sources, including product reviews, articles, social media communications, and survey responses, you will gain invaluable insights into your audience, as well as their needs, preferences, and pain points. This will allow you to create campaigns, services, and communications that meet your prospects’ needs on a personal level, growing your audience while boosting customer retention. There are various other “sub-methods” that are an extension of text analysis. Each of them serves a more specific purpose and we will look at them in detail next. 
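As one concrete, hedged example of the sentiment analysis mentioned above, the sketch below uses NLTK's off-the-shelf VADER analyzer; the sample reviews are made up:

```python
# Requires: pip install nltk, plus a one-time lexicon download.
import nltk
nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

reviews = [
    "Absolutely love this product, works perfectly!",
    "Terrible experience, arrived broken and support never replied.",
]

analyzer = SentimentIntensityAnalyzer()
for review in reviews:
    # 'compound' ranges from -1 (very negative) to +1 (very positive).
    score = analyzer.polarity_scores(review)["compound"]
    print(f"{score:+.2f}  {review}")
```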

13. Content Analysis

This is a straightforward and very popular method that examines the presence and frequency of certain words, concepts, and subjects in different content formats such as text, image, audio, or video. For example, the number of times the name of a celebrity is mentioned on social media or online tabloids. It does this by coding text data that is later categorized and tabulated in a way that can provide valuable insights, making it the perfect mix of quantitative and qualitative analysis.

There are two types of content analysis. The first one is the conceptual analysis which focuses on explicit data, for instance, the number of times a concept or word is mentioned in a piece of content. The second one is relational analysis, which focuses on the relationship between different concepts or words and how they are connected within a specific context. 

Content analysis is often used by marketers to measure brand reputation and customer behavior, for example, by analyzing customer reviews. It can also be used to analyze customer interviews and find directions for new product development. It is also important to note that, in order to extract the maximum potential out of this analysis method, it is necessary to have a clearly defined research question. 
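At its most basic, the conceptual variant boils down to counting term frequencies. Here is a tiny sketch in plain Python; the reviews and tracked terms are invented:

```python
import re
from collections import Counter

# Hypothetical customer reviews; a conceptual content analysis counts
# how often terms of interest appear across them.
reviews = [
    "The battery life is great, but the battery takes ages to charge.",
    "Great screen, terrible battery.",
]

words = re.findall(r"[a-z']+", " ".join(reviews).lower())
counts = Counter(words)

for term in ["battery", "great", "screen"]:
    print(term, counts[term])
```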

14. Thematic Analysis

Very similar to content analysis, thematic analysis also helps in identifying and interpreting patterns in qualitative data, with the main difference being that content analysis can also be applied to quantitative analysis. The thematic method analyzes large pieces of text data, such as focus group transcripts or interviews, and groups them into themes or categories that come up frequently within the text. It is a great method when trying to figure out people's views and opinions about a certain topic. For example, if you are a brand that cares about sustainability, you can survey your customers to analyze their views and opinions about sustainability and how they apply it to their lives. You can also analyze customer service call transcripts to find common issues and improve your service. 

Thematic analysis is a very subjective technique that relies on the researcher's judgment. Therefore, to avoid biases, it follows six steps: familiarization, coding, generating themes, reviewing themes, defining and naming themes, and writing up. It is also important to note that, because it is a flexible approach, the data can be interpreted in multiple ways, and it can be hard to select which data is most important to emphasize. 

15. Narrative Analysis 

A bit more complex in nature than the two previous ones, narrative analysis is used to explore the meaning behind the stories that people tell and most importantly, how they tell them. By looking into the words that people use to describe a situation you can extract valuable conclusions about their perspective on a specific topic. Common sources for narrative data include autobiographies, family stories, opinion pieces, and testimonials, among others. 

From a business perspective, narrative analysis can be useful to analyze customer behaviors and feelings towards a specific product, service, feature, or others. It provides unique and deep insights that can be extremely valuable. However, it has some drawbacks.  

The biggest weakness of this method is that the sample sizes are usually very small due to the complexity and time-consuming nature of the collection of narrative data. Plus, the way a subject tells a story will be significantly influenced by his or her specific experiences, making it very hard to replicate in a subsequent study. 

16. Discourse Analysis

Discourse analysis is used to understand the meaning behind any type of written, verbal, or symbolic discourse based on its political, social, or cultural context. It mixes the analysis of languages and situations together. This means that the way the content is constructed and the meaning behind it is significantly influenced by the culture and society it takes place in. For example, if you are analyzing political speeches you need to consider different context elements such as the politician's background, the current political context of the country, the audience to which the speech is directed, and so on. 

From a business point of view, discourse analysis is a great market research tool. It allows marketers to understand how the norms and ideas of the specific market work and how their customers relate to those ideas. It can be very useful to build a brand mission or develop a unique tone of voice. 

17. Grounded Theory Analysis

Traditionally, researchers decide on a method and hypothesis and start to collect the data to prove that hypothesis. Grounded theory is a method that doesn't require an initial research question or hypothesis, as its value lies in the generation of new theories. With the grounded theory method, you can go into the analysis process with an open mind and explore the data to generate new theories through tests and revisions. In fact, it is not necessary to finish collecting the data before starting to analyze it; researchers usually begin finding valuable insights as they are gathering the data. 

All of these elements make grounded theory a very valuable method as theories are fully backed by data instead of initial assumptions. It is a great technique to analyze poorly researched topics or find the causes behind specific company outcomes. For example, product managers and marketers might use the grounded theory to find the causes of high levels of customer churn and look into customer surveys and reviews to develop new theories about the causes. 

How To Analyze Data? Top 17 Data Analysis Techniques To Apply

17 top data analysis techniques by datapine

Now that we’ve answered the questions “what is data analysis?” and “why is it important?”, and covered the different data analysis types, it’s time to dig deeper into how to perform your analysis by working through these 17 essential techniques.

1. Collaborate your needs

Before you begin analyzing or drilling down into any techniques, it’s crucial to sit down collaboratively with all key stakeholders within your organization, decide on your primary campaign or strategic goals, and gain a fundamental understanding of the types of insights that will best benefit your progress or provide you with the level of vision you need to evolve your organization.

2. Establish your questions

Once you’ve outlined your core objectives, you should consider which questions will need answering to help you achieve your mission. This is one of the most important techniques as it will shape the very foundations of your success.

To help you ask the right things and ensure your data works for you, you have to ask the right data analysis questions.

3. Data democratization

After giving your data analytics methodology some real direction, and knowing which questions need answering to extract optimum value from the information available to your organization, you should continue with democratization.

Data democratization is an action that aims to connect data from various sources efficiently and quickly so that anyone in your organization can access it at any given moment. You can extract data in text, images, videos, numbers, or any other format. And then perform cross-database analysis to achieve more advanced insights to share with the rest of the company interactively.  

Once you have decided on your most valuable sources, you need to take all of this into a structured format to start collecting your insights. For this purpose, datapine offers an easy all-in-one data connectors feature to integrate all your internal and external sources and manage them at your will. Additionally, datapine’s end-to-end solution automatically updates your data, allowing you to save time and focus on performing the right analysis to grow your company.

data connectors from datapine

4. Think of governance 

When collecting data in a business or research context you always need to think about security and privacy. With data breaches becoming a topic of concern for businesses, the need to protect your client's or subject’s sensitive information becomes critical. 

To ensure that all this is taken care of, you need to think of a data governance strategy. According to Gartner, this concept refers to “the specification of decision rights and an accountability framework to ensure the appropriate behavior in the valuation, creation, consumption, and control of data and analytics.” In simpler words, data governance is a collection of processes, roles, and policies that ensure the efficient use of data while still achieving the main company goals. It ensures that clear roles are in place for who can access the information and how they can access it. In time, this not only ensures that sensitive information is protected but also allows for an efficient analysis as a whole. 

5. Clean your data

After harvesting from so many sources you will be left with a vast amount of information that can be overwhelming to deal with. At the same time, you can be faced with incorrect data that can be misleading to your analysis. The smartest thing you can do to avoid dealing with this in the future is to clean the data. This is fundamental before visualizing it, as it will ensure that the insights you extract from it are correct.

There are many things that you need to look for in the cleaning process. The most important one is to eliminate any duplicate observations; these usually appear when using multiple internal and external sources of information. You can also add any missing codes, fix empty fields, and eliminate incorrectly formatted data.

Another usual form of cleaning is done with text data. As we mentioned earlier, most companies today analyze customer reviews, social media comments, questionnaires, and several other text inputs. In order for algorithms to detect patterns, text data needs to be revised to avoid invalid characters or any syntax or spelling errors. 

Most importantly, the aim of cleaning is to prevent you from arriving at false conclusions that can damage your company in the long run. By using clean data, you will also help BI solutions to interact better with your information and create better reports for your organization.

6. Set your KPIs

Once you’ve set your sources, cleaned your data, and established clear-cut questions you want your insights to answer, you need to set a host of key performance indicators (KPIs) that will help you track, measure, and shape your progress in a number of key areas.

KPIs are critical to both qualitative and quantitative analysis research. This is one of the primary methods of data analysis you certainly shouldn’t overlook.

To help you set the best possible KPIs for your initiatives and activities, here is an example of a relevant logistics KPI : transportation-related costs. If you want to see more go explore our collection of key performance indicator examples .

Transportation costs logistics KPIs

7. Omit useless data

Having bestowed your data analysis tools and techniques with true purpose and defined your mission, you should explore the raw data you’ve collected from all sources and use your KPIs as a reference for chopping out any information you deem to be useless.

Trimming the informational fat is one of the most crucial methods of analysis as it will allow you to focus your analytical efforts and squeeze every drop of value from the remaining ‘lean’ information.

Any stats, facts, figures, or metrics that don’t align with your business goals or fit with your KPI management strategies should be eliminated from the equation.

8. Build a data management roadmap

While, at this point, this particular step is optional (you will have already gained a wealth of insight and formed a fairly sound strategy by now), creating a data governance roadmap will help your data analysis methods and techniques become successful on a more sustainable basis. These roadmaps, if developed properly, are also built so they can be tweaked and scaled over time.

Invest ample time in developing a roadmap that will help you store, manage, and handle your data internally, and you will make your analysis techniques all the more fluid and functional – one of the most powerful types of data analysis methods available today.

9. Integrate technology

There are many ways to analyze data, but one of the most vital aspects of analytical success in a business context is integrating the right decision support software and technology.

Robust analysis platforms will not only allow you to pull critical data from your most valuable sources while working with dynamic KPIs that will offer you actionable insights; it will also present them in a digestible, visual, interactive format from one central, live dashboard . A data methodology you can count on.

By integrating the right technology within your data analysis methodology, you’ll avoid fragmenting your insights, saving you time and effort while allowing you to enjoy the maximum value from your business’s most valuable insights.

For a look at the power of software for the purpose of analysis and to enhance your methods of analyzing, glance over our selection of dashboard examples .

10. Answer your questions

By considering each of the above efforts, working with the right technology, and fostering a cohesive internal culture where everyone buys into the different ways to analyze data as well as the power of digital intelligence, you will swiftly start to answer your most burning business questions. Arguably, the best way to make your data concepts accessible across the organization is through data visualization.

11. Visualize your data

Online data visualization is a powerful tool as it lets you tell a story with your metrics, allowing users across the organization to extract meaningful insights that aid business evolution – and it covers all the different ways to analyze data.

The purpose of analyzing is to make your entire organization more informed and intelligent, and with the right platform or dashboard, this is simpler than you think, as demonstrated by our marketing dashboard .

An executive dashboard example showcasing high-level marketing KPIs such as cost per lead, MQL, SQL, and cost per customer.

This visual, dynamic, and interactive online dashboard is a data analysis example designed to give Chief Marketing Officers (CMO) an overview of relevant metrics to help them understand if they achieved their monthly goals.

In detail, this example generated with a modern dashboard creator displays interactive charts for monthly revenues, costs, net income, and net income per customer; all of them are compared with the previous month so that you can understand how the data fluctuated. In addition, it shows a detailed summary of the number of users, customers, SQLs, and MQLs per month to visualize the whole picture and extract relevant insights or trends for your marketing reports .

The CMO dashboard is perfect for c-level management as it can help them monitor the strategic outcome of their marketing efforts and make data-driven decisions that can benefit the company exponentially.

12. Be careful with the interpretation

We already dedicated an entire post to data interpretation as it is a fundamental part of the process of data analysis. It gives meaning to the analytical information and aims to drive a concise conclusion from the analysis results. Since most of the time companies are dealing with data from many different sources, the interpretation stage needs to be done carefully and properly in order to avoid misinterpretations. 

To help you through the process, here we list three common practices that you need to avoid at all costs when looking at your data:

  • Correlation vs. causation: The human brain is wired to find patterns. This behavior leads to one of the most common mistakes when performing interpretation: confusing correlation with causation. Although these two aspects can exist simultaneously, it is not correct to assume that because two things happened together, one provoked the other. A piece of advice to avoid falling into this mistake is never to trust just intuition; trust the data. If there is no objective evidence of causation, then always stick to correlation. 
  • Confirmation bias: This phenomenon describes the tendency to select and interpret only the data necessary to prove one hypothesis, often ignoring the elements that might disprove it. Even if it's not done on purpose, confirmation bias can represent a real problem, as excluding relevant information can lead to false conclusions and, therefore, bad business decisions. To avoid it, always try to disprove your hypothesis instead of proving it, share your analysis with other team members, and avoid drawing any conclusions before the entire analytical project is finalized.
  • Statistical significance: In short, statistical significance helps analysts understand if a result is actually accurate or if it happened because of a sampling error or pure chance. The level of statistical significance needed might depend on the sample size and the industry being analyzed. In any case, ignoring the significance of a result when it might influence decision-making can be a huge mistake (a quick significance-test sketch follows this list).
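To make the last point tangible, here is a minimal two-sample t-test with SciPy. The two sets of measurements are invented to stand in for two campaign variants:

```python
from scipy.stats import ttest_ind

# Hypothetical metric (e.g. average order rating) for two campaign variants.
variant_a = [4.1, 3.9, 4.5, 4.2, 3.8, 4.4, 4.0, 4.3]
variant_b = [4.6, 4.8, 4.4, 4.9, 4.7, 4.5, 4.8, 4.6]

t_stat, p_value = ttest_ind(variant_a, variant_b)

# A small p-value (commonly below 0.05) suggests the difference is
# unlikely to be due to sampling error alone.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```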

13. Build a narrative

Now, we’re going to look at how you can bring all of these elements together in a way that will benefit your business - starting with a little something called data storytelling.

The human brain responds incredibly well to strong stories or narratives. Once you’ve cleansed, shaped, and visualized your most invaluable data using various BI dashboard tools , you should strive to tell a story - one with a clear-cut beginning, middle, and end.

By doing so, you will make your analytical efforts more accessible, digestible, and universal, empowering more people within your organization to use your discoveries to their actionable advantage.

14. Consider autonomous technology

Autonomous technologies, such as artificial intelligence (AI) and machine learning (ML), play a significant role in the advancement of understanding how to analyze data more effectively.

Gartner predicts that by the end of this year, 80% of emerging technologies will be developed with AI foundations. This is a testament to the ever-growing power and value of autonomous technologies.

At the moment, these technologies are revolutionizing the analysis industry. Some examples that we mentioned earlier are neural networks, intelligent alarms, and sentiment analysis.

15. Share the load

If you work with the right tools and dashboards, you will be able to present your metrics in a digestible, value-driven format, allowing almost everyone in the organization to connect with and use relevant data to their advantage.

Modern dashboards consolidate data from various sources, providing access to a wealth of insights in one centralized location, no matter if you need to monitor recruitment metrics or generate reports that need to be sent across numerous departments. Moreover, these cutting-edge tools offer access to dashboards from a multitude of devices, meaning that everyone within the business can connect with practical insights remotely - and share the load.

Once everyone is able to work with a data-driven mindset, you will catalyze the success of your business in ways you never thought possible. And when it comes to knowing how to analyze data, this kind of collaborative approach is essential.

16. Data analysis tools

In order to perform high-quality analysis of data, it is fundamental to use tools and software that will ensure the best results. Here is a brief summary of four fundamental categories of data analysis tools for your organization.

  • Business Intelligence: BI tools allow you to process significant amounts of data from several sources in any format. Through this, you can not only analyze and monitor your data to extract relevant insights but also create interactive reports and dashboards to visualize your KPIs and use them for your company's good. datapine is an amazing online BI software that is focused on delivering powerful online analysis features that are accessible to beginner and advanced users. Like this, it offers a full-service solution that includes cutting-edge analysis of data, KPIs visualization, live dashboards, reporting, and artificial intelligence technologies to predict trends and minimize risk.
  • Statistical analysis: These tools are usually designed for scientists, statisticians, market researchers, and mathematicians, as they allow them to perform complex statistical analyses with methods like regression analysis, predictive analysis, and statistical modeling. A good tool to perform this type of analysis is R-Studio as it offers a powerful data modeling and hypothesis testing feature that can cover both academic and general data analysis. This tool is one of the favorite ones in the industry, due to its capability for data cleaning, data reduction, and performing advanced analysis with several statistical methods. Another relevant tool to mention is SPSS from IBM. The software offers advanced statistical analysis for users of all skill levels. Thanks to a vast library of machine learning algorithms, text analysis, and a hypothesis testing approach it can help your company find relevant insights to drive better decisions. SPSS also works as a cloud service that enables you to run it anywhere.
  • SQL Consoles: SQL is a programming language often used to handle structured data in relational databases. Tools like these are popular among data scientists as they are extremely effective in unlocking these databases' value. Undoubtedly, one of the most used SQL software in the market is MySQL Workbench . This tool offers several features such as a visual tool for database modeling and monitoring, complete SQL optimization, administration tools, and visual performance dashboards to keep track of KPIs.
  • Data Visualization: These tools are used to represent your data through charts, graphs, and maps that allow you to find patterns and trends in the data. datapine's already mentioned BI platform also offers a wealth of powerful online data visualization tools with several benefits. Some of them include: delivering compelling data-driven presentations to share with your entire company, the ability to see your data online with any device wherever you are, an interactive dashboard design feature that enables you to showcase your results in an interactive and understandable way, and to perform online self-service reports that can be used simultaneously with several other people to enhance team productivity.

17. Refine your process constantly 

Last is a step that might seem obvious to some people, but it can be easily ignored if you think you are done. Once you have extracted the needed results, you should always take a retrospective look at your project and think about what you can improve. As you saw throughout this long list of techniques, data analysis is a complex process that requires constant refinement. For this reason, you should always go one step further and keep improving. 

Quality Criteria For Data Analysis

So far we’ve covered a list of methods and techniques that should help you perform efficient data analysis. But how do you measure the quality and validity of your results? This is done with the help of established scientific quality criteria. Here we will go into a more theoretical area that is critical to understanding the fundamentals of statistical analysis in science. However, you should also be aware of these steps in a business context, as they will allow you to assess the quality of your results in the correct way. Let’s dig in. 

  • Internal validity: The results of a survey are internally valid if they measure what they are supposed to measure and thus provide credible results. In other words, internal validity measures the trustworthiness of the results and how they can be affected by factors such as the research design, operational definitions, how the variables are measured, and more. For instance, imagine you are doing an interview to ask people if they brush their teeth two times a day. While most of them will answer yes, you can still notice that their answers correspond to what is socially acceptable, which is to brush your teeth at least twice a day. In this case, you can’t be 100% sure if respondents actually brush their teeth twice a day or if they just say that they do; therefore, the internal validity of this interview is very low. 
  • External validity: Essentially, external validity refers to the extent to which the results of your research can be applied to a broader context. It basically aims to prove that the findings of a study can be applied in the real world. If the research can be applied to other settings, individuals, and times, then the external validity is high. 
  • Reliability: If your research is reliable, it means that it can be reproduced. If your measurement were repeated under the same conditions, it would produce similar results. This means that your measuring instrument consistently produces reliable results. For example, imagine a doctor building a symptoms questionnaire to detect a specific disease in a patient. Then, various other doctors use this questionnaire but end up diagnosing the same patient with a different condition. This means the questionnaire is not reliable in detecting the initial disease. Another important note here is that in order for your research to be reliable, it also needs to be objective. If the results of a study are the same, independent of who assesses them or interprets them, the study can be considered reliable. Let’s see the objectivity criteria in more detail now. 
  • Objectivity: In data science, objectivity means that the researcher needs to stay fully objective when it comes to its analysis. The results of a study need to be affected by objective criteria and not by the beliefs, personality, or values of the researcher. Objectivity needs to be ensured when you are gathering the data, for example, when interviewing individuals, the questions need to be asked in a way that doesn't influence the results. Paired with this, objectivity also needs to be thought of when interpreting the data. If different researchers reach the same conclusions, then the study is objective. For this last point, you can set predefined criteria to interpret the results to ensure all researchers follow the same steps. 

The discussed quality criteria cover mostly potential influences in a quantitative context. Analysis in qualitative research has, by default, additional subjective influences that must be controlled in a different way. Therefore, there are other quality criteria for this kind of research, such as credibility, transferability, dependability, and confirmability. You can see each of them in more detail in this resource. 

Data Analysis Limitations & Barriers

Analyzing data is not an easy task. As you’ve seen throughout this post, there are many steps and techniques that you need to apply in order to extract useful information from your research. While a well-performed analysis can bring various benefits to your organization it doesn't come without limitations. In this section, we will discuss some of the main barriers you might encounter when conducting an analysis. Let’s see them more in detail. 

  • Lack of clear goals: No matter how good your data or analysis might be, if you don’t have clear goals or a hypothesis, the process might be worthless. While we mentioned some methods that don’t require a predefined hypothesis, it is always better to enter the analytical process with some clear guidelines of what you are expecting to get out of it, especially in a business context in which data is utilized to support important strategic decisions. 
  • Objectivity: Arguably one of the biggest barriers when it comes to data analysis in research is to stay objective. When trying to prove a hypothesis, researchers might find themselves, intentionally or unintentionally, directing the results toward an outcome that they want. To avoid this, always question your assumptions and avoid confusing facts with opinions. You can also show your findings to a research partner or external person to confirm that your results are objective. 
  • Data representation: A fundamental part of the analytical procedure is the way you represent your data. You can use various graphs and charts to represent your findings, but not all of them will work for all purposes. Choosing the wrong visual can not only damage your analysis but can mislead your audience; therefore, it is important to understand when to use each type of visual depending on your analytical goals. Our complete guide on the types of graphs and charts lists 20 different visuals with examples of when to use them. 
  • Flawed correlation: Misleading statistics can significantly damage your research. We’ve already pointed out a few interpretation issues previously in the post, but it is an important barrier that we can't avoid addressing here as well. Flawed correlations occur when two variables appear related to each other but they are not. Confusing correlation with causation can lead to a wrong interpretation of results, which can lead to building wrong strategies and loss of resources; therefore, it is very important to identify the different interpretation mistakes and avoid them. 
  • Sample size: A very common barrier to a reliable and efficient analysis process is the sample size. In order for the results to be trustworthy, the sample size should be representative of what you are analyzing. For example, imagine you have a company of 1,000 employees and you ask the question “do you like working here?” to 50 employees, of which 49 say yes, which means 98%. Now, imagine you ask the same question to all 1,000 employees and 980 say yes, which also means 98%. Saying that 98% of employees like working in the company when the sample size was only 50 is not a representative or trustworthy conclusion. The significance of the results is far more accurate when surveying a bigger sample size (the margin-of-error sketch after this list illustrates why).   
  • Privacy concerns: In some cases, data collection can be subjected to privacy regulations. Businesses gather all kinds of information from their customers from purchasing behaviors to addresses and phone numbers. If this falls into the wrong hands due to a breach, it can affect the security and confidentiality of your clients. To avoid this issue, you need to collect only the data that is needed for your research and, if you are using sensitive facts, make it anonymous so customers are protected. The misuse of customer data can severely damage a business's reputation, so it is important to keep an eye on privacy. 
  • Lack of communication between teams : When it comes to performing data analysis on a business level, it is very likely that each department and team will have different goals and strategies. However, they are all working for the same common goal of helping the business run smoothly and keep growing. When teams are not connected and communicating with each other, it can directly affect the way general strategies are built. To avoid these issues, tools such as data dashboards enable teams to stay connected through data in a visually appealing way. 
  • Innumeracy: Businesses are working with data more and more every day. While there are many BI tools available to perform effective analysis, data literacy is still a constant barrier. Not all employees know how to apply analysis techniques or extract insights from them. To prevent this from happening, you can implement different training opportunities that will prepare every relevant user to deal with data. 
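The sample size point can be quantified with the standard margin-of-error formula for a proportion. A small sketch, with figures mirroring the employee example above:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# 98% said "yes" in both cases, but the uncertainty differs hugely.
for n in (50, 1000):
    moe = margin_of_error(0.98, n)
    print(f"n={n:>4}: 98% ± {moe * 100:.1f} percentage points")
```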

Key Data Analysis Skills

As you've learned throughout this lengthy guide, analyzing data is a complex task that requires a lot of knowledge and skills. That said, thanks to the rise of self-service tools, the process is far more accessible and agile than it once was. Regardless, there are still some key skills that are valuable to have when working with data; we list the most important ones below.

  • Critical and statistical thinking: To successfully analyze data you need to be creative and think outside the box. Yes, that might sound like a strange statement considering that data is often tied to facts. However, a great level of critical thinking is required to uncover connections, come up with a valuable hypothesis, and extract conclusions that go a step further than the surface. This, of course, needs to be complemented by statistical thinking and an understanding of numbers. 
  • Data cleaning: Anyone who has ever worked with data before will tell you that the cleaning and preparation process accounts for a large share of a data analyst's work (80% is the figure commonly cited); therefore, the skill is fundamental. But not just that: failing to clean the data adequately can also significantly damage the analysis, which can lead to poor decision-making in a business scenario. While there are multiple tools that automate the cleaning process and eliminate the possibility of human error, it is still a valuable skill to master. 
  • Data visualization: Visuals make the information easier to understand and analyze, not only for professional users but especially for non-technical ones. Having the necessary skills to not only choose the right chart type but know when to apply it correctly is key. This also means being able to design visually compelling charts that make the data exploration process more efficient. 
  • SQL: The Structured Query Language, or SQL, is a programming language used to communicate with databases. It is fundamental knowledge, as it enables you to update, manipulate, and organize data in relational databases, which are the most common databases used by companies. It is fairly easy to learn and one of the most valuable skills when it comes to data analysis (see the short sketch after this list). 
  • Communication skills: This is a skill that is especially valuable in a business environment. Being able to clearly communicate analytical outcomes to colleagues is incredibly important, especially when the information you are trying to convey is complex for non-technical people. This applies to in-person communication as well as written format, for example, when generating a dashboard or report. While this might be considered a “soft” skill compared to the other ones we mentioned, it should not be ignored as you most likely will need to share analytical findings with others no matter the context. 
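For a taste of what the SQL skill above looks like in practice, here is a self-contained sketch using Python's built-in sqlite3 module with a hypothetical orders table:

```python
import sqlite3

# In-memory database with a hypothetical orders table, to illustrate
# the kind of aggregate query analysts write daily.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 120.0), ("bob", 80.0), ("alice", 45.5)],
)

# Aggregate total spend per customer, highest first.
for row in conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY 2 DESC"
):
    print(row)
conn.close()
```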

Data Analysis In The Big Data Environment

Big data is invaluable to today’s businesses, and by using different methods for data analysis, it’s possible to view your data in a way that can help you turn insight into positive action.

To inspire your efforts and put the importance of big data into context, here are some insights that you should know:

  • By 2026 the industry of big data is expected to be worth approximately $273.4 billion.
  • 94% of enterprises say that analyzing data is important for their growth and digital transformation. 
  • Companies that exploit the full potential of their data can increase their operating margins by 60% .
  • We have already discussed the benefits of artificial intelligence throughout this article. This industry's financial impact is expected to grow to $40 billion by 2025.

Data analysis concepts may come in many forms, but fundamentally, any solid methodology will help to make your business more streamlined, cohesive, insightful, and successful than ever before.

Key Takeaways From Data Analysis 

As we reach the end of our data analysis journey, we leave a small summary of the main methods and techniques to perform excellent analysis and grow your business.

17 Essential Types of Data Analysis Methods:

  • Cluster analysis
  • Cohort analysis
  • Regression analysis
  • Factor analysis
  • Neural Networks
  • Data Mining
  • Text analysis
  • Time series analysis
  • Decision trees
  • Conjoint analysis 
  • Correspondence Analysis
  • Multidimensional Scaling 
  • Content analysis 
  • Thematic analysis
  • Narrative analysis 
  • Grounded theory analysis
  • Discourse analysis 

Top 17 Data Analysis Techniques:

  • Collaborate your needs
  • Establish your questions
  • Data democratization
  • Think of data governance 
  • Clean your data
  • Set your KPIs
  • Omit useless data
  • Build a data management roadmap
  • Integrate technology
  • Answer your questions
  • Visualize your data
  • Interpretation of data
  • Consider autonomous technology
  • Build a narrative
  • Share the load
  • Data Analysis tools
  • Refine your process constantly 

We’ve pondered the data analysis definition and drilled down into the practical applications of data-centric analytics, and one thing is clear: by taking measures to arrange your data and making your metrics work for you, it’s possible to transform raw information into action, the kind that will push your business to the next level.

Yes, good data analytics techniques result in enhanced business intelligence (BI). To help you understand this notion in more detail, read our exploration of business intelligence reporting .

And, if you’re ready to perform your own analysis, drill down into your facts and figures while interacting with your data on astonishing visuals, you can try our software for a free, 14-day trial.

University Library, University of Illinois at Urbana-Champaign

Qualitative Data Analysis: Qualitative Data Analysis Strategies

Defining Strategies for Qualitative Data Analysis

Analysis is a process of deconstructing and reconstructing evidence that involves purposeful interrogation and critical thinking about data in order to produce a meaningful interpretation and relevant understanding in answer to the questions asked or that arise in the process of investigation (Bazeley, 2021, p. 3).

When we analyze qualitative data, we need systematic, rigorous, and transparent ways of manipulating our data in order to begin developing answers to our research questions. We also need to keep careful track of the steps we've taken to conduct our analysis in order to communicate this process to readers and reviewers. 

Beyond coding, it is not always clear what steps you should take to analyze your data. In this series of pages, I offer some basic information about different strategies you might use to analyze your qualitative data, as well as information on how you can use these strategies in different QDA software programs. 

Cited on this page

Bazeley, P. (2021). Qualitative data analysis: Practical strategies (2nd ed.). Sage.

Grad Coach

Qualitative Data Analysis Methods 101:

The “big 6” methods + examples.

By: Kerryn Warren (PhD) | Reviewed By: Eunice Rautenbach (D.Tech) | May 2020 (Updated April 2023)

Qualitative data analysis methods. Wow, that’s a mouthful. 

If you’re new to the world of research, qualitative data analysis can look rather intimidating. So much bulky terminology and so many abstract, fluffy concepts. It certainly can be a minefield!

Don’t worry – in this post, we’ll unpack the most popular analysis methods , one at a time, so that you can approach your analysis with confidence and competence – whether that’s for a dissertation, thesis or really any kind of research project.

What (exactly) is qualitative data analysis?

To understand qualitative data analysis, we need to first understand qualitative data – so let’s step back and ask the question, “what exactly is qualitative data?”.

Qualitative data refers to pretty much any data that’s “not numbers” . In other words, it’s not the stuff you measure using a fixed scale or complex equipment, nor do you analyse it using complex statistics or mathematics.

So, if it’s not numbers, what is it?

Words, you guessed it? Well… sometimes, yes. Qualitative data can, and often does, take the form of interview transcripts, documents and open-ended survey responses – but it can also involve the interpretation of images and videos. In other words, qualitative isn’t just limited to text-based data.

So, how’s that different from quantitative data, you ask?

Simply put, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research focuses on numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them. If you’re keen to learn more about the differences between qual and quant, we’ve got a detailed post over here.

So, qualitative analysis is easier than quantitative, right?

Not quite. In many ways, qualitative data can be challenging and time-consuming to analyse and interpret. At the end of your data collection phase (which itself takes a lot of time), you’ll likely have many pages of text-based data or hours upon hours of audio to work through. You might also have subtle nuances of interactions or discussions that have danced around in your mind, or that you scribbled down in messy field notes. All of this needs to work its way into your analysis.

Making sense of all of this is no small task and you shouldn’t underestimate it. Long story short – qualitative analysis can be a lot of work! Of course, quantitative analysis is no piece of cake either, but it’s important to recognise that qualitative analysis still requires a significant investment in terms of time and effort.

In this post, we’ll explore qualitative data analysis by looking at some of the most common analysis methods we encounter. We’re not going to cover every possible qualitative method and we’re not going to go into heavy detail – we’re just going to give you the big picture. That said, we will of course include links to loads of extra resources so that you can learn more about whichever analysis method interests you.

Without further delay, let’s get into it.

The “Big 6” Qualitative Analysis Methods 

There are many different types of qualitative data analysis, all of which serve different purposes and have unique strengths and weaknesses . We’ll start by outlining the analysis methods and then we’ll dive into the details for each.

The 6 most popular methods (or at least the ones we see at Grad Coach) are:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Thematic analysis
  • Grounded theory (GT)
  • Interpretive phenomenological analysis (IPA)

Let’s take a look at each of them…

QDA Method #1: Qualitative Content Analysis

Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication. For example, a collection of newspaper articles or political speeches.

With content analysis, you could, for instance, identify the frequency with which an idea is shared or spoken about – like the number of times a Kardashian is mentioned on Twitter. Or you could identify patterns of deeper underlying interpretations – for instance, by identifying phrases or words in tourist pamphlets that highlight India as an ancient country.

Because content analysis can be used in such a wide variety of ways, it’s important to go into your analysis with a very specific question and goal, or you’ll get lost in the fog. With content analysis, you’ll group large amounts of text into codes , summarise these into categories, and possibly even tabulate the data to calculate the frequency of certain concepts or variables. Because of this, content analysis provides a small splash of quantitative thinking within a qualitative method.
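
To make that “small splash of quantitative thinking” concrete, here is a minimal sketch that tabulates code frequencies with Python’s collections.Counter; the codes are hypothetical labels you might have assigned to pamphlet segments:

    from collections import Counter

    # Hypothetical codes assigned to segments of tourist pamphlets
    coded_segments = [
        "ancient heritage", "spirituality", "ancient heritage",
        "modern economy", "ancient heritage", "spirituality",
    ]

    # Tabulate how often each code appears across the corpus
    code_frequencies = Counter(coded_segments)
    for code, count in code_frequencies.most_common():
        print(f"{code}: {count}")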

Naturally, while content analysis is widely useful, it’s not without its drawbacks . One of the main issues with content analysis is that it can be very time-consuming , as it requires lots of reading and re-reading of the texts. Also, because of its multidimensional focus on both qualitative and quantitative aspects, it is sometimes accused of losing important nuances in communication.

Content analysis also tends to concentrate on a very specific timeline and doesn’t take into account what happened before or after that timeline. This isn’t necessarily a bad thing though – just something to be aware of. So, keep these factors in mind if you’re considering content analysis. Every analysis method has its limitations , so don’t be put off by these – just be aware of them ! If you’re interested in learning more about content analysis, the video below provides a good starting point.

QDA Method #2: Narrative Analysis 

As the name suggests, narrative analysis is all about listening to people telling stories and analysing what that means . Since stories serve a functional purpose of helping us make sense of the world, we can gain insights into the ways that people deal with and make sense of reality by analysing their stories and the ways they’re told.

You could, for example, use narrative analysis to explore whether how something is being said is important. For instance, the narrative of a prisoner trying to justify their crime could provide insight into their view of the world and the justice system. Similarly, analysing the ways entrepreneurs talk about the struggles in their careers or cancer patients telling stories of hope could provide powerful insights into their mindsets and perspectives . Simply put, narrative analysis is about paying attention to the stories that people tell – and more importantly, the way they tell them.

Of course, the narrative approach has its weaknesses , too. Sample sizes are generally quite small due to the time-consuming process of capturing narratives. Because of this, along with the multitude of social and lifestyle factors which can influence a subject, narrative analysis can be quite difficult to reproduce in subsequent research. This means that it’s difficult to test the findings of some of this research.

Similarly, researcher bias can have a strong influence on the results here, so you need to be particularly careful about the potential biases you can bring into your analysis when using this method. Nevertheless, narrative analysis is still a very useful qualitative analysis method – just keep these limitations in mind and be careful not to draw broad conclusions . If you’re keen to learn more about narrative analysis, the video below provides a great introduction to this qualitative analysis method.

QDA Method #3: Discourse Analysis 

Discourse is simply a fancy word for written or spoken language or debate . So, discourse analysis is all about analysing language within its social context. In other words, analysing language – such as a conversation, a speech, etc – within the culture and society it takes place. For example, you could analyse how a janitor speaks to a CEO, or how politicians speak about terrorism.

To truly understand these conversations or speeches, the culture and history of those involved in the communication are important factors to consider. For example, a janitor might speak more casually with a CEO in a company that emphasises equality among workers. Similarly, a politician might speak more about terrorism if there was a recent terrorist incident in the country.

So, as you can see, by using discourse analysis, you can identify how culture , history or power dynamics (to name a few) have an effect on the way concepts are spoken about. So, if your research aims and objectives involve understanding culture or power dynamics, discourse analysis can be a powerful method.

Because there are many social influences in terms of how we speak to each other, the potential use of discourse analysis is vast . Of course, this also means it’s important to have a very specific research question (or questions) in mind when analysing your data and looking for patterns and themes, or you might land up going down a winding rabbit hole.

Discourse analysis can also be very time-consuming  as you need to sample the data to the point of saturation – in other words, until no new information and insights emerge. But this is, of course, part of what makes discourse analysis such a powerful technique. So, keep these factors in mind when considering this QDA method. Again, if you’re keen to learn more, the video below presents a good starting point.

QDA Method #4: Thematic Analysis

Thematic analysis looks at patterns of meaning in a data set – for example, a set of interviews or focus group transcripts. But what exactly does that… mean? Well, a thematic analysis takes bodies of data (which are often quite large) and groups them according to similarities – in other words, themes . These themes help us make sense of the content and derive meaning from it.

Let’s take a look at an example.

With thematic analysis, you could analyse 100 online reviews of a popular sushi restaurant to find out what patrons think about the place. By reviewing the data, you would then identify the themes that crop up repeatedly within the data – for example, “fresh ingredients” or “friendly wait staff”.

So, as you can see, thematic analysis can be pretty useful for finding out about people’s experiences , views, and opinions . Therefore, if your research aims and objectives involve understanding people’s experience or view of something, thematic analysis can be a great choice.

Since thematic analysis is a bit of an exploratory process, it’s not unusual for your research questions to develop , or even change as you progress through the analysis. While this is somewhat natural in exploratory research, it can also be seen as a disadvantage as it means that data needs to be re-reviewed each time a research question is adjusted. In other words, thematic analysis can be quite time-consuming – but for a good reason. So, keep this in mind if you choose to use thematic analysis for your project and budget extra time for unexpected adjustments.

QDA Method #5: Grounded theory (GT) 

Grounded theory is a powerful qualitative analysis method where the intention is to create a new theory (or theories) using the data at hand, through a series of “ tests ” and “ revisions ”. Strictly speaking, GT is more a research design type than an analysis method, but we’ve included it here as it’s often referred to as a method.

What’s most important with grounded theory is that you go into the analysis with an open mind and let the data speak for itself – rather than dragging existing hypotheses or theories into your analysis. In other words, your analysis must develop from the ground up (hence the name). 

Let’s look at an example of GT in action.

Assume you’re interested in developing a theory about what factors influence students to watch a YouTube video about qualitative analysis. Using Grounded theory , you’d start with this general overarching question about the given population (i.e., graduate students). First, you’d approach a small sample – for example, five graduate students in a department at a university. Ideally, this sample would be reasonably representative of the broader population. You’d interview these students to identify what factors lead them to watch the video.

After analysing the interview data, a general pattern could emerge. For example, you might notice that graduate students are more likely to watch a video about qualitative methods if they are just starting on their dissertation journey, or if they have an upcoming test about research methods.

From here, you’ll look for another small sample – for example, five more graduate students in a different department – and see whether this pattern holds true for them. If not, you’ll look for commonalities and adapt your theory accordingly. As this process continues, the theory would develop . As we mentioned earlier, what’s important with grounded theory is that the theory develops from the data – not from some preconceived idea.

So, what are the drawbacks of grounded theory? Well, some argue that there’s a tricky circularity to grounded theory. For it to work, in principle, you should know as little as possible regarding the research question and population, so that you reduce the bias in your interpretation. However, in many circumstances, it’s also thought to be unwise to approach a research question without knowledge of the current literature . In other words, it’s a bit of a “chicken or the egg” situation.

Regardless, grounded theory remains a popular (and powerful) option. Naturally, it’s a very useful method when you’re researching a topic that is completely new or has very little existing research about it, as it allows you to start from scratch and work your way from the ground up .

QDA Method #6: Interpretive Phenomenological Analysis (IPA)

Interpretive. Phenomenological. Analysis. IPA . Try saying that three times fast…

Let’s just stick with IPA, okay?

IPA is designed to help you understand the personal experiences of a subject (for example, a person or group of people) concerning a major life event, an experience or a situation . This event or experience is the “phenomenon” that makes up the “P” in IPA. Such phenomena may range from relatively common events – such as motherhood, or being involved in a car accident – to those which are extremely rare – for example, someone’s personal experience in a refugee camp. So, IPA is a great choice if your research involves analysing people’s personal experiences of something that happened to them.

It’s important to remember that IPA is subject-centred. In other words, it’s focused on the experiencer. This means that, while you’ll likely use a coding system to identify commonalities, it’s important not to lose the depth of experience or meaning by trying to reduce everything to codes. Also, keep in mind that since your sample size will generally be very small with IPA, you often won’t be able to draw broad conclusions about the generalisability of your findings. But that’s okay as long as it aligns with your research aims and objectives.

Another thing to be aware of with IPA is personal bias . While researcher bias can creep into all forms of research, self-awareness is critically important with IPA, as it can have a major impact on the results. For example, a researcher who was a victim of a crime himself could insert his own feelings of frustration and anger into the way he interprets the experience of someone who was kidnapped. So, if you’re going to undertake IPA, you need to be very self-aware or you could muddy the analysis.

How to choose the right analysis method

In light of all of the qualitative analysis methods we’ve covered so far, you’re probably asking yourself the question, “ How do I choose the right one? ”

Much like all the other methodological decisions you’ll need to make, selecting the right qualitative analysis method largely depends on your research aims, objectives and questions . In other words, the best tool for the job depends on what you’re trying to build. For example:

  • Perhaps your research aims to analyse the use of words and what they reveal about the intention of the storyteller and the cultural context of the time.
  • Perhaps your research aims to develop an understanding of the unique personal experiences of people that have experienced a certain event, or
  • Perhaps your research aims to develop insight regarding the influence of a certain culture on its members.

As you can probably see, each of these research aims is distinctly different, and therefore a different analysis method would be suitable for each one. For example, narrative analysis would likely be a good option for the first aim, while grounded theory wouldn’t be as relevant. 

It’s also important to remember that each method has its own set of strengths, weaknesses and general limitations. No single analysis method is perfect . So, depending on the nature of your research, it may make sense to adopt more than one method (this is called triangulation ). Keep in mind though that this will of course be quite time-consuming.

As we’ve seen, all of the qualitative analysis methods we’ve discussed make use of coding and theme-generating techniques, but the intent and approach of each analysis method differ quite substantially. So, it’s very important to come into your research with a clear intention before you decide which analysis method (or methods) to use.

Start by reviewing your research aims , objectives and research questions to assess what exactly you’re trying to find out – then select a qualitative analysis method that fits. Never pick a method just because you like it or have experience using it – your analysis method (or methods) must align with your broader research aims and objectives.

Let’s recap on QDA methods…

In this post, we looked at six popular qualitative data analysis methods:

  • First, we looked at content analysis , a straightforward method that blends a little bit of quant into a primarily qualitative analysis.
  • Then we looked at narrative analysis , which is about analysing how stories are told.
  • Next up was discourse analysis – which is about analysing conversations and interactions.
  • Then we moved on to thematic analysis – which is about identifying themes and patterns.
  • From there, we explored grounded theory – which is about starting from scratch with a specific question and using the data alone to build a theory in response to that question.
  • And finally, we looked at IPA – which is about understanding people’s unique experiences of a phenomenon.

Of course, these aren’t the only options when it comes to qualitative data analysis, but they’re a great starting point if you’re dipping your toes into qualitative research for the first time.

If you’re still feeling a bit confused, consider our private coaching service, where we hold your hand through the research process to help you develop your best work.


PW Skills | Blog

Data Analysis Techniques in Research – Methods, Tools & Examples

Varun Saharawat is a seasoned professional in the fields of SEO and content writing. With a profound knowledge of the intricate aspects of these disciplines, Varun has established himself as a valuable asset in the world of digital marketing and online content creation.

Data analysis techniques in research are essential because they allow researchers to derive meaningful insights from data sets to support their hypotheses or research objectives.

Data Analysis Techniques in Research: While various groups, institutions, and professionals may have diverse approaches to data analysis, a universal definition captures its essence. Data analysis involves refining, transforming, and interpreting raw data to derive actionable insights that guide informed decision-making for businesses.

A straightforward illustration of data analysis emerges when we make everyday decisions, basing our choices on past experiences or predictions of potential outcomes.

If you want to learn more about this topic and acquire valuable skills that will set you apart in today’s data-driven world, we highly recommend enrolling in the Data Analytics Course by Physics Wallah. And as a special offer for our readers, use the coupon code “READER” to get a discount on this course.

What is Data Analysis?

Data analysis is the systematic process of inspecting, cleaning, transforming, and interpreting data with the objective of discovering valuable insights and drawing meaningful conclusions. This process involves several steps (a brief code sketch follows the list):

  • Inspecting: Initial examination of data to understand its structure, quality, and completeness.
  • Cleaning: Removing errors, inconsistencies, or irrelevant information to ensure accurate analysis.
  • Transforming: Converting data into a format suitable for analysis, such as normalization or aggregation.
  • Interpreting: Analyzing the transformed data to identify patterns, trends, and relationships.
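
As a rough illustration of these four steps, here is a minimal pandas sketch; the dataset, columns, and thresholds are invented for the example:

    import pandas as pd

    # Hypothetical survey dataset with missing values
    df = pd.DataFrame({
        "age": [23, 35, None, 41, 29],
        "satisfaction": [4, 5, 3, None, 4],
    })

    # Inspecting: structure, quality, and completeness
    df.info()

    # Cleaning: drop rows with missing values
    df = df.dropna()

    # Transforming: normalize satisfaction to a 0-1 scale
    df["satisfaction_norm"] = df["satisfaction"] / 5

    # Interpreting: a simple pattern, e.g. mean satisfaction by age group
    df["age_group"] = pd.cut(df["age"], bins=[0, 30, 60],
                             labels=["30 and under", "over 30"])
    print(df.groupby("age_group", observed=True)["satisfaction_norm"].mean())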

Types of Data Analysis Techniques in Research

Data analysis techniques in research are categorized into qualitative and quantitative methods, each with its specific approaches and tools. These techniques are instrumental in extracting meaningful insights, patterns, and relationships from data to support informed decision-making, validate hypotheses, and derive actionable recommendations. Below is an in-depth exploration of the various types of data analysis techniques commonly employed in research:

1) Qualitative Analysis:

Definition: Qualitative analysis focuses on understanding non-numerical data, such as opinions, concepts, or experiences, to derive insights into human behavior, attitudes, and perceptions.

  • Content Analysis: Examines textual data, such as interview transcripts, articles, or open-ended survey responses, to identify themes, patterns, or trends.
  • Narrative Analysis: Analyzes personal stories or narratives to understand individuals’ experiences, emotions, or perspectives.
  • Ethnographic Studies: Involves observing and analyzing cultural practices, behaviors, and norms within specific communities or settings.

2) Quantitative Analysis:

Quantitative analysis emphasizes numerical data and employs statistical methods to explore relationships, patterns, and trends. It encompasses several approaches:

Descriptive Analysis:

  • Frequency Distribution: Represents the number of occurrences of distinct values within a dataset.
  • Central Tendency: Measures such as mean, median, and mode provide insights into the central values of a dataset.
  • Dispersion: Techniques like variance and standard deviation indicate the spread or variability of data.

Diagnostic Analysis:

  • Regression Analysis: Assesses the relationship between dependent and independent variables, enabling prediction or understanding causality.
  • ANOVA (Analysis of Variance): Examines differences between groups to identify significant variations or effects.

Predictive Analysis:

  • Time Series Forecasting: Uses historical data points to predict future trends or outcomes.
  • Machine Learning Algorithms: Techniques like decision trees, random forests, and neural networks predict outcomes based on patterns in data.

Prescriptive Analysis:

  • Optimization Models: Utilizes linear programming, integer programming, or other optimization techniques to identify the best solutions or strategies.
  • Simulation: Mimics real-world scenarios to evaluate various strategies or decisions and determine optimal outcomes.

Specific Techniques:

  • Monte Carlo Simulation: Models probabilistic outcomes to assess risk and uncertainty (see the sketch after this list).
  • Factor Analysis: Reduces the dimensionality of data by identifying underlying factors or components.
  • Cohort Analysis: Studies specific groups or cohorts over time to understand trends, behaviors, or patterns within these groups.
  • Cluster Analysis: Classifies objects or individuals into homogeneous groups or clusters based on similarities or attributes.
  • Sentiment Analysis: Uses natural language processing and machine learning techniques to determine sentiment, emotions, or opinions from textual data.
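
As a concrete instance of the first technique in this list, here is a minimal Monte Carlo sketch that estimates the risk of missing a project deadline; the three tasks, their durations, and the normality assumption are purely illustrative:

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical project: three tasks with mean duration and std dev (days)
    means = np.array([10.0, 5.0, 8.0])
    stds = np.array([2.0, 1.0, 3.0])
    deadline = 26.0

    # Simulate 100,000 possible total project durations
    durations = rng.normal(means, stds, size=(100_000, 3)).sum(axis=1)

    # Estimated probability of missing the deadline
    print("P(late):", (durations > deadline).mean())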

Data Analysis Techniques in Research Examples

To provide a clearer understanding of how data analysis techniques are applied in research, let’s consider a hypothetical research study focused on evaluating the impact of online learning platforms on students’ academic performance.

Research Objective:

Determine if students using online learning platforms achieve higher academic performance compared to those relying solely on traditional classroom instruction.

Data Collection:

  • Quantitative Data: Academic scores (grades) of students using online platforms and those using traditional classroom methods.
  • Qualitative Data: Feedback from students regarding their learning experiences, challenges faced, and preferences.

Data Analysis Techniques Applied:

1) Descriptive Analysis:

  • Calculate the mean, median, and mode of academic scores for both groups.
  • Create frequency distributions to represent the distribution of grades in each group.

2) Diagnostic Analysis:

  • Conduct an Analysis of Variance (ANOVA) to determine if there’s a statistically significant difference in academic scores between the two groups.
  • Perform Regression Analysis to assess the relationship between the time spent on online platforms and academic performance.

3) Predictive Analysis:

  • Utilize Time Series Forecasting to predict future academic performance trends based on historical data.
  • Implement Machine Learning algorithms to develop a predictive model that identifies factors contributing to academic success on online platforms.

4) Prescriptive Analysis:

  • Apply Optimization Models to identify the optimal combination of online learning resources (e.g., video lectures, interactive quizzes) that maximize academic performance.
  • Use Simulation Techniques to evaluate different scenarios, such as varying student engagement levels with online resources, to determine the most effective strategies for improving learning outcomes.

5) Specific Techniques:

  • Conduct Factor Analysis on qualitative feedback to identify common themes or factors influencing students’ perceptions and experiences with online learning.
  • Perform Cluster Analysis to segment students based on their engagement levels, preferences, or academic outcomes, enabling targeted interventions or personalized learning strategies.
  • Apply Sentiment Analysis on textual feedback to categorize students’ sentiments as positive, negative, or neutral regarding online learning experiences.

By applying a combination of qualitative and quantitative data analysis techniques, this research example aims to provide comprehensive insights into the effectiveness of online learning platforms.

Data Analysis Techniques in Quantitative Research

Quantitative research involves collecting numerical data to examine relationships, test hypotheses, and make predictions. Various data analysis techniques are employed to interpret and draw conclusions from quantitative data. Here are some key data analysis techniques commonly used in quantitative research:

1) Descriptive Statistics:

  • Description: Descriptive statistics are used to summarize and describe the main aspects of a dataset, such as central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution (skewness, kurtosis).
  • Applications: Summarizing data, identifying patterns, and providing initial insights into the dataset.
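
As a minimal sketch of these measures using pandas (the scores are invented):

    import pandas as pd

    scores = pd.Series([72, 85, 85, 90, 64, 78, 85])

    # Central tendency
    print("mean:", scores.mean())
    print("median:", scores.median())
    print("mode:", scores.mode().tolist())

    # Variability
    print("range:", scores.max() - scores.min())
    print("variance:", scores.var())
    print("std dev:", scores.std())

    # Distribution shape
    print("skewness:", scores.skew())
    print("kurtosis:", scores.kurtosis())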

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. This technique includes hypothesis testing, confidence intervals, t-tests, chi-square tests, analysis of variance (ANOVA), regression analysis, and correlation analysis.
  • Applications: Testing hypotheses, making predictions, and generalizing findings from a sample to a larger population.
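
As one common instance, the sketch below runs a two-sided independent-samples t-test with SciPy; the two groups and their scores are hypothetical:

    from scipy import stats

    # Hypothetical test scores for two independent groups
    group_a = [78, 85, 69, 91, 74, 80]
    group_b = [72, 64, 70, 68, 75, 66]

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Conventional decision rule at alpha = 0.05
    if p_value < 0.05:
        print("Reject the null hypothesis of equal means.")
    else:
        print("Fail to reject the null hypothesis.")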

3) Regression Analysis:

  • Description: Regression analysis is a statistical technique used to model and examine the relationship between a dependent variable and one or more independent variables. Linear regression, multiple regression, logistic regression, and nonlinear regression are common types of regression analysis.
  • Applications: Predicting outcomes, identifying relationships between variables, and understanding the impact of independent variables on the dependent variable.
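
A minimal simple-linear-regression sketch with scikit-learn; the hours-studied and exam-score data are invented:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical data: hours studied (X) vs exam score (y)
    X = np.array([[1], [2], [3], [4], [5], [6]])
    y = np.array([52, 58, 65, 70, 74, 81])

    model = LinearRegression().fit(X, y)

    print("slope:", model.coef_[0])        # change in score per extra hour
    print("intercept:", model.intercept_)  # expected score at zero hours
    print("R^2:", model.score(X, y))       # share of variance explained

    # Predict the score for 7 hours of study
    print("predicted:", model.predict([[7]])[0])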

4) Correlation Analysis:

  • Description: Correlation analysis is used to measure and assess the strength and direction of the relationship between two or more variables. The Pearson correlation coefficient, Spearman rank correlation coefficient, and Kendall’s tau are commonly used measures of correlation.
  • Applications: Identifying associations between variables and assessing the degree and nature of the relationship.
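
A minimal sketch computing Pearson and Spearman coefficients with SciPy; the paired observations are invented:

    from scipy import stats

    # Hypothetical paired observations: parental income (k$) and student GPA
    income = [40, 55, 60, 75, 90, 110]
    gpa = [2.8, 3.0, 3.2, 3.4, 3.6, 3.7]

    # Pearson: strength of the linear relationship
    r, p = stats.pearsonr(income, gpa)
    print(f"Pearson r = {r:.2f} (p = {p:.4f})")

    # Spearman: strength of the monotonic (rank-based) relationship
    rho, p = stats.spearmanr(income, gpa)
    print(f"Spearman rho = {rho:.2f} (p = {p:.4f})")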

5) Factor Analysis:

  • Description: Factor analysis is a multivariate statistical technique used to identify and analyze underlying relationships or factors among a set of observed variables. It helps in reducing the dimensionality of data and identifying latent variables or constructs.
  • Applications: Identifying underlying factors or constructs, simplifying data structures, and understanding the underlying relationships among variables.
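
A minimal sketch with scikit-learn’s FactorAnalysis, fitted to synthetic data generated from two known latent factors (all numbers are illustrative):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Hypothetical survey: 6 observed items driven by 2 latent factors
    latent = rng.normal(size=(200, 2))
    loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                         [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
    observed = latent @ loadings.T + rng.normal(scale=0.3, size=(200, 6))

    # Recover the loadings of each item on the two factors
    fa = FactorAnalysis(n_components=2, random_state=0).fit(observed)
    print(fa.components_.round(2))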

6) Time Series Analysis:

  • Description: Time series analysis involves analyzing data collected or recorded over a specific period at regular intervals to identify patterns, trends, and seasonality. Techniques such as moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Fourier analysis are used.
  • Applications: Forecasting future trends, analyzing seasonal patterns, and understanding time-dependent relationships in data.
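
A minimal trend-smoothing sketch with a moving average in pandas; the monthly sales figures are invented:

    import pandas as pd

    # Hypothetical monthly sales over two years
    sales = pd.Series(
        [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
         115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140],
        index=pd.date_range("2022-01", periods=24, freq="MS"),
    )

    # 3-month moving average smooths short-term noise to reveal the trend
    trend = sales.rolling(window=3).mean()
    print(trend.tail())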

7) ANOVA (Analysis of Variance):

  • Description: Analysis of variance (ANOVA) is a statistical technique used to analyze and compare the means of two or more groups or treatments to determine if they are statistically different from each other. One-way ANOVA, two-way ANOVA, and MANOVA (Multivariate Analysis of Variance) are common types of ANOVA.
  • Applications: Comparing group means, testing hypotheses, and determining the effects of categorical independent variables on a continuous dependent variable.
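
A minimal one-way ANOVA sketch with SciPy; the three groups and their outcomes are hypothetical:

    from scipy import stats

    # Hypothetical outcomes under three teaching methods
    method_a = [85, 88, 90, 79, 84]
    method_b = [78, 74, 80, 72, 77]
    method_c = [90, 92, 88, 95, 91]

    # One-way ANOVA: are the group means statistically different?
    f_stat, p_value = stats.f_oneway(method_a, method_b, method_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")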

8) Chi-Square Tests:

  • Description: Chi-square tests are non-parametric statistical tests used to assess the association between categorical variables in a contingency table. The Chi-square test of independence, goodness-of-fit test, and test of homogeneity are common chi-square tests.
  • Applications: Testing relationships between categorical variables, assessing goodness-of-fit, and evaluating independence.
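
A minimal chi-square test of independence with SciPy; the contingency table is invented:

    import numpy as np
    from scipy import stats

    # Hypothetical contingency table: preferred learning mode by age group
    #                Online  Classroom
    # Under 30         45        25
    # 30 and over      30        50
    table = np.array([[45, 25], [30, 50]])

    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")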

These quantitative data analysis techniques provide researchers with valuable tools and methods to analyze, interpret, and derive meaningful insights from numerical data. The selection of a specific technique often depends on the research objectives, the nature of the data, and the underlying assumptions of the statistical methods being used.

Data Analysis Methods

Data analysis methods refer to the techniques and procedures used to analyze, interpret, and draw conclusions from data. These methods are essential for transforming raw data into meaningful insights, facilitating decision-making processes, and driving strategies across various fields. Here are some common data analysis methods:

1) Descriptive Statistics:

  • Description: Descriptive statistics summarize and organize data to provide a clear and concise overview of the dataset. Measures such as mean, median, mode, range, variance, and standard deviation are commonly used.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. Techniques such as hypothesis testing, confidence intervals, and regression analysis are used.

3) Exploratory Data Analysis (EDA):

  • Description: EDA techniques involve visually exploring and analyzing data to discover patterns, relationships, anomalies, and insights. Methods such as scatter plots, histograms, box plots, and correlation matrices are utilized.
  • Applications: Identifying trends, patterns, outliers, and relationships within the dataset.
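
A minimal EDA sketch with matplotlib, plotting a histogram and a scatter plot of synthetic data:

    import matplotlib.pyplot as plt
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(50, 10, 200)          # hypothetical variable, e.g. age
    y = 0.5 * x + rng.normal(0, 5, 200)  # a second, correlated variable

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))

    # Histogram: distribution of a single variable
    axes[0].hist(x, bins=20)
    axes[0].set_title("Distribution of x")

    # Scatter plot: relationship between two variables
    axes[1].scatter(x, y, alpha=0.5)
    axes[1].set_title("x vs y")

    plt.tight_layout()
    plt.show()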

4) Predictive Analytics:

  • Description: Predictive analytics use statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events or outcomes. Techniques such as regression analysis, time series forecasting, and machine learning algorithms (e.g., decision trees, random forests, neural networks) are employed.
  • Applications: Forecasting future trends, predicting outcomes, and identifying potential risks or opportunities.

5) Prescriptive Analytics:

  • Description: Prescriptive analytics involve analyzing data to recommend actions or strategies that optimize specific objectives or outcomes. Optimization techniques, simulation models, and decision-making algorithms are utilized.
  • Applications: Recommending optimal strategies, decision-making support, and resource allocation.

6) Qualitative Data Analysis:

  • Description: Qualitative data analysis involves analyzing non-numerical data, such as text, images, videos, or audio, to identify themes, patterns, and insights. Methods such as content analysis, thematic analysis, and narrative analysis are used.
  • Applications: Understanding human behavior, attitudes, perceptions, and experiences.

7) Big Data Analytics:

  • Description: Big data analytics methods are designed to analyze large volumes of structured and unstructured data to extract valuable insights. Technologies such as Hadoop, Spark, and NoSQL databases are used to process and analyze big data.
  • Applications: Analyzing large datasets, identifying trends, patterns, and insights from big data sources.

8) Text Analytics:

  • Description: Text analytics methods involve analyzing textual data, such as customer reviews, social media posts, emails, and documents, to extract meaningful information and insights. Techniques such as sentiment analysis, text mining, and natural language processing (NLP) are used.
  • Applications: Analyzing customer feedback, monitoring brand reputation, and extracting insights from textual data sources.

These data analysis methods are instrumental in transforming data into actionable insights, informing decision-making processes, and driving organizational success across various sectors, including business, healthcare, finance, marketing, and research. The selection of a specific method often depends on the nature of the data, the research objectives, and the analytical requirements of the project or organization.

Data Analysis Tools

Data analysis tools are essential instruments that facilitate the process of examining, cleaning, transforming, and modeling data to uncover useful information, make informed decisions, and drive strategies. Here are some prominent data analysis tools widely used across various industries:

1) Microsoft Excel:

  • Description: A spreadsheet software that offers basic to advanced data analysis features, including pivot tables, data visualization tools, and statistical functions.
  • Applications: Data cleaning, basic statistical analysis, visualization, and reporting.

2) R Programming Language:

  • Description: An open-source programming language specifically designed for statistical computing and data visualization.
  • Applications: Advanced statistical analysis, data manipulation, visualization, and machine learning.

3) Python (with Libraries like Pandas, NumPy, Matplotlib, and Seaborn):

  • Description: A versatile programming language with libraries that support data manipulation, analysis, and visualization.
  • Applications: Data cleaning, statistical analysis, machine learning, and data visualization.

4) SPSS (Statistical Package for the Social Sciences):

  • Description: A comprehensive statistical software suite used for data analysis, data mining, and predictive analytics.
  • Applications: Descriptive statistics, hypothesis testing, regression analysis, and advanced analytics.

5) SAS (Statistical Analysis System):

  • Description: A software suite used for advanced analytics, multivariate analysis, and predictive modeling.
  • Applications: Data management, statistical analysis, predictive modeling, and business intelligence.

6) Tableau:

  • Description: A data visualization tool that allows users to create interactive and shareable dashboards and reports.
  • Applications: Data visualization, business intelligence, and interactive dashboard creation.

7) Power BI:

  • Description: A business analytics tool developed by Microsoft that provides interactive visualizations and business intelligence capabilities.
  • Applications: Data visualization, business intelligence, reporting, and dashboard creation.

8) SQL (Structured Query Language) Databases (e.g., MySQL, PostgreSQL, Microsoft SQL Server):

  • Description: Database management systems that support data storage, retrieval, and manipulation using SQL queries.
  • Applications: Data retrieval, data cleaning, data transformation, and database management.

9) Apache Spark:

  • Description: A fast and general-purpose distributed computing system designed for big data processing and analytics.
  • Applications: Big data processing, machine learning, data streaming, and real-time analytics.

10) IBM SPSS Modeler:

  • Description: A data mining software application used for building predictive models and conducting advanced analytics.
  • Applications: Predictive modeling, data mining, statistical analysis, and decision optimization.

These tools serve various purposes and cater to different data analysis needs, from basic statistical analysis and data visualization to advanced analytics, machine learning, and big data processing. The choice of a specific tool often depends on the nature of the data, the complexity of the analysis, and the specific requirements of the project or organization.

Importance of Data Analysis in Research

The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process:

  • Data analysis helps ensure that the results obtained are valid and reliable. By systematically examining the data, researchers can identify any inconsistencies or anomalies that may affect the credibility of the findings.
  • Effective data analysis provides researchers with the necessary information to make informed decisions. By interpreting the collected data, researchers can draw conclusions, make predictions, or formulate recommendations based on evidence rather than intuition or guesswork.
  • Data analysis allows researchers to identify patterns, trends, and relationships within the data. This can lead to a deeper understanding of the research topic, enabling researchers to uncover insights that may not be immediately apparent.
  • In empirical research, data analysis plays a critical role in testing hypotheses. Researchers collect data to either support or refute their hypotheses, and data analysis provides the tools and techniques to evaluate these hypotheses rigorously.
  • Transparent and well-executed data analysis enhances the credibility of research findings. By clearly documenting the data analysis methods and procedures, researchers allow others to replicate the study, thereby contributing to the reproducibility of research findings.
  • In fields such as business or healthcare, data analysis helps organizations allocate resources more efficiently. By analyzing data on consumer behavior, market trends, or patient outcomes, organizations can make strategic decisions about resource allocation, budgeting, and planning.
  • In public policy and social sciences, data analysis is instrumental in developing and evaluating policies and interventions. By analyzing data on social, economic, or environmental factors, policymakers can assess the effectiveness of existing policies and inform the development of new ones.
  • Data analysis allows for continuous improvement in research methods and practices. By analyzing past research projects, identifying areas for improvement, and implementing changes based on data-driven insights, researchers can refine their approaches and enhance the quality of future research endeavors.

However, it is important to remember that mastering these techniques requires practice and continuous learning. That’s why we highly recommend the Data Analytics Course by Physics Wallah. Not only does it cover all the fundamentals of data analysis, but it also provides hands-on experience with various tools such as Excel, Python, and Tableau. Plus, if you use the “READER” coupon code at checkout, you can get a special discount on the course.

Data Analysis Techniques in Research FAQs

What are the 5 techniques for data analysis?

The five techniques for data analysis include: Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, Prescriptive Analysis, and Qualitative Analysis.

What are techniques of data analysis in research?

Techniques of data analysis in research encompass both qualitative and quantitative methods. These techniques involve processes like summarizing raw data, investigating causes of events, forecasting future outcomes, offering recommendations based on predictions, and examining non-numerical data to understand concepts or experiences.

What are the 3 methods of data analysis?

The three primary methods of data analysis are: Qualitative Analysis, Quantitative Analysis, and Mixed-Methods Analysis.

What are the four types of data analysis techniques?

The four types of data analysis techniques are: Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, and Prescriptive Analysis.


The Beginner's Guide to Statistical Analysis | 5 Steps & Examples

Statistical analysis means investigating trends, patterns, and relationships using quantitative data . It is an important research tool used by scientists, governments, businesses, and other organizations.

To draw valid conclusions, statistical analysis requires careful planning from the very start of the research process . You need to specify your hypotheses and make decisions about your research design, sample size, and sampling procedure.

After collecting data from your sample, you can organize and summarize the data using descriptive statistics . Then, you can use inferential statistics to formally test hypotheses and make estimates about the population. Finally, you can interpret and generalize your findings.

This article is a practical introduction to statistical analysis for students and researchers. We’ll walk you through the steps using two research examples. The first investigates a potential cause-and-effect relationship, while the second investigates a potential correlation between variables.

Table of contents

  • Step 1: Write your hypotheses and plan your research design
  • Step 2: Collect data from a sample
  • Step 3: Summarize your data with descriptive statistics
  • Step 4: Test hypotheses or make estimates with inferential statistics
  • Step 5: Interpret your results
  • Other interesting articles

To collect valid data for statistical analysis, you first need to specify your hypotheses and plan out your research design.

Writing statistical hypotheses

The goal of research is often to investigate a relationship between variables within a population . You start with a prediction, and use statistical analysis to test that prediction.

A statistical hypothesis is a formal way of writing a prediction about a population. Every research prediction is rephrased into null and alternative hypotheses that can be tested using sample data.

While the null hypothesis always predicts no effect or no relationship between variables, the alternative hypothesis states your research prediction of an effect or relationship.

  • Null hypothesis: A 5-minute meditation exercise will have no effect on math test scores in teenagers.
  • Alternative hypothesis: A 5-minute meditation exercise will improve math test scores in teenagers.
  • Null hypothesis: Parental income and GPA have no relationship with each other in college students.
  • Alternative hypothesis: Parental income and GPA are positively correlated in college students.

Planning your research design

A research design is your overall strategy for data collection and analysis. It determines the statistical tests you can use to test your hypothesis later on.

First, decide whether your research will use a descriptive, correlational, or experimental design. Experiments directly influence variables, whereas descriptive and correlational studies only measure variables.

  • In an experimental design , you can assess a cause-and-effect relationship (e.g., the effect of meditation on test scores) using statistical tests of comparison or regression.
  • In a correlational design , you can explore relationships between variables (e.g., parental income and GPA) without any assumption of causality using correlation coefficients and significance tests.
  • In a descriptive design , you can study the characteristics of a population or phenomenon (e.g., the prevalence of anxiety in U.S. college students) using statistical tests to draw inferences from sample data.

Your research design also concerns whether you’ll compare participants at the group level or individual level, or both.

  • In a between-subjects design , you compare the group-level outcomes of participants who have been exposed to different treatments (e.g., those who performed a meditation exercise vs those who didn’t).
  • In a within-subjects design , you compare repeated measures from participants who have participated in all treatments of a study (e.g., scores from before and after performing a meditation exercise).
  • In a mixed (factorial) design , one variable is altered between subjects and another is altered within subjects (e.g., pretest and posttest scores from participants who either did or didn’t do a meditation exercise).
Example: Experimental research design To test whether a 5-minute meditation exercise improves math test scores, you design a within-subjects experiment. First, you’ll take baseline test scores from participants. Then, your participants will undergo a 5-minute meditation exercise. Finally, you’ll record participants’ scores from a second math test.

In this experiment, the independent variable is the 5-minute meditation exercise, and the dependent variable is the math test score from before and after the intervention.

Example: Correlational research design In a correlational study, you test whether there is a relationship between parental income and GPA in graduating college students. To collect your data, you will ask participants to fill in a survey and self-report their parents’ incomes and their own GPA.

Measuring variables

When planning a research design, you should operationalize your variables and decide exactly how you will measure them.

For statistical analysis, it’s important to consider the level of measurement of your variables, which tells you what kind of data they contain:

  • Categorical data represents groupings. These may be nominal (e.g., gender) or ordinal (e.g. level of language ability).
  • Quantitative data represents amounts. These may be on an interval scale (e.g. test score) or a ratio scale (e.g. age).

Many variables can be measured at different levels of precision. For example, age data can be quantitative (8 years old) or categorical (young). If a variable is coded numerically (e.g., level of agreement from 1–5), it doesn’t automatically mean that it’s quantitative instead of categorical.

Identifying the measurement level is important for choosing appropriate statistics and hypothesis tests. For example, you can calculate a mean score with quantitative data, but not with categorical data.

In a research study, along with measures of your variables of interest, you’ll often collect data on relevant participant characteristics.


Step 2: Collect data from a sample

Population vs sample

In most cases, it’s too difficult or expensive to collect data from every member of the population you’re interested in studying. Instead, you’ll collect data from a sample.

Statistical analysis allows you to apply your findings beyond your own sample as long as you use appropriate sampling procedures . You should aim for a sample that is representative of the population.

Sampling for statistical analysis

There are two main approaches to selecting a sample.

  • Probability sampling: every member of the population has a chance of being selected for the study through random selection.
  • Non-probability sampling: some members of the population are more likely than others to be selected for the study because of criteria such as convenience or voluntary self-selection.

In theory, for highly generalizable findings, you should use a probability sampling method. Random selection reduces several types of research bias , like sampling bias , and ensures that data from your sample is actually typical of the population. Parametric tests can be used to make strong statistical inferences when data are collected using probability sampling.
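As a minimal illustration of probability sampling, here is a sketch in Python that draws a simple random sample from a hypothetical sampling frame of participant IDs; the frame size and sample size are invented for the example.

```python
import random

random.seed(42)  # seeded only so the illustration is reproducible

# hypothetical sampling frame: 10,000 participant IDs
population = list(range(1, 10001))

# simple random sample of 100 IDs, drawn without replacement:
# every member of the frame has an equal chance of selection
sample = random.sample(population, k=100)
print(sample[:10])
```

In practice, the frame would be a real list of population members, and stratified or cluster designs would modify the selection step.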

But in practice, it’s rarely possible to gather the ideal sample. While non-probability samples are more at risk for biases like self-selection bias , they are much easier to recruit and collect data from. Non-parametric tests are more appropriate for non-probability samples, but they result in weaker inferences about the population.

If you want to use parametric tests for non-probability samples, you have to make the case that:

  • your sample is representative of the population you’re generalizing your findings to.
  • your sample lacks systematic bias.

Keep in mind that external validity means that you can only generalize your conclusions to others who share the characteristics of your sample. For instance, results from Western, Educated, Industrialized, Rich and Democratic samples (e.g., college students in the US) aren’t automatically applicable to all non-WEIRD populations.

If you apply parametric tests to data from non-probability samples, be sure to elaborate on the limitations of how far your results can be generalized in your discussion section .

Create an appropriate sampling procedure

Based on the resources available for your research, decide on how you’ll recruit participants.

  • Will you have resources to advertise your study widely, including outside of your university setting?
  • Will you have the means to recruit a diverse sample that represents a broad population?
  • Do you have time to contact and follow up with members of hard-to-reach groups?

Example: Sampling (experimental study) Your participants are self-selected by their schools. Although you’re using a non-probability sample, you aim for a diverse and representative sample.

Example: Sampling (correlational study) Your main population of interest is male college students in the US. Using social media advertising, you recruit senior-year male college students from a smaller subpopulation: seven universities in the Boston area.

Calculate sufficient sample size

Before recruiting participants, decide on your sample size either by looking at other studies in your field or using statistics. A sample that’s too small may be unrepresentative of the population, while a sample that’s too large will be more costly than necessary.

There are many sample size calculators online. Different formulas are used depending on whether you have subgroups or how rigorous your study should be (e.g., in clinical research). As a rule of thumb, a minimum of 30 units per subgroup is usually recommended.

To use these calculators, you have to understand and input these key components (the code sketch after this list shows them in use):

  • Significance level (alpha): the risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%.
  • Statistical power : the probability of your study detecting an effect of a certain size if there is one, usually 80% or higher.
  • Expected effect size : a standardized indication of how large the expected result of your study will be, usually based on other similar studies.
  • Population standard deviation: an estimate of the population parameter based on a previous study or a pilot study of your own.
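For instance, if you plan a two-group comparison, the same inputs can drive a power analysis in Python with statsmodels; the effect size, alpha, and power below are illustrative values, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

# solve for the per-group sample size of an independent-samples t test,
# given an expected medium effect (Cohen's d = 0.5), alpha = .05, power = .80
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(round(n_per_group))  # roughly 64 participants per group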

Step 3: Summarize your data with descriptive statistics

Once you’ve collected all of your data, you can inspect them and calculate descriptive statistics that summarize them.

Inspect your data

There are various ways to inspect your data, including the following:

  • Organizing data from each variable in frequency distribution tables .
  • Displaying data from a key variable in a bar chart to view the distribution of responses.
  • Visualizing the relationship between two variables using a scatter plot .

By visualizing your data in tables and graphs, you can assess whether your data follow a skewed or normal distribution and whether there are any outliers or missing data.
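As a small sketch of this inspection step, assuming your data are already in a pandas Series (the scores below are invented):

```python
import pandas as pd

# hypothetical pretest math scores
scores = pd.Series([61, 70, 72, 75, 75, 78, 80, 81, 83, 95])

print(scores.value_counts().sort_index())  # frequency distribution table
print(scores.describe())                   # count, mean, std, min/max, quartiles
print(scores.skew())                       # near 0 suggests a symmetric distribution
```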

A normal distribution means that your data are symmetrically distributed around a center where most values lie, with the values tapering off at the tail ends.

Mean, median, mode, and standard deviation in a normal distribution

In contrast, a skewed distribution is asymmetric and has more values on one end than the other. The shape of the distribution is important to keep in mind because only some descriptive statistics should be used with skewed distributions.

Extreme outliers can also produce misleading statistics, so you may need a systematic approach to dealing with these values.

Calculate measures of central tendency

Measures of central tendency describe where most of the values in a data set lie. Three main measures of central tendency are often reported:

  • Mode : the most popular response or value in the data set.
  • Median : the value in the exact middle of the data set when ordered from low to high.
  • Mean : the sum of all values divided by the number of values.

However, depending on the shape of the distribution and level of measurement, only one or two of these measures may be appropriate. For example, many demographic characteristics can only be described using the mode or proportions, while a variable like reaction time may not have a mode at all.
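A quick sketch of these three measures using Python's built-in statistics module, with invented reaction-time data; note that multimode returns every value when nothing repeats, echoing the point that some variables have no meaningful mode.

```python
import statistics

reaction_times = [320, 340, 355, 360, 362, 370, 410]  # hypothetical values in ms

print(statistics.mean(reaction_times))       # sum of all values / number of values
print(statistics.median(reaction_times))     # middle value when ordered low to high
print(statistics.multimode(reaction_times))  # most frequent value(s); Python 3.8+
```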

Calculate measures of variability

Measures of variability tell you how spread out the values in a data set are. Four main measures of variability are often reported:

  • Range : the highest value minus the lowest value of the data set.
  • Interquartile range : the range of the middle half of the data set.
  • Standard deviation : the average distance between each value in your data set and the mean.
  • Variance : the square of the standard deviation.

Once again, the shape of the distribution and level of measurement should guide your choice of variability statistics. The interquartile range is the best measure for skewed distributions, while standard deviation and variance provide the best information for normal distributions.
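A small invented data set can illustrate all four variability measures in Python with NumPy; ddof=1 yields the sample (rather than population) standard deviation and variance.

```python
import numpy as np

data = np.array([2, 4, 4, 4, 5, 5, 7, 9])  # hypothetical data set

print(data.max() - data.min())         # range
q1, q3 = np.percentile(data, [25, 75])
print(q3 - q1)                         # interquartile range
print(data.std(ddof=1))                # sample standard deviation
print(data.var(ddof=1))                # sample variance
```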

Using a table of descriptive statistics for your groups, check whether the units of the descriptive statistics are comparable for pretest and posttest scores. For example, are the variance levels similar across the groups? Are there any extreme values? If there are, you may need to identify and remove extreme outliers in your data set or transform your data before performing a statistical test.

Example: Descriptive statistics (experimental study) From a table of pretest and posttest statistics, we can see that the mean score increased after the meditation exercise, and the variances of the two scores are comparable. Next, we can perform a statistical test to find out if this improvement in test scores is statistically significant in the population.

Example: Descriptive statistics (correlational study) After collecting data from 653 students, you tabulate descriptive statistics for annual parental income and GPA.

It’s important to check whether you have a broad range of data points. If you don’t, your data may be skewed towards some groups more than others (e.g., high academic achievers), and only limited inferences can be made about a relationship.

Step 4: Test hypotheses or make estimates with inferential statistics

A number that describes a sample is called a statistic, while a number describing a population is called a parameter. Using inferential statistics, you can make conclusions about population parameters based on sample statistics.

Researchers often use two main methods (simultaneously) to make inferences in statistics.

  • Estimation: calculating population parameters based on sample statistics.
  • Hypothesis testing: a formal process for testing research predictions about the population using samples.

You can make two types of estimates of population parameters from sample statistics:

  • A point estimate : a value that represents your best guess of the exact parameter.
  • An interval estimate : a range of values that represent your best guess of where the parameter lies.

If your aim is to infer and report population characteristics from sample data, it’s best to use both point and interval estimates in your paper.

You can consider a sample statistic a point estimate for the population parameter when you have a representative sample (e.g., in a wide public opinion poll, the proportion of a sample that supports the current government is taken as the population proportion of government supporters).

There’s always error involved in estimation, so you should also provide a confidence interval as an interval estimate to show the variability around a point estimate.

A confidence interval uses the standard error and the z score from the standard normal distribution to convey where you’d generally expect to find the population parameter most of the time.
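Here is a minimal sketch of that calculation, assuming you already have summary statistics from a large sample (the numbers are made up):

```python
import math

mean, sd, n = 75.0, 10.0, 100   # hypothetical sample mean, SD, and size
se = sd / math.sqrt(n)          # standard error of the mean
z = 1.96                        # z score for a 95% confidence level
lower, upper = mean - z * se, mean + z * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # 95% CI: (73.04, 76.96)
```

For small samples, a t score from the t distribution would replace the z score.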

Hypothesis testing

Using data from a sample, you can test hypotheses about relationships between variables in the population. Hypothesis testing starts with the assumption that the null hypothesis is true in the population, and you use statistical tests to assess whether the null hypothesis can be rejected or not.

Statistical tests determine where your sample data would lie on an expected distribution of sample data if the null hypothesis were true. These tests give two main outputs:

  • A test statistic tells you how much your data differs from the null hypothesis of the test.
  • A p value tells you the likelihood of obtaining your results if the null hypothesis is actually true in the population.

Statistical tests come in three main varieties:

  • Comparison tests assess group differences in outcomes.
  • Regression tests assess cause-and-effect relationships between variables.
  • Correlation tests assess relationships between variables without assuming causation.

Your choice of statistical test depends on your research questions, research design, sampling method, and data characteristics.

Parametric tests

Parametric tests make powerful inferences about the population based on sample data. But to use them, some assumptions must be met, and only some types of variables can be used. If your data violate these assumptions, you can perform appropriate data transformations or use alternative non-parametric tests instead.

A regression models the extent to which changes in a predictor variable result in changes in outcome variable(s); a brief code sketch follows the list below.

  • A simple linear regression includes one predictor variable and one outcome variable.
  • A multiple linear regression includes two or more predictor variables and one outcome variable.
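A sketch of both variants with statsmodels, using simulated data; the predictors, coefficients, and noise are all invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)                        # hypothetical predictor 1
x2 = rng.normal(size=100)                        # hypothetical predictor 2
y = 2.0 * x1 + 0.5 * x2 + rng.normal(size=100)   # simulated outcome

# simple linear regression: one predictor plus an intercept term
simple = sm.OLS(y, sm.add_constant(x1)).fit()

# multiple linear regression: two predictors plus an intercept term
X = sm.add_constant(np.column_stack([x1, x2]))
multiple = sm.OLS(y, X).fit()
print(multiple.summary())  # coefficients, p values, R-squared
```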

Comparison tests usually compare the means of groups. These may be the means of different groups within a sample (e.g., a treatment and control group), the means of one sample group taken at different times (e.g., pretest and posttest scores), or a sample mean and a population mean.

  • A t test is for exactly 1 or 2 groups when the sample is small (30 or fewer).
  • A z test is for exactly 1 or 2 groups when the sample is large.
  • An ANOVA is for 3 or more groups.

The z and t tests have subtypes based on the number and types of samples and the hypotheses:

  • If you have only one sample that you want to compare to a population mean, use a one-sample test .
  • If you have paired measurements (within-subjects design), use a dependent (paired) samples test .
  • If you have completely separate measurements from two unmatched groups (between-subjects design), use an independent (unpaired) samples test .
  • If you expect a difference between groups in a specific direction, use a one-tailed test .
  • If you don’t have any expectations for the direction of a difference between groups, use a two-tailed test .

The only parametric correlation test is Pearson’s r . The correlation coefficient ( r ) tells you the strength of a linear relationship between two quantitative variables.

However, to test whether the correlation in the sample is strong enough to be important in the population, you also need to perform a significance test of the correlation coefficient, usually a t test, to obtain a p value. This test uses your sample size to calculate how much the correlation coefficient differs from zero in the population.

Example: Hypothesis testing (experimental study) You use a dependent-samples, one-tailed t test to assess whether the meditation exercise significantly improved math test scores. The test gives you:

  • a t value (test statistic) of 3.00
  • a p value of 0.0028
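A dependent-samples, one-tailed t test of this kind might look as follows in Python with SciPy (the alternative argument requires SciPy 1.6+); the score pairs are invented, so the resulting t and p values will not match the figures above.

```python
from scipy import stats

# hypothetical pretest/posttest scores for the same eight participants
pretest = [68, 72, 75, 70, 80, 66, 74, 71]
posttest = [74, 75, 80, 72, 85, 70, 79, 76]

# paired (dependent-samples) test, one-tailed: posttest expected to exceed pretest
t_stat, p_value = stats.ttest_rel(posttest, pretest, alternative='greater')
print(t_stat, p_value)
```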

Although Pearson’s r is a test statistic, it doesn’t tell you anything about how significant the correlation is in the population. You also need to test whether this sample correlation coefficient is large enough to demonstrate a correlation in the population.

A t test can also determine how significantly a correlation coefficient differs from zero based on sample size.

Example: Hypothesis testing (correlational study) Since you expect a positive correlation between parental income and GPA, you use a one-sample, one-tailed t test. The t test gives you:

  • a t value of 3.08
  • a p value of 0.001
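In Python, Pearson's r and its significance test can be sketched with SciPy; the income and GPA pairs are invented, and the two-sided p value is halved for the one-tailed (positive) alternative.

```python
from scipy import stats

income = [40, 55, 60, 72, 85, 90, 110, 120]    # hypothetical, in $1,000s
gpa = [2.8, 3.0, 3.1, 3.2, 3.4, 3.3, 3.6, 3.7]

r, p_two_tailed = stats.pearsonr(income, gpa)
p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2
print(r, p_one_tailed)
```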


Step 5: Interpret your results

The final step of statistical analysis is interpreting your results.

Statistical significance

In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.

Statistically significant results are considered unlikely to have arisen solely due to chance. There is only a very low chance of such a result occurring if the null hypothesis is true in the population.

Example: Interpret your results (experimental study) With a p value of 0.0028, below the 0.05 threshold, you can reject the null hypothesis. This means that you believe the meditation intervention, rather than random factors, directly caused the increase in test scores.

Example: Interpret your results (correlational study) You compare your p value of 0.001 to your significance threshold of 0.05. With a p value under this threshold, you can reject the null hypothesis. This indicates a statistically significant correlation between parental income and GPA in male college students.

Note that correlation doesn’t always mean causation, because there are often many underlying factors contributing to a complex variable like GPA. Even if one variable is related to another, this may be because of a third variable influencing both of them, or indirect links between the two variables.

Effect size

A statistically significant result doesn’t necessarily mean that there are important real life applications or clinical outcomes for a finding.

In contrast, the effect size indicates the practical significance of your results. It’s important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you’re writing an APA style paper .

Example: Effect size (experimental study) With a Cohen’s d of 0.72, there’s medium to high practical significance to your finding that the meditation exercise improved test scores.

Example: Effect size (correlational study) To determine the effect size of the correlation coefficient, you compare your Pearson’s r value to Cohen’s effect size criteria.
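Cohen's d for two sets of scores can be computed by hand; this sketch uses the pooled standard deviation for two independent groups (paired designs often use a variant based on difference scores instead), and the data are invented.

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d for two groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

treatment = [74, 75, 80, 72, 85, 70, 79, 76]  # hypothetical treatment-group scores
control = [68, 72, 75, 70, 80, 66, 74, 71]    # hypothetical comparison scores
print(round(cohens_d(treatment, control), 2))
```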

Decision errors

Type I and Type II errors are mistakes made in research conclusions. A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.

You can aim to minimize the risk of these errors by selecting an optimal significance level and ensuring high power . However, there’s a trade-off between the two errors, so a fine balance is necessary.

Frequentist versus Bayesian statistics

Traditionally, frequentist statistics emphasizes null hypothesis significance testing and always starts with the assumption of a true null hypothesis.

However, Bayesian statistics has grown in popularity as an alternative approach in the last few decades. In this approach, you use previous research to continually update your hypotheses based on your expectations and observations.

The Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis, rather than concluding whether or not to reject the null hypothesis.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval

Methodology

  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hostile attribution bias
  • Affect heuristic



The Oxford Handbook of Qualitative Research


28 Coding and Analysis Strategies

Johnny Saldaña, School of Theatre and Film, Arizona State University

  • Published: 04 August 2014

This chapter provides an overview of selected qualitative data analytic strategies with a particular focus on codes and coding. Preparatory strategies for a qualitative research study and data management are first outlined. Six coding methods are then profiled using comparable interview data: process coding, in vivo coding, descriptive coding, values coding, dramaturgical coding, and versus coding. Strategies for constructing themes and assertions from the data follow. Analytic memo writing is woven throughout the preceding as a method for generating additional analytic insight. Next, display and arts-based strategies are provided, followed by recommended qualitative data analytic software programs and a discussion on verifying the researcher’s analytic findings.

Coding and Analysis Strategies

Anthropologist Clifford Geertz (1983) charmingly mused, “Life is just a bowl of strategies” (p. 25). Strategy , as I use it here, refers to a carefully considered plan or method to achieve a particular goal. The goal in this case is to develop a write-up of your analytic work with the qualitative data you have been given and collected as part of a study. The plans and methods you might employ to achieve that goal are what this article profiles.

Some may perceive strategy as an inappropriate if not colonizing word, suggesting formulaic or regimented approaches to inquiry. I assure you that that is not my intent. My use of strategy is actually dramaturgical in nature: strategies are actions that characters in plays take to overcome obstacles to achieve their objectives. Actors portraying these characters rely on action verbs to generate belief within themselves and to motivate them as they interpret the lines and move appropriately on stage. So what I offer is a qualitative researcher’s array of actions from which to draw to overcome the obstacles to thinking to achieve an analysis of your data. But unlike the pre-scripted text of a play in which the obstacles, strategies, and outcomes have been predetermined by the playwright, your work must be improvisational—acting, reacting, and interacting with data on a moment-by-moment basis to determine what obstacles stand in your way, and thus what strategies you should take to reach your goals.

Another intriguing quote to keep in mind comes from research methodologist Robert E. Stake (1995) who posits, “Good research is not about good methods as much as it is about good thinking” (p. 19). In other words, strategies can take you only so far. You can have a box full of tools, but if you do not know how to use them well or use them creatively, the collection seems rather purposeless. One of the best ways we learn is by doing . So pick up one or more of these strategies (in the form of verbs) and take analytic action with your data. Also keep in mind that these are discussed in the order in which they may typically occur, although humans think cyclically, iteratively, and reverberatively, and each particular research project has its own unique contexts and needs. So be prepared for your mind to jump purposefully and/or idiosyncratically from one strategy to another throughout the study.

QDA (Qualitative Data Analysis) Strategy: To Foresee

To foresee in QDA is to reflect beforehand on what forms of data you will most likely need and collect, which thus informs what types of data analytic strategies you anticipate using.

Analysis, in a way, begins even before you collect data. As you design your research study in your mind and on a word processor page, one strategy is to consider what types of data you may need to help inform and answer your central and related research questions. Interview transcripts, participant observation field notes, documents, artifacts, photographs, video recordings, and so on are not only forms of data but foundations for how you may plan to analyze them. A participant interview, for example, suggests that you will transcribe all or relevant portions of the recording, and use both the transcription and the recording itself as sources for data analysis. Any analytic memos (discussed later) or journal entries you make about your impressions of the interview also become data to analyze. Even the computing software you plan to employ will be relevant to data analysis as it may help or hinder your efforts.

As your research design formulates, compose one to two paragraphs that outline how your QDA may proceed. This will necessitate that you have some background knowledge of the vast array of methods available to you. Thus surveying the literature is vital preparatory work.

QDA Strategy: To Survey

To survey in QDA is to look for and consider the applicability of the QDA literature in your field that may provide useful guidance for your forthcoming data analytic work.

General sources in QDA will provide a good starting point for acquainting you with the data analytic strategies available for the variety of genres in qualitative inquiry (e.g., ethnography, phenomenology, case study, arts-based research, mixed methods). One of the most accessible is Graham R. Gibbs’ (2007)   Analysing Qualitative Data , and one of the most richly detailed is Frederick J. Wertz et al.'s (2011)   Five Ways of Doing Qualitative Analysis . The author’s core texts for this article came from The Coding Manual for Qualitative Researchers ( Saldaña, 2009 , 2013 ) and Fundamentals of Qualitative Research ( Saldaña, 2011 ).

If your study’s methodology or approach is grounded theory, for example, then a survey of methods works by such authors as Barney G. Glaser, Anselm L. Strauss, Juliet Corbin and, in particular, the prolific Kathy Charmaz (2006) may be expected. But there has been a recent outpouring of additional book publications in grounded theory by Birks & Mills (2011) , Bryant & Charmaz (2007) , Stern & Porr (2011) , plus the legacy of thousands of articles and chapters across many disciplines that have addressed grounded theory in their studies.

Particular fields such as education, psychology, social work, health care, and others also have their own QDA methods literature in the form of texts and journals, plus international conferences and workshops for members of the profession. Most important is to have had some university coursework and/or mentorship in qualitative research to suitably prepare you for the intricacies of QDA. Also acknowledge that the emergent nature of qualitative inquiry may require you to adopt different analytic strategies from what you originally planned.

QDA Strategy: To Collect

To collect in QDA is to receive the data given to you by participants and those data you actively gather to inform your study.

QDA is concurrent with data collection and management. As interviews are transcribed, field notes are fleshed out, and documents are filed, the researcher uses the opportunity to carefully read the corpus and make preliminary notations directly on the data documents by highlighting, bolding, italicizing, or noting in some way any particularly interesting or salient portions. As these data are initially reviewed, the researcher also composes supplemental analytic memos that include first impressions, reminders for follow-up, preliminary connections, and other thinking matters about the phenomena at work.

Some of the most common fieldwork tools you might use to collect data are notepads, pens and pencils, file folders for documents, a laptop or desktop with word processing software (Microsoft Word and Excel are most useful) and internet access, a digital camera, and a voice recorder. Some fieldworkers may even employ a digital video camera to record social action, as long as participant permissions have been secured. But everything originates from the researcher himself or herself. Your senses are immersed in the cultural milieu you study, taking in and holding on to relevant details or “significant trivia,” as I call them. You become a human camera, zooming out to capture the broad landscape of your field site one day, then zooming in on a particularly interesting individual or phenomenon the next. Your analysis is only as good as the data you collect.

Fieldwork can be an overwhelming experience because so many details of social life are happening in front of you. Take a holistic approach to your entree, but as you become more familiar with the setting and participants, actively focus on things that relate to your research topic and questions. Of course, keep yourself open to the intriguing, surprising, and disturbing ( Sunstein & Chiseri-Strater, 2012 , p. 115), for these facets enrich your study by making you aware of the unexpected.

QDA Strategy: To Feel

To feel in QDA is to gain deep emotional insight into the social worlds you study and what it means to be human.

Virtually everything we do has an accompanying emotion(s), and feelings are both reactions and stimuli for action. Others’ emotions clue you to their motives, attitudes, values, beliefs, worldviews, identities, and other subjective perceptions and interpretations. Acknowledge that emotional detachment is not possible in field research. Attunement to the emotional experiences of your participants plus sympathetic and empathetic responses to the actions around you are necessary in qualitative endeavors. Your own emotional responses during fieldwork are also data because they document the tacit and visceral. It is important during such analytic reflection to assess why your emotional reactions were as they were. But it is equally important not to let emotions alone steer the course of your study. A proper balance must be found between feelings and facts.

QDA Strategy: To Organize

To organize in QDA is to maintain an orderly repository of data for easy access and analysis.

Even in the smallest of qualitative studies, a large amount of data will be collected across time. Prepare both a hard drive and hard copy folders for digital data and paperwork, and back up all materials for security from loss. I recommend that each data “chunk” (e.g., one interview transcript, one document, one day’s worth of field notes) get its own file, with subfolders specifying the data forms and research study logistics (e.g., interviews, field notes, documents, Institutional Review Board correspondence, calendar).

For small-scale qualitative studies, I have found it quite useful to maintain one large master file with all participant and field site data copied and combined with the literature review and accompanying researcher analytic memos. This master file is used to cut and paste related passages together, deleting what seems unnecessary as the study proceeds, and eventually transforming the document into the final report itself. Cosmetic devices such as font style, font size, rich text (italicizing, bolding, underlining, etc.), and color can help you distinguish between different data forms and highlight significant passages. For example, descriptive, narrative passages of field notes are logged in regular font. “Quotations, things spoken by participants, are logged in bold font.”   Observer’s comments, such as the researcher’s subjective impressions or analytic jottings, are set in italics.

QDA Strategy: To Jot

To jot in QDA is to write occasional, brief notes about your thinking or reminders for follow up.

A jot is a phrase or brief sentence that will literally fit on a standard-size “sticky note.” As data are collected and documented together, take some initial time to review their contents and to jot some notes about preliminary patterns, participant quotes that seem especially vivid, anomalies in the data, and so forth.

As you work on a project, keep something to write with, or a voice recorder, with you at all times to capture your fleeting thoughts. You will most likely find yourself thinking about your research when you're not working exclusively on the project, and a “mental jot” may occur to you as you ruminate on logistical or analytic matters. Get the thought documented in some way for later retrieval and elaboration as an analytic memo.

QDA Strategy: To Prioritize

To prioritize in QDA is to determine which data are most significant in your corpus and which tasks are most necessary.

During fieldwork, massive amounts of data in various forms may be collected, and your mind can easily become overwhelmed by the sheer quantity of material, its richness, and its management. Decisions will need to be made about which data are most pertinent: those that help answer your research questions or emerge as salient pieces of evidence. As a sweeping generalization, approximately one half to two thirds of what you collect may become unnecessary as you proceed toward the more formal stages of QDA.

To prioritize in QDA is also to determine what matters most in your assembly of codes, categories, themes, assertions, and concepts. Return to your research purpose and questions to keep yourself framed on what the focus should be.

QDA Strategy: To Analyze

To analyze in QDA is to observe and discern patterns within data and to construct meanings that seem to capture their essences and essentials.

Just as there are a variety of genres, elements, and styles of qualitative research, so too are there a variety of methods available for QDA. Analytic choices are most often based on what methods will harmonize with your genre selection and conceptual framework, what will generate the most sufficient answers to your research questions, and what will best represent and present the project’s findings.

Analysis can range from the factual to the conceptual to the interpretive. Analysis can also range from a straightforward descriptive account to an emergently constructed grounded theory to an evocatively composed short story. A qualitative research project’s outcomes may range from rigorously achieved, insightful answers to open-ended, evocative questions; from rich descriptive detail to a bullet-pointed list of themes; and from third-person, objective reportage to first-person, emotion-laden poetry. Just as there are multiple destinations in qualitative research, there are multiple pathways and journeys along the way.

Analysis is accelerated as you take cognitive ownership of your data. By reading and rereading the corpus, you gain intimate familiarity with its contents and begin to notice significant details as well as make new insights about their meanings. Patterns, categories, and their interrelationships become more evident the more you know the subtleties of the database.

Since qualitative research’s design, fieldwork, and data collection are most often provisional, emergent, and evolutionary processes, you reflect on and analyze the data as you gather them and proceed through the project. If preplanned methods are not working, you change them to secure the data you need. There is generally a post-fieldwork period when continued reflection and more systematic data analysis occur, concurrent with or followed by additional data collection, if needed, and the more formal write-up of the study, which is in itself an analytic act. Through field note writing, interview transcribing, analytic memo writing, and other documentation processes, you gain cognitive ownership of your data; and the intuitive, tacit, synthesizing capabilities of your brain begin sensing patterns, making connections, and seeing the bigger picture. The purpose and outcome of data analysis is to reveal to others through fresh insights what we have observed and discovered about the human condition. And fortunately, there are heuristics for reorganizing and reflecting on your qualitative data to help you achieve that goal.

QDA Strategy: To Pattern

To pattern in QDA is to detect similarities within and regularities among the data you have collected.

The natural world is filled with patterns because we, as humans, have constructed them as such. Stars in the night sky are not just a random assembly; our ancestors pieced them together to form constellations like the Big Dipper. A collection of flowers growing wild in a field has a pattern, as does an individual flower’s patterns of leaves and petals. Look at the physical objects humans have created and notice how pattern oriented we are in our construction, organization, and decoration. Look around you in your environment and notice how many patterns are evident on your clothing, in a room, and on most objects themselves. Even our sometimes mundane daily and long-term human actions are reproduced patterns in the form of roles, relationships, rules, routines, and rituals.

This human propensity for pattern making follows us into QDA. From the vast array of interview transcripts, field notes, documents, and other forms of data, there is this instinctive, hardwired need to bring order to the collection—not just to reorganize it but to look for and construct patterns out of it. The discernment of patterns is one of the first steps in the data analytic process, and the methods described next are recommended ways to construct them.

QDA Strategy: To Code

To code in QDA is to assign a truncated, symbolic meaning to each datum for purposes of qualitative analysis.

Coding is a heuristic—a method of discovery—to the meanings of individual sections of data. These codes function as a way of patterning, classifying, and later reorganizing them into emergent categories for further analysis. Different types of codes exist for different types of research genres and qualitative data analytic approaches, but this article will focus on only a few selected methods. First, a definition of a code:

A code in qualitative data analysis is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data. The data can consist of interview transcripts, participant observation fieldnotes, journals, documents, literature, artifacts, photographs, video, websites, e-mail correspondence, and so on. The portion of data to be coded can... range in magnitude from a single word to a full sentence to an entire page of text to a stream of moving images.... Just as a title represents and captures a book or film or poem’s primary content and essence, so does a code represent and capture a datum’s primary content and essence. [Saldaña, 2009, p. 3]

One helpful pre-coding task is to divide long selections of field note or interview transcript data into shorter stanzas. Stanza division “chunks” the corpus into more manageable paragraph-like units for coding assignments and analysis. The transcript sample that follows illustrates one possible way of inserting line breaks between self-standing passages of interview text for easier readability.

Process Coding

As a first coding example, the following interview excerpt about an employed, single, lower-middle-class adult male’s spending habits during the difficult economic times in the U.S. during 2008–2012 is coded in the right-hand margin in capital letters. The superscript numbers match the datum unit with its corresponding code. This particular method is called process coding, which uses gerunds (“-ing” words) exclusively to represent action suggested by the data. Processes can consist of observable human actions (e.g., BUYING BARGAINS), mental processes (e.g., THINKING TWICE), and more conceptual ideas (e.g., APPRECIATING WHAT YOU’VE GOT). Notice that the interviewer’s (I) portions are not coded, just the participant’s (P). A code is applied each time the subtopic of the interview shifts—even within a stanza—and the same codes can (and should) be used more than once if the subtopics are similar. The central research question driving this qualitative study is, “In what ways are middle-class Americans influenced and affected by the current [2008–2012] economic recession?”

Different researchers analyzing this same piece of data may develop completely different codes, depending on their lenses and filters. The previous codes are only one person’s interpretation of what is happening in the data, not the definitive list. The process codes have transformed the raw data units into new representations for analysis. A listing of them applied to this interview transcript, in the order they appear, reads:

BUYING BARGAINS

QUESTIONING A PURCHASE

THINKING TWICE

STOCKING UP

REFUSING SACRIFICE

PRIORITIZING

FINDING ALTERNATIVES

LIVING CHEAPLY

NOTICING CHANGES

STAYING INFORMED

MAINTAINING HEALTH

PICKING UP THE TAB

APPRECIATING WHAT YOU’VE GOT

Coding the data is the first step in this particular approach to QDA, and categorization is just one of the next possible steps.
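For researchers who keep their codes in a digital file, here is a minimal Python sketch of one way coded data might be represented and tallied; the datum excerpts paired with each code are hypothetical paraphrases, not the actual transcript.

```python
# Minimal sketch: represent process-coded data as (datum, code) pairs
# and tally how often each code was applied. Excerpts are hypothetical.
from collections import Counter

coded_data = [
    ("Buy one package of chicken, get the second one free.", "BUYING BARGAINS"),
    ("I get those coupons for a few bucks off for lunch.", "BUYING BARGAINS"),
    ("Do I really need this right now?", "QUESTIONING A PURCHASE"),
    ("I think twice before paying full price.", "THINKING TWICE"),
    ("I load up at the all-you-can-eat buffet.", "STOCKING UP"),
]

# Code frequencies hint at preliminary patterns worth memoing about.
for code, count in Counter(code for _, code in coded_data).most_common():
    print(f"{code}: {count}")
```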

QDA Strategy: To Categorize

To categorize in QDA is to cluster similar or comparable codes into groups for pattern construction and further analysis.

Humans categorize things in innumerable ways. Think of an average apartment or house’s layout. The rooms of a dwelling have been constructed or categorized by their builders and occupants according to function. A kitchen is designated as an area to store and prepare food and the cooking and dining materials such as pots, pans, and utensils. A bedroom is designated for sleeping, a closet for clothing storage, a bathroom for bodily functions and hygiene, and so on. Each room is like a category in which related and relevant patterns of human action occur. Of course, there are exceptions now and then, such as eating breakfast in bed rather than in a dining area or living in a small studio apartment in which most possessions are contained within one large room (but nonetheless are most often organized and clustered into subcategories according to function and optimal use of space).

The point here is that the patterns of social action we designate into particular categories during QDA are not perfectly bounded. Category construction is our best attempt to cluster the most seemingly alike things into the most seemingly appropriate groups. Categorizing is reorganizing and reordering the vast array of data from a study because it is from these smaller, meaning-rich units that we can better grasp the particular features of each one and the categories’ possible interrelationships with one another.

One analytic strategy with a list of codes is to classify them into similar clusters. Obviously, the same codes share the same category, but it is also possible that a single code can merit its own group if you feel it is unique enough. After the codes have been classified, a category label is applied to each grouping. Sometimes a code can also double as a category name if you feel it best summarizes the totality of the cluster. Like coding, categorizing is an interpretive act, for there can be different ways of separating and collecting codes that seem to belong together. The cut-and-paste functions of a word processor are most useful for exploring which codes share something in common.
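As a rough digital analogue to that cut-and-paste exploration, a simple dictionary can hold each candidate category and its clustered codes. This is a minimal sketch; the placements of codes under categories here are illustrative guesses, not the definitive scheme presented below.

```python
# Minimal sketch: cluster codes into candidate categories.
# The placements are illustrative, not the author's definitive scheme.
categories = {
    "Thinking Strategically": ["THINKING TWICE", "QUESTIONING A PURCHASE"],
    "Spending Strategically": ["BUYING BARGAINS", "STOCKING UP"],
    "Living Strategically": ["LIVING CHEAPLY", "FINDING ALTERNATIVES"],
}

# Invert the clusters so any code's category can be looked up directly.
code_to_category = {
    code: category for category, codes in categories.items() for code in codes
}

print(code_to_category["BUYING BARGAINS"])  # Spending Strategically
```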

Below is my categorization of the fifteen codes generated from the interview transcript presented earlier. Like the gerunds for process codes, the categories have also been labeled as “-ing” words to connote action. And there was no particular reason why fifteen codes resulted in three categories—there could have been fewer or even more, but this is how the array came together after my reflections on which codes seemed to belong together. The category labels are ways of answering “why” they belong together. For at-a-glance differentiation, I place codes in CAPITAL LETTERS and categories in upper- and lowercase Bold Font:

Category 1: Thinking Strategically

Category 2: Spending Strategically

Category 3: Living Strategically

APPRECIATING WHAT YOU'VE GOT

Notice that the three category labels share a common word: “strategically.” Where did this word come from? It came from analytic reflection on the original data, the codes, and the process of categorizing the codes and generating their category labels. It was the analyst’s choice based on the interpretation of what primary action was happening. Your categories generated from your coded data do not need to share a common word or phrase, but I find that this technique, when appropriate, helps build a sense of unity to the initial analytic scheme.

The three categories—Thinking Strategically, Spending Strategically, and Living Strategically—are then reflected upon for how they might interact and interplay. This is where the next major facet of data analysis, analytic memos, enters the scheme. But a necessary section on the basic principles of interrelationship and analytic reasoning must precede that discussion.

QDA Strategy: To Interrelate

To interrelate in QDA is to propose connections within, between, and among the constituent elements of analyzed data.

One task of QDA is to explore the ways our patterns and categories interact and interplay. I use these terms to suggest the qualitative equivalent of statistical correlation, but interaction and interplay are much more than a simple relationship. They imply interrelationship . Interaction refers to reverberative connections—for example, how one or more categories might influence and affect the others, how categories operate concurrently, or whether there is some kind of “domino” effect to them. Interplay refers to the structural and processual nature of categories—for example, whether some type of sequential order, hierarchy, or taxonomy exists; whether any overlaps occur; whether there is superordinate and subordinate arrangement; and what types of organizational frameworks or networks might exist among them. The positivist construct of “cause and effect” becomes influences and affects in QDA.

There can even be patterns of patterns and categories of categories if your mind thinks conceptually and abstractly enough. Our minds can intricately connect multiple phenomena but only if the data and their analyses support the constructions. We can speculate about interaction and interplay all we want, but it is only through a more systematic investigation of the data—in other words, good thinking—that we can plausibly establish any possible interrelationships.

QDA Strategy: To Reason

To reason in QDA is to think in ways that lead to causal probabilities, summative findings, and evaluative conclusions.

Unlike quantitative research, with its statistical formulas and established hypothesis-testing protocols, qualitative research has no standardized methods of data analysis. Rest assured, there are recommended guidelines from the field’s scholars and a legacy of analytic strategies from which to draw. But the primary heuristics (or methods of discovery) you apply during a study are deductive, inductive, abductive, and retroductive reasoning. Deduction is what we generally draw and conclude from established facts and evidence. Induction is what we experientially explore and infer to be transferable from the particular to the general, based on an examination of the evidence and an accumulation of knowledge. Abduction is surmising from the evidence that which is most likely, those explanatory hunches based on clues. “Whereas deductive inferences are certain (so long as their premises are true) and inductive inferences are probable, abductive inferences are merely plausible” (Shank, 2008, p. 1). Retroduction is historic reconstruction, working backwards to figure out how the current conditions came to exist.

It is not always necessary to know the names of these four ways of reasoning as you proceed through analysis. In fact, you will more than likely reverberate quickly from one to another depending on the task at hand. But what is important to remember about reasoning is:

to base your conclusions primarily on the participants’ experiences, not just your own

not to take the obvious for granted, as the expected will not always happen; your hunches can be quite right and, at other times, quite wrong

to examine the evidence carefully and make reasonable inferences

to logically yet imaginatively think about what is going on and how it all comes together.

Futurists and inventors propose three questions when they think about creating new visions for the world: What is possible (induction)? What is plausible (abduction)? What is preferable (deduction)? These same three questions might be posed as you proceed through QDA and particularly through analytic memo writing, which is retroductive reflection on your analytic work thus far.

QDA Strategy: To Memo

To memo in QDA is to reflect in writing on the nuances, inferences, meanings, and transfer of coded and categorized data plus your analytic processes.

As with field note writing, perspectives vary among practitioners as to the methods for documenting the researcher’s analytic insights and subjective experiences. Some advise that such reflections should be included in field notes as relevant to the data. Others advise that a separate researcher’s journal should be maintained for recording these impressions. And still others advise that these thoughts be documented as separate analytic memos. I prescribe the latter as a method because it is generated by and directly connected to the data themselves.

An analytic memo is a “think piece” of reflexive free writing, a narrative that sets in words your interpretations of the data. Coding and categorizing are heuristics to detect some of the possible patterns and interrelationships at work within the corpus, and an analytic memo further articulates your deductive, inductive, abductive, and retroductive thinking processes on what things may mean. Though the metaphor is a bit flawed and limiting, think of codes and their consequent categories as separate jigsaw puzzle pieces, and their integration into an analytic memo as the trial assembly of the complete picture.

What follows is an example of an analytic memo based on the earlier process coded and categorized interview transcript. It is not intended as the final write-up for a publication but as an open-ended reflection on the phenomena and processes suggested by the data and their analysis thus far. As the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the final report. Note how the memo is dated and given a title for future and further categorization, how participant quotes are occasionally included for evidentiary support, and how the category names are bolded and the codes kept in capital letters to show how they integrate or weave into the thinking:

March 18, 2012
EMERGENT CATEGORIES: A STRATEGIC AMALGAM

There’s a popular saying now: “Smart is the new rich.” This participant is Thinking Strategically about his spending through such tactics as THINKING TWICE and QUESTIONING A PURCHASE before he decides to invest in a product. There’s a heightened awareness of both immediate trends and forthcoming economic bad news that positively affects his Spending Strategically. However, he seems unaware that there are even more ways of LIVING CHEAPLY by FINDING ALTERNATIVES. He dines at all-you-can-eat restaurants as a way of STOCKING UP on meals, but doesn’t state that he could bring lunch from home to work, possibly saving even more money. One of his “bad habits” is cigarettes, which he refuses to give up; but he doesn’t seem to realize that by quitting smoking he could save even more money, not to mention possible health care costs. He balks at the idea of paying $1.50 for a soft drink, but doesn’t mind paying $6.00–$7.00 for a pack of cigarettes. Penny-wise and pound-foolish. Addictions skew priorities. Living Strategically, for this participant during “scary times,” appears to be a combination of PRIORITIZING those things which cannot be helped, such as pet care and personal dental care; REFUSING SACRIFICE for maintaining personal creature-comforts; and FINDING ALTERNATIVES to high costs and excessive spending. Living Strategically is an amalgam of thinking and action-oriented strategies.

There are several recommended topics for analytic memo writing throughout the qualitative study. Memos are opportunities to reflect on and write about:

how you personally relate to the participants and/or the phenomenon

your study’s research questions

your code choices and their operational definitions

the emergent patterns, categories, themes, assertions, and concepts

the possible networks (links, connections, overlaps, flows) among the codes, patterns, categories, themes, assertions, and concepts

an emergent or related existent theory

any problems with the study

any personal or ethical dilemmas with the study

future directions for the study

the analytic memos generated thus far [labeled “metamemos”]

the final report for the study [adapted from Saldaña, 2013, p. 49]

Since writing is analysis, analytic memos expand on the inferential meanings of the truncated codes and categories as a transitional stage into a more coherent narrative with hopefully rich social insight.

QDA Strategy: To Code—A Different Way

The first example of coding illustrated process coding, a way of exploring general social action among humans. But sometimes a researcher works with an individual case study whose language is unique, or with someone the researcher wishes to honor by maintaining the authenticity of his or her speech in the analysis. These reasons suggest that a more participant-centered form of coding may be more appropriate.

In Vivo Coding

A second frequently applied method of coding is called in vivo coding. The root meaning of “in vivo” is “in that which is alive,” and it refers to a code based on the actual language used by the participant (Strauss, 1987). The words or phrases you select as codes from the data record are those that seem to stand out as significant or summative of what is being said.

Using the same transcript of the male participant living in difficult economic times, in vivo codes are listed in the right-hand column. I recommend that in vivo codes be placed in quotation marks as a way of designating that the code is extracted directly from the data record. Note that instead of fifteen codes generated from process coding, the total number of in vivo codes is thirty. This is not to suggest that there should be specific numbers or ranges of codes used for particular methods. In vivo codes, though, tend to be applied more frequently to data. Again, the interviewer’s questions and prompts are not coded, just the participant's responses:

The thirty in vivo codes are then extracted from the transcript and listed in the order they appear to prepare them for analytic action and reflection:

“SKYROCKETED”

“TWO-FOR-ONE”

“THE LITTLE THINGS”

“THINK TWICE”

“ALL-YOU-CAN-EAT”

“CHEAP AND FILLING”

“BAD HABITS”

“DON'T REALLY NEED”

“LIVED KIND OF CHEAP”

“NOT A BIG SPENDER”

“HAVEN'T CHANGED MY HABITS”

“NOT PUTTING AS MUCH INTO SAVINGS”

“SPENDING MORE”

“ANOTHER DING IN MY WALLET”

“HIGH MAINTENANCE”

“COUPLE OF THOUSAND”

“INSURANCE IS JUST WORTHLESS”

“PICK UP THE TAB”

“IT ALL ADDS UP”

“NOT AS BAD OFF”

“SCARY TIMES”

Even though no systematic reorganization or categorization has been conducted with the codes thus far, an analytic memo of first impressions can still be composed:

March 19, 2012
CODE CHOICES: THE EVERYDAY LANGUAGE OF ECONOMICS

After eyeballing the in vivo codes list, I noticed that variants of “CHEAP” appear most often. I recall a running joke between me and a friend of mine when we were shopping for sales. We’d say, “We're not ‘cheap,’ we're frugal.” There’s no formal economic or business language in this transcript—no terms such as “recession” or “downsizing”—just the everyday language of one person trying to cope during “SCARY TIMES” with “ANOTHER DING IN MY WALLET.” The participant notes that he’s always “LIVED KIND OF CHEAP” and is “NOT A BIG SPENDER” and, due to his employment, “NOT AS BAD OFF” as others in the country. Yet even with his middle-class status, he’s still feeling the monetary pinch, dining at inexpensive “ALL-YOU-CAN-EAT” restaurants and worried about the rising price of peanut butter, observing that he’s “NOT PUTTING AS MUCH INTO SAVINGS” as he used to. Of all the codes, “ANOTHER DING IN MY WALLET” stands out to me, particularly because on the audio recording he sounded bitter and frustrated. It seems that he’s so concerned about “THE LITTLE THINGS” because of high veterinary and dental charges. The only way to cope with a “COUPLE OF THOUSAND” dollars’ worth of medical expenses is to find ways of trimming the excess in everyday facets of living: “IT ALL ADDS UP.”

Like process coding, in vivo codes could be clustered into similar categories, but another simple data analytic strategy is also possible.

QDA Strategy: To Outline

To outline in QDA is to hierarchically, processually, and/or temporally assemble such things as codes, categories, themes, assertions, and concepts into a coherent, text-based display.

Traditional outlining formats and content provide not only templates for writing a report but templates for analytic organization. This principle can be found in several CAQDAS (Computer Assisted Qualitative Data Analysis Software) programs through their use of such functions as “hierarchies,” “trees,” and “nodes,” for example. Basic outlining is simply a way of arranging primary, secondary, and sub-secondary items into a patterned display. For example, an organized listing of things in a home might consist of:

Large appliances
    Refrigerator
    Stove-top oven
    Microwave oven

Small appliances
    Coffee maker

Dining room

In QDA, outlining may include descriptive nouns or topics but, depending on the study, it may also involve processes or phenomena in extended passages, such as in vivo codes or themes.

The complexity of what we learn in the field can be overwhelming, and outlining is a way of organizing and ordering that complexity so that it does not become complicated. The cut-and-paste and tab functions of a word processor page enable you to arrange and rearrange the salient items from your preliminary coded analytic work into a more streamlined flow. By no means do I suggest that the intricate messiness of life can always be organized into neatly formatted arrangements, but outlining is an analytic act that stimulates deep reflection on both the interconnectedness and interrelationships of what we study. As an example, here are the thirty in vivo codes generated from the initial transcript analysis, arranged in such a way as to construct five major categories:

[Outline display: the thirty in vivo codes arranged under five major categories (“SCARY TIMES,” “PRIORITY,” “ANOTHER DING IN MY WALLET,” “THE LITTLE THINGS,” and “LIVED KIND OF CHEAP”), with codes such as “DON’T REALLY NEED” and “HAVEN’T CHANGED MY HABITS” clustered beneath them.]
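In digital form, such an outline is simply a nested structure. Here is a minimal sketch, assuming illustrative placements of codes beneath two of the five category heads.

```python
# Minimal sketch: an outline as a nested dictionary. The placements of
# codes under each category head are illustrative assumptions.
outline = {
    '"ANOTHER DING IN MY WALLET"': [
        '"COUPLE OF THOUSAND"',
        '"INSURANCE IS JUST WORTHLESS"',
    ],
    '"LIVED KIND OF CHEAP"': [
        '"NOT A BIG SPENDER"',
        '"HAVEN\'T CHANGED MY HABITS"',
    ],
}

def print_outline(tree: dict) -> None:
    # Render primary items flush left and secondary items indented.
    for category, codes in tree.items():
        print(category)
        for code in codes:
            print("    " + code)

print_outline(outline)
```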

Now that the codes have been rearranged into an outline format, an analytic memo is composed to expand on the rationale and constructed meanings in progress:

March 19, 2012
NETWORKS: EMERGENT CATEGORIES

The five major categories I constructed from the in vivo codes are: “SCARY TIMES,” “PRIORITY,” “ANOTHER DING IN MY WALLET,” “THE LITTLE THINGS,” and “LIVED KIND OF CHEAP.” One of the things that hit me today was that the reason he may be pinching pennies on smaller purchases is that he cannot control the larger ones he has to deal with. Perhaps the only way we can cope with or seem to have some sense of agency over major expenses is to cut back on the smaller ones that we can control. $1,000 for a dental bill? Skip lunch for a few days a week. Insulin medication to buy for a pet? Don’t buy a soft drink from a vending machine. Using this reasoning, let me try to interrelate and weave the categories together as they relate to this particular participant: During these scary economic times, he prioritizes his spending because there seems to be just one ding after another to his wallet. A general lifestyle of living cheaply and keeping an eye out for how to save money on the little things compensates for those major expenses beyond his control.

QDA Strategy: To Code—In Even More Ways

The process and in vivo coding examples thus far have demonstrated only two specific methods of thirty-two documented approaches (Saldaña, 2013). Which one(s) you choose for your analysis depends on such factors as your conceptual framework, the genre of qualitative research for your project, the types of data you collect, and so on. The following sections present a few other approaches available for coding qualitative data that you may find useful as starting points.

Descriptive Coding

Descriptive codes are primarily nouns that simply summarize the topic of a datum. This coding approach is particularly useful when you have different types of data gathered for one study, such as interview transcripts, field notes, documents, and visual materials such as photographs. Descriptive codes not only help categorize but also index the data corpus’ basic contents for further analytic work. An example of an interview portion coded descriptively, taken from the participant living in tough economic times, follows to illustrate how the same data can be coded in multiple ways:

For initial analysis, descriptive codes are clustered into similar categories to detect such patterns as frequency (i.e., categories with the largest number of codes), interrelationship (i.e., categories that seem to connect in some way), and initial work for grounded theory development.

Values Coding

Values coding identifies the values, attitudes, and beliefs of a participant, as shared by the individual and/or interpreted by the analyst. This coding method infers the “heart and mind” of an individual or group’s worldview as to what is important, perceived as true, maintained as opinion, and felt strongly. The three constructs are coded separately but are part of a complex interconnected system.

Briefly, a value (V) is what we attribute as important, be it a person, thing, or idea. An attitude (A) is the evaluative way we think and feel about ourselves, others, things, or ideas. A belief (B) is what we think and feel as true or necessary, formed from our “personal knowledge, experiences, opinions, prejudices, morals, and other interpretive perceptions of the social world” (Saldaña, 2009, pp. 89–90). Values coding explores intrapersonal, interpersonal, and cultural constructs or ethos. It is an admittedly slippery task to code this way, for it is sometimes difficult to discern what is a value, attitude, or belief because they are intricately interrelated. But the depth you can potentially obtain is rich. An example of values coding follows:

For analysis, categorize the codes for each of the three different constructs together (i.e., all values in one group, attitudes in a second group, and beliefs in a third group). Analytic memo writing about the patterns and possible interrelationships may reveal a more detailed and intricate worldview of the participant.
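A minimal sketch of that grouping step follows; the values codes themselves are hypothetical examples loosely inspired by this case.

```python
# Minimal sketch: group values-coded entries by construct.
# The codes themselves are hypothetical examples.
from collections import defaultdict

entries = [
    ("V", "FRUGALITY"),               # value: what is important
    ("V", "A PET'S HEALTH"),          # value
    ("A", "INSURANCE IS WORTHLESS"),  # attitude: an evaluative stance
    ("B", "SMART IS THE NEW RICH"),   # belief: held as true
]

by_construct = defaultdict(list)
for construct, code in entries:
    by_construct[construct].append(code)

# All values in one group, attitudes in a second, beliefs in a third.
for construct in ("V", "A", "B"):
    print(construct, by_construct[construct])
```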

Dramaturgical Coding

Dramaturgical coding perceives life as performance and its participants as characters in a social drama. Codes are assigned to the data (i.e., a “play script”) that analyze the characters in action, reaction, and interaction. Dramaturgical coding of participants examines their objectives (OBJ) or wants, needs, and motives; the conflicts (CON) or obstacles they face as they try to achieve their objectives; the tactics (TAC) or strategies they employ to reach their objectives; their attitudes (ATT) toward others and their given circumstances; the particular emotions (EMO) they experience throughout; and their subtexts (SUB) or underlying and unspoken thoughts. The following is an example of dramaturgically coded data:

Not included in this particular interview excerpt are the emotions the participant may have experienced or talked about. His later line, “that’s another ding in my wallet,” would have been coded EMO: BITTER. A reader may not have inferred that specific emotion from seeing the line in print. But the interviewer, present during the event and listening carefully to the audio recording during transcription, noted that feeling in his tone of voice.

For analysis, group similar codes together (e.g., all objectives in one group, all conflicts in another group, all tactics in a third group), or string together chains of how participants deal with their circumstances to overcome their obstacles through tactics (e.g., OBJ: SAVING MEAL MONEY > TAC: SKIPPING MEALS). Explore how the individuals or groups manage problem solving in their daily lives. Dramaturgical coding is particularly useful as preliminary work for narrative inquiry story development or arts-based research representations such as performance ethnography.
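Here is a minimal sketch of the chaining tactic just described; the chains are illustrative reconstructions, not output from an actual analysis.

```python
# Minimal sketch: string dramaturgical codes into chains that show how
# tactics serve objectives. The chains are illustrative reconstructions.
chains = [
    ["OBJ: SAVING MEAL MONEY", "TAC: SKIPPING MEALS"],
    ["OBJ: SAVING MEAL MONEY", "TAC: STOCKING UP AT BUFFETS"],
    ["CON: MAJOR DENTAL EXPENSES", "TAC: TRIMMING SMALL PURCHASES"],
]

for chain in chains:
    print(" > ".join(chain))
```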

Versus Coding

Versus coding identifies the conflicts, struggles, and power issues observed in social action, reaction, and interaction as an X VS. Y code, such as: MEN VS. WOMEN, CONSERVATIVES VS. LIBERALS, FAITH VS. LOGIC, and so on. Conflicts are rarely this dichotomous. They are typically nuanced and much more complex. But humans tend to perceive these struggles with an US VS. THEM mindset. The codes can range from the observable to the conceptual and can be applied to data that show humans in tension with others, themselves, or ideologies.

What follows are examples of versus codes applied to the case study participant’s descriptions of his major medical expenses:

As an initial analytic tactic, group the versus codes into one of three categories: the Stakeholders, their Perceptions and/or Actions, and the Issues at stake. Examine how the three interrelate and identify the central ideological conflict at work as an X vs. Y category. Analytic memos and the final write-up can detail the nuances of the issues.

Remember that what has been profiled in this section is a broad brushstroke description of just a few basic coding processes, several of which can be compatibly “mixed and matched” within a single analysis (see Saldaña’s [2013] The Coding Manual for Qualitative Researchers for a complete discussion). Certainly with additional data, more in-depth analysis can occur, but coding is only one approach to extracting and constructing preliminary meanings from the data corpus. What now follows are additional methods for qualitative analysis.

QDA Strategy: To Theme

To theme in QDA is to construct summative, phenomenological meanings from data through extended passages of text.

Unlike codes, which are most often single words or short phrases that symbolically represent a datum, themes are extended phrases or sentences that summarize the manifest (apparent) and latent (underlying) meanings of data (Auerbach & Silverstein, 2003; Boyatzis, 1998). Themes, intended to represent the essences and essentials of humans’ lived experiences, can also be categorized or listed in superordinate and subordinate outline formats as an analytic tactic.

Below is the interview transcript example used in the coding sections above. (Hopefully you are not too fatigued at this point with the transcript, but it’s important to know how inquiry with the same data set can be approached in several different ways.) During the investigation of the ways middle-class Americans are influenced and affected by the current (2008–2012) economic recession, the researcher noticed that participants’ stories exhibited facets of what he labeled “economic intelligence” or EI (based on the previously developed theories of Howard Gardner’s multiple intelligences and Daniel Goleman’s emotional intelligence). Notice how themeing interprets what is happening through the use of two distinct phrases—ECONOMIC INTELLIGENCE IS (i.e., manifest or apparent meanings) and ECONOMIC INTELLIGENCE MEANS (i.e., latent or underlying meanings):

Unlike the fifteen process codes and thirty in vivo codes in the previous examples, there are now fourteen themes to work with. In the order they appear, they are:

EI IS TAKING ADVANTAGE OF UNEXPECTED OPPORTUNITY

EI MEANS THINKING BEFORE YOU ACT

EI IS BUYING CHEAP

EI MEANS SACRIFICE

EI IS SAVING A FEW DOLLARS NOW AND THEN

EI MEANS KNOWING YOUR FLAWS

EI IS SETTING PRIORITIES

EI IS FINDING CHEAPER FORMS OF ENTERTAINMENT

EI MEANS LIVING AN INEXPENSIVE LIFESTYLE

EI IS NOTICING PERSONAL AND NATIONAL ECONOMIC TRENDS

EI MEANS YOU CANNOT CONTROL EVERYTHING

EI IS TAKING CARE OF ONE’S OWN HEALTH

EI MEANS KNOWING YOUR LUCK

There are several ways to categorize the themes as preparation for analytic memo writing. The first is to arrange them in outline format with superordinate and subordinate levels, based on how the themes seem to take organizational shape and structure. Simply cutting and pasting the themes in multiple arrangements on a word processor page eventually develops a sense of order to them. For example:

A second approach is to categorize the themes into similar clusters and to develop different category labels or theoretical constructs . A theoretical construct is an abstraction that transforms the central phenomenon’s themes into broader applications but can still use “is” and “means” as prompts to capture the bigger picture at work:

Theoretical Construct 1: EI Means Knowing the Unfortunate Present


Theoretical Construct 2: EI is Cultivating a Small Fortune

Theoretical Construct 3: EI Means a Fortunate Future

What follows is an analytic memo generated from the cut-and-paste arrangement of themes into an outline and into theoretical constructs:

March 19, 2012
EMERGENT THEMES: FORTUNE/FORTUNATELY/UNFORTUNATELY

I first reorganized the themes by listing them in two groups: “is” and “means.” The “is” statements seemed to contain positive actions and constructive strategies for economic intelligence. The “means” statements held primarily a sense of caution and restriction with a touch of negativity thrown in. The first outline, with two major themes, LIVING AN INEXPENSIVE LIFESTYLE and YOU CANNOT CONTROL EVERYTHING, also had this same tone. This reminded me of the old children’s picture book, Fortunately/Unfortunately, and the themes of “fortune” as a motif for the three theoretical constructs came to mind. Knowing the Unfortunate Present means knowing what’s (most) important and what’s (mostly) uncontrollable in one’s personal economic life. Cultivating a Small Fortune consists of those small money-saving actions that, over time, become part of one's lifestyle. A Fortunate Future consists of heightened awareness of trends and opportunities at micro and macro levels, with the understanding that health matters can idiosyncratically affect one’s fortune. These three constructs comprise this particular individual’s EI—economic intelligence.

Again, keep in mind that the examples above for coding and themeing were from one small interview transcript excerpt. The number of codes and their categorization would obviously increase, given a longer interview and/or multiple interviews to analyze. But the same basic principles apply: codes and themes relegated into patterned and categorized forms are heuristics—stimuli for good thinking through the analytic memo-writing process on how everything plausibly interrelates. Methodologists vary in the number of recommended final categories that result from analysis, ranging anywhere from three to seven, with traditional grounded theorists prescribing one central or core category from coded work.

QDA Strategy: To Assert

To assert in QDA is to put forward statements that summarize particular fieldwork and analytic observations that the researcher believes credibly represent and transcend the experiences.

Educational anthropologist Frederick Erickson (1986) wrote a significant and influential chapter on qualitative methods that outlined heuristics for assertion development . Assertions are declarative statements of summative synthesis, supported by confirming evidence from the data, and revised when disconfirming evidence or discrepant cases require modification of the assertions. These summative statements are generated from an interpretive review of the data corpus and then supported and illustrated through narrative vignettes—reconstructed stories from field notes, interview transcripts, or other data sources that provide a vivid profile as part of the evidentiary warrant.

Coding or themeing data can certainly precede assertion development as a way of gaining intimate familiarity with the data, but Erickson’s methods are a more admittedly intuitive yet systematic heuristic for analysis. Erickson promotes analytic induction and exploration of and inferences about the data, based on an examination of the evidence and an accumulation of knowledge. The goal is not to look for “proof” to support the assertions but plausibility of inference-laden observations about the local and particular social world under investigation.

Assertion development is the writing of general statements, plus subordinate yet related ones called subassertions, and a major statement called a key assertion that represents the totality of the data. One also looks for key linkages between them, meaning that the key assertion links to its related assertions, which then link to their respective subassertions. Subassertions can include particulars about any discrepant related cases or specify components of their parent assertions.

Excerpts from the interview transcript of our case study will be used to illustrate assertion development at work. By now, you should be quite familiar with the contents, so I will proceed directly to the analytic example. First, there is a series of thematically related statements the participant makes:

“Buy one package of chicken, get the second one free. Now that was a bargain. And I got some.”

“With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.”

“I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

Assertions can be categorized into low-level and high-level inferences. Low-level inferences address and summarize “what is happening” within the particulars of the case or field site—the “micro.” High-level inferences extend beyond the particulars to speculate on “what it means” in the more general social scheme of things—the “meso” or “macro.” A reasonable low-level assertion about the three statements above collectively might read: The participant finds several small ways to save money during a difficult economic period. A high-level inference that transcends the case to the macro level might read: Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending.

Assertions are instantiated (i.e., supported) by concrete instances of action or participant testimony, whose patterns lead to more general description outside the specific field site. The author’s interpretive commentary can be interspersed throughout the report, but the assertions should be supported with the evidentiary warrant. A few assertions and subassertions based on the case interview transcript might read (and notice how high-level assertions serve as the paragraphs’ topic sentences):

Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending. Restaurants, for example, need to find ways during difficult economic periods when potential customers may be opting to eat inexpensively at home rather than spending more money by dining out. Special offers can motivate cash-strapped clientele to patronize restaurants more frequently. An adult male dealing with such major expenses as underinsured dental care offers: “With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.” The film and video industries also seem to be suffering from a double-whammy during the current recession: less consumer spending on higher-priced entertainment, resulting in a reduced rate of movie theatre attendance (currently 39 percent of the American population, according to CNN); coupled with a media technology and business revolution that provides consumers less costly alternatives through video rentals and internet viewing: “I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

“Particularizability”—the search for specific and unique dimensions of action at a site and/or the specific and unique perspectives of an individual participant—is not intended to filter out trivial excess but to magnify the salient characteristics of local meaning. Although generalizable knowledge serves little purpose in qualitative inquiry since each naturalistic setting will contain its own unique set of social and cultural conditions, there will be some aspects of social action that are plausibly universal or “generic” across settings and perhaps even across time. To work toward this, Erickson advocates that the interpretive researcher look for “concrete universals” by studying actions at a particular site in detail, then comparing those to other sites that have also been studied in detail. The exhibit or display of these generalizable features is to provide a synoptic representation, or a view of the whole. What the researcher attempts to uncover is what is both particular and general at the site of interest, preferably from the perspective of the participants. It is from the detailed analysis of actions at a specific site that these universals can be concretely discerned, rather than abstractly constructed as in grounded theory.

In sum, assertion development is a qualitative data analytic strategy that relies on the researcher’s intense review of interview transcripts, field notes, documents, and other data to inductively formulate composite statements that credibly summarize and interpret participant actions and meanings, and their possible representation of and transfer into broader social contexts and issues.

QDA Strategy: To Display

To display in QDA is to visually present the processes and dynamics of human or conceptual action represented in the data.

Qualitative researchers use not only language but illustrations to both analyze and display the phenomena and processes at work in the data. Tables, charts, matrices, flow diagrams, and other models help both you and your readers cognitively and conceptually grasp the essence and essentials of your findings. As you have seen thus far, even simple outlining of codes, categories, and themes is one visual tactic for organizing the scope of the data. Rich text, font, and format features such as italicizing, bolding, capitalizing, indenting, and bullet pointing provide simple emphasis to selected words and phrases within the longer narrative.

“Think display” was a phrase coined by methodologists Miles and Huberman (1994) to encourage the researcher to think visually as data were collected and analyzed. The magnitude of text can be essentialized into graphics for “at-a-glance” review. Bins in various shapes and lines of various thicknesses, along with arrows suggesting pathways and direction, render the study as a portrait of action. Bins can include the names of codes, categories, concepts, processes, key participants, and/or groups.

As a simple example, Figure 28.1 illustrates the three categories’ interrelationship derived from process coding. It displays what could be the apex of this interaction, LIVING STRATEGICALLY, and its connections to THINKING STRATEGICALLY, which influences and affects SPENDING STRATEGICALLY.

Figure 28.2 represents a slightly more complex (if not playful) model, based on the five major in vivo codes/categories generated from analysis. The graphic is used as a way of initially exploring the interrelationship and flow from one category to another. The use of different font styles, font sizes, and line and arrow thicknesses are intended to suggest the visual qualities of the participant’s language and his dilemmas—a way of heightening in vivo coding even further.

Accompanying graphics are not always necessary for a qualitative report. They can be very helpful for the researcher during the analytic stage as a heuristic for exploring how major ideas interrelate, but illustrations are generally included in published work when they will help supplement and clarify complex processes for readers. Photographs of the field setting or the participants (and only with their written permission) also provide evidentiary reality to the write-up and help your readers get a sense of being there.

QDA Strategy: To Narrate

To narrate in QDA is to create an evocative literary representation and presentation of the data in the form of creative nonfiction.

All research reports are stories of one kind or another. But there is yet another approach to QDA that intentionally documents the research experience as story, in its traditional literary sense. Narrative inquiry plots and storylines the participant’s experiences into what might be initially perceived as a fictional short story or novel. But the story is carefully crafted and creatively written to provide readers with an almost omniscient perspective about the participants’ worldview. The transformation of the corpus from database to creative nonfiction ranges from systematic transcript analysis to open-ended literary composition. The narrative, though, should be solidly grounded in and emerge from the data as a plausible rendering of social life.

Figure 28.1. A simple illustration of category interrelationship.

Figure 28.2. An illustration with rich text and artistic features.

The following is a narrative vignette based on interview transcript selections from the participant living through tough economic times:

Jack stood in front of the soft drink vending machine at work and looked almost worriedly at the selections. With both hands in his pants pockets, his fingers jingled the few coins he had inside them as he contemplated whether he could afford the purchase. One dollar and fifty cents for a twenty-ounce bottle of Diet Coke. One dollar and fifty cents. “I can practically get a two-liter bottle for that same price at the grocery store,” he thought. Then Jack remembered the upcoming dental surgery he needed—that would cost one thousand dollars—and the bottle of insulin and syringes he needed to buy for his diabetic, “high maintenance” cat—about one hundred and twenty dollars. He sighed, took his hands out of his pockets, and walked away from the vending machine. He was skipping lunch that day anyway so he could stock up on dinner later at the cheap-but-filling-all-you-can-eat Chinese buffet. He could get his Diet Coke there.

Narrative inquiry representations, like literature, vary in tone, style, and point of view. The common goal, however, is to create an evocative portrait of participants through the aesthetic power of literary form. A story does not always have to have a moral explicitly stated by its author. The reader reflects on personal meanings derived from the piece and how the specific tale relates to one’s self and the social world.

QDA Strategy: To Poeticize

To poeticize in QDA is to create an evocative literary representation and presentation of the data in the form of poetry.

One form for analyzing or documenting analytic findings is to strategically truncate interview transcripts, field notes, and other pertinent data into poetic structures. Like coding, poetic constructions capture the essence and essentials of data in a creative, evocative way. The elegance of the format attests to the power of carefully chosen language to represent and convey complex human experience.

In vivo codes (codes based on the actual words used by participants themselves) can provide imagery, symbols, and metaphors for rich category, theme, concept, and assertion development, plus evocative content for arts-based interpretations of the data. Poetic inquiry takes note of what words and phrases seem to stand out from the data corpus as rich material for reinterpretation. Using some of the participant’s own language from the interview transcript illustrated above, a poetic reconstruction or “found poetry” might read:

Scary Times

Scary times...
spending more
(another ding in my wallet)
a couple of thousand
(another ding in my wallet)
insurance is just worthless
(another ding in my wallet)
pick up the tab
(another ding in my wallet)
not putting as much into savings
(another ding in my wallet)

It all adds up.

Think twice:
don't really need
skip

Think twice, think cheap:
coupons
bargains
two-for-one
free

Think twice, think cheaper:
stock up
all-you-can-eat
(cheap—and filling)

It all adds up.

Anna Deavere Smith, a verbatim theatre performer, attests that people speak in forms of “organic poetry” in everyday life. Thus in vivo codes can provide core material for poetic representation and presentation of lived experiences, potentially transforming the routine and mundane into the epic. Some researchers also find the genre of poetry to be the most effective way to compose original work that reflects their own fieldwork experiences and autoethnographic stories.

QDA Strategy: To Compute

To compute in QDA is to employ specialized software programs for qualitative data management and analysis.

CAQDAS is an acronym for Computer Assisted Qualitative Data Analysis Software. There are diverse opinions among practitioners in the field about the utility of such specialized programs for qualitative data management and analysis. The software, unlike statistical computation, does not actually analyze data for you at higher conceptual levels. CAQDAS packages serve primarily as a repository for your data (both textual and visual) that enables you to code them, and they can perform such functions as calculating the number of times a particular word or phrase appears in the data corpus (a particularly useful function for content analysis) and displaying selected facets after coding, such as possible interrelationships. Certainly, everyday office software such as Microsoft Word, Excel, and Access provides utilities that can store and, with some pre-formatting and strategic entry, organize qualitative data to enable the researcher’s analytic review. The following internet addresses are listed to help in exploring these CAQDAS packages and obtaining demonstration/trial software and tutorials (a minimal frequency-count sketch in plain Python follows the list):

AnSWR: www.cdc.gov/hiv/topics/surveillance/resources/software/answr

ATLAS.ti: www.atlasti.com

Coding Analysis Toolkit (CAT): cat.ucsur.pitt.edu/

Dedoose: www.dedoose.com

HyperRESEARCH: www.researchware.com

MAXQDA: www.maxqda.com

NVivo: www.qsrinternational.com

QDA Miner: www.provalisresearch.com

Qualrus: www.qualrus.com

Transana (for audio and video data materials): www.transana.org

Weft QDA: www.pressure.to/qda/
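To give a sense of the simplest of these computations, here is a minimal word- and phrase-frequency sketch in plain Python; the transcript file name is hypothetical.

```python
# Minimal sketch: count word and phrase frequencies in a transcript,
# the kind of tally CAQDAS packages automate. File name is hypothetical.
import re
from collections import Counter

with open("interview_transcript.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)
counts = Counter(words)

print(counts["cheap"])                          # single-word frequency
print(text.count("another ding in my wallet"))  # phrase frequency
```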

Some qualitative researchers attest that the software is indispensable for qualitative data management, especially for large-scale studies. Others feel that the learning curve of CAQDAS is too steep to be of pragmatic value, especially for small-scale studies. From my own experience, if you have an aptitude for picking up quickly on the scripts of software programs, explore one or more of the packages listed. If you are a novice to qualitative research, though, I recommend working manually or “by hand” for your first project so you can focus exclusively on the data and not on the software.

QDA Strategy: To Verify

To verify in QDA is to administer an audit of “quality control” to your analysis.

After your data analysis and the development of key findings, you may be thinking to yourself, “Did I get it right?” “Did I learn anything new?” Reliability and validity are terms and constructs of the positivist quantitative paradigm that refer to the replicability and accuracy of measures. But in the qualitative paradigm, other constructs are more appropriate.

Credibility and trustworthiness (Lincoln & Guba, 1985) are two factors to consider when collecting and analyzing the data and presenting your findings. In our qualitative research projects, we need to present a convincing story to our audiences that we “got it right” methodologically. In other words, the amount of time we spent in the field, the number of participants we interviewed, the analytic methods we used, the thinking processes evident to reach our conclusions, and so on should be “just right” to persuade the reader that we have conducted our jobs soundly. But remember that we can never conclusively “prove” something; we can only, at best, convincingly suggest. Research is an act of persuasion.

Credibility in a qualitative research report can be established in several ways. First, citing the key writers of related works in your literature review is a must. Seasoned researchers will sometimes assess whether a novice has “done her homework” by reviewing the bibliography or references. You need not list everything that seminal writers have published about a topic, but their names should appear at least once as evidence that you know the field's key figures and their work.

Credibility can also be established by specifying the particular data analytic methods you employed (e.g., “Interview transcripts were taken through two cycles of process coding, resulting in five primary categories”), through corroboration of data analysis with the participants themselves (e.g., “I asked my participants to read and respond to a draft of this report for their confirmation of accuracy and recommendations for revision”) or through your description of how data and findings were substantiated (e.g., “Data sources included interview transcripts, participant observation field notes, and participant response journals to gather multiple perspectives about the phenomenon”).

Creativity scholar Sir Ken Robinson is credited with offering this cautionary advice about making a convincing argument: “Without data, you're just another person with an opinion.” Thus researchers can also support their findings with relevant, specific evidence by quoting participants directly and/or including field note excerpts from the data corpus. These serve both as illustrative examples for readers and as more credible testimony of what happened in the field.

Trustworthiness, or providing credibility to the writing, is established when we inform the reader of our research processes. Some make the case by stating the duration of fieldwork (e.g., “Seventy-five clock hours were spent in the field”; “The study extended over a twenty-month period”). Others put forth the amounts of data they gathered (e.g., “Twenty-seven individuals were interviewed”; “My field notes totaled approximately 250 pages”). Sometimes trustworthiness is established when we are up front or confessional with the analytic or ethical dilemmas we encountered (e.g., “It was difficult to watch the participant's teaching effectiveness erode during fieldwork”; “Analysis was stalled until I recoded the entire data corpus with a new perspective.”).

The bottom line is that credibility and trustworthiness are matters of researcher honesty and integrity. Anyone can write that he worked ethically, rigorously, and reflexively, but only the writer will ever know the truth. There is no shame if something goes wrong with your research. In fact, it is more than likely the rule, not the exception. Work and write transparently to achieve credibility and trustworthiness with your readers.

The length of this article does not enable me to expand on other qualitative data analytic strategies, such as to conceptualize, abstract, theorize, and write. Yet there are even more subtle thinking strategies to employ throughout the research enterprise, such as to synthesize, problematize, persevere, imagine, and create. Each researcher has his or her own ways of working, and deep reflection (another strategy) on your own methodology and methods as a qualitative inquirer throughout fieldwork and writing provides you with metacognitive awareness of data analytic processes and possibilities.

Data analysis is one of the most elusive processes in qualitative research, perhaps because it is a backstage, behind-the-scenes, in-your-head enterprise. It is not that there are no models to follow. It is just that each project is contextual and case specific. The unique data you collect from your unique research design must be approached with your unique analytic signature. It truly is a learning-by-doing process, so accept that and leave yourself open to discovery and insight as you carefully scrutinize the data corpus for patterns, categories, themes, concepts, assertions, and possibly new theories through strategic analysis.

Auerbach, C. F., & Silverstein, L. B. (2003). Qualitative data: An introduction to coding and analysis. New York: New York University Press.

Birks, M., & Mills, J. (2011). Grounded theory: A practical guide. London: Sage.

Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Thousand Oaks, CA: Sage.

Bryant, A., & Charmaz, K. (Eds.). (2007). The Sage handbook of grounded theory. London: Sage.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119–161). New York: Macmillan.

Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York: Basic Books.

Gibbs, G. R. (2007). Analysing qualitative data. London: Sage.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Saldaña, J. (2009). The coding manual for qualitative researchers. London: Sage.

Saldaña, J. (2011). Fundamentals of qualitative research. New York: Oxford University Press.

Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). London: Sage.

Shank, G. (2008). Abduction. In L. M. Given (Ed.), The Sage encyclopedia of qualitative research methods (pp. 1–2). Thousand Oaks, CA: Sage.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Stern, P. N., & Porr, C. J. (2011). Essentials of accessible grounded theory. Walnut Creek, CA: Left Coast Press.

Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge: Cambridge University Press.

Sunstein, B. S., & Chiseri-Strater, E. (2012). FieldWorking: Reading and writing research (4th ed.). Boston: Bedford/St. Martin's.

Wertz, F. J., Charmaz, K., McMullen, L. M., Josselson, R., Anderson, R., & McSpadden, E. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry. New York: Guilford.


Quantitative Data Analysis: A Comprehensive Guide

By: Ofem Eteng Published: May 18, 2022

A healthcare giant successfully introduces the most effective drug dosage through rigorous statistical modeling, saving countless lives. A marketing team predicts consumer trends with uncanny accuracy, tailoring campaigns for maximum impact.

These trends and dosages are not just any numbers but are a result of meticulous quantitative data analysis. Quantitative data analysis offers a robust framework for understanding complex phenomena, evaluating hypotheses, and predicting future outcomes.

In this blog, we’ll walk through the concept of quantitative data analysis, the steps required, its advantages, and the methods and techniques that are used in this analysis. Read on!

What is Quantitative Data Analysis?

Quantitative data analysis is a systematic process of examining, interpreting, and drawing meaningful conclusions from numerical data. It involves the application of statistical methods, mathematical models, and computational techniques to understand patterns, relationships, and trends within datasets.

Quantitative data analysis methods typically work with algorithms, mathematical analysis tools, and software to gain insights from the data, answering questions such as how many, how often, and how much. Data for quantitative data analysis is usually collected from closed-ended surveys, questionnaires, polls, etc. The data can also be obtained from sales figures, email click-through rates, number of website visitors, and percentage revenue increase.

Quantitative Data Analysis vs Qualitative Data Analysis

When we talk about data, we immediately think about patterns, relationships, and connections between datasets – in short, analyzing the data. Accordingly, data analysis falls into two broad types: Quantitative Data Analysis and Qualitative Data Analysis.

Quantitative data analysis revolves around numerical data and statistics, which are suitable for functions that can be counted or measured. In contrast, qualitative data analysis includes description and subjective information – for things that can be observed but not measured.

Let us differentiate between Quantitative Data Analysis and Qualitative Data Analysis for a better understanding.

Data Preparation Steps for Quantitative Data Analysis

Quantitative data has to be gathered and cleaned before proceeding to the analysis stage. Below are the steps to prepare data for quantitative analysis:

  • Step 1: Data Collection

Before beginning the analysis process, you need data. For quantitative analysis, data is typically collected through structured methods such as closed-ended surveys, questionnaires, polls, and structured interviews.

  • Step 2: Data Cleaning

Once the data is collected, begin the data cleaning process by scanning through the entire dataset for duplicates, errors, and omissions. Keep a close eye out for outliers (data points that differ significantly from the majority of the dataset), because they can skew your analysis results if they are not removed.

This data-cleaning process ensures data accuracy, consistency, and relevance before analysis.
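
To make these steps concrete, here is a minimal sketch of a cleaning pass using Python's pandas library; the file name survey_data.csv, the column satisfaction_score, and the 1.5 × IQR outlier rule are illustrative assumptions, not a prescription.

```python
import pandas as pd

# Load the raw survey export (hypothetical file and column names).
df = pd.read_csv("survey_data.csv")

# Remove exact duplicate responses and rows missing the key measure.
df = df.drop_duplicates()
df = df.dropna(subset=["satisfaction_score"])

# Flag outliers with the common 1.5 * IQR rule.
q1 = df["satisfaction_score"].quantile(0.25)
q3 = df["satisfaction_score"].quantile(0.75)
iqr = q3 - q1
mask = df["satisfaction_score"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

print(f"Flagged {len(df) - mask.sum()} potential outliers")
df_clean = df[mask]  # inspect flagged rows before dropping them
```

Flagged outliers should be inspected before removal, since some may be legitimate extreme responses rather than errors.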

  • Step 3: Data Analysis and Interpretation

Now that you have collected and cleaned your data, it is time to carry out the quantitative analysis. There are two broad methods of quantitative data analysis, which we discuss in the next section.

However, if you have data from multiple sources, collecting and cleaning it can be a cumbersome task. This is where Hevo Data steps in. With Hevo, extracting, transforming, and loading data from source to destination becomes a seamless task, eliminating the need for manual coding. This not only saves valuable time but also enhances the overall efficiency of data analysis and visualization, empowering users to derive insights quickly and with precision.

Hevo is the only real-time ELT no-code data pipeline platform that cost-effectively automates data pipelines flexible to your needs. With integrations for 150+ data sources (40+ free sources), it helps you not only export data from sources and load it into destinations but also transform and enrich your data to make it analysis-ready.

Start for free now!

Now that you are familiar with what quantitative data analysis is and how to prepare your data for analysis, the focus will shift to the purpose of this article, which is to describe the methods and techniques of quantitative data analysis.

Methods and Techniques of Quantitative Data Analysis

Broadly, quantitative data analysis employs two techniques to extract meaningful insights from datasets. The first is descriptive statistics, which summarizes and portrays essential features of a dataset, such as the mean, median, and standard deviation.

Inferential statistics, the second method, extrapolates insights and predictions from a sample dataset to make broader inferences about an entire population, such as hypothesis testing and regression analysis.

An in-depth explanation of both methods is provided below:

  • Descriptive Statistics
  • Inferential Statistics

1) Descriptive Statistics

Descriptive statistics, as the name implies, describe a dataset. They help you understand your data by summarizing it and finding patterns in the specific sample. They provide absolute numbers obtained from a sample but do not necessarily explain the rationale behind those numbers, and they are mostly used for analyzing single variables. The measures used in descriptive statistics include the following (a short code sketch follows the list):

  • Mean:   This calculates the numerical average of a set of values.
  • Median: This is used to get the midpoint of a set of values when the numbers are arranged in numerical order.
  • Mode: This is used to find the most commonly occurring value in a dataset.
  • Percentage: This is used to express how a value or group of respondents within the data relates to a larger group of respondents.
  • Frequency: This indicates the number of times a value is found.
  • Range: This shows the difference between the highest and lowest values in a dataset.
  • Standard Deviation: This is used to indicate how dispersed a range of numbers is, meaning, it shows how close all the numbers are to the mean.
  • Skewness: It indicates how symmetrical a range of numbers is, showing if they cluster into a smooth bell curve shape in the middle of the graph or if they skew towards the left or right.
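
As a short sketch of these measures in code (assuming Python with pandas installed; the ten exam scores are a made-up sample):

```python
import pandas as pd

# Hypothetical sample: exam scores from ten respondents.
scores = pd.Series([62, 71, 71, 75, 78, 80, 84, 88, 93, 97])

print("Mean:", scores.mean())                    # numerical average
print("Median:", scores.median())                # midpoint of the ordered values
print("Mode:", scores.mode().tolist())           # most frequent value(s)
print("Frequency of 71:", int((scores == 71).sum()))
print("Percentage >= 80:", (scores >= 80).mean() * 100, "%")
print("Range:", scores.max() - scores.min())     # spread between extremes
print("Std. deviation:", scores.std())           # dispersion around the mean
print("Skewness:", scores.skew())                # asymmetry of the distribution
```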

2) Inferential Statistics

In quantitative analysis, the expectation is to turn raw numbers into meaningful insight. Descriptive statistics explains the details of a specific dataset, but it does not explain the motives behind the numbers or support generalization beyond the sample; hence the need for further analysis using inferential statistics.

Inferential statistics aim to make predictions or highlight possible outcomes from the analyzed data obtained from descriptive statistics. They are used to generalize results and make predictions between groups, show relationships that exist between multiple variables, and are used for hypothesis testing that predicts changes or differences.

There are various statistical analysis methods used within inferential statistics; a few are discussed below, followed by a short code sketch.

  • Cross Tabulations: Cross tabulation or crosstab is used to show the relationship that exists between two variables and is often used to compare results by demographic groups. It uses a basic tabular form to draw inferences between different data sets and contains data that is mutually exclusive or has some connection with each other. Crosstabs help understand the nuances of a dataset and factors that may influence a data point.
  • Regression Analysis: Regression analysis estimates the relationship between a set of variables. It shows the correlation between a dependent variable (the variable or outcome you want to measure or predict) and any number of independent variables (factors that may impact the dependent variable). Therefore, the purpose of the regression analysis is to estimate how one or more variables might affect a dependent variable to identify trends and patterns to make predictions and forecast possible future trends. There are many types of regression analysis, and the model you choose will be determined by the type of data you have for the dependent variable. The types of regression analysis include linear regression, non-linear regression, binary logistic regression, etc.
  • Monte Carlo Simulation: Monte Carlo simulation, also known as the Monte Carlo method, is a computerized technique of generating models of possible outcomes and showing their probability distributions. It considers a range of possible outcomes and then tries to calculate how likely each outcome will occur. Data analysts use it to perform advanced risk analyses to help forecast future events and make decisions accordingly.
  • Analysis of Variance (ANOVA): This is used to test the extent to which two or more groups differ from each other. It compares the mean of various groups and allows the analysis of multiple groups.
  • Factor Analysis:   A large number of variables can be reduced into a smaller number of factors using the factor analysis technique. It works on the principle that multiple separate observable variables correlate with each other because they are all associated with an underlying construct. It helps in reducing large datasets into smaller, more manageable samples.
  • Cohort Analysis: Cohort analysis is a subset of behavioral analytics that works on a given dataset. Rather than looking at all users as one unit, it breaks the data down into related groups (cohorts) for analysis, where these cohorts usually share common characteristics or similarities within a defined period.
  • MaxDiff Analysis: This is a quantitative data analysis method used to gauge customers' purchase preferences and determine which attributes rank higher than others in the decision process.
  • Cluster Analysis: Cluster analysis is a technique used to identify structures within a dataset. Cluster analysis aims to be able to sort different data points into groups that are internally similar and externally different; that is, data points within a cluster will look like each other and different from data points in other clusters.
  • Time Series Analysis: This is a statistical analytic technique used to identify trends and cycles over time. It is simply the measurement of the same variables at different times, like weekly and monthly email sign-ups, to uncover trends, seasonality, and cyclic patterns. By doing this, the data analyst can forecast how variables of interest may fluctuate in the future. 
  • SWOT Analysis: This is a quantitative data analysis method that assigns numerical values to the strengths, weaknesses, opportunities, and threats of an organization, product, or service, showing a clearer picture of the competition and fostering better business strategies.
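
As an illustrative sketch of two of these methods (simple linear regression and one-way ANOVA) in Python with scipy, using made-up numbers; a real analysis would first check assumptions such as normality, independence, and equal variances:

```python
from scipy import stats

# Hypothetical data: weekly ad spend (independent) vs. sales (dependent).
ad_spend = [10, 15, 20, 25, 30, 35, 40]
sales = [22, 30, 34, 41, 44, 52, 58]

# Simple linear regression: estimate how ad spend relates to sales.
reg = stats.linregress(ad_spend, sales)
print(f"slope={reg.slope:.2f}, intercept={reg.intercept:.2f}, "
      f"r^2={reg.rvalue ** 2:.3f}, p={reg.pvalue:.4f}")

# One-way ANOVA: do mean scores differ across three hypothetical groups?
group_a = [78, 82, 85, 90, 76]
group_b = [70, 74, 69, 72, 75]
group_c = [88, 91, 85, 89, 94]
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")
```

A small p-value in the regression suggests the slope differs from zero; in the ANOVA, it suggests at least one group mean differs, after which post-hoc tests would identify which.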

How to Choose the Right Method for your Analysis?

Choosing between descriptive statistics and inferential statistics can often be confusing. Consider the following factors before choosing the right method for your quantitative data analysis:

1. Type of Data

The first consideration in data analysis is understanding the type of data you have. Different statistical methods have specific requirements based on the data type (e.g., nominal, ordinal, interval, or ratio), and using the wrong method can render results meaningless. The choice of statistical method should align with the nature and distribution of your data to ensure meaningful and accurate analysis.

2. Your Research Questions

When deciding on statistical methods, it’s crucial to align them with your specific research questions and hypotheses. The nature of your questions will influence whether descriptive statistics alone, which reveal sample attributes, are sufficient or if you need both descriptive and inferential statistics to understand group differences or relationships between variables and make population inferences.

Pros and Cons of Quantitative Data Analysis

Pros:

1. Objectivity and Generalizability:

  • Quantitative data analysis offers objective, numerical measurements, minimizing bias and personal interpretation.
  • Results can often be generalized to larger populations, making them applicable to broader contexts.

Example: A study using quantitative data analysis to measure student test scores can objectively compare performance across different schools and demographics, leading to generalizable insights about educational strategies.

2. Precision and Efficiency:

  • Statistical methods provide precise numerical results, allowing for accurate comparisons and prediction.
  • Large datasets can be analyzed efficiently with the help of computer software, saving time and resources.

Example: A marketing team can use quantitative data analysis to precisely track click-through rates and conversion rates on different ad campaigns, quickly identifying the most effective strategies for maximizing customer engagement.

3. Identification of Patterns and Relationships:

  • Statistical techniques reveal hidden patterns and relationships between variables that might not be apparent through observation alone.
  • This can lead to new insights and understanding of complex phenomena.

Example: A medical researcher can use quantitative analysis to pinpoint correlations between lifestyle factors and disease risk, aiding in the development of prevention strategies.

Cons:

1. Limited Scope:

  • Quantitative analysis focuses on quantifiable aspects of a phenomenon, potentially overlooking important qualitative nuances, such as emotions, motivations, or cultural contexts.

Example: A survey measuring customer satisfaction with numerical ratings might miss key insights about the underlying reasons for their satisfaction or dissatisfaction, which could be better captured through open-ended feedback.

2. Oversimplification:

  • Reducing complex phenomena to numerical data can lead to oversimplification and a loss of richness in understanding.

Example: Analyzing employee productivity solely through quantitative metrics like hours worked or tasks completed might not account for factors like creativity, collaboration, or problem-solving skills, which are crucial for overall performance.

3. Potential for Misinterpretation:

  • Statistical results can be misinterpreted if not analyzed carefully and with appropriate expertise.
  • The choice of statistical methods and assumptions can significantly influence results.

This blog discusses the steps, methods, and techniques of quantitative data analysis. It also gives insights into the methods of data collection, the type of data one should work with, and the pros and cons of such analysis.

Gain a better understanding of data analysis with these essential reads:

  • Data Analysis and Modeling: 4 Critical Differences
  • Exploratory Data Analysis Simplified 101
  • 25 Best Data Analysis Tools in 2024

Carrying out successful data analysis requires prepping the data and making it analysis-ready. That is where Hevo steps in.

Want to give Hevo a try? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You may also have a look at Hevo's pricing, which will assist you in selecting the best plan for your requirements.

Share your experience of understanding Quantitative Data Analysis in the comment section below! We would love to hear your thoughts.

Ofem Eteng

Ofem is a freelance writer specializing in data-related topics, with expertise in translating complex concepts and a focus on data science, analytics, and emerging technologies.


A Longitudinal Analysis of Relations from Motivation to Self-regulatory Strategy on Academic Achievement in Academically Higher-Achieving Students

  • Regular Article
  • Published: 10 April 2024

  • Kwang Surk Jung   ORCID: orcid.org/0000-0002-9968-4398 1  

This research analyzes the relations from motivation through self-regulatory strategy to academic achievement in high school among academically higher-achieving students. Autoregressive cross-lagged modeling in Mplus 8.5 was used to evaluate 309 high school students with higher achievement in language or mathematics from the Korean Education Longitudinal Study 2013 (KELS 2013). First, the effects of motivation on cognitive regulatory strategy, and of cognitive regulatory strategy on behavioral regulatory strategy, are statistically significant and positive both within the same year and two years later. Second, the longitudinal mediation by self-regulatory strategy between motivation and high school academic achievement is not statistically significant. These findings strengthen the rationale for practical strategies for academic achievement in learner-centered gifted education.



Acknowledgements

The author prepared this English-language document with the help of Grammarly Premium (2023; www.grammarly.com) for proofreading and the DeepL Translator (2017; https://www.deepl.com/translator) for translation.

Author information

Authors and Affiliations

Interdisciplinary Program of Gifted Education, Ewha Womans University, Seoul, Republic of Korea

Kwang Surk Jung

Corresponding author

Correspondence to Kwang Surk Jung .

Ethics declarations

Conflict of Interest

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Jung, K.S. A Longitudinal Analysis of Relations from Motivation to Self-regulatory Strategy on Academic Achievement in Academically Higher-Achieving Students. Asia-Pacific Edu Res (2024). https://doi.org/10.1007/s40299-024-00843-4

Accepted : 07 March 2024

Published : 10 April 2024

DOI : https://doi.org/10.1007/s40299-024-00843-4


Keywords:

  • Academically higher-achieving students
  • Academic achievement
  • Self-regulatory strategy
  • Longitudinal analysis


ORIGINAL RESEARCH article

Rhetorical Strategies in Chinese and English Talk Show Humour: A Comparative Analysis

Tianli Zhou 1 and Shiyue Chen 2

  • 1 Zunyi Medical University, Zunyi, China
  • 2 Department of Foreign Languages, Faculty of Modern Language and Communication, Putra Malaysia University, Serdang, Malaysia

The final, formatted version of the article will be published soon.


Humour is a kind of cognitive-psychological activity, and it varies among individuals. One of the main characteristics of talk shows is producing humorous discourse to make the audience laugh; however, few studies have made a deeper comparative investigation of the rhetorical strategies used in humorous utterances across languages. Therefore, the current study adopted a mixed-method sequential explanatory design to identify the types of rhetorical strategies in the monologue verbal humour of Chinese and English talk shows and to examine their similarities and differences. 200 monologue samples from 2016 to 2022, consisting of 100 monologues from Chinese talk shows (CTS) and 100 from English talk shows (ETS), were downloaded from the internet as the language corpus. Berger's theory was adopted to identify the types of rhetorical strategies. Based on the obtained findings, this study found that hosts of both talk shows use a variety of rhetorical strategies to produce humorous discourse. The comparison revealed that the most frequently used rhetorical strategies in both talk shows were largely similar (e.g., satire, exaggeration, facetiousness, and ridicule), although the proportions of usage differed slightly. Interestingly, misunderstanding occurred twenty times in CTS but was not found in ETS, while simile and personification were used more often in ETS. In conclusion, this study contributes valuable insights into the use of different types of rhetorical strategies to create verbal humour in different language contexts.

Keywords: rhetorical strategies, talk show, humour, comparative analysis, Chinese and English

Received: 02 Feb 2024; Accepted: 12 Apr 2024.

Copyright: © 2024 Zhou and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Shiyue Chen, Department of Foreign Languages, Faculty of Modern Language and Communication, Putra Malaysia University, Serdang, 43400, Malaysia

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

  • Open access
  • Published: 29 March 2023

Mapping ethical issues in the use of smart home health technologies to care for older persons: a systematic review

  • Nadine Andrea Felber   ORCID: orcid.org/0000-0001-8207-2996 1 ,
  • Yi Jiao (Angelina) Tian   ORCID: orcid.org/0000-0003-2969-9655 1 ,
  • Félix Pageau   ORCID: orcid.org/0000-0002-4249-7399 2 ,
  • Bernice Simone Elger   ORCID: orcid.org/0000-0003-0857-0510 1 &
  • Tenzin Wangmo 1  

BMC Medical Ethics volume 24, Article number: 24 (2023)


The worldwide increase in older persons demands technological solutions to combat the shortage of caregiving and to enable aging in place. Smart home health technologies (SHHTs) are promoted and implemented as a possible solution from an economic and practical perspective. However, ethical considerations are equally important and need to be investigated.

We conducted a systematic review according to the PRISMA guidelines to investigate if and how ethical questions are discussed in the field of SHHTs in caregiving for older persons.

156 peer-reviewed articles published in English, German and French were retrieved from 10 electronic databases and analyzed. Using narrative analysis, 7 ethical categories were mapped: privacy, autonomy, responsibility, human vs. artificial interactions, trust, ageism and stigma, and other concerns.

The findings of our systematic review show the (lack of) ethical consideration when it comes to the development and implementation of SHHTs for older persons. Our analysis is useful to promote careful ethical consideration when carrying out technology development, research and deployment to care for older persons.

Registration

We registered our systematic review in the PROSPERO network under CRD42021248543.


Introduction/background

Significant advancements in medicine, public health and technology are allowing the world population to grow increasingly older, adding to the steady rise in the proportion of senior citizens (aged over 65) [ 1 ]. Because of this growth in the aging population, the demand for and financial costs of caring for older adults are both rising [ 2 ]. That older persons generally wish to age in place and receive healthcare at home [ 2 ] may mean accepting risks such as falling, a risk that increases with frailty [ 3 ]. However, many prefer accepting these risks rather than moving into long-term care facilities [ 4 , 5 , 6 ].

A solution to this multi-faceted problem of ageing safely at home and receiving appropriate care, while keeping costs at bay, may be the use of smart home health technologies (SHHTs). A smart home is defined by Demiris and colleagues as a “residence wired with technology features that monitor the well-being and activities of their residents to improve overall quality of life, increase independence and prevent emergencies” [ 7 ]. SHHTs, then, represent a certain type of smart home technology, which includes non-invasive, unobtrusive, interoperable and possibly wearable technologies that use a concept called the Internet-of-Things (IoT) [ 8 ]. These technologies can thereby remotely monitor the older resident and register any abnormal deviations in daily habits and vital signs while sending alerts to their formal and informal caregivers when necessary. SHHTs could thus permit older people (and their caregivers) to receive the necessary medical support and attention at their convenience and will, thereby allowing them to continue living independently in their home environment.
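To make this monitoring-and-alert pattern concrete, here is a minimal sketch, assuming a simple z-score baseline per vital sign. The threshold, sensor readings, and notify() hook are illustrative assumptions, not a description of any system covered by the review.

```python
# A hypothetical sketch of the SHHT alerting loop described above: a new
# sensor reading is compared against the resident's recent baseline, and
# caregivers are notified of abnormal deviations. Values are invented.
from statistics import mean, stdev

def is_abnormal(history, reading, z_threshold=3.0):
    """Flag a reading that deviates strongly from the resident's baseline."""
    if len(history) < 5:            # not enough data for a baseline yet
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(reading - mu) / sigma > z_threshold

def notify(caregiver, message):
    print(f"ALERT to {caregiver}: {message}")  # stand-in for an SMS/app push

heart_rate_history = [72, 75, 70, 74, 73, 71, 76]
new_reading = 118
if is_abnormal(heart_rate_history, new_reading):
    notify("informal caregiver", f"heart rate {new_reading} bpm outside baseline")
heart_rate_history.append(new_reading)
```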

All of these functions offer benefits to older persons wishing to age at home. While focusing on practical advantages is important, an equally important question to ask is how ethical these technologies are when used in the care of older persons. Principles of biomedical ethics, such as autonomy, justice [ 9 ], privacy [ 10 ], and responsibility [ 11 ] should not only be respected by medical professionals but also by technology developers, and should be built into the technologies themselves.

The goal of our systematic review is therefore to investigate whether and which ethical concerns are discussed in the pertinent theoretical and empirical research on SHHTs for older persons between 2000 and 2020. Different from previous literature reviews [ 12 , 13 , 14 ], which only explored practical aspects, we explicitly examined if and how researchers treated the ethical aspects of SHHTs in their studies, adding an important yet often overlooked aspect to the literature. Moreover, we present how and which ethical concerns are discussed in the theoretical literature and which ones in the empirical literature, to shed light on possible gaps regarding which ethical concerns are developed and how. Identifying these gaps is the first important step to eventually connecting bioethical considerations to the real world, adapting policies, guidelines and the technologies themselves [ 15 ]. Thus, our systematic review is the first one to do so in the context of ethical issues in SHHTs used for caregiving for older persons.

Search strategy

With the guidance of an information specialist from the University of Basel, our team developed a search strategy according to the PICO principle: Population 1 (Older adults), Population 2 (Caregivers), Intervention (Smart home health technologies), and Context (Home). The outcome of ethics was intentionally omitted as we wanted to capture all relevant studies without narrowing them to concerns that we would classify as “ethical”. Within each category, synonyms and spelling variations for the keywords were used to include all relevant studies. We then adapted the search string by using database-specific thesaurus terms in all ten searched electronic databases: EMBASE, Medline, PsycINFO, CINAHL, SocIndex, SCOPUS, IEEE, Web of Science, Philpapers, and Philosophers Index. We limited the search to peer-reviewed papers published between January 1st, 2000 and December 31st, 2020, written in the English, French, and German languages. This time frame allowed us to map the evolution of SHHTs as a new field.
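As an illustration of how a PICO-based Boolean search string can be assembled before adapting it to database-specific thesaurus terms, consider the sketch below. The synonym lists are invented stand-ins; the review's actual search strings are in appendix part 2.

```python
# A sketch of assembling a PICO-style Boolean search string: synonyms are
# OR'd within each group, the two population groups are OR'd together, and
# the resulting blocks are AND'd. All terms below are illustrative stand-ins.
def block(terms):
    """Quote multi-word terms and join a synonym list with OR."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

population_1 = ["older adult*", "elderly", "senior*", "aged"]           # P1
population_2 = ["caregiver*", "carer*", "nurse*"]                       # P2
intervention = ["smart home*", "home monitoring",
                "assistive technolog*", "ambient assisted living"]      # I
context = ["home", "domestic", "dwelling", "aging in place"]            # C

population = "(" + block(population_1) + " OR " + block(population_2) + ")"
search_string = " AND ".join([population, block(intervention), block(context)])
print(search_string)
```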

The inclusion criteria were the following: (1) The article must be an empirical or theoretical original research contribution. Hence, book chapters, conference proceedings, newspaper articles, commentaries, dissertations, and theses were excluded. Also excluded were other systematic reviews, since their inclusion would duplicate findings from the individual studies. (2) When the included study was empirical, the study’s population of interest must be older persons over 65 years of age, and/or professional or informal caregivers who provide care to older persons. Informal caregivers include anyone in the community who provided support without financial compensation. Professional caregivers include nurses and related professions who receive financial compensation for their caregiving services. (3) The included study must investigate SHHTs and their use in the older persons’ place of dwelling.

First, we carried out the systematic search across databases and removed all duplicates through EndNote (see supplementary Table 1 in appendix part 1 for a list of all included articles). One member of the research team screened all titles manually and excluded irrelevant papers. Then, two authors screened the abstracts and excluded irrelevant papers; any disagreements were resolved by a third author, who then also combined all included articles and removed further duplicates.
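The deduplication itself was handled in EndNote; for readers who want to reproduce the idea outside of EndNote, a minimal sketch keyed on a normalized DOI with a title fallback might look as follows. The record fields are assumptions for illustration.

```python
# A hypothetical sketch of duplicate removal: keep the first occurrence of
# each record, keyed on a normalized DOI when present, else the title.
def normalize(text):
    return " ".join(text.lower().split()) if text else ""

def deduplicate(records):
    """Return records with DOI/title duplicates removed, order preserved."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec.get("doi")) or normalize(rec.get("title"))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1000/x1", "title": "Smart homes for aging in place"},
    {"doi": "10.1000/X1", "title": "Smart homes for aging in place"},  # duplicate
    {"doi": None, "title": "Telecare and older people"},
]
print(len(deduplicate(records)))  # -> 2
```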

[Figure 1: PRISMA 2020 Flowchart]

Final inclusion and data extraction

All included articles were searched and retrieved online (and excluded if the full text was not available). Three co-authors then started data extraction, during which several papers were excluded due to irrelevant content. To code the extracted data, a template was developed, tested in a first round of data extraction, and then used in Microsoft Excel during the remaining extraction process. Study demographics and ethical considerations were recorded. Each extracting author was responsible for a portion of the articles. If uncertainties or disputes occurred, they were resolved by discussion. To ensure that our data extraction was not biased, 10% of the articles were reviewed independently. Upon comparing the data extracted from those 10% of our overall sample, we found that the extracted items reached 80% consistency.
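The 10% double-extraction check amounts to computing simple percent agreement between two extractors' values for the same items. A minimal sketch, with invented field values (the 4-out-of-5 toy example happens to land at the 80% figure reported above):

```python
# A hypothetical sketch of an inter-extractor consistency check.
def percent_agreement(extractor_a, extractor_b):
    """Share of items on which two extractions record the same value."""
    assert len(extractor_a) == len(extractor_b)
    matches = sum(a == b for a, b in zip(extractor_a, extractor_b))
    return 100 * matches / len(extractor_a)

a = ["privacy", "autonomy", "trust", "none", "privacy"]
b = ["privacy", "autonomy", "responsibility", "none", "privacy"]
print(f"{percent_agreement(a, b):.0f}% agreement")  # -> 80% agreement
```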

Data synthesis

The extracted datasets were combined, and the ethical discussions encountered in the publications were analyzed using narrative synthesis [ 16 ]. During this stage, the authors discussed the data and recognized seven first-order ethical categories. Information within these categories was further analyzed to form sub-categories that describe and/or add further information to the key ethical category.
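The bookkeeping behind such a synthesis, filing coded excerpts under a first-order category and an optional sub-category, can be sketched as follows. The excerpt records are invented examples, not the review's coded data.

```python
# A hypothetical sketch of category/sub-category grouping for a narrative
# synthesis: excerpts are filed as {category: {sub_category: [excerpts]}}.
from collections import defaultdict

def synthesize(coded_excerpts):
    categories = defaultdict(lambda: defaultdict(list))
    for item in coded_excerpts:
        categories[item["category"]][item.get("sub", "general")].append(item["text"])
    return categories

coded = [
    {"category": "privacy", "sub": "privacy by choice", "text": "..."},
    {"category": "privacy", "sub": "awareness", "text": "..."},
    {"category": "trust", "text": "..."},
]
for category, subs in synthesize(coded).items():
    print(category, {sub: len(texts) for sub, texts in subs.items()})
```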

Nature of included articles

Our search initially identified 10,924 papers in ten databases. After the duplicates were removed, 9067 papers remained, whose titles were screened, resulting in the exclusion of 5215 papers (Fig. 1). The examination of the remaining 3845 abstracts led to the inclusion of 374 papers for full-text retrieval. As we were unable to find 20 papers after several attempts, the remaining 354 full-texts were included for full-text review. In this full-text review phase, we further excluded 198 full-texts for reasons such as technologies employed in hospitals or technologies unrelated to health. Ultimately, this systematic review included 144 empirical and 12 theoretical papers specifying normative considerations of SHHTs in the context of caregiving for older persons.

Almost all publications (154 out of 156) were written in English, and over 67% (105 papers) were published between 2014 and 2020. About a quarter (26%; 41 papers) were published between 2007 and 2013, and only 7% (10 articles) were from 2000 to 2006. Apart from the 12 theoretical papers, the methodologies used in the 144 empirical papers were as follows: 42 articles (29%) used a mixed-methods approach, 39 (27%) experimental, 38 (26%) qualitative, 15 (10%) quantitative, and the remaining were of an observational, ethnographical, case-study, or iterative-testing nature.

The functions of SHHTs tested or studied in the included empirical papers were categorized as follows: 29 articles (20.14%) were solely involved with (a) physiological and functioning monitoring technologies, 16 (11.11%) solely with (b) safety/security monitoring and assistance functions, 23 (15.97%) solely promoted (c) social interactions, and 9 (6.25%) were solely for (d) cognitive and sensory assistance. However, 46 articles (29%) also involved technologies that fulfilled more than one of the categorized functions. The specific types of SHHTs included in this review comprised: intelligent homes (71 articles, 49.3%); assistive autonomous robots (49 articles, 34.03%); virtual/augmented/mixed reality (7 articles, 4.4%); and AI-enabled health smart apps and wearables (4 articles, 1.39%). The remaining 20 articles (12.8%) involved either multiple technologies or those that did not fall into any of the above categories.
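Descriptive tallies of this kind are easy to reproduce. A minimal sketch, with invented labels standing in for the review's coded categories:

```python
# A hypothetical sketch of the descriptive tallies reported above: count how
# many articles fall into each category and express counts as percentages.
from collections import Counter

def tally(labels):
    counts = Counter(labels)
    total = len(labels)
    return {label: (n, round(100 * n / total, 2)) for label, n in counts.items()}

article_types = (["intelligent home"] * 3 + ["assistive robot"] * 2
                 + ["virtual reality"] * 1)
for label, (n, pct) in tally(article_types).items():
    print(f"{label}: {n} articles ({pct}%)")
```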

Ethical considerations

Of the 156 papers included, 55 did not mention any ethical considerations (see supplementary Table 1 in appendix part 1). Among the 101 papers that noted one or more ethical considerations, we grouped them into 7 main categories: (1) privacy, (2) human vs. artificial relationships, (3) autonomy, (4) responsibility, (5) social stigma and ageism, (6) trust, and (7) other normative issues (see Table 1). Each of these categories consists of various sub-categories that provide more information on how smart home health technologies (possibly) affected or interacted with the older persons or caregivers in the context of caregiving (Table 2). Each of the seven ethical considerations is explained in depth in the following paragraphs.

Privacy

This key category was cited across 58 articles. In theoretical articles, privacy was one of the most often discussed ethical considerations, as 9 out of 12 mentioned privacy-related concerns. Among the 58 articles, four sub-issues within privacy were discussed.

(A) The awareness of privacy was reported as varying according to the type of SHHT end-user. Whereas some end-users were more aware of privacy in relation to SHHTs, others showed little or a total lack of consideration, while some had differing levels of concern for privacy that changed as it was weighed against other values, such as access to healthcare [ 17 ] or a feeling of safety [ 18 ]. Both caregivers and researchers often took privacy concerns into account [ 19 , 20 , 21 ], while older persons themselves did not share the same degree of fears or concerns [ 22 , 23 , 24 ]. Older persons were in fact less concerned about privacy than about costs and usability [ 23 ]. Furthermore, they were willing to trade privacy for safety and the ability to live at home. Nevertheless, several papers acknowledged that privacy is an individualized value, whereby its significance depends on both the person and their context; thus their preferences cannot be generalized [ 25 , 26 , 27 , 28 ]. Lastly, there were also some papers that explicitly stated that no privacy concerns were found by the participants, or that participants found it useful to have monitoring without mentioning privacy as a barrier [ 29 , 30 , 31 ].

The second prevalent sub-issue within privacy was (B) privacy by choice. Both older persons and their caregivers expressed a preference for having a choice in the technology used, in what data is collected, and in where technology should or should not be installed [ 32 , 33 ]. For example, some spaces were perceived as more private, and thus monitoring there felt more intrusive [ 34 , 35 , 36 ]. Formal caregivers were concerned about monitoring technologies being used as a recording device for their work [ 37 , 38 ]. Furthermore, older persons were often worried about cameras [ 39 , 40 ] and “eyes watching”, even if no cameras were involved [ 41 , 42 , 43 ].

The third privacy concern was (C) risk and regulation of privacy, which included discussions surrounding dissemination of data or active data theft [ 44 , 45 , 46 , 47 ], as well as change in behavior or relationships due to interaction with technology [ 48 , 49 ]. Researchers were aware of both legal and design-contextual measures that must be observed in order to ensure that these risks were minimized [ 45 , 50 , 51 ].

The final sub-issue that we categorized was (D) privacy in the case of cognitive impairment. This included disagreements about whether cognitive impairment warrants more intrusive measures or whether privacy should be protected for everyone in the same way [ 52 , 53 ].

Human versus artificial relationships

54 articles in our review contained data pertinent to trade-offs between human and artificial caregiving. Firstly, (A) there was a general fear that robots would replace humans in providing care for older persons [ 28 , 54 , 55 , 56 ], along with related concerns such as losing jobs [ 40 , 57 ], the disadvantages of substituting real interpersonal contact [ 17 , 46 ], and thus increasing the negative effects associated with social isolation [ 41 , 58 ].

Many papers also emphasized (B) the importance of human caregiving, underlining the necessity of human touch [ 26 , 47 , 50 , 59 ] and the belief that technology could not and should not replace humans in connection [ 17 ], love [ 33 ], relationships [ 60 ], and care through attention to subtle signs of health decline in every in-person visit [ 57 ]. Older persons also preferred human contact over machines and had guarded reactions to purely virtual relationships [ 31 , 61 , 62 ]. The use of technology was seen to dehumanize care, as care should be inherently human-oriented [ 27 , 48 ].

There was data alluding to (C) positive reactions to technologies performing caregiving tasks and possibly forming attachments with the technology [ 47 , 49 , 58 ]. Furthermore, some papers cited participants reacting positively to robots replacing human care, where the concept of “good care” could be redefined [ 63 , 64 , 65 , 66 ]. Solely theoretical papers also identified possible benefits of technology for socialization and relationship building [ 67 , 68 ].

Finally, many articles raised the idea of (D) collaboration between machine and human to provide caregiving to older persons [ 69 ]. These studies highlighted the possible harms if such collaboration was not achieved, such as informal caregivers withdrawing from care responsibilities [ 70 ] or the reinforcement of oppressive care relations [ 71 ]. Interestingly, opinions varied on whether the caregiving technology, such as a robot, should have a “life-like” appearance, voice, and emotional expressions, while recognizing the current technological limits in actually providing those features to a satisfactory level [ 46 ]. For example, some users preferred the robot to communicate with voice commands, while others wanted to further customize this function with specific requests on the types of voices generated [ 65 , 72 ].

Autonomy

40 papers mentioned autonomy of the older person with respect to the use of SHHTs. The first sub-theme categorized was in relation to (A) control, which encompassed positive aspects like (possible) empowerment through technology [ 25 , 26 , 73 , 74 ] and negative aspects such as the possibility of technology taking control over the older person, thus increasing dependence [ 55 , 75 ] or decreasing freedom of decision making [ 48 ]. Several studies reported the wishes of older persons to be in control when using the technology (e.g., technology should be easily switched off or on) and to be in control of its potential, meaning, for example, the extent of data collected or transferred [ 17 , 30 , 70 , 76 ]. Furthermore, they should have the option to not use technology in spaces where they do not wish to, e.g., public spaces [ 35 ]. The issue of increased dependency was discussed as a loss, or rather a fear of the loss, of autonomy due to greater reliance on technology, as well as the fear of being monitored all the time [ 28 , 48 ]. In addition, using technology was deemed to make older persons more dependent and to increase isolation [ 77 ].

The second sub-category within autonomy highlighted the need for the technology to (B) protect the autonomy and dignity of its older end-users, which also included the unethical practices of deception (e.g., [ 46 , 49 , 54 , 78 ]), infantilization [ 31 , 60 ], or paternalism [ 17 , 27 , 57 ] as ways of disrespecting older persons’ dignity and autonomy [ 79 , 80 , 81 ]. Also reported was that these users may accept technology to avoid being a burden on others, thus underscoring the value of technology to enhance functional autonomy, understood here as independent functioning [ 52 , 82 , 83 ]. Other studies mentioned this kind of trade-off between autonomy and other values or interests as well, for example between respecting the autonomy of the older persons and nudging them towards certain behavior (perceived as beneficial for them) through the help of technology [ 32 ], or between autonomy and safety [ 24 ].

Two sub-issues within autonomy primarily discussed in the theoretical publications were (C) relational autonomy [ 27 , 41 , 49 , 58 ] and (D) explanations on why autonomy should actually be preserved. The former emphasized the fact that older persons do not and should not live isolated lives and that there should be respect and promotion of their relationships with family members, friends, caregivers, and the community as a whole [ 27 , 47 ]. The latter described the benefits of respecting autonomy, such as increased happiness and well-being [ 65 , 67 ] or a sense of purpose [ 84 ], and thus favoring the promotion of autonomy and choice also from a normative perspective.

Responsibility

This theme included data across 25 articles that mentioned concerns such as the effect of using technologies on the current responsibilities of caregivers and older persons themselves. Specifically, the papers discussed (A) the downsides of assistive home technology on responsibility. That is, the use of technology conflicted with moral ideas around responsibility [ 58 ], especially for caregivers [ 57 , 59 ]. Its use also raised more practical concerns, such as the fear of shifting the responsibility onto the technology and thus diminishing vigilance and/or care. Related to this thought was also a fear of increased responsibility for both older persons [ 60 ] and their caregivers, who were worried that extra work time would be needed to integrate technology into their work, learn its functions, analyze data, and respond to potentially higher frequencies of alerts [ 18 , 35 , 36 , 53 , 85 ].

Additionally, studies reported (B) continuous negotiation between (formal) caregivers’ (professional) responsibilities of care and the opportunities that smart technologies could provide [ 26 , 47 , 55 , 70 , 82 ]. For example, an increased need for cooperation between informal and formal caregivers due to technology was foreseen [ 81 ], and fears were expressed that over-reliance on female caregivers would be exacerbated [ 71 ]. Nevertheless, the use of smart home health technologies was often seen to (C) reduce the burden of care, where caregivers could direct their attention and time to the most-needed situations and better align the responsibilities of care [ 5 , 18 , 49 , 74 , 80 , 81 ]. This shift of burden onto a technology was also reported by older persons as freeing [ 48 ].

Ageism and stigma

24 articles discussed ageism and stigma, which included discussions about the fear of (A) being stigmatized by others through the use of SHHTs [ 73 , 86 ]. Older persons thought acceptance of such technologies amounted to an admission of failure [ 82 ], or to being perceived by others as frail, old, forgetful [ 77 , 87 ], or even stupid [ 26 , 33 , 88 ]. This resulted in them expressing ageist views, stating that they did not need the technology “yet” [ 84 , 89 ]. Some papers reported the belief that the presence of robots was disrespectful to older people [ 52 , 85 , 90 ] and that technologies do little to alleviate the frustration and impression of “being stupid” that older persons may have when they are faced with the complexities of the healthcare system [ 73 ]. Furthermore, older persons in a few studies did express unfamiliarity with learning new technologies in old age [ 42 , 66 , 91 ], coupled with fears of falling behind and not keeping up with their development, and feeling pressured to use technology [ 62 , 89 ].

Within ageism and stigma, (B) social influence was deemed to cause older persons to believe that the longer they have been using technology, the more their loved ones want them to use it as well, creating a sort of reinforcing loop [ 27 ]. Other social points were related to self-esteem, meaning that older persons needed to reach a certain threshold first to publicly admit that they need technology [ 85 ], or to doubts by caregivers about whether they were able to use the devices [ 36 ]. This possibly led older persons to prefer unobtrusive technology that could not be noticed by visitors [ 22 , 55 , 88 ].

Lastly, (C) two theoretical articles raised concerns in regard to technology exacerbating stigmatization of women and migrants in caregiving. Both Parks [ 47 ] and Roberts & Mort [ 71 ] suggested that caregiving technology which does not question the underlying expectation that women give care to their relatives will worsen such gendered expectations in caregiving.

Trust

We identified 18 articles that mentioned some aspect of trust. For both older persons and caregivers, there was often (A) a general mistrust of technologies compared with existing human caregiving [ 33 , 42 ]. Therefore, caregivers became proxies and were relied on to “understand it” and continue providing care [ 48 ]. For caregivers, the lack of trust was associated with the use of technologies, for example, leaving older persons alone with technology [ 81 ], worrying that older persons would not trust the technology [ 29 , 32 ] or that it could change their professional role [ 23 ]. One paper even reported that using technology meant caregivers themselves were not trusted [ 92 ]. Surprisingly, some studies found that older persons had no problem trusting technology, even considering it safer and more reliable than humans [ 58 , 70 ].

The second sub-theme concerned (B) characteristics promoting trust. That is, the degree of automation [ 30 ], the involvement of trusted humans in design and use [ 34 , 93 ], the perceived usefulness of the technology, and time spent with the technology all influenced trust [ 59 , 72 , 94 ]. Robots specifically were trusted more than virtual agents, such as Alexa [ 60 , 65 ]. Taking this a step further, studies discovered that robots with a higher degree of automation or a lower degree of anthropomorphism increased trust [ 30 ].

Other normative issues

There were several miscellaneous considerations not fitting those already mentioned above, and we categorized them as follows. Firstly, two theoretical articles mentioned (A) considerations related to research. Ho [ 27 ] pointed out that empirical evidence of the usefulness of SHHTs is lacking, which may therefore make them less relevant as a possible solution for aging in place. Palm et al. (2013) suggested that, if research considered the fact that many costs of caregiving are hidden because informal caregivers are unpaid, the actual economic benefits of SHHTs would be unknown. Lastly, two articles alluded to (B) psychological phenomena related to the use of SHHTs. Pirhonen et al. [ 58 ] suggested that robots can promote the ethical value of well-being through the promotion of feelings of hope. The other phenomenon was the feeling of blame and fear associated with the adoption of the technology, as caregivers may be pushed to use SHHTs in order not to be blamed for failing to use technology [ 18 ]. This then also nudged caregivers to think that using SHHTs cannot do any harm, so it is better to use them than not.

Discussion

Our systematic review investigated if and how ethical considerations appear in the current research on SHHTs in the context of caregiving for older persons. As we included both empirical and theoretical works of literature, our review is more comprehensive than existing systematic reviews (e.g., [ 12 , 13 , 14 ]), which only explored the empirical side of the research and neglected to study ethical concerns. Our review offers informative and useful insights on dominant ethical issues related to caregiving, such as autonomy and trust [ 95 , 96 ]. At the same time, the study findings bring forth less known ethical concerns that arise when using technologies in the caregiving context, such as responsibility [ 97 ] and ageism and stigma.

The first key finding of our systematic review is the silence on ethics in SHHTs research for caregiving purposes. Over a third of the reviewed publications did not mention any ethical concern. One possible explanation is related to scarcity [ 98 ]. In the context of research in caregiving for older persons, “scarcity” can be understood in a variety of ways: one way is to see the available space for ethical principles in medical technology research as scarce. For example, according to Einav & Ranzani [ 99 ], “Medical technology itself is not required to be ethical; the ethics of medical technology revolves around when, how and on whom each technology is used” (p. 1612). Determining the answers to these questions is done empirically, by providing proof of benefit of the technology, ongoing reporting on (possibly harmful) long-term effects, and so on [ 99 ]. Given that publication space in journals is limited to a certain amount of text, the available space that ethical considerations can take up is scarce. Therefore, adding deliberations about the values or issues unearthed in our systematic review, like trust, responsibility or ageism, may simply not fit in the space available in research publications. This may also be the reason why the values of beneficence and non-maleficence were not found through our narrative analysis. While both values are considered crucial in biomedical ethics [ 9 ], the empirically measured benefits may be considered enough by the authors to demonstrate beneficence (and non-maleficence), leading them not to mention these ethical values explicitly again in their publications.

Another interpretation is the scarcity of time, and the felt pressure to “solve” the problem of limited resources in caregiving [ 2 ]. Researchers might therefore be more inclined to focus on the empirical data showing benefits, rather than to engage in elaborations on the ethical issues that arise with those benefits. Lastly, as researchers have to compete for limited funding [ 100 ] and given that technological research receives more funding than biomedical ethics [ 101 ], it is likely that the number of publications reporting purely empirical studies exceeds that of publications that solely mention ethical issues (as our theoretical papers did) or that combine empirical and ethical parts. Future research needs to investigate these hypotheses.

It is not surprising that privacy was the most discussed ethical issue in relation to SHHTs in caregiving. The topic of privacy, especially in relation to monitoring technologies and/or health, has been widely discussed (see for example [ 102 , 103 , 104 ]). A particularly interesting finding within this ethical concern was related to privacy and cognitive impairment. While discussions around autonomy and cognitive impairment are popular in bioethical research (see e.g. [ 105 , 106 ]), privacy has recently gained more attention from both researchers and designers [ 107 ]. The relation in the reviewed studies between cognitive impairment and privacy seemed to be inverse: intrusions into the privacy of older persons with cognitive impairments were deemed more justified [ 35 , 53 ]. This does not necessarily mean that such intrusions are ethical, but reflects the practical fact that they become possible or necessary in the given context. A possible explanation lies in the connectedness of autonomy and privacy, in the sense that autonomy is needed to consent to any sort of intrusion [ 108 ].

Surprisingly, more research papers mentioned the topic of human vs. artificial relationships as an ethical concern than autonomy. Autonomy is often the most discussed ethics topic when it comes to the use of technology [ 96 ]. However, fears associated with technology replacing human care have recently gained traction [ 109 , 110 , 111 ]. The significance of this theme is likely due to the fact that caregiving for older persons has been (and is) a very human-centric activity [ 112 ]. As mentioned before, the persons willing and able to do this labor (both paid and unpaid caregivers) are limited and their pool is shrinking [ 113 ]. The idea of technology possibly filling this gap is not new [ 114 ], but it is also clearly causing wariness among both older persons and caregivers, as we have discovered [ 56 , 61 ]. Frequently mentioned was the fear of care being replaced by technology. This finding was to be expected, as nursing is not the only profession where the introduction of technology caused fears of job loss [ 115 ]. Within this ethical concern, the importance of human touch and human interaction was underlined [ 110 , 111 ]. Human touch is an important asset for caregivers when they care for older patients, particularly those with dementia, as it is one of the few ways to establish connection and to calm the patient [ 116 ]. Similarly, human touch and face-to-face interactions are mentioned as a critical aspect of caregiving in general, both for the care recipient and the caregiver [ 117 , 118 ]. While caregivers see the aspect of touching and interacting with older care recipients as a way to make their actions more meaningful and healing [ 90 , 117 ], for care recipients being touched, talked to, and listened to is part of feeling respected and experiencing dignity [ 118 , 119 ]. Introducing technology into the caregiving profession may therefore quickly elicit associations with cold and lifeless objects [ 59 ]. Future developments, both in the design of the technologies themselves and in their implementation in caregiving, will require critical discussion among concerned stakeholders and careful decisions on how and to what extent human touch and human care must be preserved.

A unique ethical concern that we have not seen in previous research [ 120 , 121 ] is responsibility, and remarkable within this concern was SHHTs’ negative impact on it. As previously mentioned, the human being and human interaction are seen as central to caregiving [ 117 , 118 ]. This can possibly be extended to concepts exclusively attributable to humans, such as the concept of moral responsibility [ 122 ]. Shifting caregiving tasks onto a technological device, which, by being a device and not a human carer, cannot be morally responsible in the same way as a human being [ 123 ], may introduce a sense of void that caregivers are reluctant to create. Studies have shown that a mismatch in professional and personal values in nursing causes emotional discomfort and stress [ 124 ]; therefore the shift in the professional environment caused by SHHTs is likely to be met with aversion. Additionally, the negative impact of SHHTs on caregiving responsibility was also tied to practical concerns, like caregivers not having enough time to learn how to use the technology [ 35 ], or needing to have access to and to check the older person’s health data [ 36 ]. Such concerns point to the possibility that SHHTs can create unforeseen tasks, which could turn into true burdens instead of relieving caregivers. Indeed, there are indications that the increase in information about the older person through monitoring technologies causes stress for both caregivers and older persons, as the former feel pressure to look at the available data, while the latter prefer to hide unfavorable information so as not to seem burdensome to their caregivers [ 125 ]. Another consequence of SHHTs that emerged as a sub-category was the renegotiation of responsibilities among the different stakeholders. In the field of (assistive) technology, this renegotiation is an ongoing process, with efforts to make technology and its developers more accountable through new policies and regulations [ 126 ]. In the realm of assistive technology in healthcare, these negotiations focus on high-risk cases and emergencies [ 127 ]. Who is responsible for the death of a person if the assistive technology failed to recognize an emergency, or to alert humans in time? Such issues around responsibility and legal liability are partially responsible for the slow uptake of technology in caregiving [ 128 ].

Another important but less discussed ethical concern was ageism and stigma. Ageist prejudices include being perceived as slow, useless, burdensome, and incompetent [ 129 ]. Fear of aging and becoming a burden to others is a fear many older persons have, as current social norms demand independence until death [ 130 ]. Furthermore, the generally ubiquitous use of technology has possibly exacerbated the issue of ageism, as life has become fast-paced and more pressure is placed on aging persons to keep up [ 131 ]. While this would call for more attention to studying ageism in relation to technology, our findings indicate that it unfortunately does not seem to be at the forefront of concerns prevalent in the literature (and thereby society).

Related to ageism is the wish of older persons not to be perceived as old and/or in need of assistance (in the form of technology), which explains the prevalent demand for unobtrusive technology. Obtrusiveness, in the context of SHHTs, is defined as “undesirably prominent and/or noticeable”, yet this definition should include the user’s perception and environment and is thus not an objectively applicable definition [ 132 ]. Nevertheless, we can infer that by “unobtrusive”, users mean SHHTs that are not noticeable by them or, most importantly, by other persons, possibly to reduce the stigma associated with using a technology deemed to be for persons with certain limitations. Further research will have to confirm whether unobtrusive technology actually reduces stigma and/or fosters acceptance of such SHHTs in caregiving.

Lastly, the sub-theme of the stigmatization of women and immigrants in caregiving, and of possibly exacerbating their caregiving burden through technology, was only discovered in two theoretical publications [ 47 , 71 ]. It is well known that the caregiving burden falls mostly upon women [ 133 , 134 ], many of them with a migration background in the case of live-in caregivers [ 135 , 136 ]. It is therefore surprising that we found no mention of a redistribution of the burden of care with technology. This is likely due to the fact that caregiving (be it technologically assisted or not) remains perceived as a more feminine and, unfortunately, low-status profession [ 137 ]. The development of technology, however, is still mostly associated with masculinity. This tension between the innovators and the actual users of technology can lead to the exacerbation of stigma for female and migrant caregivers, as human bias is conserved by the technology instead of disrupted through it [ 137 ].

Finally, trust was an expected ethical concern, given that it is a widely discussed topic in relation to technology (see for example [ 123 , 138 ]) and also in the context of nursing [ 95 , 139 ]. Older persons trusted caregivers to understand SHHTs [ 48 ], while caregivers feared that older persons would not trust the technology used, even though said persons did not express such concerns [ 32 ]. A possibility to mitigate such misunderstandings and put both caregivers and care recipients on an equal understanding of the technology is education tools [ 140 ]. Another surprising finding was that some older persons were inclined to trust SHHTs even more than human caregivers, as they were seen as more reliable [ 70 ]. This trust in technology increased when a physical robot rather than a purely virtual agent was involved [ 60 , 65 ]. Studies in the realm of the embodiment of virtual agents and robots suggest that the presence of a body or face promotes human-like interactions with said agents [ 51 ]. Furthermore, our systematic review discovered other characteristics which promote trust in SHHTs, such as perceived usefulness [ 94 ] or time spent with the technology [ 59 ]. Another important aspect is pre-existing trust in the person introducing the technology to the user [ 34 , 93 ]. In combining these characteristics in the design and implementation of SHHTs in caregiving, researchers and technology developers need to find creative mechanisms to facilitate trustworthiness and foster the adoption of new technologies in caregiving.

Limitations

While we searched 10 databases for publications over a span of 20 years, we are aware that older or newer publications will have escaped our systematic review. Relevant new literature that we found while writing our results has been incorporated into this manuscript. Furthermore, as we specifically refrained from using terms related to ethics in our search strings, in order to also capture instances of the absence of ethical concerns, this choice may have led to missing a few articles, especially in regard to theoretical publications. Lastly, due to lack of resources, we were unable to carry out independent data extraction for all included papers (N = 156) and chose to validate the quality of extracted data by using a random selection of 10% of the included sample. Since there was high agreement on the extracted data, we are confident about the quality of our study findings.

Conclusion

SHHTs offer the possibility to mitigate the shortage of human caregiving resources and to enable older persons to age in place, adequately supported by technology. However, this shift in caregiving comes with ethical challenges. Whether and how these ethical challenges are mentioned in the current research on SHHTs in caregiving for older persons was the focus of this systematic review. Through analyzing 156 articles, both empirical and theoretical, we discovered that, while over one third of the articles did not mention any ethical concerns whatsoever, the other two thirds discussed a plethora of ethical issues. Specifically, we discovered the emergence of concerns with the use of technology in the care of older persons around the themes of human vs. artificial relationships, ageism and stigma, and responsibility. In short, our systematic review offers a comprehensive overview of the currently discussed ethical issues in the context of SHHTs in caregiving for older persons. However, scholars in the fields of gerontology, ethics, and technology working on such issues are (or should be) aware that ethical concerns will change with each developing technology and the population it is used for. For instance, with the rise of artificial intelligence and machine learning, new intelligent or smart technologies will continue to mature with use and time. Thus, ethical values such as autonomy will require re-evaluation as these technologies develop, as will decisions about whether a person should be asked to re-consent, and about how such decision-making should proceed if he or she has developed dementia. In sum, more critical work is necessary to prospectively act on ethical concerns that may arise with new and developing technologies that could be used to reduce the caregiving burden now and in the future.

Data Availability

All data generated or analyzed during this systematic review are included in this published article and its appendices. Appendix part 1 contains all included articles and their characteristics. Appendix part 2 contains the search strategy and all search strings for all searched databases, as well as the PROSPERO registration number.

References

1. Hertog S, Cohen B. Population 2030: Demographic challenges and opportunities for sustainable development planning. 2015 [cited 11 July 2022]. Available at: https://www.semanticscholar.org/paper/Population-2030%3A-Demographic-challenges-and-for-Hertog-Cohen/f0c5c06b4bf7b53f7cb61fe155e762ec23edbc0b

2. Bosch-Farré C, Malagón-Aguilera MC, Ballester-Ferrando D, Bertran-Noguer C, Bonmatí-Tomàs A, Gelabert-Vilella S, et al. Healthy ageing in place: enablers and barriers from the perspective of the elderly. A qualitative study. Int J Environ Res Public Health. 2020;17(18):1–23.

3. Cuvillier B, Chaumon M, Body S, Cros F. Detecting falls at home: user-centered design of a pervasive technology. Hum Technol. 2016;12(2):165–92.

4. Fitzpatrick JM, Tzouvara V. Facilitators and inhibitors of transition for older people who have relocated to a long-term care facility: a systematic review. Health Soc Care Community. 2019;27(3):e57–81.

5. Lee DTF, Woo J, Mackenzie AE. A review of older people’s experiences with residential care placement. J Adv Nurs. 2002;37(1):19–27.

6. Rohrmann S. Epidemiology of frailty in older people. In: Veronese N, editor. Frailty and Cardiovascular Diseases: Research into an Elderly Population. Cham: Springer International Publishing; 2020 [cited 10 February 2022]. p. 21–7. (Advances in Experimental Medicine and Biology). Available at: https://doi.org/10.1007/978-3-030-33330-0_3

7. Demiris G, Hensel BK, Skubic M, Rantz M. Senior residents’ perceived need of and preferences for “smart home” sensor technologies. Int J Technol Assess Health Care. 2008;24(1):120–4.

8. Majumder S, Aghayi E, Noferesti M, Memarzadeh-Tehran H, Mondal T, Pang Z, et al. Smart homes for elderly healthcare - recent advances and research challenges. Sensors. 2017;17(11):E2496.

9. Holm S. Autonomy, authenticity, or best interest: everyday decision-making and persons with dementia. Med Health Care Philos. 2001;4(2):153–9.

10. Trothen TJ. Intelligent assistive technology ethics for aging adults: spiritual impacts as a necessary consideration. 2022 [cited 12 July 2022]. Available at: https://doi.org/10.3390/rel13050452

11. Cook AM. Ethical issues related to the use/non-use of assistive technologies. Dev Disabil Bull. 2009;37:127–52.

12. Demiris G, Hensel BK. Technologies for an aging society: a systematic review of “smart home” applications. Yearb Med Inform. 2008;33–40.

13. Liu L, Stroulia E, Nikolaidis I, Miguel-Cruz A, Rios Rincon A. Smart homes and home health monitoring technologies for older adults: a systematic review. Int J Med Inf. 2016;91:44–59.

14. Moraitou M, Pateli A, Fotiou S. Smart Health Caring Home: a systematic review of smart home care for elders and chronic disease patients. In: Vlamos P, editor. GeNeDis 2016. Cham: Springer International Publishing; 2017. p. 255–64. (Advances in Experimental Medicine and Biology).

15. Klingler C, Silva DS, Schuermann C, Reis AA, Saxena A, Strech D. Ethical issues in public health surveillance: a systematic qualitative review. BMC Public Health. 2017;17(1):295.

16. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rogers M. Guidance on the conduct of narrative synthesis in systematic reviews. A product from the ESRC Methods Programme. Version 1. 2006 [cited 15 September 2022]. Available at: https://www.semanticscholar.org/paper/Guidance-on-the-conduct-of-narrative-synthesis-in-A-Popay-Roberts/ed8b23836338f6fdea0cc55e161b0fc5805f9e27

17. Draper H, Sorell T. Ethical values and social care robots for older people: an international qualitative study. Ethics Inf Technol. 2017;19(1):49–68.

18. Hall A, Wilson CB, Stanmore E, Todd C. Implementing monitoring technologies in care homes for people with dementia: a qualitative exploration using normalization process theory. Int J Nurs Stud. 2017;72:60–70.

19. Airola E, Rasi P. Domestication of a robotic medication-dispensing service among older people in Finnish Lapland. 2020 [cited 6 September 2022]. Available at: https://www.semanticscholar.org/paper/Domestication-of-a-Robotic-Medication-Dispensing-in-Airola-Rasi/c8a84330af2410efdc0c6efcf56fbaf3490a8292

20. Aloulou H, Mokhtari M, Tiberghien T, Biswas J, Phua C, Kenneth Lin JH, et al. Deployment of assistive living technology in a nursing home environment: methods and lessons learned. BMC Med Inform Decis Mak. 2013;13:42.

21. Bankole A, Anderson M, Homdee N. BESI: Behavioral and Environmental Sensing and Intervention for Dementia Caregiver Empowerment - Phases 1 and 2. 2020 [cited 6 September 2022]. Available at: https://journals.sagepub.com/doi/full/10.1177/1533317520906686

22. Alexander GL, Rantz M, Skubic M, Aud MA, Wakefield B, Florea E, et al. Sensor systems for monitoring functional status in assisted living facility residents. Res Gerontol Nurs. 2008;1(4):238–44.

23. Cavallo F, Aquilano M, Arvati M. An ambient assisted living approach in designing domiciliary services combined with innovative technologies for patients with Alzheimer’s disease: a case study. Am J Alzheimers Dis Other Demen. 2015;30(1):69–77.

24. Hunter I, Elers P, Lockhart C, Guesgen H, Singh A, Whiddett D. Issues associated with the management and governance of sensor data and information to assist aging in place: focus group study with health care professionals. JMIR Mhealth Uhealth. 2020;8(12):e24157.

25. Cahill J, Portales R, McLoughin S, Nagan N, Henrichs B, Wetherall S. IoT/sensor-based infrastructures promoting a sense of home, independent living, comfort and wellness. Sensors. 2019;19(3):485.

26. Demiris G, Hensel B. “Smart homes” for patients at the end of life. J Hous Elder. 2009;23(1–2):106–15.

27. Ho A. Are we ready for artificial intelligence health monitoring in elder care? BMC Geriatr. 2020;20(1):358.

28. Kang HG, Mahoney DF, Hoenig H, Hirth VA, Bonato P, Hajjar I, et al. In situ monitoring of health in older adults: technologies and issues. J Am Geriatr Soc. 2010;58(8):1579–86.

29. Arthanat S, Begum M, Gu T, LaRoche DP, Xu D, Zhang N. Caregiver perspectives on a smart home-based socially assistive robot for individuals with Alzheimer’s disease and related dementia. Disabil Rehabil Assist Technol. 2020;15(7):789–98.

30. Erebak S, Turgut T. Caregivers’ attitudes toward potential robot coworkers in elder care. Cogn Technol Work. 2019;21(2):327–36.

31. Meiland FJM, Hattink BJJ, Overmars-Marx T, de Boer ME, Jedlitschka A, Ebben PWG, et al. Participation of end users in the design of assistive technology for people with mild to severe cognitive problems; the European Rosetta project. Int Psychogeriatr. 2014;26(5):769–79.

32. Bedaf S, Marti P, De Witte L. What are the preferred characteristics of a service robot for the elderly? A multi-country focus group study with older adults and caregivers. Assist Technol. 2019;31(3):147–57.

33. Epstein I, Aligato A, Krimmel T, Mihailidis A. Older adults’ and caregivers’ perspectives on in-home monitoring technology. J Gerontol Nurs. 2016;42:1–8.

34. Chung J, Demiris G, Thompson HJ, Chen KY, Burr R, Patel S, et al. Feasibility testing of a home-based sensor system to monitor mobility and daily activities in Korean American older adults. Int J Older People Nurs. 2017;12(1):e12127.

35. Niemelä M, van Aerschot L, Tammela A, Aaltonen I, Lammi H. Towards ethical guidelines of using telepresence robots in residential care. Int J Soc Robot. 2019;13(3):431–9.

36. Robinson EL, Park G, Lane K, Skubic M, Rantz M. Technology for healthy independent living: creating a tailored in-home sensor system for older adults and family caregivers. J Gerontol Nurs. 2020;46(7):35–40.

37. Barnier F, Chekkar R. Building automation, an acceptable solution to dependence? Responses through an acceptability survey about a sensors platform. IRBM. 2018;39(3):167–79.

38. Birks M, Bodak M, Barlas J, Harwood J, Pether M. Robotic seals as therapeutic tools in an aged care facility: a qualitative study. J Aging Res. 2016;2016:1–7.

39. Bertera E, Tran B, Wuertz E, Bonner A. A study of the receptivity to telecare technology in a community-based elderly minority population. J Telemed Telecare. 2007;13:327–32.

40. Mahoney D. An evidence-based adoption of technology model for remote monitoring of elders’ daily activities. Ageing Int. 2010;36:66–81.

41. Boissy P, Corriveau H, Michaud F, Labonte D, Royer MP. A qualitative study of in-home robotic telepresence for home care of community-living elderly subjects. J Telemed Telecare. 2007;13:79–84.

42. Bradford DK, Kasteren YV, Zhang Q, Karunanithi M. Watching over me: positive, negative and neutral perceptions of in-home monitoring held by independent-living older residents in an Australian pilot study. Ageing Soc. 2018;38(7):1377–98.

43. Cohen C, Kampel T, Verloo H. Acceptability among community healthcare nurses of intelligent wireless sensor-system technology for the rapid detection of health issues in home-dwelling older adults. Open Nurs J. 2017;11(1). Available at: https://opennursingjournal.com/VOLUME/11/PAGE/54/

44. Boise L, Wild K, Mattek N, Ruhl M, Dodge HH, Kaye J. Willingness of older adults to share data and privacy concerns after exposure to unobtrusive in-home monitoring. Gerontechnology. 2013;11(3):428–35.

45. Li CZ, Borycki EM. Smart homes for healthcare. Stud Health Technol Inform. 2019;257:283–7.

46. Moyle W. The promise of technology in the future of dementia care. Nat Rev Neurol. 2019;15(6):353–9.

47. Parks JA. Home-based care, technology, and the maintenance of selves. HEC Forum. 2015;27(2):127–41.

48. Essén A. The two facets of electronic care surveillance: an exploration of the views of older people who live with monitoring devices. 2008 [cited 6 September 2022]. Available at: https://www.researchgate.net/publication/5457066_The_two_facets_of_electronic_care_surveillance_An_exploration_of_the_views_of_older_people_who_live_with_monitoring_devices

49. Preuß D, Legal F. Living with the animals: animal or robotic companions for the elderly in smart homes? J Med Ethics. 2017;43(6):407–10.

50. Geier J, Mauch M, Patsch M, Paulicke D. Wie Pflegekräfte im ambulanten Bereich den Einsatz von Telepräsenzsystemen einschätzen - eine qualitative Studie [How outpatient nurses assess the use of telepresence systems - a qualitative study]. Pflege. 2020;33(1):43–51.

51. Kim JY, Liu N, Tan HX, Chu CH. Unobtrusive monitoring to detect depression for elderly with chronic illnesses. IEEE Sens J. 2017;17(17):5694–704.

52. Barrett E, Burke M, Whelan S, Santorelli A, Oliveira BL, Cavallo F, et al. Evaluation of a companion robot for individuals with dementia: quantitative findings of the MARIO project in an Irish residential care setting. J Gerontol Nurs. 2019;47(7):36–45.

53. Kinney JM, Kart CS, Murdoch LD, Conley CJ. Striving to provide safety assistance for families of elders: the SAFE House project. Dementia. 2004;3(3):351–70.

54. Baisch S, Kolling T, Rühl S, Klein B, Pantel J, Oswald F. Emotionale Roboter im Pflegekontext: Empirische Analyse des bisherigen Einsatzes und der Wirkungen von Paro und Pleo [Emotional robots in the caregiving context: empirical analysis of their use to date and the effects of Paro and Pleo]. Z Gerontol Geriatr. 2018;51(1):16–24.

55. Bobillier Chaumon ME, Cuvillier B, Body Bekkadja S, Cros F. Detecting falls at home: user-centered design of a pervasive technology. Hum Technol. 2016;12:165–92.

56. Lussier M, Couture M, Moreau M, Laliberté C, Giroux S, Pigot H, et al. Integrating an ambient assisted living monitoring system into clinical decision-making in home care: an embedded case study. Gerontechnology. 2020;19:77–92.

57. Klein B, Schlömer I. A robotic shower system: acceptance and ethical issues. Z Gerontol Geriatr. 2018;51(1):25–31.

58. Pirhonen J, Melkas H, Laitinen A, Pekkarinen S. Could robots strengthen the sense of autonomy of older people residing in assisted living facilities? A future-oriented study. Ethics Inf Technol. 2020;22(2):151–62.

59. Kleiven HH, Ljunggren B, Solbjør M. Health professionals’ experiences with the implementation of a digital medication dispenser in home care services - a qualitative study. BMC Health Serv Res. 2020;20(1):320.

60. Görer B, Salah AA, Akın HL. An autonomous robotic exercise tutor for elderly people. Auton Robots. 2017;41(3):657–78.

61. Berridge C, Chan KT, Choi Y. Sensor-based passive remote monitoring and discordant values: qualitative study of the experiences of low-income immigrant elders in the United States. JMIR Mhealth Uhealth. 2019;7(3):e11516.

62. Frennert S, Forsberg A, Östlund B. Elderly people’s perceptions of a telehealthcare system: relative advantage, compatibility, complexity and observability. J Technol Hum Serv. 2013;31.

63. Huisman C, Kort H. Two-year use of care robot Zora in Dutch nursing homes: an evaluation study. Healthcare (Basel). 2019;7(1):E31.

64. Libin A, Cohen-Mansfield J. Therapeutic robocat for nursing home residents with dementia: preliminary inquiry. Am J Alzheimers Dis Other Demen. 2004;19(2):111–6.

65. Marti P, Stienstra JT. Exploring empathy in interaction: scenarios of respectful robotics. GeroPsych J Gerontopsychology Geriatr Psychiatry. 2013;26:101–12.

66. Wang RH, Sudhama A, Begum M, Huq R, Mihailidis A. Robots to assist daily activities: views of older adults with Alzheimer’s disease and their caregivers. Int Psychogeriatr. 2017;29(1):67–79.

67. Mitzner TL, Chen TL, Kemp CC, Rogers WA. Identifying the potential for robotics to assist older adults in different living environments. Int J Soc Robot. 2014;6(2):213–27.

68. Kelly D. Smart support at home: the integration of telecare technology with primary and community care systems. Br J Healthc Comput Inform Manage. 2005;22(3):19–21.

69. Woods O. Subverting the logics of “smartness” in Singapore: smart eldercare and parallel regimes of sustainability. Sustain Cities Soc. 2020;53:101940.

70. Jenkins S, Draper H. Care, monitoring, and companionship: views on care robots from older people and their carers. Int J Soc Robot. 2015;7(5):673–83.

71. Roberts C, Mort M. Reshaping what counts as care: older people, work and new technologies. Alter. 2009;3(2):138–58.

72. Lee S, Naguib AM. Toward a sociable and dependable elderly care robot: design, implementation and user study. J Intell Robot Syst. 2020;98(1):5–17.

73. Bowes A, McColgan G. Telecare for older people: promoting independence, participation, and identity. Res Aging. 2013;35(1):32–49.

74. Morris ME. Social networks as health feedback displays. IEEE Internet Comput. 2005;9(5):29–37.

Milligan C, Roberts C, Mort M. Telecare and older people: who cares where? Soc Sci Med 1982 Februar. 2011;72(3):347–54.

Sánchez VG, Anker-Hansen C, Taylor I, Eilertsen G. Older people’s attitudes and perspectives of Welfare Technology in Norway. J Multidiscip Healthc. 2019;12:841–53.

Faucounau V, Wu YH, Boulay M, Maestrutti M, Rigaud AS. Caregivers’ requirements for in-home robotic agent for supporting community-living elderly subjects with cognitive impairment. Technol Health Care 30 April. 2009;17(1):33–40.

Ropero F, Vaquerizo-Hdez D, Muñoz P, Barrero D, R-Moreno M. LARES: An AI-based teleassistance system for emergency home monitoring. Cogn Syst Res. 1. April2019;56.

Naick M. Innovative approaches of using assistive technology to support carers to care for people with night-time incontinence issues. World Fed Occup Ther Bull 3 Juli. 2017;73(2):128–30.

Obayashi K, Kodate N, Shigeru M. Can connected technologies improve sleep quality and safety of older adults and care-givers? An evaluation study of sleep monitors and communicative robots at a residential care home in Japan. Technol Soc 1 Juli. 2020;62:101318.

Palm E. Who cares? Moral Obligations in formal and Informal Care Provision in the light of ICT-Based Home Care. Health Care Anal Juni. 2013;21(2):171–88.

Korchut A, Szklener S, Abdelnour C, Tantinya N, Hernández-Farigola J, Ribes JC. u. a. challenges for Service Robots-Requirements of Elderly adults with cognitive impairments. Front Neurol. 2017;8:228.

O’Brien K, Liggett A, Ramirez-Zohfeld V, Sunkara P, Lindquist LA. Voice-Controlled Intelligent Personal Assistants to support aging in place. J Am Geriatr Soc Januar. 2020;68(1):176–9.

Londei ST, Rousseau J, Ducharme F, St-Arnaud A, Meunier J, Saint-Arnaud J. u. a. An intelligent videomonitoring system for fall detection at home: perceptions of elderly people. J Telemed Telecare. 2009;15(8):383–90.

Melkas H. Innovative assistive technology in finnish public elderly-care services: a focus on productivity. Work Read Mass 1 Januar. 2013;46(1):77–91.

Peter C, Kreiner A, Schröter M, Kim H, Bieber G, Öhberg F. u. a. AGNES: connecting people in a multimodal way. J Multimodal User Interfaces 1 November. 2013;7(3):229–45.

Rawtaer I, Mahendran R, Kua EH, Tan HP, Tan HX, Lee TS. u. a. early detection of mild cognitive impairment with In-Home sensors to monitor behavior patterns in Community-Dwelling Senior Citizens in Singapore: cross-sectional feasibility study. J Med Internet Res 5 Mai. 2020;22(5):e16854.

Gokalp H, de Folter J, Verma V, Fursse J, Jones R, Clarke M. Integrated Telehealth and Telecare for Monitoring Frail Elderly with Chronic Disease. Telemed J E-Health Off J Am Telemed Assoc. Dezember 2018;24(12):940–57.

Holthe T, Halvorsrud L, Lund A. A critical occupational perspective on user engagement of older adults in an assisted living facility in technology research over three years. J Occup Sci 2 Juli. 2020;27(3):376–89.

Wright J. Tactile care, mechanical hugs: japanese caregivers and robotic lifting devices. Asian Anthropol 2 Januar. 2018;17(1):24–39.

Mitseva A, Peterson CB, Karamberi C, Oikonomou LC, Ballis AV, Giannakakos C. u. a. gerontechnology: providing a helping hand when caring for cognitively impaired older adults-intermediate results from a controlled study on the satisfaction and acceptance of informal caregivers. Curr Gerontol Geriatr Res. 2012;2012:401705.

Snyder M, Dringus L, Maitland Schladen, Chenall R, Oviawe E. „Remote Monitoring Technologies in Dementia Care: An Interpretative Phe“ by Martha Snyder, Laurie Dringus[Internet]. 2020 [zitiert 13. September 2022]. Verfügbar unter: https://nsuworks.nova.edu/tqr/vol25/iss5/5/

Suwa S, Tsujimura M, Ide H, Kodate N, Ishimaru M, Shimamura A. u. a. home-care professionals’ ethical perceptions of the Development and Use of Home-care Robots for older adults in Japan. Int J Human–Computer Interact 26 August. 2020;36(14):1295–303.

Torta E, Werner F, Johnson D, Juola J, Cuijpers R, Bazzani M. Evaluation of a Small Socially-Assistive Humanoid Robot in Intelligent Homes for the Care of the Elderly. J Intell Robot Syst. 1. Februar2014

Dinç L, Gastmans C. Trust and trustworthiness in nursing: an argument-based literature review. Nurs Inq September. 2012;19(3):223–37.

Moilanen T, Suhonen R, Kangasniemi M. Nursing support for older people’s autonomy in residential care: an integrative review. Int J Older People Nurs März. 2022;17(2):e12428.

Doorn N. Responsibility ascriptions in technology development and engineering: three perspectives. Sci Eng Ethics März. 2012;18(1):69–90.

Kooli C. COVID-19: public health issues and ethical dilemmas. Ethics Med Public Health Juni. 2021;17:100635.

Einav S, Ranzani OT. Focus on better care and ethics: are medical ethics lagging behind the development of new medical technologies? Intensive Care Med 1 August. 2020;46(8):1611–3.

Bahl R, Bahl S. Publication pressure versus Ethics, in Research and Publication. Indian J Community Med Off Publ Indian Assoc Prev Soc Med Dezember. 2021;46(4):584–6.

Pratt B, Hyder A. Fair Resource Allocation to Health Research: Priority Topics for Bioethics Scholarship - Pratt – 2017 - Bioethics - Wiley Online Library [Internet]. 2017 [zitiert 2. September 2022]. Verfügbar unter: https://onlinelibrary.wiley.com/doi/full/ https://doi.org/10.1111/bioe.12350

Malin B, Goodman K. Section editors for the IMIA Yearbook Special Section. Between Access and privacy: Challenges in sharing Health Data. Yearb Med Inform August. 2018;27(1):55–9.

Martani A, Egli P, Widmer M, Elger B. Data protection and biomedical research in Switzerland: setting the record straight. Swiss Med Wkly 24 August. 2020;150:w20332.

Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med Januar. 2019;25(1):37–43.

Brady Wagner LC. Clinical ethics in the context of language and cognitive impairment: rights and protections. Semin Speech Lang November. 2003;24(4):275–84.

Sharkey A, Sharkey N. Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf Technol März. 2012;14(1):27–40.

Berridge C, Demiris G, Kaye J. Domain experts on Dementia-Care Technologies: mitigating risk in design and implementation. Sci Eng Ethics 18 Februar. 2021;27(1):14.

Greshake Tzovaras B, Angrist M, Arvai K, Dulaney M, Estrada-Galiñanes V, Gunderson B. u. a. open humans: a platform for participant-centered research and personal data exploration. GigaScience 1 Juni. 2019;8(6):giz076.

Ienca M, Lipps M, Wangmo T, Jotterand F, Elger B, Kressig R. Health professionals’ and researchers’ views on Intelligent Assistive Technology for psychogeriatric care. Gerontechnology 8 Oktober. 2018;17:139–50.

Ienca M, Schneble C, Kressig RW, Wangmo T. Digital health interventions for healthy ageing: a qualitative user evaluation and ethical assessment. BMC Geriatr 2 Juli. 2021;21(1):412.

Wangmo T, Lipps M, Kressig RW, Ienca M. Ethical concerns with the use of intelligent assistive technology: findings from a qualitative study with professional stakeholders. BMC Med Ethics 19 Dezember. 2019;20(1):98.

Caregiving AARP. NA for. Caregiving in the United States 2020 [Internet]. AARP. 2020 [zitiert 31. August 2022]. Verfügbar unter: https://www.aarp.org/ppi/info-2020/caregiving-in-the-united-states.html

McGilton KS, Vellani S, Yeung L, Chishtie J, Commisso E, Ploeg J. u. a. identifying and understanding the health and social care needs of older adults with multiple chronic conditions and their caregivers: a scoping review. BMC Geriatr 1 Oktober. 2018;18(1):231.

Sriram V, Jenkinson C, Peters M. Informal carers’ experience of assistive technology use in dementia care at home: a systematic review. BMC Geriatr 14 Juni. 2019;19(1):160.

Schwabe H, Castellacci F. Automation, workers’ skills and job satisfaction. PLoS ONE. 2020;15(11):e0242929.

Huelat B, Pochron S. Stress in the Volunteer Caregiver: Human-Centric Technology Can Support Both Caregivers and People with Dementia. Medicina (Mex). 26. Mai 2020;56:257.

Edvardsson JD, Sandman PO, Rasmussen BH. Meanings of giving touch in the care of older patients: becoming a valuable person and professional. J Clin Nurs Juli. 2003;12(4):601–9.

Stöckigt B, Suhr R, Sulmann D, Teut M, Brinkhaus B. Implementation of intentional touch for geriatric patients with Chronic Pain: a qualitative pilot study. Complement Med Res. 2019;26(3):195–205.

Felber NA, Pageau F, McLean A, Wangmo T. The concept of social dignity as a yardstick to delimit ethical use of robotic assistance in the care of older persons. Med Health Care Philos März. 2022;25(1):99–110.

Ienca M, Wangmo T, Jotterand F, Kressig RW, Elger BS. Ethical design of intelligent assistive technologies for dementia: a descriptive review. Sci Eng Ethics. 2018;24(4):1035.

Zhu J, Shi K, Yang C, Niu Y, Zeng Y, Zhang N. Ethical issues of smart home-based elderly care: A scoping review. J Nurs Manag [Internet]. 22. November 2021 [zitiert 15. September 2022];n/a(n/a). Verfügbar unter: https://onlinelibrary.wiley.com/doi/abs/ https://doi.org/10.1111/jonm.13521

Talbert M. Moral Responsibility. In: Zalta EN, Herausgeber. The Stanford Encyclopedia of Philosophy [Internet]. Winter 2019. Metaphysics Research Lab, Stanford University; 2019 [zitiert 1. Juli 2020]. Verfügbar unter: https://plato.stanford.edu/archives/win2019/entries/moral-responsibility/

DeCamp M, Tilburt JC. Why we cannot trust artificial intelligence in medicine. Lancet Digit Health Dezember. 2019;1(8):e390.

Dall’Ora C, Ball J, Reinius M, Griffiths P. Burnout in nursing: a theoretical review. Hum Resour Health 5 Juni. 2020;18(1):41.

Madara Marasinghe K. Assistive technologies in reducing caregiver burden among informal caregivers of older adults: a systematic review. Disabil Rehabil Assist Technol. 2016;11(5):353–60.

Shah H. Algorithmic accountability. Philos Transact A Math Phys Eng Sci 13 September. 2018;376(2128):20170362.

Fiske A, Henningsen P, Buyx A. Your Robot Therapist will see you now: ethical implications of embodied Artificial Intelligence in Psychiatry, psychology, and psychotherapy. J Med Internet Res 9 Mai. 2019;21(5):e13216.

Scott Kruse C, Karem P, Shifflett K, Vegi L, Ravi K, Brooks M. Evaluating barriers to adopting telemedicine worldwide: a systematic review. J Telemed Telecare Januar. 2018;24(1):4–12.

Chasteen AL, Horhota M, Crumley-Branyon JJ. Overlooked and underestimated: experiences of Ageism in Young, Middle-Aged, and older adults. J Gerontol B Psychol Sci Soc Sci 13 August. 2021;76(7):1323–8.

Svidén G, Wikström BM, Hjortsjö-Norberg M. Elderly Persons’ reflections on relocating to living at Sheltered Housing. Scand J Occup Ther 1 Januar. 2002;9(1):10–6.

McLean A. Ethical frontiers of ICT and older users: cultural, pragmatic and ethical issues. Ethics Inf Technol 1 Dezember. 2011;13(4):313–26.

Zwijsen SA, Niemeijer AR, Hertogh CMPM. Ethics of using assistive technology in the care for community-dwelling elderly people: an overview of the literature. Aging Ment Health Mai. 2011;15(4):419–27.

Jeong JS, Kim SY, Kim JN, Ashamed, Caregivers. Self-Stigma, information, and coping among dementia patient families. J Health Commun 1 November. 2020;25(11):870–8.

Mackinnon CJ. Applying feminist, multicultural, and social justice theory to diverse women who function as caregivers in end-of-life and palliative home care. Palliat Support Care Dezember. 2009;7(4):501–12.

Ha NHL, Chong MS, Choo RWM, Tam WJ, Yap PLK. Caregiving burden in foreign domestic workers caring for frail older adults in Singapore. Int Psychogeriatr August. 2018;30(8):1139–47.

Morales-Gázquez MJ, Medina-Artiles EN, López-Liria R, Aguilar-Parra JM, Trigueros-Ramos R, González-Bernal. JJ, u. a. migrant caregivers of older people in Spain: qualitative insights into relatives’ Experiences. Int J Environ Res Public Health 24 April. 2020;17(8):E2953.

Frennert S. Gender blindness: on health and welfare technology, AI and gender equality in community care. Nurs Inq Dezember. 2021;28(4):e12419.

Starke G, van den Brule R, Elger BS, Haselager P. Intentional machines: a defence of trust in medical artificial intelligence. Bioethics. 2022;36(2):154–61.

Ozaras G, Abaan S. Investigation of the trust status of the nurse-patient relationship. Nurs Ethics August. 2018;25(5):628–39.

Berridge C, Turner NR, Liu L, Karras SW, Chen A, Fredriksen-Goldsen K. u. a. Advance Planning for Technology Use in Dementia Care: Development, Design, and feasibility of a Novel Self-administered decision-making Tool. JMIR Aging 27 Juli. 2022;5(3):e39335.

Download references

Acknowledgements

We thank the information specialist of the University of Basel who advised us on our search strategy.

Funding

Open access funding provided by University of Basel. This study was supported financially by the Swiss National Science Foundation (SNF NRP-77 Digital Transformation, Grant Number 407740_187464/1) as part of the SmaRt homES, Older adUlts, and caRegivers: Facilitating social aCceptance and negotiating rEsponsibilities [RESOURCE] project. The funder took no part in the writing process, and the views expressed in this review do not represent those of the funder.

Author information

Authors and Affiliations

Institute of Biomedical Ethics, University of Basel, Bernoullistrasse 28, 4056, Basel, Switzerland

Nadine Andrea Felber, Yi Jiao (Angelina) Tian, Bernice Simone Elger & Tenzin Wangmo

Faculty of Medicine, Université Laval, 1050 Av. de la Médecine, G1V0A6, Québec, QC, Canada

Félix Pageau


Contributions

Creation of the search strategy and data extraction was a joint effort of NAF and AT. FP and TW extracted data and prepared it for analysis. AT contributed substantially to the data analysis, together with NAF, who is the first author of this manuscript. TW and BE provided final comments and edits. All authors read and approved the manuscript before submission.

Corresponding author

Correspondence to Nadine Andrea Felber.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Additional file 1: PRISMA 2020 checklist

Additional file 2: Appendix part 1

Additional file 3: Appendix part 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Felber, N.A., Tian, Y., Pageau, F. et al. Mapping ethical issues in the use of smart home health technologies to care for older persons: a systematic review. BMC Med Ethics 24, 24 (2023). https://doi.org/10.1186/s12910-023-00898-w


Received: 15 September 2022

Accepted: 02 March 2023

Published: 29 March 2023

DOI: https://doi.org/10.1186/s12910-023-00898-w


Keywords

  • Biomedical ethics
  • Older persons
  • Health technology



4 Reasons Why Managers Fail

  • Swagatam Basu,
  • Atrijit Das,
  • Vitorio Bretas,
  • Jonah Shepp


Nearly half of all managers report buckling under the stress of their role and struggling to deliver.

Gartner research has found that managers today are accountable for 51% more responsibilities than they can effectively manage, and they are starting to buckle under the pressure: 54% are suffering from work-induced stress and fatigue, and 44% are struggling to provide personalized support to their direct reports. Ultimately, one in five managers said they would prefer not to be people managers if given the choice. Further analysis found that 48% of managers are at risk of failure based on two criteria: 1) inconsistency in current performance and 2) lack of confidence in their ability to lead the team to future success. This article identifies four predictors of manager failure and suggests how organizations can address them.

The job of the manager has become unmanageable. Organizations are becoming flatter every year. The average manager’s number of direct reports has increased 2.8-fold over the last six years, according to Gartner research. In the past few years alone, many managers have had to make a series of pivots: from moving to remote work to overseeing hybrid teams to implementing return-to-office mandates.


  • Swagatam Basu is senior director of research in the Gartner HR practice and has spent nearly a decade researching leader and manager effectiveness. His work spans additional HR topics including learning and development, employee experience and recruiting. Swagatam specializes in research involving extensive quantitative analysis, structured and unstructured data mining and predictive modeling.
  • Atrijit Das is a senior specialist, quantitative analytics and data science, in the Gartner HR practice. He drives data-based research that produces actionable insights on core HR topics including performance management, learning and development, and change management.
  • Vitorio Bretas is a director in the Gartner HR practice, supporting HR executives in the execution of their most critical business strategies. He focuses primarily on leader and manager effectiveness and recruiting. Vitorio helps organizations get the most from their talent acquisition and leader effectiveness initiatives.
  • Jonah Shepp is a senior principal, research in the Gartner HR practice. He edits the Gartner HR Leaders Monthly journal, covering HR best practices on topics ranging from talent acquisition and leadership to total rewards and the future of work. An accomplished writer and editor, his work has appeared in numerous publications, including New York Magazine, Politico Magazine, GQ, and Slate.


Can J Hosp Pharm. 2015 May–Jun;68(3)

Qualitative Research: Data Collection, Analysis, and Management

INTRODUCTION

In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area. Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. Whereas quantitative research methods can be used to determine how many people undertake particular behaviours, qualitative methods can help researchers to understand how and why such behaviours take place. Within the context of pharmacy practice research, qualitative approaches have been used to examine a diverse array of topics, including the perceptions of key stakeholders regarding prescribing by pharmacists and the postgraduation employment experiences of young pharmacists (see “Further Reading” section at the end of this article).

In the previous paper, 1 we outlined 3 commonly used methodologies: ethnography 2 , grounded theory 3 , and phenomenology. 4 Briefly, ethnography involves researchers using direct observation to study participants in their “real life” environment, sometimes over extended periods. Grounded theory and its later modified versions (e.g., Strauss and Corbin 5 ) use face-to-face interviews and interactions such as focus groups to explore a particular research phenomenon and may help in clarifying a less-well-understood problem, situation, or context. Phenomenology shares some features with grounded theory (such as an exploration of participants’ behaviour) and uses similar techniques to collect data, but it focuses on understanding how human beings experience their world. It gives researchers the opportunity to put themselves in another person’s shoes and to understand the subjective experiences of participants. 6 Some researchers use qualitative methodologies but adopt a different standpoint, and an example of this appears in the work of Thurston and others, 7 discussed later in this paper.

Qualitative work requires reflection on the part of researchers, both before and during the research process, as a way of providing context and understanding for readers. When being reflexive, researchers should not try to simply ignore or avoid their own biases (as this would likely be impossible); instead, reflexivity requires researchers to reflect upon and clearly articulate their position and subjectivities (world view, perspectives, biases), so that readers can better understand the filters through which questions were asked, data were gathered and analyzed, and findings were reported. From this perspective, bias and subjectivity are not inherently negative but they are unavoidable; as a result, it is best that they be articulated up-front in a manner that is clear and coherent for readers.

THE PARTICIPANT’S VIEWPOINT

What qualitative study seeks to convey is why people have thoughts and feelings that might affect the way they behave. Such study may occur in any number of contexts, but here, we focus on pharmacy practice and the way people behave with regard to medicines use (e.g., to understand patients’ reasons for nonadherence with medication therapy or to explore physicians’ resistance to pharmacists’ clinical suggestions). As we suggested in our earlier article, 1 an important point about qualitative research is that there is no attempt to generalize the findings to a wider population. Qualitative research is used to gain insights into people’s feelings and thoughts, which may provide the basis for a future stand-alone qualitative study or may help researchers to map out survey instruments for use in a quantitative study. It is also possible to use different types of research in the same study, an approach known as “mixed methods” research, and further reading on this topic may be found at the end of this paper.

The role of the researcher in qualitative research is to attempt to access the thoughts and feelings of study participants. This is not an easy task, as it involves asking people to talk about things that may be very personal to them. Sometimes the experiences being explored are fresh in the participant’s mind, whereas on other occasions reliving past experiences may be difficult. However the data are being collected, a primary responsibility of the researcher is to safeguard participants and their data. Mechanisms for such safeguarding must be clearly articulated to participants and must be approved by a relevant research ethics review board before the research begins. Researchers and practitioners new to qualitative research should seek advice from an experienced qualitative researcher before embarking on their project.

DATA COLLECTION

Whatever philosophical standpoint the researcher is taking and whatever the data collection method (e.g., focus group, one-to-one interviews), the process will involve the generation of large amounts of data. In addition to the variety of study methodologies available, there are also different ways of making a record of what is said and done during an interview or focus group, such as taking handwritten notes or video-recording. If the researcher is audio- or video-recording data collection, then the recordings must be transcribed verbatim before data analysis can begin. As a rough guide, it can take an experienced researcher/transcriber 8 hours to transcribe one 45-minute audio-recorded interview, a process that will generate 20–30 pages of written dialogue.

Many researchers will also maintain a folder of “field notes” to complement audio-taped interviews. Field notes allow the researcher to maintain and comment upon impressions, environmental contexts, behaviours, and nonverbal cues that may not be adequately captured through the audio-recording; they are typically handwritten in a small notebook at the same time the interview takes place. Field notes can provide important context to the interpretation of audio-taped data and can help remind the researcher of situational factors that may be important during data analysis. Such notes need not be formal, but they should be maintained and secured in a similar manner to audio tapes and transcripts, as they contain sensitive information and are relevant to the research. For more information about collecting qualitative data, please see the “Further Reading” section at the end of this paper.

DATA ANALYSIS AND MANAGEMENT

If, as suggested earlier, doing qualitative research is about putting oneself in another person’s shoes and seeing the world from that person’s perspective, the most important part of data analysis and management is to be true to the participants. It is their voices that the researcher is trying to hear, so that they can be interpreted and reported on for others to read and learn from. To illustrate this point, consider the anonymized transcript excerpt presented in Appendix 1 , which is taken from a research interview conducted by one of the authors (J.S.). We refer to this excerpt throughout the remainder of this paper to illustrate how data can be managed, analyzed, and presented.

Interpretation of Data

Interpretation of the data will depend on the theoretical standpoint taken by researchers. For example, the title of the research report by Thurston and others, 7 “Discordant indigenous and provider frames explain challenges in improving access to arthritis care: a qualitative study using constructivist grounded theory,” indicates at least 2 theoretical standpoints. The first is the culture of the indigenous population of Canada and the place of this population in society, and the second is the social constructivist theory used in the constructivist grounded theory method. With regard to the first standpoint, it can be surmised that, to have decided to conduct the research, the researchers must have felt that there was anecdotal evidence of differences in access to arthritis care for patients from indigenous and non-indigenous backgrounds. With regard to the second standpoint, it can be surmised that the researchers used social constructivist theory because it assumes that behaviour is socially constructed; in other words, people do things because of the expectations of those in their personal world or in the wider society in which they live. (Please see the “Further Reading” section for resources providing more information about social constructivist theory and reflexivity.) Thus, these 2 standpoints (and there may have been others relevant to the research of Thurston and others 7 ) will have affected the way in which these researchers interpreted the experiences of the indigenous population participants and those providing their care. Another standpoint is feminist standpoint theory which, among other things, focuses on marginalized groups in society. Such theories are helpful to researchers, as they enable us to think about things from a different perspective. Being aware of the standpoints you are taking in your own research is one of the foundations of qualitative work. Without such awareness, it is easy to slip into interpreting other people’s narratives from your own viewpoint, rather than that of the participants.

To analyze the example in Appendix 1, we will adopt a phenomenological approach because we want to understand how the participant experienced the illness and we want to try to see the experience from that person’s perspective. It is important for the researcher to reflect upon and articulate his or her starting point for such analysis; in this example, the coder could reflect upon her own experience as a female of a majority ethnocultural group who has lived within middle class and upper middle class settings. This personal history therefore forms the filter through which the data will be examined. This filter does not diminish the quality or significance of the analysis, since every researcher has his or her own filters; however, by explicitly stating and acknowledging what these filters are, the researcher makes it easier for readers to contextualize the work.

Transcribing and Checking

For the purposes of this paper it is assumed that interviews or focus groups have been audio-recorded. As mentioned above, transcribing is an arduous process, even for the most experienced transcribers, but it must be done to convert the spoken word to the written word to facilitate analysis. For anyone new to conducting qualitative research, it is beneficial to transcribe at least one interview and one focus group. It is only by doing this that researchers realize how difficult the task is, and this realization affects their expectations when asking others to transcribe. If the research project has sufficient funding, then a professional transcriber can be hired to do the work. If this is the case, then it is a good idea to sit down with the transcriber, if possible, and talk through the research and what the participants were talking about. This background knowledge for the transcriber is especially important in research in which people are using jargon or medical terms (as in pharmacy practice). Involving your transcriber in this way makes the work both easier and more rewarding, as he or she will feel part of the team. Transcription editing software is also available, but it is expensive. For example, ELAN (more formally known as EUDICO Linguistic Annotator, developed at the Max Planck Institute for Psycholinguistics) 8 is a tool that can help keep data organized by linking media and data files (particularly valuable if, for example, video-taping of interviews is complemented by transcriptions). It can also be helpful in searching complex data sets. Products such as ELAN do not automatically transcribe interviews or complete analyses, and they do require some time and effort to learn; nonetheless, for some research applications, it may be valuable to consider such software tools.

All audio recordings should be transcribed verbatim, regardless of how intelligible the transcript may be when it is read back. Lines of text should be numbered. Once the transcription is complete, the researcher should read it while listening to the recording and do the following: correct any spelling or other errors; anonymize the transcript so that the participant cannot be identified from anything that is said (e.g., names, places, significant events); insert notations for pauses, laughter, looks of discomfort; insert any punctuation, such as commas and full stops (periods) (see Appendix 1 for examples of inserted punctuation), and include any other contextual information that might have affected the participant (e.g., temperature or comfort of the room).
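Parts of this checking pass lend themselves to simple scripting, particularly line numbering and a first pass of anonymization. The following Python sketch is our own illustration, not a tool described in this paper; the pseudonym map and file names are hypothetical, and automated replacement is only a starting point, since a human reviewer must still catch indirect identifiers.

```python
# Minimal sketch (not from the paper): number the lines of a verbatim
# transcript and replace identifying strings with pseudonyms.
# The pseudonym map and file names are hypothetical examples; every
# transcript still needs a manual anonymization check.
from pathlib import Path

PSEUDONYMS = {
    "Dr Smith": "Dr A",                   # hypothetical clinician name
    "Riverside Hospital": "[hospital]",   # hypothetical place name
}

def anonymize_and_number(in_path: str, out_path: str) -> None:
    text = Path(in_path).read_text(encoding="utf-8")
    for real, fake in PSEUDONYMS.items():
        text = text.replace(real, fake)
    numbered = [f"{i:>4}  {line}"
                for i, line in enumerate(text.splitlines(), start=1)]
    Path(out_path).write_text("\n".join(numbered), encoding="utf-8")

if __name__ == "__main__":
    anonymize_and_number("interview01_raw.txt", "interview01_anon.txt")
```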

Dealing with the transcription of a focus group is slightly more difficult, as multiple voices are involved. One way of transcribing such data is to “tag” each voice (e.g., Voice A, Voice B). In addition, the focus group will usually have 2 facilitators, whose respective roles will help in making sense of the data. While one facilitator guides participants through the topic, the other can make notes about context and group dynamics. More information about group dynamics and focus groups can be found in resources listed in the “Further Reading” section.

Reading between the Lines

During the process outlined above, the researcher can begin to get a feel for the participant’s experience of the phenomenon in question and can start to think about things that could be pursued in subsequent interviews or focus groups (if appropriate). In this way, one participant’s narrative informs the next, and the researcher can continue to interview until nothing new is being heard or, as it says in the text books, “saturation is reached”. While continuing with the processes of coding and theming (described in the next 2 sections), it is important to consider not just what the person is saying but also what they are not saying. For example, is a lengthy pause an indication that the participant is finding the subject difficult, or is the person simply deciding what to say? The aim of the whole process from data collection to presentation is to tell the participants’ stories using exemplars from their own narratives, thus grounding the research findings in the participants’ lived experiences.
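Saturation remains a judgment call, but that judgment can be made visible by tracking how many previously unseen codes each successive interview contributes. The sketch below is purely illustrative (the code sets are invented): when later interviews stop adding new codes, that supports, though does not prove, a claim of saturation.

```python
# Illustrative sketch: count how many new codes each interview adds.
# A run of zeros across later interviews supports a saturation judgment.
codes_per_interview = [
    {"diagnosis disclosure", "medication side effects", "ward routine"},
    {"medication side effects", "not being listened to"},
    {"ward routine", "not being listened to"},
    {"diagnosis disclosure", "medication side effects"},
]

seen: set[str] = set()
for i, codes in enumerate(codes_per_interview, start=1):
    new = codes - seen          # codes never seen in earlier interviews
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")
```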

Smith 9 suggested a qualitative research method known as interpretative phenomenological analysis, which has 2 basic tenets: first, that it is rooted in phenomenology, attempting to understand the meaning that individuals ascribe to their lived experiences, and second, that the researcher must attempt to interpret this meaning in the context of the research. That the researcher has some knowledge and expertise in the subject of the research means that he or she can have considerable scope in interpreting the participant’s experiences. Larkin and others 10 discussed the importance of not just providing a description of what participants say. Rather, interpretative phenomenological analysis is about getting underneath what a person is saying to try to truly understand the world from his or her perspective.

Coding

Once all of the research interviews have been transcribed and checked, it is time to begin coding. Field notes compiled during an interview can be a useful complementary source of information to facilitate this process, as the gap in time between an interview, transcribing, and coding can result in memory bias regarding nonverbal or environmental context issues that may affect interpretation of data.

Coding refers to the identification of topics, issues, similarities, and differences that are revealed through the participants’ narratives and interpreted by the researcher. This process enables the researcher to begin to understand the world from each participant’s perspective. Coding can be done by hand on a hard copy of the transcript, by making notes in the margin or by highlighting and naming sections of text. More commonly, researchers use qualitative research software (e.g., NVivo, QSR International Pty Ltd; www.qsrinternational.com/products_nvivo.aspx ) to help manage their transcriptions. It is advised that researchers undertake a formal course in the use of such software or seek supervision from a researcher experienced in these tools.

Returning to Appendix 1 and reading from lines 8–11, a code for this section might be “diagnosis of mental health condition”, but this would just be a description of what the participant is talking about at that point. If we read a little more deeply, we can ask ourselves how the participant might have come to feel that the doctor assumed he or she was aware of the diagnosis or indeed that they had only just been told the diagnosis. There are a number of pauses in the narrative that might suggest the participant is finding it difficult to recall that experience. Later in the text, the participant says “nobody asked me any questions about my life” (line 19). This could be coded simply as “health care professionals’ consultation skills”, but that would not reflect how the participant must have felt never to be asked anything about his or her personal life, about the participant as a human being. At the end of this excerpt, the participant just trails off, recalling that no-one showed any interest, which makes for very moving reading. For practitioners in pharmacy, it might also be pertinent to explore the participant’s experience of akathisia and why this was left untreated for 20 years.
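Whether coding is done in the margins of a hard copy or in a package such as NVivo, the underlying record is the same: a code name attached to a span of transcript. As a purely illustrative sketch (the line ranges are approximate and the data structure is our own convenience, not an NVivo format), keeping codes as simple records makes it easy to pull together every excerpt that carries a given code:

```python
# Illustrative sketch: store coded segments as records and group the
# excerpts by code. Line numbers refer to the numbered transcript;
# the values shown are loosely based on the excerpt in Appendix 1.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CodedSegment:
    code: str
    transcript: str
    lines: tuple[int, int]   # (first line, last line) in the transcript
    excerpt: str

segments = [
    CodedSegment("diagnosis of mental health condition", "interview01", (8, 11),
                 "when did somebody tell you then that you have schizophrenia"),
    CodedSegment("not being listened to", "interview01", (19, 19),
                 "nobody asked me any questions about my life"),
]

by_code: dict[str, list[CodedSegment]] = defaultdict(list)
for seg in segments:
    by_code[seg.code].append(seg)

for code, segs in by_code.items():
    print(code)
    for seg in segs:
        print(f'  {seg.transcript}, lines {seg.lines[0]}-{seg.lines[1]}: "{seg.excerpt}"')
```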

One of the questions that arises about qualitative research relates to the reliability of the interpretation and representation of the participants’ narratives. There are no statistical tests that can be used to check reliability and validity as there are in quantitative research. However, work by Lincoln and Guba 11 suggests that there are other ways to “establish confidence in the ‘truth’ of the findings” (p. 218). They call this confidence “trustworthiness” and suggest that there are 4 criteria of trustworthiness: credibility (confidence in the “truth” of the findings), transferability (showing that the findings have applicability in other contexts), dependability (showing that the findings are consistent and could be repeated), and confirmability (the extent to which the findings of a study are shaped by the respondents and not researcher bias, motivation, or interest).

One way of establishing the “credibility” of the coding is to ask another researcher to code the same transcript and then to discuss any similarities and differences in the 2 resulting sets of codes. This simple act can result in revisions to the codes and can help to clarify and confirm the research findings.
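That comparison is a discussion rather than a statistical test, but a simple overlap measure can help focus the discussion on where two coders diverge. This sketch is our illustration, with invented code sets; the number itself matters far less than the list of disputed codes it produces.

```python
# Illustrative sketch: compare two coders' code sets for one transcript.
# The overlap score is only a conversation starter; the credibility work
# happens in the discussion of the disputed codes.
coder_a = {"diagnosis disclosure", "medication side effects", "not being listened to"}
coder_b = {"diagnosis disclosure", "consultation skills", "not being listened to"}

shared = coder_a & coder_b
disputed = coder_a ^ coder_b                      # codes only one coder applied
overlap = len(shared) / len(coder_a | coder_b)    # Jaccard similarity

print(f"Agreed codes: {sorted(shared)}")
print(f"To discuss:   {sorted(disputed)}")
print(f"Overlap (Jaccard): {overlap:.2f}")
```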

Theming

Theming refers to the drawing together of codes from one or more transcripts to present the findings of qualitative research in a coherent and meaningful way. For example, there may be examples across participants’ narratives of the way in which they were treated in hospital, such as “not being listened to” or “lack of interest in personal experiences” (see Appendix 1 ). These may be drawn together as a theme running through the narratives that could be named “the patient’s experience of hospital care”. The importance of going through this process is that at its conclusion, it will be possible to present the data from the interviews using quotations from the individual transcripts to illustrate the source of the researchers’ interpretations. Thus, when the findings are organized for presentation, each theme can become the heading of a section in the report or presentation. Underneath each theme will be the codes, examples from the transcripts, and the researcher’s own interpretation of what the themes mean. Implications for real life (e.g., the treatment of people with chronic mental health problems) should also be given.
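In data terms, theming adds one more layer to the coding records: theme, then codes, then supporting quotations. A small sketch of that structure follows, reusing the example theme and codes named above; the nested-dictionary format is our own convenience, not a prescribed standard.

```python
# Illustrative sketch: a theme drawing together codes from the
# narratives, with quotations kept beneath the codes they support.
themes = {
    "the patient's experience of hospital care": {
        "not being listened to": [
            "nobody asked me any questions about my life",
        ],
        "lack of interest in personal experiences": [
            "nobody actually sat down and had a talk and showed some "
            "interest in you as a person",
        ],
    },
}

for theme, codes in themes.items():
    print(theme)
    for code, quotes in codes.items():
        print(f"  [{code}]")
        for quote in quotes:
            print(f'    "{quote}"')
```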

DATA SYNTHESIS

In this final section of this paper, we describe some ways of drawing together or “synthesizing” research findings to represent, as faithfully as possible, the meaning that participants ascribe to their life experiences. This synthesis is the aim of the final stage of qualitative research. For most readers, the synthesis of data presented by the researcher is of crucial significance—this is usually where “the story” of the participants can be distilled, summarized, and told in a manner that is both respectful to those participants and meaningful to readers. There are a number of ways in which researchers can synthesize and present their findings, but any conclusions drawn by the researchers must be supported by direct quotations from the participants. In this way, it is made clear to the reader that the themes under discussion have emerged from the participants’ interviews and not the mind of the researcher. The work of Latif and others 12 gives an example of how qualitative research findings might be presented.

Planning and Writing the Report

As has been suggested above, if researchers code and theme their material appropriately, they will naturally find the headings for sections of their report. Qualitative researchers tend to report “findings” rather than “results”, as the latter term typically implies that the data have come from a quantitative source. The final presentation of the research will usually be in the form of a report or a paper and so should follow accepted academic guidelines. In particular, the article should begin with an introduction, including a literature review and rationale for the research. There should be a section on the chosen methodology and a brief discussion about why qualitative methodology was most appropriate for the study question and why one particular methodology (e.g., interpretative phenomenological analysis rather than grounded theory) was selected to guide the research. The method itself should then be described, including ethics approval, choice of participants, mode of recruitment, and method of data collection (e.g., semistructured interviews or focus groups), followed by the research findings, which will be the main body of the report or paper. The findings should be written as if a story is being told; as such, it is not necessary to have a lengthy discussion section at the end. This is because much of the discussion will take place around the participants’ quotes, such that all that is needed to close the report or paper is a summary, limitations of the research, and the implications that the research has for practice. As stated earlier, it is not the intention of qualitative research to allow the findings to be generalized, and therefore this is not, in itself, a limitation.

Planning out the way that findings are to be presented is helpful. It is useful to insert the headings of the sections (the themes) and then make a note of the codes that exemplify the thoughts and feelings of your participants. It is generally advisable to put in the quotations that you want to use for each theme, using each quotation only once. After all this is done, the telling of the story can begin as you give your voice to the experiences of the participants, writing around their quotations. Do not be afraid to draw assumptions from the participants’ narratives, as this is necessary to give an in-depth account of the phenomena in question. Discuss these assumptions, drawing on your participants’ words to support you as you move from one code to another and from one theme to the next. Finally, as appropriate, it is possible to include examples from literature or policy documents that add support for your findings. As an exercise, you may wish to code and theme the sample excerpt in Appendix 1 and tell the participant’s story in your own way. Further reading about “doing” qualitative research can be found at the end of this paper.
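This planning step, headings from themes with codes and single-use quotations beneath them, can even be drafted mechanically before the narrative writing begins. The sketch below is one possible way to do that (it reuses the structure from the theming example, and the output format is arbitrary); the interpretation written around the quotations remains entirely the researcher’s work.

```python
# Illustrative sketch: draft a findings-section skeleton from a
# theme -> code -> quotations mapping, using each quotation only once.
themes = {
    "The patient's experience of hospital care": {
        "not being listened to": [
            "nobody asked me any questions about my life",
        ],
    },
}

used: set[str] = set()
skeleton: list[str] = []
for theme, codes in themes.items():
    skeleton.append(theme.upper())            # theme becomes a section heading
    for code, quotes in codes.items():
        skeleton.append(f"Code: {code}")
        for quote in quotes:
            if quote not in used:             # each quotation appears only once
                used.add(quote)
                skeleton.append(f'  "{quote}"')
        skeleton.append("  [researcher's interpretation goes here]")

print("\n".join(skeleton))
```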

CONCLUSIONS

Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. It can be used in pharmacy practice research to explore how patients feel about their health and their treatment. Qualitative research has been used by pharmacists to explore a variety of questions and problems (see the “Further Reading” section for examples). An understanding of these issues can help pharmacists and other health care professionals to tailor health care to match the individual needs of patients and to develop a concordant relationship. Doing qualitative research is not easy and may require a complete rethink of how research is conducted, particularly for researchers who are more familiar with quantitative approaches. There are many ways of conducting qualitative research, and this paper has covered some of the practical issues regarding data collection, analysis, and management. Further reading around the subject will be essential to truly understand this method of accessing peoples’ thoughts and feelings to enable researchers to tell participants’ stories.

Appendix 1. Excerpt from a sample transcript

The participant (age late 50s) had suffered from a chronic mental health illness for 30 years. The participant had become a “revolving door patient,” someone who is frequently in and out of hospital. As the participant talked about past experiences, the researcher asked:

  • What was treatment like 30 years ago?
  • Umm—well it was pretty much they could do what they wanted with you because I was put into the er, the er kind of system er, I was just on
  • endless section threes.
  • Really…
  • But what I didn’t realize until later was that if you haven’t actually posed a threat to someone or yourself they can’t really do that but I didn’t know
  • that. So wh-when I first went into hospital they put me on the forensic ward ’cause they said, “We don’t think you’ll stay here we think you’ll just
  • run-run away.” So they put me then onto the acute admissions ward and – er – I can remember one of the first things I recall when I got onto that
  • ward was sitting down with a er a Dr XXX. He had a book this thick [gestures] and on each page it was like three questions and he went through
  • all these questions and I answered all these questions. So we’re there for I don’t maybe two hours doing all that and he asked me he said “well
  • when did somebody tell you then that you have schizophrenia” I said “well nobody’s told me that” so he seemed very surprised but nobody had
  • actually [pause] whe-when I first went up there under police escort erm the senior kind of consultants people I’d been to where I was staying and
  • ermm so er [pause] I . . . the, I can remember the very first night that I was there and given this injection in this muscle here [gestures] and just
  • having dreadful side effects the next day I woke up [pause]
  • . . . and I suffered that akathesia I swear to you, every minute of every day for about 20 years.
  • Oh how awful.
  • And that side of it just makes life impossible so the care on the wards [pause] umm I don’t know it’s kind of, it’s kind of hard to put into words
  • [pause]. Because I’m not saying they were sort of like not friendly or interested but then nobody ever seemed to want to talk about your life [pause]
  • nobody asked me any questions about my life. The only questions that came into was they asked me if I’d be a volunteer for these student exams
  • and things and I said “yeah” so all the questions were like “oh what jobs have you done,” er about your relationships and things and er but
  • nobody actually sat down and had a talk and showed some interest in you as a person you were just there basically [pause] um labelled and you
  • know there was there was [pause] but umm [pause] yeah . . .

This article is the 10th in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous articles in this series:

Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm. 2014;67(1):28–30.

Tully MP. Research: articulating questions, generating hypotheses, and choosing study designs. Can J Hosp Pharm. 2014;67(1):31–4.

Loewen P. Ethical issues in pharmacy practice research: an introductory guide. Can J Hosp Pharm. 2014;67(2):133–7.

Tsuyuki RT. Designing pharmacy practice research trials. Can J Hosp Pharm. 2014;67(3):226–9.

Bresee LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm. 2014;67(4):286–91.

Gamble JM. An introduction to the fundamentals of cohort and case–control studies. Can J Hosp Pharm. 2014;67(5):366–72.

Austin Z, Sutton J. Qualitative research: getting started. Can J Hosp Pharm. 2014;67(6):436–40.

Houle S. An introduction to the fundamentals of randomized controlled trials in pharmacy research. Can J Hosp Pharm. 2015;68(1):28–32.

Charrois TL. Systematic reviews: What do you need to know to get started? Can J Hosp Pharm. 2015;68(2):144–8.

Competing interests: None declared.

Further Reading

Examples of Qualitative Research in Pharmacy Practice

  • Farrell B, Pottie K, Woodend K, Yao V, Dolovich L, Kennie N, et al. Shifts in expectations: evaluating physicians’ perceptions as pharmacists integrated into family practice. J Interprof Care. 2010;24(1):80–9.
  • Gregory P, Austin Z. Postgraduation employment experiences of new pharmacists in Ontario in 2012–2013. Can Pharm J. 2014;147(5):290–9.
  • Marks PZ, Jennings B, Farrell B, Kennie-Kaulbach N, Jorgenson D, Pearson-Sharpe J, et al. “I gained a skill and a change in attitude”: a case study describing how an online continuing professional education course for pharmacists supported achievement of its transfer to practice outcomes. Can J Univ Contin Educ. 2014;40(2):1–18.
  • Nair KM, Dolovich L, Brazil K, Raina P. It’s all about relationships: a qualitative study of health researchers’ perspectives on interdisciplinary research. BMC Health Serv Res. 2008;8:110.
  • Pojskic N, MacKeigan L, Boon H, Austin Z. Initial perceptions of key stakeholders in Ontario regarding independent prescriptive authority for pharmacists. Res Soc Adm Pharm. 2014;10(2):341–54.

Qualitative Research in General

  • Breakwell GM, Hammond S, Fife-Schaw C. Research methods in psychology. Thousand Oaks (CA): Sage Publications; 1995.
  • Given LM. 100 questions (and answers) about qualitative research. Thousand Oaks (CA): Sage Publications; 2015.
  • Miles MB, Huberman AM. Qualitative data analysis. Thousand Oaks (CA): Sage Publications; 2009.
  • Patton M. Qualitative research and evaluation methods. Thousand Oaks (CA): Sage Publications; 2002.
  • Willig C. Introducing qualitative research in psychology. Buckingham (UK): Open University Press; 2001.

Group Dynamics in Focus Groups

  • Farnsworth J, Boon B. Analysing group dynamics within the focus group. Qual Res. 2010;10(5):605–24.

Social Constructivism

  • Social constructivism. Berkeley (CA): University of California, Berkeley, Berkeley Graduate Division, Graduate Student Instruction Teaching & Resource Center; [cited 2015 June 4]. Available from: http://gsi.berkeley.edu/gsi-guide-contents/learning-theory-research/social-constructivism/

Mixed Methods

  • Creswell J. Research design: qualitative, quantitative, and mixed methods approaches. Thousand Oaks (CA): Sage Publications; 2009.

Collecting Qualitative Data

  • Arksey H, Knight P. Interviewing for social scientists: an introductory resource with examples. Thousand Oaks (CA): Sage Publications; 1999.
  • Guest G, Namey EE, Mitchell ML. Collecting qualitative data: a field manual for applied research. Thousand Oaks (CA): Sage Publications; 2013.

Constructivist Grounded Theory

  • Charmaz K. Grounded theory: objectivist and constructivist methods. In: Denzin N, Lincoln Y, editors. Handbook of qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 2000. pp. 509–35.


  16. Creating a Data Analysis Plan: What to Consider When Choosing

    The first step in a data analysis plan is to describe the data collected in the study. This can be done using figures to give a visual presentation of the data and statistics to generate numeric descriptions of the data. Selection of an appropriate figure to represent a particular set of data depends on the measurement level of the variable.

  17. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  18. Data Analysis in Qualitative Research: A Brief Guide to Using Nvivo

    Data analysis in qualitative research is defined as the process of systematically searching and arranging the interview transcripts, observation notes, or other non-textual materials that the researcher accumulates to increase the understanding of the phenomenon.7 The process of analysing qualitative data predominantly involves coding or ...

  19. Basic statistical tools in research and data analysis

    Abstract. Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretation and reporting of the research findings. The statistical analysis gives meaning to the meaningless numbers, thereby breathing life into a lifeless data. The results and inferences are precise ...

  20. Data analysis strategies for qualitative research

    Data analysis strategies for qualitative research. Michelle Byrne RN, MS, PhD, CNOR, Michelle Byrne RN, MS, PhD, CNOR. Nursing Research Committee. Search for more papers by this author. Michelle Byrne RN, MS, PhD, CNOR, Michelle Byrne RN, MS, PhD, CNOR. Nursing Research Committee. Search for more papers by this author. First published: 01 ...

  21. Strategic Analysis in the Public Sector using Semantic Web Technologies

    This research investigates the application of ontologies in the field of public sector strategy management to enhance the capacity of organizations to make informed data-driven decisions, efficiently allocate resources, and effectively navigate the intricate landscape of the public sector. ... Following the strategy analysis, recommendations ...

  22. A Longitudinal Analysis of Relations from Motivation to Self ...

    This research aims to analyze the relations from motivation to self-regulatory strategy on academic achievement in high school among academically higher-achieving students. Methods in autoregressive cross-lagged modeling by Mplus8.5 are used to evaluate 309 high school students with higher achievement in language or mathematics from the Korean Education Longitudinal Study 2013 (KELS 2013 ...

  23. Case Study Methodology of Qualitative Research: Key Attributes and

    A case study is one of the most commonly used methodologies of social research. This article attempts to look into the various dimensions of a case study research strategy, the different epistemological strands which determine the particular case study type and approach adopted in the field, discusses the factors which can enhance the effectiveness of a case study research, and the debate ...

  24. PDF Naval Science and Technology Strategy

    Funded Research and Development Centers and University Affiliated Research Centers (UARCs). By teaming with the NPS Innovation Center and the school's faculty, staff and students, we can innovate faster. NPS S&T students' theses focus on relevant naval problems with faculty immersed in naval culture—we will reinforce both. Midshipmen at USNA

  25. How to use and assess qualitative research methods

    Abstract. This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions ...

  26. Frontiers

    Therefore, the current study adopted a mixed method of sequential explanatory design to identify the types of rhetorical strategies in the monologue verbal humour of Chinese and English talk shows, examine their similarities and differences. 200 monologue samples from 2016 to 2022, which consisted of 100 monologues of Chinese talk shows (CTS ...

  27. Mapping ethical issues in the use of smart home health technologies to

    Our analysis is useful to promote careful ethical consideration when carrying out technology development, research and deployment to care for older persons. We registered our systematic review in the PROSPERO network under CRD42021248543. ... Creation of the search strategy and data extraction was a joint effort of NAF and AT. FP and TW ...

  28. 4 Reasons Why Managers Fail

    Further analysis found that 48% of managers are at risk of failure based on two criteria: 1) inconsistency in current performance and 2) lack of confidence in the manager's ability to lead the ...

  29. Qualitative Research: Data Collection, Analysis, and Management

    INTRODUCTION. In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area.

  30. Sustainability

    The environmental, social and governance (ESG) performance of construction enterprises still needs to be improved. Therefore, in order to better utilize resources effectively to improve enterprise ESG performance, this paper explores the configuration paths for Chinese construction enterprises to improve their ESG performance using the (fuzzy set qualitative comparative analysis) fsQCA method.