Grad Coach

Qualitative Data Coding 101

How to code qualitative data, the smart way (with examples).

By: Jenna Crosley (PhD) | Reviewed by: Dr Eunice Rautenbach | December 2020

As we’ve discussed previously, qualitative research makes use of non-numerical data – for example, words, phrases or even images and video. To analyse this kind of data, the first dragon you’ll need to slay is qualitative data coding (or just “coding” if you want to sound cool). But what exactly is coding and how do you do it?

Overview: Qualitative Data Coding

In this post, we’ll explain qualitative data coding in simple terms. Specifically, we’ll dig into:

  • What exactly qualitative data coding is
  • What different types of coding exist
  • How to code qualitative data (the process)
  • Moving from coding to qualitative analysis
  • Tips and tricks for quality data coding

Qualitative Data Coding: The Basics

What is qualitative data coding?

Let’s start by understanding what a code is. At the simplest level,  a code is a label that describes the content  of a piece of text. For example, in the sentence:

“Pigeons attacked me and stole my sandwich.”

You could use “pigeons” as a code. This code simply describes that the sentence involves pigeons.

So, building on this, qualitative data coding is the process of creating and assigning codes to categorise data extracts. You’ll then use these codes later down the road to derive themes and patterns for your qualitative analysis (for example, thematic analysis). Coding and analysis can take place simultaneously, but it’s important to note that coding does not necessarily involve identifying themes (depending on which textbook you’re reading, of course). Instead, it generally refers to the process of labelling and grouping similar types of data to make generating themes and analysing the data more manageable.

Make sense? Great. But why should you bother with coding at all? Why not just look for themes from the outset? Well, coding is a way of making sure your data is valid. It helps ensure that your analysis is undertaken systematically and that other researchers can review it (in the world of research, we call this transparency). In other words, good coding is the foundation of high-quality analysis.


What are the different types of coding?

Now that we’ve got a plain-language definition of coding on the table, the next step is to understand what overarching types of coding exist – in other words, coding approaches . Let’s start with the two main approaches, inductive and deductive .

With deductive coding, you, as the researcher, begin with a set of pre-established codes and apply them to your data set (for example, a set of interview transcripts). Inductive coding, on the other hand, works in reverse: you create the set of codes based on the data itself – in other words, the codes emerge from the data. Let’s take a closer look at both.

Deductive coding 101

With deductive coding, we make use of pre-established codes, which are developed before you interact with the present data. This usually involves drawing up a set of  codes based on a research question or previous research . You could also use a code set from the codebook of a previous study.

For example, if you were studying the eating habits of college students, you might have a research question along the lines of 

“What foods do college students eat the most?”

As a result of this research question, you might develop a code set that includes codes such as “sushi”, “pizza”, and “burgers”.  

Deductive coding allows you to approach your analysis with a very tightly focused lens and quickly identify relevant data . Of course, the downside is that you could miss out on some very valuable insights as a result of this tight, predetermined focus. 
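To make the deductive workflow concrete, here’s a minimal Python sketch of applying a pre-established code set to responses. The code set, function name and sample responses are all hypothetical illustrations, not part of any real coding tool:

```python
# Hypothetical sketch: applying a pre-established (deductive) code set
# to survey responses about college students' eating habits.

CODE_SET = ["sushi", "pizza", "burgers"]  # a priori codes from the research question

def deductive_code(response, code_set):
    """Return the codes from the pre-defined set that appear in a response."""
    text = response.lower()
    return [code for code in code_set if code in text]

responses = [
    "I usually grab pizza or burgers between lectures.",
    "Sushi on Fridays, always.",
]

# map each response to the a priori codes it matches
coded = {r: deductive_code(r, CODE_SET) for r in responses}
```

Note how anything outside the predetermined code set (say, a response about salads) would simply go uncoded – exactly the blind spot described above.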


Inductive coding 101 

But what about inductive coding? As we touched on earlier, this type of coding involves jumping right into the data and then developing the codes based on what you find within the data.

For example, if you were to analyse a set of open-ended interviews , you wouldn’t necessarily know which direction the conversation would flow. If a conversation begins with a discussion of cats, it may go on to include other animals too, and so you’d add these codes as you progress with your analysis. Simply put, with inductive coding, you “go with the flow” of the data.

Inductive coding is great when you’re researching something that isn’t yet well understood because the coding derived from the data helps you explore the subject. Therefore, this type of coding is usually used when researchers want to investigate new ideas or concepts , or when they want to create new theories. 


A little bit of both… hybrid coding approaches

If you’ve got a set of codes you’ve derived from a research topic, literature review or a previous study (i.e. a deductive approach), but you still don’t have a rich enough set to capture the depth of your qualitative data, you can  combine deductive and inductive  methods – this is called a  hybrid  coding approach. 

To adopt a hybrid approach, you’ll begin your analysis with a set of a priori codes (deductive) and then add new codes (inductive) as you work your way through the data. Essentially, the hybrid coding approach provides the best of both worlds, which is why it’s pretty common to see this in research.
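As a rough illustration of the hybrid approach, the sketch below starts from a priori codes and grows the codebook as unanticipated topics appear. Everything here (the starting code, the toy “new topic” detector, the sample extracts) is a hypothetical stand-in for the researcher’s own judgement:

```python
# Hypothetical sketch of a hybrid approach: begin with a priori codes
# (deductive) and add new codes as unanticipated topics appear (inductive).

a_priori = {"cats"}  # deductive starting point

def hybrid_code(extracts, code_set, new_topic_detector):
    codebook = set(code_set)
    assignments = []
    for extract in extracts:
        # apply any existing code that matches the extract
        matched = [c for c in codebook if c in extract.lower()]
        if not matched:
            # inductively create a new code for an unanticipated topic
            new_code = new_topic_detector(extract)
            codebook.add(new_code)
            matched = [new_code]
        assignments.append((extract, matched))
    return codebook, assignments

# naive detector: uses the last word as a stand-in for a real judgement call
detector = lambda text: text.lower().rstrip(".").split()[-1]

codebook, assignments = hybrid_code(
    ["My cats sleep all day.", "I also keep ferrets."], a_priori, detector
)
```

In real research the “detector” is you, the researcher, deciding that a new code is warranted; the point of the sketch is only that the codebook is allowed to grow mid-analysis.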


How to code qualitative data

Now that we’ve looked at the main approaches to coding, the next question you’re probably asking is “how do I actually do it?”. Let’s take a look at the  coding process , step by step.

Both inductive and deductive methods of coding typically occur in two stages: initial coding and line-by-line coding.

In the initial coding stage, the objective is to get a general overview of the data by reading through and understanding it. If you’re using an inductive approach, this is also where you’ll develop an initial set of codes. Then, in the second stage (line-by-line coding), you’ll delve deeper into the data and (re)organise it according to (potentially new) codes.

Step 1 – Initial coding

The first step of the coding process is to identify  the essence  of the text and code it accordingly. While there are various qualitative analysis software packages available, you can just as easily code textual data using Microsoft Word’s “comments” feature. 

Let’s take a look at a practical example of coding. Assume you had the following interview data from two interviewees:

What pets do you have?

I have an alpaca and three dogs.

Only one alpaca? They can die of loneliness if they don’t have a friend.

I didn’t know that! I’ll just have to get five more. 

I have twenty-three bunnies. I initially only had two, I’m not sure what happened. 

In the initial stage of coding, you could assign the code of “pets” or “animals”. These are just initial,  fairly broad codes  that you can (and will) develop and refine later. In the initial stage, broad, rough codes are fine – they’re just a starting point which you will build onto in the second stage. 


How to decide which codes to use

But how exactly do you decide which codes to use when there are many ways to read and interpret any given sentence? Well, there are a few different approaches you can adopt. The main approaches to initial coding include:

  • In vivo coding
  • Process coding
  • Open coding
  • Descriptive coding
  • Structural coding
  • Values coding

Let’s take a look at each of these:

In vivo coding

When you use in vivo coding, you make use of the participants’ own words, rather than your interpretation of the data. In other words, you use direct quotes from participants as your codes. By doing this, you avoid inferring meaning, staying as close as possible to the original words and phrases.

In vivo coding is particularly useful when your data are derived from participants who speak different languages or come from different cultures. In these cases, it’s often difficult to accurately infer meaning due to linguistic or cultural differences. 

For example, English speakers typically view the future as in front of them and the past as behind them. However, this isn’t the same in all cultures. Speakers of Aymara view the past as in front of them and the future as behind them. Why? Because the future is unknown, so it must be out of sight (or behind us). They know what happened in the past, so their perspective is that it’s positioned in front of them, where they can “see” it. 

In a scenario like this one, it’s not possible to derive the reason for viewing the past as in front and the future as behind without knowing the Aymara culture’s perception of time. Therefore, in vivo coding is particularly useful, as it avoids interpretation errors.

Process coding

Next up, there’s process coding, which makes use of action-based codes. Action-based codes are codes that indicate a movement or procedure. These actions are often indicated by gerunds (words ending in “-ing”) – for example, running, jumping or singing.

Process coding is useful as it allows you to code parts of data that aren’t necessarily spoken, but that are still imperative to understanding the meaning of the texts. 

An example here would be if a participant were to say something like, “I have no idea where she is”. A sentence like this can be interpreted in many different ways depending on the context and movements of the participant. The participant could shrug their shoulders, which would indicate that they genuinely don’t know where the girl is; however, they could also wink, showing that they do actually know where the girl is. 

Simply put, process coding is useful as it allows you to, in a concise manner, identify the main occurrences in a set of data and provide a dynamic account of events. For example, you may have action codes such as, “describing a panda”, “singing a song about bananas”, or “arguing with a relative”.


Descriptive coding

Descriptive coding aims to summarise extracts by using a single word or noun that encapsulates the general idea of the data. These words will typically describe the data in a highly condensed manner, which allows the researcher to quickly refer to the content.

Descriptive coding is very useful when dealing with data that appear in forms other than traditional text – i.e. video clips, sound recordings or images. For example, a descriptive code could be “food” when coding a video clip that involves a group of people discussing what they ate throughout the day, or “cooking” when coding an image showing the steps of a recipe. 

Structural coding

Structural coding involves labelling and describing specific structural attributes of the data. Generally, it includes coding according to answers to the questions of “who”, “what”, “where”, and “how”, rather than the actual topics expressed in the data. This type of coding is useful when you want to access segments of data quickly, and it can help tremendously when you’re dealing with large data sets.

For example, if you were coding a collection of theses or dissertations (which would be quite a large data set), structural coding could be useful as you could code according to different sections within each of these documents – i.e. according to the standard  dissertation structure . What-centric labels such as “hypothesis”, “literature review”, and “methodology” would help you to efficiently refer to sections and navigate without having to work through sections of data all over again. 

Structural coding is also useful for data from open-ended surveys. These data may initially be difficult to code, as they lack the set structure of other forms of data (such as an interview with a strict set of questions to be answered). In this case, it would be useful to code sections of data that answer certain questions, such as “who?”, “what?”, “where?” and “how?”.

Let’s take a look at a practical example. If we were to send out a survey asking people about their dogs, we may end up with a (highly condensed) response such as the following: 

Bella is my best friend. When I’m at home I like to sit on the floor with her and roll her ball across the carpet for her to fetch and bring back to me. I love my dog.

In this set, we could code  Bella  as “who”,  dog  as “what”,  home  and  floor  as “where”, and  roll her ball  as “how”. 

Values coding

Finally, values coding involves coding that relates to the participant’s worldviews. Typically, this type of coding focuses on excerpts that reflect the values, attitudes, and beliefs of the participants. Values coding is therefore very useful for research exploring cultural values and intrapersonal experiences and actions.

To recap, the aim of initial coding is to understand and  familiarise yourself with your data , to  develop an initial code set  (if you’re taking an inductive approach) and to take the first shot at  coding your data . The coding approaches above allow you to arrange your data so that it’s easier to navigate during the next stage, line by line coding (we’ll get to this soon). 

While these approaches can all be used individually, it’s important to remember that it’s possible, and potentially beneficial, to  combine them . For example, when conducting initial coding with interviews, you could begin by using structural coding to indicate who speaks when. Then, as a next step, you could apply descriptive coding so that you can navigate to, and between, conversation topics easily. 

Step 2 – Line by line coding

Once you’ve got an overall idea of your data, are comfortable navigating it and have applied some initial codes, you can move on to line-by-line coding. Line-by-line coding is pretty much exactly what it sounds like – reviewing your data, line by line, digging deeper and assigning additional codes to each line.

With line-by-line coding, the objective is to pay close attention to your data to  add detail  to your codes. For example, if you have a discussion of beverages and you previously just coded this as “beverages”, you could now go deeper and code more specifically, such as “coffee”, “tea”, and “orange juice”. The aim here is to scratch below the surface. This is the time to get detailed and specific so as to capture as much richness from the data as possible. 

In the line-by-line coding process, it’s useful to  code everything  in your data, even if you don’t think you’re going to use it (you may just end up needing it!). As you go through this process, your coding will become more thorough and detailed, and you’ll have a much better understanding of your data as a result of this, which will be incredibly valuable in the analysis phase.
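The beverages example above can be sketched as a second, finer-grained pass over a line of data. The specific-code set and the fallback behaviour are hypothetical illustrations:

```python
# Hypothetical sketch of line-by-line refinement: a broad initial code
# ("beverages") is split into more specific codes on a second pass.

SPECIFIC = {"coffee", "tea", "orange juice"}

def refine(line):
    """Return the specific beverage codes found in one line of data."""
    found = sorted(b for b in SPECIFIC if b in line.lower())
    return found or ["beverages"]  # keep the broad code if nothing specific matches

codes = refine("I start every day with coffee and orange juice.")
```

The fallback matters: even lines that resist refinement stay coded, in line with the advice to code everything.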


Moving from coding to analysis

Once you’ve completed your initial coding and line-by-line coding, the next step is to start your analysis. Of course, the coding process itself will get you into “analysis mode” and you’ll probably already have some insights and ideas as a result of it, so you should always keep notes of your thoughts as you work through the coding.

When it comes to qualitative data analysis, there are  many different types of analyses  (we discuss some of the  most popular ones here ) and the type of analysis you adopt will depend heavily on your research aims, objectives and questions . Therefore, we’re not going to go down that rabbit hole here, but we’ll cover the important first steps that build the bridge from qualitative data coding to qualitative analysis.

When starting to think about your analysis, it’s useful to  ask yourself  the following questions to get the wheels turning:

  • What actions are shown in the data? 
  • What are the aims of these interactions and excerpts? What are the participants potentially trying to achieve?
  • How do participants interpret what is happening, and how do they speak about it? What does their language reveal?
  • What are the assumptions made by the participants? 
  • What are the participants doing? What is going on? 
  • Why do I want to learn about this? What am I trying to find out? 
  • Why did I include this particular excerpt? What does it represent and how?

The type of qualitative analysis you adopt will depend heavily on your research aims, objectives and research questions.

Code categorisation

Categorisation is simply the process of reviewing everything you’ve coded and then  creating code categories  that can be used to guide your future analysis. In other words, it’s about creating categories for your code set. Let’s take a look at a practical example.

If you were discussing different types of animals, your initial codes may be “dogs”, “llamas”, and “lions”. In the process of categorisation, you could label (categorise) these three animals as “mammals”, whereas you could categorise “flies”, “crickets”, and “beetles” as “insects”. By creating these code categories, you will be making your data more organised, as well as enriching it so that you can see new connections between different groups of codes. 
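The animal example maps naturally onto a category-to-codes structure. A minimal sketch (the category names and lookup helper are illustrative only):

```python
# Hypothetical sketch of code categorisation: grouping existing codes
# under higher-level categories, as in the animals example.

categories = {
    "mammals": ["dogs", "llamas", "lions"],
    "insects": ["flies", "crickets", "beetles"],
}

def category_of(code):
    """Find which category a code was grouped under, if any."""
    for category, codes in categories.items():
        if code in codes:
            return category
    return None  # an uncategorised code signals the category set needs work
```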

Theme identification

From the coding and categorisation processes, you’ll naturally start noticing themes. Therefore, the logical next step is to identify and clearly articulate the themes in your data set. When you determine themes, you’ll take what you’ve learned from the coding and categorisation and group it all together to develop themes. This is the part of the coding process where you’ll try to draw meaning from your data, and start to produce a narrative. The nature of this narrative depends on your research aims and objectives, as well as your research questions (sound familiar?) and the qualitative data analysis method you’ve chosen, so keep these factors front of mind as you scan for themes.


Tips & tricks for quality coding

Before we wrap up, let’s quickly look at some general advice, tips and suggestions to ensure your qualitative data coding is top-notch.

  • Before you begin coding,  plan out the steps  you will take and the coding approach and technique(s) you will follow to avoid inconsistencies. 
  • When adopting deductive coding, it’s useful to  use a codebook  from the start of the coding process. This will keep your work organised and will ensure that you don’t forget any of your codes. 
  • Whether you’re adopting an inductive or deductive approach,  keep track of the meanings  of your codes and remember to revisit these as you go along.
  • Avoid using synonyms  for codes that are similar, if not the same. This will allow you to have a more uniform and accurate coded dataset and will also help you to not get overwhelmed by your data.
  • While coding, make sure that you  remind yourself of your aims  and coding method. This will help you to avoid directional drift, which happens when coding is not kept consistent.
  • If you are working in a team, make sure that everyone has  been trained and understands  how codes need to be assigned. 





Coding Qualitative Data: How to Code Qualitative Research

How many hours have you spent sitting in front of Excel spreadsheets trying to find new insights from customer feedback?

You know that asking open-ended survey questions gives you more actionable insights than asking your customers for just a numerical Net Promoter Score (NPS) . But when you ask open-ended, free-text questions, you end up with hundreds (or even thousands) of free-text responses.

How can you turn all of that text into quantifiable, applicable information about your customers’ needs and expectations? By coding qualitative data.

Keep reading to learn:

  • What coding qualitative data means (and why it’s important)
  • Different methods of coding qualitative data
  • How to manually code qualitative data to find significant themes in your data

What is coding in qualitative research?

Coding is the process of labeling and organizing your qualitative data to identify different themes and the relationships between them.

When coding customer feedback , you assign labels to words or phrases that represent important (and recurring) themes in each response. These labels can be words, phrases, or numbers; we recommend using words or short phrases, since they’re easier to remember, skim, and organize.

Coding qualitative research to find common themes and concepts is part of thematic analysis . Thematic analysis extracts themes from text by analyzing the word and sentence structure.

Within the context of customer feedback, it's important to understand the many different types of qualitative feedback a business can collect, such as open-ended surveys, social media comments, reviews & more.

What is qualitative data analysis?

Qualitative data analysis is the process of examining and interpreting qualitative data to understand what it represents.

Qualitative data is defined as any non-numerical and unstructured data; when looking at customer feedback, qualitative data usually refers to any verbatim or text-based feedback, such as reviews, open-ended responses in surveys, complaints, chat messages, customer interviews, case notes or social media posts.

For example, the NPS metric itself is strictly quantitative, but when you ask customers why they gave you a particular score, you will need qualitative data analysis methods in place to understand the comments that customers leave alongside their numerical responses.

Methods of qualitative data analysis

Thematic analysis:

This refers to the uncovering of themes by analyzing the patterns and relationships in a set of qualitative data. A theme emerges, or is built, when related findings appear to be meaningful and occur multiple times. Thematic analysis can be used by anyone to transform and organize open-ended responses, online reviews and other qualitative data into significant themes.

Content analysis:

This refers to the categorization, tagging and thematic analysis of qualitative data. Essentially content analysis is a quantification of themes, by counting the occurrence of concepts, topics or themes. Content analysis can involve combining the categories in qualitative data with quantitative data, such as behavioral data or demographic data, for deeper insights.

Narrative analysis:

Some qualitative data, such as interviews or field notes may contain a story on how someone experienced something. For example, the process of choosing a product, using it, evaluating its quality and decision to buy or not buy this product next time. The goal of narrative analysis is to turn the individual narratives into data that can be coded. This is then analyzed to understand how events or experiences had an impact on the people involved.

Discourse analysis:

This refers to analysis of what people say in social and cultural context. The goal of discourse analysis is to understand user or customer behavior by uncovering their beliefs, interests and agendas. These are reflected in the way they express their opinions, preferences and experiences. It’s particularly useful when your focus is on building or strengthening a brand , by examining how they use metaphors and rhetorical devices.

Framework analysis:

When performing qualitative data analysis, it is useful to have a framework to organize the buckets of meaning. A taxonomy or code frame (a hierarchical set of themes used in coding qualitative data) is an example of the result. Don't fall into the trap of starting with a framework just to make organizing your data faster. Instead, look at how themes relate to each other by analyzing the data, and consistently check that you can validate those relationships.

Grounded theory:

This method of analysis starts by formulating a theory around a single data case. The theory is therefore “grounded” in actual data. Additional cases can then be examined to see if they are relevant and can add to the original theory.

Why is it important to code qualitative data?

Coding qualitative data makes it easier to interpret customer feedback. Assigning codes to words and phrases in each response helps capture what the response is about which, in turn, helps you better analyze and summarize the results of the entire survey.

Researchers use coding and other qualitative data analysis processes to help them make data-driven decisions based on customer feedback. When you use coding to analyze your customer feedback, you can quantify the common themes in customer language. This makes it easier to accurately interpret and analyze customer satisfaction.

What is thematic coding?

Thematic coding, also called thematic analysis, is a type of qualitative data analysis that finds themes in text by analyzing the meaning of words and sentence structure.

When you use thematic coding to analyze customer feedback for example, you can learn which themes are most frequent in feedback. This helps you understand what drives customer satisfaction in an accurate, actionable way.

To learn more about how Thematic analysis software helps you automate the data coding process, check out this article .

Automated vs. Manual coding of qualitative data

Methods of coding qualitative data fall into three categories: automated coding, manual coding, and a blend of the two.

You can automate the coding of your qualitative data with thematic analysis software . Thematic analysis and qualitative data analysis software use machine learning, artificial intelligence (AI) , and natural language processing (NLP) to code your qualitative data and break text up into themes.

Thematic analysis software is autonomous , which means…

  • You don’t need to set up themes or categories in advance.
  • You don’t need to train the algorithm — it learns on its own.
  • You can easily capture the “unknown unknowns” to identify themes you may not have spotted on your own.

…all of which will save you time (and lots of unnecessary headaches) when analyzing your customer feedback.

Businesses are also seeing the benefit of using thematic analysis software. The capacity to aggregate data sources into a single source of analysis helps to break down data silos, unifying the analysis and insights across departments. This is now being referred to as omnichannel analysis or unified data analytics.

Use Thematic Analysis Software

Try Thematic today to discover why leading companies rely on the platform to automate the coding of qualitative customer feedback at scale. Whether you have tons of customer reviews, support chat or open-ended survey responses, Thematic brings every valuable insight to the surface, while saving you thousands of hours.

Advances in natural language processing & machine learning have made it possible to automate the analysis of qualitative data, in particular content and framework analysis.  The most commonly used software for automated coding of qualitative data is text analytics software such as Thematic .

While manual human analysis is still popular due to its perceived high accuracy, automating most of the analysis is quickly becoming the preferred choice. Manual analysis is prone to bias and doesn't scale to the amount of qualitative data generated today; automated analysis is not only more consistent (and therefore can be more accurate), it also saves a great deal of time, and therefore money.

Our Theme Editor tool ensures you take a reflexive approach, an important step in thematic analysis. The drag-and-drop tool makes it easy to refine, validate, and rename themes as you get more data. By guiding the AI, you can ensure your results are always precise, easy to understand and perfectly aligned with your objectives.

Thematic is the best software for automating the coding of qualitative feedback at scale.

Don't just take it from us. Here's what some of our customers have to say:

"I'm a fan of Thematic's ability to save time and create heroes. It does an excellent job using a single view to break down the verbatims into themes displayed by volume, sentiment and impact on our beacon metric, often but not exclusively NPS. It does a superlative job using GenAI in summarizing a theme or sub-theme down to a single paragraph, making it clear what folks are trying to say." (Peter K, Senior Research Manager)

"Thematic is a very intuitive tool to use. It boasts a robust level of granularity, allowing the user to see the general breadth of verbatim themes, dig into the sub-themes, and further into the sentiment of the open text itself." (Artem C, Senior Manager of Research, LinkedIn)

AI-powered software to transform qualitative data at scale through a thematic and content analysis.

How to manually code qualitative data

For the rest of this post, we’ll focus on manual coding. Different researchers have different processes, but manual coding usually looks something like this:

  • Choose whether you’ll use deductive or inductive coding.
  • Read through your data to get a sense of what it looks like. Assign your first set of codes.
  • Go through your data line-by-line to code as much as possible. Your codes should become more detailed at this step.
  • Categorize your codes and figure out how they fit into your coding frame.
  • Identify which themes come up the most — and act on them.

Let’s break it down a little further…

Deductive coding vs. inductive coding

Before you start qualitative data coding, you need to decide which codes you’ll use.

What is Deductive Coding?

Deductive coding means you start with a predefined set of codes, then assign those codes to the new qualitative data. These codes might come from previous research, or you might already know what themes you’re interested in analyzing. Deductive coding is also called concept-driven coding.

For example, let’s say you’re conducting a survey on customer experience . You want to understand the problems that arise from long call wait times, so you choose to make “wait time” one of your codes before you start looking at the data.

The deductive approach can save time and help guarantee that your areas of interest are coded. But you also need to be careful of bias; when you start with predefined codes, you have a bias as to what the answers will be. Make sure you don’t miss other important themes by focusing too hard on proving your own hypothesis.  

What is Inductive Coding?

Inductive coding , also called open coding, starts from scratch and creates codes based on the qualitative data itself. You don’t have a set codebook; all codes arise directly from the survey responses.

Here’s how inductive coding works:

  • Break your qualitative dataset into smaller samples.
  • Read a sample of the data.
  • Create codes that will cover the sample.
  • Reread the sample and apply the codes.
  • Read a new sample of data, applying the codes you created for the first sample.
  • Note where codes don’t match or where you need additional codes.
  • Create new codes based on the second sample.
  • Go back and recode all responses again.
  • Repeat from step 5 until you’ve coded all of your data.

If you add a new code, split an existing code into two, or change the description of a code, make sure to review how this change will affect the coding of all responses. Otherwise, the same responses at different points in the survey could end up with different codes.

Sounds like a lot of work, right? Inductive coding is an iterative process, which means it takes longer and is more thorough than deductive coding. A major advantage is that it gives you a more complete, unbiased look at the themes throughout your data.
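The iterative loop above can be sketched in code. This is purely a toy illustration, not part of the original guide: the keyword-matching `assign_codes` and the `propose_codes` function are hypothetical stand-ins for a human coder's judgment.

```python
# A toy sketch of the iterative inductive-coding loop described above.
# Everything here is illustrative: in real coding, a human (or an NLP model)
# decides which codes apply and which new codes to create.

def assign_codes(response, codebook):
    """Return the codes whose indicator keywords appear in the response."""
    return {code for code, keywords in codebook.items()
            if any(kw in response.lower() for kw in keywords)}

def propose_codes(response):
    """Hypothetical stand-in for the researcher creating a new code."""
    if "wait" in response.lower():
        return {"wait time": ["wait"]}
    return {"other": [response.lower()]}

def inductive_coding(samples):
    codebook = {}
    # Pass 1: read sample by sample, adding codes where none apply yet
    for sample in samples:
        for response in sample:
            if not assign_codes(response, codebook):
                codebook.update(propose_codes(response))
    # Final pass: recode *all* responses with the finished codebook,
    # so early responses reflect codes created later
    return [{r: assign_codes(r, codebook) for r in sample} for sample in samples]

samples = [["The wait was too long"], ["I waited an hour on hold"]]
coded = inductive_coding(samples)
```

The final recoding pass is the important part: it mirrors step 8 above, where you go back and recode all responses so the whole dataset reflects the finished set of codes.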

Combining inductive and deductive coding

In practice, most researchers use a blend of inductive and deductive approaches to coding.

For example, with Thematic, the AI inductively comes up with themes, while also framing the analysis so that it reflects how business decisions are made . At the end of the analysis, researchers use the Theme Editor to iterate or refine themes. Then, in the next wave of analysis, as new data comes in, the AI starts deductively with the theme taxonomy.

Categorize your codes with coding frames

Once you create your codes, you need to put them into a coding frame. A coding frame represents the organizational structure of the themes in your research. There are two types of coding frames: flat and hierarchical.

Flat Coding Frame

A flat coding frame assigns the same level of specificity and importance to each code. While this might feel like an easier and faster method for manual coding, it can be difficult to organize and navigate the themes and concepts as you create more and more codes. It also makes it hard to figure out which themes are most important, which can slow down decision making.

Hierarchical Coding Frame

Hierarchical frames help you organize codes based on how they relate to one another. For example, you can organize the codes based on your customers’ feelings on a certain topic:

Hierarchical Coding Frame example

In this example:

  • The top-level code describes the topic (customer service)
  • The mid-level code specifies whether the sentiment is positive or negative
  • The third level details the attribute or specific theme associated with the topic

Hierarchical framing supports a larger code frame and lets you organize codes based on organizational structure. It also allows for different levels of granularity in your coding.
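A hierarchical coding frame like the one described above can be represented as a nested mapping of topic, then sentiment, then specific attribute. The labels below are invented for illustration, not taken from any particular study.

```python
# A hierarchical coding frame as a nested dict:
# top-level topic -> mid-level sentiment -> third-level attributes.
coding_frame = {
    "customer service": {
        "positive": ["friendly staff", "fast resolution"],
        "negative": ["long wait time", "unhelpful response"],
    },
    "product": {
        "positive": ["easy to use"],
        "negative": ["short product lifespan"],
    },
}

def full_code_path(attribute, frame):
    """Return (topic, sentiment, attribute) for a third-level code."""
    for topic, sentiments in frame.items():
        for sentiment, attributes in sentiments.items():
            if attribute in attributes:
                return (topic, sentiment, attribute)
    return None

print(full_code_path("long wait time", coding_frame))
# -> ('customer service', 'negative', 'long wait time')
```

Because every attribute carries its full path, you can report results at any level of granularity: roll up to the topic, split by sentiment, or drill into individual attributes.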

Whether hierarchical or flat, your code frames should be flexible. Manually analyzing survey data takes a lot of time and effort; make sure you can use your results in different contexts.

For example, if your survey asks customers about customer service, you might only use codes that capture answers about customer service. Then you realize that the same survey responses have a lot of comments about your company’s products. To learn more about what people say about your products, you may have to code all of the responses from scratch! A flexible coding frame covers different topics and insights, which lets you reuse the results later on.

Tips for manually coding qualitative data

Now that you know the basics of coding your qualitative data, here are some tips on making the most of your qualitative research.

Use a codebook to keep track of your codes

As you code more and more data, it can be hard to remember all of your codes off the top of your head. Tracking your codes in a codebook helps keep you organized throughout the data analysis process. Your codebook can be as simple as an Excel spreadsheet or word processor document. As you code new data, add new codes to your codebook and reorganize categories and themes as needed.

Make sure to track:

  • The label used for each code
  • A description of the concept or theme the code refers to
  • Who originally coded it
  • The date that it was originally coded or updated
  • Any notes on how the code relates to other codes in your analysis
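A codebook with the fields listed above can live in a plain spreadsheet. As a sketch, here is one lightweight way to keep those fields as structured records and dump them to CSV; the sample entry is invented.

```python
# A minimal codebook: one dict per code, with the fields from the list above,
# written out as CSV so it can be opened in Excel or any spreadsheet tool.
import csv
import io

CODEBOOK_FIELDS = ["label", "description", "coder", "date", "notes"]

codebook = [
    {"label": "wait time",
     "description": "Complaints about time spent waiting for support",
     "coder": "JC",
     "date": "2020-12-01",
     "notes": "Often co-occurs with 'customer service / negative'"},
]

buffer = io.StringIO()  # in practice, open("codebook.csv", "w", newline="")
writer = csv.DictWriter(buffer, fieldnames=CODEBOOK_FIELDS)
writer.writeheader()
writer.writerows(codebook)
print(buffer.getvalue())
```

As you code new data, append new rows and update the `date` and `notes` fields so the codebook stays the single source of truth for your codes.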

How to create high-quality codes - 4 tips

1. Cover as many survey responses as possible.

The code should be generic enough to apply to multiple comments, but specific enough to be useful in your analysis. For example, “Product” is a broad code that will cover a variety of responses — but it’s also pretty vague. What about the product? On the other hand, “Product stops working after using it for 3 hours” is very specific and probably won’t apply to many responses. “Poor product quality” or “short product lifespan” might be a happy medium.

2. Avoid commonalities.

Having similar codes is okay as long as they serve different purposes. “Customer service” and “Product” are different enough from one another, while “Customer service” and “Customer support” may have subtle differences but should likely be combined into one code.

3. Capture the positive and the negative.

Try to create codes that contrast with each other to track both the positive and negative elements of a topic separately. For example, “Useful product features” and “Unnecessary product features” would be two different codes to capture two different themes.

4. Reduce data — to a point.

Let’s look at the two extremes: There are as many codes as there are responses, or each code applies to every single response. In both cases, the coding exercise is pointless; you don’t learn anything new about your data or your customers. To make your analysis as useful as possible, try to find a balance between having too many and too few codes.

Group responses based on themes, not words

Make sure to group responses with the same themes under the same code, even if they don’t use the same exact wording. For example, a code such as “cleanliness” could cover responses including words and phrases like:

  • Looked like a dump
  • Could eat off the floor

Having only a few codes and hierarchical framing makes it easier to group different words and phrases under one code. If you have too many codes, especially in a flat frame, your results can become ambiguous and themes can overlap. Manual coding also requires the coder to remember or be able to find all of the relevant codes; the more codes you have, the harder it is to find the ones you need, no matter how organized your codebook is.
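One simple way to group by theme rather than exact wording is to attach a set of indicator phrases to each code, as in the "cleanliness" example above. The phrase lists here are illustrative only; a human coder would still review borderline cases.

```python
# Map each code to indicator phrases, so responses that never use the word
# "clean" can still land under the cleanliness code. Phrases are illustrative.
code_phrases = {
    "cleanliness": ["dump", "eat off the floor", "spotless", "filthy"],
    "wait time": ["waited", "queue", "on hold"],
}

def code_response(response, code_phrases):
    """Return every code whose indicator phrases appear in the response."""
    text = response.lower()
    return {code for code, phrases in code_phrases.items()
            if any(p in text for p in phrases)}

assert code_response("Looked like a dump", code_phrases) == {"cleanliness"}
assert code_response("You could eat off the floor!", code_phrases) == {"cleanliness"}
```

Note that both a strongly negative phrase ("dump") and a strongly positive one ("eat off the floor") map to the same code here; if you follow tip 3 above, you would instead split these into contrasting positive and negative codes.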

Make accuracy a priority

Manually coding qualitative data means that the coder’s cognitive biases can influence the coding process. For each study, make sure you have coding guidelines and training in place to keep coding reliable, consistent, and accurate .

One thing to watch out for is definitional drift, which occurs when the data at the beginning of the data set is coded differently than the material coded later. Check for definitional drift across the entire dataset and keep notes with descriptions of how the codes vary across the results.

If you have multiple coders working on one team, have them check one another’s coding to help eliminate cognitive biases.
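A standard way to quantify how well two coders agree is an inter-rater statistic such as Cohen's kappa. The minimal implementation below is a sketch that assumes each coder assigns exactly one code per response (real coding often allows multiple codes, which needs other measures), and it is undefined when expected agreement equals 1.

```python
# Cohen's kappa: observed agreement corrected for agreement expected by chance.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """codes_a and codes_b are equal-length lists: one code per response."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # chance agreement from each coder's marginal code frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

coder_1 = ["wait", "product", "wait", "service"]
coder_2 = ["wait", "product", "service", "service"]
print(round(cohens_kappa(coder_1, coder_2), 2))
# -> 0.64
```

Values near 1 indicate strong agreement; values near 0 mean the coders agree no more than chance, which is a signal to revisit the codebook definitions together.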

Conclusion: 6 main takeaways for coding qualitative data

Here are 6 final takeaways for manually coding your qualitative data:

  • Coding is the process of labeling and organizing your qualitative data to identify themes. After you code your qualitative data, you can analyze it just like numerical data.
  • Inductive coding (without a predefined code frame) is more difficult, but less prone to bias, than deductive coding.
  • Code frames can be flat (easier and faster to use) or hierarchical (more powerful and organized).
  • Your code frames need to be flexible enough that you can make the most of your results and use them in different contexts.
  • When creating codes, make sure they cover several responses, contrast one another, and strike a balance between too much and too little information.
  • Consistent coding = accuracy. Establish coding procedures and guidelines and keep an eye out for definitional drift in your qualitative data analysis.

Some more detail in our downloadable guide

If you’ve made it this far, you’ll likely be interested in our free guide: Best practices for analyzing open-ended questions.

The guide includes some of the topics covered in this article, and goes into some more niche details.

If your company is looking to automate your qualitative coding process, try Thematic !

If you're looking to trial multiple solutions, check out our free buyer's guide . It covers what to look for when trialing different feedback analytics solutions to ensure you get the depth of insights you need.

Happy coding!

Authored by Alyona Medelyan, PhD – Natural Language Processing & Machine Learning


CEO and Co-Founder

Alyona has a PhD in NLP and Machine Learning. Her peer-reviewed articles have been cited by over 2600 academics. Her love of writing comes from years of PhD research.


University Library, University of Illinois at Urbana-Champaign


Qualitative Data Analysis: Coding

Coding Qualitative Data

Planning your coding strategy.

Coding is a qualitative data analysis strategy in which some aspect of the data is assigned a descriptive label that allows the researcher to identify related content across the data. How you decide to code your data (or whether to code at all) should be driven by your methodology. But there are rarely step-by-step descriptions, and you'll have to make many decisions about how to code for your own project.

Some questions to consider as you decide how to code your data:

What will you code? 

What aspects of your data will you code? If you are not coding all of your available data, how will you decide which elements need to be coded? If you have recordings of interviews, focus groups, or other types of multimedia data, will you create transcripts to analyze and code? Or will you code the media itself (see Farley, Duppong Hurley & Aitken, 2020 on direct coding of audio recordings rather than transcripts)?

Where will your codes come from? 

Depending on your methodology, your coding scheme may come from previous research and be applied to your data (deductive). Or you may try to develop codes entirely from the data, ignoring previous knowledge of the topic under study as much as possible, to develop a scheme grounded in your data (inductive). In practice, however, many approaches will fall somewhere between these two.

How will you apply your codes to your data? 

You may decide to use software to code your qualitative data, to re-purpose other software tools (e.g. Word or spreadsheet software) or work primarily with physical versions of your data. Qualitative software is not strictly necessary, though it does offer some advantages, like: 

  • Codes can be easily re-labeled, merged, or split. You can also choose to apply multiple coding schemes to the same data, which means you can explore multiple ways of understanding the same data. Your analysis, then, is not limited by how often you are able to work with physical data, such as paper transcripts. 
  • Most software programs for QDA include the ability to export and import coding schemes. This means you can re-use a coding scheme from a previous study, or one developed outside of the software, without having to manually create each code. 
  • Some software for QDA includes the ability to directly code image, video, and audio files. This may mean saving time over creating transcripts. Or, your coding may be enhanced by access to the richness of mediated content, compared to transcripts.
  • Using QDA software may also allow you the ability to use auto-coding functions. You may be able to automatically code all of the statements by speaker in a focus group transcript, for example, or identify and code all of the paragraphs that include a specific phrase. 
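Auto-coding functions like the last one described above, "code all of the paragraphs that include a specific phrase," amount to a text search plus a code assignment. As a rough, generic sketch (not a reproduction of any particular QDA product's feature):

```python
# Automatically code every paragraph that contains a given phrase.
import re

def autocode_paragraphs(document, phrase, code):
    """Return (paragraph, codes) pairs; paragraphs containing the phrase
    (case-insensitive) receive the given code."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    coded = []
    for paragraph in document.split("\n\n"):
        codes = [code] if pattern.search(paragraph) else []
        coded.append((paragraph, codes))
    return coded

doc = "Wait times were awful.\n\nThe staff were lovely, though."
results = autocode_paragraphs(doc, "wait times", "wait time")
```

Real QDA tools layer richer matching on top of this idea (regular expressions, named entity recognition, sentiment), but the core operation is the same search-then-tag loop.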

What will be coded? 

Will you deploy a line-by-line coding approach, with smaller codes eventually condensed into larger categories or concepts? Or will you start with codes applied to larger segments of the text, perhaps later reviewing the examples to explore and re-code for differences between the segments? 

How will you explain the coding process? 

  • Regardless of how you approach coding, the process should be clearly communicated when you report your research, though this is not always the case (Deterding & Waters, 2021).
  • Carefully consider the use of phrases like "themes emerged." This phrasing implies that the themes lay passively in the data, waiting for the researcher to pluck them out. This description leaves little room for describing how the researcher "saw" the themes and decided which were relevant to the study. Ryan and Bernard (2003) offer a terrific guide to ways that you might identify themes in the data, using both your own observations as well as manipulations of the data. 

How will you report the results of your coding process? 

How you report your coding process should align with the methodology you've chosen. Your methodology may call for careful and consistent application of a coding scheme, with reports of inter-rater reliability and counts of how often a code appears within the data. Or you may use the codes to help develop a rich description of an experience, without needing to indicate precisely how often the code was applied. 

How will you code collaboratively?

If you are working with another researcher or a team, your coding process requires careful planning and implementation. You will likely need to have regular conversations about your process, particularly if your goal is to develop and consistently apply a coding scheme across your data. 

Coding Features in QDA Software Programs

  • Atlas.ti (Mac)
  • Atlas.ti (Windows)
  • NVivo (Windows)
  • NVivo (Mac)
  • Coding data See how to create and manage codes and apply codes to segments of the data (known as quotations in Atlas.ti).

  • Search and Code Using the search and code feature lets you locate and automatically code data through text search, regular expressions, Named Entity Recognition, and Sentiment Analysis.
  • Focus Group Coding Properly prepared focus group documents can be automatically coded by speaker.
  • Inter-Coder Agreement Coded text, audio, and video documents can be tested for inter-coder agreement. ICA is not available for images or PDF documents.
  • Quotation Reader Once you've coded data, you can view just the data that has been assigned that code.

  • Find Redundant Codings (Mac) This tool identifies "overlapping or embedded" quotations that have the same code, that are the result of manual coding or errors when merging project files.
  • Coding Data in Atlas.ti (Windows) Demonstrates how to create new codes, manage codes, and apply codes to segments of the data (known as quotations in Atlas.ti)
  • Search and Code in Atlas.ti (Windows) You can use a text search, regular expressions, Named Entity Recognition, and Sentiment Analysis to identify and automatically code data in Atlas.ti.
  • Focus Group Coding in Atlas.ti (Windows) Properly prepared focus group transcripts can be automatically coded by speaker.
  • Inter-coder Agreement in Atlas.ti (Windows) Coded text, audio, and video documents can be tested for inter-coder agreement. ICA is not available for images or PDF documents.
  • Quotation Reader in Atlas.ti (Windows) Once you've coded data, you can view and export the quotations that have been assigned that code.
  • Find Redundant Codings in Atlas.ti (Windows) This tool identifies "overlapping or embedded" quotations that have the same code, that are the result of manual coding or errors when merging project files.
  • Coding in NVivo (Windows) This page includes an overview of the coding features in NVivo.
  • Automatic Coding in Documents in NVivo (Windows) You can use paragraph formatting styles or speaker names to automatically code documents.
  • Coding Comparison Query in NVivo (Windows) You can use the coding comparison feature to compare how different users have coded data in NVivo.
  • Review the References in a Node in NVivo (Windows) References are the term that NVivo uses for coded segments of the data. This shows you how to view references related to a code (or any node)
  • Text Search Queries in NVivo (Windows) Text queries let you search for specific text in your data. The results of your query can be saved as a node (a form of auto coding).
  • Coding Query in NVivo (Windows) Use a coding query to display references from your data for a single code or multiples of codes.
  • Code Files and Manage Codes in NVivo (Mac) This page offers an overview of coding features in NVivo. Note that NVivo uses the concept of a node to refer to any structure around which you organize your data. Codes are a type of node, but you may see these terms used interchangeably.
  • Automatic Coding in Datasets in NVivo (Mac) A dataset in NVivo is data that is in rows and columns, as in a spreadsheet. If a column is set to be codable, you can also automatically code the data. This approach could be used for coding open-ended survey data.
  • Text Search Query in NVivo (Mac) Use the text search query to identify relevant text in your data and automatically code references by saving as a node.
  • Review the References in a Node in NVivo (Mac) NVivo uses the term references to refer to data that has been assigned to a code or any node. You can use the reference view to see the data linked to a specific node or combination of nodes.
  • Coding Comparison Query in NVivo (Mac) Use the coding comparison query to calculate a measure of inter-rater reliability when you've worked with multiple coders.

The MAXQDA interface is the same across Mac and Windows devices. 

  • The "Code System" in MAXQDA This section of the manual shows how to create and manage codes in MAXQDA's code system.
  • How to Code with MAXQDA

  • Display Coded Segments in the Document Browser Once you've coded a document within MAXQDA, you can choose which of those codings will appear on the document, as well as choose whether or not the text is highlighted in the color linked to the code.
  • Creative Coding in MAXQDA Use the creative coding feature to explore the relationships between codes in your system. If you develop a new structure for your codes that you like, you can apply the changes to your overall code scheme.
  • Text Search in MAXQDA Use a Text Search to identify data that matches your search terms and automatically code the results. You can choose whether to code only the matching results, the sentence the results are in, or the paragraph the results appear in.
  • Segment Retrieval in MAXQDA Data that has been coded is considered a segment. Segment retrieval is how you display the segments that match a code or combination of codes. You can use the activation feature to show only the segments from a document group, or that match a document variable.
  • Intercoder Agreement in MAXQDA MAXQDA includes the ability to compare coding between two coders on a single project.
  • Create Tags in Taguette Taguette uses the term tag to refer to codes. You can create single tags as well as a tag hierarchy using punctuation marks.
  • Highlighting in Taguette Select text with a document (a highlight) and apply tags to code data in Taguette.

Useful Resources on Coding


Deterding, N. M., & Waters, M. C. (2021). Flexible coding of in-depth interviews: A twenty-first-century approach. Sociological Methods & Research , 50 (2), 708–739. https://doi.org/10.1177/0049124118799377

Farley, J., Duppong Hurley, K., & Aitken, A. A. (2020). Monitoring implementation in program evaluation with direct audio coding. Evaluation and Program Planning , 83 , 101854. https://doi.org/10.1016/j.evalprogplan.2020.101854

Ryan, G. W., & Bernard, H. R. (2003). Techniques to identify themes. Field Methods , 15 (1), 85–109. https://doi.org/10.1177/1525822X02239569. 

  • Last Updated: Apr 5, 2024 2:23 PM
  • URL: https://guides.library.illinois.edu/qualitative

A guide to coding qualitative research data

Last updated: 12 February 2023
Each time you ask open-ended and free-text questions, you'll end up with numerous free-text responses. When your qualitative data piles up, how do you sift through it to determine what customers value? And how do you turn all the gathered texts into quantifiable and actionable information related to your user's expectations and needs?

Qualitative data can offer significant insights into respondents’ attitudes and behavior. But distilling large volumes of text and conversational data into clear, insightful results can be daunting. One way to resolve this is through qualitative research coding.

Streamline data coding

Use global data tagging systems in Dovetail so everyone analyzing research is speaking the same language

  • What is coding in qualitative research?

This is the system of classifying and arranging qualitative data. Coding in qualitative research involves separating a phrase or word and tagging it with a code. The code describes a data group and separates the information into defined categories or themes. Using this system, researchers can find and sort related content.

They can also combine categorized data with other coded data sets for analysis, or analyze it separately. The primary goal of coding qualitative data is to change data into a consistent format in support of research and reporting.

A code can be a phrase or a word that depicts an idea or recurring theme in the data. The code’s label must be intuitive and encapsulate the essence of the researcher's observations or participants' responses. You can generate these codes using two approaches to coding qualitative data: manual coding and automated coding.

  • Why is it important to code qualitative data?

By coding qualitative data, it's easier to identify consistency and scale within a set of individual responses. Assigning codes to phrases and words within feedback helps capture what the feedback entails. That way, you can better analyze and understand the outcome of the entire survey.

Researchers use coding and other qualitative data analysis procedures to make data-driven decisions based on customer responses. Coding customer feedback helps you identify natural themes in the customers’ language, making it easier to interpret and analyze customer satisfaction.

  • How do inductive and deductive approaches to qualitative coding work?

Before you start qualitative research coding, you must decide whether you're starting with predefined code frames within which the data will be sorted (a deductive approach), or whether you'll develop and evolve the codes while reviewing the qualitative data generated by the research (an inductive approach). A combination of both approaches is also possible.

In most instances, a combined approach will be best. For example, researchers will have some predefined codes/themes they expect to find in the data, but will allow for a degree of discovery in the data where new themes and codes come to light.

Inductive coding

This is an exploratory method in which new codes and themes are generated by reviewing the qualitative data: the codes arise from the data itself. It's ideal for investigative research, in which you're devising a new idea, theory, or concept.

Inductive coding is otherwise called open coding. There's no predefined code-frame within inductive coding, as all codes are generated by reviewing the raw qualitative data.

If you're adding a new code, changing a code descriptor, or dividing an existing code in two, ensure you review the wider code frame to determine whether the alteration will impact other feedback codes. Failure to do this may lead to similar responses at various points in the qualitative data generating different codes while containing similar themes or insights.

Inductive coding is more thorough and takes longer than deductive coding, but offers a more unbiased and comprehensive overview of the themes within your data.

Deductive coding

This is a hierarchical approach to coding. In this method, you develop a codebook using your initial code frames. These frames may derive from an ongoing research theory or from your research questions. You then go over the data and sort it into the different codes.

After generating your qualitative data, your codes must be a match for the code frame you began with. Program evaluation research could use this coding approach.

Inductive and deductive approaches

Research studies usually blend both inductive and deductive coding approaches. For instance, you may use a deductive approach for your initial set of code sets, and later use an inductive approach to generate fresh codes and recalibrate them while you review and analyze your data.

  • What are the practical steps for coding qualitative data?

You can code qualitative data in the following ways:

1. Conduct your first-round pass at coding qualitative data

In this step, you review your data and assign codes to different pieces of it. You don't need to generate the perfect codes right away, since you will iterate and evolve them ahead of the second-round coding review.

Let's look at examples of the coding methods you may use in this step.

Open coding : This involves the distilling down of qualitative data into separate, distinct coded elements.

Descriptive coding : In this method, you create a description that encapsulates the data section’s content. Your code name must be a noun or a term that describes what the qualitative data relates to.

Values coding : This technique categorizes qualitative data that relates to the participant's attitudes, beliefs, and values.

Simultaneous coding : You can apply several codes to a single piece of qualitative data using this approach.

Structural coding : In this method, you can classify different parts of your qualitative data based on a predetermined design to perform additional analysis within the design.

In Vivo coding : Use this as the initial code to represent specific phrases or single words generated via a qualitative interview (i.e., specifically what the respondent said).

Process coding : A process of coding which captures action within data.  Usually, this will be in the form of gerunds ending in “ing” (e.g., running, searching, reviewing).
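Several of these first-round methods can coexist on a single excerpt. As a sketch, the record below carries a process code, a values code, and an In Vivo code taken verbatim from the respondent's words; applying more than one code to the same excerpt is exactly what simultaneous coding means. All labels are illustrative.

```python
# One excerpt, several first-round codes attached (simultaneous coding).
excerpt = "I kept searching the help pages but the answers were useless."

coded_excerpt = {
    "text": excerpt,
    "codes": [
        {"type": "process", "label": "searching"},                 # gerund: action in the data
        {"type": "values", "label": "frustration"},                # attitude/feeling
        {"type": "in vivo", "label": "the answers were useless"},  # respondent's own words, verbatim
    ],
}

# Simultaneous coding simply means more than one code on the excerpt
assert len(coded_excerpt["codes"]) > 1
# An In Vivo code must appear verbatim in the excerpt
assert coded_excerpt["codes"][2]["label"] in excerpt
```

Keeping the code `type` alongside the label makes it easy to filter later rounds of analysis, for example pulling out only the In Vivo codes when building a grounded vocabulary.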

2. Arrange your qualitative codes into groups and subcodes

You can start organizing codes into groups once you've completed your initial round of qualitative data coding. There are several ways to arrange these groups. 

You can put together codes related to one another or address the same subjects or broad concepts, under each category. Continue working with these groups and rearranging the codes until you develop a framework that aligns with your analysis.

3. Conduct more rounds of qualitative coding

Conduct more iterations of qualitative data coding to review the codes and groups you've already established. You can change the names and codes, combine codes, and re-group the work you've already done during this phase. 

Your initial attempt at data coding may have been hasty and haphazard; these later rounds focus on re-analyzing, identifying patterns, and moving closer to developing concepts and ideas.

Below are a few techniques for qualitative data coding that are often applied in second-round coding.

Pattern coding: You join similarly classified snippets of data under a single umbrella code to describe a pattern.

Thematic analysis coding: When examining qualitative data, this method helps to identify patterns or themes.

Selective coding/focused coding: You use the codes from your first pass to generate the final, most significant code sets and groups.

Theoretical coding: By classifying and arranging codes, theoretical coding allows you to develop a hypothesis within a theoretical framework. You build a theory from the codes and groups generated from the qualitative data.

Content analysis coding: This starts with an existing theory or framework and uses qualitative data to either support or expand upon it.

Axial coding: Axial coding allows you to link different codes or groups together, looking for connections between the information you discovered in earlier coding iterations.

Longitudinal coding: In this method, you organize and systematize your existing qualitative codes and categories so that you can monitor and measure how they change over time.

Elaborative coding : This involves applying a hypothesis from past research and examining how your present codes and groups relate to it.

4. Integrate codes and groups into your concluding narrative

When you have gone through several rounds of qualitative data coding and applied different forms of coding, use the resulting codes and groups to build your final conclusions. Depending on the goal of your study, the final result could be a set of findings, a theory, or a description.

Start outlining your hypothesis, observations, and story, citing the codes and groups that serve as their foundation. Structure this material to produce your final study results.

  • What are the two methods of coding qualitative data?

You can carry out data coding in two ways: automatic and manual. Manual coding involves reading over each comment and manually assigning labels. You'll need to decide if you're using inductive or deductive coding.

Automatic qualitative data analysis uses a branch of computer science known as Natural Language Processing (NLP) to transform text-based data into a format that computers can comprehend and assess. It's a fast-moving area of artificial intelligence and machine learning that has the potential to change how research and insights are designed and delivered.

Although automatic coding is faster than human coding, manual coding still has an edge: human judgment catches nuance and context that current computational analysis can miss.
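To make the idea of automatic coding concrete, here is a deliberately minimal sketch of its simplest possible form: keyword matching. Real NLP pipelines use far more sophisticated models; this only illustrates the basic idea of mapping text to code labels without a human in the loop. The keyword-to-code table is an invented example.

```python
# Hypothetical keyword-to-code table; real systems learn such mappings
# with NLP models rather than hand-written rules.
KEYWORD_CODES = {
    "pigeon": "pigeons",
    "sandwich": "food theft",
    "refund": "billing issue",
}

def auto_code(text):
    """Assign every code whose keyword appears in the text (case-insensitive)."""
    lowered = text.lower()
    return [code for keyword, code in KEYWORD_CODES.items() if keyword in lowered]

print(auto_code("Pigeons attacked me and stole my sandwich."))
# -> ['pigeons', 'food theft']
```

The limitation is visible even in this toy: a response like "my meal was taken" would match nothing, which is exactly the kind of nuance a human coder catches.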

  • What are the advantages of qualitative research coding?

Here are the benefits of qualitative research coding:

Boosts validity: gives your data structure and organization, so you can be more confident that the conclusions you draw from it are valid

Reduces bias : minimizes interpretation biases by forcing the researcher to undertake a systematic review and analysis of the data 

Represents participants well : ensures your analysis reflects the views and beliefs of your participant pool and prevents you from overrepresenting the views of any individual or group

Fosters transparency : allows for a logical and systematic assessment of your study by other academics

  • What are the challenges of qualitative research coding?

You should consider both theoretical and practical limitations when analyzing and interpreting data. Here are the challenges of qualitative research coding:

Labor-intensive: While you can use software for large-scale text management and recording, data analysis is often verified or completed manually.

Lack of reliability: Qualitative research is often criticized for a lack of transparency and standardization in the coding and analysis process, and for being subject to researcher bias.

Limited generalizability: Detailed information on specific contexts is often gathered using small samples. Even with well-constructed analysis processes, drawing generalizable findings is challenging, as data would need to be gathered far more widely to be genuinely representative of attitudes and beliefs within larger populations.

Subjectivity : It is challenging to reproduce qualitative research due to researcher bias in data analysis and interpretation. When analyzing data, the researchers make personal value judgments about what is relevant and what is not. Thus, different people may interpret the same data differently.

  • What are the tips for coding qualitative data?

Here are some suggestions for optimizing the value of your qualitative research now that you are familiar with the fundamentals of coding qualitative data.

Keep track of your codes using a codebook or code frame

It can be challenging to recall all your codes offhand as you code more and more data. Keeping track of your codes in a codebook or code frame will keep you organized as you analyze the data. An Excel spreadsheet or word processing document might be your codebook's basic format.

Ensure you track:

The label applied to each code and the date it was first used or modified

An explanation of the idea or subject matter that the code relates to

Who the original coder is

Any notes on the relationship between the code and other codes in your analysis

Add new codes to your codebook as you code new data, and rearrange categories and themes as necessary.
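The fields listed above can be sketched as a simple tabular structure. The example below (all labels and names are hypothetical) shows one codebook row written out in spreadsheet-friendly CSV form; in practice an Excel file or word processing document serves the same purpose.

```python
import csv
import io

# One illustrative codebook entry with the fields suggested above.
codebook = [
    {
        "label": "poor product quality",
        "description": "Complaints about defects or breakage",
        "original_coder": "JC",
        "first_coded": "2024-01-15",
        "notes": "related to 'short product lifespan'",
    },
]

# Serialize the codebook as CSV so it can be opened in any spreadsheet tool.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=codebook[0].keys())
writer.writeheader()
writer.writerows(codebook)
print(buffer.getvalue())
```

Adding a new code is then just appending another dictionary with the same fields and re-exporting.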

  • How do you create high-quality codes?

Here are four useful tips to help you create high-quality codes.

1. Cover as many survey responses as possible

The code should be specific enough to be useful for your analysis, while general enough to apply to a variety of comments. For instance, "product" is a code general enough to apply to many responses, but too ambiguous to be useful.

Conversely, a highly specific statement such as "product stops working after using it for 3 hours" is unlikely to apply to many answers. A good compromise might be "poor product quality" or "short product lifespan."

2. Avoid similarities

Having similar codes is acceptable only if they serve different objectives. While "product" and "customer service" differ from each other, "customer support" and "customer service" can be unified into a single code.

3. Take note of the positive and the negative

Establish contrasting codes to track an issue's negative and positive aspects separately. For instance, two codes to identify distinct themes would be "excellent customer service" and "poor customer service."

4. Minimize data—to a point

Try to strike a balance between too many and too few codes to make your analysis as useful as possible.

  • What is the best way to code qualitative data?

Depending on the goal of your research, the procedure of coding qualitative data can vary. But generally, it entails: 

Reading through your data

Assigning codes to selected passages

Carrying out several rounds of coding

Grouping codes into themes

Developing interpretations that result in your final research conclusions 

You can begin by coding snippets of text or data to summarize or characterize them, and then add your interpretative perspective in the second round of coding.

Some techniques are more or less suitable depending on your study's goal; there is no single right or wrong way to code a data set.
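The two-round pattern described above can be sketched in a few lines. Every snippet, code, and theme below is a made-up example; the point is only the shape of the process, first-pass descriptive codes followed by a second pass that maps those codes onto themes.

```python
# First round: assign a descriptive code to each snippet (labels are invented).
first_round = {
    "The app drains my battery fast": "battery drain",
    "Streaming uses too much data": "high data usage",
    "I love the playlists": "playlist praise",
}

# Second round: map each code onto a broader, interpretative theme.
themes = {
    "battery drain": "resource usage",
    "high data usage": "resource usage",
    "playlist praise": "content satisfaction",
}

second_round = {snippet: themes[code] for snippet, code in first_round.items()}
print(second_round["Streaming uses too much data"])  # -> resource usage
```

Notice how two distinct first-round codes collapse into one theme in the second round, which is exactly the kind of pattern-finding later coding rounds are for.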

  • What is an example of a code in qualitative research?

A code is, at its most basic level, a label that describes the content of a piece of text. Take the sentence "Pigeons attacked me and stole my sandwich": you could use "pigeons" as a code for it.

  • Is there coding in qualitative research?

Coding is an essential component of qualitative data analysis. It aims to give structure to free-form data so that it can be studied systematically.



The Oxford Handbook of Qualitative Research


28 Coding and Analysis Strategies

Johnny Saldaña, School of Theatre and Film, Arizona State University

  • Published: 04 August 2014

This chapter provides an overview of selected qualitative data analytic strategies with a particular focus on codes and coding. Preparatory strategies for a qualitative research study and data management are first outlined. Six coding methods are then profiled using comparable interview data: process coding, in vivo coding, descriptive coding, values coding, dramaturgical coding, and versus coding. Strategies for constructing themes and assertions from the data follow. Analytic memo writing is woven throughout the preceding as a method for generating additional analytic insight. Next, display and arts-based strategies are provided, followed by recommended qualitative data analytic software programs and a discussion on verifying the researcher’s analytic findings.

Coding and Analysis Strategies

Anthropologist Clifford Geertz (1983) charmingly mused, “Life is just a bowl of strategies” (p. 25). Strategy, as I use it here, refers to a carefully considered plan or method to achieve a particular goal. The goal in this case is to develop a write-up of your analytic work with the qualitative data you have been given and collected as part of a study. The plans and methods you might employ to achieve that goal are what this article profiles.

Some may perceive strategy as an inappropriate if not colonizing word, suggesting formulaic or regimented approaches to inquiry. I assure you that that is not my intent. My use of strategy is actually dramaturgical in nature: strategies are actions that characters in plays take to overcome obstacles to achieve their objectives. Actors portraying these characters rely on action verbs to generate belief within themselves and to motivate them as they interpret the lines and move appropriately on stage. So what I offer is a qualitative researcher’s array of actions from which to draw to overcome the obstacles to thinking to achieve an analysis of your data. But unlike the pre-scripted text of a play in which the obstacles, strategies, and outcomes have been predetermined by the playwright, your work must be improvisational—acting, reacting, and interacting with data on a moment-by-moment basis to determine what obstacles stand in your way, and thus what strategies you should take to reach your goals.

Another intriguing quote to keep in mind comes from research methodologist Robert E. Stake (1995), who posits, “Good research is not about good methods as much as it is about good thinking” (p. 19). In other words, strategies can take you only so far. You can have a box full of tools, but if you do not know how to use them well or use them creatively, the collection seems rather purposeless. One of the best ways we learn is by doing. So pick up one or more of these strategies (in the form of verbs) and take analytic action with your data. Also keep in mind that these are discussed in the order in which they may typically occur, although humans think cyclically, iteratively, and reverberatively, and each particular research project has its own unique contexts and needs. So be prepared for your mind to jump purposefully and/or idiosyncratically from one strategy to another throughout the study.

QDA (Qualitative Data Analysis) Strategy: To Foresee

To foresee in QDA is to reflect beforehand on what forms of data you will most likely need and collect, which thus informs what types of data analytic strategies you anticipate using.

Analysis, in a way, begins even before you collect data. As you design your research study in your mind and on a word processor page, one strategy is to consider what types of data you may need to help inform and answer your central and related research questions. Interview transcripts, participant observation field notes, documents, artifacts, photographs, video recordings, and so on are not only forms of data but foundations for how you may plan to analyze them. A participant interview, for example, suggests that you will transcribe all or relevant portions of the recording, and use both the transcription and the recording itself as sources for data analysis. Any analytic memos (discussed later) or journal entries you make about your impressions of the interview also become data to analyze. Even the computing software you plan to employ will be relevant to data analysis as it may help or hinder your efforts.

As your research design formulates, compose one to two paragraphs that outline how your QDA may proceed. This will necessitate that you have some background knowledge of the vast array of methods available to you. Thus surveying the literature is vital preparatory work.

QDA Strategy: To Survey

To survey in QDA is to look for and consider the applicability of the QDA literature in your field that may provide useful guidance for your forthcoming data analytic work.

General sources in QDA will provide a good starting point for acquainting you with the data analytic strategies available for the variety of genres in qualitative inquiry (e.g., ethnography, phenomenology, case study, arts-based research, mixed methods). One of the most accessible is Graham R. Gibbs’ (2007) Analysing Qualitative Data, and one of the most richly detailed is Frederick J. Wertz et al.’s (2011) Five Ways of Doing Qualitative Analysis. The author’s core texts for this article came from The Coding Manual for Qualitative Researchers (Saldaña, 2009, 2013) and Fundamentals of Qualitative Research (Saldaña, 2011).

If your study’s methodology or approach is grounded theory, for example, then a survey of methods works by such authors as Barney G. Glaser, Anselm L. Strauss, Juliet Corbin, and, in particular, the prolific Kathy Charmaz (2006) may be expected. But there has been a recent outpouring of additional book publications in grounded theory by Birks & Mills (2011), Bryant & Charmaz (2007), Stern & Porr (2011), plus the legacy of thousands of articles and chapters across many disciplines that have addressed grounded theory in their studies.

Particular fields such as education, psychology, social work, health care, and others also have their own QDA methods literature in the form of texts and journals, plus international conferences and workshops for members of the profession. Most important is to have had some university coursework and/or mentorship in qualitative research to suitably prepare you for the intricacies of QDA. Also acknowledge that the emergent nature of qualitative inquiry may require you to adopt different analytic strategies from what you originally planned.

QDA Strategy: To Collect

To collect in QDA is to receive the data given to you by participants and those data you actively gather to inform your study.

QDA is concurrent with data collection and management. As interviews are transcribed, field notes are fleshed out, and documents are filed, the researcher uses the opportunity to carefully read the corpus and make preliminary notations directly on the data documents by highlighting, bolding, italicizing, or noting in some way any particularly interesting or salient portions. As these data are initially reviewed, the researcher also composes supplemental analytic memos that include first impressions, reminders for follow-up, preliminary connections, and other thinking matters about the phenomena at work.

Some of the most common fieldwork tools you might use to collect data are notepads, pens and pencils, file folders for documents, a laptop or desktop with word processing software (Microsoft Word and Excel are most useful) and internet access, a digital camera, and a voice recorder. Some fieldworkers may even employ a digital video camera to record social action, as long as participant permissions have been secured. But everything originates from the researcher himself or herself. Your senses are immersed in the cultural milieu you study, taking in and holding on to relevant details or “significant trivia,” as I call them. You become a human camera, zooming out to capture the broad landscape of your field site one day, then zooming in on a particularly interesting individual or phenomenon the next. Your analysis is only as good as the data you collect.

Fieldwork can be an overwhelming experience because so many details of social life are happening in front of you. Take a holistic approach to your entrée, but as you become more familiar with the setting and participants, actively focus on things that relate to your research topic and questions. Of course, keep yourself open to the intriguing, surprising, and disturbing (Sunstein & Chiseri-Strater, 2012, p. 115), for these facets enrich your study by making you aware of the unexpected.

QDA Strategy: To Feel

To feel in QDA is to gain deep emotional insight into the social worlds you study and what it means to be human.

Virtually everything we do has an accompanying emotion(s), and feelings are both reactions and stimuli for action. Others’ emotions clue you to their motives, attitudes, values, beliefs, worldviews, identities, and other subjective perceptions and interpretations. Acknowledge that emotional detachment is not possible in field research. Attunement to the emotional experiences of your participants plus sympathetic and empathetic responses to the actions around you are necessary in qualitative endeavors. Your own emotional responses during fieldwork are also data because they document the tacit and visceral. It is important during such analytic reflection to assess why your emotional reactions were as they were. But it is equally important not to let emotions alone steer the course of your study. A proper balance must be found between feelings and facts.

QDA Strategy: To Organize

To organize in QDA is to maintain an orderly repository of data for easy access and analysis.

Even in the smallest of qualitative studies, a large amount of data will be collected across time. Prepare both a hard drive and hard copy folders for digital data and paperwork, and back up all materials for security from loss. I recommend that each data “chunk” (e.g., one interview transcript, one document, one day’s worth of field notes) get its own file, with subfolders specifying the data forms and research study logistics (e.g., interviews, field notes, documents, Institutional Review Board correspondence, calendar).
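For researchers comfortable with a bit of scripting, a folder scheme like the one just described can be generated in a few lines. The sketch below is a minimal Python illustration; the subfolder names are assumptions drawn loosely from the examples above, not a prescribed structure.

```python
from pathlib import Path
import tempfile

# Hypothetical subfolders for a small interview study, one per data form
# or research study logistic. The names are illustrative only.
SUBFOLDERS = [
    "interviews",
    "field_notes",
    "documents",
    "irb_correspondence",
    "calendar",
    "analytic_memos",
]

def build_repository(root: Path) -> list:
    """Create one subfolder per data form and return the created paths."""
    created = []
    for name in SUBFOLDERS:
        folder = root / name
        folder.mkdir(parents=True, exist_ok=True)
        created.append(folder)
    return created

if __name__ == "__main__":
    # Demo in a throwaway temporary directory; point root at a real,
    # backed-up project directory in actual use.
    root = Path(tempfile.mkdtemp(prefix="study_"))
    for folder in build_repository(root):
        print(folder.name)
```

Pointing `root` at a directory that is regularly backed up carries out the advice above about securing all materials from loss.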

For small-scale qualitative studies, I have found it quite useful to maintain one large master file with all participant and field site data copied and combined with the literature review and accompanying researcher analytic memos. This master file is used to cut and paste related passages together, deleting what seems unnecessary as the study proceeds, and eventually transforming the document into the final report itself. Cosmetic devices such as font style, font size, rich text (italicizing, bolding, underlining, etc.), and color can help you distinguish between different data forms and highlight significant passages. For example, descriptive, narrative passages of field notes are logged in regular font. “Quotations, things spoken by participants, are logged in bold font.” Observer’s comments, such as the researcher’s subjective impressions or analytic jottings, are set in italics.

QDA Strategy: To Jot

To jot in QDA is to write occasional, brief notes about your thinking or reminders for follow up.

A jot is a phrase or brief sentence that will literally fit on a standard size “sticky note.” As data are brought and documented together, take some initial time to review their contents and to jot some notes about preliminary patterns, participant quotes that seem quite vivid, anomalies in the data, and so forth.

As you work on a project, keep something to write with, or a voice recorder, with you at all times to capture your fleeting thoughts. You will most likely find yourself thinking about your research even when you are not working exclusively on the project, and a “mental jot” may occur to you as you ruminate on logistical or analytic matters. Get the thought documented in some way for later retrieval and elaboration as an analytic memo.

QDA Strategy: To Prioritize

To prioritize in QDA is to determine which data are most significant in your corpus and which tasks are most necessary.

During fieldwork, massive amounts of data in various forms may be collected, and your mind can easily get overwhelmed by the magnitude of the quantity, its richness, and its management. Decisions will need to be made about the most pertinent of them because they help answer your research questions or emerge as salient pieces of evidence. As a sweeping generalization, approximately one half to two thirds of what you collect may become unnecessary as you proceed toward the more formal stages of QDA.

To prioritize in QDA is also to determine what matters most in your assembly of codes, categories, themes, assertions, and concepts. Return to your research purpose and questions to keep yourself framed on what the focus should be.

QDA Strategy: To Analyze

To analyze in QDA is to observe and discern patterns within data and to construct meanings that seem to capture their essences and essentials.

Just as there are a variety of genres, elements, and styles of qualitative research, so too are there a variety of methods available for QDA. Analytic choices are most often based on what methods will harmonize with your genre selection and conceptual framework, what will generate the most sufficient answers to your research questions, and what will best represent and present the project’s findings.

Analysis can range from the factual to the conceptual to the interpretive. Analysis can also range from a straightforward descriptive account to an emergently constructed grounded theory to an evocatively composed short story. A qualitative research project’s outcomes may range from rigorously achieved, insightful answers to open-ended, evocative questions; from rich descriptive detail to a bullet-pointed list of themes; and from third-person, objective reportage to first-person, emotion-laden poetry. Just as there are multiple destinations in qualitative research, there are multiple pathways and journeys along the way.

Analysis is accelerated as you take cognitive ownership of your data. By reading and rereading the corpus, you gain intimate familiarity with its contents and begin to notice significant details as well as make new insights about their meanings. Patterns, categories, and their interrelationships become more evident the more you know the subtleties of the database.

Since qualitative research’s design, fieldwork, and data collection are most often provisional, emergent, and evolutionary processes, you reflect on and analyze the data as you gather them and proceed through the project. If preplanned methods are not working, you change them to secure the data you need. There is generally a post-fieldwork period when continued reflection and more systematic data analysis occur, concurrent with or followed by additional data collection, if needed, and the more formal write-up of the study, which is in itself an analytic act. Through field note writing, interview transcribing, analytic memo writing, and other documentation processes, you gain cognitive ownership of your data; and the intuitive, tacit, synthesizing capabilities of your brain begin sensing patterns, making connections, and seeing the bigger picture. The purpose and outcome of data analysis is to reveal to others through fresh insights what we have observed and discovered about the human condition. And fortunately, there are heuristics for reorganizing and reflecting on your qualitative data to help you achieve that goal.

QDA Strategy: To Pattern

To pattern in QDA is to detect similarities within and regularities among the data you have collected.

The natural world is filled with patterns because we, as humans, have constructed them as such. Stars in the night sky are not just a random assembly; our ancestors pieced them together to form constellations like the Big Dipper. A collection of flowers growing wild in a field has a pattern, as does an individual flower’s patterns of leaves and petals. Look at the physical objects humans have created and notice how pattern oriented we are in our construction, organization, and decoration. Look around you in your environment and notice how many patterns are evident on your clothing, in a room, and on most objects themselves. Even our sometimes mundane daily and long-term human actions are reproduced patterns in the form of roles, relationships, rules, routines, and rituals.

This human propensity for pattern making follows us into QDA. From the vast array of interview transcripts, field notes, documents, and other forms of data, there is this instinctive, hardwired need to bring order to the collection—not just to reorganize it but to look for and construct patterns out of it. The discernment of patterns is one of the first steps in the data analytic process, and the methods described next are recommended ways to construct them.

QDA Strategy: To Code

To code in QDA is to assign a truncated, symbolic meaning to each datum for purposes of qualitative analysis.

Coding is a heuristic—a method of discovery—for the meanings of individual sections of data. These codes function as a way of patterning, classifying, and later reorganizing the data into emergent categories for further analysis. Different types of codes exist for different types of research genres and qualitative data analytic approaches, but this article will focus on only a few selected methods. First, a definition of a code:

A code in qualitative data analysis is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data. The data can consist of interview transcripts, participant observation fieldnotes, journals, documents, literature, artifacts, photographs, video, websites, e-mail correspondence, and so on. The portion of data to be coded can... range in magnitude from a single word to a full sentence to an entire page of text to a stream of moving images.... Just as a title represents and captures a book or film or poem’s primary content and essence, so does a code represent and capture a datum’s primary content and essence. [Saldaña, 2009, p. 3]

One helpful pre-coding task is to divide long selections of field note or interview transcript data into shorter stanzas. Stanza division “chunks” the corpus into more manageable paragraph-like units for coding assignments and analysis. The transcript sample that follows illustrates one possible way of inserting line breaks in-between self-standing passages of interview text for easier readability.

Process Coding

As a first coding example, the following interview excerpt about an employed, single, lower-middle-class adult male’s spending habits during the difficult economic times in the U.S. during 2008–2012 is coded in the right-hand margin in capital letters. The superscript numbers match the datum unit with its corresponding code. This particular method is called process coding, which uses gerunds (“-ing” words) exclusively to represent action suggested by the data. Processes can consist of observable human actions (e.g., BUYING BARGAINS), mental processes (e.g., THINKING TWICE), and more conceptual ideas (e.g., APPRECIATING WHAT YOU’VE GOT). Notice that the interviewer’s (I) portions are not coded, just the participant’s (P). A code is applied each time the subtopic of the interview shifts—even within a stanza—and the same codes can (and should) be used more than once if the subtopics are similar. The central research question driving this qualitative study is, “In what ways are middle-class Americans influenced and affected by the current [2008–2012] economic recession?”

Different researchers analyzing this same piece of data may develop completely different codes, depending on their lenses and filters. The previous codes are only one person’s interpretation of what is happening in the data, not the definitive list. The process codes have transformed the raw data units into new representations for analysis. A listing of them applied to this interview transcript, in the order they appear, reads:

BUYING BARGAINS

QUESTIONING A PURCHASE

THINKING TWICE

STOCKING UP

REFUSING SACRIFICE

PRIORITIZING

FINDING ALTERNATIVES

LIVING CHEAPLY

NOTICING CHANGES

STAYING INFORMED

MAINTAINING HEALTH

PICKING UP THE TAB

APPRECIATING WHAT YOU’VE GOT

Coding the data is the first step in this particular approach to QDA, and categorization is just one of the next possible steps.
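The datum–code pairing that process coding produces maps naturally onto a simple data structure, which is essentially what qualitative data analytic software stores under the hood. The Python sketch below is a toy illustration: the excerpt texts are invented placeholders standing in for the transcript (not quotations from it), and only a few of the codes above are shown.

```python
from collections import Counter

# Toy process-coded data: each datum (a paraphrased slice of the
# participant's talk; the texts here are invented stand-ins) is paired
# with the gerund-based process code assigned to it.
coded_data = [
    ("talks about two-for-one grocery deals", "BUYING BARGAINS"),
    ("wonders whether a purchase is needed", "QUESTIONING A PURCHASE"),
    ("pauses before buying a non-essential", "THINKING TWICE"),
    ("describes all-you-can-eat restaurant meals", "STOCKING UP"),
    ("refuses to give up cigarettes", "REFUSING SACRIFICE"),
    ("pauses again before an online order", "THINKING TWICE"),
]

# Tally how often each code was applied; codes that recur are early
# candidates for categories and, later, themes.
code_counts = Counter(code for _, code in coded_data)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

A frequency tally like this does not replace interpretation, but it is one quick way to notice which processes repeat across the transcript.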

QDA Strategy: To Categorize

To categorize in QDA is to cluster similar or comparable codes into groups for pattern construction and further analysis.

Humans categorize things in innumerable ways. Think of an average apartment or house’s layout. The rooms of a dwelling have been constructed or categorized by their builders and occupants according to function. A kitchen is designated as an area to store and prepare food and the cooking and dining materials such as pots, pans, and utensils. A bedroom is designated for sleeping, a closet for clothing storage, a bathroom for bodily functions and hygiene, and so on. Each room is like a category in which related and relevant patterns of human action occur. Of course, there are exceptions now and then, such as eating breakfast in bed rather than in a dining area or living in a small studio apartment in which most possessions are contained within one large room (but nonetheless are most often organized and clustered into subcategories according to function and optimal use of space).

The point here is that the patterns of social action we designate into particular categories during QDA are not perfectly bounded. Category construction is our best attempt to cluster the most seemingly alike things into the most seemingly appropriate groups. Categorizing is reorganizing and reordering the vast array of data from a study because it is from these smaller, larger, and meaning-rich units that we can better grasp the particular features of each one and the categories’ possible interrelationships with one another.

One analytic strategy with a list of codes is to classify them into similar clusters. Obviously, the same codes share the same category, but it is also possible that a single code can merit its own group if you feel it is unique enough. After the codes have been classified, a category label is applied to each grouping. Sometimes a code can also double as a category name if you feel it best summarizes the totality of the cluster. Like coding, categorizing is an interpretive act, for there can be different ways of separating and collecting codes that seem to belong together. The cut-and-paste functions of a word processor are most useful for exploring which codes share something in common.

Below is my categorization of the fifteen codes generated from the interview transcript presented earlier. Like the gerunds for process codes, the categories have also been labeled as “-ing” words to connote action. And there was no particular reason why fifteen codes resulted in three categories—there could have been fewer or even more, but this is how the array came together after my reflections on which codes seemed to belong together. The category labels are ways of answering “why” they belong together. For at-a-glance differentiation, I place codes in CAPITAL LETTERS and categories in upper and lower case Bold Font:

Category 1: Thinking Strategically

Category 2: Spending Strategically

Category 3: Living Strategically

APPRECIATING WHAT YOU'VE GOT

Notice that the three category labels share a common word: “strategically.” Where did this word come from? It came from analytic reflection on the original data, the codes, and the process of categorizing the codes and generating their category labels. It was the analyst’s choice based on the interpretation of what primary action was happening. Your categories generated from your coded data do not need to share a common word or phrase, but I find that this technique, when appropriate, helps build a sense of unity to the initial analytic scheme.
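To make the code-to-category clustering concrete as a data structure, here is a small Python sketch that stores the mapping in a dictionary and supports reverse lookup. The particular code assignments below are this sketch's own hypothetical grouping for illustration (only some codes are shown, and their placement is an assumption), not Saldaña's actual clustering.

```python
from typing import Optional

# Hypothetical clustering of several process codes under the three
# category labels. The assignments are illustrative guesses, not the
# chapter's actual table.
categories = {
    "Thinking Strategically": [
        "QUESTIONING A PURCHASE",
        "THINKING TWICE",
    ],
    "Spending Strategically": [
        "BUYING BARGAINS",
        "STOCKING UP",
        "LIVING CHEAPLY",
    ],
    "Living Strategically": [
        "PRIORITIZING",
        "REFUSING SACRIFICE",
        "FINDING ALTERNATIVES",
        "APPRECIATING WHAT YOU'VE GOT",
    ],
}

def category_of(code: str) -> Optional[str]:
    """Reverse lookup: return the category a code was clustered under."""
    for category, codes in categories.items():
        if code in codes:
            return category
    return None  # code not yet categorized

print(category_of("THINKING TWICE"))  # Thinking Strategically
```

Because categorizing is an interpretive act, a structure like this makes it cheap to move a code between clusters and re-examine the groupings, mirroring the cut-and-paste exploration described above.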

The three categories—Thinking Strategically, Spending Strategically, and Living Strategically—are then reflected upon for how they might interact and interplay. This is where the next major facet of data analysis, analytic memos, enters the scheme. But a necessary section on the basic principles of interrelationship and analytic reasoning must precede that discussion.

QDA Strategy: To Interrelate

To interrelate in QDA is to propose connections within, between, and among the constituent elements of analyzed data.

One task of QDA is to explore the ways our patterns and categories interact and interplay. I use these terms to suggest the qualitative equivalent of statistical correlation, but interaction and interplay are much more than a simple relationship. They imply interrelationship. Interaction refers to reverberative connections—for example, how one or more categories might influence and affect the others, how categories operate concurrently, or whether there is some kind of “domino” effect to them. Interplay refers to the structural and processual nature of categories—for example, whether some type of sequential order, hierarchy, or taxonomy exists; whether any overlaps occur; whether there is superordinate and subordinate arrangement; and what types of organizational frameworks or networks might exist among them. The positivist construct of “cause and effect” becomes influences and affects in QDA.

There can even be patterns of patterns and categories of categories if your mind thinks conceptually and abstractly enough. Our minds can intricately connect multiple phenomena but only if the data and their analyses support the constructions. We can speculate about interaction and interplay all we want, but it is only through a more systematic investigation of the data—in other words, good thinking—that we can plausibly establish any possible interrelationships.

QDA Strategy: To Reason

To reason in QDA is to think in ways that lead to causal probabilities, summative findings, and evaluative conclusions.

Unlike quantitative research, with its statistical formulas and established hypothesis-testing protocols, qualitative research has no standardized methods of data analysis. Rest assured, there are recommended guidelines from the field’s scholars and a legacy of analytic strategies from which to draw. But the primary heuristics (or methods of discovery) you apply during a study are deductive , inductive , abductive , and retroductive reasoning. Deduction is what we generally draw and conclude from established facts and evidence. Induction is what we experientially explore and infer to be transferable from the particular to the general, based on an examination of the evidence and an accumulation of knowledge. Abduction is surmising from the evidence that which is most likely, those explanatory hunches based on clues. “Whereas deductive inferences are certain (so long as their premises are true) and inductive inferences are probable, abductive inferences are merely plausible” (Shank, 2008, p. 1). Retroduction is historic reconstruction, working backwards to figure out how the current conditions came to exist.

It is not always necessary to know the names of these four ways of reasoning as you proceed through analysis. In fact, you will more than likely reverberate quickly from one to another depending on the task at hand. But what is important to remember about reasoning is:

to base your conclusions primarily on the participants’ experiences, not just your own

not to take the obvious for granted, as the expected won't always happen. Your hunches can be quite right and, at other times, quite wrong

to examine the evidence carefully and make reasonable inferences

to logically yet imaginatively think about what is going on and how it all comes together.

Futurists and inventors propose three questions when they think about creating new visions for the world: What is possible (induction)? What is plausible (abduction)? What is preferable (deduction)? These same three questions might be posed as you proceed through QDA and particularly through analytic memo writing, which is retroductive reflection on your analytic work thus far.

QDA Strategy: To Memo

To memo in QDA is to reflect in writing on the nuances, inferences, meanings, and transfer of coded and categorized data plus your analytic processes.

Like field note writing, perspectives vary among practitioners as to the methods for documenting the researcher’s analytic insights and subjective experiences. Some advise that such reflections should be included in field notes as relevant to the data. Others advise that a separate researcher’s journal should be maintained for recording these impressions. And still others advise that these thoughts be documented as separate analytic memos. I prescribe the latter as a method because it is generated by and directly connected to the data themselves.

An analytic memo is a “think piece” of reflexive free writing, a narrative that sets in words your interpretations of the data. Coding and categorizing are heuristics to detect some of the possible patterns and interrelationships at work within the corpus, and an analytic memo further articulates your deductive, inductive, abductive, and retroductive thinking processes on what things may mean. Though the metaphor is a bit flawed and limiting, think of codes and their consequent categories as separate jigsaw puzzle pieces, and their integration into an analytic memo as the trial assembly of the complete picture.

What follows is an example of an analytic memo based on the earlier process coded and categorized interview transcript. It is not intended as the final write-up for a publication but as an open-ended reflection on the phenomena and processes suggested by the data and their analysis thus far. As the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the final report. Note how the memo is dated and given a title for future and further categorization, how participant quotes are occasionally included for evidentiary support, and how the category names are bolded and the codes kept in capital letters to show how they integrate or weave into the thinking:

March 18, 2012 EMERGENT CATEGORIES: A STRATEGIC AMALGAM There’s a popular saying now: “Smart is the new rich.” This participant is Thinking Strategically about his spending through such tactics as THINKING TWICE and QUESTIONING A PURCHASE before he decides to invest in a product. There’s a heightened awareness of both immediate trends and forthcoming economic bad news that positively affects his Spending Strategically . However, he seems unaware that there are even more ways of LIVING CHEAPLY by FINDING ALTERNATIVES. He dines at all-you-can-eat restaurants as a way of STOCKING UP on meals, but doesn’t state that he could bring lunch from home to work, possibly saving even more money. One of his “bad habits” is cigarettes, which he refuses to give up; but he doesn’t seem to realize that by quitting smoking he could save even more money, not to mention possible health care costs. He balks at the idea of paying $1.50 for a soft drink, but doesn’t mind paying $6.00–$7.00 for a pack of cigarettes. Penny-wise and pound-foolish. Addictions skew priorities. Living Strategically , for this participant during “scary times,” appears to be a combination of PRIORITIZING those things which cannot be helped, such as pet care and personal dental care; REFUSING SACRIFICE for maintaining personal creature-comforts; and FINDING ALTERNATIVES to high costs and excessive spending. Living Strategically is an amalgam of thinking and action-oriented strategies.

There are several recommended topics for analytic memo writing throughout the qualitative study. Memos are opportunities to reflect on and write about:

how you personally relate to the participants and/or the phenomenon

your study’s research questions

your code choices and their operational definitions

the emergent patterns, categories, themes, assertions, and concepts

the possible networks (links, connections, overlaps, flows) among the codes, patterns, categories, themes, assertions, and concepts

an emergent or related existent theory

any problems with the study

any personal or ethical dilemmas with the study

future directions for the study

the analytic memos generated thus far [labeled “metamemos”]

the final report for the study [adapted from Saldaña, 2013, p. 49]

Since writing is analysis, analytic memos expand on the inferential meanings of the truncated codes and categories as a transitional stage into a more coherent narrative with hopefully rich social insight.

QDA Strategy: To Code—A Different Way

The first example of coding illustrated process coding, a way of exploring general social action among humans. But sometimes a researcher works with an individual case study whose language is unique, or with someone the researcher wishes to honor by maintaining the authenticity of his or her speech in the analysis. These reasons suggest that a more participant-centered form of coding may be more appropriate.

In Vivo Coding

A second frequently applied method of coding is called in vivo coding. The root meaning of “in vivo” is “in that which is alive,” and it refers to a code based on the actual language used by the participant (Strauss, 1987). The words or phrases you select as codes are those that seem to stand out as significant or summative of what is being said.

Using the same transcript of the male participant living in difficult economic times, in vivo codes are listed in the right-hand column. I recommend that in vivo codes be placed in quotation marks as a way of designating that the code is extracted directly from the data record. Note that instead of fifteen codes generated from process coding, the total number of in vivo codes is thirty. This is not to suggest that there should be specific numbers or ranges of codes used for particular methods. In vivo codes, though, tend to be applied more frequently to data. Again, the interviewer’s questions and prompts are not coded, just the participant's responses:

The thirty in vivo codes are then extracted from the transcript and listed in the order they appear to prepare them for analytic action and reflection:

“SKYROCKETED”

“TWO-FOR-ONE”

“THE LITTLE THINGS”

“THINK TWICE”

“ALL-YOU-CAN-EAT”

“CHEAP AND FILLING”

“BAD HABITS”

“DON'T REALLY NEED”

“LIVED KIND OF CHEAP”

“NOT A BIG SPENDER”

“HAVEN'T CHANGED MY HABITS”

“NOT PUTTING AS MUCH INTO SAVINGS”

“SPENDING MORE”

“ANOTHER DING IN MY WALLET”

“HIGH MAINTENANCE”

“COUPLE OF THOUSAND”

“INSURANCE IS JUST WORTHLESS”

“PICK UP THE TAB”

“IT ALL ADDS UP”

“NOT AS BAD OFF”

“SCARY TIMES”

Even though no systematic reorganization or categorization has been conducted with the codes thus far, an analytic memo of first impressions can still be composed:

March 19, 2012 CODE CHOICES: THE EVERYDAY LANGUAGE OF ECONOMICS After eyeballing the in vivo codes list, I noticed that variants of “CHEAP” appear most often. I recall a running joke between me and a friend of mine when we were shopping for sales. We’d say, “We're not ‘cheap,’ we're frugal.” There’s no formal economic or business language in this transcript—no terms such as “recession” or “downsizing”—just the everyday language of one person trying to cope during “SCARY TIMES” with “ANOTHER DING IN MY WALLET.” The participant notes that he’s always “LIVED KIND OF CHEAP” and is “NOT A BIG SPENDER” and, due to his employment, “NOT AS BAD OFF” as others in the country. Yet even with his middle class status, he’s still feeling the monetary pinch, dining at inexpensive “ALL-YOU-CAN-EAT” restaurants and worried about the rising price of peanut butter, observing that he’s “NOT PUTTING AS MUCH INTO SAVINGS” as he used to. Of all the codes, “ANOTHER DING IN MY WALLET” stands out to me, particularly because on the audio recording he sounded bitter and frustrated. It seems that he’s so concerned about “THE LITTLE THINGS” because of high veterinary and dental charges. The only way to cope with a “COUPLE OF THOUSAND” dollars worth of medical expenses is to find ways of trimming the excess in everyday facets of living: “IT ALL ADDS UP.”

Like process coding, in vivo codes could be clustered into similar categories, but another simple data analytic strategy is also possible.
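For researchers comfortable with a scripting language, this clustering step can also be sketched outside of dedicated software. The following Python fragment is a minimal illustration, not a prescribed method; the category assignments shown are hypothetical and would in practice emerge from the analyst’s own reflection on the data:

```python
# A minimal sketch of clustering in vivo codes into researcher-defined
# categories. The category assignments below are hypothetical
# illustrations, not the analysis performed in this chapter.
from collections import defaultdict

# Each in vivo code is paired with a candidate category chosen by the analyst.
coded = [
    ("SKYROCKETED", "ANOTHER DING IN MY WALLET"),
    ("TWO-FOR-ONE", "LIVED KIND OF CHEAP"),
    ("THE LITTLE THINGS", "THE LITTLE THINGS"),
    ("ALL-YOU-CAN-EAT", "LIVED KIND OF CHEAP"),
    ("INSURANCE IS JUST WORTHLESS", "ANOTHER DING IN MY WALLET"),
    ("SCARY TIMES", "SCARY TIMES"),
]

# Group the codes under their candidate categories.
categories = defaultdict(list)
for code, category in coded:
    categories[category].append(code)

for category, codes in categories.items():
    print(f'"{category}": {codes}')
```

The same structure scales to a full codebook, and the printed groupings can then seed category labels for analytic memo writing.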

QDA Strategy: To Outline

To outline in QDA is to hierarchically, processually, and/or temporally assemble such things as codes, categories, themes, assertions, and concepts into a coherent, text-based display.

Traditional outlining formats and content provide not only templates for writing a report but templates for analytic organization. This principle can be found in several CAQDAS (Computer Assisted Qualitative Data Analysis Software) programs through their use of such functions as “hierarchies,” “trees,” and “nodes,” for example. Basic outlining is simply a way of arranging primary, secondary, and sub-secondary items into a patterned display. For example, an organized listing of things in a home might consist of:

Large appliances

    Refrigerator

    Stove-top oven

    Microwave oven

Small appliances

    Coffee maker

Dining room

In QDA, outlining may include descriptive nouns or topics but, depending on the study, it may also involve processes or phenomena in extended passages, such as in vivo codes or themes.

The complexity of what we learn in the field can be overwhelming, and outlining is a way of organizing and ordering that complexity so that it does not become complicated. The cut-and-paste and tab functions of a word processor page enable you to arrange and rearrange the salient items from your preliminary coded analytic work into a more streamlined flow. By no means do I suggest that the intricate messiness of life can always be organized into neatly formatted arrangements, but outlining is an analytic act that stimulates deep reflection on both the interconnectedness and interrelationships of what we study. As an example, here are the thirty in vivo codes generated from the initial transcript analysis, arranged in such a way as to construct five major categories:

“DON’T REALLY NEED”

“HAVEN’T CHANGED MY HABITS”

Now that the codes have been rearranged into an outline format, an analytic memo is composed to expand on the rationale and constructed meanings in progress:

March 19, 2012 NETWORKS: EMERGENT CATEGORIES The five major categories I constructed from the in vivo codes are: “SCARY TIMES,” “PRIORITY,” “ANOTHER DING IN MY WALLET,” “THE LITTLE THINGS,” and “LIVED KIND OF CHEAP.” One of the things that hit me today was that the reason he may be pinching pennies on smaller purchases is that he cannot control the larger ones he has to deal with. Perhaps the only way we can cope with or seem to have some sense of agency over major expenses is to cut back on the smaller ones that we can control. $1,000 for a dental bill? Skip lunch for a few days a week. Insulin medication to buy for a pet? Don’t buy a soft drink from a vending machine. Using this reasoning, let me try to interrelate and weave the categories together as they relate to this particular participant: During these scary economic times, he prioritizes his spending because there seems to be just one ding after another to his wallet. A general lifestyle of living cheaply and keeping an eye out for how to save money on the little things compensates for those major expenses beyond his control.

QDA Strategy: To Code—In Even More Ways

The process and in vivo coding examples thus far have demonstrated only two specific methods of thirty-two documented approaches (Saldaña, 2013). Which one(s) you choose for your analysis depends on such factors as your conceptual framework, the genre of qualitative research for your project, the types of data you collect, and so on. The following sections present a few other approaches available for coding qualitative data that you may find useful as starting points.

Descriptive Coding

Descriptive codes are primarily nouns that simply summarize the topic of a datum. This coding approach is particularly useful when you have different types of data gathered for one study, such as interview transcripts, field notes, documents, and visual materials such as photographs. Descriptive codes not only help categorize but also index the data corpus’ basic contents for further analytic work. An example of an interview portion coded descriptively, taken from the participant living in tough economic times, follows to illustrate how the same data can be coded in multiple ways:

For initial analysis, descriptive codes are clustered into similar categories to detect such patterns as frequency (i.e., categories with the largest number of codes), interrelationship (i.e., categories that seem to connect in some way), and initial work for grounded theory development.
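If the descriptive codes are kept as a simple list, the frequency pattern mentioned above can be tallied in a few lines. In this sketch the code labels are hypothetical stand-ins for a real codebook; only the tallying logic is the point:

```python
# Tally descriptive-code frequencies to find the densest categories.
# The code labels below are hypothetical stand-ins for a real codebook.
from collections import Counter

descriptive_codes = [
    "GROCERIES", "DINING OUT", "PET CARE", "DENTAL CARE",
    "GROCERIES", "ENTERTAINMENT", "GROCERIES", "DINING OUT",
]

# Counter sorts the pattern detection: which topics recur most often?
frequency = Counter(descriptive_codes)
for code, count in frequency.most_common():
    print(f"{code}: {count}")  # most frequent codes first
```

The ranked listing is only a heuristic; a frequent code is not automatically an important one, and the analyst still interprets what the pattern means.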

Values Coding

Values coding identifies the values, attitudes, and beliefs of a participant, as shared by the individual and/or interpreted by the analyst. This coding method infers the “heart and mind” of an individual or group’s worldview as to what is important, perceived as true, maintained as opinion, and felt strongly. The three constructs are coded separately but are part of a complex interconnected system.

Briefly, a value (V) is what we attribute as important, be it a person, thing, or idea. An attitude (A) is the evaluative way we think and feel about ourselves, others, things, or ideas. A belief (B) is what we think and feel as true or necessary, formed from our “personal knowledge, experiences, opinions, prejudices, morals, and other interpretive perceptions of the social world” (Saldaña, 2009, pp. 89–90). Values coding explores intrapersonal, interpersonal, and cultural constructs or ethos. It is an admittedly slippery task to code this way, for it is sometimes difficult to discern what is a value, attitude, or belief because they are intricately interrelated. But the depth you can potentially obtain is rich. An example of values coding follows:

For analysis, categorize the codes for each of the three different constructs together (i.e., all values in one group, attitudes in a second group, and beliefs in a third group). Analytic memo writing about the patterns and possible interrelationships may reveal a more detailed and intricate worldview of the participant.
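Because each values code carries a V, A, or B prefix, sorting the codes into the three construct groups is a mechanical first step. A brief sketch, using hypothetical coded entries:

```python
# Group values codes by their construct prefix (V:, A:, B:) so that all
# values, attitudes, and beliefs can be reviewed together.
# The coded entries below are hypothetical examples.
from collections import defaultdict

values_codes = [
    "V: THRIFT",
    "A: RESENTMENT TOWARD INSURANCE",
    "B: HARD TIMES ARE COMING",
    "V: PET'S HEALTH",
    "A: PRIDE IN FRUGALITY",
]

# Split each entry on its prefix and collect by construct.
constructs = defaultdict(list)
for entry in values_codes:
    prefix, label = entry.split(": ", 1)
    constructs[prefix].append(label)

for prefix in ("V", "A", "B"):
    print(prefix, constructs[prefix])
```

Once grouped, the analyst reads across the three lists for interrelationships, which is where the analytic memo writing described above begins.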

Dramaturgical Coding

Dramaturgical coding perceives life as performance and its participants as characters in a social drama. Codes are assigned to the data (i.e., a “play script”) that analyze the characters in action, reaction, and interaction. Dramaturgical coding of participants examines their objectives (OBJ) or wants, needs, and motives; the conflicts (CON) or obstacles they face as they try to achieve their objectives; the tactics (TAC) or strategies they employ to reach their objectives; their attitudes (ATT) toward others and their given circumstances; the particular emotions (EMO) they experience throughout; and their subtexts (SUB) or underlying and unspoken thoughts. The following is an example of dramaturgically coded data:

Not included in this particular interview excerpt are the emotions the participant may have experienced or talked about. His later line, “that’s another ding in my wallet,” would have been coded EMO: BITTER. A reader may not have inferred that specific emotion from seeing the line in print. But the interviewer, present during the event and listening carefully to the audio recording during transcription, noted that feeling in his tone of voice.

For analysis, group similar codes together (e.g., all objectives in one group, all conflicts in another group, all tactics in a third group), or string together chains of how participants deal with their circumstances to overcome their obstacles through tactics (e.g., OBJ: SAVING MEAL MONEY > TAC: SKIPPING MEALS). Explore how the individuals or groups manage problem solving in their daily lives. Dramaturgical coding is particularly useful as preliminary work for narrative inquiry story development or arts-based research representations such as performance ethnography.

Versus Coding

Versus coding identifies the conflicts, struggles, and power issues observed in social action, reaction, and interaction as an X VS. Y code, such as: MEN VS. WOMEN, CONSERVATIVES VS. LIBERALS, FAITH VS. LOGIC, and so on. Conflicts are rarely this dichotomous. They are typically nuanced and much more complex. But humans tend to perceive these struggles with an US VS. THEM mindset. The codes can range from the observable to the conceptual and can be applied to data that show humans in tension with others, themselves, or ideologies.

What follows are examples of versus codes applied to the case study participant’s descriptions of his major medical expenses:

As an initial analytic tactic, group the versus codes into one of three categories: the Stakeholders , their Perceptions and/or Actions , and the Issues at stake. Examine how the three interrelate and identify the central ideological conflict at work as an X vs. Y category. Analytic memos and the final write-up can detail the nuances of the issues.

Remember that what has been profiled in this section is a broad brushstroke description of just a few basic coding processes, several of which can be compatibly “mixed and matched” within a single analysis (see Saldaña’s [2013] The Coding Manual for Qualitative Researchers for a complete discussion). Certainly with additional data, more in-depth analysis can occur, but coding is only one approach to extracting and constructing preliminary meanings from the data corpus. What now follows are additional methods for qualitative analysis.

QDA Strategy: To Theme

To theme in QDA is to construct summative, phenomenological meanings from data through extended passages of text.

Unlike codes, which are most often single words or short phrases that symbolically represent a datum, themes are extended phrases or sentences that summarize the manifest (apparent) and latent (underlying) meanings of data (Auerbach & Silverstein, 2003; Boyatzis, 1998). Themes, intended to represent the essences and essentials of humans’ lived experiences, can also be categorized or listed in superordinate and subordinate outline formats as an analytic tactic.

Below is the interview transcript example used in the coding sections above. (Hopefully you are not too fatigued at this point with the transcript, but it’s important to know how inquiry with the same data set can be approached in several different ways.) During the investigation of the ways middle-class Americans are influenced and affected by the current (2008–2012) economic recession, the researcher noticed that participants’ stories exhibited facets of what he labeled “economic intelligence” or EI (based on the previously developed theories of Howard Gardner’s multiple intelligences and Daniel Goleman’s emotional intelligence). Notice how themeing interprets what is happening through the use of two distinct phrases—ECONOMIC INTELLIGENCE IS (i.e., manifest or apparent meanings) and ECONOMIC INTELLIGENCE MEANS (i.e., latent or underlying meanings):

Unlike the fifteen process codes and thirty in vivo codes in the previous examples, there are now fourteen themes to work with. In the order they appear, they are:

EI IS TAKING ADVANTAGE OF UNEXPECTED OPPORTUNITY

EI MEANS THINKING BEFORE YOU ACT

EI IS BUYING CHEAP

EI MEANS SACRIFICE

EI IS SAVING A FEW DOLLARS NOW AND THEN

EI MEANS KNOWING YOUR FLAWS

EI IS SETTING PRIORITIES

EI IS FINDING CHEAPER FORMS OF ENTERTAINMENT

EI MEANS LIVING AN INEXPENSIVE LIFESTYLE

EI IS NOTICING PERSONAL AND NATIONAL ECONOMIC TRENDS

EI MEANS YOU CANNOT CONTROL EVERYTHING

EI IS TAKING CARE OF ONE’S OWN HEALTH

EI MEANS KNOWING YOUR LUCK

There are several ways to categorize the themes as preparation for analytic memo writing. The first is to arrange them in outline format with superordinate and subordinate levels, based on how the themes seem to take organizational shape and structure. Simply cutting and pasting the themes in multiple arrangements on a word processor page eventually develops a sense of order to them. For example:

A second approach is to categorize the themes into similar clusters and to develop different category labels or theoretical constructs. A theoretical construct is an abstraction that transforms the central phenomenon’s themes into broader applications but can still use “is” and “means” as prompts to capture the bigger picture at work:

Theoretical Construct 1: EI Means Knowing the Unfortunate Present

Supporting Themes:

Theoretical Construct 2: EI is Cultivating a Small Fortune

Theoretical Construct 3: EI Means a Fortunate Future

What follows is an analytic memo generated from the cut-and-paste arrangement of themes into an outline and into theoretical constructs:

March 19, 2012 EMERGENT THEMES: FORTUNE/FORTUNATELY/UNFORTUNATELY I first reorganized the themes by listing them in two groups: “is” and “means.” The “is” statements seemed to contain positive actions and constructive strategies for economic intelligence. The “means” statements held primarily a sense of caution and restriction with a touch of negativity thrown in. The first outline with two major themes, LIVING AN INEXPENSIVE LIFESTYLE and YOU CANNOT CONTROL EVERYTHING also had this same tone. This reminded me of the old children’s picture book, Fortunately/Unfortunately , and the themes of “fortune” as a motif for the three theoretical constructs came to mind. Knowing the Unfortunate Present means knowing what’s (most) important and what’s (mostly) uncontrollable in one’s personal economic life. Cultivating a Small Fortune consists of those small money-saving actions that, over time, become part of one's lifestyle. A Fortunate Future consists of heightened awareness of trends and opportunities at micro and macro levels, with the understanding that health matters can idiosyncratically affect one’s fortune. These three constructs comprise this particular individual’s EI—economic intelligence.

Again, keep in mind that the examples above for coding and themeing were from one small interview transcript excerpt. The number of codes and their categorization would obviously increase, given a longer interview and/or multiple interviews to analyze. But the same basic principles apply: codes and themes relegated into patterned and categorized forms are heuristics—stimuli for good thinking through the analytic memo-writing process on how everything plausibly interrelates. Methodologists vary in the number of recommended final categories that result from analysis, ranging anywhere from three to seven, with traditional grounded theorists prescribing one central or core category from coded work.

QDA Strategy: To Assert

To assert in QDA is to put forward statements that summarize particular fieldwork and analytic observations that the researcher believes credibly represent and transcend the experiences.

Educational anthropologist Frederick Erickson (1986) wrote a significant and influential chapter on qualitative methods that outlined heuristics for assertion development . Assertions are declarative statements of summative synthesis, supported by confirming evidence from the data, and revised when disconfirming evidence or discrepant cases require modification of the assertions. These summative statements are generated from an interpretive review of the data corpus and then supported and illustrated through narrative vignettes—reconstructed stories from field notes, interview transcripts, or other data sources that provide a vivid profile as part of the evidentiary warrant.

Coding or themeing data can certainly precede assertion development as a way of gaining intimate familiarity with the data, but Erickson’s methods are an admittedly more intuitive yet still systematic heuristic for analysis. Erickson promotes analytic induction and exploration of and inferences about the data, based on an examination of the evidence and an accumulation of knowledge. The goal is not to look for “proof” to support the assertions but plausibility of inference-laden observations about the local and particular social world under investigation.

Assertion development is the writing of general statements, plus subordinate yet related ones called subassertions , and a major statement called a key assertion that represents the totality of the data. One also looks for key linkages between them, meaning that the key assertion links to its related assertions, which then link to their respective subassertions. Subassertions can include particulars about any discrepant related cases or specify components of their parent assertions.

Excerpts from the interview transcript of our case study will be used to illustrate assertion development at work. By now, you should be quite familiar with the contents, so I will proceed directly to the analytic example. First, there is a series of thematically related statements the participant makes:

“Buy one package of chicken, get the second one free. Now that was a bargain. And I got some.”

“With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.”

“I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

Assertions can be categorized into low-level and high-level inferences . Low-level inferences address and summarize “what is happening” within the particulars of the case or field site—the “micro.” High-level inferences extend beyond the particulars to speculate on “what it means” in the more general social scheme of things—the “meso” or “macro.” A reasonable low-level assertion about the three statements above collectively might read: The participant finds several small ways to save money during a difficult economic period . A high-level inference that transcends the case to the macro level might read: Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending.

Assertions are instantiated (i.e., supported) by concrete instances of action or participant testimony, whose patterns lead to more general description outside the specific field site. The author’s interpretive commentary can be interspersed throughout the report, but the assertions should be supported with the evidentiary warrant . A few assertions and subassertions based on the case interview transcript might read (and notice how high-level assertions serve as the paragraphs’ topic sentences):

Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending. Restaurants, for example, need to find ways during difficult economic periods when potential customers may be opting to eat inexpensively at home rather than spending more money by dining out. Special offers can motivate cash-strapped clientele to patronize restaurants more frequently. An adult male dealing with such major expenses as underinsured dental care offers: “With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.” The film and video industries also seem to be suffering from a double-whammy during the current recession: less consumer spending on higher-priced entertainment, resulting in a reduced rate of movie theatre attendance (currently 39 percent of the American population, according to CNN); coupled with a media technology and business revolution that provides consumers less costly alternatives through video rentals and internet viewing: “I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

“Particularizability”—the search for specific and unique dimensions of action at a site and/or the specific and unique perspectives of an individual participant—is not intended to filter out trivial excess but to magnify the salient characteristics of local meaning. Although generalizable knowledge serves little purpose in qualitative inquiry since each naturalistic setting will contain its own unique set of social and cultural conditions, there will be some aspects of social action that are plausibly universal or “generic” across settings and perhaps even across time. To work toward this, Erickson advocates that the interpretive researcher look for “concrete universals” by studying actions at a particular site in detail, then comparing those to other sites that have also been studied in detail. The exhibit or display of these generalizable features is to provide a synoptic representation, or a view of the whole. What the researcher attempts to uncover is what is both particular and general at the site of interest, preferably from the perspective of the participants. It is from the detailed analysis of actions at a specific site that these universals can be concretely discerned, rather than abstractly constructed as in grounded theory.

In sum, assertion development is a qualitative data analytic strategy that relies on the researcher’s intense review of interview transcripts, field notes, documents, and other data to inductively formulate composite statements that credibly summarize and interpret participant actions and meanings, and their possible representation of and transfer into broader social contexts and issues.

QDA Strategy: To Display

To display in QDA is to visually present the processes and dynamics of human or conceptual action represented in the data.

Qualitative researchers use not only language but illustrations to both analyze and display the phenomena and processes at work in the data. Tables, charts, matrices, flow diagrams, and other models help both you and your readers cognitively and conceptually grasp the essence and essentials of your findings. As you have seen thus far, even simple outlining of codes, categories, and themes is one visual tactic for organizing the scope of the data. Rich text, font, and format features such as italicizing, bolding, capitalizing, indenting, and bullet pointing provide simple emphasis to selected words and phrases within the longer narrative.

“Think display” was a phrase coined by methodologists Miles and Huberman (1994) to encourage the researcher to think visually as data were collected and analyzed. The magnitude of text can be essentialized into graphics for “at-a-glance” review. Bins in various shapes and lines of various thicknesses, along with arrows suggesting pathways and direction, render the study as a portrait of action. Bins can include the names of codes, categories, concepts, processes, key participants, and/or groups.

As a simple example, Figure 28.1 illustrates the three categories’ interrelationship derived from process coding. It displays what could be the apex of this interaction, LIVING STRATEGICALLY, and its connections to THINKING STRATEGICALLY, which influences and affects SPENDING STRATEGICALLY.

Figure 28.2 represents a slightly more complex (if not playful) model, based on the five major in vivo codes/categories generated from analysis. The graphic is used as a way of initially exploring the interrelationship and flow from one category to another. The use of different font styles, font sizes, and line and arrow thicknesses are intended to suggest the visual qualities of the participant’s language and his dilemmas—a way of heightening in vivo coding even further.

Accompanying graphics are not always necessary for a qualitative report. They can be very helpful for the researcher during the analytic stage as a heuristic for exploring how major ideas interrelate, but illustrations are generally included in published work when they will help supplement and clarify complex processes for readers. Photographs of the field setting or the participants (and only with their written permission) also provide evidentiary reality to the write-up and help your readers get a sense of being there.

QDA Strategy: To Narrate

To narrate in QDA is to create an evocative literary representation and presentation of the data in the form of creative nonfiction.

All research reports are stories of one kind or another. But there is yet another approach to QDA that intentionally documents the research experience as story, in its traditional literary sense. Narrative inquiry plots and story lines the participant’s experiences into what might be initially perceived as a fictional short story or novel. But the story is carefully crafted and creatively written to provide readers with an almost omniscient perspective about the participants’ worldview. The transformation of the corpus from database to creative nonfiction ranges from systematic transcript analysis to open-ended literary composition. The narrative, though, should be solidly grounded in and emerge from the data as a plausible rendering of social life.

Figure 28.1 A simple illustration of category interrelationship.

Figure 28.2 An illustration with rich text and artistic features.

The following is a narrative vignette based on interview transcript selections from the participant living through tough economic times:

Jack stood in front of the soft drink vending machine at work and looked almost worriedly at the selections. With both hands in his pants pockets, his fingers jingled the few coins he had inside them as he contemplated whether he could afford the purchase. One dollar and fifty cents for a twenty-ounce bottle of Diet Coke. One dollar and fifty cents. “I can practically get a two-liter bottle for that same price at the grocery store,” he thought. Then Jack remembered the upcoming dental surgery he needed—that would cost one thousand dollars—and the bottle of insulin and syringes he needed to buy for his diabetic, “high maintenance” cat—about one hundred and twenty dollars. He sighed, took his hands out of his pockets, and walked away from the vending machine. He was skipping lunch that day anyway so he could stock up on dinner later at the cheap-but-filling-all-you-can-eat Chinese buffet. He could get his Diet Coke there.

Narrative inquiry representations, like literature, vary in tone, style, and point of view. The common goal, however, is to create an evocative portrait of participants through the aesthetic power of literary form. A story does not always have to have a moral explicitly stated by its author. The reader reflects on personal meanings derived from the piece and how the specific tale relates to one’s self and the social world.

QDA Strategy: To Poeticize

To poeticize in QDA is to create an evocative literary representation and presentation of the data in the form of poetry.

One form for analyzing or documenting analytic findings is to strategically truncate interview transcripts, field notes, and other pertinent data into poetic structures. Like coding, poetic constructions capture the essence and essentials of data in a creative, evocative way. The elegance of the format attests to the power of carefully chosen language to represent and convey complex human experience.

In vivo codes (codes based on the actual words used by participants themselves) can provide imagery, symbols, and metaphors for rich category, theme, concept, and assertion development, plus evocative content for arts-based interpretations of the data. Poetic inquiry takes note of what words and phrases seem to stand out from the data corpus as rich material for reinterpretation. Using some of the participant’s own language from the interview transcript illustrated above, a poetic reconstruction or “found poetry” might read:

Scary Times

Scary times...
spending more
(another ding in my wallet)
a couple of thousand
(another ding in my wallet)
insurance is just worthless
(another ding in my wallet)
pick up the tab
(another ding in my wallet)
not putting as much into savings
(another ding in my wallet)
It all adds up.

Think twice:
don't really need
skip

Think twice, think cheap:
coupons
bargains
two-for-one
free

Think twice, think cheaper:
stock up
all-you-can-eat
(cheap—and filling)

It all adds up.

Anna Deavere Smith, a verbatim theatre performer, attests that people speak in forms of “organic poetry” in everyday life. Thus in vivo codes can provide core material for poetic representation and presentation of lived experiences, potentially transforming the routine and mundane into the epic. Some researchers also find the genre of poetry to be the most effective way to compose original work that reflects their own fieldwork experiences and autoethnographic stories.

QDA Strategy: To Compute

To compute in QDA is to employ specialized software programs for qualitative data management and analysis.

CAQDAS is an acronym for Computer Assisted Qualitative Data Analysis Software. Practitioners in the field hold diverse opinions about the utility of such specialized programs for qualitative data management and analysis. Unlike statistical software, CAQDAS does not actually analyze data for you at higher conceptual levels. These packages serve primarily as a repository for your data (both textual and visual) that enables you to code them, and they can perform such functions as calculating the number of times a particular word or phrase appears in the data corpus (a particularly useful function for content analysis) and displaying selected facets after coding, such as possible interrelationships. Certainly, basic office software such as Microsoft Word, Excel, and Access provides utilities that can store and, with some pre-formatting and strategic entry, organize qualitative data to enable the researcher’s analytic review. The following internet addresses are listed to help in exploring these CAQDAS packages and obtaining demonstration/trial software and tutorials:

AnSWR: www.cdc.gov/hiv/topics/surveillance/resources/software/answr

ATLAS.ti: www.atlasti.com

Coding Analysis Toolkit (CAT): cat.ucsur.pitt.edu/

Dedoose: www.dedoose.com

HyperRESEARCH: www.researchware.com

MAXQDA: www.maxqda.com

NVivo: www.qsrinternational.com

QDA Miner: www.provalisresearch.com

Qualrus: www.qualrus.com

Transana (for audio and video data materials): www.transana.org

Weft QDA: www.pressure.to/qda/

Some qualitative researchers attest that the software is indispensable for qualitative data management, especially for large-scale studies. Others feel that the learning curve of CAQDAS is too steep to be of pragmatic value, especially for small-scale studies. From my own experience, if you have an aptitude for picking up quickly on the scripts of software programs, explore one or more of the packages listed. If you are a novice to qualitative research, though, I recommend working manually or “by hand” for your first project so you can focus exclusively on the data and not on the software.
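The word- and phrase-frequency counting that these packages automate is straightforward to sketch by hand. A minimal Python illustration, using a hypothetical transcript fragment and phrases chosen for this example:

```python
import re

def phrase_counts(text, phrases):
    """Count case-insensitive occurrences of each phrase in a transcript."""
    lowered = text.lower()
    return {p: len(re.findall(re.escape(p.lower()), lowered)) for p in phrases}

# Hypothetical transcript excerpt for illustration only.
transcript = (
    "Insurance is just worthless. Another ding in my wallet. "
    "We're not putting as much into savings; it all adds up."
)

print(phrase_counts(transcript, ["ding in my wallet", "savings", "insurance"]))
```

Counts like these support content analysis, but as noted above, interpreting what the frequencies mean remains the researcher’s job, not the software’s.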

QDA Strategy: To Verify

To verify in QDA is to administer an audit of “quality control” to your analysis.

After your data analysis and the development of key findings, you may be thinking to yourself, “Did I get it right?” “Did I learn anything new?” Reliability and validity are terms and constructs of the positivist quantitative paradigm that refer to the replicability and accuracy of measures. But in the qualitative paradigm, other constructs are more appropriate.

Credibility and trustworthiness ( Lincoln & Guba, 1985 ) are two factors to consider when collecting and analyzing the data and presenting your findings. In our qualitative research projects, we need to present a convincing story to our audiences that we “got it right” methodologically. In other words, the amount of time we spent in the field, the number of participants we interviewed, the analytic methods we used, the thinking processes evident to reach our conclusions, and so on should be “just right” to persuade the reader that we have conducted our jobs soundly. But remember that we can never conclusively “prove” something; we can only, at best, convincingly suggest. Research is an act of persuasion.

Credibility in a qualitative research report can be established in several ways. First, citing the key writers of related works in your literature review is a must. Seasoned researchers will sometimes assess whether a novice has “done her homework” by reviewing the bibliography or references. You need not list everything that seminal writers have published about a topic, but their names should appear at least once as evidence that you know the field’s key figures and their work.

Credibility can also be established by specifying the particular data analytic methods you employed (e.g., “Interview transcripts were taken through two cycles of process coding, resulting in five primary categories”), through corroboration of data analysis with the participants themselves (e.g., “I asked my participants to read and respond to a draft of this report for their confirmation of accuracy and recommendations for revision”) or through your description of how data and findings were substantiated (e.g., “Data sources included interview transcripts, participant observation field notes, and participant response journals to gather multiple perspectives about the phenomenon”).

Creativity scholar Sir Ken Robinson is credited with offering this cautionary advice about making a convincing argument: “Without data, you’re just another person with an opinion.” Thus researchers can also support their findings with relevant, specific evidence by quoting participants directly and/or including field note excerpts from the data corpus. These serve both as illustrative examples for readers and as more credible testimony of what happened in the field.

Trustworthiness , or lending credibility to the writing, comes from informing the reader of our research processes. Some make the case by stating the duration of fieldwork (e.g., “Seventy-five clock hours were spent in the field”; “The study extended over a twenty-month period”). Others put forth the amounts of data they gathered (e.g., “Twenty-seven individuals were interviewed”; “My field notes totaled approximately 250 pages”). Sometimes trustworthiness is established when we are up front or confessional about the analytic or ethical dilemmas we encountered (e.g., “It was difficult to watch the participant’s teaching effectiveness erode during fieldwork”; “Analysis was stalled until I recoded the entire data corpus with a new perspective”).

The bottom line is that credibility and trustworthiness are matters of researcher honesty and integrity . Anyone can write that he worked ethically, rigorously, and reflexively, but only the writer will ever know the truth. There is no shame if something goes wrong with your research. In fact, it is more than likely the rule, not the exception. Work and write transparently to achieve credibility and trustworthiness with your readers.

The length of this article does not enable me to expand on other qualitative data analytic strategies, such as to conceptualize, abstract, theorize, and write. Yet there are even more subtle thinking strategies to employ throughout the research enterprise, such as to synthesize, problematize, persevere, imagine, and create. Each researcher has his or her own ways of working, and deep reflection (another strategy) on your own methodology and methods as a qualitative inquirer throughout fieldwork and writing provides you with metacognitive awareness of data analytic processes and possibilities.

Data analysis is one of the most elusive processes in qualitative research, perhaps because it is a backstage, behind-the-scenes, in-your-head enterprise. It is not that there are no models to follow. It is just that each project is contextual and case specific. The unique data you collect from your unique research design must be approached with your unique analytic signature. It truly is a learning-by-doing process, so accept that and leave yourself open to discovery and insight as you carefully scrutinize the data corpus for patterns, categories, themes, concepts, assertions, and possibly new theories through strategic analysis.

Auerbach, C. F., & Silverstein, L. B. (2003). Qualitative data: An introduction to coding and analysis. New York: New York University Press.

Birks, M., & Mills, J. (2011). Grounded theory: A practical guide. London: Sage.

Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Thousand Oaks, CA: Sage.

Bryant, A., & Charmaz, K. (Eds.). (2007). The Sage handbook of grounded theory. London: Sage.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed.) (pp. 119–161). New York: Macmillan.

Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York: Basic Books.

Gibbs, G. R. (2007). Analysing qualitative data. London: Sage.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Saldaña, J. (2009). The coding manual for qualitative researchers. London: Sage.

Saldaña, J. (2011). Fundamentals of qualitative research. New York: Oxford University Press.

Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). London: Sage.

Shank, G. (2008). Abduction. In L. M. Given (Ed.), The Sage encyclopedia of qualitative research methods (pp. 1–2). Thousand Oaks, CA: Sage.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Stern, P. N., & Porr, C. J. (2011). Essentials of accessible grounded theory. Walnut Creek, CA: Left Coast Press.

Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge: Cambridge University Press.

Sunstein, B. S., & Chiseri-Strater, E. (2012). FieldWorking: Reading and writing research (4th ed.). Boston: Bedford/St. Martin’s.

Wertz, F. J., Charmaz, K., McMullen, L. M., Josselson, R., Anderson, R., & McSpadden, E. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry. New York: Guilford.


The Coding Manual for Qualitative Researchers

Student resources

Welcome to the companion website for The Coding Manual for Qualitative Researchers , third edition, by Johnny Saldaña. This website offers a wealth of additional resources to support students and lecturers, including:

CAQDAS links giving guidance and links to a variety of qualitative data analysis software.

Code lists including data extracted from the author’s study, “Lifelong Learning Impact: Adult Perceptions of Their High School Speech and/or Theatre Participation” (McCammon, Saldaña, Hines, & Omasta, 2012), which you can download and use for your own practice manipulations of the data.

Coding examples from SAGE journals providing actual examples of coding at work, giving you insight into coding procedures.

Three sample interview transcripts that allow you to test your coding skills.

Group exercises for small and large groups encourage you to get to grips with the basic principles of coding, partner development, categorization and qualitative data analysis.

Flashcard glossary of terms enables you to test your knowledge of the terminology commonly used in qualitative research and coding.

About the book

Johnny Saldaña’s unique and invaluable manual demystifies the qualitative coding process with a comprehensive assessment of different coding types, examples and exercises. The ideal reference for students, teachers, and practitioners of qualitative inquiry, it is essential reading across the social sciences and neatly guides you through the multiple approaches available for coding qualitative data.

Its wide array of strategies, from the more straightforward to the more complex, is skilfully explained and carefully exemplified, providing a complete toolkit of codes and skills that can be applied to any research project. For each code Saldaña provides information about the method's origin, gives a detailed description of the method, demonstrates its practical applications, and sets out a clearly illustrated example with analytic follow up. 

This international bestseller is an extremely usable, robust manual and is a must-have resource for qualitative researchers at all levels.


Coding Qualitative Data

  • First Online: 02 January 2023


  • Marla Rogers

Part of the book series: Springer Texts in Education (SPTE)


With the advent and proliferation of analysis software (e.g., NVivo, ATLAS.ti), coding data has become much easier in terms of application. While autocoding algorithms do much to assist and enlighten a researcher in analysis, coding qualitative data remains an act that must largely be undertaken by a human in order to fully address the research question(s) (Kaufmann, Barcomb, & Riehle, 2020). Even seasoned qualitative researchers can find the process of coding their data corpus to be arduous at times. For novice researchers, the task can quickly become baffling and overwhelming.



Anonymous Author. (2019, July 2). Resolve: Finding a resolution for infertility: Infertility support group and discussion community [online discussion post]. https://www.inspire.com/

Basit, T. N. (2003). Manual or electronic? The role of coding in qualitative data analysis. Educational Research, 45 (2), 143–154.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3 (2), 77–101.

Caulfield, J. (2019, September 6). How to do thematic analysis . www.scribbr.com/methodology/thematicanalysis

Creswell, J. (2015). 30 Essential skills for the qualitative researcher . SAGE.

Elliot, V. (2018). Thinking about the coding process in qualitative data analysis. The Qualitative Report, 23 (11), 2850–2861. https://nsuworks.nova.edu/tqr/vol23/iss11/14

Kaufmann, A. A., Barcomb, A., & Riehle, D. (2020). Supporting interview analysis with autocoding. HICSS. https://www.semanticscholar.org/paper/Supporting-Interview-Analysis-with-Autocoding-Kaufmann-Barcomb/b6e045859b5ce94e1eb144a9545b26c5e9fa6f32

Saldaña, J. (2009). The coding manual for qualitative researchers. SAGE.

Further Readings

Analyzing Qualitative Data: Nvivo 12 Pro for Windows (2 hours). https://www.youtube.com/watch?v=CKPS4LF9G8A

How to Analyze Interview Transcripts. (2 minutes). https://www.rev.com/blog/analyze-interview-transcripts-in-qualitative-research

How to Know You Are Coding Correctly (4 minutes). https://www.youtube.com/watch?v=iL7Ww5kpnIM

Author information

Marla Rogers, University of Saskatchewan, Saskatoon, Canada

Editor information

Janet Mola Okoko, Scott Tunison, and Keith D. Walker, Department of Educational Administration, University of Saskatchewan, Saskatoon, SK, Canada

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Rogers, M. (2023). Coding Qualitative Data. In: Okoko, J.M., Tunison, S., Walker, K.D. (eds) Varieties of Qualitative Research Methods. Springer Texts in Education. Springer, Cham. https://doi.org/10.1007/978-3-031-04394-9_12


The Coding Manual for Qualitative Researchers


  • Johnny Saldaña - Arizona State University, USA

“ Especially useful for utilization in higher education, administrative research, general development, the arts, social sciences, nursing, business, and health care. That may seem like a vast application, but both students and professionals will appreciate the clarity and the emblematic mentorship this book provides. ” – American Journal of Qualitative Research

This invaluable manual from world-renowned expert Johnny Saldaña illuminates the process of qualitative coding and provides clear, insightful guidance for qualitative researchers at all levels. The fourth edition includes a range of updates that build upon the huge success of the previous editions:

  • A structural reformat has increased accessibility; the 3 sections from the previous edition are now spread over 15 chapters for easier sectional reference
  • Two new first cycle coding methods join the 33 others in the collection: Metaphor Coding and Themeing the Data: Categorically
  • A brand new companion website offers links to SAGE journal articles, sample transcripts, CAQDAS sites, student exercises, and video and digital content
  • Analytic software screenshots and academic references have been updated, and several new figures have been added throughout the manual

Saldaña presents a range of coding options with advantages and disadvantages to help researchers choose the most appropriate approach for their project, reinforcing their perspective with real-world examples used to show step-by-step processes and to demonstrate important skills.


Supplements

This coding manual is the best go-to text for qualitative data analysis, both for a manual approach and for computer-assisted analysis. It offers a range of coding strategies applicable to any research projects, written in accessible language, making this text highly practical as well as theoretically comprehensive. 

With this expanded fourth edition of The Coding Manual for Qualitative Researchers, Saldaña  has proved to be an exemplary archivist of the field of qualitative methods, whilst never losing sight of the practical issues involved in inducting new researchers to the variety of coding methods available to them. His text provides great worked examples which build up understanding, skills and confidence around coding for the new researcher, whilst also enhancing established researchers’ grasp of the key principles of coding. 

Johnny Saldaña’s Coding Manual for Qualitative Researchers has been an indispensable resource for students, teachers and practitioners since it was first published in 2009. With its expanded contents, new coding methods and more intuitive structure, the fourth edition deserves a prominent place on every qualitative researcher’s bookshelf.

An essential text for qualitative research training and fieldwork. Along with updated examples and applications, Saldaña's fourth edition introduces multiple new coding methods, solidifying this as the most comprehensive, practical qualitative coding guide on the market today.

This book really is the coding manual for qualitative researchers, both aspiring and seasoned. The text is well-organized and thorough. With several new methods included in the fourth edition, this is an essential reference text for qualitative analysts.  

This book will be of particular help to PhD students rather than master’s students.

Great update to the third edition.

This is a great resource for qualitative researchers of all levels. It gives clear details on different ways to code, offers clear examples, and cites others who have used each type of coding. It is great for use in the methods section of articles. It is also valuable for introducing graduate students to different ways to code. It is an indispensable resource.

Excellent resource for learning how to analyze qualitative data.

  • Over 30 techniques are now included
  • A brand new companion website with links to SAGE journal articles, sample transcripts, links to CAQDAS sites, student exercises, links to video and digital content


Chapter 18. Data Analysis and Coding

Introduction

Piled before you lie hundreds of pages of fieldnotes you have taken, observations you’ve made while volunteering at city hall. You also have transcripts of interviews you have conducted with the mayor and city council members. What do you do with all this data? How can you use it to answer your original research question (e.g., “How do political polarization and party membership affect local politics?”)? Before you can make sense of your data, you will have to organize and simplify it in a way that allows you to access it more deeply and thoroughly. We call this process coding . [1] Coding is the iterative process of assigning meaning to the data you have collected in order to both simplify and identify patterns. This chapter introduces you to the process of qualitative data analysis and the basic concept of coding, while the following chapter (chapter 19) will take you further into the various kinds of codes and how to use them effectively.

To those who have not yet conducted a qualitative study, the sheer amount of collected data will be a surprise. Qualitative data can be absolutely overwhelming—it may mean hundreds if not thousands of pages of interview transcripts, or fieldnotes, or retrieved documents. How do you make sense of it? Students often want very clear guidelines here, and although I try to accommodate them as much as possible, in the end, analyzing qualitative data is a bit more of an art than a science: “The process of bringing order, structure, and interpretation to a mass of collected data is messy, ambiguous, time-consuming, creative, and fascinating. It does not proceed in a linear fashion: it is not neat. At times, the researcher may feel like an eccentric and tormented artist; not to worry, this is normal” ( Marshall and Rossman 2016:214 ).

To complicate matters further, each approach (e.g., Grounded Theory, deep ethnography, phenomenology) has its own language and bag of tricks (techniques) when it comes to analysis. Grounded Theory, for example, uses in vivo coding to generate new theoretical insights that emerge from a rigorous but open approach to data analysis. Ethnographers, in contrast, are more focused on creating a rich description of the practices, behaviors, and beliefs that operate in a particular field. They are less interested in generating theory and more interested in getting the picture right, valuing verisimilitude in the presentation. And then there are some researchers who seek to account for the qualitative data using almost quantitative methods of analysis, perhaps counting and comparing the uses of certain narrative frames in media accounts of a phenomenon. Qualitative content analysis (QCA) often includes elements of counting (see chapter 17). For these researchers, having very clear hypotheses and clearly defined “variables” before beginning analysis is standard practice, whereas the same would be expressly forbidden by those researchers, like grounded theorists, taking a more emergent approach.

All that said, there are some helpful techniques to get you started, and these will be presented in this and the following chapter. As you become more of an expert yourself, you may want to read more deeply about the tradition that speaks to your research. But know that there are many excellent qualitative researchers that use what works for any given study, who take what they can from each tradition. Most of us find this permissible (but watch out for the methodological purists that exist among us).

Qualitative Data Analysis as a Long Process!

Although most of this and the following chapter will focus on coding, it is important to understand that coding is just one (very important) aspect of the long data-analysis process. We can consider seven phases of data analysis, each of which is important for moving your voluminous data into “findings” that can be reported to others. The first phase involves data organization. This might mean creating a special password-protected Dropbox folder for storing your digital files. It might mean acquiring computer-assisted qualitative data-analysis software ( CAQDAS ) and uploading all transcripts, fieldnotes, and digital files to its storage repository for eventual coding and analysis. Finding a helpful way to store your material can take a lot of time, and you need to be smart about this from the very beginning. Losing data because of poor filing systems or mislabeling is something you want to avoid. You will also want to ensure that you have procedures in place to protect the confidentiality of your interviewees and informants. Filing signed consent forms (with names) separately from transcripts and linking them through an ID number or other code that only you have access to (and store safely) are important.
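The ID-linking scheme described above — real names from consent forms kept in one protected file, transcripts keyed only by anonymous IDs — can be sketched in a few lines. This is a minimal illustration; the participant names, IDs, and file names are hypothetical, and in practice the consent index would live in an encrypted or access-restricted location:

```python
import json

# Kept separately and securely: maps real names from signed
# consent forms to anonymous participant IDs.
consent_index = {"Jane Doe": "P01", "John Smith": "P02"}

# The working data set carries only the anonymous IDs, so it can be
# coded and shared with the research team without exposing identities.
transcripts = {
    "P01": "Interview transcript text ...",
    "P02": "Interview transcript text ...",
}

# Writing the two files to different locations keeps the name-to-ID
# link accessible only to the researcher holding the consent index.
with open("consent_index.json", "w") as f:
    json.dump(consent_index, f)
with open("transcripts.json", "w") as f:
    json.dump(transcripts, f)
```

The design point is simply that no single working file contains both a name and its data; only the researcher who holds the index can reconnect them.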

Once you have all of your material safely and conveniently stored, you will need to immerse yourself in the data. The second phase consists of reading and rereading or viewing and reviewing all of your data. As you do this, you can begin to identify themes or patterns in the data, perhaps writing short memos to yourself about what you are seeing. You are not committing to anything in this third phase but rather keeping your eyes and mind open to what you see. In an actual study, you may very well still be “in the field” or collecting interviews as you do this, and what you see might push you toward either concluding your data collection or expanding so that you can follow a particular group or factor that is emerging as important. For example, you may have interviewed twelve international college students about how they are adjusting to life in the US but realized as you read your transcripts that important gender differences may exist and you have only interviewed two women (and ten men). So you go back out and make sure you have enough female respondents to check your impression that gender matters here. The seven phases do not proceed entirely linearly! It is best to think of them as recursive; conceptually, there is a path to follow, but it meanders and flows.

Coding is the activity of the fourth phase . The second part of this chapter and all of chapter 19 will focus on coding in greater detail. For now, know that coding is the primary tool for analyzing qualitative data and that its purpose is to both simplify and highlight the important elements buried in mounds of data. Coding is a rigorous and systematic process of identifying meaning, patterns, and relationships. It is a more formal extension of what you, as a conscious human being, are trained to do every day when confronting new material and experiences. The “trick” or skill is to learn how to take what you do naturally and semiconsciously in your mind and put it down on paper so it can be documented and verified and tested and refined.

At the conclusion of the coding phase, your material will be searchable, intelligible, and ready for deeper analysis. You can begin to offer interpretations based on all the work you have done so far. This fifth phase might require you to write analytic memos, beginning with short (perhaps a paragraph or two) interpretations of various aspects of the data. You might then attempt stitching together both reflective and analytical memos into longer (up to five pages) general interpretations or theories about the relationships, activities, patterns you have noted as salient.

As you do this, you may be rereading the data, or parts of the data, and reviewing your codes. It’s possible you get to this phase and decide you need to go back to the beginning. Maybe your entire research question or focus has shifted based on what you are now thinking is important. Again, the process is recursive , not linear. The sixth phase requires you to check the interpretations you have generated. Are you really seeing this relationship, or are you ignoring something important you forgot to code? As we don’t have statistical tests to check the validity of our findings as quantitative researchers do, we need to incorporate self-checks on our interpretations. Ask yourself what evidence would exist to counter your interpretation and then actively look for that evidence. Later on, if someone asks you how you know you are correct in believing your interpretation, you will be able to explain what you did to verify this. Guard yourself against accusations of “ cherry-picking ,” selecting only the data that supports your preexisting notion or expectation about what you will find. [2]

The seventh and final phase involves writing up the results of the study. Qualitative results can be written in a variety of ways for various audiences (see chapter 20). Due to the particularities of qualitative research, findings do not exist independently of their being written down. This is different for quantitative research or experimental research, where completed analyses can somewhat speak for themselves. A box of collected qualitative data remains a box of collected qualitative data without its written interpretation. Qualitative research is often evaluated on the strength of its presentation. Some traditions of qualitative inquiry, such as deep ethnography, depend on written thick descriptions, without which the research is wholly incomplete, even nonexistent. All of that practice journaling and writing memos (reflective and analytical) help develop writing skills integral to the presentation of the findings.

Remember that these are seven conceptual phases that operate in roughly this order but with a lot of meandering and recursivity throughout the process. This is very different from quantitative data analysis, which is conducted fairly linearly and processually (first you state a falsifiable research question with hypotheses, then you collect your data or acquire your data set, then you analyze the data, etc.). Things are a bit messier when conducting qualitative research. Embrace the chaos and confusion, and sort your way through the maze. Budget a lot of time for this process. Your research question might change in the middle of data collection. Don’t worry about that. The key to being nimble and flexible in qualitative research is to start thinking and continue thinking about your data, even as it is being collected. All seven phases can be started before all the data has been gathered. Data collection does not always precede data analysis. In some ways, “qualitative data collection is qualitative data analysis.… By integrating data collection and data analysis, instead of breaking them up into two distinct steps, we both enrich our insights and stave off anxiety. We all know the anxiety that builds when we put something off—the longer we put it off, the more anxious we get. If we treat data collection as this mass of work we must do before we can get started on the even bigger mass of work that is analysis, we set ourselves up for massive anxiety” ( Rubin 2021:182–183 ; emphasis added).

The Coding Stage

A code is “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” ( Saldaña 2014:5 ). Codes can be applied to particular sections of or entire transcripts, documents, or even videos. For example, one might code a video taken of a preschooler trying to solve a puzzle as “puzzle,” or one could take the transcript of that video and highlight particular sections or portions as “arranging puzzle pieces” (a descriptive code) or “frustration” (a summative emotion-based code). If the preschooler happily shouts out, “I see it!” you can denote the code “I see it!” (this is an example of an in vivo, participant-created code). As one can see from even this short example, there are many different kinds of codes and many different strategies and techniques for coding, more of which will be discussed in detail in chapter 19. The point to remember is that coding is a rigorous systematic process—to some extent, you are always coding whenever you look at a person or try to make sense of a situation or event, but you rarely do this consciously. Coding is the process of naming what you are seeing and how you are simplifying the data so that you can make sense of it in a way that is consistent with your study and in a way that others can understand and follow and replicate. Another way of saying this is that a code is “a researcher-generated interpretation that symbolizes or translates data” ( Vogt et al. 2014:13 ).

As with qualitative data analysis generally, coding is often done recursively, meaning that you do not merely take one pass through the data to create your codes. Saldaña ( 2014 ) differentiates first-cycle coding from second-cycle coding. The goal of first-cycle coding is to “tag” or identify what emerges as important codes. Note that I said emerges—you don’t always know from the beginning whether something will be an important aspect of the study, so the coding process is really the place for you to begin making the kinds of notes necessary for future analyses. In second-cycle coding, you will want to be much more focused—no longer gathering wholly new codes but synthesizing what you have into metacodes.

You might also conceive of the coding process in four parts (figure 18.1). First, identify a representative or diverse sample set of interview transcripts (or fieldnotes or other documents). This is the group you are going to use to get a sense of what might be emerging. In my own study of career obstacles to success among first-generation and working-class persons in sociology, I might select one interview from each career stage: a graduate student, a junior faculty member, and a senior faculty member.

Figure 18.1 (image: qualitative research and coding)

Second, code everything (“ open coding ”). See what emerges, and don’t limit yourself in any way. You will end up with a ton of codes, many more than you will eventually keep, but this is an excellent way to not foreclose an interesting finding too early in the analysis. Note the importance of starting with a sample of your collected data, because otherwise, open coding all your data is, frankly, impossible and counterproductive. You will just get stuck in the weeds.

Third, pare down your coding list. Where you may have begun with fifty (or more!) codes, you probably want no more than twenty remaining. Go back through the weeds and pull out everything that does not have the potential to bloom into a nicely shaped garden. Note that you should do this before tackling all of your data . Sometimes, however, you might need to rethink the sample you chose. Let’s say that the graduate student interview brought up some interesting gender issues that were pertinent to female-identifying sociologists, but both the junior and the senior faculty members identified as male. In that case, I might read through and open code at least one other interview transcript, perhaps a female-identifying senior faculty member, before paring down my list of codes.

This is also the time to create a codebook if you are using one, a master guide to the codes you are using, including examples (see Sample Codebooks 1 and 2 ). A codebook is simply a document that lists and describes the codes you are using. It is easy to forget what you meant the first time you penciled a coded notation next to a passage, so the codebook allows you to be clear and consistent with the use of your codes. There is not one correct way to create a codebook, but generally speaking, the codebook should include (1) the code (either name or identification number or both), (2) a description of what the code signifies and when and where it should be applied, and (3) an example of the code to help clarify (2). Listing all the codes down somewhere also allows you to organize and reorganize them, which can be part of the analytical process. It is possible that your twenty remaining codes can be neatly organized into five to seven master “themes.” Codebooks can and should develop as you recursively read through and code your collected material. [3]
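The three elements listed above (code, description, example) lend themselves to a simple record structure. The following Python sketch is purely illustrative; the code names and entries are hypothetical, loosely echoing this chapter's career-obstacles study, not any actual codebook:

```python
from dataclasses import dataclass

@dataclass
class CodebookEntry:
    code_id: str      # (1) the code: name, identification number, or both
    description: str  # (2) what the code signifies and when/where to apply it
    example: str      # (3) an illustrative excerpt that clarifies (2)

# Hypothetical entries for illustration only
codebook = [
    CodebookEntry("CULTCAP",
                  "Any mention of cultural capital, e.g. 'not fitting in'",
                  "I never had the right accent for the seminar room."),
    CodebookEntry("DEBT",
                  "Discussion of educational debt, personal or abstract",
                  "People with debt have to worry."),
]

# Listing the codes in one place lets you look them up consistently
# and reorganize them as the analysis develops
by_id = {entry.code_id: entry for entry in codebook}
```

Keeping the codebook as structured records (rather than loose marginal notes) is what makes it easy to reorganize twenty codes into five to seven master themes later on.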

Fourth, using the pared-down list of codes (or codebook), read through and code all the data. I know many qualitative researchers who work without a codebook, but it is still a good practice, especially for beginners. At the very least, read through your list of codes before you begin this “ closed coding ” step so that you can minimize the chance of missing a passage or section that needs to be coded. The final step is…to do it all again. Or, at least, do closed coding (step four) again. All of this takes a great deal of time, and you should plan accordingly.
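The bookkeeping behind this closed-coding pass can be sketched in a few lines of Python. This is only a toy illustration with invented codes and keyword cues: actual coding is an interpretive judgment that no keyword match can replace, and CAQDAS programs automate the record-keeping, not the judgment.

```python
# A pared-down code list, each code paired with cue phrases (hypothetical)
CODES = {
    "CULTCAP": ["fitting in", "accent"],
    "DEBT": ["debt", "loans"],
}

def closed_code(passage: str) -> list[str]:
    """Return the codes from the fixed list that apply to a passage."""
    text = passage.lower()
    return [code for code, cues in CODES.items()
            if any(cue in text for cue in cues)]

# Closed coding: every passage is tagged only with codes from the fixed list
coded = {p: closed_code(p) for p in [
    "My student loans shaped every career choice I made.",
    "I never felt I was fitting in at conferences.",
]}
```

The point of the fixed `CODES` list mirrors the point of the codebook: in closed coding you no longer invent new codes mid-pass, which is what makes a second full pass comparable to the first.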

Researcher Note

People often say that qualitative research takes a lot of time. Some say this because qualitative researchers often collect their own data. This part can be time consuming, but to me, it’s the analytical process that takes the most time. I usually read every transcript twice before starting to code, then it usually takes me six rounds of coding until I’m satisfied I’ve thoroughly coded everything. Even after the coding, it usually takes me a year to figure out how to put the analysis together into a coherent argument and to figure out what language to use. Just deciding what name to use for a particular group or idea can take months. Understanding this going in can be helpful so that you know to be patient with yourself.

—Jessi Streib, author of The Power of the Past and Privilege Lost 

Note that there is no magic in any of this, nor is there any single “right” way to code or any “correct” codes. What you see in the data will be prompted by your position as a researcher and your scholarly interests. Where the above codes on a preschooler solving a puzzle emerged from my own interest in puzzle solving, another researcher might focus on something wholly different. A scholar of linguistics, for example, may focus instead on the verbalizations made by the child during the discovery process, perhaps even noting particular vocalizations (incidence of grrrs and gritting of the teeth, for example). Your recording of the codes you used is the important part, as it allows other researchers to assess the reliability and validity of your analyses based on those codes. Chapter 19 will provide more details about the kinds of codes you might develop.

Saldaña ( 2014 ) lists seven “necessary personal attributes” for successful coding. To paraphrase, they are the following:

  • Having (or practicing) good organizational skills
  • Perseverance
  • The ability and willingness to deal with ambiguity
  • Flexibility
  • Creativity, broadly understood, which includes “the ability to think visually, to think symbolically, to think in metaphors, and to think of as many ways as possible to approach a problem” (20)
  • Commitment to being rigorously ethical
  • Having an extensive vocabulary [4]

Writing Analytic Memos during/after Coding

Coding the data you have collected is only one aspect of analyzing it. Too many beginners have coded their data and then wondered what to do next. Coding is meant to help organize your data so that you can see it more clearly, but it is not itself an analysis. Thinking about the data, reviewing the coded data, and bringing in the previous literature (here is where you use your literature review and theory) to help make sense of what you have collected are all important aspects of data analysis. Analytic memos are notes you write to yourself about the data. They can be short (a single page or even a paragraph) or long (several pages). These memos can themselves be the subject of subsequent analytic memoing as part of the recursive process that is qualitative data analysis.

Short analytic memos are written about impressions you have about the data, what is emerging, and what might be of interest later on. You can write a short memo about a particular code, for example, and why this code seems important and where it might connect to previous literature. For example, I might write a paragraph about a “cultural capital” code that I use whenever a working-class sociologist says anything about “not fitting in” with their peers (e.g., not having the right accent or hairstyle or private school background). I could then write a little bit about Bourdieu, who originated the notion of cultural capital, and try to make some connections between his definition and how I am applying it here. I can also use the memo to raise questions or doubts I have about what I am seeing (e.g., Maybe the type of school belongs somewhere else? Is this really the right code?). Later on, I can incorporate some of this writing into the theory section of my final paper or article. Here are some types of things that might form the basis of a short memo: something you want to remember, something you noticed that was new or different, a reaction you had, a suspicion or hunch that you are developing, a pattern you are noticing, any inferences you are starting to draw. Rubin ( 2021 ) advises, “Always include some quotation or excerpt from your dataset…that set you off on this idea. It’s happened to me so many times—I’ll have a really strong reaction to a piece of data, write down some insight without the original quotation or context, and then [later] have no idea what I was talking about and have no way of recreating my insight because I can’t remember what piece of data made me think this way” ( 203 ).
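Rubin's advice to always anchor a memo to the excerpt that prompted it suggests a simple record structure. The sketch below is hypothetical (the field names and content are invented for illustration and do not reflect any CAQDAS program's actual format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnalyticMemo:
    code: str      # the code the memo reflects on
    note: str      # the impression, hunch, question, or doubt
    excerpt: str   # the piece of data that set the idea off (per Rubin 2021)
    source: str    # which transcript or fieldnote the excerpt came from
    written: date = field(default_factory=date.today)

# A hypothetical short memo on the "cultural capital" code
memo = AnalyticMemo(
    code="cultural capital",
    note="Connects to Bourdieu's definition; maybe type of school belongs elsewhere?",
    excerpt="I never had the right accent or the private-school background.",
    source="Interview 07, senior faculty",
)
```

Storing the excerpt and its source alongside the note is what prevents the problem Rubin describes: an insight with no quotation attached that cannot be reconstructed later.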

All CAQDAS programs include spaces for writing, generating, and storing memos. You can link a memo to a particular transcript, for example. But you can just as easily keep a notebook at hand in which you write notes to yourself, if you prefer the more tactile approach. Drawing pictures that illustrate themes and patterns you are beginning to see also works. The point is to write early and write often, as these memos are the building blocks of your eventual final product (chapter 20).

In the next chapter (chapter 19), we will go a little deeper into codes and how to use them to identify patterns and themes in your data. This chapter has given you an idea of the process of data analysis, but there is much yet to learn about the elements of that process!

Qualitative Data-Analysis Samples

The following three passages are examples of how qualitative researchers describe their data-analysis practices. The first, by Harvey, is a useful example of how data analysis can shift the original research questions. The second example, by Thai, shows multiple stages of coding and how these stages build upward to conceptual themes and theorization. The third example, by Lamont, shows a masterful use of a variety of techniques to generate theory.

Example 1: “Look Someone in the Eye” by Peter Francis Harvey ( 2022 )

I entered the field intending to study gender socialization. However, through the iterative process of writing fieldnotes, rereading them, conducting further research, and writing extensive analytic memos, my focus shifted. Abductive analysis encourages the search for unexpected findings in light of existing literature. In my early data collection, fieldnotes, and memoing, classed comportment was unmistakably prominent in both schools. I was surprised by how pervasive this bodily socialization proved to be and further surprised by the discrepancies between the two schools.…I returned to the literature to compare my empirical findings.…To further clarify patterns within my data and to aid the search for disconfirming evidence, I constructed data matrices (Miles, Huberman, and Saldaña 2013). While rereading my fieldnotes, I used ATLAS.ti to code and recode key sections (Miles et al. 2013), punctuating this process with additional analytic memos. ( 2022:1420 )

Example 2: “Policing and Symbolic Control” by Mai Thai ( 2022 )

Conventional to qualitative research, my analyses iterated between theory development and testing. Analytical memos were written throughout the data collection, and my analyses using MAXQDA software helped me develop, confirm, and challenge specific themes.…My early coding scheme which included descriptive codes (e.g., uniform inspection, college trips) and verbatim codes of the common terms used by field site participants (e.g., “never quit,” “ghetto”) led me to conceptualize valorization. Later analyses developed into thematic codes (e.g., good citizens, criminality) and process codes (e.g., valorization, criminalization), which helped refine my arguments. ( 2022:1191–1192 )

Example 3: The Dignity of Working Men by Michèle Lamont ( 2000 )

To analyze the interviews, I summarized them in a 13-page document including socio-demographic information as well as information on the boundary work of the interviewees. To facilitate comparisons, I noted some of the respondents’ answers on grids and summarized these on matrix displays using techniques suggested by Miles and Huberman for standardizing and processing qualitative data. Interviews were also analyzed one by one, with a focus on the criteria that each respondent mobilized for the evaluation of status. Moreover, I located each interviewee on several five-point scales pertaining to the most significant dimensions they used to evaluate status. I also compared individual interviewees with respondents who were similar to and different from them, both within and across samples. Finally, I classified all the transcripts thematically to perform a systematic analysis of all the important themes that appear in the interviews, approaching the latter as data against which theoretical questions can be explored. ( 2000:256–257 )

Sample Codebook 1

This is an abridged version of the codebook used to analyze qualitative responses to a question about how class affects careers in sociology. Note the use of numbers to organize the flow, supplemented by highlighting techniques (e.g., bolding) and subcoding numbers.

01. CAPS: Any reference to “capitals” in the response, even if the specific words are not used

01.1: cultural capital
01.2: social capital
01.3: economic capital

(can be mixed: “01.12” = both cultural and social capital; “01.23” = both social and economic)

01. CAPS [in bold]: a reference to “capitals” in which the specific words are used; thus, a bolded 01.23 means that both social capital and economic capital were mentioned specifically

02. DEBT: discussion of debt

02.1: mentions personal issues around debt
02.2: discusses debt but in the abstract only (e.g., “people with debt have to worry”)

03. FirstP: how the response is positioned

03.1: neutral or abstract response
03.2: discusses self (“I”)
03.3: discusses others (“they”)

Sample Coded Passage:

Question: What other codes jump out to you here? Shouldn’t there be a code for feelings of loneliness or alienation? What about an emotions code?

Sample Codebook 2

This is an example that uses “word” categories only, with descriptions and examples for each code.

Further Readings

Elliott, Victoria. 2018. “Thinking about the Coding Process in Qualitative Analysis.” Qualitative Report 23(11):2850–2861. Addresses common questions those new to coding ask, including the use of “counting” and how to shore up reliability.

Friese, Susanne. 2019. Qualitative Data Analysis with ATLAS.ti. 3rd ed. A good guide to ATLAS.ti, arguably the most used CAQDAS program. Organized around a series of “skills training” to get you up to speed.

Jackson, Kristi, and Pat Bazeley. 2019. Qualitative Data Analysis with NVIVO . 3rd ed. Thousand Oaks, CA: SAGE. If you want to use the CAQDAS program NVivo, this is a good affordable guide to doing so. Includes copious examples, figures, and graphic displays.

LeCompte, Margaret D. 2000. “Analyzing Qualitative Data.” Theory into Practice 39(3):146–154. A very practical and readable guide to the entire coding process, with particular applicability to educational program evaluation/policy analysis.

Miles, Matthew B., and A. Michael Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook . 2nd ed. Thousand Oaks, CA: SAGE. A classic reference on coding. May now be superseded by Miles, Huberman, and Saldaña (2019).

Miles, Matthew B., A. Michael Huberman, and Johnny Saldaña. 2019. Qualitative Data Analysis: A Methods Sourcebook . 4th ed. Thousand Oaks, CA: SAGE. A practical methods sourcebook for all qualitative researchers at all levels using visual displays and examples. Highly recommended.

Saldaña, Johnny. 2014. The Coding Manual for Qualitative Researchers . 2nd ed. Thousand Oaks, CA: SAGE. The most complete and comprehensive compendium of coding techniques out there. Essential reference.

Silver, Christina. 2014. Using Software in Qualitative Research: A Step-by-Step Guide. 2nd ed. Thousand Oaks, CA: SAGE. If you are unsure which CAQDAS program you are interested in using or want to compare the features and usages of each, this guidebook is quite helpful.

Vogt, W. Paul, Elaine R. Vogt, Diane C. Gardner, and Lynne M. Haeffele. 2014. Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods . New York: The Guilford Press. User-friendly reference guide to all forms of analysis; may be particularly helpful for those engaged in mixed-methods research.

  • When you have collected content (historical, media, archival) that interests you because of its communicative aspect, content analysis (chapter 17) is appropriate. Whereas content analysis is both a research method and a tool of analysis, coding is a tool of analysis that can be used for all kinds of data to address any number of questions. Content analysis itself includes coding. ↵
  • Scientific research, whether quantitative or qualitative, demands we keep an open mind as we conduct our research, that we are “neutral” regarding what is actually there to find. Students who are trained in non-research-based disciplines such as the arts or philosophy or who are (admirably) focused on pursuing social justice can too easily fall into the trap of thinking their job is to “demonstrate” something through the data. That is not the job of a researcher. The job of a researcher is to present (and interpret) findings—things “out there” (even if inside other people’s hearts and minds). One helpful suggestion: when formulating your research question, if you already know the answer (or think you do), scrap that research. Ask a question to which you do not yet know the answer. ↵
  • Codebooks are particularly useful for collaborative research so that codes are applied and interpreted similarly. If you are working with a team of researchers, you will want to take extra care that your codebooks remain in sync and that any refinements or developments are shared with fellow coders. You will also want to conduct an “intercoder reliability” check, testing whether the codes you have developed are clearly identifiable so that multiple coders are using them similarly. Messy, unclear codes that can be interpreted differently by different coders will make it much more difficult to identify patterns across the data. ↵
  • Note that this is important for creating/denoting new codes. The vocabulary does not need to be in English or any particular language. You can use whatever words or phrases capture what it is you are seeing in the data. ↵

A first-cycle coding process in which gerunds are used to identify conceptual actions, often for the purpose of tracing change and development over time.  Widely used in the Grounded Theory approach.

A first-cycle coding process in which terms or phrases used by the participants become the code applied to a particular passage.  It is also known as “verbatim coding,” “indigenous coding,” “natural coding,” “emic coding,” and “inductive coding,” depending on the tradition of inquiry of the researcher.  It is common in Grounded Theory approaches and has even given its name to one of the primary CAQDAS programs (“NVivo”).

Computer-assisted qualitative data-analysis software.  These are software packages that can serve as a repository for qualitative data and that enable coding, memoing, and other tools of data analysis.  See chapter 17 for particular recommendations.

The purposeful selection of some data to prove a preexisting expectation or desired point of the researcher where other data exists that would contradict the interpretation offered.  Note that it is not cherry picking to select a quote that typifies the main finding of a study, although it would be cherry picking to select a quote that is atypical of a body of interviews and then present it as if it is typical.

A preliminary stage of coding in which the researcher notes particular aspects of interest in the data set and begins creating codes.  Later stages of coding refine these preliminary codes.  Note: in Grounded Theory , open coding has a more specific meaning and is often called initial coding : data are broken down into substantive codes in a line-by-line manner, and incidents are compared with one another for similarities and differences until the core category is found.  See also closed coding .

A set of codes, definitions, and examples used as a guide to help analyze interview data.  Codebooks are particularly helpful and necessary when research analysis is shared among members of a research team, as codebooks allow for standardization of shared meanings and code attributions.

The final stages of coding after the refinement of codes has created a complete list or codebook in which all the data is coded using this refined list or codebook.  Compare to open coding .

A first-cycle coding process in which emotions and emotionally salient passages are tagged.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

General Coding and Analysis in Qualitative Research

  • Michael G. Pratt, Carroll School of Management, Boston College
  • https://doi.org/10.1093/acrefore/9780190236557.013.859
  • Published online: 31 January 2023

Coding and analysis are central to qualitative research, moving the researcher from study design and data collection to discovery, theorizing, and writing up the findings in some form (e.g., a journal article, report, book chapter or book). Analysis is a systematic way of approaching data for the purpose of better understanding it. In qualitative research, such understanding often involves the process of translating raw data—such as interview transcripts, observation notes, or videos—into a more abstract understanding of that data, often in the form of theory. Analytical techniques common to qualitative approaches include writing memos, narratives, cases, timelines, and figures, based on one’s data. Coding often involves using short labels to capture key elements in the data. Codes can either emerge from the data, or they can be predetermined based on extant theorizing. The type of coding one engages in depends on whether one is being inductive, deductive or abductive. Although often confounded, coding is only a part of the broader analytical process.

In many qualitative approaches, coding and analysis occur concurrently with data collection, although the type and timing of specific coding and analysis practices vary by method (e.g., ethnography versus grounded theory). These coding and analytic techniques are used to facilitate the intuitive leaps, flashes of insight, and moments of doubt and discovery necessary for theorizing. When building new theory, care should be taken to ensure that one’s coding does not do undue “violence to experience”: rather, coding should reflect the lived experiences of those one has studied.

  • qualitative methods
  • grounded theory
  • ethnography
  • inductive research

Printed from Oxford Research Encyclopedias, Psychology (accessed 17 May 2024).


All-in-one Qualitative Coding Software


Elevate your qualitative research with cutting-edge Qualitative Coding Software

MAXQDA is your go-to solution for qualitative coding, setting the standard as the top choice among Qualitative Coding Software. This powerful software is meticulously designed to accommodate a diverse array of data formats, including text, audio, and video, while offering an extensive toolkit tailored specifically for qualitative coding endeavors. Whether your research demands data categorization, thematic visualization, mixed-methods analysis, or quantitative content examination, MAXQDA empowers you to seamlessly uncover the profound insights crucial for your qualitative research.


Revolutionize Your Research: Unleash the Power of Qualitative Coding Software

Qualitative coding software is an essential companion for researchers and analysts seeking to delve deeper into their qualitative data. MAXQDA’s user-friendly interface and versatile feature set make it the ideal tool for those embarking on qualitative coding journeys. Its capabilities span across various data types, ensuring you have the tools required to effectively organize, analyze, and interpret your qualitative data.

Developed by and for researchers – since 1989


Having used several qualitative data analysis software programs, there is no doubt in my mind that MAXQDA has advantages over all the others. In addition to its remarkable analytical features for harnessing data, MAXQDA’s stellar customer service, online tutorials, and global learning community make it a user friendly and top-notch product.

Sally S. Cohen – NYU Rory Meyers College of Nursing

Qualitative Coding is Faster and Smarter with MAXQDA

MAXQDA makes qualitative coding faster and easier than ever before. Code and analyze all kinds of data – from texts to images and audio/video files, websites, tweets, focus group discussions, survey responses, and much more. MAXQDA is at once powerful and easy-to-use, innovative and user-friendly, as well as the only leading qualitative coding software that is 100% identical on Windows and Mac.

As your all-in-one Qualitative Coding Software, MAXQDA can be used to manage your entire research project. Easily import a wide range of data types such as texts, interviews, focus groups, PDFs, web pages, spreadsheets, articles, e-books, bibliographic data, videos, audio files, and even social media data. Organize your data in groups, link relevant quotes to each other, and make use of MAXQDA’s wide range of coding possibilities for all kinds of data, coding inductively as well as deductively. Your project file stays flexible and you can expand and refine your category system as you go to suit your research.


Qualitative coding made easy

Coding qualitative data lies at the heart of many qualitative data analysis methods. That’s why MAXQDA offers many possibilities for coding qualitative data. Simply drag and drop codes from the code system to the highlighted text segment or use highlighters to mark important passages, if you don’t have a name for your category yet. Of course, you can apply your codes and highlighters to many more data types, such as audio and video clips, or social media data. In addition, MAXQDA permits many further ways of coding qualitative data. For example, you can assign symbols and emojis to your data segments.

Tools tailor made for coding inductively

Besides theory-driven qualitative data analysis, MAXQDA as an all-in-one qualitative coding software strives to empower researchers who rely on data-driven approaches for coding qualitative data inductively. Use the in vivo coding tool to select and highlight meaningful terms in a text and automatically add them to your code system while coding the selected segment, or use MAXQDA’s handy paraphrase mode to summarize the material in your own words and inductively form new categories. In addition, a segment can be assigned to a new (free) code, which enables researchers to employ a Grounded Theory approach.


Organize your code system

When coding your qualitative data, it is easy to get lost. But with MAXQDA as your qualitative coding software, you will never lose track of the bigger picture. Create codes with just one click and apply them to your data quickly via drag & drop. Organize your code system into up to 10 levels and use colors to distinguish categories at a glance. If you want to code your data from more than one perspective, code sets are the way to go.

Further ways of coding qualitative data

MAXQDA offers many more functionalities to facilitate the coding of your data. That’s why researchers all around the world use MAXQDA as their qualitative coding software. Select and highlight meaningful terms in a text and automatically add them as codes in your code system, code your material using self-defined keyboard shortcuts, code a text passage via color coding, or use hundreds of symbols and emoticons to mark important text segments. Search for keywords in your text and let MAXQDA code them automatically, or recode coded segments directly from the retrieved segments window. With the unique Smart Coding tool, reviewing and customizing your categorization system has never been easier.
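The keyword-search-and-auto-code idea above can be sketched in a few lines. The snippet below is an illustrative stand-in written for this article, not MAXQDA’s actual implementation: it tags every segment whose text contains one of a code’s keywords.

```python
import re

def auto_code(segments, keyword_map):
    """Tag each (id, text) segment with every code whose keywords appear
    in the text -- a toy version of keyword-based auto-coding."""
    coded = []
    for seg_id, text in segments:
        for code, keywords in keyword_map.items():
            if any(re.search(rf"\b{re.escape(kw)}\b", text, re.IGNORECASE)
                   for kw in keywords):
                coded.append((seg_id, code))
    return coded

segments = [
    (1, "The bus service to the hospital was cancelled."),
    (2, "Waiting times at the clinic were far too long."),
]
keyword_map = {
    "transport": ["bus", "car", "drive"],
    "waiting":   ["waiting", "delay"],
}

print(auto_code(segments, keyword_map))  # [(1, 'transport'), (2, 'waiting')]
```

A dedicated tool adds what this sketch lacks: reviewing each hit in context before confirming the code, which is exactly what a retrieved-segments view is for.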


Creative coding

Coding qualitative data can be overwhelming, but with MAXQDA as your qualitative coding software, you have an easy-to-use solution. If you have created many codes that, in hindsight, vary greatly in scope and level of abstraction, MAXQDA is there to help. Creative Coding supports the creative process of generating, sorting, and organizing your codes to create a logical structure for your code system. The graphical workspace of MAXMaps – MAXQDA’s tool for creating concept maps – is the ideal place to move codes, form meaningful groups, and insert parent codes. Of course, MAXQDA automatically transfers changes made in Creative Coding Mode to your code system.

Visualize your qualitative coding and data

As an all-in-one qualitative coding software, MAXQDA offers a variety of visual tools that are tailor-made for qualitative research. Create stunning visualizations to analyze your material, and export them in various formats to enrich your final report. Visualize the progression of themes with the Codeline, use the Word Cloud to explore key terms and central themes, or make use of the graphical possibilities of MAXMaps, which in particular permits the creation of concept maps. Thanks to the interactive connection between your visualizations and your MAXQDA data, you’ll never lose sight of the big picture.


AI Assist: Qualitative coding software meets AI

AI Assist – your virtual research assistant – supports your qualitative coding with various tools. It simplifies your work by automatically analyzing and summarizing elements of your research project and by generating suggestions for subcodes. Whichever AI tool you use, you can customize the results to suit your needs.

Free tutorials and guides on qualitative coding software

MAXQDA offers a variety of free learning resources for qualitative coding, making it easy for both beginners and advanced users to learn how to use the software. From free video tutorials and webinars to step-by-step guides and sample projects, these resources provide a wealth of information to help you understand the features and functionality of MAXQDA as qualitative coding software. For beginners, the software’s user-friendly interface and comprehensive help center make it easy to get started with your data analysis, while advanced users will appreciate the detailed guides and tutorials that cover more complex features and techniques. Whether you’re just starting out or are an experienced researcher, MAXQDA’s free learning resources will help you get the most out of your qualitative coding software.


FAQ: Qualitative coding software

When it comes to qualitative coding software, MAXQDA stands out as a top choice for researchers. It is a comprehensive qualitative data analysis tool that offers a wide range of features designed to streamline the coding process and help researchers make sense of their qualitative data.

MAXQDA’s user-friendly interface and robust set of tools – including text analysis and data visualization – make it a reliable and powerful option for qualitative coding tasks, and a popular choice among researchers.

Coding qualitative data involves systematically categorizing and labeling segments of your data to identify themes, patterns, and trends. MAXQDA simplifies this process by providing an intuitive interface and tools specifically designed for qualitative coding tasks.

To code qualitative data with MAXQDA, you typically follow these steps:

  • Import your qualitative data into MAXQDA, such as interview transcripts, survey responses, or text documents.
  • Read through your data to gain a deep understanding of the content.
  • Identify keywords, phrases, or themes relevant to your research objectives.
  • Create codes in MAXQDA to represent these keywords, phrases, or themes.
  • Apply the created codes to specific segments of your data by highlighting or selecting the relevant text.
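Conceptually, the last two steps amount to maintaining a mapping from text segments to the codes applied to them. A minimal sketch of that idea in plain Python (an illustration for this article, not MAXQDA’s file format) might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A highlighted passage from one document, with the codes applied to it."""
    document: str
    text: str
    codes: list = field(default_factory=list)

# Steps 4-5 of the workflow above: create a code and apply it to a passage.
seg = Segment("interview_01.txt",
              "There was no bus to the hospital that day.")
seg.codes.append("access barrier: transport")

print(seg.codes)  # ['access barrier: transport']
```

Everything a coding tool adds on top of this – hierarchy, colors, retrieval, memos – is built around that basic segment-to-codes mapping.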

MAXQDA’s flexibility and organization features make it an excellent choice for coding qualitative data efficiently and effectively.

Qualitative coding methods are techniques used to analyze and categorize qualitative data. These methods help researchers make sense of the data and identify key themes, patterns, and insights. MAXQDA supports various qualitative coding methods, making it a versatile tool for researchers.

Some common qualitative coding methods include:

  • Thematic Coding: This involves identifying and categorizing recurring themes or topics in the data.
  • Content Analysis: Researchers analyze the content of the data to understand its meaning and context.
  • Grounded Theory: A systematic approach to developing theories based on the data itself.
  • Framework Analysis: A method for structuring and analyzing large amounts of qualitative data.
  • Constant Comparative Analysis: Comparing new data with existing data to refine codes and categories.

MAXQDA’s tools and features are designed to support these coding methods, allowing researchers to choose the approach that best suits their research goals.

Qualitative coding is the process of systematically analyzing and categorizing qualitative data to identify patterns, themes, and insights. It involves assigning codes or labels to specific segments of qualitative data, such as interview transcripts, survey responses, or text documents. These codes help researchers organize and make sense of the data, facilitating data interpretation and the extraction of meaningful information.

MAXQDA is a valuable tool for qualitative coding as it provides researchers with the means to create, apply, and manage codes efficiently, allowing for a more structured and rigorous analysis of qualitative data.

For Mac users looking for qualitative coding software, MAXQDA is an excellent choice. MAXQDA offers a Mac version of its software that is fully compatible with macOS, providing Mac users with a seamless qualitative data analysis experience.

With MAXQDA for Mac, researchers can take advantage of all the features and capabilities that make MAXQDA a top choice in qualitative coding software. Whether you’re conducting research on a Mac computer or prefer the Mac environment, MAXQDA is a reliable and efficient solution.

For students venturing into qualitative research, MAXQDA is an ideal qualitative coding software choice. MAXQDA offers a user-friendly interface and a range of resources designed to support students in their research journey. It provides academic licenses at affordable prices, making it accessible to students on a budget.

MAXQDA’s intuitive design and comprehensive features empower students to code, analyze, and interpret qualitative data effectively. It also offers educational resources and tutorials to help students get started with qualitative research and coding.

Qualitative coding software, such as MAXQDA, offers a range of key features that are essential for effective qualitative data analysis. Some of the key features of qualitative coding software include:

  • Code Management: The ability to create, organize, and manage codes for data segmentation.
  • Data Import: The capability to import various types of qualitative data, including text, audio, and video files.
  • Annotation Tools: Tools for adding comments, annotations, and notes to the data for context and analysis.
  • Data Visualization: Graphs, charts, and visual aids to represent and explore data patterns.
  • Search and Retrieval: Efficient search functions to locate specific data segments or codes within large datasets.
  • Collaboration Tools: Features for collaborative coding and analysis with team members.
  • Reporting and Export: The ability to generate reports, export data, and share findings with others.

MAXQDA excels in offering these features and more, making it a comprehensive solution for qualitative coding and analysis.

Qualitative coding software, like MAXQDA, plays a crucial role in assisting researchers with qualitative data interpretation. Here’s how:

1. Structure and Organization: Coding software helps researchers organize their qualitative data into manageable segments by assigning codes and categories. This structured approach facilitates easier data interpretation by breaking down complex information into meaningful units.

2. Pattern Recognition: By coding and categorizing data, researchers can quickly identify patterns, trends, and recurring themes. MAXQDA’s tools allow for easy visualization of these patterns, aiding in data interpretation.

3. Cross-Referencing: Qualitative coding software allows researchers to cross-reference data segments, codes, and categories. This cross-referencing helps in exploring relationships and connections within the data, leading to deeper insights.

4. Collaboration: Collaborative coding and analysis tools in software like MAXQDA enable researchers to work together, share interpretations, and refine their understanding of the data collectively.

In summary, qualitative coding software streamlines the process of data interpretation by providing tools and features that enhance the researcher’s ability to uncover meaningful insights from qualitative data.

Yes, qualitative coding software, including MAXQDA, is suitable for both beginners and experienced researchers. MAXQDA is known for its user-friendly interface, making it accessible to those who are new to qualitative research and coding.

For beginners, MAXQDA provides educational resources and tutorials to help them get started with qualitative data analysis. It offers a gentle learning curve, allowing novice researchers to quickly grasp the essentials of coding and analysis.

Experienced researchers benefit from MAXQDA’s advanced features and capabilities. It offers a robust set of tools for in-depth analysis, data visualization, and complex coding tasks. Researchers with extensive experience can leverage these features to enhance the rigor and depth of their qualitative research.

In essence, MAXQDA caters to researchers at all levels, making it a versatile choice for qualitative coding.

Qualitative coding can be done without software, but it can be a more time-consuming and labor-intensive process. When coding without software, researchers typically rely on manual methods such as highlighting, underlining, or physically tagging segments of printed text.

However, using qualitative coding software like MAXQDA offers several advantages. It streamlines the coding process, provides tools for efficient organization and retrieval of coded data, and offers features like data visualization and collaboration. These benefits can significantly enhance the quality and efficiency of qualitative coding.

While it’s possible to code qualitatively without software, utilizing a dedicated tool like MAXQDA can save researchers time and effort and lead to more rigorous and comprehensive data analysis.

Neurol Res Pract


How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data: not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in "research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)" [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as "a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

[Fig. 1: Iterative research process]

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews, the focus on the different (blocks of) questions may differ, and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer them, or out of concern about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing unexpected topics to emerge and be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

[Fig. 2: Possible combination of data collection methods]

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described in this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these first need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
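The claim that "coding makes raw data sortable" can be made concrete with a short sketch (an illustration written for this article, independent of any particular software): given coded segments from several sources, retrieve everything tagged with one code, grouped by source.

```python
from collections import defaultdict

def retrieve(coded_segments, code):
    """Return all segments carrying `code`, grouped by data source."""
    by_source = defaultdict(list)
    for source, text, codes in coded_segments:
        if code in codes:
            by_source[source].append(text)
    return dict(by_source)

# Hypothetical segments about a tele-neurology consultation, coded
# across three of the data sources discussed above.
corpus = [
    ("SOP", "Tele-neurology consultation must start within 10 minutes.",
     {"tele-neurology", "timing"}),
    ("observation", "Consultant joined via video link at 14:02.",
     {"tele-neurology"}),
    ("staff interview", "The video link often fails during night shifts.",
     {"tele-neurology", "infrastructure"}),
]

print(retrieve(corpus, "tele-neurology"))
```

Pulling all "tele-neurology" segments side by side is exactly what enables the comparison of SOPs, observed practice and reported experience described in the EVT example above.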

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2, also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help to inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting is relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or what length of interview works best with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of a too large sample size as well as the possibility (or probability) of selecting “ quiet, uncooperative or inarticulate individuals ” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess to what extent the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
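Where such a score is reported for two coders, it is typically Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation, using hypothetical code labels for six text segments:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of segments both coders labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's label distribution
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["delay", "delay", "teamwork", "delay", "teamwork", "other"]
b = ["delay", "teamwork", "teamwork", "delay", "teamwork", "other"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

As the text notes, such a number says little on its own; it is most useful as a prompt for the coders to discuss where and why their code applications diverged.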

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Acknowledgements

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Data Visualization Techniques for Qualitative Research

Data visualization techniques play a crucial role in qualitative research, helping researchers explore and communicate patterns, relationships, and insights within their data. Qualitative data, conveyed through narratives, descriptions, and quotations, differs significantly from quantitative numerical data and therefore calls for distinct display strategies. The richness of qualitative data lies in its contextual nuances, which must be preserved in visual representations so that underlying meanings and relationships are accurately reflected; this depth of information, however, makes it challenging to keep visualizations clear and insightful. Unlike standardized quantitative data, qualitative data is unstructured and varied, which complicates producing consistent and informative visual representations. To fully comprehend complex events, qualitative research employs an exploratory and interpretive methodology.

In this post, we will look into some data visualization techniques for presenting qualitative data.

Table of Contents

  • Different Types of Techniques for Visualizing Qualitative Data
    1. Word Clouds
    2. Text Networks
    3. Heatmaps
    4. Chronology Charts
    5. Mind Maps and Concept Maps
    6. Flow Charts
    7. Narrative Visualizations
  • Importance of Data Visualization in Qualitative Research
  • Best Practices for Visualizing Qualitative Data
  • FAQs

Different Types of Techniques for Visualizing Qualitative Data

Qualitative data lends itself especially well to the following visualization techniques:

1. Word Clouds

A word cloud is a visual representation of text data in which word frequency determines the size and prominence of each word. Word clouds provide a brief synopsis of important ideas and terms and can be particularly helpful for large datasets, such as social media analyses or open-ended survey responses.


  • Identifying key themes or topics in qualitative data.
  • Visualizing the frequency of words or concepts within a text corpus.
  • Highlighting prominent terms in interviews, surveys, or open-ended responses.
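The frequency-to-size mapping at the heart of a word cloud can be sketched with the standard library alone. The survey responses, stopword list, and size formula below are hypothetical; dedicated libraries would handle layout and rendering:

```python
import re
from collections import Counter

# Hypothetical open-ended survey responses
responses = [
    "Waiting times were too long and the waiting room was crowded.",
    "Staff were friendly but waiting felt endless.",
    "The room was clean; staff answered every question.",
]

# A minimal stopword list for this toy example
STOPWORDS = {"the", "and", "was", "were", "but", "a", "too", "felt", "every"}

words = re.findall(r"[a-z]+", " ".join(responses).lower())
freq = Counter(w for w in words if w not in STOPWORDS)

# Map frequency to a font size: the most frequent word is drawn largest
max_n = max(freq.values())
sizes = {w: 12 + 28 * n / max_n for w, n in freq.most_common(10)}
for w, size in sizes.items():
    print(f"{w:10s} count={freq[w]} size={size:.0f}pt")
```

Here "waiting" occurs most often and would therefore dominate the cloud, which is exactly how prominent themes surface visually.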

2. Text Networks

  • Revealing relationships between words or concepts in textual data.
  • They support the identification of connections, overarching themes, and conceptual co-occurrences in the data.
  • Text networks are useful for investigating semantic structures and may be used to the creation of theories or the comprehension of intricate connections.
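A text network can be built from co-occurrence counts: nodes are codes or concepts, and edge weights count how often two appear together. A minimal sketch, assuming hypothetical codes assigned per interview segment:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sets of codes assigned to each interview segment
segments = [
    {"delay", "frustration"},
    {"delay", "communication"},
    {"communication", "teamwork"},
    {"delay", "frustration", "communication"},
]

# Edge weight = number of segments in which two codes co-occur
edges = Counter()
for codes in segments:
    for pair in combinations(sorted(codes), 2):
        edges[pair] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}  (weight {weight})")
```

The resulting weighted edge list is the input a graph-drawing tool would use to reveal overarching themes and conceptual co-occurrences.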

3. Heatmaps

Heatmaps represent data values through colour variation within a matrix. In qualitative research they are used to illustrate the prevalence of certain themes or codes across variables or time periods. Heatmaps provide a concise visual synopsis that facilitates the identification of noteworthy regions or unexpected outcomes.


  • Identifying patterns or clusters in qualitative data.
  • Visualizing the intensity or density of themes or concepts across multiple dimensions.
  • Highlighting areas of interest or divergence within a dataset.
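The matrix behind such a heatmap is simply counts of coded themes per dimension. A minimal sketch with hypothetical counts per observation period, rendered with a crude character scale in place of colour:

```python
# Hypothetical counts of coded themes per observation period
data = {
    "delay":    {"day": 8, "night": 3, "weekend": 6},
    "teamwork": {"day": 5, "night": 7, "weekend": 2},
}
periods = ["day", "night", "weekend"]
shades = " .:*#"  # denser character = higher count

peak = max(v for row in data.values() for v in row.values())
print("theme     " + "  ".join(f"{p:>7s}" for p in periods))
for theme, row in data.items():
    cells = "  ".join(f"{shades[round(4 * row[p] / peak)]:>7s}" for p in periods)
    print(f"{theme:10s}{cells}")
```

A plotting library would map the same matrix onto a colour scale, but the structure (themes as rows, periods as columns, counts as intensity) is identical.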

4. Chronology Charts

Chronology charts are a great tool for showing how themes or ideas change over time, particularly in studies that follow a subject over time or examine how an idea or phenomenon develops.


  • Illustrating the chronological order of events, actions, or developments.
  • Visualizing temporal patterns, trends, or changes over time.
  • Analyzing the sequence of activities or decision-making processes.
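The data behind a chronology chart is just coded events ordered by date; a minimal sketch with hypothetical events from a longitudinal study:

```python
from datetime import date

# Hypothetical coded events from longitudinal interviews with one participant
events = [
    (date(2021, 6, 1), "second interview", "acceptance"),
    (date(2020, 9, 15), "first interview", "denial"),
    (date(2022, 1, 10), "follow-up", "advocacy"),
]

# Sorting by date yields the timeline a chronology chart would plot
for when, source, theme in sorted(events):
    print(f"{when.isoformat()}  {source:16s} theme: {theme}")
```

Plotted on a time axis, this ordered sequence makes the shift in dominant themes over the study period immediately visible.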

5. Mind Maps and Concept Maps

  • Organizing and structuring complex qualitative data into hierarchical frameworks.
  • Visualizing relationships between concepts, ideas, or components of a system.
  • Brainstorming ideas, exploring connections, and generating new insights.
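A mind map's hierarchy maps naturally onto a nested data structure. A minimal sketch that prints a hypothetical code tree as an indented outline:

```python
# Hypothetical hierarchical code tree (the structure behind a mind map)
tree = {
    "patient experience": {
        "access": {"waiting times": {}, "transport": {}},
        "communication": {"with staff": {}, "with family": {}},
    }
}

def render(node, depth=0):
    """Return the tree as indented outline lines, depth-first."""
    lines = []
    for label, children in node.items():
        lines.append("  " * depth + "- " + label)
        lines.extend(render(children, depth + 1))
    return lines

print("\n".join(render(tree)))
```

Mind-mapping tools draw the same structure radially, but the nested-dictionary representation is what lets codes be reorganised as the analysis evolves.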

6. Flow Charts

Flow charts are an effective tool in data visualization approaches for qualitative research. They provide a visual depiction of processes, workflows, and linkages, making complicated information more accessible and understandable. Flow charts assist in depicting the phases of the research process, from data collection to analysis. They are also used to map narrative structures, demonstrating how stories or events are related within the data, and to visualize the sequence of steps or stages in a workflow. They are especially helpful in clarifying complex systems or pathways in a visual format.
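One lightweight way to produce such a chart is to emit Graphviz DOT text, which any DOT renderer can turn into a diagram. A minimal sketch of a hypothetical research workflow:

```python
# Hypothetical research-process steps, emitted as Graphviz DOT text
steps = ["data collection", "transcription", "coding", "categorising", "reporting"]

lines = ["digraph research {", "  rankdir=LR;"]
for a, b in zip(steps, steps[1:]):
    lines.append(f'  "{a}" -> "{b}";')
lines.append("}")
dot = "\n".join(lines)
print(dot)
```

Generating the chart from a plain list of steps keeps the diagram in sync with the process description as it is revised.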


7. Narrative Visualizations

Narrative visualizations are effective data visualization strategies for qualitative research. They blend narrative elements with visual data representation to communicate ideas and conclusions in an engaging and intelligible way. Narrative visualizations lead the audience through the data, offering context, emphasizing key results, and making difficult material more understandable. This strategy is especially useful in qualitative research, where data often consists of textual material, interviews, and observational notes.

Narrative visualizations enhance understanding of complicated qualitative data by:


  • Combining text, visuals, and multimedia elements to engage audiences.
  • Exploring complex qualitative insights through interactive storytelling.
  • Narrative visualizations help to communicate qualitative results to a larger audience, including non-experts.

Importance of Data Visualization in Qualitative Research

Data visualization is essential in qualitative research for several reasons:

  • Improved Communication : Compared to text alone, visualization is a more effective tool for explaining complicated concepts and connections. Graphs, charts, and diagrams may help make complex relationships easier to understand for a wider range of people, including those with different degrees of subject matter experience.
  • Promote Insights : Patterns and trends that would otherwise go undetected in raw qualitative data can be made visible via the use of visual representations of data. With the comprehensive perspective that visualizations provide, researchers may more easily spot relationships, anomalies, and patterns.
  • Engage Audiences : Stylish, well-thought-out images have the power to pique the attention of both the general audience and stakeholders. This interaction promotes further investigation and conversation as well as a better comprehension of the study results.
  • Memorability : People tend to remember images better than words. The possibility that important ideas will be remembered and maintained by the audience is increased when study results are presented graphically.
  • Assist in Decision-Making : By offering a concise summary of the study findings, visual data representations help in well-informed decision-making. For stakeholders and policymakers who must analyze and act upon study findings, this is very helpful.

Best Practices for Visualizing Qualitative Data

To ensure efficient and accurate representation of qualitative data, consider the recommended practices below:

  • Clarity and Simplicity: To make the message understandable and obvious, aim for simplicity in your visualizations. Refrain from overcomplication, since it might overshadow the main points.
  • Preserve Context: Make sure the original data’s richness and context are preserved in the display. Avoid simplifying things too much. Where needed, use more language or notes to help explain.
  • Effective Use of Color : While color may improve understanding, too much of it or the wrong kind of color can take away from the content. Use color deliberately and consistently to draw attention to connections or patterns.
  • Label and Annotate : To aid viewers in understanding, provide relevant labels, titles, and annotations. Make sure the main points can be understood even in the absence of more explanation and that the visuals are self-explanatory.
  • Maintain Consistency : To achieve a unified and polished appearance, keep design components, typefaces, and color schemes consistent throughout visualizations. Consistency improves the overall visual appeal and facilitates comparisons.
  • Investigate Several Representations : Try out several visualization strategies to see which one best suits your data. Instead of depending on just pre-made chart types, think about creating custom visualizations that are suited to your particular dataset.

For qualitative researchers, data visualization is an invaluable tool that helps them make sense of complicated, rich data and detect patterns as well as explain results. Researchers are able to adequately portray and study the intricacies and complexity of human experiences, actions, and views by using suitable approaches, best practices, and developing technology. Data visualization will play a more and more important role in supporting comprehension, teamwork, and powerful narrative as qualitative research develops.

FAQs

Which data visualization trends are we seeing emerge for qualitative research?

Immersion and interactive visualizations, automated visualization generation, multimodal and multimedia visualizations, collaborative and participatory visualizations, integration with mixed methods research, explainable AI and interpretable visualizations, and the democratization of visualization tools are some of the emerging trends in visualization.

How can researchers make sure that data visualization is done ethically and responsibly?

Researchers should obtain informed consent, give top priority to participant privacy and confidentiality, maintain interpretive integrity, avoid biased or misleading visualizations, take cultural sensitivity into account, and create accessible visualizations.

What abilities are required for effective data visualization in qualitative research?

Understanding qualitative research techniques, interpreting and analyzing data, visual communication and design concepts, developing narratives and stories, and being proficient with pertinent visualization tools and technologies are all crucial abilities.

How might intricate qualitative data be made simpler for efficient visualization?

In order to simplify complicated qualitative data, one must concentrate on the most important linkages and insights found in the data. Decide which quotations, themes, or patterns best capture the main idea of your study. Make use of visuals like word clouds, bar charts, or mind maps that provide a clear and succinct summary. To make sure your target audience can comprehend and use the visualization, think about adding further information or comments.

What typical mistakes should one avoid when putting qualitative data into a visual format?

Oversimplification, data distortion or misrepresentation, and context-free presentation are some common mistakes to avoid. Make sure the intricacy and subtleties of the original data are preserved in your representations. Keep ethical issues in mind, particularly those pertaining to participant privacy and informed consent. Furthermore, stay away from improper or overly complicated graphics that might mislead or confuse your viewers. Make your visual representations accurate, simple, and clear.


  • Open access
  • Published: 11 May 2024

How do we understand the value of drug checking as a component of harm reduction services? A qualitative exploration of client and provider perspectives

  • Lissa Moran 1 ,
  • Jeff Ondocsin 1 , 2 ,
  • Simon Outram 1 ,
  • Daniel Ciccarone 2 ,
  • Daniel Werb 3 , 4 ,
  • Nicole Holm 2 &
  • Emily A. Arnold 1  

Harm Reduction Journal, volume 21, article number 92 (2024)


Mortality related to opioid overdose in the U.S. has risen sharply in the past decade. In California, opioid overdose death rates more than tripled from 2018 to 2021, and deaths from synthetic opioids such as fentanyl increased more than seven times in those three years alone. Heightened attention to this crisis has attracted funding and programming opportunities for prevention and harm reduction interventions. Drug checking services offer people who use drugs the opportunity to test the chemical content of their own supply, but are not widely used in North America. We report on qualitative data from providers and clients of harm reduction and drug checking services, to explore how these services are used, experienced, and considered.

We conducted in-depth semi-structured key informant interviews across two samples of drug checking stakeholders: “clients” (individuals who use drugs and receive harm reduction services) and “providers” (subject matter experts and those providing clinical and harm reduction services to people who use drugs). Provider interviews were conducted via Zoom from June-November, 2022. Client interviews were conducted in person in San Francisco over a one-week period in November 2022. Data were analyzed following the tenets of thematic analysis.

We found that the value of drug checking includes but extends well beyond overdose prevention. Participants discussed ways that drug checking can fill a regulatory vacuum, serve as a tool of informal market regulation at the community level, and empower public health surveillance systems and clinical response. We present our findings within three key themes: (1) the role of drug checking in overdose prevention; (2) benefits to the overall agency, health, and wellbeing of people who use drugs; and (3) impacts of drug checking services at the community and systems levels.

This study contributes to growing evidence of the effectiveness of drug checking services in mitigating risks associated with substance use, including overdose, through enabling people who use and sell drugs to test their own supply. It further contributes to discussions around the utility of drug checking and harm reduction, in order to inform legislation and funding allocation.

The opioid crisis in the U.S. consists of multiple overlapping and inter-related waves of surging opioid exposure, dependency, overdose, and death rates. Each wave has emerged from different eras of an evolving drug market and multiple intersecting contextual factors such as trends in pharmaceutical manufacturing and prescription, socioeconomic inequities, and positive supply shocks of both licit and illicit opioids [ 1 , 2 , 3 ]. Though its history can be traced back to the 1980s and 1990s, the past decade has redefined the crisis [ 4 ].

By the time the U.S. Department of Health and Human Services (HHS) declared the opioid crisis a public health emergency in 2017 [ 5 ], a wave of unprecedented magnitude had been on the rise for nearly 4 years, marked by the rapid proliferation of fentanyl and synthetic analogues into the drug market [ 4 , 6 ]. Even as mortality from heroin and prescription opioids leveled off or decreased, opioid overdose and death rates rose precipitously [ 6 ]. From 2018 to 2021, the rates of opioid overdose deaths in the U.S. nearly doubled, and by 2021, roughly 9 out of every 10 opioid overdose deaths in the country (88%) were fentanyl-related [ 7 ].

In California, home to the highest number of opioid-related deaths in the U.S. [ 8 ], the opioid overdose death rate curve from 2011 to 2021 tells a harrowing story. The third wave arrived later in California than in the nation as a whole, but its onset was rapid and dramatic. Opioid overdose death rates more than tripled from 2018 to 2021, and synthetic opioid (e.g., fentanyl) deaths increased 7.2-fold, accounting for 37% of opioid overdose deaths in 2018 and 86% just three years later [ 9 ].

In response, the California Department of Public Health has committed to the expansion and promotion of policies, programs, and services to combat the overdose epidemic, with a special focus on harm reduction and drug checking strategies [ 10 ].

Drug checking services

Drug checking services (DCS) have garnered particular interest as an expansion of harm reduction strategies, as they offer the opportunity for people who use drugs to test the chemical content of their own supply [ 11 , 12 ]. In doing so, people who use drugs may be afforded the possibility of changing their use behavior to remove or reduce the likelihood of harm [ 13 , 14 ]. Multiple DCS have been operating in Europe for years—particularly in venues known for high rates of recreational drug use like music festivals [ 14 , 15 ]—but are less common in North America. In the U.S. and Canada, DCS have emerged primarily in response to the needs of marginalized people who use opioids, and operate predominantly within the context of frontline services [ 16 , 17 , 18 ].

Though not mainstream or broadly implemented, studies from North America indicate that DCS are generally acceptable among people who use drugs [ 19 , 20 ], and report that both service users and providers have expressed desire for better access to DCS, legal protections for those providing and using drug checking, and advanced technologies that provide information on drug concentrations—not just drugs present—at the point of care [ 21 , 22 , 23 , 24 ]. Several studies explore the potential impact of drug checking when used at various points along the supply chain [ 25 , 26 ], with findings that suggest feasibility, acceptability, and uptake of DCS among drug sellers [ 27 ], noting particular importance to drug sellers who are embedded in their community and hold long-term trusted relationships with customers [ 28 , 29 ].

Arguably the most common and well-known drug checking modality in North America is the fentanyl testing strip (FTS), a lateral flow assay originally designed for the clinical use of detecting fentanyl in urine samples, but publicly available for several years for modified use with drug samples [ 30 , 31 , 32 , 33 ]. FTS have been a powerful tool to combat accidental fentanyl exposure: they are small, portable, relatively accessible, and detect fentanyl in minute concentrations that could still be enough to trigger an overdose in an opiate-naïve individual [ 31 , 34 ]. They have been found to be particularly useful for outreach and street use [ 13 , 25 , 35 ]. That said, FTS are not useful in the same way for those who intend to use fentanyl, for whom the overdose risk lies not in the presence of fentanyl but in its concentration and in the presence of additional adulterants like sedatives [ 36 ].

Drug checking technology has advanced, and continues to advance, such that more can be learned about the chemical components of a drug sample in a shorter period of time and in a broader array of environments [ 37 ]. Multiple drug checking modalities can inform people who use drugs about the presence of unexpected adulterants, such as benzodiazepines and xylazine, among others. The technologies offering the greatest specificity and sensitivity are Gas Chromatography Mass Spectrometry and High-Performance Liquid Chromatography, which can detect the presence and concentrations of a wide array of chemicals present in even small amounts in a sample, but must be used in a laboratory setting by a trained technician [ 37 ]. More flexible technologies have emerged, like Fourier-Transform Infrared Spectroscopy (FTIR) [ 38 ], which is semi-portable and returns information on the main chemical components of a drug sample (above 5% concentration) in a matter of minutes [ 31 ]. Paper spray mass spectrometry is more expensive than FTIR but is just as fast, and provides quantitative results [ 39 ]. Today, multi-technology-based drug checking services are available in some areas as standalone programs, or as added components to existing harm reduction centers [ 30 , 40 ].

These innovations continue to advance amidst complex and evolving social, legal, political, and funding conditions [ 11 , 21 , 41 , 42 ]. Legally, drug checking can be complicated as a public service, requiring the handling and, often, exchange of illicit drug material, the possession and distribution of which are often criminalized [ 21 ]. Harm reduction initiatives more broadly—DCS, syringe access services, naloxone distribution, HIV/HCV testing, wound care, supervised consumption sites, and medications for opioid use disorder (MOUD), among others—can at times be unpopular socially and politically, as stigma associated with addiction and drug use, combined with concerns about the goals and practices of harm reduction, can generate powerful community pushback [ 41 , 42 , 43 , 44 , 45 , 46 , 47 ]. Legislators and policymakers at local, state, and federal levels who rely on constituent support may therefore shy away from supporting various harm reduction strategies, despite endorsement from public health officials and robust evidence showing that harm reduction improves the health, survival, and recovery potential for people who use drugs, without compromising community safety [ 48 , 49 ]. At the same time, California was one of several states to bring lawsuits against opioid manufacturers, distributors, and pharmacy chains, alleging that they played an active and/or negligent role in the genesis and exacerbation of the opioid crisis [ 50 ]. Of the $43.3 billion in settlement funds that have been awarded thus far, California may receive nearly $4 billion [ 51 ]. These funds are specifically earmarked for activities that are to include “prevention, intervention, harm reduction, treatment and recovery services” [ 52 ].

As the opioid crisis reaches an unprecedented magnitude and strategies to address it are at once both a priority and a topic of controversy, we aimed to explore the value of drug checking services and their role within harm reduction more broadly. In this study, we report on qualitative data from providers and clients of harm reduction and drug checking services, to explore how these services are used, experienced, and considered. We aim to contribute to an existing qualitative evidence base exploring the value and utility of drug checking services, particularly as data are leveraged to inform political narratives, legislation, and funding allocation.

For this study, we conducted in-depth semi-structured key informant interviews across two samples: a “provider” sample and a “client” sample. The “provider” sample consisted of individuals providing clinical and harm reduction services to people who use drugs, as well as drug checking subject matter experts such as researchers and program heads. The “client” sample consisted of individuals who use drugs and were receiving harm reduction services at an agency where multiple forms of drug checking were included in the services provided.

From June to November 2022, two authors (DC & LM) conducted in-depth semi-structured key informant interviews with 11 providers—8 working in the U.S., 2 working in Canada, and one working in both countries. Included in the sample were 2 clinical providers, 4 researchers, and 5 harm reduction service providers [Table  1 ].

We employed purposive sampling of known providers first, then snowball sampling, contacting additional potential participants at informants’ recommendation. All potential participants were contacted via email and invited to participate. If the participant agreed, an appointment was made for the interview to take place over Zoom. Interviews lasted between approximately 45 and 60 min, and solicited provider perspectives on the state of the drug market in their area, the perceived needs of and challenges faced by their local client population, and their attitudes and experiences with drug checking methods and programs and integrating such programs into existing services. Verbal consent was collected at the outset of the interviews, which were then recorded. Audio from the recordings was isolated and transcribed using a secure third-party professional transcription service. All transcripts were deidentified and researchers created unique anonymous ID numbers for each participant. Participating providers were offered an honorarium of $100 in the form of a gift card. The study protocol was reviewed by the University of California San Francisco Institutional Review Board (IRB #22-36262).

Client participant ( n  = 13) recruitment and data collection took place over a one-week period in November 2022 [Table  2 ].

We employed a non-random convenience sample, recruiting from four harm reduction programs in San Francisco, where clients were approached either by interviewers (NH & JO) or by program staff who had been instructed on eligibility requirements. Eligible participants were at least 18 years of age and currently using fentanyl, heroin, or methamphetamine. Clients were excluded from eligibility if they were intoxicated or otherwise unable to provide informed consent. Given that current drug use was an eligibility requirement, we assessed intoxication as an inability to respond to simple questions, responses that were incoherent or unintelligible, or the participant indicating that they were too high to continue. Potential participants who were eligible and interested then provided formal verbal consent and were interviewed on-site. Client interviews explored participants’ history of drug use and experiences with harm reduction services, as well as their awareness of, attitudes about, and experiences with various drug checking modalities. Interviews lasted approximately 30–60 min and were recorded, then submitted to the same external third-party transcription service used for provider interviews. Participants received a $25 cash incentive as a token of appreciation for their time and expertise, and were assigned unique ID numbers to anonymize their data. This study protocol, distinct from the protocol covering provider interviews, was also reviewed and approved by the UCSF IRB (#22-36640).

Client interview transcripts were uploaded to Dedoose, a qualitative analytic program [ 53 ]. Four analysts (EA, LM, SO, and JO), two of whom were involved in data collection (LM & JO), read transcribed interviews from both client and provider data sets and drafted summaries which were then systematically reviewed as a team. Following the tenets of thematic analysis and adopting the framework developed by Miles and Huberman (1994) [ 54 ], the team collaboratively identified cross-cutting themes from interview summaries, covering areas of concordance, discordance, and particular importance, as well as exemplar and negative cases. Once major themes and sub-themes were identified and articulated, authors drafted analytic memos which consolidated and explored in detail each major theme.

Following publication of an article focused on findings from the provider data set [ 55 ], further analysis of the client data set included the development of a formal coding scheme (SO), based on a priori codes extracted from the interview guide, as well as codes reflecting themes and sub-themes identified in the summarizing process and further refined via ongoing weekly analytic meetings. Coding was led by the primary qualitative analyst (SO), with secondary coding by the client interviewer and author (JO). The application of codes was discussed regularly among all team members, focusing on discrepancies between primary and secondary coders, insights developed, and potential emergent themes. Discrepancies occurred approximately 10% of the time and were resolved through group consensus in accordance with established qualitative research methods [ 56 ].
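As an illustrative sketch only (not part of this study's methods), a dual-coding workflow like the one described above can be quantified with percent agreement and a chance-corrected statistic such as Cohen's kappa. The Python below uses entirely invented code labels and excerpt data to show the arithmetic; the study itself reports only an approximate 10% discrepancy rate resolved by consensus.

```python
# Toy illustration of inter-coder agreement for a dual-coded qualitative data set.
# All code labels and excerpt assignments below are hypothetical.

from collections import Counter

def percent_agreement(primary, secondary):
    """Share of excerpts to which both coders applied the same code."""
    matches = sum(1 for a, b in zip(primary, secondary) if a == b)
    return matches / len(primary)

def cohens_kappa(primary, secondary):
    """Chance-corrected agreement between two coders over one code per excerpt."""
    n = len(primary)
    observed = percent_agreement(primary, secondary)
    # Expected agreement if each coder assigned codes independently
    # at their own marginal rates.
    p_counts, s_counts = Counter(primary), Counter(secondary)
    expected = sum(p_counts[c] * s_counts[c] for c in p_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied to ten excerpts by each coder;
# the coders disagree on one excerpt (a ~10% discrepancy rate).
primary   = ["overdose", "agency", "market", "overdose", "agency",
             "market", "overdose", "agency", "market", "overdose"]
secondary = ["overdose", "agency", "market", "overdose", "market",
             "market", "overdose", "agency", "market", "overdose"]

print(percent_agreement(primary, secondary))        # 0.9
print(round(cohens_kappa(primary, secondary), 3))   # 0.848
```

In practice, teams often report the raw discrepancy rate (as this study does) and resolve disagreements through consensus rather than relying on kappa alone, since kappa is sensitive to how codes are distributed across excerpts.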

Through key informant interviews, we captured diverse perspectives on how existing and emerging drug checking services are being used, and their potential for future impact within the harm reduction suite of services.

We present our findings within three key themes: (1) the role of drug checking in overdose prevention; (2) benefits to the overall agency, health, and wellbeing of people who use drugs; and (3) impacts of drug checking services at the community and systems levels.

The role of drug checking in overdose prevention

Service providers and clients expressed varying opinions on the extent to which information from drug checking services would prevent overdose and, indeed, whether overdose prevention is the appropriate metric by which drug checking’s impact should be measured. Clients reported diverse experiences and perspectives on how they use (or don’t use) drug checking, and expectations for their own future use.

Fentanyl test strips

Almost all client participants reported having had some experience with fentanyl testing strips (FTS), either using them personally or seeing others use them. Attitudes about FTS varied. Some expressed concern that they are difficult to use correctly or that they have heard they may be unreliable (prone to false positives or negatives):

We were using them constantly when they were telling us that all the drugs had fentanyl in them. But then we found out that if you don’t put enough water on speed, that it can come up positive because of some chemical. [Client, 40, female].

Others reported relying on them heavily and using them often:

I’ve just got to have that insurance that there’s no fentanyl in [my drugs]. … I have a drawer. Like that? That’s all full of test strips. Usually every time I come to a needle exchange, if they have them, I grab as many as I can and just put them in the drawer. [Client, 43, male].

Spectrometry

Although many had not heard of spectrometry, spectroscopy, or anything beyond FTS, once interviewers described what a range of drug checking services could look like, clients were interested and excited about the possibilities. Some expressed interest in using mobile or site-based spectroscopy but were concerned about their safety, one expressing worry about “ judgment from the community ” or bystanders taking videos and calling the police, another wondering if they would be an “ easy target ” for law enforcement harassment. Those who reported having used FTIR as part of their harm reduction visits, however, had positive things to say:

Interviewer: And how do you feel about that testing service at the van?
Participant: I think it’s remarkably great.
Interviewer: Yeah?
Participant: Yeah. They answered my questions, exactly what I wanted to know. [Client, 66, male]

Some participants described high percentages of testing experiences coming back with a positive or unexpected result, like a client who said that he’d used the FTIR mobile service four times with meth from four different suppliers, and “ only one came back pure .”

Using drug checking results

What participants reported doing with the results of checking their drugs varied as well. Some participants spoke about specific situations where drug checking prompted them to avoid buying contaminated drugs.

Actually I just used [drug checking] yesterday. Luckily, I didn’t buy the heroin I was going to, because it tested for fentanyl . [Client, 32, male]

Other community members expressed disinterest in checking drugs, often citing a lack of realistic options for using test results in a way that made sense for them. One participant stated directly that they didn’t want to test because they didn’t want to have to not use drugs if they got a result they didn’t like:

What if it comes up with fentanyl in it? Then I bought it but I can’t do it? They’re not going to take it back, the people I bought it from. I mean even if I get them to write me a receipt, you know? [Client, 49, male]

Another client said that she was interested in drug checking generally, but wouldn’t bother if she only had a little bit and was relying on it to keep her from getting sick:

If I was trying to [check my drugs], I would do it when I had enough to do that, you know. Because if I was dope sick and I only had two hits of fentanyl, I probably would not [test]. [Client, 24, female]

Data from service provider interviews echoed these dynamics. We heard from provider participants that, broadly, drug checking services prevent overdose directly some of the time, but not all the time, by way of individual behavior change on a case-by-case basis. One provider—a clinician with a lengthy career in addiction medicine and harm reduction—echoed doubts about how common it would be for a patient to make use choices based on drug checking results, broadening the focus to personal harm reduction behavior change rather than abstinence behavior alone:

And then the question is, what do you do about it? I’ve had a patient who is, like, yeah, I tested it. It was positive for fentanyl. I go, well, what did you do? Well, we just used anyway because it’s all we had. And we had, like, the Narcan out, and I – I just felt really sleepy afterwards. … So I guess that’s the other question – if you do drug testing and it isn’t what you expect, like, you can’t take it back to the dealer and say, hey, this isn’t – I want a refund; right? So what do you do with that information? And if, you know, if you’re in withdrawal and you really need to use that drug, like, what kind of safeguards are you going to take if you decide, yeah, I’m going to go ahead and use this; right? [Clinician, U.S.]

Other service providers similarly drew a distinction between drug checking sparking behavior change that prevents overdose versus behavior change that reduces the risk of death from overdose, situating drug checking services as a set of tools that dovetail with existing personal harm reduction strategies.

The reality is, you know, people still are using their drugs. Now, a large proportion of people who use our service say that they’ll do something differently after, you know, accessing our service, so they maybe will do a test dose first, or start, like, start with a smaller dose, or use with a friend, or use at an SCS [supervised consumption site]. [Direct service provider, Canada].

Overdose prevention versus overdose rates

Interestingly, when asked for their perspective on the role of drug checking services in overdose prevention, many service providers expressed concern about a gulf between the overdose prevention they observe at the service level and what they see represented in population-level data.

Will drug checking save a life? Absolutely. Yes, for sure. Will it, at a population level, drop overdose rates? I don’t know the answer to that. [Researcher, U.S.]

Participants offered multiple explanations for this. One described challenges inherent in proving prevention, while another explained how population overdose rates can obscure the impact of drug checking programs when they operate within a rapidly-changing drug supply:

It will be very hard to prove within these prevention paradoxes. I think prevention is one of those things that is so important, but within our scientific frameworks … preventable events are so rare and on the grand scheme of things, they’re really hard to prove. … But will [DCS] save lives? Yeah. [Clinician, U.S.]

The numbers aren’t showing [an overall decrease in overdose], right, because at the same time, even though we’re offering this service, the supply is just getting worse and worse, so overdose rates are rising. [Direct service provider, Canada].

Not every participant who commented on this gulf found it wide or troubling; some instead regarded it as a neutral distance between two related but distinct constructs: one a measure of what outcomes drug checking information could yield, the other a fundamental right to that information.

It’s really a great question if we’re going to see things pan out in the numbers. I certainly hope so and I certainly think so, but I think that we just have the right to know what we’re putting into our bodies, regardless of what outcome measures are. We deserve to know what’s in our drugs . [Direct service provider, U.S.]

Similarly, a direct service provider offered a structural perspective on overdose prevention, decoupling the value of drug checking services from overdose outcomes and prioritizing instead the intrinsic value of equipping people with critical information about what they are putting in their bodies and empowering them to make decisions with as much information as possible.

I don’t really know if [drug checking] is going to decrease the rate of overdose. In my mind, the problems that contribute to overdose are prohibition, law enforcement harassment, and everything that surrounds that that creates a shitty drug supply and then prevents people from investigating it. But what [drug checking] does do, again, is this piece around like, people should know that they can find out there’s more in their drug. … I think that it just enables people to make better educated decisions around their substance use and to understand their bodies better . [Direct service provider, U.S.]

Benefits to the overall agency, health, and wellbeing of people who use drugs

Drug checking services offer users the tools to independently identify risks in the drug supply and make decisions about what to do with that information in the short and long term. Many of the service providers interviewed for this study, when asked how drug checking would impact overdose rates, gave some version of a reframed response, repositioning the focus from the drug use decisions themselves to the importance of information in fortifying the overall agency, health, and wellbeing of people who use drugs.

The provider quoted in the above section went on to reflect on the intrinsic value of giving people information, arguing that it contributes to essential experiences of bodily autonomy and health equity:

What’s really important to me as well is just sort of building this momentum around people feeling entitled to bodily autonomy and seeing that [drug checking] is a part of [that], and having folks know that, yeah, they fucking deserve to have this information. They are entitled to know what is in their stuff. And so, that’s not the only piece to health equity and justice around substances and substance use, but I think that it’s a significant piece. [Direct service provider, U.S.]

Knowledge of what is in their drugs can also confirm users’ internal experience. One provider, who had piloted an early drug checking intervention in a major metropolitan area in the U.S., believed that drug checking for people who use drugs offers confirmation of the embodied experience of their substance use, which in this provider’s experience was often regarded with skepticism by health workers:

I think that people are able to connect experiences that they’re feeling in their body with real information. And I think that actually validates the really organic knowledge and experiential knowledge of drug users as the true experts about drugs. You know, when we were doing our project in [city] and fentanyl was not everywhere [yet]—almost 100% of the time, if someone brought us a sample and said, “I think this has fentanyl in it,” it was true. … It validates experience where people’s experiential knowledge is not really validated by an educational system. It’s always this kind of thing where public health people are telling drug users what’s true. And drug checking sort of validates that drug users actually know what’s true, and we’re just using science to confirm it. [Direct service provider, U.S.]

Client interviews echoed this theme. Several clients recounted experiences that illustrated how navigating the drug market is becoming increasingly difficult, and that drug checking provides an important tool that they can pair with their own instincts and expertise as they try to keep themselves safe.

I can look at it and I can be like, “Wait a minute, we might want to test that.” Because speed and fentanyl are different. They actually look different than the other one, so when I start seeing traces of fentanyl being in the speed, I go, “We need to check that before we do any of it.” And, hey, sometimes I’m wrong. [Client, 43, male]

The [meth] that was in the medicine bottle [tested positive for fentanyl], yeah. But I kind of knew it was going to because I packed a bowl right before and if it’s dirty … yeah, the color starts changing wrong right away. [Client, 43, male]

I like that [drug checking] gives us some certainty of what’s in the drug … like with the heroin, there was stuff in that that just did not feel good. I’d love to know what they were cutting that stuff with. We used to joke it was shoe polish because it was so dark and dirty, but it’s really important what you put in your body . [Client, 48, female]

Our client data further provide evidence that people who use drugs are making health-related decisions for themselves and care about their own health and wellbeing. Woven throughout community member interviews were examples of health-seeking decision-making in users’ everyday lives, demonstrating agency in considering health behaviors and expressing both implicitly and explicitly a desire to care for themselves. Examples of these pro-health micro-decisions include choosing not to smoke out of foil (it’s “ not healthy to smoke out of ” and “ it’s going to give us Alzheimer’s or something ”) or reducing smoking marijuana due to a “ sensitive ” respiratory system. One informant laid out explicitly their hopes for their future, shaped too by an acute awareness of the risks of the current drug market:

I don’t want to be a statistic out here. I want to go back to regular life and experience all the rest of the highs that there still are out there before I die. I want to jump out of an airplane, or take a balloon ride, or ride more rollercoasters. … I don’t want to limit myself to one freaking high. … it’s not worth it anymore at all. … You’d never OD on meth before. Meth and weed were two things you just didn’t overdose on. If you did too much, you passed out and you slept it off and that was it. Now, no matter what drugs you’re doing, every time you use, it’s a 50–50 chance that you could die. [Client, 49, female]

These excerpts from client interviews highlight the demand among potential DCS users for strategies that contribute to their agency, health, and wellbeing, even within the context of continued drug use in the short- or long-term.

Impacts of drug checking services at community and systems levels

In addition to use at the individual level, participants talked extensively about the ways that they experience and imagine DCS having an impact at community and systems levels. They described the ways that drug checking could facilitate upstream regulation of the drug market, how the information and transparency made possible by checking drugs can fill a policy and regulatory vacuum, and how drug checking can empower public health surveillance systems and clinical response.

Community level regulation of the drug market

Multiple informants, both service providers and clients, reflected on the use—or potential use—of drug checking as a grassroots tool to regulate the drug market.

Participants talked about using DCS, or imagining they could use them, as a vetting tool for sellers or suppliers.

And if people could get their shit tested, almost every time if not every time, not only would it help them to be safer by them regulating themselves and knowing what’s in their stuff … But I feel like if they knew exactly what was in it, they could go tell their guys that they got it from, “Look, man, I’m not buying that shit anymore if it’s like that. If that shit -- if this or that’s in it or whatever. Or if you don’t, whatever, I’m not buying it from you. I’m buying it from someone else.” And that might even make them be… It’ll hold them more accountable. [Client, 32, male]

This use was so important to one participant that they expressed interest in their samples being sent for more extensive in-lab spectrometry testing that could give them greater detail about the compounds and amounts in their sample:

Hey, [a full spectrometry report] may take a week, but at least in that week, I find out if I should go back to that person or not. [Client, 43, male]

Client participants frequently referred to DCS as a tool to “keep [suppliers] honest”; that is, as informal regulatory pressure on currently unregulated illegal drug markets. Some reported that they spread the word if drugs from a supplier come up contaminated or low-grade. One participant, who uses fentanyl, reported using FTS to ensure that what they are about to buy is, indeed, fentanyl:

I keep them [FTS] around. … Then I say, “Can I test it?” and I test it in front of them. And like some of it’s turned up negative. And so I totally outed them out on the block with it. It pisses them off – it kind of keeps them honest. … When you got a bunch of test strips, I can go down the line and keep, yeah, at least trying to keep them honest, you know. I got a pile of those things right now. That’s actually what I use them for. [Client, 40, male]

Of particular value, according to our participants, was the idea that spectrometry would provide formal documentation of drugs’ contents. Analytical evidence that something was either dangerously contaminated or not what the seller claimed it to be can shift the balance of power in the transactional dynamic, placing upstream pressure on suppliers to better monitor what they are contributing to the market.

If you could get results that are on paper or on a text or on a whatever, then you could bring it to them that, “Look, dude. I’m not fucking around. You need to make this shit right or I’m not buying it anymore.” That would be a game-changer . [Client, 32, male]

From the service provider standpoint, one participant, a drug checking technician and program manager with a longstanding history in their city’s drug scene, identified similar opportunities for DCS to impact the drug market, were it made easily accessible to those at multiple points in the drug supply chain in addition to consumers.

It’s not just people who are consuming the drugs that can use the service. It’s also people who are selling them. And so, oftentimes people who are not essentially the first or second hands that are creating the substance and then moving it down the chain towards the end consumer, they don’t know what is in their product. For folks who are selling drugs, if they’re able to come and get an ingredient list, they can then kind of know what to say to folks who are buying. [Direct service provider, U.S.]

This was not discussed as just a hypothetical. One informant who sells drugs validated this use as feasible and valuable:

I want to make sure what I’m buying is what it is. … I do sell it myself, so [spectrometry]’s a good service because that’s what I want to know is the chemical balance as to how much it is and how much it isn’t and whether it’s good every time. [Client, 66, male]

Filling a policy and regulatory vacuum

In the absence of a government or regulatory body that will monitor and report on the verified contents of illicit drugs, our data suggest that drug checking services, and spectrometry in particular, may be filling a policy and regulatory vacuum.

Clients likened the idea of having access to a list of drugs present in a sample to knowing ingredients of something that they would eat.

I mean we know what’s in our food, right? The packaging is all labeled and the ingredients are listed. It’s just too important, especially with drugs. Especially because we don’t know who’s making them. We don’t know exactly where they’re coming from. And every single one is different. Every week is different. Even if you buy it from the same person all the time, they’re always having something different. Maybe you’ll have the same thing twice or three times but that’s it. [Client, 48, female]

Providers, meanwhile, explicitly framed the value of drug checking within the context of an unmet regulatory need. One service provider qualified many of their statements about drug checking services with “until prohibition goes away,” situating DCS as being necessary only in a regulatory vacuum. Another spoke more directly to the relationship between drug checking and regulation:

And with drugs, because of prohibition, we just have this unknown, unregulated supply, and people are – what they’re putting in their bodies and what they’re purchasing is obscured, right? And so, drug checking is like a series of sort of imperfect tools to help consumers of drugs regain a little bit of control in the form of information around what it is that they are using. … And there’s a very good argument that, if we had some kind of safe, regulated supply, we wouldn’t need drug checking at all, which is true. [Direct service provider, U.S.]

Empowering public health surveillance systems and clinical response

Data from our interviews suggest that drug checking technologies and programming may also contribute meaningfully at a structural level to public health surveillance systems and clinical response. Aggregated sample results provide real-time data about what drug compositions are trending across regions, and what the clinical implications may be for providers treating clients who use drugs [ 57 ]. One drug checking program team posted results to their website in the hopes of informing local clinicians and public health policy makers about what was circulating in the drug supply. This program manager talked about making results available “at the societal level”:

And then at the kind of societal level what we do … [is] every other week we take all of the results from the samples that we’ve checked, and we combine them, and then we put out a report and update our website about, like, what’s circulating in the drug supply. So we talk about, you know, trends in the drug supply over that period, and new drugs that have been introduced, and what those drugs could mean, that type of thing. So service doesn’t only benefit individuals, but it also benefits the larger community by being able to say, okay, this is what we’re seeing. If you can’t access the service, you still at least know, you know, what is circulating. [Direct service provider, Canada]

Community members expressed an awareness of this function. One participant cited drug checking’s role in a larger tracking network as one of the things they value most about the service:

I liked a lot about [drug checking]. One, that it was available in the first place. Two, that it was not just doing its own thing. It was part of a larger network that was keeping track of what drugs were popping up on the streets and what their makeup was. I really like that that’s happening. [Client, 30, male]

At the point-of-service level, provider informants discussed significant benefits that drug checking could provide to clinicians and other medical professionals who work closely with people who use drugs. This informant posited specifically that having more detailed knowledge about what was circulating in the drug supply could help clinicians better formulate strategies for managing opioid use disorder and transitioning patients onto MOUD:

Understanding what’s actually in the supply… allows clinicians to tailor the care that they are providing to people who use drugs. So, you know, if they know that the average amount of fentanyl in a fentanyl sample is this and they want to transition someone off the unregulated drug supply onto, like, a pharmaceutical alternative, well, what pharmaceutical alternative is actually suitable based on what they’ve been using? [Direct service provider, Canada]

This is especially critical given the significant difficulties that have been recently reported when transitioning people using fentanyl to appropriate longitudinal services [ 58 ]. A provider we interviewed who runs a mail-based drug checking service in the U.S. reported that developing a more thorough knowledge of the drug supply outside of the current surveillance panoply may provide important clinical toxicology assistance to help physicians connect health outcomes to specific substances or components of the drug supply, and more quickly provide tailored treatment:

There’s one other really big one for me, which is that it allows us to link specific physiological harms with specific chemicals. So, we’re not just talking about dope anymore. We’re talking about this component of dope causing this specific reaction. What we have been able to do is, we’ll get calls from our central hospital on campus, and they’ll say, “We have this patient with an idiosyncratic presentation. Boom, boom, boom, boom, boom, boom. Here it is. We think it might be… You know, they’ve been injecting this, this, and this. We have some of their samples. Can we get them tested?” Or if they don’t have the samples, they’re like, “This is what the symptoms are. This is where they’re from. What are you seeing about the drug supply in their area?” And I can be like, “Well, yeah, there’s been a spike in levamisole in that area or xylazine,” you know, whatever it is. And then they can get to treatment quicker because the physicians have a more specific knowledge about the ideology of the harm that they’re observing in clinic. [Researcher, U.S.]

Negative cases

While the vast majority of participant responses reflected positive experiences with or attitudes about DCS, some participants additionally expressed ambivalence or concern. Many of these perspectives are embedded within the themes reported above, but deserve reiteration: service users expressed concerns about the accuracy of drug checking technologies, their privacy and safety relative to community stigma and law enforcement, and anxiety about having to make hard choices about drug use in the face of an unexpected result. Service providers expressed concern about the “then what” of drug checking, citing constrained choices and limits to what could be realistically expected in terms of behavior change without other supports in place. Some further lamented the challenges of translating the benefits of what they were seeing in practice to what is visible to a broader audience.

Not included in the above findings, but important to note, are two additional concerns that arose in interviews. First, service users and providers cautioned that the street drug supply changes so quickly that new compounds may be showing up on the street before they are added to spectrometry libraries, potentially limiting the technologies’ ability to accurately identify contaminants. Second, one provider, a clinician with a longstanding career in addiction medicine and harm reduction, closed their interview with a somber caution against decontextualizing drug checking from a broader commitment to multi-method harm reduction, health equity, and social justice.

[I worry that] we’re just throwing yet another technology at a much bigger problem. My fear is that people will say, oh, now we have drug checking, so now we can stop trying to dismantle, you know, structures of racism and oppression in society, right? We can stop looking for homes for people because we have this technology that’s going to prevent people from dying. … It doesn’t work that way. [Clinician, U.S.]

Discussion

While the magnitude of the opioid crisis is often communicated in terms of overdose and death rates, the harms associated with opioid use—intentional or unintentional—in an unregulated drug market extend far beyond those data points alone, and so too must the strategies deployed to combat them. Our findings demonstrate that drug checking services offer diverse benefits at the individual, community, public health, and health systems levels.

Overdose prevention and beyond

If the question is whether these technologies do and will contribute to overdose prevention, our findings suggest that the answer is yes, with some important caveats. The first is that, according to our participants, they do not prevent overdose all the time. Our findings reflect that individuals make complex and highly contextualized decisions regarding their use behavior each time they use drugs. Information about the chemical composition of a drug sample sometimes leads to decisions to abstain, but more often leads to decisions to engage in other types of harm reduction behaviors—like using with a friend rather than alone, making sure to have naloxone on hand, using at a supervised consumption site, alerting others to a bad batch, using a tester first, or avoiding a certain supplier in the future. Sometimes it leads to no observable behavior change at all.

Further, DCS have not been scaled up to meet the needs of everyone at risk for overdose; until they are, it is premature to discuss population-level prevention. This study does not claim that DCS are in and of themselves sufficient to prevent overdose, but they are clearly part of a continuum of services that can prevent overdose mortality.

Many participants also took care to note that the needs of people who use drugs are not solely to avoid overdose; people navigating drug use are whole people, and the stigmatization and criminalization of drug use restrict their access to a multitude of essential needs and liberties, like health care, housing, employment, agency, and a host of social and legal protections. Access to information that contributes to agency and autonomy, and enables more informed decision-making, is an essential service regardless of other outcomes.

Of course, among harm reductionists and researchers acquainted with the diverse and dynamic ways that harm reduction functions within communities, this is not news. Our findings reflect and reinforce much of the existing evidence from studies aiming to understand the role of drug checking within the larger constellation of harm reduction and, indeed, the role of harm reduction itself.

One recent qualitative study in particular reported themes with striking similarities to the prevailing themes from our interviews. Wallace et al. [ 59 ] explored the potential impacts of community drug checking on prospective service users, finding drug checking to “increase quality control in an unregulated market,” “improve the health and wellbeing of people who use substances,” and “mediate policies around substance use.”

Our findings further add to existing evidence that links drug checking with consumer empowerment within an opaque drug market [ 25 , 26 , 29 ] and underline the reciprocal relationship between individual agency and the adoption of harm reduction strategies [ 46 , 60 , 61 ].

Of note is the shifting context in which many existing drug checking studies, including ours, are situated. In some areas, fentanyl appears most often as an unwanted adulterant in another drug—be it a non-opioid or a less potent opioid like heroin—and DCS are used primarily for fentanyl avoidance [ 13 , 19 ]. Increasingly, however, pockets of consumers prefer fentanyl, as seen in our San Francisco client sample and within populations reflected in recent drug checking studies. Our data echo the broader finding that drug checking technologies are likely to be used differently by fentanyl-seeking opioid users versus fentanyl-avoiding opioid users, and differently still among those using stimulants, psychedelics, or other non-opioid drugs [ 22 , 62 ].

On the subject of behavior change—whether and how drug checking can be understood to prompt changes in drug use behavior—our findings align with existing evidence showing that drug checking is at times followed by contaminated drug disposal, and at times by the employment of personal harm reduction techniques such as spreading information within the community [ 30 , 63 ] and reducing polysubstance use or dosage [ 13 , 14 , 15 , 64 ]. Because we lack a robust methodological and empirical foundation for assessing this type of causality, we cannot state concretely whether, and to what extent, drug checking in various contexts leads to less use or safer use among different populations [ 16 , 65 , 66 ]. Whether individuals change their use behavior based on drug checking results depends heavily on such factors as how limited their access to drugs is, what realistic options exist for modified use, and the perceived relative risks of knowingly ingesting a potentially dangerous compound or compounds versus not.

The tension at the center of harm reduction policy

The role of harm reduction services within communities has long reflected a central tension: in contrast with abstinence and criminalization models, harm reduction is often socially and politically criticized as enabling drug use and making neighborhoods less safe [ 67 , 68 , 69 ], while research consistently finds harm reduction to yield positive outcomes for both service users and surrounding communities [ 70 , 71 ]. In addition to improving the health and wellbeing of people using drugs, evidence suggests that those accessing harm reduction services are more likely to ultimately seek treatment and pursue recovery [ 49 , 70 , 72 , 73 ]. Concerns about public safety, too, while in many cases expressed in good faith, have been shown to be largely misplaced: multiple studies show harm reduction programs to have no significant impact on nearby violent or property-related crime, with some findings suggesting improved indicators of public order and safety [ 48 , 49 , 74 , 75 ]. Harm reduction strategies have additionally been found to be cost-effective in the short term and cost-saving to public monies in the medium- and long-term [ 76 ]. Nonetheless, public perception of harm reduction has historically been interwoven with deeply entrenched cultural stigmas related to race and ethnicity, socioeconomics, and an imprecise moralism that positions access to health and protection as a privilege that should be earned or denied based on behavior [ 67 , 69 , 71 ].

This tension plays out most concretely in the public policy space. Even as the opioid crisis dominates public health discourse and funding is earmarked for research and programming to combat it [ 77 ], harm reduction programs on the ground are under siege. At the federal level, the House Appropriations bill for the Fiscal Year 2024 HHS budget dramatically cuts funding to HIV/AIDS programs—a budget umbrella under which many harm reduction, substance use support and treatment programs are funded [ 78 , 79 ]. In California, a $15.2 million state grant supporting syringe access services has dried up amidst an overdose crisis at its peak, with no plans for replacement [ 80 ]. In 2022, a landmark bill (SB58) that would have authorized overdose prevention programs with supervised consumption in Los Angeles, Oakland, and San Francisco was vetoed by the Governor, despite broad support and robust evidence behind it [ 81 ]. Funds for such safe consumption sites have further been excluded from receiving opioid settlement funds in San Francisco [ 82 ], and in September of 2023, a bill was put forth by the San Francisco Mayor’s office to require drug screening and mandatory treatment for anyone receiving public services [ 83 ]. This, despite the expressly articulated commitment to and acknowledged necessity of harm reduction services—services explicitly aimed at helping people who use drugs to be more safe rather than abstaining from use—highlighted in policy language across multiple levels of government and legislature [ 10 , 84 , 85 , 86 , 87 ].

It is worth noting that one of the harm reduction sites where several of this study’s client participants were receiving services was defunded shortly after we completed data collection, and since then, overdose death rates in the city have climbed [ 88 ] and public order in that area has reportedly deteriorated [ 89 ].

The framing of effectiveness is crucial in this policy environment

In light of these tensions, we offer the findings of this study as a contribution to an evidence base that may play an increasingly central role in California’s—and the nation’s—opioid crisis response. The allowable expenditures for opioid settlement funds list “evidence-informed programs to reduce the harms associated with intravenous drug use” as a focus area [ 51 ] and California’s Overdose Prevention Initiative describes its approach as being “data-driven.” [ 10 ] The proposed HHS FY2024 budget, in addition to cutting much of the funding that covers harm reduction programming, proposes the rejection of “controversial programs” while maintaining funding for “an effective opioid response.” [ 78 ] As California faces a $68 billion budget deficit [ 90 ] and supplementary federal and settlement funds are to be apportioned based on strategy effectiveness and the body of scientific evidence, the role of research comes into sharper focus. It is the strength or weakness of the evidence base—of the complexity of the research inquiry and integrity of the data—that may ultimately frame which initiatives are eligible for support.

When asked about the place and promise of drug checking within the broader constellation of harm reduction services, it was drug users’ humanity and right to health, more so than the public health implications, that grounded many of our participants’ responses. Their responses implicated, too, the underlying operating principle that, ultimately, people make choices that make sense for them. Whether by the hand of addiction or desire, constrained options or access, or every individual’s complex hierarchy of relative dangers and needs, people’s choices are reflections of their full humanity. Approaches to stemming the tide of this crisis cannot be effective unless they are built on respect for the individuals living it, and focused on understanding their needs.

We encourage continued research and reporting on drug checking services and emerging technologies, with an emphasis on exploring effectiveness within a broad scope, reflective of the impacts of these services on whole lives and systems.

Limitations

Many of the community members we interviewed had not heard of spectrometry or spectroscopy, and the interview represented the first time they were introduced to the technology as a concept and the first time they considered whether and how they could see themselves using it in their own lives. This limits the range of our findings among the client sample, given that much of our qualitative data speaks to hypothetical future use rather than past or current use of emerging technologies. The absence of data on client use should not be interpreted to mean that participants chose not to use DCS.

Additionally, the sampling frame for clients was limited to one setting, while providers were sampled from across North America, and the small sample size for both groups may have limited saturation. Finally, providers did not reflect all North American regions where drug checking has been implemented, nor all DCS models, limiting the generalizability of findings.

Conclusions

Our manuscript contributes to the growing evidence that drug checking services are effective in mitigating a range of risks associated with substance use, including overdose, and that they offer diverse benefits at the individual, community, public health, and health systems levels. For these reasons, policymakers should consider allocating resources towards their implementation and scale-up in settings impacted by overdose mortality.

Data availability

Due to ethical restrictions, the data generated and analyzed during the current study are not available to those outside the study team. Data and materials are of a sensitive nature, and participants did not consent to transcripts of their interviews being publicly available. Portions of interviews about which editors have questions or concerns may be provided upon request after any details that may risk the confidentiality of the participants beyond de-identification have been removed. Researchers who meet the criteria for access to confidential data may send requests for the interview transcripts to the Human Research Protection Program (HRPP)/IRB at the University of California, San Francisco at 415-476-1814 or [email protected].

Abbreviations

FTIR: Fourier-transform infrared spectroscopy

FTS: Fentanyl testing strips

HHS: US Department of Health and Human Services

MOUD: Medications for opioid use disorder

Volkow ND, Blanco C. The changing Opioid Crisis: development, challenges and opportunities. Mol Psychiatry. 2021;26(1):218–33.

Ciccarone D. Fentanyl in the US heroin supply: a rapidly changing risk environment. Int J Drug Policy. 2017;46:107–11.

Dasgupta N, Beletsky L, Ciccarone D. Opioid crisis: no easy fix to its social and economic determinants. Am J Public Health. 2018;108(2):182–6.

Ciccarone D. The rise of illicit fentanyls, stimulants and the fourth wave of the opioid overdose crisis. Curr Opin Psychiatry. 2021;34(4):344–50.

U.S. Department of Health and Human Services (HHS). List of Public Health Emergency Declarations [Internet]. Administration for Strategic Preparedness & Response. [cited 2023 Oct 11]. https://aspr.hhs.gov:443/legal/PHE/Pages/default.aspx .

Centers for Disease Control and Prevention (CDC). Understanding the Opioid Overdose Epidemic [Internet]. 2023. https://www.cdc.gov/opioids/basics/epidemic.html .

Centers for Disease Control and Prevention (CDC). Data brief - Overdose deaths in the United States [Internet]. https://www.cdc.gov/nchs/data/databriefs/db457-tables.pdf#4 .

Centers for Disease Control and Prevention, National Center for Health Statistics. Drug Overdose Mortality by State [Internet]. 2022. https://www.cdc.gov/nchs/pressroom/sosmap/drug_poisoning_mortality/drug_poisoning.htm .

California Department of Public Health (CDPH) - Substance and Addiction Prevention Branch (SAPB). CDPH Center for Health Statistics and Informatics Vital Statistics - Multiple Cause of Death and California Comprehensive Death Files [Internet]. California Overdose Surveillance Dashboard. [cited 2023 Nov 6]. https://skylab.cdph.ca.gov/ODdash/?tab=CA .

California Department of Public Health (CDPH). Substance and Addiction Prevention Branch. California’s Approach to the Overdose Epidemic [Internet]. [cited 2023 Oct 11]. https://www.cdph.ca.gov/Programs/CCDPHP/sapb/Pages/CA-Approach.aspx .

Barratt MJ, Measham F. What is drug checking, anyway? Drugs Habits Soc Policy. 2022;23(3):176–87.

Brunt T. Drug checking as a harm reduction tool for recreational drug users: opportunities and challenges. Background paper commissioned by the EMCDDA for Health and social responses to drug problems: a European guide [Internet]. 2017 Oct 30 [cited 2023 Oct 11]; https://apo.org.au/node/219011 .

Peiper NC, Clarke SD, Vincent LB, Ciccarone D, Kral AH, Zibbell JE. Fentanyl test strips as an opioid overdose prevention strategy: findings from a syringe services program in the Southeastern United States. Int J Drug Policy. 2019;63:122–8.

Measham F, Turnbull G. Intentions, actions and outcomes: a follow up survey on harm reduction practices after using an English festival drug checking service. Int J Drug Policy. 2021;95:103270.

Measham F, Simmons H. Who uses drug checking services? Assessing uptake and outcomes at English festivals in 2018. Drugs Habits Soc Policy. 2022;23(3):188–99.

Maghsoudi N, Tanguay J, Scarfone K, Rammohan I, Ziegler C, Werb D, et al. Drug checking services for people who use drugs: a systematic review. Addict Abingdon Engl. 2022;117(3):532–44.

Maghsoudi N, McDonald K, Stefan C, Beriault DR, Mason K, Barnaby L, et al. Evaluating networked drug checking services in Toronto, Ontario: study protocol and rationale. Harm Reduct J. 2020;17(1):9.

Johns Hopkins Bloomberg School of Public Health and Bloomberg American Health Initiative. Fentanyl Overdose Reduction Checking Analysis Study (FORECAST). Baltimore, MD; 2018.

Krieger MS, Yedinak JL, Buxton JA, Lysyshyn M, Bernstein E, Rich JD, et al. High willingness to use rapid fentanyl test strips among young adults who use drugs. Harm Reduct J. 2018;15(1):7.

Rammohan I, Bouck Z, Fusigboye S, Bowles J, McDonald K, Maghsoudi N, et al. Drug checking use and interest among people who inject drugs in Toronto, Canada. Int J Drug Policy. 2022;107:103781.

Davis CS, Lieberman AJ, O’Kelley-Bangsberg M. Legality of drug checking equipment in the United States: a systematic legal analysis. Drug Alcohol Depend. 2022;234:109425.

Swartz JA, Lieberman M, Jimenez AD, Mackesy-Amiti ME, Whitehead HD, Hayes KL, et al. Current attitudes toward drug checking services and a comparison of expected with actual drugs present in street drug samples collected from opioid users. Harm Reduct J. 2023;20(1):87.

Reagan-Udall Foundation for the FDA. Fentanyl Drug Checking and Screening: Roundtable on Clinical Perspectives [Internet]. 2021. https://reaganudall.org/sites/default/files/2022-06/FTS_Clinician%20Roundtable_Final_Complete%206.30.pdf .

Reagan-Udall Foundation for the FDA. Fentanyl Drug Checking and Screening: Roundtable on Community Perspectives [Internet]. 2021. https://reaganudall.org/sites/default/files/2022-06/FTS_Community%20Roundtable_Final_Complete%206.30.pdf .

Weicker NP, Owczarzak J, Urquhart G, Park JN, Rouhani S, Ling R, et al. Agency in the fentanyl era: exploring the utility of fentanyl test strips in an opaque drug market. Int J Drug Policy. 2020;84:102900.

Bardwell G, Boyd J, Arredondo J, McNeil R, Kerr T. Trusting the source: the potential role of drug dealers in reducing drug-related harms via drug checking. Drug Alcohol Depend. 2019;198:1–6.

Long V, Arredondo J, Ti L, Grant C, DeBeck K, Milloy MJ, et al. Factors associated with drug checking service utilization among people who use drugs in a Canadian setting. Harm Reduct J. 2020;17(1):100.

Carroll JJ, Rich JD, Green TC. The protective effect of trusted dealers against opioid overdose in the U.S. Int J Drug Policy. 2020;78:102695.

Betsos A, Valleriani J, Boyd J, Bardwell G, Kerr T, McNeil R. I couldn’t live with killing one of my friends or anybody: a rapid ethnographic study of drug sellers’ use of drug checking. Int J Drug Policy. 2021;87:102845.

Wallace B, Hills R, Rothwell J, Kumar D, Garber I, van Roode T, et al. Implementing an integrated multi-technology platform for drug checking: Social, scientific, and technological considerations. Drug Test Anal. 2021;13(4):734–46.

McCrae K, Tobias S, Grant C, Lysyshyn M, Laing R, Wood E, et al. Assessing the limit of detection of Fourier-transform infrared spectroscopy and immunoassay strips for fentanyl in a real-world setting. Drug Alcohol Rev. 2020;39(1):98–102.

Centers for Disease Control and Prevention (CDC). Fentanyl Test Strips: A Harm Reduction Strategy [Internet]. 2023 [cited 2023 Oct 11]. https://www.cdc.gov/stopoverdose/fentanyl/fentanyl-test-strips.html .

Miller A. More states allow fentanyl test strips as a tool to prevent overdoses [Internet]. CNN. 2022 [cited 2023 Oct 12]. https://www.cnn.com/2022/05/04/health/fentanyl-test-strips-khn/index.html .

Green TC, Park JN, Gilbert M, McKenzie M, Struth E, Lucas R, et al. An assessment of the limits of detection, sensitivity and specificity of three devices for public health-based drug checking of fentanyl in street-acquired samples. Int J Drug Policy. 2020;77:102661.

Park JN, Frankel S, Morris M, Dieni O, Fahey-Morrison L, Luta M, et al. Evaluation of fentanyl test strip distribution in two Mid-atlantic syringe services programs. Int J Drug Policy. 2021;94:103196.

Laing MK, Ti L, Marmel A, Tobias S, Shapiro AM, Laing R, et al. An outbreak of novel psychoactive substance benzodiazepines in the unregulated drug supply: preliminary results from a community drug checking program using point-of-care and confirmatory methods. Int J Drug Policy. 2021;93:103169.

Harper L, Powell J, Pijl EM. An overview of forensic drug testing methods and their suitability for harm reduction point-of-care services. Harm Reduct J. 2017;14(1):52.

Tupper KW, McCrae K, Garber I, Lysyshyn M, Wood E. Initial results of a drug checking pilot program to detect fentanyl adulteration in a Canadian setting. Drug Alcohol Depend. 2018;190:242–5.

Borden SA, Saatchi A, Vandergrift GW, Palaty J, Lysyshyn M, Gill CG. A new quantitative drug checking technology for harm reduction: pilot study in Vancouver, Canada using paper spray mass spectrometry. Drug Alcohol Rev. 2022;41(2):410–8.

Gozdzialski L, Wallace B, Hore D. Point-of-care community drug checking technologies: an insider look at the scientific principles and practical considerations. Harm Reduct J. 2023;20:39.

Barry CL. Fentanyl and the evolving opioid epidemic: what strategies should policy makers consider? Psychiatr Serv. 2018;69(1):100–3.

Tsai AC, Kiang MV, Barnett ML, Beletsky L, Keyes KM, McGinty EE, et al. Stigma as a fundamental hindrance to the United States opioid overdose crisis response. PLOS Med. 2019;16(11):e1002969.

Khan GK, Harvey L, Johnson S, Long P, Kimmel S, Pierre C, et al. Integration of a community-based harm reduction program into a safety net hospital: a qualitative study. Harm Reduct J. 2022;19:35.

Kulesza M, Teachman BA, Werntz AJ, Gasser ML, Lindgren KP. Correlates of public support toward federal funding for harm reduction strategies. Subst Abuse Treat Prev Policy. 2015;10:25.

Socia KM, Stone R, Palacios WR, Cluverius J. Focus on prevention: the public is more supportive of overdose prevention sites than they are of safe injection facilities. Criminol Public Policy. 2021;20(4):729–54.

Vearrier L. The value of harm reduction for injection drug use: a clinical and public health ethics analysis. Dis–Mon DM. 2019;65(5):119–41.

Csák R, Shirley-Beavan S, McHenry AE, Daniels C, Burke-Shyne N. Harm reduction must be recognised as an essential public health intervention during crises. Harm Reduct J. 2021;18:128.

Levengood TW, Yoon GH, Davoust MJ, Ogden SN, Marshall BDL, Cahill SR, et al. Supervised injection facilities as harm reduction: a systematic review. Am J Prev Med. 2021;61(5):738–49.

Kennedy MC, Karamouzian M, Kerr T. Public health and public order outcomes associated with supervised drug consumption facilities: a systematic review. Curr HIV/AIDS Rep. 2017;14(5):161–83.

Plaintiff’s Executive Committee. National Opioids Settlement [Internet]. [cited 2023 Oct 11]. https://nationalopioidsettlement.com/ .

California Department of Healthcare Services (DHCS). California’s Opioid Settlements [Internet]. https://www.dhcs.ca.gov/provgovpart/Pages/California-Opioid-Settlements.aspx .

California Department of Heathcare Services (DHCS). Janssen & Distributors Settlement Funds Allowable Expenditures [Internet]. https://www.dhcs.ca.gov/Documents/CSD/CA-OSF-Allowable-Expenses.pdf .

SocioCultural R, Consultants LLC. Dedoose 9.0.17, web application for managing, analyzing, and presenting qualitative and mixed method research data. Los Angeles, CA; 2021.

Miles M, Huberman A. Qualitative data analysis: an expanded sourcebook. 2nd ed. Sage; 1994.

Ondocsin J, Ciccarone D, Moran L, Outram S, Werb D, Thomas L, et al. Insights from drug checking programs: Practicing Bootstrap Public Health whilst Tailoring to local drug user needs. Int J Environ Res Public Health. 2023;20(11):5999.

Saldaña J. The coding manual for qualitative researchers. London: Sage; 2009.

Google Scholar  

Ciccarone D, Ondocsin J, Mars S. Heroin uncertainties: exploring users’ perceptions of fentanyl-adulterated and -substituted ‘heroin’. Int J Drug Policy. 2017;46:146–55.

Duber HC, Barata IA, Cioè-Peña E, Liang SY, Ketcham E, Macias-Konstantopoulos W, et al. Identification, management, and transition of care for patients with opioid Use Disorder in the Emergency Department. Ann Emerg Med. 2018;72(4):420–31.

Wallace B, van Roode T, Pagan F, Hore D, Pauly B. The potential impacts of community drug checking within the overdose crisis: qualitative study exploring the perspective of prospective service users. BMC Public Health. 2021;21(1):1156.

Boucher LM, Marshall Z, Martin A, Larose-Hébert K, Flynn JV, Lalonde C, et al. Expanding conceptualizations of harm reduction: results from a qualitative community-based participatory research study with people who inject drugs. Harm Reduct J. 2017;14(1):18.

Gowan T, Whetstone S, Andic T. Addiction, agency, and the politics of self-control: doing harm reduction in a heroin users’ group. Soc Sci Med 1982. 2012;74(8):1251–60.

Beaulieu T, Wood E, Tobias S, Lysyshyn M, Patel P, Matthews J, et al. Is expected substance type associated with timing of drug checking service utilization? A cross-sectional study. Harm Reduct J. 2021;18(1):66.

Ivers JH, Killeen N, Keenan E. Drug use, harm-reduction practices and attitudes toward the utilisation of drug safety testing services in an Irish cohort of festival-goers. Ir J Med Sci 1971 -. 2022;191(4):1701–10.

Mars SG, Ondocsin J, Ciccarone D. Toots, tastes and tester shots: user accounts of drug sampling methods for gauging heroin potency. Harm Reduct J. 2018;15(1):26.

Tilhou AS, Zaborek J, Baltes A, Salisbury-Afshar E, Malicki J, Brown R. Differences in drug use behaviors that impact overdose risk among individuals who do and do not use fentanyl test strips for drug checking. Harm Reduct J. 2023;20(1):41.

Aarhus Universitet. Literature Review of Drug Checking in Nightlife – Methods, Services, and Effects [Internet], Aarhus. Denmark: Center for Rusmiddelforskning, Psykologisk Institut; 2019. https://www.sst.dk/-/media/Udgivelser/2019/Engelsk-version-Litteraturgennemgang-om-stoftest-i-nattelivet.ashx .

Des Jarlais DC. Harm reduction in the USA: the research perspective and an archive to David Purchase. Harm Reduct J. 2017;14(1):51.

Jackson LA, Dechman M, Mathias H, Gahagan J, Morrison K. Safety and danger: perceptions of the implementation of harm reduction programs in two communities in Nova Scotia, Canada. Health Soc Care Community. 2022;30(1):360–71.

Keane H. Critiques of harm reduction, morality and the promise of human rights. Int J Drug Policy. 2003;14(3):227–32.

Armbrecht E, Guzauskas G, Hansen R, Pandey R, Fazioli K, Chapman R. Supervised injection facilities and other supervised consumption sites: effectiveness and value; final evidence report [Internet]. Institute for Clinical and Economic Review; 2021 Jan. https://icer.org/wp-content/uploads/2020/10/ICER_SIF_Final-Evidence-Report_010821.pdf .

Klein A. Harm reduction works: evidence and inclusion in Drug Policy and Advocacy. Health Care Anal HCA J Health Philos Policy. 2020;28(4):404–14.

DeBeck K, Kerr T, Bird L, Zhang R, Marsh D, Tyndall M, et al. Injection drug use cessation and use of North America’s first medically supervised safer injecting facility. Drug Alcohol Depend. 2011;113(2–3):172–6.

Hagan H, McGough JP, Thiede H, Hopkins S, Duchin J, Alexander ER. Reduced injection frequency and increased entry and retention in drug treatment associated with needle-exchange participation in Seattle drug injectors. J Subst Abuse Treat. 2000;19(3):247–52.

Fixler AL, Jacobs LA, Jones DB, Arnold A, Underwood EE. There Goes the Neighborhood? The Public Safety Enhancing Effects of a Mobile Harm Reduction Intervention [Internet]. medRxiv; 2023 [cited 2023 Nov 1]. p. 2023.05.30.23290739. https://www.medrxiv.org/content/ https://doi.org/10.1101/2023.05.30.23290739v1 .

Chalfin A, Del Pozo B, Mitre-Becerril D. Overdose Prevention Centers, Crime, and disorder in New York City. JAMA Netw Open. 2023;6(11):e2342228.

Wilson DP, Donald B, Shattock AJ, Wilson D, Fraser-Hurt N. The cost-effectiveness of harm reduction. Int J Drug Policy. 2015;26:S5–11.

Weiss M, Zoorob M. Political frames of public health crises: discussing the opioid epidemic in the US Congress. Soc Sci Med. 2021;281:114087.

Committee Releases FY, Labor. Health and Human Services, Education, and Related Agencies Appropriations Bill [Internet]. House Committee on Appropriations - Republicans. 2023 [cited 2023 Oct 11]. https://appropriations.house.gov/news/press-releases/committee-releases-fy24-labor-health-and-human-services-education-and-related .

San Francisco AIDS, Foundation. Devastating cuts proposed to Federal HIV budget [Internet]. San Francisco AIDS Foundation. 2023 [cited 2023 Oct 11]. https://www.sfaf.org/collections/breaking-news/devastating-cuts-proposed-to-federal-hiv-budget/ .

Emily Alpert Reyes. Amid an overdose crisis, a California grant that helped syringe programs is drying up. Los Angeles Times [Internet]. 2023 Feb 19 [cited 2023 Oct 11]; https://www.latimes.com/california/story/2023-02-19/overdose-california-grant-syringe-programs .

Healthright 360. Outrage at Governor Newsom’s decision to veto SB 57, landmark overdose prevention bill | News | HealthRIGHT 360 [Internet]. [cited 2023 Oct 11]. https://www.healthright360.org/news/outrage-governor-newsom%E2%80%99s-decision-veto-sb-57-landmark-overdose-prevention-bill# .

Sjostedt D. SF Can’t Use Opioid Settlement on Drug Sites, Attorney Says. The San Francisco Standard [Internet]. 2023 Jan 21 [cited 2023 Oct 11]; https://sfstandard.com/2023/01/20/san-francisco-cant-use-opioid-settlement-funds-for-drug-sites-attorney-says/ .

City and County of San Francisco - News. Mayor London Breed Announces New Initiative to Require Screening and Treatment for Substance Use Disorder in Order to Receive County-Funded Cash Assistance | San Francisco [Internet]. [cited 2023 Oct 11]. https://sf.gov/news/mayor-london-breed-announces-new-initiative-require-screening-and-treatment-substance-use .

Executive Office of the President, Office of National Drug Control Policy. The Biden-Harris Administration’s Statement of Drug Policy Priorities for Year One [Internet]. Washington, DC; https://www.whitehouse.gov/wp-content/uploads/2021/03/BidenHarris-Statement-of-Drug-Policy-Priorities-April-1.pdf .

U.S. Department of Health and Human Services (HHS). Harm Reduction [Internet]. Overdose Prevention Strategy. 2021 [cited 2023 Oct 11]. https://www.hhs.gov/overdose-prevention/harm-reduction .

California Department of Public Health (CDPH). Substance and Addiction Prevention Branch. OPI Landing Page [Internet]. Overdose Prevention Initiative (OPI). [cited 2023 Oct 11]. https://www.cdph.ca.gov/Programs/CCDPHP/sapb/Pages/OPI-landing.aspx .

Center for Disease Control and Prevention (CDC). Opioid Rapid Response Program (ORRP) | Opioids | CDC [Internet]. 2022 [cited 2023 Oct 11]. https://www.cdc.gov/opioids/opioid-rapid-response-program.html .

Thadani T, Jung Y. After Mayor Breed’s Tenderloin Center closed, S.F. overdose deaths jumped. Here’s what the data shows. San Francisco Chronicle [Internet]. [cited 2023 Dec 20]; https://www.sfchronicle.com/sf/article/sf-mayor-breed-overdose-tenderloin-center-fentanyl-17846320.php .

Quintana S. Closure of SF’s Controversial Tenderloin Linkage Center Creates New Issue for the City [Internet]. NBC Bay Area. 2022. https://www.nbcbayarea.com/news/local/san-francisco/tenderloin-linkage-center-closure/3101390/ .

Petek G. The 2024-25 Budget: California’s Fiscal Outlook [Internet]. Legislative Analyst’s Office; 2023 Dec. https://lao.ca.gov/reports/2023/4819/2024-25-Fiscal-Outlook-120723.pdf .

Download references

Acknowledgements

This study would not have been possible without the client participants who so generously shared insights about their lives and how they access harm reduction services, and our provider key informants and their work on behalf of people who use drugs. The authors would also like to thank the staff of the Northern California HIV/AIDS Policy Research Center who supported the project during its inception, data collection, and writing.

This research was funded by the California HIV/AIDS Research Program (CHRP) to the Northern California HIV/AIDS Policy Research Center (PI Arnold), H21PC3238. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Author information

Authors and affiliations.

Center for AIDS Prevention Studies, Department of Medicine, University of California, San Francisco, CA, 94143, USA

Lissa Moran, Jeff Ondocsin, Simon Outram & Emily A. Arnold

Family & Community Medicine, Department of Medicine, University of California, San Francisco, CA, 94143, USA

Jeff Ondocsin, Daniel Ciccarone & Nicole Holm

Centre on Drug Policy Evaluation, St. Michael’s Hospital, Toronto, ON, M5B 1W8, Canada

Daniel Werb

Division of Infectious Diseases & Global Public Health, UC San Diego School of Medicine, University of California, San Diego, CA, 92093, USA


Contributions

E.A.A. and D.C. conceptualized and designed the study; J.O., L.M., D.C., and N.H. were responsible for data collection, each conducting in-depth key informant interviews. L.M., J.O., S.O., and E.A.A. analyzed the data. L.M. led the writing of the original manuscript draft with significant contributions from J.O., S.O., and E.A.A. L.M., J.O., D.C., S.O., D.W., N.H., and E.A.A. were directly involved in iterative review and revision. E.A.A. provided supervision, project administration, and funding acquisition. All authors have read and agreed to the submitted version of the manuscript.

Corresponding author

Correspondence to Lissa Moran .

Ethics declarations

Ethics approval and consent to participate.

The study was conducted in accordance with the Declaration of Helsinki and informed consent was obtained from all subjects involved in the study. The study protocol and consent procedures were reviewed and approved by the UCSF IRB (#22-36640) on 12 September 2022.

Consent for publication

Not applicable.

Competing interests

D.W. is a founder of DoseCheck, a commercial entity that is developing a mobile drug checking technology. D.C. reports the following relevant financial relationships during the past 12 months: (1) he is a scientific advisor to Celero Systems; and (2) he has been retained as an expert witness in ongoing prescription opioid litigation by Motley Rice, LLP. The remaining authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Moran, L., Ondocsin, J., Outram, S. et al. How do we understand the value of drug checking as a component of harm reduction services? A qualitative exploration of client and provider perspectives. Harm Reduct J 21, 92 (2024). https://doi.org/10.1186/s12954-024-01014-w


Received : 23 January 2024

Accepted : 02 May 2024

Published : 11 May 2024

DOI : https://doi.org/10.1186/s12954-024-01014-w


Keywords
  • Drug checking
  • Harm reduction
  • Substance use
  • Qualitative
  • North America

Harm Reduction Journal

ISSN: 1477-7517


  • Open access
  • Published: 13 May 2024

“ We might not have been in hospital, but we were frontline workers in the community ”: a qualitative study exploring unmet need and local community-based responses for marginalised groups in Greater Manchester during the COVID-19 pandemic

  • Stephanie Gillibrand 1 ,
  • Ruth Watkinson 2 ,
  • Melissa Surgey 2 ,
  • Basma Issa 3 &
  • Caroline Sanders 2 , 4  

BMC Health Services Research volume 24, Article number: 621 (2024)

Background
The response to the COVID-19 pandemic saw a significant increase in demand for the voluntary, community, faith and social enterprise (VCFSE) sector to provide support to local communities. In Greater Manchester (GM), the VCFSE sector and informal networks provided health and wellbeing support in multiple ways, culminating in its crucial supportive role in the provision of the COVID-19 vaccination rollout across the GM city region. However, the support provided by the VCFSE sector during the pandemic remains under-recognised. The aims of the study were to: understand the views and experiences of marginalised communities in GM during the COVID-19 pandemic; explore how community engagement initiatives played a role during the pandemic and vaccine rollout; assess what can be learnt from the work of key stakeholders (community members, VCFSEs, health-system stakeholders) for future health research and service delivery.

Methods
The co-designed study utilised a participatory approach throughout and was co-produced with a Community Research Advisory Group (CRAG). Focus groups and semi-structured interviews were conducted remotely between September-November 2021, with 35 participants from local marginalised communities, health and care system stakeholders and VCFSE representatives. Thematic framework analysis was used to analyse the data.

Results
Local communities in GM were not supported sufficiently by mainstream services during the course of the COVID-19 pandemic, resulting in increased pressure on the VCFSE sector to respond to local communities' needs. Community-based approaches were deemed crucial to the success of the vaccination drive and in providing support to local communities more generally during the pandemic, as such approaches were uniquely positioned to reach members of diverse communities and boost uptake of the vaccine. Despite this, the support delivered by the VCFSE sector remains under-recognised and under-valued by the health system and decision-makers.

Conclusions

A number of challenges associated with collaborative working were experienced by the VCFSE sector and the health system in delivering the vaccination programme in partnership. There is a need to create a broader, more inclusive health system which allows and promotes inter-sectoral working. Flexibility and adaptability in ongoing and future service delivery should be championed for greater cross-sector working.


The response to the COVID-19 pandemic saw a significant increase in demand for the voluntary, community, faith and social enterprise (VCFSE) sector to provide support to local communities [ 1 , 2 ]. The role of communities was seen as crucial to supporting the pandemic response, to better mobilise public health pandemic responses and supportive health services [ 3 ]. VCFSE organisations nationally had to mobilise quickly to adapt their service offer to meet increased demand, fill new gaps in service provision, and deliver services in different ways to address the challenges faced by local communities. These included loss of income and financial hardship, closure of schools and childcare, increased social isolation, digital exclusion, and increased mental health issues [ 4 ]. However, previous research has concluded that support provided by the voluntary sector during the pandemic has been under-recognised [ 5 ]. Some authors have explored the role that VCFSEs played at the national level in supporting communities during the pandemic [ 4 , 5 , 6 ]. Yet, whilst it is well known that tens of thousands of UK volunteers supported local vaccine delivery [ 7 ], no existing academic literature has explored the role of VCFSEs in supporting the vaccination rollout.

We focus on Greater Manchester (GM), where increased support from VCFSE organisations, including smaller, community-based networks, responded to increased demand from local communities and the NHS to provide key health and wellbeing-related services, including food and care packages for clinically vulnerable households, food bank services, support for people experiencing homelessness, mental health and domestic violence services and support to local community organisations [ 8 ]. This support culminated in the sector’s supportive role in the delivery of the COVID-19 vaccination rollout, in response to the need for mass immunisation across the region.

Over the last decade, the English health and care system has been evolving to integrate health and social care. A key focus is building closer working relationships between the NHS, local authorities and other providers, including the VCFSE sector, to deliver joined-up care for communities [ 9 , 10 ]. To aid integration, a new model for organising health and care on different geographical footprints has been developed: Integrated Care Systems (ICSs), place-based partnerships and neighbourhood models. These collaborative partnerships bring together existing health and care organisations to coordinate health and care planning and delivery in a more integrated way, and include councils, NHS provider trusts, Primary Care Networks, GP federations and health and care commissioners [ 11 ]. These new geographically-based partnerships have an emphasis on collaborative working beyond traditional health and care partners. This includes acknowledging the role that VCFSE organisations can have in supporting wider population wellbeing, particularly as part of multi-disciplinary neighbourhood teams embedded in local communities [ 12 ]. National guidance on the development of ICSs and place-based partnerships strongly encourages health and care leaders to include VCFSE organisations in partnership arrangements and embed them into service delivery [ 12 ]. In GM, the partnership working approach pre-dates the formal mandating of ICSs, with a combined authority which brings together the ten local authorities, an association of Clinical Commissioning Groups (CCGs) which represented health commissioners, and a VCFSE umbrella group which also operates as a joint venture to represent the sector's interests at a GM level. However, reorganisation to the ICS system may present new local challenges for the VCFSE sector in finding a meaningful 'seat at the table'. Notwithstanding this, the COVID-19 pandemic coincided with the development of ICSs and place-based partnerships as arguably one of the earliest and most intense tests of partnership working across health and care organisations within the current policy landscape.

Here, we present findings from a co-designed qualitative research project, drawing on insights from 35 participants, including members of diverse communities in GM, VCFSE participants, and key decision-making health and care system stakeholders. The aims of the study were to: understand the views and experiences of marginalised communities in GM during the COVID-19 pandemic; explore how community engagement initiatives played a role during the pandemic and vaccine rollout; assess what can be learnt from the work of key stakeholders (including community members, VCFSEs, health and care system stakeholders) for future health research and service delivery. The rationale for the study developed from a related piece of work assessing inequalities in the COVID-19 vaccine uptake in GM [ 13 ]. At that time, there was little research on the experiences of under-served communities during the pandemic. As such, the public and stakeholder engagement for the related project identified a need for a qualitative workstream to explore more fully the drivers behind and context surrounding the vaccination programme in GM, centring also local communities’ experiences during the pandemic (explored in a related paper [ 14 ]).

In this paper, we examine the role the VCFSE sector played in supporting unmet needs for marginalised groups in GM during the COVID-19 pandemic and as part of the rapid rollout of the COVID-19 vaccination programme. We consider the opportunities and barriers that may influence the full integration of the VCFSE sector into health and care services in the future. This paper provides additional evidence around the role of local community-led support in the context of identified unmet needs from marginalised local communities. Whilst focused on GM, it provides an exemplar of the role of VCFSEs and community networks during the pandemic, with relevant learning for other regions and international settings with place-based partnerships.

Study design

The study utilised a participatory approach throughout and was co-designed and co-produced with a diverse Community Research Advisory Group (CRAG). The CRAG were members of local community groups who were disproportionately impacted by the COVID-19 pandemic, including one member who is a co-author on this paper. This included members of three VCFSE organisations working with specific ethnic minority communities including Caribbean and African, South Asian and Syrian communities.

CRAG members acted as champions for the research, supporting design of appropriate information and fostering connections for recruitment via their existing community networks. The strong partnerships built through our approach were crucial to enabling a sense of trust and legitimacy for the research amongst underserved communities invited to participate.

Interviews and focus groups took place between September-November 2021 and sought to explore: the context surrounding the rollout of the vaccination programme; key aspects of support delivered as part of the vaccination programme; the use of localised approaches to support vaccine delivery, including engagement initiatives, as well as broader community-level responses to the COVID-19 pandemic; perceptions around barriers to vaccine uptake; and experiences of local communities (including healthcare) during the pandemic. During the data collection period, national pandemic restrictions were largely lifted, with no restrictions on social distancing or limits to gatherings, and all public venues reopened. A self-isolation period of 10 days after a positive COVID-19 test remained a legal requirement, but self-isolation after contact with a positive case was not required if fully vaccinated [ 15 ]. By July 2021, every UK adult had been offered their first dose of the COVID-19 vaccine, with every adult offered both doses by mid-September 2021 [ 16 ]. By early September 2021, more than 92 million doses had been administered in the UK [ 15 ].

Interviews and focus groups were conducted by one member of the research team (SG) and were conducted remotely due to the pandemic, via Zoom and telephone calls. The limitations of undertaking remote qualitative research interviews are acknowledged in academic literature, including potential restrictions to expressing compassion and assessing the participant’s environment [ 17 , 18 ]. However, given the remaining prevalence of COVID-19 at the time of interview, it was judged that the ensuing risk posed by COVID-19 to both researchers and participants outweighed the potential drawbacks. Nevertheless, participants were offered face-to-face options if they were unable to participate remotely to maximise inclusion (although no participants chose to participate face-to-face).

Interviews and focus groups were audio recorded with an encrypted recorder and transcribed by a professional transcription service. Informed written consent to participate was taken prior to the interviews and focus groups. The average length of the interviews was 34 min and the average length of the focus groups was 99 min. Two focus groups were co-facilitated by a CRAG member, a member of the local community who works for a mental health charity that supports local South Asian communities, who also provided translation support. With respect to author positionality, co-authors SG, RW, MS and CS are university researchers in academic roles and had prior links to the CRAG members via a wider community forum (co-ordinated by the NIHR-funded Applied Research Collaboration for Greater Manchester). The wider group met regularly to discuss and share learning regarding community experiences, community action and related research during the pandemic. BI is a member of the CRAG and a member of a local Syrian community.

Sampling & recruitment

The sampling strategy for community participants centred around groups that had been disproportionately affected by the COVID-19 pandemic in England, including ethnic minority groups, young adults, and those with long-term physical and mental health conditions. VCFSE participants included community and religious leaders, members of local community VCFSE organisations and smaller, informal community networks and groups from local communities. Health and care system stakeholders included local council workers and health and care system stakeholders (e.g. those organising the vaccination response in CCGs and GP Federations). Characteristics of the sample are provided in Table  1 . Overall, the study achieved a diverse sample of participants on the basis of gender and ethnicity.

A combination of purposive and snowball sampling was used to recruit via pre-established links and connections to community networks and stakeholders, to ensure the inclusion of specific seldom-heard groups. For example, members of African and Caribbean communities were recruited via a charity which supports the health of these groups, and members of South Asian communities were recruited via a mental health charity.

Quotes are described by respondent type (community member, VCFSE participant, health and care system stakeholder) and participant identifier number to maintain anonymity whilst providing important contextual detail.

Data analysis

We analysed the data using an adapted framework approach [ 19 ]. We adopted a framework approach to analysis as this is viewed as a helpful method when working within large multidisciplinary teams or when not all members of the team have experience of qualitative data analysis, as was the case within our team. This structured thematic approach is also considered valuable when handling large volumes of data [ 20 , 21 ] and was found to be a helpful way to present, discuss and refine the themes within the research team and CRAG meetings. We created an initial list of themes from coding four transcripts, and discussions with CRAG members: personal or family experiences/stories; work/education experiences; racism and racialised experiences; trust and mistrust; fear and anxiety; value of community/community approaches; access to services including healthcare; operational and logistical factors around vaccine rollout; communication and (mis)information. We used this set of themes and sub themes to code the remaining transcripts, including further inductively generated codes as analysis progressed, regularly discussing within the team.

We shared transcript coding amongst the study team, with one team member responsible for collating coded transcripts into a charting framework of themes/subthemes with illustrative transcript extracts. The themes were refined throughout the analysis period (November 2021-March 2022) with the research team and CRAG and were sense-checked with CRAG members and the wider study team, to synthesise a final iteration of the themes and sub-themes (see supplementary material). We present findings related to five overarching themes: (1) unmet needs of local communities during the pandemic: inaccessible care and distrust; (2) community-led approaches: social support and leadership to support services; (3) community led support to COVID-19 vaccination delivery; (4) operational and logistical barriers to community-based pandemic responses: challenges faced by the voluntary and community sector; (5) learning from the pandemic response in GM: trust building and harnessing community assets. Themes are discussed in more detail below.
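The charting step described above (collating coded extracts into a framework of themes and sub-themes, per transcript, with illustrative extracts) can be sketched programmatically. The snippet below is purely illustrative: the data, function name, and theme labels are hypothetical stand-ins, not the authors' actual workflow or study data.

```python
from collections import defaultdict

# Hypothetical coded extracts as (transcript_id, theme, extract) triples.
# These are illustrative placeholders, not data from the study.
coded_extracts = [
    ("T01", "access to services", "extract about GP waiting times"),
    ("T01", "trust and mistrust", "extract about distrust of services"),
    ("T02", "access to services", "extract about specialist referrals"),
]

def build_charting_framework(extracts):
    """Collate coded extracts into a theme -> transcript -> extracts matrix,
    mirroring the charting step of framework analysis."""
    framework = defaultdict(lambda: defaultdict(list))
    for transcript_id, theme, extract in extracts:
        framework[theme][transcript_id].append(extract)
    return framework

framework = build_charting_framework(coded_extracts)
# Each "cell" of the chart holds the extracts for one theme in one transcript.
print(sorted(framework["access to services"]))  # → ['T01', 'T02']
```

The nested-dictionary layout mirrors the charting matrix the authors describe: rows of themes/sub-themes, columns of transcripts, and extracts in the cells, which can then be reviewed and refined with the team.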

Ethical approval

This study was approved by University of Manchester Ethics Committee (Proportionate University Research Ethics Committee) 24/06/21. Ref 2021-11646-19665.

Unmet needs of local communities during the pandemic: inaccessible care and distrust

The COVID-19 pandemic brought an unprecedented shift in the way NHS services could function due to social distancing and lockdown measures. Pressures included unprecedented demand on hospital capacity and infection control measures (within hospitals and across the NHS) which reduced workforce capacity. There were also staff shortages due to high levels of COVID-19 infection amongst NHS staff, and shortages in non-acute capacity due to staff re-deployment [ 22 , 23 ]. In an effort to reduce pressure on the NHS, the policy mantra “Protect the NHS” was coined as a keynote slogan from the early stages of the pandemic [ 24 ].

It is within this context that many community participants raised (spontaneously) that there was a general inability to access health services during the pandemic, including GP and specialist services.

when I tried to contact my doctor’s surgery I was on the call for over an hour, number 20, number 15. Then by the time I’m under ten I get cut off. And it happened continuously. I just couldn’t get through and I just gave up really…now it’s like a phone consultation before you can even go and see someone, and even for that you’re waiting two, three weeks. (1029, VCFSE participant)

This resulted in frustration amongst some community participants, who questioned the logic of “protecting the NHS”, seemingly at the expense of their health-related needs. This led to sentiments that other health needs were de-prioritised by decision-makers during the pandemic. It was felt that this logic was counter-productive and fell short of the principles of protecting the most vulnerable.

We were like it just didn’t matter, it could have been much more serious than just a cough or a cold, [ ] but the help was just not there. (1028, community participant)

what about people who actually need to see a doctor so the very vulnerable ones that we’re supposed to be protecting. Yes, we’re protecting the NHS, I understand that, I said, but we’ve also got to protect all those vulnerable people that are out there that are actually isolated. (1011, community participant)

Community participants described their fear of accessing healthcare services because of the potential risk of catching the virus in these settings, and fear of receiving insufficient care due to well-publicised pressures in NHS settings. Some VCFSE participants noted that these widely publicised pressures on the NHS, together with heightened media and political attention around COVID-19 cases in health settings, led to fear and anxiety.

I didn’t go to the hospital because I was scared shitless whether I was going to come out alive from hospital. (1023, community participant)

…the number of people who didn’t access services when they should have done… They were either terrified they were going to go into hospital and catch COVID straightaway and die, or they were terrified that they were taking [the hospital space] away from someone else. (2003, VCFSE participant)

Overall, this led to a strong sense that mainstream services were not supporting the needs of local communities. This was especially felt by those requiring specialist services (e.g. mental health or secondary care services), and by those facing intersecting inequalities, such as health issues and language and digital/IT barriers, including newly settled refugees and immigrants.

Community-led approaches: social support and leadership to support services

As a consequence of this unmet need, VCFSE and community participants identified that local communities themselves increased activities to provide community support. Participants felt strongly that this increased support provided by the VCFSE sector and community networks remains under-recognised and under-valued by the health system and wider public.

BAME organisations were going around door to door, giving hand sanitisers, giving masks to everybody [ ]. And it was the BAME community that was the most active during COVID delivering medication, delivering food to houses, doing the shopping. [ ] Nobody gave credit to that. Nobody talks about the good work that the BAME community has done. (1020, community participant)

A number of community and VCFSE sector participants highlighted the work done at the community level, by either themselves or other networks to support local communities. This included providing support packages, running errands for vulnerable community members, cooking and food shopping services, a helpline and communication networks for local communities, and online wellbeing and support groups.

We might not have been in hospital, but we were frontline workers in the community. (1028, community participant)

Support was provided by formal VCFSE organisations and by smaller, sometimes informal, community networks and channels, in which support mechanisms included mental health support and wellbeing-focused communications to combat loneliness and boost wellbeing. This was often focused on outreach and the provision of community-based support to the most marginalised and vulnerable groups that had been disproportionately impacted during the pandemic, e.g. recently settled refugees and asylum seekers, and older individuals.

We have an Iranian group in Salford…And one of them spotted this young woman in the queue and she thought she looked Iranian, you know….anyway she started a conversation, and this person had been an asylum seeker at the beginning of the pandemic and had been in a detention centre during the pandemic. And then, finally got their leave to remain and then were just basically dumped in Salford. [ ] just having that friendly face and someone was trying to start that conversation, she was able to be linked into this group of women who support other refugees and asylum seekers from the Middle East. (2014, VCFSE participant)

Community led support to COVID-19 vaccination delivery

The VCFSE sector and community networks also played a crucial part in supporting COVID-19 vaccine delivery. Community, VCFSE and system-sector participants recognised the unique role that the VCFSE sector had played in reaching diverse communities, and sections of communities not reached by the mainstream vaccination programme. For example, VCFSE groups aided vaccine delivery by helping to run vaccine ‘pop-up’ sites in community spaces including mosques and other religious sites, children’s centres, and local specialist charities (e.g. refugee and sex worker charities).

The use of community ‘champions’ and community ‘connectors’ to convey messaging around the vaccination drive was deemed especially vital in this regard. Trusted members of communities (e.g. community leaders) who had crucial pre-existing communication channels were able to effectively interact with different parts of communities to advocate for the vaccine and address misinformation. Situated within communities themselves, these ‘champions’ held established trust, allowing conversations surrounding the vaccine to be held on the basis of shared experiences, honesty, openness, compassion and understanding.

So, as with any ethnic minority community, unless you’re part of it, it’s almost impossible to completely dig out all its norms and its very, very fine distinctions…[ ] what is acceptable, what is not acceptable[ ]? Unless you’re part of it, or you’ve really immersed yourself in the culture for decades, it’s almost impossible to get it (2015, VCFSE participant)

One of the strongest approaches that you can take to increase uptake in any community, whether it be pregnant women or a faith group or a geographical area or a cultural group, is that if you’ve got a representative from that community leading on and advocating for the vaccine, you’re going to have the best impact (2011, health and care system stakeholder participant).

unless Imams or significant people in the community were coming out for them and saying, it’s absolutely fine, it’s safe, and culturally it’s the right thing to do, there was a bit of uncertainty there (2010, health and care system stakeholder participant).

Health and care system stakeholders also emphasised the importance of “community ownership” of vaccination approaches, and of system responsiveness to identified needs and priorities at the community level. Health and care system stakeholders recognised that they were able to utilise community links to have better on-the-ground knowledge, provided in real time, to supplement locally held data to inform targeted efforts to boost uptake. This included council led initiatives including door-knocking with council staff, local health improvement practitioners, and VCFSE representatives working together to provide information about vaccine clinics and register people for vaccine appointments.

if messages went out and they didn’t land right they [the VCFSE sector] could be the first people [that] would hear about that and they could feed that back to us. [ ]….we were able to regularly go to them and say, look from a geographical perspective we can see these key areas…[ ] the people aren’t coming for vaccinations, [ ] what more can you tell us. Or, we can say, from these ethnicities in this area we’re not getting the numbers, what more can you tell us. And when we’ve fed them that intelligence then they could then use that to go and gain further insight for us, so they were a kind of, key mechanism (2010, health and care system participant).

Operational and logistical barriers to community-based pandemic responses: challenges faced by the voluntary and community sector

VCFSE sector and health and care system stakeholder participants reported significant logistical barriers to partnership working to support communities during the pandemic. Barriers included red tape and bureaucracy, which delayed responses to communities’ health and wellbeing needs.

whilst we were buying masks and hand sanitisers and going door to door, [ ] the council were still getting their paperwork in order, their policies in order, it was meeting after meeting. It took them seven to eight weeks for them to say [ ] we’ve got masks, would you like to help dish them out. (1029, VCFSE participant)

VCFSE and health and care system participants also raised challenges with respect to the VCFSE sector supporting the vaccination programme. This resulted in frustration amongst both VCFSE and health and care system participants who recognised the value of these community-based approaches.

The time that trickles through to the council and the time that the council turn around and say all right, we’ll actually let you do it was weeks later, and the community is turning round to us and saying to us well, what’s going on? We don’t like being messed around like this… (2008, VCFSE participant).

Participants highlighted the numerous health-related bodies with various roles which comprise a complex system for VCFSE partners to navigate, in part due to organisational and cultural clashes. Frustration was felt by both VCFSE and health and care system stakeholder participants (from local councils) in this respect. One VCFSE participant discussing the vaccine rollout noted:

We hit dead end after dead end within the council and there was literally very little response….You’ve got so many departments within this massive organisation called the council…[ ].it’s very difficult to navigate all that and deal with all that bureaucracy… (2008, VCFSE participant).

Broader institutional and organisational barriers to VCFSE support were identified, where cultural clashes between differing values and ways of working emerged, including ethos surrounding risk aversion and the system-level commitment to privilege value-for-money during the vaccination rollout. More practical issues around information governance and training were also raised as barriers to collaborative working.

I don’t think that they understand the power of community and the way community works. I don’t think that at a governmental level they understand what it means to penetrate into a community and actually understand what needs to be done to help a community…[ ] If they did and they had better links and ties into understanding that and helping that then we likely wouldn’t have had so many hurdles to get through (2008, VCFSE participant).

…in terms of public money, this is a public programme, we need to get value for the public pound. So we’re saying to [VCFSE organisation], how much is it going to cost? And [VCFSE organisation] are like, well, we don’t really know, until we deliver it. And we’re like, well, we can’t really approve it, until we know what it’s going to cost…. (2006, health and care system stakeholder participant)

Overall, these issues amounted to difficulties of power-sharing between public sector organisations and VCFSEs during a time of rapid response to a public health crisis, amid political, institutional, and other external pressures. This frustration was echoed on both sides, amongst VCFSE and health and care system stakeholder participants alike.

the public sector [ ] need to get better at letting go of some of the control. So even still, after I said, so many times, [VCFSE organisation] are delivering this, [VCFSE organisation] are doing everything, [ ] I still got the comms team going, are we doing a leaflet? No, [VCFSE organisation] are doing it, this is a [VCFSE organisation] programme, this isn’t a Council programme. (2006, local authority participant)

it is difficult sometimes working with organisations, I find myself very much stuck in the middle sometimes [ ] I engage with [community groups] and ask them how best we do it and then we put things in place that they’ve asked for, and then they’ve told us it’s not working why have you done it like that. [ ] I think it’s acknowledgement to do it right, it takes time, and it takes effort, it takes resource. (2010, local authority participant)

Health and care system stakeholders also highlighted the importance of accessible, localised vaccination hubs to reach different parts of diverse local communities, e.g. sites in local mosques and sites near local supermarkets to reach different demographics. These included mobile vaccination sites to reduce accessibility barriers, alongside dialogue-based initiatives to answer questions and respond to concerns from local communities about the vaccine, with a view to building trust without explicit pressure to receive the vaccine. Describing their efforts to engage a member of the local community over the vaccine, two local health and care system stakeholders detailed the following example of how localised, communication-based approaches succeeded:

She came to the clinic and there were a lot of tears. It was very emotional. She’d been through a very difficult journey and had got pregnant by IVF, so it was a big decision for her, a big risk that she thought she was taking. Whether she took the vaccine or not, it felt like a risk to her, [ ] we were able to sit down and talk to her. We had some peers there. So we had other pregnant women there who’d had the vaccine, that were able to give her some confidence. We had the specialist multicultural midwife there, [ ] And we literally just sat and drank coffee with her and let her talk and she ended up agreeing to have the vaccine [ ] (2011, system-level stakeholder).

…And the feedback from that lady was amazing. A couple of weeks ago I contacted her to make sure she was going to come down for her booster and she was just so grateful. [ ] she’d had backlash from her family and people within her community for taking up the vaccine and they still thought it was a massive risk. But she had no doubts that she’d done absolutely the right thing… (2012, system-level stakeholder).

Learning from the pandemic response in GM: trust building and harnessing community assets

From these findings across health and care system stakeholders, community and VCFSE participants, several learning points were identified.

In terms of vaccine delivery, some health and care system stakeholder participants reflected on the need for more joined-up ways of working, across existing services and amongst VCFSE partners, to ensure efficiency and maximise uptake by embedding vaccination programmes into other health services. For example, offering vaccination through health visiting or health checks, or offering COVID-19 vaccine boosters and flu vaccinations in single visits at care homes. These settings could also provide opportunities for dialogue with local communities where there is pushback against vaccination. Another health and care system stakeholder identified the need for greater joined-up delivery of services: utilising the VCFSE sector to deliver multiple services simultaneously, including the vaccine, to improve vaccine uptake and access to other healthcare services:

the sex worker clinic is a good example of that. [ ] People were coming in for another reason, to get their health check and to get their support from the advisors there at that voluntary organisation, [ ]…if there’s a multiple purpose at the site, for people to attend, you can start to engage them in the conversation and then take the opportunity and vaccinate them. So I’m really interested in looking at that a little bit more, about how that can help to increase uptake. (2011, health and care system stakeholder participant)

A VCFSE participant suggested using educational settings such as schools as a channel to disseminate public health and vaccine-related information, as trusted settings which have wide-reach to many different communities.

A number of health and care system stakeholders, VCFSE and community participants noted that long-term, continuous, meaningful engagement is crucial to build longer-term trust between institutions and communities, and to improve the efficacy of public health measures. It was felt that more concentrated efforts were required from the NHS and other statutory organisations to reach the most marginalised and minoritised communities, for example through door-knocking and welfare calls. Participants highlighted that this was required not solely at times of public health crises, but as part of continued engagement efforts, in order to adequately engage with the most marginalised groups and effectively build long-term trust. This may be done most effectively by building on existing links to marginalised communities, for example using education liaison staff to understand traveller communities’ perspectives on the vaccine.

proactive engagement with communities both locally and nationally to say, [the health system] are looking at this, what’s people’s thoughts, views, you know, is there any issues with this, what more can we do, what do you need to know to make an informed decision. This is what we were thinking of, how would this land…I think we could learn by, [ ] doing that insight work, spending more time working with communities at a kind of, national, regional, and local level (2010, health and care system stakeholder participant).

[the health system] could have engaged better with communities, I think bringing them in at the beginning. So, having them sat around the table, representatives from different groups, understanding how to engage with them from the very beginning…I think they could have used the data very very early on to inform who were engaging. We didn’t quite get it right at the beginning, we didn’t link the public health data teams with the comms and engagement teams (2013, health and care system stakeholder participant).

The tone of communications was also seen to be important. One health and care system stakeholder participant noted that the strategy of pushing communications and public health messaging aimed at behavioural change did not achieve the desired effect as these did not engage effectively with the communities to alleviate or address key concerns about the vaccine. These were deemed less successful than starting from a place of understanding and openness to generate constructive dialogue which could foster trust and respect.

There was also more specific learning identified in terms of collaboration between public sector institutions, VCFSEs and community links, with this seen as vital to build strong, long-term relationships between sectors based on trust and mutual respect. This should also involve working to share knowledge between sectors in real-time.

Health and care system stakeholder and VCFSE participants both suggested that a failure to further develop the partnerships fostered during the pandemic would be a lost opportunity, one that could create distrust and additional barriers between communities, VCFSEs and public organisations, perhaps further marginalising seldom-heard groups.

we need to find ways which we have ongoing engagement, and I think it needs to be more informal. People don’t want to be just constantly asked and asked and asked (2010, health and care system stakeholder participant).

a network of just sharing information and insight, rather than just engaging when you’ve got something specific to engage about. (2010, health and care system stakeholder participant)

We were then thinking to ourselves, well, maybe we shouldn’t be doing this. If it’s going to cause us damage, if the council can’t work with us properly maybe we just shouldn’t do it. We’ve got to weigh up. We don’t want to lose our trust within the community (2008, VCFSE participant).

In terms of dynamics and working arrangements between sectors, participants thought it important to allow community organisations and VCFSEs to lead on their areas of speciality, e.g. community organisations leading on outreach and communications within and to communities. This relates to the identified need to pursue adaptable and flexible approaches to vaccine delivery. Moreover, more joined-up decision-making between the health system and VCFSEs is needed to ensure better use of local intelligence and improved planning.

Discussion & policy implications

Unmet need and the role of communities during the pandemic

Our findings clearly demonstrate that local communities were not supported sufficiently by mainstream services during the COVID-19 pandemic. This in turn led to frustration, fear and loss of faith in the healthcare system as a whole, evidenced also in responses to the COVID-19 vaccination programme in which distrust results from wider experiences of historical marginalisation and structural inequalities [ 14 ]. In the absence of mainstream service support, our findings demonstrate how VCFSE organisations and community networks mobilised to support local communities to fulfil unmet health, social care, and wellbeing needs. This supports emerging evidence from across England which finds that the VCFSE sector played a key role in supporting communities during the pandemic [ 6 , 8 , 25 ].

The importance of community-based, localised approaches, community-led and community-owned initiatives, and ‘community champions’ and ‘community connectors’ was also highlighted as crucial to the success of the COVID-19 vaccination drive. Participants noted that community-led approaches were uniquely positioned to reach some communities when mainstream approaches were unsuccessful. This is echoed in existing literature, where the role of localised community responses was deemed important for reaching marginalised groups as part of the wider pandemic response [ 26 ].

Operational and logistical barriers

Operational and logistical barriers created dissonance between communities and the system. These barriers included difficulties with decision-making and power-sharing between VCFSE and commissioning or clinical organisations, organisational cultural clashes, red-tape and bureaucracy, and complex systems and power structures to navigate. This builds on existing evidence of barriers to partnership working during the pandemic, including cultural clashes and bureaucracy/red tape [ 5 , 27 ]. The VCFSE sector also suffered from the closure of services, and reduced funding and resources due to increased demand for services and needing to adapt service provision [ 8 ].

These factors hindered collaborative working and created risk for VCFSEs, including putting strain on relationships with local communities resulting from delays in implementing services. In most VCFSE–health system partnerships, participants noted, power is generally held by the health system partner, while reputational risk and additional resource-based costs lie with VCFSE partners. Supporting capacity building and workforce resources within the voluntary sector would help address this [ 28 ].

Inadequate processes for establishing collaborative working enhance distrust between the health system and the VCFSE sector, which in turn creates further difficulties for collaborative working. Trust is an important factor in how the system interacts with VCFSEs, with a lack of trust leading to further bottlenecks in VCFSE activities [ 29 ]. Alongside this is the need for greater health system appreciation of the VCFSE sector, with VCFSE partners reporting that they faced greater scrutiny and more arduous administrative processes than private sector partners [ 2 , 29 ].

Learning from the pandemic: service prioritisation

All sectors of the health and care system face pressures from resource shortages and internal and external targets [ 30 , 31 ]. This is often linked to drives to increase the value-for-money of services, but key questions remain as to how to assimilate the goal of achieving health equity within value-for-money objectives [ 32 ]. To this end, prioritising value-for-money may be at odds with reducing health inequities. For example, during the rollout of the vaccination programme, additional resources and innovative approaches were required to reach marginalised communities [ 33 , 34 ]. This is supported by emerging evidence from England and internationally that efforts to drive vaccination uptake and reduce inequities in uptake amongst marginalised populations require significant resources and a breadth of approaches to maximise uptake [ 34 ]. Our findings suggest that changes in vaccine uptake were smaller and slower to be realised in these populations, resulting in a “slow burn” in terms of demonstrating quantifiable outcomes. Given the NHS principles of equity [ 10 , 35 ], reaching these groups should remain a public health priority, and failure to prioritise them may incur greater long-term financial costs resulting from greater health service needs. Our findings suggest that challenging entrenched attitudes and frameworks for how success is measured, and adapting structures to better incentivise targeted interventions for marginalised or high-risk groups, is essential to addressing unmet needs amongst marginalised communities.

The changing commissioning landscape

The development of ICSs and place-based partnerships has changed how health and care services are commissioned. National guidance encourages health and care leaders to include VCFSE organisations in partnership arrangements and embed them into service delivery [ 12 ], with ‘alliance models’ between ICSs and the VCFSE sector [ 36 ] established in certain regions (see, for example, [ 37 ]). However, this rests on “a partnership of the willing” [ 37 ] between ICS partners and VCFSE sector players, and concrete guidance for achieving collaborative working in practice is lacking. As the findings in this paper point to, evolving decision-making processes may add to resource burdens for VCFSE organisations. Traditional health and care partners such as the NHS and local authorities should consider how their ways of working may need to change to foster full VCFSE inclusion on an equal standing; otherwise only the VCFSE stakeholders with sufficient capacity and resource may be able to be meaningfully involved.

Creating a VCFSE-accessible health and care system

In terms of fostering relationships between different sectors, participants acknowledged that pre-pandemic efforts to engage communities, community networks and VCFSEs were insufficient, with more meaningful, well-resourced engagement required going forward. Participants also identified the importance of avoiding tokenistic involvement of the VCFSE sector, which may be counter-productive for developing meaningful long-term partnerships. More equal relationships between statutory and VCFSE sectors are needed to foster improved collaborative working [ 5 , 38 ], and this is already identified at the GM level [ 28 ]. Central to this is the actioning of principles of co-design, including power-sharing, community ownership and trust. For co-design strategies to be successful, the role of the VCFSE sector and its ownership of approaches must be championed within co-design strategies and the enactment of co-designed activities.

Relatedly, greater trust of the VCFSE sector to deliver services effectively and efficiently is needed from health and social care decision-makers to ensure that funding compliance measures and processes are proportionate and not overly burdensome, to avoid funding bottlenecks which in turn impact service delivery [ 2 ]. Currently at the national level, VCFSE applicants typically only become aware of funding through existing networks, leaving less-connected organisations to find out ‘by chance’, thereby limiting reach amongst other organisations [ 2 ]. This may be especially true for smaller or ad-hoc VCFSE networks and groups. Our findings support that bottlenecks to applying for funding should be removed, and more streamlined processes for accessing funding championed [ 2 ].

Our findings also suggest that health systems should engage with the full breadth of the VCFSE sector, creating space for the involvement of smaller scale and less formal organisations as partners. Sharing of best practice and advice for adapting to local contexts should be promoted, alongside evaluation of collaborative models.

Finally, the pandemic period saw unprecedented state-sponsored investment into the VCFSE sector [ 29 ]. Within the GM context, this funding enabled VCFSEs to develop organisational capacity and systems, develop new partnerships, and better respond to the (unmet) needs of local communities [ 39 ]. Currently there are no clear plans to maintain this investment, but sustained inter-sector partnership working will require continued investment in the VCFSE sector.

Strengths & limitations

There are two main limitations to this study. Firstly, whilst the study achieved diversity in its sample, we could not achieve representation across all marginalised communities and therefore could not cover their experiences in depth. As such, whilst the analysis provides valuable insights, such insights may not be transferable and do not reflect all communities in GM. Secondly, whilst other studies focused on multiple city-regions or areas, our study is limited to the city region of GM. However, this focus provides an in-depth analysis of one region, and, as we discuss in the framing of the paper, we contend that the analysis presented in this paper serves as an exemplar to explore further at the national and international level. It should also be noted that co-design approaches are inevitably time- and resource-heavy, and this was challenging in the context of this study, as local stakeholders wanted timely insights to inform the vaccination programme. However, a key strength of our participatory approach was that it enabled a direct connection with the experiences of communities as relevant to the research, shaping the research questions as well as the design and conduct of the study.

Overall, the contribution of the VCFSE sector during the pandemic is clear, with significant support provided for community health and wellbeing and vaccination delivery. Nevertheless, there remains much to learn from the pandemic period, with the potential to harness capacity to tackle inequalities and build trust through shared learning and greater collaborative working. Maintaining an environment in which VCFSE partners are under-recognised, under-valued, and facing further bureaucratic barriers will only exacerbate barriers to collaborative working. There are also significant questions around systemic issues and sustainability, which must be addressed to overcome existing barriers to collaborative working between sectors. For instance, our findings identify the importance of flexibility and adaptability in ongoing and future service delivery. Where this is not pursued, it may not only impact service delivery but also create roadblocks to collaboration between sectors, creating divisions between entities that are ultimately trying to effect change on similar goals (i.e. improved population health). ICS–VCFSE Alliances and community connectors may be a mechanism to promote this, but clear, actionable guidance will be required to translate rhetoric into real-world progress.

Data availability

Data for this research will not be made publicly available as individual privacy could be compromised. Please contact Stephanie Gillibrand ([email protected]) for further information.

10GM is an umbrella group which seeks to represent the VCSE sector in GM. More information is available here: https://10gm.org.uk/ .

These themes are explored in a related paper by Gillibrand et al. [ 14 ].

Topic guides are provided as supplementary material.

Distrust was also raised in relation to fear and anxiety in NHS settings, and this is discussed in detail in a related paper from this study by Gillibrand et al. [ 14 ].

Abbreviations

CCG: Clinical Commissioning Groups

CRAG: Community Research Advisory Group

GM: Greater Manchester

ICS: Integrated Care Systems

VCSE: Voluntary, Community and Social Enterprise

Craston M, Mackay S, Cameron D, Writer-Davies R, Spielman D. Impact Evaluation of the Coronavirus Community Support Fund. 2021.

NatCen Social Research. Evaluation of VCSE COVID-19 Emergency Funding Package. Department for Digital, Culture, Media & Sport (DCMS); 27 April 2022.

Marston C, Renedo A, Miles S. Community participation is crucial in a pandemic. Lancet. 2020;395(10238):1676–8.

Frost S, Rippon S, Gamsu M, Southby K, Bharadwa M, Chapman J. Space to Connect Keeping in Touch sessions: A summary write up (unpublished). Leeds: Leeds Beckett University; 2021.

Pilkington G, Southby K, Gamsu M, Bagnall AM, Bharadwa M, Chapman J, Freeman C. Through different eyes: How different stakeholders have understood the contribution of the voluntary sector to connecting and supporting people in the pandemic. 2021.

Dayson C, et al. Capacity through crisis: The role and contribution of the VCSE Sector in Sheffield during the COVID-19 pandemic. 2021.

Timmins B. The COVID-19 vaccination programme: trials, tribulations and successes. The King's Fund; 2022.

Howarth M, Martin P, Hepburn P, Sheriff G, Witkam R. A Realist evaluation of the state of the Greater Manchester Voluntary, Community and Social Enterprise Sector 2021. GMCVO/University of Salford; 2021.

NHS England. Five Year Forward View. Leeds: NHS England; October 2014.

NHS England. The NHS Long Term Plan. NHS England; January 2019.

Surgey M. With great power: Taking responsibility for integrated care. 2022.

NHS England. Integrating care: next steps to building strong and effective integrated care systems across England. Leeds: NHS England; 2020.

Watkinson RE, et al. Ethnic inequalities in COVID-19 vaccine uptake and comparison to seasonal influenza vaccine uptake in Greater Manchester, UK: a cohort study. PLoS Med. 2022;19(3).

Gillibrand S, Kapadia D, Watkinson R, Issa B, Kwaku-Odoi C, Sanders C. Marginalisation and distrust in the context of the COVID-19 vaccination programme: experiences of communities in a northern UK city region. BMC Public Health. 2024;24(1):853.

Cabinet Office. COVID-19 Response: Autumn and Winter Plan 2021. Guidance. GOV.UK; 2021.

Department of Health and Social Care. Every adult in UK offered COVID-19 vaccine [press release]. GOV.UK; 19 July 2021.

Irani E. The use of videoconferencing for qualitative interviewing: opportunities, challenges, and considerations. Clin Nurs Res. 2019;28(1):3–8.

Seitz S. Pixilated partnerships, overcoming obstacles in qualitative interviews via Skype: a research note. Qual Res. 2016;16(2):229–35.

Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):1–8.

Castleberry A, Nolen A. Thematic analysis of qualitative research data: is it as easy as it sounds? Curr Pharm Teach Learn. 2018;10(6):807–15.

Braun V, Clarke V. Thematic analysis. In: APA handbook of research methods in psychology, Vol 2: Research designs: Quantitative, qualitative, neuropsychological, and biological. 2012. pp. 57–71.

Burn S, Propper C, Stoye G, Warner M, Aylin P, Bottle A. What happened to English NHS hospital activity during the COVID-19 pandemic? Brief Note. IFS; 13 May 2021.

NHS. COVID-19: Deploying our people safely. 2020 [updated 30 April 2020]. https://www.england.nhs.uk/coronavirus/documents/COVID-19-deploying-our-people-safely/ .

Department of Health and Social Care. New TV advert urges public to stay at home to protect the NHS and save lives [press release]. Department of Health and Social Care; 21 January 2021.

McCabe A, Wilson M, Macmillan R. Stronger than anyone thought: communities responding to COVID-19. Local Trust, Sheffield Hallam University, TSRC; 2020.

McCabe A, Afridi A, Langdale E. Community responses to COVID-19: connecting communities? How relationships have mattered in community responses to COVID-19. Local Trust, TSRC, Sheffield Hallam University; January 2022.

Carpenter J. Exploring lessons from Covid-19 for the role of the voluntary sector in Integrated Care Systems. Oxford Brookes University; July 2021.

Greater Manchester Combined Authority. GM VCSE Accord Agreement. 2021. https://www.greatermanchester-ca.gov.uk/media/5207/gm-vcse-accord-2021-2026-final-signed-october-2021-for-publication.pdf .

Department for Digital, Culture, Media & Sport. Financial support for voluntary, community and social enterprise (VCSE) organisations to respond to coronavirus (COVID-19). Department for Digital, Culture, Media & Sport and Office for Civil Society; 2020 [updated 20 May 2020]. https://www.gov.uk/guidance/financial-support-for-voluntary-community-and-social-enterprise-vcse-organisations-to-respond-to-coronavirus-COVID-19 .

Smee C. Improving value for money in the United Kingdom National Health Service: performance measurement and improvement in a centralised system. In: Measuring Up: Improving Health Systems Performance in OECD Countries; 2002.

McCann L, Granter E, Hassard J, Hyde P. You can’t do both—something will give: limitations of the targets culture in managing UK health care workforces. Hum Resour Manag. 2015;54(5):773–91.

Smith P. Measuring value for money in healthcare: concepts and tools. London: Centre for Health Economics, University of York; The Health Foundation; September 2009.

Ekezie W, Awwad S, Krauchenberg A, Karara N, Dembiński Ł, Grossman Z, et al. Access to vaccination among disadvantaged, isolated and difficult-to-reach communities in the WHO European Region: a systematic review. Vaccines. 2022;10(7):1038.

British Academy. Vaccine equity in multicultural urban settings: a comparative analysis of local government and community action, contextualised political economies and moral frameworks in Marseille and London. London: The British Academy; 2022.

NHS England. Core20PLUS5 (adults) – an approach to reducing healthcare inequalities. 2023. https://www.england.nhs.uk/about/equality/equality-hub/national-healthcare-inequalities-improvement-programme/core20plus5/ .

NHS England. Building strong integrated care systems everywhere. 2021. https://www.england.nhs.uk/wp-content/uploads/2021/06/B0664-ics-clinical-and-care-professional-leadership.pdf .

Anfilogoff T, Marovitch J. Who Creates Health in Herts and West Essex? Presentation to NHS Confederation seminar: Who Creates Health?; 8 November 2022.

Bergen JWS. Pandemic pressures: how Greater Manchester equalities organisations have responded to the needs of older people during the covid-19 crisis. GMCVO; 2021.

Graham M. Learning from Covid-19 pandemic grant programmes: lessons for funders and support agencies. GMCVO; May 2022.

Acknowledgements

The research team would like to thank ARC-GM PCIE team (Sue Wood, Aneela McAvoy, & Joanna Ferguson) and the Caribbean and African Health Network for their support in this study. We would also like to thank the Advisory Group members: Nasrine Akhtar, Basma Issa and Charles Kwaku-Odoi for their dedicated time, commitment, and valuable inputs into this research project and to partners who contributed to the early inception of this work, including members of the ARC-GM PCIE Panel & Forum & Nick Filer. We would also like to extend our thanks to the study participants for their participation in this research.

The project was funded by an internal University of Manchester grant and supported by the National Institute for Health and Care (NIHR) Applied Research Collaboration for Greater Manchester. Melissa Surgey’s doctoral fellowship is funded by the Applied Research Collaboration for Greater Manchester. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.

Author information

Authors and Affiliations

Centre for Primary Care and Health Services Research, University of Manchester, Greater Manchester, England, UK

Stephanie Gillibrand

NIHR Applied Research Collaboration for Greater Manchester, Greater Manchester, England, UK

Ruth Watkinson, Melissa Surgey & Caroline Sanders

Independent (public contributor), Greater Manchester, England, UK

Greater Manchester Patient Safety Research Centre, University of Manchester, Greater Manchester, England, UK

Caroline Sanders

Contributions

SG: lead writer/editor, design of the work. RW: design of the work, drafting of the article, review and revise suggestions. MS: drafting of the article, review and revise suggestions. BI: design of the work, review and revise suggestions. CS: design of the work, drafting of the article, review and revise suggestions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Stephanie Gillibrand .

Ethics declarations

Ethics approval and consent to participate

This study was approved by University of Manchester Ethics Committee (Proportionate UREC) 24/06/21. Ref 2021-11646-19665. Informed consent to participate in the research was taken from all research participants ahead of their participation in the study. Consent to participate in the study was taken from each participant by a member of the research team. All experiments were performed in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Supplementary Material 5

Supplementary Material 6

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Gillibrand, S., Watkinson, R., Surgey, M. et al. “ We might not have been in hospital, but we were frontline workers in the community ”: a qualitative study exploring unmet need and local community-based responses for marginalised groups in Greater Manchester during the COVID-19 pandemic. BMC Health Serv Res 24 , 621 (2024). https://doi.org/10.1186/s12913-024-10921-4

Received : 10 November 2023

Accepted : 28 March 2024

Published : 13 May 2024

DOI : https://doi.org/10.1186/s12913-024-10921-4

Keywords: Marginalised groups

BMC Health Services Research

ISSN: 1472-6963
