
Open-Ended Questions in Qualitative Research: Strategies, Examples, and Best Practices

Table of contents

  • Understanding open-ended questions
  • Designing open-ended questions
  • Types of open-ended questions
  • Conducting interviews and focus groups with open-ended questions
  • Analyzing and interpreting open-ended responses
  • Challenges and limitations of using open-ended questions
  • Best practices for using open-ended questions in qualitative research

Definition of Open-Ended Questions

Open-ended questions are a research tool that allows for a wide range of possible answers and encourages respondents to provide detailed and personalized responses. These types of questions typically begin with phrases such as “How,” “What,” or “Why,” and require the respondent to provide their thoughts and opinions.

Open-ended questions are crucial in the following scenarios:

Understanding complex phenomena: When a topic is complex, multi-faceted, or difficult to measure with numerical data, qualitative research can provide a more nuanced and detailed understanding.

Studying subjective experiences: When the focus is on people’s perceptions, attitudes, beliefs, or experiences, qualitative research is better suited to capture the richness and diversity of their perspectives.

Developing theories: When a researcher wants to develop a model or theory to explain a phenomenon, qualitative research can provide a rich source of data to support that development.

Evaluating programs or interventions: Qualitative research can help to evaluate the effectiveness of programs or interventions by collecting feedback from participants, stakeholders, or experts.

Researchers use open-ended methods in research, interviews, counseling, and other situations that may require detailed and in-depth responses.

Benefits of Using Open-Ended Questions in Qualitative Research

Qualitative research is most appropriate when the research question is exploratory, complex, subjective, theoretical, or evaluative. These questions are valuable in qualitative research for the following reasons:

More In-depth Responses

Open-ended questions allow participants to share their experiences and opinions in their own words, often leading to more in-depth and detailed responses.  For example, if a researcher is studying cancer survivors’ experiences, an open-ended question like, “Can you tell me about your experience with cancer?” may elicit a more detailed and nuanced response than a closed-ended question like “Did you find your cancer diagnosis to be difficult?”

Flexibility

Open-ended questions give the participant flexibility to respond to the questions in a way that makes sense to them, often revealing vital information that the researcher may have overlooked.

Better Understanding

Open-ended questions provide the researcher with a better understanding of the participant’s perspectives, beliefs, attitudes, and experiences, which is crucial in gaining insights into complex issues.

Uncovering New Insights

Open-ended questions can often lead to unexpected responses and reveal new information. When participants freely express themselves in their own words, they may bring up topics or perspectives that the researcher had not considered.

Building Rapport

Open-ended questions help build rapport with the participant, allowing the researcher to show interest in the participant’s responses and provide a space for them to share their experiences without feeling judged. This can lead to a positive research experience for participants, which may increase the likelihood of their continued participation in future studies.

Validating or Challenging Existing Theories

By allowing participants to provide their own perspectives and experiences, researchers can compare and contrast these responses with existing theories to see if they align or diverge. If the data from participants align with an existing theory, this can provide additional support for that theory. On the other hand, if the data diverge from existing theories, this can indicate a need for further investigation or revision of those theories.

Avoiding Bias and Preconceived Notions

Researchers may unintentionally guide participants towards a particular answer or perspective when using closed-ended questions. This can introduce bias into the data and limit the range of responses that participants provide. By using open-ended questions, researchers can avoid this potential source of bias and allow participants to express their unique perspectives.

Differences Between Open-Ended and Closed-Ended Questions

Open-ended questions encourage a wide range of responses and allow respondents to provide their thoughts and opinions. “What,” “How,” or “Why” are some of the words used to phrase open-ended questions, which are designed to elicit more detailed and expansive answers. Researchers use open-ended questions in ethnography, interviews, and focus groups to gather comprehensive information and participants’ insights.

Some examples of open-ended questions include:

  • What do you think about the current state of the economy?
  • How do you feel about global warming?
  • Why did you choose to pursue a career in law?

On the other hand, closed-ended questions only allow for a limited set of responses and are typically answered with a “Yes” or “No” or a specific option from a list of multiple choices. These questions are handy in surveys, customer service interactions, and questionnaires to collect quantitative data that can be easily analyzed and quantified. They are particularly useful when you want to gather specific information quickly or when you need to confirm or deny a particular fact.

Some examples of closed-ended questions include:

  • How would you rate your shopping experience with our company: excellent, good, fair, or poor?
  • Have you ever traveled to Europe before?
  • Which of these brands do you prefer: Nike, Adidas, or Puma?

Both open-ended and closed-ended questions have their place in research and communication. Open-ended questions can provide rich and detailed information, while closed-ended questions can provide specific and measurable data. The appropriate question type typically depends on the research or communication goals, context and the information required.

Designing Open-Ended Questions

Designing open-ended questions requires careful consideration and planning. Open-ended questions elicit more than just a simple “yes” or “no” response and instead allow for a broad range of answers that provide insight into the respondent’s thoughts, feelings, or experiences. When designing open-ended questions in qualitative research, it is critical to consider the best practices below:


Before designing your questions, you must predetermine what you want to learn from your respondents. This, in turn, will help you craft clear and concise questions that are relevant to your research goals. Use simple language and avoid technical terms or jargon that might confuse respondents.

Avoid leading or biased language that could influence and limit the respondents’ answers. Instead, use neutral wording that allows participants to share their authentic thoughts and opinions. For example, instead of asking, “Did you enjoy the food you ate?” ask, “What was your experience at the restaurant?”

One of the advantages of open-ended questions is that they allow respondents to provide detailed and personalized responses. Encourage participants to elaborate on their answers by asking follow-up questions or probing for additional information.

One can deliver open-ended questions in various formats, including interviews, surveys, and focus groups. Consider which one is most appropriate for your research goals and target audience. Additionally, before using your questions in a survey or interview, test them with a small group of people to make sure they are clear and functional.

Open-ended questions give a participant the freedom to answer without restriction. Furthermore, these questions evoke detailed responses from participants, unlike closed-ended questions that tend to lead to one-word answers.

Open-Ended Questions Categories

Exploratory Questions

When a researcher wants to explore a topic or phenomenon that is not well understood, exploratory questions can help generate hypotheses and insights. For instance, “Can you tell me more about your thoughts on animal poaching in Africa?” or “What is your opinion on the future of social media in business?”

Reflective Questions

Researchers use these questions to prompt respondents to think more deeply about a particular topic or experience, sometimes drawing on anecdotes related to a specific topic. For example, “What did you take away from that experience?” or “How has that experience shaped the way you approach similar situations now?”

Probing Questions

Researchers use probing questions to gain deeper insight into a participant’s response. These questions aim to understand the reasoning and emotion behind a particular answer. For example, “What did you learn from that mistake?” or “How do you think you could have handled that situation differently?”

Clarifying Questions

These questions get more information or clarify a point. For example, “Can you explain that further?” or “Can you give me an example?”

Hypothetical Questions

These questions ask respondents to imagine a hypothetical scenario and provide their thoughts or reactions. Examples of hypothetical questions include “What would you do if you won the lottery?” or “How do you think society would be different if everyone had access to free healthcare?”

Descriptive Questions

These questions ask the respondent to describe something in detail, such as a person, place, or event. Examples of descriptive questions include “Can you tell me about your favorite vacation?” or “How would you describe your ideal job?”

Conducting Interviews and Focus Groups with Open-Ended Questions

When preparing for an interview, it is important to understand the types of interviews available, what topics will be covered, and how to ask open-ended questions.

Questions should be asked in terms of past, present, and future experiences and should be worded in such a way as to invite a more detailed response from the participant. It is also important to establish a clear sequence of questions so that all topics are addressed without interrupting the flow of conversation.

Planning and Preparing For Interviews and Focus Groups

Before starting an interview or focus group, creating a list of topics or areas you want to explore during your research is essential. Consider what questions will help you gain the most insight into the topic.

Once you’ve identified the topics, you can create more specific questions that will be used to guide the conversation. It can be helpful to categorize your questions into themes to ensure all topics are addressed during the interview.

As you write your questions, aim to keep them as open-ended as possible so that the participant has space to provide detailed feedback. Avoid leading questions and try to avoid yes or no answers. Also, allow participants to provide any additional thoughts they may have on the topic.

Let’s say you’re researching customer experience with an online store. Your broad topic categories might be customer service, product selection, ease of use, and shipping. Your questions could cover things like:

  • How satisfied are you with the customer service?
  • What do you think about the product selection?
  • Is it easy to find the products you’re looking for?

Best Practices

During the conversation, only one person should talk at a time, and everyone should be able to contribute. To ensure participants understand the questions being asked, try asking them in multiple ways.

It is also important to pause briefly and review the question that has just been discussed before moving on. In addition, brief pauses and silences before and after asking a new question may help facilitate the discussion. If participants begin talking about something that may be an answer to a different question during the discussion, then feel free to allow the conversation to go in that direction.

With these strategies, examples, and best practices in mind, you can ensure that your interviews and focus groups are successful.

Tips For Asking Open-Ended Questions During Interviews and Focus Groups

Asking open-ended questions during interviews and focus groups is critical to qualitative research. Open-ended questions allow you to explore topics in-depth, uncover deeper insights, and gain valuable participant feedback.

However, crafting your questions with intention and purpose is important to ensure that you get the most out of your research.


Start With General Questions

When crafting open-ended questions for interviews or focus groups, it’s important to start with general questions and move towards more specific ones. This strategy helps you uncover various perspectives and ideas before getting into the details.

Using neutral language helps to avoid bias and encourages honest answers from participants. It’s important to determine the goal of the focus group or interview before asking any questions. This goal will help guide your conversation and keep it on track.

Use of Engagement Questions

To get the conversation started during interviews or focus groups, engagement questions are a great way to break the ice. These types of questions can be about anything from personal experiences to interests.

For example: “How did you get here, and what was one unusual thing you saw on your way in?”, “What do you like to do to unwind in your free time?” or “When did you last purchase a product from this line?”.

Use of Exploratory Questions

Exploratory questions about features are also useful in this type of research. Questions such as: “What features would you talk about when recommending this product to a friend?”, “If you could change one thing about this product, what would you change?”, or “Do you prefer this product or that product, and why?” all help to uncover participants’ opinions and preferences.

Exploratory questions about experiences are also helpful; questions such as: “Tell me about a time you experienced a mishap when using this product?” help to identify potential problems that need to be addressed.

Researchers can gain valuable insights from participants by using these tips for asking open-ended questions during interviews and focus groups.

Strategies For Active Listening and Follow-Up Questioning

Active listening is an important skill to possess when conducting qualitative research. It’s essential to ensure you understand and respond to the person you are interviewing effectively. Here are some strategies for active listening and follow-up questioning:

Pay Attention to Non-Verbal Cues

It is important to pay attention to non-verbal cues such as body language and voice when listening. Pay attention to their facial expressions and tone of voice to better understand what they are saying. Make sure not to interrupt the other person, as this can make them feel like their opinions aren’t being heard.

Listen Without Judging or Jumping to Conclusions

It is important to listen without judgment or jumping to conclusions. Don’t plan what to say next while listening, as this will stop you from understanding what the other person is saying.

Use Non-Verbal Signals to Show That You’re Listening

Nodding, smiling, and making small noises like “yes” and “uh huh” can show that you are listening. These signals can help the person feel more comfortable and open up more.

Don’t Impose Your Opinions or Solutions

When interviewing someone, it is important not to impose your opinions or solutions. It is more important to understand the other person and try to find common ground than it is to be right.

Stay Focused While Listening

Finally, it is critical to stay focused while listening. Don’t let yourself get distracted by your own thoughts or daydreaming. Remain attentive and listen with an open mind.

These are all key elements in effectively gathering data and insights through qualitative research.

Analyzing and Interpreting Open-Ended Responses

Qualitative research depends on understanding the context and content of the responses to open-ended questions. Analyzing and interpreting these responses can be challenging for researchers, so it’s important to have a plan and strategies for getting the most value out of open-ended responses.

Strategies For Coding and Categorizing Responses

Coding qualitative data is the process of categorizing and organizing responses to open-ended questions in a research study. It is an essential part of the qualitative data analysis process and helps identify patterns, themes, and trends in the responses.

Thematic Analysis and Qualitative Data Analysis Software

Thematic analysis and qualitative data analysis software are two approaches to coding customer feedback. Thematic analysis is the process of identifying patterns within qualitative data. This can be done by manually sorting through customer feedback or by using a software program to do the work for you.

Qualitative data analysis software also facilitates coding by providing powerful visualizations that allow users to identify trends and correlations between different customer responses.

Manual Coding

Manual coding is another method of coding qualitative data, where coders sort through responses and manually assign labels based on common themes. Coding the qualitative data makes it easier to interpret customer feedback and draw meaningful conclusions from it.

Coding customer feedback helps researchers make data-driven decisions based on customer satisfaction. It helps quantify the common themes in customer language, making it easier to interpret and analyze customer feedback accurately.

Strategies for manual coding include using predetermined codes for common words or phrases and assigning labels to customers’ responses according to certain categories. Examples of best practices for coding include using multiple coders to review responses for accuracy and consistency and creating a library of codes for ease of use.
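
As an illustration, here is a minimal Python sketch of the dictionary-based coding described above; the code labels, keyword lists, and sample responses are hypothetical placeholders rather than a recommended codebook.

```python
# Minimal sketch: assign predetermined codes to open-ended responses
# based on keyword/phrase matches. The codebook and responses are illustrative.

CODEBOOK = {
    "ease_of_use": ["easy to use", "intuitive", "simple"],
    "price": ["expensive", "cheap", "price", "cost"],
    "support": ["customer service", "support", "helpful staff"],
}

def code_response(text, codebook):
    """Return every code whose keywords appear in the response."""
    lowered = text.lower()
    return [code for code, phrases in codebook.items()
            if any(phrase in lowered for phrase in phrases)]

responses = [
    "The app was easy to use, but customer service was slow.",
    "Too expensive for what it offers.",
]

for response in responses:
    print(code_response(response, CODEBOOK), "<-", response)
```

In practice, two or more coders could apply the same codebook independently and compare their label assignments, which mirrors the multiple-coder check described above.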

Identifying Themes and Patterns in Responses

These processes involve reviewing the responses and searching for commonalities regarding words, phrases, topics, or ideas. Doing so can help researchers to gain a better understanding of the material they are analyzing.

There are several strategies that researchers can use when it comes to identifying themes and patterns in open-ended responses.

Manual Scan

One strategy is manually scanning the data and looking for words or phrases that appear multiple times.

Automatic Scan

Another approach is to use qualitative analysis software that can provide coding, categorization, and data analysis.

For example, if a survey asked people about their experience with a product, a researcher could look for common phrases such as “it was easy to use” or “I didn’t like it.” The researcher could then look for patterns regarding how frequently these phrases were used.
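
To make that concrete, a small sketch of how such a scan could be approximated programmatically, counting how often a handful of recurring phrases appear; the phrases and survey answers below are made-up examples, not real data.

```python
# Count how frequently recurring phrases appear across survey responses.
from collections import Counter

answers = [
    "It was easy to use and setup took minutes.",
    "I didn't like it, the menus were confusing.",
    "Overall fine, it was easy to use.",
]

phrases = ["it was easy to use", "i didn't like it"]

counts = Counter()
for answer in answers:
    lowered = answer.lower()
    for phrase in phrases:
        counts[phrase] += lowered.count(phrase)

print(counts.most_common())  # [('it was easy to use', 2), ("i didn't like it", 1)]
```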

Concept Indicator Model

This model is an important part of the coding process in classic grounded theory. It involves a continuous process of exploring and understanding open-ended responses, which can often lead to the development of new conceptual ideas.

Coding Process

The coding process is broken down into two parts: substantive coding and theoretical coding. Substantive coding involves organizing data into meaningful categories, while theoretical coding looks at how those categories relate.

Forms of Coding

Within the concept indicator model are two forms of coding: open coding and selective coding. Open coding is used to explore responses without predetermined theories or preconceived ideas. It is an iterative process involving connecting categories and generating tentative conclusions.

On the other hand, selective coding uses predetermined theories or ideas to guide data analysis.

The concept indicator model also uses a cycling approach known as constant comparison and theoretical sampling. Constant comparison is the process of constantly comparing new data with previous data until saturation is reached.

Theoretical sampling involves examining different data types to determine which ones will be more useful for exploring the concepts and relationships under investigation.

Gaining experience and confidence in exploring and confirming conceptual ideas is essential for success in the concept indicator model.

Strategies such as brainstorming and creating examples can help analysts better understand the various concepts that emerge from the data.

Best practices such as involving multiple coders in the process, triangulating data from different sources, and including contextual information can also help increase the accuracy and reliability of coding results.

Interpreting and Analyzing Open-Ended Responses in Relation to Your Research Questions

  • Ensure Objectives are Met: For any study or project, you must ensure your objectives are met. To achieve this, the responses to open-ended questions must be categorized according to their subject, purpose, and theme. This step will help in recognizing patterns and drawing out commonalities.
  • Choose A Coding Method: Once you have identified the themes, you must choose a coding method to interpret and analyze the data.

There are various coding strategies that can be employed. For example, a directed coding strategy will help you focus on the themes you have identified in your research objectives. In contrast, an axial coding method can be used to connect related concepts together. With a coding method, it will be easier to make sense of the responses.

Use Narrative Analysis

This process involves looking for story elements such as plot, characters, setting, and conflict in the text. It can be useful for identifying shared experiences or values within a group.

By looking for these narrative elements, you can better understand how individuals perceive their own experiences and those of others.

Analyze the Findings

To understand the meanings that the responses may carry, it is also important to analyze them. This stage is where techniques such as in-depth interviews, focus groups, and textual analysis come in.

These methods provide valuable insights into how the responses are related to each other and can help uncover potential connections and underlying motivations.

Summarize Your Findings

Once you have interpreted and analyzed the data, it is time to decide on your key findings. For example, you can summarize your findings according to different themes, discuss any implications of your research or suggest ways in which further research can be carried out.

These strategies provide valuable insights into the qualitative data collected from open-ended questions. However, to get the most out of that data, you need to familiarize yourself with the best practices in qualitative research.

Challenges and Limitations of Using Open-Ended Questions

Open-ended questions have the potential to generate rich and nuanced data in qualitative research. However, they also present certain challenges and limitations that researchers and educators need to be aware of.

We will now explore some of the challenges associated with using open-ended questions, including potential biases and subjectivity in responses, social desirability bias, and response bias.

We will also discuss strategies to address these challenges, such as balancing open-ended and closed-ended questions in research design. By understanding these limitations and employing best practices, researchers and educators can use open-ended questions to gather meaningful data and insights.

Addressing potential biases and subjectivity in responses

When we use open-ended questions in qualitative research, it’s crucial to be mindful of potential biases and subjectivity in responses. It’s natural for participants to bring their own experiences and beliefs to the table, which can impact their answers and skew the data. To tackle these challenges, we can take several steps to ensure that our research findings are as accurate and representative as possible.

One way to minimize subjectivity is to use neutral and unbiased language when framing our questions. By doing so, we can avoid leading or loaded questions that could influence participants’ responses. We can also use multiple methods to verify data and check responses, like conducting follow-up interviews or comparing responses with existing literature.

Another important consideration is to be open and transparent about the research process and participants’ rights. Addressing these biases also includes providing informed consent and guaranteeing confidentiality so that participants feel comfortable sharing their genuine thoughts and feelings. By recruiting diverse participants and ensuring that our data is representative and inclusive, we can also reduce potential biases and increase the validity of our findings.

By tackling biases and subjectivity in responses head-on, we can gather reliable and insightful data that can inform future research and enhance teaching methods.

Dealing with social desirability bias and response bias

In qualitative research, social desirability bias and response bias can pose significant challenges when analyzing data. Social desirability bias occurs when participants tend to respond in ways that align with social norms or expectations, rather than expressing their true feelings or beliefs. Response bias, on the other hand, happens when participants provide incomplete or inaccurate information due to factors like memory lapse or misunderstanding of the question.

To address these biases, researchers can use various strategies to encourage participants to be more candid and honest in their responses.

For instance, researchers can create a safe and supportive environment that fosters trust and openness, allowing participants to feel comfortable sharing their true thoughts and experiences. Researchers can also use probing techniques to encourage participants to elaborate on their answers, helping to uncover underlying beliefs and attitudes.

It’s also a good idea to mix up the types of questions you ask, utilizing both open-ended and closed-ended inquiries to get a variety of responses. Closed-ended questions can aid in the verification or confirmation of participants’ comments, while open-ended questions allow for a more in-depth investigation of themes and encourage participants to provide detailed and personal responses.

Balancing open-ended and closed-ended questions in your research design

An appropriate combination of open-ended and closed-ended questions is essential for developing an effective research design. Open-ended questions allow participants to provide detailed, nuanced responses and offer researchers the opportunity to uncover unexpected insights.

However, too many open-ended questions can make analysis challenging and time-consuming. Closed-ended questions, on the other hand, can provide concise and straightforward data that’s easy to analyze but may not capture the complexity of participants’ experiences.

Balancing the use of open-ended and closed-ended questions necessitates a careful evaluation of the study objectives, target audience, and issue under examination. Researchers must also consider the available time and resources for analysis.

When designing a research study, it’s essential to prioritize the research goals and choose questions that align with those goals. Careful selection of questions ensures that the data gathered are relevant and contribute to a deeper understanding of the topic under consideration. Researchers should also consider the participants’ backgrounds and experiences and select questions that are appropriate and sensitive to their needs. Furthermore, adopting a mix of open-ended and closed-ended questions can assist researchers in triangulating data, which allows them to cross-validate their findings by comparing results from multiple sources or techniques.

Best Practices for Using Open-Ended Questions in Qualitative Research

Lastly, we will explore the best practices for utilizing open-ended questions in qualitative research. We cover a range of helpful tips and strategies for creating a research design that fosters rich and nuanced data while maintaining the integrity of your research.

Building an effective connection with your research participants, crafting research questions that align with your research objectives, remaining flexible and adaptable in your approach, and prioritizing ethical considerations throughout your research process are some of the key best practices we explore.

Building Rapport with Participants

Building rapport with research participants is an essential component of conducting effective qualitative research. Building rapport is all about creating trust and providing a comfortable environment where participants can feel free to share their thoughts and experiences.

The first thing a researcher should do is introduce themselves and explain to the participant why the research is significant. Additionally, active listening is critical in building rapport. Listening attentively to your participants’ responses and asking follow-up questions can demonstrate your interest in their experiences and perspectives.

Maintaining a nonjudgmental, impartial position is also essential in developing rapport. Participants must feel free to express their opinions and experiences without fear of being judged or prejudiced.

Using respectful language, maintaining eye contact, and nodding along to participants’ responses can show that you are invested in their stories and care about their experiences.

Overall, establishing rapport with participants is an ongoing process that requires attention, care, and empathy.

Developing clear research questions

In research, developing clear research questions is an essential component of qualitative research using open-ended questions. The research questions provide a clear direction for the research process, enabling researchers to gather relevant and insightful data.

To create effective research questions, they must be specific, concise, and aligned with the overall research objectives. It is crucial to avoid overly broad or narrow questions that could impact the validity of the research.

Additionally, researchers should use language that is easy to understand. Researchers should avoid any technical jargon that may lead to confusion.

The order of the questions is also significant; they should flow logically, building on each other and ensuring they make sense. By developing clear research questions, researchers can collect and analyze data in a more effective and meaningful manner.                      

Maintaining a flexible and adaptable approach

When conducting qualitative research, maintaining a flexible and adaptable approach is crucial. Flexibility enables researchers to adjust their research methods and questions to ensure they capture rich and nuanced data that can answer their research questions.

However, staying adaptable can be a daunting task, as researchers may need to modify their research approach based on participants’ responses or unforeseen circumstances.

To maintain flexibility, researchers must have a clear understanding of their research questions and goals, while also remaining open to modifying their methods if necessary. It is also essential to keep detailed notes and regularly reflect on research progress to determine if adjustments are needed.

Staying adaptable is equally important as it requires researchers to be responsive to changes in participants’ attitudes and perspectives. Being able to pivot research direction and approach based on participant feedback is critical to achieving accurate and meaningful results.

Maintaining a flexible and adaptive strategy allows researchers to collect the most extensive and accurate data possible, resulting in a more in-depth understanding of the research topic. While it can be challenging to remain flexible and adaptable, doing so will ultimately lead to more robust research findings and greater insights into the topic at hand.

Being aware of ethical considerations

When conducting research, it is critical to keep in mind the ethical principles that govern how individuals interact with one another in society and how these factors affect research. Ethical considerations refer to the principles or standards that should guide research to ensure it is conducted in an honest, transparent, and respectful manner.

Before beginning the study, researchers must obtain informed consent from participants. Obtaining consent means providing clear and comprehensive information about the research, its purpose, what participation entails, and the potential risks and benefits. Researchers must ensure that participants understand the information and voluntarily consent to participate.

Protecting the privacy and confidentiality of participants must be a priority for researchers. This includes safeguarding personal information, using pseudonyms or codes to protect identities, and securing any identifying information collected.

Researchers must avoid asking questions that are too personal, sensitive, or potentially harmful. If harm or distress occurs, researchers should provide participants with appropriate support and referral to relevant services.

Using open-ended questions in qualitative research presents both challenges and benefits. To address potential limitations, researchers should remain objective and neutral, create a safe and non-judgmental space, and use probing techniques. Best practices include building rapport, developing clear research questions, and being flexible. Open-ended questions offer the benefits of revealing rich and nuanced data, allowing for flexibility, and building rapport with participants. Ethical considerations must also be a top priority.

Qualitative research: open-ended and closed-ended questions



From a very young age, we have been taught what open-ended and closed-ended questions are. How are these terms applied to qualitative research methods, and in particular to interviews?

Kathryn J. Roulston gives her definitions of open-ended and closed-ended questions in qualitative interviews in the SAGE Encyclopedia of Qualitative Research Methods. If you want to better understand how qualitative methods fit within a market research approach, we suggest you take a look at our step-by-step guide to market research, which can be downloaded in our white papers section (free of charge and direct; we won’t ask you for any contact details first).


Table of contents

  • Introduction
  • Closed-ended question
  • Open-ended question
  • Examples of closed and open-ended questions for satisfaction research
  • Examples of closed and open-ended questions for innovation research
  • Some practical advice

Introduction

Let us begin by pointing out that open-ended and closed-ended questions do not, at first glance, serve the same purpose in market research. Open-ended questions are generally used in qualitative research, while closed-ended questions are used in quantitative research. But this is not an absolute rule.

In this article, you will, therefore, discover the definitions of closed and open-ended questions. We will also explain how to use them. Finally, you will find examples of how to reformulate closed-ended questions into open-ended questions in the case of:

  • satisfaction research
  • innovation research

Essential elements to remember

Open-ended questions:

  • for qualitative research (interviews and focus groups)
  • very useful in understanding in detail the respondent and his or her position concerning a defined topic/situation
  • particularly helpful in revealing new aspects, sub-themes, issues, and so forth that are unknown or unidentified

Closed-ended questions:

  • for quantitative research (questionnaires and surveys)
  • suitable for use with a wide range of respondents
  • allow a standardised analysis of the data
  • are intended to confirm the hypotheses (previously stated in the qualitative part)

A closed-ended question

A closed-ended question offers, as its name suggests, a limited number of answers. For example, the interviewee may choose a response from a set of given options or answer with a simple “yes” or “no”. Closed-ended questions are intended to provide a precise, clearly identifiable, and easily classified answer.

This type of question is used in particular during interviews whose responses are meant to be coded according to pre-established criteria. There is no room for free expression, as is the case with open-ended questions. Often, this type of question is integrated into 1-to-1 interview guides and focus groups and allows the interviewer to collect the same information from a wide range of respondents in the same format. Indeed, closed-ended questions are designed and oriented to follow a pattern and framework predefined by the interviewer.


Researchers have identified two forms of closed-ended questions: specific closed-ended questions, where respondents are offered a choice of answers, and implicit closed-ended questions, which include assumptions about the answers that respondents can provide.

A specific closed-ended question would be formulated as follows, for example: “How many times a week do you eat pasta: never, once or twice a week, 3 to 4 times, 5 times a week or more?” The adapted version in the form of an implicit closed-ended question would be formulated as follows: “How many times a week do you eat pasta?” The interviewer then assumes that the answers will be given in figures.

Net Promoter Score question at Proximus

The Net Promoter Score (or NPS) is an example of a closed-ended question (see the example above).
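
For context, here is a brief sketch of how responses to the standard NPS question (“How likely are you to recommend us?”, answered on a 0-10 scale) are conventionally scored: the share of promoters (9-10) minus the share of detractors (0-6). The sample scores below are invented.

```python
# Standard NPS scoring: % promoters (9-10) minus % detractors (0-6).
def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

sample = [10, 9, 8, 7, 6, 10, 3, 9, 5, 10]  # invented answers to the 0-10 question
print(net_promoter_score(sample))  # 5 promoters - 3 detractors over 10 answers -> 20.0
```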

While some researchers consider the use of closed-ended questions to be restrictive, others see in these questions – combined with open-ended questions – the possibility of generating different data for analysis. How these closed-ended questions can be used, formulated, sequenced, and introduced in interviews depends heavily upon the studies and research conducted upstream.


In what context are closed-ended questions used?

  • Quantitative research (tests, confirmation of the qualitative research and so on).
  • Research with a large panel of respondents (> 100 people)
  • Recurrent research whose results need to be compared
  • When you need confirmation, and the possible answers are, in effect, limited

An open-ended question

An open-ended question is a question that allows the respondent to express himself or herself freely on a given subject. This type of question is, as opposed to closed-ended questions, non-directive and allows respondents to use their own terms and direct their response at their convenience.

Open-ended questions, which carry no presumptions about the answer, can be used to see which aspects stand out from the answers and thus could be interpreted as a fact, behaviour, reaction, etc., typical of a defined panel of respondents.

For example, we can very easily imagine open-ended questions such as “describe your morning routine”. Respondents are then free to describe their routine in their own words, which is an important point to consider. Indeed, the vocabulary used is also conducive to analysis and will be an element to be taken into account when adapting an interview guide, for example, and/or when developing a quantitative questionnaire.


As we detail in our market research whitepaper, one of the recommendations to follow when using open-ended questions is to start by asking more general questions and end with more detailed questions. For example, after describing a typical day, the interviewer may ask for clarification on one of the aspects mentioned by the respondent. Open-ended questions can also be directed so that the interviewee evokes his or her feelings about a situation he or she may have mentioned earlier.

In what context are open-ended questions used?

  • Mainly in qualitative research (interviews and focus groups)
  • To recruit research participants
  • During research to test a design, a proof of concept, a prototype, and so on, where it is essential to identify the most appropriate solution
  • Analysis of consumers and purchasing behaviour
  • Satisfaction research , reputation, customer experience and loyalty research, and so forth.
  • To specify the hypotheses that will enable the quantitative questionnaire to be drawn up and to propose a series of relevant answers (to closed-ended questions ).

It is essential for the interviewer to give respondents a framework when using open-ended questions. Without this context, interviewees could be lost in the full range of possible responses, and this could interfere with the smooth running of the interview. Another critical point concerning this type of question is the analytical aspect that follows. Indeed, since respondents are free to formulate their answers, the data collected will be less easy to classify according to fixed criteria.

The use of open-ended questions in quantitative questionnaires

Rules are made to be broken; it is well known. Most quantitative questionnaires, therefore, contain free fields in which the respondent is invited to express his or her opinions in a more “free” way. But how to interpret these answers?

When the quantity of answers collected is small (about ten), it will be easy to proceed manually, possibly by coding the responses. You will thus quickly identify the main trends and recurring themes.

On the other hand, if you collect hundreds or even thousands of answers, the analysis of these free answers will be much more tedious. How can you do it? In this case, we advise you to use a semantic analysis tool. This is most often an online solution, specific to a language, which is based on an NLP (Natural Language Processing) algorithm. This algorithm will, very quickly, analyse your corpus and bring out the recurring themes. It is not a question here of calculating word frequencies, but instead of working on semantics to analyse the repetition of a subject.
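
As a rough illustration of the idea, the sketch below groups free-text answers by meaning rather than by word counts, by embedding each answer and clustering the embeddings. It is not any particular vendor's tool; the model name, the number of clusters, and the answers are illustrative assumptions, and the sentence-transformers and scikit-learn packages are assumed to be installed.

```python
# Sketch: cluster free-text answers by semantic similarity rather than word frequency.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

answers = [
    "Delivery took far too long.",
    "My parcel arrived two weeks late.",
    "The checkout page kept crashing.",
    "I couldn't pay online, the site froze.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
embeddings = model.encode(answers)               # one vector per answer

labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)

for label, answer in zip(labels, answers):
    print(label, answer)  # answers about delivery vs. payment issues should group together
```

Each cluster can then be read and named by the analyst, which keeps the interpretive step in human hands while the grouping itself scales to thousands of answers.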

Of course, the use of open-ended questions in interviews does not exclude the use of closed-ended questions. Alternating these two types of questions in interviews, whether 1-to-1 interviews, group conversations, or focus groups, is conducive not only to maintaining a certain dynamic during the interview but also to framing specific responses while leaving certain fields of expression free. In general, it is interesting for the different parties that the interview ends with an open-ended question where the interviewer asks the interviewee if he or she has anything to add or if he or she has any questions.

Examples of closed and open-ended questions for innovation research

In this type of research, you confront the respondent with a new, innovative product or service. It is therefore important not to collect superficial opinions but to understand in depth the respondent’s attitude towards the subject of the market research.

As you will have understood, open-ended questions are particularly suitable for qualitative research (1-to-1 interviews and focus groups). How should they be formulated?

The Five Ws (who, what, where, when, and why) questioning method should be used rigorously and sparingly:

  • Questions such as “Who? What? Where? When? How? How much?” are particularly useful for qualitative research and allow you to let your interlocutor develop and elaborate a constructed and informative answer.
  • Use the CIT (Critical Incident Technique) method with formulations that encourage your interviewee to go into the details of an experience: “Can you describe/tell me…?”, “What did you feel?”, “According to you…”
  • Avoid asking “Why?”: this question may push the interviewee into a corner, and they may seek logical reasoning for their previous answer. Be gentle with your respondents by asking them to tell you more and to give you specific examples.

In contrast, closed-ended questions are mainly used and adapted to quantitative questionnaires since they facilitate the analysis of the results by framing the participants’ answers.


Qualitative Study


Qualitative research is a type of research that explores and provides deeper insights into real-world problems. Instead of collecting numerical data points or intervening and introducing treatments as in quantitative research, qualitative research helps generate hypotheses and further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It could be structured as a stand-alone study, purely relying on qualitative data, or it could be part of mixed-methods research that combines qualitative and quantitative data. This review introduces the readers to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as ‘how’ and ‘why’. Due to the open-ended nature of the research questions at hand, qualitative research design is often not linear in the same way quantitative design is. One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. Phenomena such as experiences, attitudes, and behaviors can be difficult to accurately capture quantitatively, whereas a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a certain time or during an event of interest. Quantifying qualitative data certainly is possible, but at its core, qualitative analysis looks for themes and patterns that can be difficult to quantify, and it is important to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

Qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches, and the philosophical paradigms associated with each, necessarily ‘compete’ against each other. While qualitative and quantitative approaches are different, they are not necessarily opposites, and they are certainly not mutually exclusive. For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. For example, say a quantitative analysis has determined that there is a correlation between length of stay and level of patient satisfaction, but why does this correlation exist? A qualitative follow-up, such as interviewing patients about their stay, could help explain it. This dual-focus scenario shows one way in which qualitative and quantitative research could be integrated together.

Examples of Qualitative Research Approaches

Ethnography

Ethnography as a research design has its origins in social and cultural anthropology, and involves the researcher being directly immersed in the participant’s environment. Through this immersion, the ethnographer can use a variety of data collection techniques with the aim of being able to produce a comprehensive account of the social phenomena that occurred during the research period. That is to say, the researcher’s aim with ethnography is to immerse themselves into the research population and come out of it with accounts of actions, behaviors, events, etc. through the eyes of someone involved in the population. Direct involvement of the researcher with the target population is one benefit of ethnographic research because it can then be possible to find data that is otherwise very difficult to extract and record.

Grounded Theory

Grounded Theory is the “generation of a theoretical model through the experience of observing a study population and developing a comparative analysis of their speech and behavior.” As opposed to quantitative research which is deductive and tests or verifies an existing theory, grounded theory research is inductive and therefore lends itself to research that is aiming to study social interactions or experiences. In essence, Grounded Theory’s goal is to explain for example how and why an event occurs or how and why people might behave a certain way. Through observing the population, a researcher using the Grounded Theory approach can then develop a theory to explain the phenomena of interest.

Phenomenology

Phenomenology is defined as the “study of the meaning of phenomena or the study of the particular”. At first glance, it might seem that Grounded Theory and Phenomenology are quite similar, but upon careful examination, the differences can be seen. At its core, phenomenology looks to investigate experiences from the perspective of the individual. Phenomenology is essentially looking into the ‘lived experiences’ of the participants and aims to examine how and why participants behaved a certain way, from their perspective . Herein lies one of the main differences between Grounded Theory and Phenomenology. Grounded Theory aims to develop a theory for social phenomena through an examination of various data sources whereas Phenomenology focuses on describing and explaining an event or phenomena from the perspective of those who have experienced it.

Narrative Research

One of qualitative research’s strengths lies in its ability to tell a story, often from the perspective of those directly involved in it. Reporting on qualitative research involves including details and descriptions of the setting involved and quotes from participants. This detail is called ‘thick’ or ‘rich’ description and is a strength of qualitative research. Narrative research is rife with the possibilities of ‘thick’ description as this approach weaves together a sequence of events, usually from just one or two individuals, in the hopes of creating a cohesive story, or narrative. While it might seem like a waste of time to focus on such a specific, individual level, understanding one or two people’s narratives for an event or phenomenon can help to inform researchers about the influences that helped shape that narrative. The tension or conflict of differing narratives can be “opportunities for innovation”.

Research Paradigm

Research paradigms are the assumptions, norms, and standards that underpin different approaches to research. Essentially, research paradigms are the ‘worldview’ that informs research. It is valuable for researchers, both qualitative and quantitative, to understand what paradigm they are working within because understanding the theoretical basis of research paradigms allows researchers to understand the strengths and weaknesses of the approach being used and adjust accordingly. Different paradigms have different ontologies and epistemologies. Ontology is defined as the “assumptions about the nature of reality”, whereas epistemology is defined as the “assumptions about the nature of knowledge” that inform the work researchers do. It is important to understand the ontological and epistemological foundations of the research paradigm researchers are working within to allow for a full understanding of the approach being used and the assumptions that underpin the approach as a whole. Further, it is crucial that researchers understand their own ontological and epistemological assumptions about the world in general because their assumptions about the world will necessarily impact how they interact with research. A discussion of the research paradigm is not complete without describing positivist, postpositivist, and constructivist philosophies.

Positivist vs Postpositivist

To further understand qualitative research, we need to discuss positivist and postpositivist frameworks. Positivism is a philosophy that the scientific method can and should be applied to social as well as natural sciences. Essentially, positivist thinking insists that the social sciences should use natural science methods in their research, which stems from the positivist ontology that there is an objective reality that exists fully independent of our perception of the world as individuals. Quantitative research is rooted in positivist philosophy, which can be seen in the value it places on concepts such as causality, generalizability, and replicability.

Conversely, postpositivists argue that social reality can never be one hundred percent explained but it could be approximated. Indeed, qualitative researchers have been insisting that there are “fundamental limits to the extent to which the methods and procedures of the natural sciences could be applied to the social world” and therefore postpositivist philosophy is often associated with qualitative research. An example of positivist versus postpositivist values in research might be that positivist philosophies value hypothesis-testing, whereas postpositivist philosophies value the ability to formulate a substantive theory.

Constructivist

Constructivism is a subcategory of postpositivism. Most researchers invested in postpositivist research are constructivist as well, meaning they think there is no objective external reality that exists but rather that reality is constructed. Constructivism is a theoretical lens that emphasizes the dynamic nature of our world. “Constructivism contends that individuals’ views are directly influenced by their experiences, and it is these individual experiences and views that shape their perspective of reality”. Essentially, constructivist thought focuses on how ‘reality’ is not a fixed certainty; experiences, interactions, and backgrounds give people a unique view of the world. Constructivism contends, unlike in positivist views, that there is not necessarily an ‘objective’ reality we all experience. This is the ‘relativist’ ontological view that reality and the world we live in are dynamic and socially constructed. Therefore, qualitative scientific knowledge can be inductive as well as deductive.

So why is it important to understand the differences in assumptions that different philosophies and approaches to research have? Fundamentally, the assumptions underpinning the research tools a researcher selects provide an overall base for the assumptions the rest of the research will have and can even change the role of the researcher themselves. For example, is the researcher an ‘objective’ observer such as in positivist quantitative work? Or is the researcher an active participant in the research itself, as in postpositivist qualitative work? Understanding the philosophical base of the research undertaken allows researchers to fully understand the implications of their work and their role within the research, as well as reflect on their own positionality and bias as it pertains to the research they are conducting.

Data Sampling

The better the sample represents the intended study population, the more likely the researcher is to capture the varying factors at play. The following are examples of participant sampling and selection (a brief illustrative sketch follows the list):

Purposive sampling – selection based on the researcher’s judgment about who will be most informative.

Criterion sampling – selection based on pre-identified factors.

Convenience sampling – selection based on availability.

Snowball sampling – selection by referral from other participants or from people who know potential participants.

Extreme case sampling – targeted selection of rare cases.

Typical case sampling – selection based on regular or average participants.
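For researchers who track a recruitment pool in a spreadsheet or script, the sketch below illustrates how criterion and convenience selection differ in practice. The participant pool, field names, and cut-offs are invented for illustration; purposive and snowball sampling ultimately rest on researcher judgment and referrals rather than a simple filter.

```python
# Hypothetical recruitment pool; all fields and values are invented.
import random

pool = [
    {"id": 1, "age": 17, "smoker": True,  "available_evenings": True},
    {"id": 2, "age": 19, "smoker": False, "available_evenings": True},
    {"id": 3, "age": 16, "smoker": True,  "available_evenings": False},
    {"id": 4, "age": 18, "smoker": True,  "available_evenings": True},
]

# Criterion sampling: keep only people who meet pre-identified factors
# (here, teenagers who currently smoke).
criterion_sample = [p for p in pool if p["smoker"] and p["age"] < 20]

# Convenience sampling: keep whoever is easiest to reach
# (here, those available for evening interviews).
convenience_sample = [p for p in pool if p["available_evenings"]]

# Snowball sampling starts from a seed participant and follows referrals;
# picking a random seed only illustrates the entry point.
seed = random.choice(criterion_sample)
print(len(criterion_sample), len(convenience_sample), seed["id"])
```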

Data Collection and Analysis

Qualitative research uses several techniques, including interviews, focus groups, and observation. [1] [2] [3] Interviews may be unstructured, with open-ended questions on a topic to which the interviewer adapts based on the responses, or structured, with a predetermined set of questions asked of every participant. Interviews are usually conducted one on one and are appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8-12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can be participant-observers, sharing the experiences of the subjects, or non-participant (detached) observers.

While quantitative research design prescribes a controlled environment for data collection, qualitative data collection may take place in a central location or in the participants’ own environment, depending on the study goals and design. Qualitative research can generate a large amount of data. Data are transcribed and may then be coded manually or with Computer Assisted Qualitative Data Analysis Software (CAQDAS) such as ATLAS.ti or NVivo.
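To make the idea of coding concrete, here is a minimal, hypothetical sketch of keyword-based coding in Python. The codebook and transcript excerpts are invented, and real projects typically rely on careful manual coding or CAQDAS rather than simple keyword matching.

```python
# Toy codebook mapping codes to indicator keywords (invented for illustration).
codebook = {
    "peer_pressure": ["friends", "pressure", "fit in"],
    "health": ["health", "lungs", "cancer"],
    "cost": ["expensive", "cost", "money"],
}

transcripts = [
    "I started because my friends all smoked and I wanted to fit in.",
    "Honestly the cost is what stops me, cigarettes are so expensive.",
]

coded = []
for text in transcripts:
    lowered = text.lower()
    codes = [code for code, terms in codebook.items()
             if any(term in lowered for term in terms)]
    coded.append({"text": text, "codes": codes})

for row in coded:
    print(row["codes"], "->", row["text"])
```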

After coding, qualitative research results can take various forms: a synthesis and interpretation presented with excerpts from the data, or a set of themes and the development of a theory or model.

Dissemination

To standardize and facilitate the dissemination of qualitative research outcomes, the healthcare team can use two reporting standards. The Consolidated Criteria for Reporting Qualitative Research or COREQ is a 32-item checklist for interviews and focus groups. The Standards for Reporting Qualitative Research (SRQR) is a checklist covering a wider range of qualitative research.

Examples of Application

Many times a research question will start with qualitative research. The qualitative research helps generate the research hypothesis, which can then be tested with quantitative methods. After the data are collected and analyzed with quantitative methods, qualitative methods can be used to dive deeper into the data for a better understanding of what the numbers truly mean and their implications. The qualitative methods can then help clarify the quantitative data and refine the hypothesis for future research. Furthermore, qualitative research lets researchers explore subjects that are poorly studied with quantitative methods, including opinions, individuals’ actions, and social science questions.

A good qualitative study design starts with a clearly defined goal or objective. The target population needs to be specified, and the method for obtaining information from the study population must be carefully detailed to ensure that no part of the target population is omitted. A collection method should be selected that obtains the desired information without overly limiting the collected data, because the information sought is often not neatly compartmentalized. Finally, the design should ensure adequate methods for analyzing the data. An example may help clarify some of these aspects of qualitative research.

A researcher wants to decrease the number of teenagers who smoke in their community. The researcher could begin by asking current teen smokers why they started smoking through structured or unstructured interviews (qualitative research). The researcher can also get together a group of current teenage smokers and conduct a focus group to help brainstorm factors that may have prevented them from starting to smoke (qualitative research).

In this example, the researcher has used qualitative research methods (interviews and focus groups) to generate a list of ideas about both why teens start to smoke and what factors may have prevented them from starting. Next, the researcher compiles this data. Hypothetically, the researcher finds that peer pressure, health issues, cost, being considered “cool,” and rebellious behavior all might increase or decrease the likelihood of teens starting to smoke.

The researcher creates a survey asking teen participants to rank how important each of the above factors is in either starting smoking (for current smokers) or not smoking (for current non-smokers). This survey provides specific numbers (ranked importance of each factor) and is thus a quantitative research tool.

The researcher can use the results of the survey to focus efforts on the one or two highest-ranked factors. Say the researcher found that health was the major factor keeping teens from starting to smoke, and peer pressure was the major factor contributing to teens starting to smoke. The researcher can return to qualitative research methods to dive deeper into each of these for more information. Because the researcher wants to focus on how to keep teens from starting to smoke, they focus on the peer pressure aspect.

The researcher can conduct interviews and/or focus groups (qualitative research) about what types and forms of peer pressure are commonly encountered, where the peer pressure comes from, and where smoking first starts. The researcher hypothetically finds that peer pressure often occurs after school at the local teen hangouts, mostly the local park. The researcher also hypothetically finds that peer pressure comes from older, current smokers who provide the cigarettes.

The researcher could further explore this observation at the local teen hangouts (qualitative research), taking notes on who is smoking, who is not, and what observable factors are at play in the peer pressure around smoking. The researcher finds a local park where many teenagers hang out and sees that a shady, overgrown area of the park is where the smokers tend to gather. The researcher also notes that the smoking teenagers buy their cigarettes from a convenience store adjacent to the park, where the clerk does not check identification before selling cigarettes. These observations fall under qualitative research.

If the researcher returns to the park and counts how many individuals smoke in each region of the park, this numerical data would be quantitative research. Based on the researcher's efforts thus far, they conclude that local teen smoking and teenagers who start to smoke may decrease if there are fewer overgrown areas of the park and the local convenience store does not sell cigarettes to underage individuals.

The researcher could try to have the parks department reassess the shady areas to make them less conducive to smoking, or work out how to limit the convenience store’s sales of cigarettes to underage individuals. The researcher would then cycle back to qualitative methods, asking the at-risk population about their perceptions of the changes and what factors are still at play, as well as to quantitative research on measures such as teen smoking rates in the community and the incidence of new teen smokers.

Copyright © 2024, StatPearls Publishing LLC.


Qualitative Research Questions: Gain Powerful Insights + 25 Examples

We review the basics of qualitative research questions, including their key components, how to craft them effectively, & 25 example questions.

Einstein was many things: a physicist, a philosopher, and, undoubtedly, a mastermind. He also had an incredible way with words. The quote often attributed to him, "Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted," is particularly poignant when it comes to research.

Some inquiries call for a quantitative approach, for counting and measuring data in order to arrive at general conclusions. Other investigations, like qualitative research, rely on deep exploration and understanding of individual cases in order to develop a greater understanding of the whole. That’s what we’re going to focus on today.

Qualitative research questions focus on the "how" and "why" of things, rather than the "what". They ask about people's experiences and perceptions , and can be used to explore a wide range of topics.

The following article will discuss the basics of qualitative research questions, including their key components, and how to craft them effectively. You'll also find 25 examples of effective qualitative research questions you can use as inspiration for your own studies.

Let’s get started!

What are qualitative research questions, and when are they used?

When researchers set out to conduct a study on a certain topic, their research is chiefly directed by an overarching question. This question provides focus for the study and helps determine what kind of data will be collected.

By starting with a question, we gain parameters and objectives for our line of research. What are we studying? For what purpose? How will we know when we’ve achieved our goals?

Of course, some of these questions can be described as quantitative in nature. When a research question is quantitative, it usually seeks to measure or calculate something in a systematic way.

For example:

  • How many people in our town use the library?
  • What is the average income of families in our city?
  • How much does the average person weigh?

Other research questions, however—and the ones we will be focusing on in this article—are qualitative in nature. Qualitative research questions are open-ended and seek to explore a given topic in-depth.

According to the Australian & New Zealand Journal of Psychiatry, “Qualitative research aims to address questions concerned with developing an understanding of the meaning and experience dimensions of humans’ lives and social worlds.”

This type of research can be used to gain a better understanding of people’s thoughts, feelings and experiences by “addressing questions beyond ‘what works’, towards ‘what works for whom when, how and why, and focusing on intervention improvement rather than accreditation’,” states one paper in Neurological Research and Practice.

Qualitative questions often produce rich data that can help researchers develop hypotheses for further quantitative study.

  • What are people’s thoughts on the new library?
  • How does it feel to be a first-generation student at our school?
  • How do people feel about the changes taking place in our town?

As stated by a paper in Human Reproduction , “...‘qualitative’ methods are used to answer questions about experience, meaning, and perspective, most often from the standpoint of the participant. These data are usually not amenable to counting or measuring.”

Both quantitative and qualitative questions have their uses; in fact, they often complement each other. A well-designed research study will include a mix of both types of questions in order to gain a fuller understanding of the topic at hand.


Crafting qualitative research questions for powerful insights

Now that we have a basic understanding of what qualitative research questions are and when they are used, let’s take a look at how you can begin crafting your own.

According to a study in the International Journal of Qualitative Studies in Education, there is a certain process researchers should follow when crafting their questions, which we’ll explore in more depth.

1. Beginning the process 

Start with a point of interest or curiosity, and pose a draft question or ‘self-question’. What do you want to know about the topic at hand? What is your specific curiosity? You may find it helpful to begin by writing several questions.

For example, if you’re interested in understanding how your customer base feels about a recent change to your product, you might ask: 

  • What made you decide to try the new product?
  • How do you feel about the change?
  • What do you think of the new design/functionality?
  • What benefits do you see in the change?

2. Create one overarching, guiding question 

At this point, narrow down the draft questions into one specific question. “Sometimes, these broader research questions are not stated as questions, but rather as goals for the study.”

As an example, you might narrow the draft questions above down to one guiding question:

  • What are our customers’ thoughts on the recent change to our product?

3. Theoretical framing 

As you read the relevant literature and apply theory to your research, the question should be altered to achieve better outcomes. Experts agree that pursuing a qualitative line of inquiry should open up the possibility for questioning your original theories and altering the conceptual framework with which the research began.

If we continue with the current example, it’s possible you may uncover new data that informs your research and changes your question. For instance, you may discover that customers’ feelings about the change are not just a reaction to the change itself, but also to how it was implemented. In this case, your question would need to reflect this new information: 

  • How did customers react to the process of the change, as well as the change itself?

4. Ethical considerations 

A study in the International Journal of Qualitative Studies in Education stresses that ethics are “a central issue when a researcher proposes to study the lives of others, especially marginalized populations.” Consider how your question or inquiry will affect the people it relates to—their lives and their safety. Shape your question to avoid physical, emotional, or mental upset for the focus group.

In analyzing your question from this perspective, if you feel that it may cause harm, you should consider changing the question or ending your research project. Perhaps you’ve discovered that your question encourages harmful or invasive questioning, in which case you should reformulate it.

5. Writing the question 

The actual process of writing the question comes only after considering the above points. The purpose of crafting your research questions is to delve into what your study is specifically about. Remember that qualitative research questions are not trying to find the cause of an effect, but rather to explore the effect itself.

Your questions should be clear, concise, and understandable to those outside of your field. In addition, they should generate rich data. The questions you choose will also depend on the type of research you are conducting: 

  • If you’re doing a phenomenological study, your questions might be open-ended, in order to allow participants to share their experiences in their own words.
  • If you’re doing a grounded-theory study, your questions might be focused on generating a list of categories or themes.
  • If you’re doing ethnography, your questions might be about understanding the culture you’re studying.

When you have well-written questions, it is much easier to develop your research design and collect data that accurately reflects your inquiry.

In writing your questions, it may help you to refer to a simple flowchart process for constructing questions.

[Flowchart: constructing open-ended qualitative research questions]


25 examples of expertly crafted qualitative research questions

It's easy enough to cover the theory of writing a qualitative research question, but sometimes it's best if you can see the process in practice. In this section, we'll list 25 examples of B2B and B2C-related qualitative questions.

Let's begin with five questions. We'll show you the question, explain why it's considered qualitative, and then give you an example of how it can be used in research.

1. What is the customer's perception of our company's brand?

Qualitative research questions are often open-ended and invite respondents to share their thoughts and feelings on a subject. This question is qualitative because it seeks customer feedback on the company's brand. 

This question can be used in research to understand how customers feel about the company's branding, what they like and don't like about it, and whether they would recommend it to others.

2. Why do customers buy our product?

This question is also qualitative because it seeks to understand the customer's motivations for purchasing a product. It can be used in research to identify the reasons  customers buy a certain product, what needs or desires the product fulfills for them, and how they feel about the purchase after using the product.

3. How do our customers interact with our products?

Again, this question is qualitative because it seeks to understand customer behavior. In this case, it can be used in research to see how customers use the product, how they interact with it, and what emotions or thoughts the product evokes in them.

4. What are our customers' biggest frustrations with our products?

By seeking to understand customer frustrations, this question is qualitative and can provide valuable insights. It can be used in research to help identify areas in which the company needs to make improvements with its products.

5. How do our customers feel about our customer service?

Rather than asking why customers like or dislike something, this question asks how they feel. This qualitative question can provide insights into customer satisfaction or dissatisfaction with a company. 

This type of question can be used in research to understand what customers think of the company's customer service and whether they feel it meets their needs.

20 more examples to refer to when writing your question

Now that you’re aware of what makes certain questions qualitative, let's move into 20 more examples of qualitative research questions:

  • How do your customers react when updates are made to your app interface?
  • How do customers feel when they complete their purchase through your ecommerce site?
  • What are your customers' main frustrations with your service?
  • How do people feel about the quality of your products compared to those of your competitors?
  • What motivates customers to refer their friends and family members to your product or service?
  • What are the main benefits your customers receive from using your product or service?
  • How do people feel when they finish a purchase on your website?
  • What are the main motivations behind customer loyalty to your brand?
  • How does your app make people feel emotionally?
  • For younger generations using your app, how does it make them feel about themselves?
  • What reputation do people associate with your brand?
  • How inclusive do people find your app?
  • In what ways are your customers' experiences unique to them?
  • What are the main areas of improvement your customers would like to see in your product or service?
  • How do people feel about their interactions with your tech team?
  • What are the top five reasons people use your online marketplace?
  • How does using your app make people feel in terms of connectedness?
  • What emotions do people experience when they're using your product or service?
  • Aside from the features of your product, what else about it attracts customers?
  • How does your company culture make people feel?

As you can see, these kinds of questions are completely open-ended. In a way, they allow the research and discoveries made along the way to direct the research. The questions are merely a starting point from which to explore.


Wrap-up: crafting your own qualitative research questions.

Over the course of this article, we've explored what qualitative research questions are, why they matter, and how they should be written. Hopefully you now have a clear understanding of how to craft your own.

Remember, qualitative research questions should always be designed to explore a certain experience or phenomena in-depth, in order to generate powerful insights. As you write your questions, be sure to keep the following in mind:

  • Are you being inclusive of all relevant perspectives?
  • Are your questions specific enough to generate clear answers?
  • Will your questions allow for an in-depth exploration of the topic at hand?
  • Do the questions reflect your research goals and objectives?

If you can answer "yes" to all of the questions above, and you've followed the tips for writing qualitative research questions we shared in this article, then you're well on your way to crafting powerful queries that will yield valuable insights.


Your quick guide to open-ended questions in surveys.

In this guide, find out how you can use open-ended survey questions to glean more meaningful insights from your research, how to analyze the responses, and the best practices to follow.

When you want to get more comprehensive responses to a survey – answers beyond just yes or no – you’ll want to consider open-ended questions.

But what are open-ended questions? In this guide, we’ll go through what open-ended questions are, including how they can help gather information and provide greater context to your research findings.

What are open-ended questions?

Open-ended questions can offer you incredibly helpful insights into your respondent’s viewpoints. Here’s an explanation below of what they are and what they can do:

Free-form and not governed by simple one-word answers (e.g. yes or no responses), an open-ended question allows respondents to answer in open-text format, giving them the freedom and space to think creatively and answer in as much (or as little) detail as they like.

Open-ended questions help you to see things from the respondent’s perspective, as you get feedback in their own words instead of stock answers. Also, as you’re getting more meaningful answers and accurate responses, you can better analyze sentiment amongst your audience.


Open-ended versus closed-ended questions

Open-ended questions provide more qualitative research data: contextual insights that accentuate quantitative information. With open-ended questions, you get more meaningful user research data.

Closed-ended questions, on the other hand, provide quantitative data: limited insight, but easy to analyze and compile into reports. Market researchers often add commentary to this kind of data to provide readers with background and further food for thought.

Here are the main differences with examples of open-ended and closed-ended questions:

For example, an open-ended question might be: “What do you think of statistical analysis software?”.

Whereas closed-ended questions would simply be: “Do you use statistical analysis software?” or “Have you used statistical analysis software in the past?”.

Open-ended questions afford much more freedom to respondents and can result in deeper and more meaningful insights. A closed question can be useful and fast, but doesn’t provide much context. Open-ended questions are helpful for understanding the “why”.

When and why should you use an open-ended question?

Open-ended questions are great for going more in-depth on a topic. Closed-ended questions may tell you the “what,” but open-ended questions will tell you the “why.”

Another benefit of open-ended questions is that they allow you to get answers from your respondents in their own words. For example, it can help to know the language that customers use to describe a product or feature, so that the company can match that language in its product description to increase discoverability.

Open-ended questions can also help you to learn things you didn’t expect, especially as they encourage creativity, and get answers to slightly more complex issues. For example, you could ask the question “What are the main reasons you canceled your subscription?” as a closed-ended question by providing a list of reasons (too expensive, don’t use it anymore). However, you are limited only to reasons that you can think of. But if you don’t know why people are canceling, then it might be better to ask as an open-ended question.

You might ask open-ended questions when you are doing pilot or preliminary research to validate a product idea. You can then use that information to generate closed-ended questions for a larger follow-up study.

However, it can be wise to limit the overall number of open-ended questions in a survey, because they take more time and effort to answer.

In terms of what provides more valuable information, only you can decide that based on the requirements of your research study. You also have to take into account variables such as the cost and scale of your research study, as well as when you need the information. Open-ended questions can provide you with more context, but they’re also more information to sift through, whereas closed-ended questions provide you with a tidy, finite response.

If you still want the invaluable responses and data that open-ended questions provide, software like Qualtrics Text IQ can automate this complicated process. Through AI technology, Text IQ can understand sentiment and distill thousands of open-ended responses into simplified dashboards.


Open-ended question examples

While there are no set rules on the number of open-ended questions you can ask, you will of course want to ask questions that align with your research objective.

Here are a few examples of open-ended survey questions related to your product:

  • What do you like most about this product?
  • What do you like least about this product?
  • How does our product compare to competitor products?
  • If someone asked you about our product, what would you say to them?
  • How can we improve our product?

You could even supplement a closed-ended question with an open-ended one to get more detail. For example, “How often do you use our product?” might offer multiple-choice, single-word answers such as “Frequently”, “Sometimes”, and “Never” – and if a respondent answers “Never”, you could follow up with: “If you have never used our product, why not?”. This is a really easy way to understand why potential customers don’t use your product.

Also, incorporating open-ended questions into your surveys can provide useful information for salespeople throughout the sales process. For example, you might uncover insights that help your salespeople to reposition your products or improve the way they sell to new customers based on what existing customers feel. Though you might get helpful answers from a closed-ended question, open-ended questions give you more than a surface-level insight into their sentiments, emotions and thoughts.

It doesn’t need to be complicated; it can be as simple as the examples above. The survey doesn’t need to speak for itself – let your survey respondents say everything.

Asking open-ended questions: Crafting questions that generate the best insights

Open responses can be difficult to quantify, so framing your questions correctly is key to getting useful data. Below are some examples of open-ended question mistakes to avoid.

1. Avoid questions that are too broad or vague

Example :  “What changes has your company made in the last five years due to external events?”

Problem : There are too many potential responses to this query, which means you’ll get too broad a range of answers. What kind of changes are being referred to: economic, strategic, personnel? Which external events are useful to know about? Don’t overwhelm your respondent with an overly broad question – ask the right questions and get precise answers.

Solution : Target your questions with a specific clarification of what you want. For example, “What policy changes has your company made about working from home in the last 6 months as a result of the COVID-19 pandemic?”. Alternatively, use a closed-ended question, or offer examples to give respondents something to work from.

2. Make sure that the purpose of the question is clear

Example :  “Why did you buy our product?”

Problem : This type of unclear-purpose question can lead to short, unhelpful answers. “Because I needed it” or “I fancied it” don’t necessarily give you data to work with.

Solution : Make it clear what you actually want to know. “When you bought our product, how did you intend to use it?” or “What are the main reasons you purchased [Our Brand] instead of another brand?” might be two alternatives that provide more context.

3. Keep questions simple and quick to answer

  Example :  “Please explain the required process that your brand uses to manage its contact center (i.e. technical software stack, approval process, employee review, data security, management, compliance management etc.). Please be as detailed as possible.”

Problem : The higher the level of effort, the lower the chances of getting a good range of responses or high quality answers. It’s unlikely that a survey respondent will take the time to give a detailed answer on something that’s not their favorite subject. This results in either short, unhelpful answers, or even worse, the respondent quits the survey and decides not to participate after seeing the length of time and effort required. This can end up causing bias with the type of respondents that answer the survey.

Solution : If you really need the level of detail, there are a few options to try. You can break up the question into multiple questions or share some information on why you really need this insight. You could offer a different way of submitting an answer, such as a voice to text or video recording functionality, or make the question optional to help respondents to keep progressing through the survey. Possibly the best solution is to change from open-ended questions in a survey to a qualitative research method, such as focus groups or one-to-one interviews, where lengthier responses and more effort are expected.

4. Ask only one question at a time

Example :  “When was the last time you used our product? How was your experience?”

Problem : Too many queries at once can cause a feeling of mental burden in your respondents, which means you risk losing their interest. Some survey takers might read the first question but miss the second, or forget about it when writing their response.

Solution : Only ask one thing at a time!

5. Don’t ask for a minimum word count

Example :  “Please provide a summary of why you chose our brand over a competitor brand. [Minimum 50 characters].”

Problem : Even though setting a minimum length might seem like a way to get higher quality responses, this is often not the case. Respondents may well give up, or type gibberish to meet the requirement. Ideally, the responses you gather will be the natural response of the person you’re surveying – mandating a minimum length impedes this.

Solution : Leave off the word count. If you need to encourage longer responses, you can expand the text box size to fit more words in. Offer speech to text or video recording options to encourage lengthier responses, and explain why you need a detailed answer.

6. Don’t ask an open-ended question when a closed-ended question would be enough  

Example :  “Where are you from?”

Problem : It’s harder to control the data you’ll collect when you use an open question when a closed one would work. For example, someone could respond to the above question with “The US”, “The United States” or “America”.

Solution : To save time and effort on both your side and the participant’s side, use a drop-down with standardized responses.

7. Limit the total number of open-ended questions you ask  

Example :  “How do you feel about product 1?” “How do you feel about product 2?” “How do you feel about product 3?”

Problem : An open question requires more thought and effort than a closed one. Respondents can usually answer 4-6 closed questions in the same time as only 1 open one, and prefer to be able to answer quickly.

Solution : To reduce survey fatigue, lower drop-off rates, and save costs, only ask as many questions as you think you can get an answer for. Reserve open-ended questions for the places where you really need context. Unless your respondents are highly motivated, keep it to five open-ended questions or fewer, and space them out to keep drop-offs to a minimum.

8. Don’t force respondents to answer open-ended questions

Example :  “How could your experience today have been improved? Please provide a detailed response.”

Problem : A customer may not have any suggestions for improvements. By requiring an answer, though, the customer is now forced to think of something that can be improved even if it would not make them more likely to use the service again.  Making these respondents answer means you risk bias. It could lead to prioritizing unnecessary improvements.

Solution : Give respondents the option to say “No” or “Not applicable” or “I don’t know” to queries, or to skip the question entirely.

How to analyze the results from open-ended questions

Step 1: Collect and structure your responses

Online survey tools can simplify the process of creating and sending questionnaires, as well as gathering responses to open-ended questions. These tools often have simple, customisable templates to make the process much more efficient and tailored to your requirements.

Some solutions offer different targeting variables, from geolocation to customer segments and site behavior. This allows you to offer customized promotions to drive conversions and gather the right feedback at every stage in the online journey.

Upon receipt, your data should be in a clear, structured format and you can then export it to a CSV or Excel file before automatic analysis. At this point, you’ll want to check the data (spelling, duplication, symbols) so that it’s easier for a machine to process and analyze.
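As a rough illustration of this step, the sketch below assumes the responses have been exported to a CSV with respondent_id and response columns (both column names and the file name are assumptions) and uses pandas to do the basic tidying described above.

```python
# Minimal cleaning pass before text analysis; file and column names are assumed.
import pandas as pd

df = pd.read_csv("open_ended_responses.csv")

# Trim whitespace, drop empty responses, and remove exact duplicates.
df["response"] = df["response"].astype(str).str.strip()
df = df[df["response"].str.len() > 0]
df = df.drop_duplicates(subset=["respondent_id", "response"])

# Strip stray symbols that can confuse downstream text analytics.
df["response_clean"] = df["response"].str.replace(r"[^\w\s.,!?'-]", "", regex=True)

df.to_csv("open_ended_responses_clean.csv", index=False)
print(df.head())
```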

Step 2: Use text analytics

One method that’s increasingly applied to open-ended responses is automation. These tools make it easy to extract data from open-text responses with minimal human intervention, making an open-ended response as accessible and easy to analyze as that of a closed question, but with more detail provided.

For example, you could use automated coding via artificial intelligence to look into buckets of responses to your open-ended questions and assign them accordingly for review. This can save a great deal of time, but the accuracy depends on your choice of solution.
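As an illustration of the general idea only (not the method any particular survey platform uses), the following sketch groups a handful of invented responses into buckets using TF-IDF features and k-means clustering from scikit-learn.

```python
# Generic, unsupervised "bucketing" of open-text responses; sample data invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "The new design is confusing and hard to navigate",
    "I love the cheaper pricing plan",
    "Navigation feels cluttered since the redesign",
    "Great value for the money now",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, text in zip(kmeans.labels_, responses):
    print(label, text)   # responses sharing a label land in the same bucket for review
```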

Alternatively, you could use sentiment analysis — a form of natural language processing — to systematically identify, extract and quantify information. With sentiment analysis, you can determine whether responses are positive or negative, which can be really useful for unstructured responses or for quick, large-scale reviews.
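A small open-source sketch of this kind of sentiment scoring, using NLTK's VADER lexicon on invented responses, might look like the following; commercial tools use their own models, so treat this only as an illustration of the concept.

```python
# Lexicon-based sentiment scoring with NLTK's VADER; example responses invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

responses = [
    "Customer support was friendly and fixed my issue quickly.",
    "The checkout process was frustrating and kept failing.",
]

for text in responses:
    scores = analyzer.polarity_scores(text)   # "compound" ranges from -1 to 1
    if scores["compound"] > 0.05:
        label = "positive"
    elif scores["compound"] < -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(label, round(scores["compound"], 2), text)
```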

Some solutions also offer custom programming so you can apply your own code to analyze survey results, giving complete flexibility and accuracy.

Step 3: Visualize your results

With the right data analysis and visualization tools, you can see your survey results in the format most applicable to you and your stakeholders. For example, C-Suite may want to see information displayed using graphs rather than tables — whereas your research team might want a comprehensive breakdown of responses, including response percentages for each question.

This might be easier for a survey with closed-ended questions, but with the right analysis for open-ended questions’ responses, you can easily collate response data that’s easy to quantify.

With the survey tools that exist today, it’s incredibly easy to import and analyze data at scale to uncover trends and develop actionable insights. You can also apply your own programming code and data visualization techniques to get the information you need. No matter whether you’re using open-ended questions or getting one-word answers in emojis, you’re able to surface the most useful insights for action.
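For example, once open-ended responses have been coded into themes, a simple bar chart is often all stakeholders need. The sketch below uses matplotlib with invented theme counts.

```python
# Turning coded open-ended responses into a chart; theme counts are invented.
import matplotlib.pyplot as plt

theme_counts = {"pricing": 42, "usability": 31, "support": 18, "other": 9}

plt.bar(theme_counts.keys(), theme_counts.values())
plt.ylabel("Number of responses")
plt.title("Most common themes in open-ended feedback")
plt.tight_layout()
plt.show()
```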



Open-ended interview questions and saturation


Sample size determination for open-ended questions or qualitative interviews relies primarily on custom and finding the point where little new information is obtained (thematic saturation). Here, we propose and test a refined definition of saturation as obtaining the most salient items in a set of qualitative interviews (where items can be material things or concepts, depending on the topic of study) rather than attempting to obtain all the items. Salient items have higher prevalence and are more culturally important. To do this, we explore saturation, salience, sample size, and domain size in 28 sets of interviews in which respondents were asked to list all the things they could think of in one of 18 topical domains. The domains—like kinds of fruits (highly bounded) and things that mothers do (unbounded)—varied greatly in size. The datasets comprise 20–99 interviews each (1,147 total interviews). When saturation was defined as the point where less than one new item per person would be expected, the median sample size for reaching saturation was 75 (range = 15–194). Thematic saturation was, as expected, related to domain size. It was also related to the amount of information contributed by each respondent but, unexpectedly, was reached more quickly when respondents contributed less information. In contrast, a greater amount of information per person increased the retrieval of salient items. Even small samples (n = 10) produced 95% of the most salient ideas with exhaustive listing, but only 53% of those items were captured with limited responses per person (three). For most domains, item salience appeared to be a more useful concept for thinking about sample size adequacy than finding the point of thematic saturation. Thus, we advance the concept of saturation in salience and emphasize probing to increase the amount of information collected per respondent to increase sample efficiency.

Citation: Weller SC, Vickers B, Bernard HR, Blackburn AM, Borgatti S, Gravlee CC, et al. (2018) Open-ended interview questions and saturation. PLoS ONE 13(6): e0198606. https://doi.org/10.1371/journal.pone.0198606

Editor: Andrew Soundy, University of Birmingham, UNITED KINGDOM

Received: February 16, 2018; Accepted: May 22, 2018; Published: June 20, 2018

Copyright: © 2018 Weller et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are available as an Excel file in the Supporting Information files.

Funding: This project was partially supported by the Agency for Healthcare Research and Quality (R24HS022134). Funding for the original data sets was from the National Science Foundation (#BCS-0244104) for Gravlee et al. (2013), from the National Institute on Drug Abuse (R29DA10640) for Brewer et al. (2002), and from the Air Force Office of Scientific Research for Brewer (1995). Content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Open-ended questions are used alone or in combination with other interviewing techniques to explore topics in depth, to understand processes, and to identify potential causes of observed correlations. Open-ended questions may produce lists, short answers, or lengthy narratives, but in all cases, an enduring question is: How many interviews are needed to be sure that the range of salient items (in the case of lists) and themes (in the case of narratives) is covered? Guidelines for collecting lists, short answers, and narratives often recommend continuing interviews until saturation is reached. The concept of theoretical saturation—the point where the main ideas and variations relevant to the formulation of a theory have been identified—was first articulated by Glaser and Strauss [ 1 , 2 ] in the context of how to develop grounded theory. Most of the literature on analyzing qualitative data, however, deals with observable thematic saturation—the point during a series of interviews where few or no new ideas, themes, or codes appear [ 3 – 6 ].

Since the goal of research based on qualitative data is not necessarily to collect all or most ideas and themes but to collect the most important ideas and themes, salience may provide a better guide to sample size adequacy than saturation. Salience (often called cultural or cognitive salience) can be measured by the frequency of item occurrence (prevalence) or the order of mention [ 7 , 8 ]. These two indicators tend to be correlated [ 9 ]. In a set of lists of birds, for example, robins are reported more frequently and appear earlier in responses than are penguins. Salient terms are also more prevalent in everyday language [ 10 – 12 ]. Item salience also may be estimated by combining an item’s frequency across lists with its rank/position on individual lists [ 13 – 16 ].
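As an illustration of these rank-weighted salience measures, the sketch below computes a Smith-style salience score for a few invented free lists: each item scores (L - r + 1) / L on every list that contains it (L is the list length, r the item's rank), and the scores are averaged over all lists. The published Smith and Sutrop indices used later in the paper follow this general logic, but the snippet is only an approximation of the idea.

```python
# Smith-style salience from free lists; the lists below are invented.
from collections import defaultdict

free_lists = [
    ["robin", "sparrow", "eagle"],
    ["robin", "penguin"],
    ["sparrow", "robin", "hawk", "penguin"],
]

scores = defaultdict(float)
for items in free_lists:
    L = len(items)
    for rank, item in enumerate(items, start=1):
        scores[item] += (L - rank + 1) / L   # earlier mentions weigh more

n_lists = len(free_lists)
salience = {item: s / n_lists for item, s in scores.items()}
for item, s in sorted(salience.items(), key=lambda kv: -kv[1]):
    print(f"{item:8s} {s:.2f}")   # robin should rank highest here
```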

In this article, we estimate the point of complete thematic saturation and the associated sample size and domain size for 28 sets of interviews in which respondents were asked to list all the things they could think of in one of 18 topical domains. The domains—like kinds of fruits (highly bounded) and things that mothers do (unbounded)—varied greatly in size. We also examine the impact of the amount of information produced per respondent on saturation and on the number of unique items obtained by comparing results generated by asking respondents to name all the relevant things they can with results obtained from a limited number of responses per question, as with standard open-ended questioning. Finally, we introduce an additional type of saturation based on the relative salience of items and themes— saturation in salience —and we explore whether the most salient items are captured at minimal sample sizes. A key conclusion is that saturation may be more meaningfully and more productively conceived of as the point where the most salient ideas have been obtained .

Recent research on saturation

Increasingly, researchers are applying systematic analysis and sampling theory to untangle the problems of saturation and sample size in the enormous variety of studies that rely on qualitative data—including life-histories, discourse analysis, ethnographic decision modeling, focus groups, grounded theory, and more. For example, Guest et al.[ 17 ] and others[ 18 – 19 ] found that about 12–16 interviews were adequate to achieve thematic saturation. Similarly, Hagaman and Wutich [ 20 ] found that they could reliably retrieve the three most salient themes from each of the four sites in the first 16 interviews.

Galvin[ 21 ] and Fugard and Potts[ 22 ] framed the sample size problem for qualitative data in terms of the likelihood that a specific idea or theme will or will not appear in a set of interviews, given the prevalence of those ideas in the population. They used traditional statistical theory to show that small samples retrieve only the most prevalent themes and that larger samples are more sensitive and can retrieve less prevalent themes as well. This framework can be applied to the expectation of observing or not observing almost anything. Here it would apply to the likelihood of observing a theme in a set of narrative responses, but it applies equally well for situations such as behavioral observations, where specific behaviors are being observed and sampled[ 23 ]. For example, to obtain ideas or themes that would be reported by about one out of five people (0.20 prevalence) or a behavior with the same prevalence, there is a 95% likelihood of seeing those themes or behaviors at least once in 14 interviews—if those themes or behaviors are independent.
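The arithmetic behind that figure is easy to reproduce: under independence, the chance of seeing a theme of prevalence p at least once in n interviews is 1 - (1 - p)^n. The short snippet below solves for n at p = 0.20 with a 95% target.

```python
# Sample size needed to observe a theme of given prevalence at least once.
import math

p = 0.20          # theme reported by about one in five people
target = 0.95     # desired probability of seeing the theme at least once

n = math.ceil(math.log(1 - target) / math.log(1 - p))
print(n, round(1 - (1 - p) ** n, 3))   # -> 14 interviews, ~0.956 probability
```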

Saturation and sample size have also begun to be examined with multivariate models and simulations. Tran et al. [ 24 ] estimated thematic saturation and the total number of themes from open-ended questions in a large survey and then simulated data to test predictions about sample size and saturation. They assumed that items were independent and found that sample sizes greater than 50 would add less than one new theme per additional person interviewed.

Similarly, Lowe et al. [ 25 ] estimated saturation and domain size in two examples and in simulated datasets, testing the effect of various parameters. Lowe et al. found that responses were not independent across respondents and that saturation may never be reached. In this context, non-independence refers to the fact that some responses are much more likely than others to be repeated across people. Instead of complete saturation, they suggested using a goal such as obtaining a percentage of the total domain that one would like to capture (e.g., 90%) and the average prevalence of items one would like to observe to estimate the appropriate sample size. For example, to obtain 90% of items with an average prevalence of 0.20, a sample size of 36 would be required. Van Rijnsoever [ 26 ] used simulated datasets to study the accumulation of themes across sample size increments and assessed the effect of different sampling strategies, item prevalence, and domain size on saturation. Van Rijnsoever’s results indicated that the point of saturation was dependent on the prevalence of the items.

As modeling estimates to date have been based on only one or two real-world examples, it is clear that more empirical examples are needed. Here, we use 28 real-world examples to estimate the impact of sample size, domain size, and amount of information per respondent on saturation and on the total number of items obtained. Using the proportion of people in a sample that mentioned an item as a measure of salience, we find that even small samples may adequately capture the most salient items.

Materials and methods

The datasets comprise 20–99 interviews each (1,147 total interviews). Each example elicits multiple responses from each individual in response to an open-ended question (“Name all the … you can think of”) or a question with probes (“What other … are there?”).

Data were obtained by contacting researchers who published analyses of free lists. Examples with 20 or more interviews were selected so that saturation could be examined incrementally through a range of sample sizes. Thirteen published examples were obtained on: illness terms [ 27 ] (in English and in Spanish); birds, flowers, and fabrics [ 28 ]; recreational/street drugs and fruits [ 29 ]; things mothers do (online, face-to-face, and written administration) and racial and ethnic groups [ 30 ] (online, face-to-face, and written administration). Fifteen unpublished classroom educational examples were obtained on: soda pops (Weller, n.d.); holidays (two replications), things that might appear in a living room, characteristics of a good leader (two replications), a good team (two replications), and a good team player (Johnson, n.d.); and bad words, industries (two replications), cultural industries (two replications), and scary things (Borgatti, n.d.). (Original data appear online in S1 Appendix The Original Data for the 28 Examples.)

Some interviews were face to face, some were written responses, and some were administered on-line. Investigators varied in their use of prompts, using nonspecific (What other … are there?), semantic (repeating prior responses and then asking for others), and/or alphabetic prompts (going through the alphabet and asking for others). Brewer [ 29 ] and Gravlee et al. [ 30 ] specifically examined the effect of prompting on response productivity, although the Brewer et al. examples in these analyses contain results before extensive prompting and the Gravlee et al. examples contain results after prompting. The 28 examples, their topic, source, sample size, the question used in the original data collection, and the three most frequently mentioned items appear in Table 1 . All data were collected and analyzed without personal identifying information.

[Table 1: https://doi.org/10.1371/journal.pone.0198606.t001]

For each example, statistical models describe the pattern of obtaining new or unique items with incremental increases in sample size. Individual lists were first analyzed with Flame [ 31 , 32 ] to provide the list of unique items for each example and the Smith [ 14 ] and Sutrop [ 15 ] item salience scores. Duplicate items due to spelling, case errors, spacing, or variations were combined.

To help develop an interviewing stopping rule, a simple model was used to predict the unique number of items contributed by each additional respondent. Generalized linear models (GLM, log-linear models for count data) were used to predict the unique number of items added by each respondent (incrementing sample size), because the number of unique items added by each respondent (count data) is approximately Poisson distributed. For each example, models were fit with ordinary least squares linear regression, Poisson, and negative binomial probability distributions. Respondents were assumed to be in random order, in the order in which they occurred in each dataset, although in some cases they were in the order they were interviewed. Goodness-of-fit was compared across the three models with minimized deviance (the Akaike Information Criterion, AIC) to find the best-fitting model [ 33 ]. Using the best-fitting model for each example, the point of saturation was estimated as the point where the expected number of new items was one or less. Sample size and domain size were estimated at the point of saturation, and total domain size was estimated for an infinite sample size from the model for each example as the limit of a geometric series (assuming a negative slope).
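The snippet below is a rough sketch of this kind of model comparison, not the authors' actual code: it fits linear, Poisson, and negative binomial models to invented counts of new items per respondent using statsmodels, picks the best fit by AIC, and reads off the first respondent for whom fewer than one new item is expected.

```python
# Compare OLS, Poisson, and negative binomial fits for new-items-per-respondent
# counts (invented data), then locate where the expected count drops below 1.
import numpy as np
import statsmodels.api as sm

new_items = np.array([12, 9, 7, 6, 4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 0, 1, 0, 0, 1])
order = np.arange(1, len(new_items) + 1)      # position in the interview sequence
X = sm.add_constant(order)

fits = {
    "ols": sm.OLS(new_items, X).fit(),
    "poisson": sm.GLM(new_items, X, family=sm.families.Poisson()).fit(),
    "negbin": sm.GLM(new_items, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit(),
}
best_name, best = min(fits.items(), key=lambda kv: kv[1].aic)
print("best by AIC:", best_name)

pred = best.predict(X)
saturation_at = int(order[pred < 1][0]) if (pred < 1).any() else None
print("expected <1 new item from respondent:", saturation_at)
```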

Because the GLM models above used only incremental sample size to predict the total number of unique items (domain size) and ignored variation in the number of items provided by each person and variation in item salience, an additional analysis was used to estimate domain size while accounting for subject and item heterogeneity. For that analysis, domain size was estimated with a capture-recapture estimation technique used for estimating the size of hidden populations. Domain size was estimated from the total number of items on individual lists and the number of matching items between pairs of lists with a log-linear analysis. For example, population size can be estimated from the responses of two people as the product of their number of responses divided by the number of matching items (assumed to be due to chance). If Person#1 named 15 illness terms and Person#2 named 31 terms and they matched on five illnesses, there would be 41 unique illness terms and the estimated total number of illness terms based on these two people would be (15 x 31) /5 = 93.
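As a toy version of that two-person calculation (with invented illness lists rather than the study's data), the estimate is simply the product of the two list lengths divided by the number of matching items:

```python
# Two-person capture-recapture sketch: estimate = (list1 size * list2 size) / matches.
list1 = {"flu", "cold", "asthma", "cancer", "diabetes"}            # hypothetical
list2 = {"flu", "cancer", "malaria", "measles", "mumps", "gout"}   # hypothetical

matches = len(list1 & list2)                 # illnesses named by both people
unique_items = len(list1 | list2)            # distinct illnesses observed so far
estimated_domain_size = len(list1) * len(list2) / matches
print(unique_items, estimated_domain_size)   # 9 observed, estimated domain of 15.0
```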

A log-linear solution generalizes this logic from a 2 x 2 table to a 2^K table [ 34 ]. The capture–recapture solution estimates total population size for hidden populations using the pattern of recapture (matching) between pairs of samples (respondents). An implementation in R with GLM uses a log-linear form to estimate population size based on recapture rates (Rcapture [ 35 , 36 ]). In this application, it is assumed that the population does not change between interviews (a closed population), and models are fit with: (1) no variation across people or items (M0); (2) variation only across respondents (Mt); (3) variation only across items (Mh); and (4) variation due to an interaction between people and items (Mth). For each model, estimates were fit with binomial, Chao's lower bound, Poisson, Darroch log-normal, and gamma distributions [ 35 ]. Variation among items (heterogeneity) is a test for a difference in the probabilities of item occurrence and, in this case, is equivalent to a test for a difference in item salience among the items. Because of the large number of combinations needed to estimate these models, the Rcapture software provides estimates for all four models only up to a sample size of 10. For larger sample sizes (all examples in this study had sample sizes of 20 or larger), only model 1 with no effects for people or items (the binomial model) and model 3 with item effects (item salience differences) were tested. Therefore, models were fit at a sample size of 10 to test all four models, and then again at the total available sample size.
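To give a feel for the heterogeneity-aware estimates without reproducing the full Rcapture log-linear machinery, the sketch below computes Chao's lower bound, one of the estimators named above, from a hypothetical vector of per-item mention counts. The closed form used here (observed items plus f1²/2f2, where f1 and f2 are the numbers of items mentioned exactly once and exactly twice) is a standard statement of that bound, not the exact model fitted in the study.

```python
# Chao's lower bound on domain size from per-item mention frequencies (hypothetical data).
from collections import Counter

mention_counts = [1, 1, 1, 1, 2, 2, 3, 5, 8, 12]   # hypothetical: times each observed item was named
freq_of_freq = Counter(mention_counts)

s_obs = len(mention_counts)                 # items observed at least once
f1 = freq_of_freq[1]                        # items mentioned by exactly one person
f2 = freq_of_freq[2]                        # items mentioned by exactly two people

chao_lower_bound = s_obs + f1 ** 2 / (2 * f2) if f2 > 0 else float("inf")
print(chao_lower_bound)                     # 10 + 16/4 = 14.0 for these counts
```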

Descriptive information for the examples appears in Table 2 . The first four columns list the name of the example, the sample size in the original study, the mean list length (with the range of the list length across respondents), and the total number of unique items obtained. For the Holiday1 example, interviews requested names of holidays (“Write down all the holidays you can think of”), there were 24 respondents, the average number of holidays listed per person (list length) was 13 (ranging from five to 29), and 62 unique holidays were obtained.

[Table 2. Descriptive information, saturation points, and domain size estimates for the 28 examples: https://doi.org/10.1371/journal.pone.0198606.t002]

Predicting thematic saturation from sample size

The free-list counts showed a characteristic descending curve: the first person listed all new themes, and each additional person repeated some themes already reported while adding a few new ones, with fewer and fewer new items added as the sample size increased. All examples were fit using the GLM log-link and identity-link with normal, Poisson, and negative binomial distributions. The negative binomial model resulted in a better fit than the Poisson (or identity-link) models for most full-listing examples, providing the best fit to the downward-sloping curve with a long tail. Of the 28 examples, only three were not best fit by negative binomial log-link models: the best-fitting model for two examples was the Poisson log-link model (GoodTeam1 and GoodTeam2Player) and one was best fit by the negative binomial identity-link model (CultInd1).

Sample size was a significant predictor of the number of new items for 21 of the 28 examples. Seven examples did not result in a statistically significant fit (Illnesses-US, Holiday2, Industries1, Industries2, GoodTLeader, GoodTeam2Player, and GoodTeam3). The best-fitting model was used to predict the point of saturation and domain size for all 28 examples ( S2 Appendix GLM Statistical Model Results for the 28 Examples).

Using the best-fitting GLM models, we estimated the predicted sample size for reaching saturation. Saturation was defined as the point where fewer than one new item would be expected for each additional person interviewed. Solving each model for the sample size (X) at which only one new item is expected per person (Y = 1) and rounding up to the nearest integer gives the point of saturation (Y ≤ 1.0). Table 2 , column five, reports the sample size where saturation was reached (NSAT). For Holiday1, one or fewer new items were expected per person once X = 16.98; rounding up to the next integer gives the saturation point (NSAT = 17). For the Fruit domain, saturation occurred at a sample size of 15.
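In symbols (a sketch of the logic rather than the paper's exact notation), the fitted log-link curve and the resulting saturation point are:

$$\hat{Y}(n) = e^{\beta_0 + \beta_1 n}, \qquad \hat{Y}(n) \le 1 \iff n \ge \frac{-\beta_0}{\beta_1}\ (\beta_1 < 0), \qquad N_{SAT} = \left\lceil \frac{-\beta_0}{\beta_1} \right\rceil .$$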

Saturation was reached at sample sizes of 15–194, with a median sample size of 75. Only five examples (Holiday1, Fruits, Birds, Flowers, and Drugs) reached saturation within the original study sample size, and most examples did not reach saturation even after four or five dozen interviews. A more liberal definition of saturation, defined as the point where fewer than two new items would be expected for each additional person (solving for Y ≤ 2), resulted in a median sample size for reaching saturation of 50 (range 10–146).

Some domains were well bounded and were elicited with small sample sizes. Some were not. In fact, most of the distributions exhibited a very long tail—where many items were mentioned by only one or two people. Fig 1 shows the predicted curves for all examples for sample sizes of 1 to 50. Saturation is the point where the descending curve crosses Y = 1 (or Y = 2). Although the expected number of unique ideas or themes obtained for successive respondents tends to decrease as the sample size increases, this occurs rapidly in some domains and slowly or not at all in other domains. Fruits, Holiday1, and Illness-G are domains with the three bottom-most curves and the steepest descent, indicating that saturation was reached rapidly and with small sample sizes. The three top-most curves are the Moms-F2F, Industries1, and Industries2 domains, which reached saturation at very large sample sizes or essentially did not reach saturation.

[Fig 1. Predicted number of new items per additional respondent for sample sizes 1 to 50, full listing: https://doi.org/10.1371/journal.pone.0198606.g001]

Estimating domain size

Because saturation appeared to be related to domain size and some investigators state that a percentage of the domain might be a better standard [ 25 ], domain size was also estimated. First, total domain size was estimated with the GLM models obtained above. Domain size was estimated at the point of saturation by cumulatively summing the number of items obtained for sample sizes n = 1, n = 2, n = 3, … to NSAT. For the Holiday1 sample, summing the number of predicted unique items for sample sizes n = 1 to n = 17 should yield 51 items ( Table 2 , Domain Size at Saturation, DSAT). Thus, the model predicted that approximately 51 holidays would be obtained by the time saturation was reached.

The total domain size was estimated using a geometric series, summing the estimated number of unique items obtained cumulatively across people in an infinitely large sample. For the Holiday1 domain, the total domain size was estimated as 57 (see Table 2 , Total Domain Size DTOT). So for the Holiday1 domain, although the total domain size was estimated to be 57, the model predicted that saturation would occur when the sample size reached 17, and at that point 51 holidays should be retrieved. Model predictions were close to the empirical data, as 62 holidays were obtained with a sample of 24.
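In the same notation as the saturation formula above (again a sketch, not the paper's exact notation), the two domain-size quantities are:

$$D_{SAT} = \sum_{n=1}^{N_{SAT}} e^{\beta_0 + \beta_1 n}, \qquad D_{TOT} = \sum_{n=1}^{\infty} e^{\beta_0 + \beta_1 n} = \frac{e^{\beta_0 + \beta_1}}{1 - e^{\beta_1}} \quad (\beta_1 < 0),$$

where the infinite sum converges as a geometric series because each additional respondent is expected to add a constant fraction $e^{\beta_1} < 1$ of the previous respondent's new items.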

Larger sample sizes were needed to reach saturation in larger domains; the largest domains were MomsF2F, Industries1, and Industries2, each estimated to have about 1,000 items and to require more than 100 interviews to approach saturation. Saturation (Y ≤ 1) tended to occur at about 90% of the total domain size. For Fruits, the domain size at saturation was 51 and the total domain size was estimated at 53 (51/53 = 96%); for MomsF2F, the domain size at saturation was 904 and the total domain size was 951 (95%).

Second, total domain size was estimated using a capture-recapture log-linear model with a parameter for item heterogeneity [ 35 , 36 ]. A descending, concave curve is diagnostic of item heterogeneity and was present in almost all of the examples. The estimated population sizes using Rcapture appear in the last column of Table 2 . When the gamma distribution provided the best fit to the response data, the domain size estimate increased by an order of magnitude, as did its standard error. When responses fit a gamma distribution, the domain may be extremely large and may not readily reach saturation.

Inclusion of the pattern of matching items across people with a parameter for item heterogeneity (overlap in items between people due to salience) resulted in larger population size estimates than those above without heterogeneity. Estimation from the first two respondents was not helpful and provided estimates much lower than those from any of the other methods. The simple model without subject or item effects (the binomial model) did not fit any of the examples. Estimation from the first 10 respondents in each example suggested that more variation was due to item heterogeneity than to item and subject heterogeneity, so we report only the estimated domain size with the complete samples accounting for item heterogeneity in salience.

Overall, the capture–recapture estimates incorporating the effect of salience were larger than the GLM results above, which had no parameter for salience. For Fruits, the total domain size was estimated as 45 from the first two people; as 88 (gamma distribution estimate) from the first 10 people with item heterogeneity and as 67 (Chao lower bound estimate) with item and subject heterogeneity; and, using the total sample ( n = 33), the binomial model (without any heterogeneity parameters) estimated the domain size as 62 (but did not fit the data), while with item heterogeneity the domain size was estimated as 73 (the best-fitting model used the Chao lower bound estimate). Thus, the total domain size for Fruits estimated with a simple GLM model was 53 and with a capture–recapture model (including item heterogeneity) was 73 ( Table 2 , last column). Similarly, the domain size for Holiday1 was estimated at 57 with the simple GLM model and 100 with the capture–recapture model. These estimates suggest that even the simplest domains can be large and that including item heterogeneity increases domain size estimates.

Saturation and the number of responses per person

The original examples used exhaustive listing and obtained from about a half dozen responses per person (GoodLeader and GoodTeam2Player) to almost three dozen responses per person (Industries1 and Industries2). A question is whether saturation and the number of unique ideas obtained might be affected by the number of responses per person. Because open-ended questions may obtain only a few responses, we limited the responses to a maximum of three per person, truncating lists to see the effect on the number of items obtained at different sample sizes and on the point of saturation.

When more information (a greater number of responses) was collected per person, more unique items were obtained even at smaller sample sizes ( Table 3 ). The amount of information retrieved per sample can be thought of in terms of bits of information and is roughly the average number of responses per person times the sample size. All other things being equal, larger sample sizes with less probing should therefore approach the amount of information obtained with smaller samples and more probing. So, for a given sample size, a study with six responses per person should obtain twice as much information as a study with three responses per person. In the GoodLeader, GoodTeam1, and GoodTeam2Player examples, the average list length was approximately six, and when the sample size was 10 (6 x 10 = 60 bits of information), approximately twice as many items were obtained as when lists were truncated to three responses (3 x 10 = 30 bits of information).

[Table 3. Items obtained and saturation when responses are truncated to a maximum of three per person: https://doi.org/10.1371/journal.pone.0198606.t003]

Increasing the sample size proportionately increases the amount of information, but not always. For Scary Things, about 5.6 times more information was collected per person with full listing (16.9 average list length) than with three or fewer responses per person (3.0 list length), and the number of items obtained in a sample of size 10 with full listing (102) was roughly 5.6 times greater than that obtained with three responses per person (18 items). However, at a sample size of 20 the number of unique items with full listing was only 4.5 times larger (153) than the number obtained with three responses per person (34). Across examples, interviews that obtained more information per person were more productive and obtained more unique items overall, even with smaller sample sizes, than did interviews with only three responses per person.

Using the same definition of saturation (the point where fewer than one new item would be expected for each additional person interviewed), less information per person resulted in reaching saturation at much smaller sample sizes. Fig 2 shows the predicted curves for all examples when the number of responses per person is three or fewer. The Holiday examples reached saturation (fewer than one new item per person) with a sample size of 17 (Holiday1, with 13.0 average responses per person) and 87 (Holiday2, with 17.8 average responses) ( Table 2 ), but reached saturation with a sample size of only 9 (both Holiday1 and Holiday2) when there were a maximum of three responses per person ( Table 3 , last column). With three or fewer responses per person, the median sample size for reaching saturation was 16 (range: 4–134). Thus, fewer responses per person resulted in reaching saturation at smaller sample sizes but yielded fewer domain items.

[Fig 2. Predicted number of new items per additional respondent when responses are limited to three per person: https://doi.org/10.1371/journal.pone.0198606.g002]

Salience and sample size

Saturation did not seem to be a useful guide for determining a sample size stopping point, because it was sensitive both to domain size and the number of responses per person. Since a main goal of open-ended interviews is to obtain the most important ideas and themes, it seemed reasonable to consider item salience as an alternative guide to assist with determining sample size adequacy. Here, the question would be: Whether or not complete saturation is achieved, are the most salient ideas and themes captured in small samples?

A simple and direct measure of item salience is the proportion of people in a sample who mention an item [ 37 ]. We therefore examined the correlation between the sample proportions and two salience indices that combine the proportion of people mentioning an item with the item's list position [ 13 – 15 ]. Because the item frequency distributions have long tails (many items are mentioned by only one or two people), we focused only on items mentioned by two or more people (24–204 items) and used the full lists provided by each respondent. The average Spearman correlation between the Smith and Sutrop indices in the 28 examples was 0.95 (average Pearson correlation 0.96, 95% CI: 0.92, 0.98); between the Smith index and the sample proportions it was 0.89 (average Pearson 0.96, 95% CI: 0.915, 0.982); and between the Sutrop index and the sample proportions it was 0.86 (average Pearson 0.88, 95% CI: 0.753, 0.943). Thus, the three measures were highly correlated across 28 examples that varied in content, number of items, and sample size, validating the measurement of a single construct.
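To make the indices concrete, the snippet below computes one common formulation of Smith's salience index alongside the simple mention proportion for a few hypothetical free lists; the index weights each mention by its inverse rank within the respondent's list, which is why the two measures tend to track each other closely.

```python
# Smith's salience index (one common formulation) vs. simple mention proportion,
# computed from hypothetical free lists.
from collections import defaultdict

lists = [                                   # hypothetical free lists, most salient items first
    ["apple", "banana", "orange", "kiwi"],
    ["banana", "apple", "grape"],
    ["orange", "apple"],
]
N = len(lists)

smith = defaultdict(float)
for items in lists:
    L = len(items)
    for rank, item in enumerate(items, start=1):
        smith[item] += (L - rank + 1) / L   # inverse-rank weight within this list
smith = {item: total / N for item, total in smith.items()}

proportion = {item: sum(item in lst for lst in lists) / N for item in smith}
print(smith)
print(proportion)
```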

To test whether the most salient ideas and themes were captured in smaller samples or with limited probing, we used the sample proportions to estimate item salience and compared the set of most salient items across sample sizes and across more and less probing. Specifically, we defined the set of salient items for each example as those mentioned by 20% or more of respondents in a sample of size 20 (because all examples had at least 20 respondents), using the full-listing data (because those lists were the most detailed). We then compared this set of salient items with the set of items obtained at smaller sample sizes and with fewer responses per person.

The set size for salient items (prevalence ≥ 20%) was not related to overall domain size, but was an independent characteristic of each domain and whether there were core or prototypical items with higher salience. Most domains had about two dozen items mentioned by 20% or more of the original listing sample ( n = 20), but some domains had only a half dozen or fewer items (GoodLeader, GoodTeam2Player, GoodTeam3). With full listing, 26 of 28 examples captured more than 95% of the salient ideas in the first 10 interviews: 18 examples captured 100%, eight examples captured 95–99%, one example captured 91%, and one captured 80% ( Table 4 ). With a maximum of three responses per person, about two-thirds of the salient items (68%) were captured with 20 interviews and about half of the items (53%) were captured in the first 10 interviews. With a sample size of 20, a greater number of responses per person resulted in approximately 50% more items than with three responses per person. Extensive probing resulted in a greater capture of salient items even with smaller sample sizes.
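A minimal sketch of that salient-set comparison (with hypothetical lists and item names, invented for illustration): define the salient set from the first 20 full lists, then measure what share of it has appeared by the k-th interview.

```python
# Coverage of the salient set (items mentioned by >= 20% of the first 20 respondents)
# within the first k interviews, using hypothetical free lists.
from collections import Counter

def salient_set(lists, threshold=0.20):
    counts = Counter(item for lst in lists for item in set(lst))
    return {item for item, c in counts.items() if c / len(lists) >= threshold}

def coverage(lists, reference, k):
    seen = {item for lst in lists[:k] for item in lst}
    return len(reference & seen) / len(reference)

lists = [["apple", "banana"], ["apple", "pear"], ["banana", "apple", "mango"]] * 7  # hypothetical
reference = salient_set(lists[:20])
print(coverage(lists, reference, k=10))   # share of salient items seen by interview 10
```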

[Table 4. Capture of salient items (prevalence ≥ 20%) at different sample sizes with full listing and with a maximum of three responses per person: https://doi.org/10.1371/journal.pone.0198606.t004]

Summary and discussion

The strict notion of complete saturation as the point where few or no new ideas are observed is not a useful concept to guide sample size decisions, because it is sensitive to domain size and to the amount of information contributed by each respondent. Larger sample sizes are necessary to reach saturation for large domains, and it is difficult to know, when starting a study, just how large the domain or set of ideas will be. Also, when respondents provide only a few responses or codes each, saturation may be reached quickly. So, if complete thematic saturation is observed, it is difficult to know whether the domain is small or whether the interviewer did only minimal probing.

Rather than attempting to reach complete saturation with an incremental sampling plan, a more productive focus might be on gaining more depth with probing and seeking the most salient ideas. Rarely do we need all the ideas and themes; rather, we tend to be looking for the important or salient ones. A greater number of responses per person resulted in the capture of a greater number of salient items. With exhaustive listing, the first 10 interviews obtained 95% of the salient ideas (defined here as item prevalence of 0.20 or more), while only 53% of those ideas were obtained in 10 interviews with three or fewer responses per person.

We used a simple statistical model to predict the number of new items added by each additional person and found that complete saturation was not a helpful concept for free lists, as the median sample size needed to get fewer than one new idea per person was 75. It is important to note that we assumed interviews were analyzed in random order or in the order in which they were conducted and were not reordered to any kind of optimum. Reordering respondents to maximally fit a saturation curve may make it appear that saturation has been reached at a smaller sample size [ 31 ].

Most of the examples examined in this study needed larger sample sizes to reach saturation than most qualitative researchers use. Mason's [ 6 ] review of 298 PhD dissertations in the United Kingdom, all based on qualitative data, found a mean sample size of 27 (range 1–95). Here, few of the examples reached saturation with fewer than four dozen interviews. Even with large sample sizes, some domains may continue to add new items. For very large domains, an incremental sampling strategy may lead to dozens and dozens of interviews and still not reach complete saturation. The problem is that most domains have very long tails in the distribution of observed items, with many items mentioned by only one or two people. A more liberal definition of complete saturation (allowing up to two new items per person) allowed saturation to occur at smaller sample sizes, but saturation still did not occur until a median sample size of 50.

In the examples we studied, most domains were large, and domain size affected when saturation occurred. Unfortunately, there did not seem to be a good or simple way to tell at the outset whether a domain would be large or small. Most domains were much larger than expected, even on simple topics. Domain size varied by substantive content, sample, and degree of heterogeneity in salience. Domain size and saturation were sample dependent, as the holiday examples showed. Also, the domain size estimates do not mean that there are only 73 fruits; rather, the pattern of naming fruits in this particular sample indicated a set size of 73.

It was impossible to know, when starting, if a topic or domain was small and would require 15 interviews to reach saturation or if the domain was large and would require more than 100 interviews to reach saturation. Although eight of the examples had sample sizes of 50–99, sample sizes in qualitative studies are rarely that large. Estimates of domain size were even larger when models incorporated item heterogeneity (salience). The Fruit example had an estimated domain size of 53 without item heterogeneity, but 73 with item heterogeneity. The estimated size of the Fabric domain increased from 210 to 753 when item heterogeneity was included.

The number of responses per person affected both saturation and the number of obtained items. A greater number of responses per person resulted in a greater yield of domain items. The bits of information obtained in a sample can be approximated by the product of the average number of responses per person (list length) and the number of people in a sample. However, doubling the sample size did not necessarily double the unique items obtained because of item salience and sampling variability. When only a few items are obtained from each person, only the most salient items tend to be provided by each person and fewer items are obtained overall.

Brewer [ 29 ] explored the effect of probing or prompting on interview yield, examining a few simple prompts: simply asking for more responses, providing alphabetical cues, or repeating the last response(s) and asking again for more information. Semantic cueing, repeating prior responses and asking for more information, increased the yield by approximately 50%. The results here indicated a similar pattern. When more information was elicited per person, about 50% more domain items were retrieved than when people provided a maximum of three responses.

Interviewing to obtain multiple responses also affects saturation. With few responses per person, complete saturation was reached rapidly. Without extensive interview probing, investigators may reach saturation quickly and assume they have a sample sufficient to retrieve most of the domain items. Unfortunately, differences in salience among items may lead respondents to repeat similar, highly salient ideas without elaborating on less salient or less prevalent ones, resulting in a set containing only the ideas with the very highest salience. If an investigator wishes to obtain most of the ideas that are relevant in a domain, a small sample with extensive probing (listing) will prove much more productive than a large sample with casual or no probing.

Recently, Galvin [ 21 ] and Fugard and Potts [ 22 ] framed sample size estimation for qualitative interviewing in terms of binomial probabilities. However, results for the 28 examples with multiple responses per person suggest that this may not be appropriate because of the interdependencies among items due to salience. The capture–recapture analysis indicated that none of the 28 examples fit the binomial distribution. Framing the sample size problem in terms that a specific idea or theme will or will not appear in a set of interviews may facilitate thinking about sample size, but such estimates may be misleading.

If a binomial distribution is assumed, sample size can be estimated from the prevalence of an idea in the population, from how confident you want to be in obtaining that idea, and from how many times you would like the idea to appear, at a minimum, across participants in your interviews. A binomial estimate assumes independence (no difference in salience across items) and predicts that if an idea or theme actually occurs in 20% of the population, there is a 90% or higher likelihood of obtaining that theme at least once in 11 interviews and a 95% likelihood in 14 interviews. In contrast, our results indicated that heterogeneity in salience across items causes these estimates to underestimate the necessary sample size: all items with ≥ 20% prevalence were captured within 10 interviews in only 64% of the examples with full listing and in only 4% (one example) with three or fewer responses per person.
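For reference, the binomial calculation behind those figures is:

$$P(\text{theme seen at least once}) = 1 - (1 - p)^{n} \ \ge\ C \quad\Longrightarrow\quad n \ \ge\ \frac{\ln(1 - C)}{\ln(1 - p)},$$

so with p = 0.20, n ≥ ln(0.10)/ln(0.80) ≈ 10.3 (11 interviews for 90% confidence) and n ≥ ln(0.05)/ln(0.80) ≈ 13.4 (14 interviews for 95%). As the paragraph notes, these are best-case figures that assume items are equally salient and independent.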

Lowe et al. [ 25 ] also found that items were not independent and that binomial estimates significantly underestimated sample size. They proposed sample size estimation from the desired proportion of items at a given average prevalence. Their formula predicts that 36 interviews would be necessary to capture 90% of items with an average prevalence of 0.20, regardless of degree of heterogeneity in salience, domain size, or amount of information provided per respondent. Although they included a parameter for non-independence, their model does not seem to be accurate for cases with limited responses or for large domains.

Conclusions

In general, probing and prompting during an interview seem to matter more than the number of interviews. Thematic saturation may be an illusion and may result from a failure to use in-depth probing during the interview. A small sample ( n = 10) can collect some of the most salient ideas, but a small sample with extensive probing can collect most of the salient ideas. A larger sample ( n = 20) is more sensitive and can collect more prevalent and more salient ideas, as well as less prevalent ideas, especially with probing. Some domains, however, may not have items with high prevalence; several of the domains examined had only a half dozen or fewer items with a prevalence of 20% or more. The direct link between salience and population prevalence offers a rationale for sample size and facilitates study planning. If the goal is to get a few widely held ideas, a small sample size will suffice. If the goal is to explore a larger range of ideas, a larger sample size or extensive probing is needed. Sample sizes of one to two dozen interviews should be sufficient with exhaustive probing (listing interviews), especially in a coherent domain. Empirically observed stabilization of item salience may indicate an adequate sample size.

A next step would be to test whether these conclusions and recommendations hold for other types of open-ended questions, such as narratives, life histories, and open-ended questions in large surveys. Open-ended survey questions are inefficient and result in thin or sparse data with few responses per person because of the lack of prompting. Tran et al. [ 24 ] reported an item prevalence of 0.025 in answers to a large Internet survey, suggesting few responses per person. In contrast, we used an item prevalence of 0.20 and higher to identify the most salient items in each domain, and the highest prevalence in each domain ranged from 0.30 to 0.80 ( Table 1 ). The inefficiency of open-ended survey questions is likely due to their dual purpose: they try both to define the range of possible answers and to get the respondent's answer. A better approach might be to precede survey development with a dozen free-listing interviews to get the range of possible responses and then use that content to design structured survey questions.

Another avenue for investigation is how our findings on thematic saturation compare to theoretical saturation in grounded theory studies [ 2 , 38 , 39 ]. Grounded theory studies rely on theoretical sampling, an iterative procedure in which a single interview is coded for themes; the next respondent is selected to discover new themes and relationships between themes; and so on, until no more relevant themes or inter-relationships are discovered and a theory is built to explain the facts and themes of the case under study. In contrast, this study examined thematic saturation, the simple accumulation of ideas and themes, and found that saturation in salience was more attainable, and perhaps more important, than thematic saturation.

Supporting information

S1 Appendix. The original data for the 28 examples.

https://doi.org/10.1371/journal.pone.0198606.s001

S2 Appendix. GLM statistical model results for the 28 examples.

https://doi.org/10.1371/journal.pone.0198606.s002

Acknowledgments

We would like to thank Devon Brewer and Kristofer Jennings for providing feedback on an earlier version of this manuscript. We would also like to thank Devon Brewer for providing data from his studies on free-lists.

References
  • 2. Glaser BG, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. New Brunswick, NJ: Aldine; 1967.
  • 3. Lincoln YS, Guba EG. Naturalistic inquiry. Beverly Hills, CA: Sage; 1985.
  • 4. Morse JM. Strategies for sampling. In: Morse JM, editor. Qualitative nursing research: A contemporary dialogue. Rockville, MD: Aspen Press; 1989. pp. 117–131.
  • 6. Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research. 2010;11. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 (accessed December 26, 2017).
  • 10. Geeraerts D. Theories of lexical semantics. Oxford: Oxford University Press; 2010.
  • 12. Berlin B. Ethnobiological classification. In: Rosch E, Lloyd BB, editors. Cognition and categorization. Hillsdale, NJ: Erlbaum; 1978. pp. 9–26.
  • 33. SAS Institute Inc. GENMOD. SAS/STAT 13.1 user's guide. Cary, NC: SAS Institute Inc.; 2013.
  • 34. Bishop Y, Fienberg S, Holland P. Discrete multivariate analysis: Theory and practice. Cambridge, MA: MIT Press; 1975.
  • 36. Rivest LP, Baillargeon S. Rcapture: Loglinear models for capture-recapture experiments. R package documentation, CRAN; February 19, 2015.
  • 37. Weller SC, Romney AK. Systematic data collection (Vol. 10). Sage; 1988.
  • 38. Morse JM. Theoretical saturation. In: Lewis-Beck MS, Bryman A, Liao TF, editors. The Sage encyclopedia of social science research methods. Thousand Oaks, CA: Sage; 2004. p. 1123. Available from http://sk.sagepub.com/reference/download/socialscience/n1011.pdf
  • 39. Tay I. To what extent should data saturation be used as a quality criterion in qualitative research? LinkedIn; 2014. Available from https://www.linkedin.com/pulse/20140824092647-82509310-to-what-extent-should-data-saturation-be-used-as-a-quality-criterion-in-qualitative-research


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time, so we make sure to update these trend questions on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking and how they view a particular issue, or bring to light issues the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
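As a simple, hedged illustration of these two practices (shuffling nominal answer choices per respondent, while keeping ordinal scales in order but reversing them for a random half of the sample), here is a small Python sketch; the option wordings are placeholders, not Pew's programmed questionnaire.

```python
# Per-respondent randomization of answer options: shuffle nominal choices,
# reverse (but never shuffle) ordinal scales for a random half of respondents.
import random

ISSUE_OPTIONS = ["the economy", "health care", "terrorism", "energy policy", "immigration"]  # placeholders
ORDINAL_SCALE = ["legal in all cases", "legal in most cases",
                 "illegal in most cases", "illegal in all cases"]

def options_for_respondent(rng: random.Random):
    nominal = ISSUE_OPTIONS[:]                 # copy so each respondent gets a fresh order
    rng.shuffle(nominal)                       # random order spreads order effects across options
    reverse = rng.random() < 0.5               # half the sample sees the scale reversed
    ordinal = list(reversed(ORDINAL_SCALE)) if reverse else list(ORDINAL_SCALE)
    return nominal, ordinal

rng = random.Random(2024)                      # seeded for reproducibility in this example
for respondent_id in range(3):
    print(respondent_id, options_for_respondent(rng))
```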

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive), and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


Qualitative study design: Surveys & questionnaires

Qualitative surveys use open-ended questions to produce long-form written or typed answers. Questions aim to reveal opinions, experiences, narratives or accounts. They are often a useful precursor to interviews or focus groups, as they help identify initial themes or issues to explore further in the research. Surveys can be used iteratively, being changed and modified over the course of the research to elicit new information.

Structured Interviews may follow a similar form of open questioning.  

Qualitative surveys frequently include quantitative questions to establish elements such as age, nationality etc. 

Qualitative surveys aim to elicit a detailed response to an open-ended topic question in the participant’s own words. As with quantitative surveys, there are three main methods of administering qualitative surveys: face-to-face, telephone, and online. Each method has strengths and limitations.

Face-to-face surveys

  • The researcher asks participants one or more open-ended questions about a topic, typically while in view of the participant’s facial expressions and other behaviours while answering. Being able to view the respondent’s reactions enables the researcher to ask follow-up questions to elicit a more detailed response, and to follow up on any facial or behavioural cues that seem at odds with what the participant is explicitly saying.
  • Face-to-face qualitative survey responses are likely to be audio recorded and transcribed into text to ensure all detail is captured; however, some surveys may include both quantitative and qualitative questions using a structured or semi-structured format of questioning, and in this case the researcher may simply write down key points from the participant’s response.

Telephone surveys

  • Similar to the face-to-face method, but without the researcher being able to see the participant’s facial or behavioural responses to the questions asked. This means the researcher may miss key cues that would help them ask further questions to clarify or extend participant responses, and instead must rely on vocal cues.

Online surveys

  • Open-ended questions are presented to participants in written format via email or within an online survey tool, often alongside quantitative survey questions on the same topic (a minimal sketch of such a mixed survey follows after this list).
  • Researchers may provide some contextualising information or key definitions to help ‘frame’ how participants view the qualitative survey questions, since they can’t directly ask the researcher about it in real time. 
  • Participants are asked to respond to questions in text ‘in some detail’ to explain their perspective or experience to researchers; this can result in a wide diversity of responses (from brief to detailed).
  • Researchers cannot always probe or clarify participant responses to online qualitative survey questions, which can leave some responses cryptic or vague to the researcher.
  • Online surveys can collect a greater number of responses in a set period of time compared to face to face and phone survey approaches, so while data may be less detailed, there is more of it overall to compensate.
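
As a concrete, purely illustrative sketch of the mixed format mentioned above, the Python snippet below defines a small online survey with one closed demographic item and two open-ended questions, then exports only the open-ended answers to CSV for later qualitative coding. The question texts, field names, file path, and example responses are all assumptions for demonstration, not any particular survey tool’s API.

```python
import csv

# A mixed online survey defined as plain dictionaries: one closed item plus
# two open-ended items that ask for written answers "in some detail".
QUESTIONS = [
    {"id": "age_band", "type": "closed", "text": "Which age band are you in?",
     "options": ["18-29", "30-44", "45-64", "65+"]},
    {"id": "experience", "type": "open",
     "text": "Please describe, in some detail, your experience with the service."},
    {"id": "improvements", "type": "open",
     "text": "What, if anything, would you change about the service, and why?"},
]

def export_open_responses(responses, path="open_ended_responses.csv"):
    """Write only the open-ended answers to CSV for later qualitative coding."""
    open_ids = [q["id"] for q in QUESTIONS if q["type"] == "open"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["respondent_id"] + open_ids)
        for rid, answers in responses.items():
            writer.writerow([rid] + [answers.get(qid, "") for qid in open_ids])

# Two fabricated respondents, showing the brief-to-detailed spread of answers
# that online qualitative surveys typically collect.
export_open_responses({
    "r001": {"age_band": "30-44", "experience": "Fine overall.", "improvements": ""},
    "r002": {"age_band": "65+",
             "experience": "The booking took three attempts because the "
                           "confirmation email never arrived, so I rang instead.",
             "improvements": "Clearer error messages when a booking fails."},
})
```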

Strengths

Qualitative surveys can help a study early on, in finding out the issues/needs/experiences to be explored further in an interview or focus group.

Surveys can be amended and re-run based on responses providing an evolving and responsive method of research. 

Online surveys receive typed responses, reducing the amount of transcription and interpretation required of the researcher.

Online surveys can be delivered broadly across a wide population with asynchronous delivery/response. 

Limitations

Hand-written notes will need to be transcribed (time-consuming) for digital study and kept physically for reference. 

Distance (or online) communication can be open to misinterpretations that cannot be corrected at the time. 

Questions can be leading or misleading, eliciting answers that are not core to the research subject. Researchers must aim to write neutral questions that do not give away the researcher’s expectations.

Even with transcribed or digital responses, analysis can be long and detailed, though not as much as for an interview.

Surveys may be left incomplete if performed online or taken by research assistants not well trained in giving the survey/structured interview. 

Narrow sampling may skew the results of the survey. 

Example questions

Here are some example survey questions which are open-ended and require a long-form written response:

  • Tell us why you became a doctor? 
  • What do you expect from this health service? 
  • How do you explain the low levels of financial investment in mental health services? (WHO, 2007) 

Example studies

  • Davey, L., Clarke, V., & Jenkinson, E. (2019). Living with alopecia areata: An online qualitative survey study. British Journal of Dermatology, 180, 1377-1389. Retrieved from https://onlinelibrary-wiley-com.ezproxy-f.deakin.edu.au/doi/10.1111%2Fbjd.17463
  • Richardson, J. (2004). What patients expect from complementary therapy: A qualitative study. American Journal of Public Health, 94(6), 1049–1053. Retrieved from http://ezproxy.deakin.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=s3h&AN=13270563&site=eds-live&scope=site
  • Saraceno, B., van Ommeren, M., Batniji, R., Cohen, A., Gureje, O., Mahoney, J., ... & Underhill, C. (2007). Barriers to improvement of mental health services in low-income and middle-income countries. The Lancet, 370(9593), 1164-1174. Retrieved from https://www-sciencedirect-com.ezproxy-f.deakin.edu.au/science/article/pii/S014067360761263X?via%3Dihub

The WHO report below provides more detail on the Lancet article, including the actual survey questions:

  • World Health Organization. (2007). Expert opinion on barriers and facilitating factors for the implementation of existing mental health knowledge in mental health services. Geneva: World Health Organization. https://apps.who.int/iris/handle/10665/44808
  • Green, J., & Thorogood, N. (2018). Qualitative methods for health research. SAGE. Retrieved from http://ezproxy.deakin.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=cat00097a&AN=deakin.b4151167&authtype=sso&custid=deakin&site=eds-live&scope=site
  • Jansen, H. The logic of qualitative survey research and its position in the field of social research methods. Forum Qualitative Sozialforschung, 11(2). Retrieved from http://www.qualitative-research.net/index.php/fqs/article/view/1450/2946
  • Nielsen Norman Group. (2019). 28 Tips for Creating Great Qualitative Surveys. Retrieved from https://www.nngroup.com/articles/qualitative-surveys/

6 Main Qualitative Questions Examples

Qualitative vs. quantitative research, the importance of qualitative questions, key elements of effective qualitative research questions, role in the research design, 6 types and examples of qualitative questions, how to choose qualitative research questions.

  • FAQs about Qualitative Research Questions

Qualitative research uncovers the details of human behavior, beliefs, and feelings. It gives us insights that numbers can’t always tell.

These research questions help us understand the “how” and “why” of things. 

In this article, we’ll look at six examples of good qualitative questions. We aim to highlight how picking the right questions can improve your study.

It is important to understand the differences between qualitative and quantitative research.

Qualitative research questions aim to explore concepts, experiences, and perspectives. They offer the qualitative research expert an in-depth insight into the subject.

On the other hand, quantitative research questions focus on measurable aspects. They seek statistical comparisons to reach factual conclusions.

Both quantitative and qualitative questions have important roles in research. They serve unique purposes and provide different types of data.

Unlike quantitative research, qualitative questions aren’t about numbers and statistical analysis; they’re about understanding the reasons behind the data, for example from a focus group.

Why pick qualitative research? When conducting qualitative research, you want to know why someone does something, not just count how many times they do it.

You ask, and you listen. That’s the power of qualitative research. The right question is a key that unlocks valuable knowledge.

Effective qualitative research aims to unveil hidden truths. But how do you achieve it? With thought-provoking questions.

Here are the key elements of qualitative research questions that support in-depth exploration:

Open-ended and Exploratory

Qualitative research questions aim to understand the “how” and “why” of a topic. They invite people to share their views and stories.

Open-ended and exploratory questions help researchers grasp complex issues. These questions allow for diverse and detailed answers to a particular subject.

Clarity and Focus

Qualitative research questions need to be clear, focused, and brief. They help ensure the research meets its goals. 

Being specific guides data collection and analysis, leading to valuable findings.

Relationships and Personal Experiences

Qualitative research questions examine how different factors relate to personal experiences and seek to understand why people act in certain ways. 

They also explore how people respond to their surroundings, including culture and workplace rules.

Ethical Considerations

When creating qualitative research questions, it’s important to think about ethics. Questions need to respect participants’ dignity, privacy, and independence.

This makes sure that the research does not cause harm or distress. Ethics also matter when explaining and sharing results, as researchers must present data truthfully and with care.

The right qualitative research questions are crucial in the design of research projects for several reasons:

  • Guidance on Research Methods: Directs the choice of qualitative research methods. Options include:
    • Focus Groups: Small groups discuss topics with a moderator.
    • In-Depth Interviews: Offers detailed insights from individual viewpoints.
    • Qualitative Surveys: Gathers open-ended responses from a broad audience.
  • Ensuring the Right Tools are Used: Matching objectives with the most suitable research tools. Enables thorough investigation and captures the complexity of experiences.
  • Facilitating a Clear Understanding: Aims to uncover not just what is happening but why. Explores thoughts, feelings, behaviors, and the effects of various influences.
  • Informing the Research Design: Influences all design aspects, including participant selection and analysis framework. Ensures ethical standards guide the research process.

Here are six types of qualitative questions with examples:

1. Descriptive

These questions are aimed at describing the characteristics or features of a product.

  • Example 1: How do users describe their initial impressions when they first interact with our new software interface?
  • Example 2: What are the specific colors and design elements that users notice about the new smartphone model when they see it for the first time?

2. Exploratory

Exploratory questions are designed to investigate how things work or how users interact with a product.

  • Example 1: What strategies do users employ to navigate through the features of our newly launched app?
  • Example 2: How do users attempt to solve problems when they encounter errors using our digital service platform?

3. Experiential

These questions focus on the user’s experiences and emotions related to the product.

  • Example 1: Can you describe a memorable experience you had while using our product?
  • Example 2: What emotions do you feel when using our product under stressful conditions?

4. Comparative

Comparative questions look at differences between products, user groups, or other variables.

  • Example 1: How do new users’ experiences with our product compare to those of long-term users?
  • Example 2: In what ways does our product perform better or worse than our main competitor’s product in similar conditions?

5. Process-oriented

These questions delve into the processes or sequences of actions related to using the product.

  • Example 1: Can you walk me through the process you typically follow when setting up our product for the first time?
  • Example 2: What steps do you take when you troubleshoot an issue with our product?

6. Theoretical

Theoretical questions aim to understand the underlying principles or theories that explain user behavior or product dynamics.

  • Example 1: What theories can explain why users prefer our product’s design over traditional designs?
  • Example 2: Based on your knowledge, what psychological principles might influence how users adapt to our product’s innovative features?

When selecting qualitative questions, the aim is to deeply understand user interactions, perceptions, and experiences with the product. 

Here are some key considerations for choosing good qualitative research questions:

  • Define Your Objectives  

Start by clearly defining the research objective of your product testing. What specific aspects of the product are you looking to evaluate? Are you interested in usability, aesthetics, functionality, or user satisfaction? Your objectives will guide the types of questions you need to ask. For example, if user satisfaction is your focus, you might ask about the user’s emotional response to the product.

  • Consider the Type of Qualitative Research  

Different types of qualitative methods—such as ethnographic, narrative, phenomenological, or grounded theory—may influence the style and structure of your questions. For instance, narrative research focuses on stories and experiences, so your questions should encourage storytelling about product use.

  • Ensure Questions are Open-Ended  

Qualitative questions should be open-ended to allow for detailed responses that can reveal insights not anticipated by the researcher. Instead of asking, “Do you like our product?” which prompts a yes or no answer, ask, “How do you feel about our product?” to encourage a more detailed and nuanced response.

  • Be Clear and Concise  

While questions should allow for open-ended answers, they must also be clear and concise to avoid confusing the respondent. Ambiguity can lead to unreliable qualitative data, as different participants might interpret the questions differently.

  • Sequence the Questions Logically  

The order in which you ask questions can impact the flow of conversation and the quality of information gathered. Start with more general questions to make the respondent comfortable before moving to more specific or sensitive topics. This sequence helps build rapport and can lead to more honest and detailed responses later in the discussion.

  • Consider the Participant  

Tailor your questions to fit the background and experience level of your participants. Questions that are too technical or too basic can frustrate users or fail to elicit useful information. Understanding your audience allows you to frame questions that are appropriately challenging and engaging.

  • Pilot Test Your Questions  

Before finalizing your set of questions, conduct a pilot test with a small group of participants. This testing can reveal if any questions are confusing or ineffective at eliciting useful responses. Feedback from this phase can be invaluable in refining your questions.

  • Be Prepared to Adapt  

Finally, while it’s important to prepare your questions carefully, also be flexible during actual interactions. The conversation may reveal new paths of inquiry that are worth exploring. Being adaptive can help you capture deep insights that strictly adhering to a prepared list of questions might miss.

FAQs about Qualitative Research Questions 

What is a qualitative research question?

Qualitative research questions focus on ways to gather deep insights into people’s experiences, beliefs, and perceptions. Such questions invite detailed narrative responses.

Can qualitative research questions change during the study?

It’s not uncommon for qualitative research questions to evolve during the course of a study. As preliminary data is collected and analyzed, new insights may emerge that prompt a qualitative researcher to refine their questions. 

How are qualitative questions used in business?

Businesses use qualitative questions to uncover valuable insights. They can explore customer behavior, employee satisfaction, or market trends. One example could be: “What factors drive consumer loyalty to our brand?”

Are there specific words to use in qualitative research questions?

Yes, use words like “describe,” “explain,” and “how” to frame qualitative questions. These terms promote more detailed and comprehensive answers. They are key to qualitative analysis.

Open-Ended vs. Closed Questions in User Research

Maria Rosala

January 26, 2024

When conducting user research, asking questions helps you uncover insights. However, how you ask questions impacts what and how much you can discover .

In This Article:

Open-ended vs. closed questions, why asking open-ended questions is important, how to ask open-ended questions.

There are two types of questions we can use in research studies: open-ended and closed.

  Open-ended questions allow participants to give a free-form text answer. Closed questions (or closed-ended questions) restrict participants to one of a limited set of possible answers.

Open-ended questions encourage exploration of a topic; a participant can choose what to share and in how much detail. Participants are encouraged to give a reasoned response rather than a one-word answer or a short phrase.

Examples of open-ended questions include:

  • Walk me through a typical day.
  • Tell me about the last time you used the website.
  • What are you thinking?
  • How did you feel about using the website to do this task?

Note that the first two open-ended questions are commands but act as questions. These are common questions asked in user interviews to get participants to share stories. Questions 3 and 4 are common questions that a usability-test facilitator may ask during and after a user attempts a task, respectively.

Closed questions have a short and limited response. Examples of closed questions include:

  • What’s your job title?
  • Have you used the website before?
  • Approximately, how many times have you used the website?
  • When was the last time you used the website?

Strictly speaking, questions 3 and 4 would only be considered “closed” if they were accompanied by answer options, such as (a) never, (b) once, (c) two times or more. This is because the number of times and days could be infinite. That being said, in UX, we treat questions like these as closed questions.

In a dialog between a facilitator and a user, closed questions provide a short, clarifying response, while open-ended questions result in the user describing an experience.

Using Closed Questions in Surveys

Closed questions are heavily utilized in surveys because the responses can be analyzed statistically (and surveys are usually a quantitative exercise). When used in surveys, they often take the form of multiple-choice questions or rating-scale items, rather than open-text questions. This way, the respondent has the answer options provided, and researchers can easily quantify how popular certain responses are. That being said, some closed questions could be answered through an open-text field to provide a better experience for the respondent. Consider the following closed questions:

  • In which industry do you work?
  • What is your gender?

Both questions could be presented as multiple-choice questions in a survey. However, the respondent might find it more comfortable to share their industry and gender in a free-text field if they feel the survey does not provide an option that directly aligns with their situation or if there are too many options to review.

Another reason closed questions are used in surveys is that they are much easier to answer than open-ended ones. A survey with many open-ended questions will usually have a lower completion rate than one with more closed questions.
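
To make the contrast concrete, here is a minimal Python sketch showing why closed responses are straightforward to quantify while open-text answers need to be read and coded. The answers below are fabricated purely for illustration.

```python
from collections import Counter

# Fabricated answers to "In which industry do you work?"
closed_answers = ["Finance", "Healthcare", "Finance", "Education", "Finance"]
open_answers = [
    "I work across two sectors, mostly public health with some consulting.",
    "Hard to say - my role sits somewhere between IT and operations.",
]

# Quantifying how popular each closed response is takes a single line:
print(Counter(closed_answers).most_common())
# -> [('Finance', 3), ('Healthcare', 1), ('Education', 1)]

# The open answers resist simple counting, which is exactly why a free-text
# option can capture situations the fixed answer list would miss.
for answer in open_answers:
    print("-", answer)
```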

Using Closed Questions in Interviews and Usability Tests

Closed questions are used occasionally in interviews and usability tests to get clarification and extra details. They are often used when asking followup questions. For example, a facilitator might ask:

  • Has this happened to you before?
  • When was the last time this happened?
  • Was this a different time than the time you mentioned previously?

Closed questions help facilitators gather important details. However, they should be used sparingly in qualitative research as they can limit what you can learn.

The greatest benefit of open-ended questions is that they allow you to find more than you anticipate. You don’t know what you don’t know.   People may share motivations you didn’t expect and mention behaviors and concerns you knew nothing about. When you ask people to explain things, they often reveal surprising mental models , problem-solving strategies, hopes, and fears.

On the other hand, closed questions stop the conversation. If an interviewer or usability-test facilitator were to ask only closed questions, the conversation would be stilted and surface-level. The facilitator might not learn important things they didn’t think to ask because closed questions eliminate surprises: what you expect is what you get.

Closed Questions Can Sometimes Be Leading

When you ask closed questions, you may accidentally reveal what you’re interested in and prime participants to volunteer only specific information. This is why researchers use the funnel technique , where the session or followup questions begin with broad, open-ended questions before introducing specific, closed questions.

Not all closed questions are leading. That being said, it’s easy for a closed question to become leading if it suggests an answer.

Leading closed questions can usually be reworked so that they are no longer leading, often by making them open-ended.

One way to spot a leading, closed question is to look at how the question begins. Leading closed questions often start with the words “did,” “was,” or “is.” Open-ended questions often begin with “how” or “what.”
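
That rule of thumb can even be expressed as a rough screening script for an interview guide. The Python sketch below is a heuristic of my own built from the stems mentioned above, not part of any published method, and it will inevitably misjudge some questions.

```python
# Rough stems only: "did/was/is"-style openers suggest closed (and possibly
# leading) questions, while "how/what"-style openers suggest open-ended ones.
CLOSED_STEMS = {"did", "was", "is", "are", "do", "does", "have", "has", "can"}
OPEN_STEMS = {"how", "what", "why", "tell", "walk", "describe"}

def question_style(question: str) -> str:
    words = question.strip().lower().split()
    first = words[0] if words else ""
    if first in OPEN_STEMS:
        return "open-ended"
    if first in CLOSED_STEMS:
        return "closed - check whether it is leading"
    return "unclear - review manually"

for q in [
    "Did you find the checkout page easy to use?",
    "How did you go about completing the checkout?",
    "Tell me about the last time you used the website.",
]:
    print(f"{question_style(q):40} {q}")
```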

New interviewers and usability-test facilitators often struggle to ask enough open-ended questions. A new interviewer might be tempted to ask many factual, closed questions in quick succession, such as the following:

  • Do you have children?
  • Do you work?
  • How old are you?
  • Do you ever [insert behavior]?

However, these questions could be answered in response to a broad, open-ended question like “Tell me a bit about yourself.”

When constructing an interview guide for a user interview, try to think of a broad, open-ended version of a closed question that might get the participant talking about the question you want answered, like in the example above.

When asking questions in a usability test, try to favor questions that begin with “how” or “what” over “do” or “did.”

Another tip to help you ask open-ended questions is to use one of the following question stems:

  • Walk me through [how/what]...
  • Tell me a bit about…
  • Tell me about a time where…

Finally, you can ask open-ended questions when probing. Probing questions are open-ended and are used in response to what a participant shares. They are designed to solicit more information. You can use the following probing questions in interviews and usability tests.

  • Tell me more about that.
  • What do you mean by that?
  • Can you expand on that?
  • What do you think about that?
  • Why do you think that?

Ask open-ended questions in conversations with users to discover unanticipated answers and important insights. Use closed questions to gather additional small details, gain clarification, or when you want to analyze responses quantitatively.

Why Are Open-Ended Questions Important In Qualitative Research?

Nov 8, 2023 | User Acceptance Testing, User Research

Qualitative research is crucial in understanding the complexities of human behaviour, experiences, and perspectives.

It allows researchers to explore the richness and depth of individuals’ thoughts, feelings, decision-making processes and motivations.

One of the critical tools in qualitative research is the use of open-ended questions. Open-ended questions invite respondents to provide detailed and personalised responses—allowing for a more nuanced understanding of the topic at hand.

This article aims to explore the importance of open-ended questions in qualitative research and share some actionable tips for crafting practical questions. So, let’s dig in!

What is qualitative research?

Before delving into the significance of open-ended questions, let’s first understand what qualitative research entails.

Qualitative research is an exploratory approach that aims to understand the meaning and interpretation individuals attach to their experiences.

Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research emphasises capturing the richness and depth of human experiences through methods like interviews, think-aloud usability tests, focus groups, and observations.

Objectives of qualitative research in usability testing

In the context of usability testing, qualitative research helps uncover users’ thoughts, emotions, and attitudes towards a product or service.

Fundamentally, it provides valuable insights into user behaviour, preferences, pain points, and areas for improvement.

By leveraging open-ended questions, researchers can uncover the underlying reasons behind users’ actions and gain a deeper understanding of their needs and expectations.

Differences between qualitative and quantitative research methods

Qualitative and quantitative research methods typically differ in their approaches, data collection techniques, and analysis.

For context, quantitative research focuses on numerical data, statistical analysis, and generalizability, while qualitative research seeks to explore and understand specific contexts, meanings, and interpretations.

Furthermore, qualitative research is more subjective, allowing for greater depth and richness of data, while quantitative research prioritises objectivity and generalizability.

What are open-ended questions?

Open-ended questions are questions that don’t have predefined or limited answer options. They encourage respondents to provide detailed and personalised responses, allowing them to express their thoughts, feelings, and experiences in their own words.

Unlike closed-ended questions, which may be answered with a simple “yes” or “no” or by selecting from a list of options, open-ended questions invite respondents to provide more elaborate and nuanced responses.

Characteristics of open-ended questions

Open-ended questions are characterised by several key elements that distinguish them from closed-ended questions, namely:

  • Freedom of response: Respondents can express themselves freely with open-ended questions because there are no predetermined answer options.
  • Richness of information: Open-ended questions encourage respondents to provide detailed and in-depth responses, providing researchers with a wealth of information.
  • Flexibility: Open-ended questions give respondents the flexibility to respond in a way that makes sense to them, allowing for diverse perspectives and insights.
  • Exploration of complexity: These questions help explore complex phenomena, opinions, and experiences that cannot be easily captured by closed-ended questions.

Importance of open-ended questions in qualitative research

Open-ended questions play a vital role in qualitative research for several reasons, namely:

Encouraging detailed responses

Open-ended questions enable respondents to provide more detailed and nuanced responses. By avoiding predetermined options, researchers can capture the richness and complexity of individuals’ thoughts, feelings, and experiences.

This depth of information is invaluable in gaining a comprehensive understanding of the research topic.

Facilitating a deeper understanding

Open-ended questions provide researchers with a better understanding of participants’ perspectives, beliefs, attitudes, and experiences.

By allowing individuals to express themselves freely, researchers can gain insights into the underlying reasons behind their actions and decision-making processes.

This deeper understanding is crucial for uncovering the underlying motivations and meanings that drive human behaviour.

Flexibility and adaptability

Open-ended questions offer flexibility and adaptability in qualitative research. They give participants a platform to present fresh themes, concepts, and viewpoints that the researcher might not have anticipated.

This flexibility allows for the emergence of unexpected insights and encourages a more exploratory and dynamic research process.

Tips for crafting effective open-ended questions

Open-ended questions, designed to elicit rich and authentic responses, are essential tools for researchers seeking to unravel the depth of participant perspectives.

Here are some actionable tips to help you master the art of crafting effective, open-ended questions:

1. Align questions with objectives

Before penning down your open-ended questions, it’s crucial to align them with the overarching objectives of your research. Clear alignment ensures that each question serves a purpose in contributing to the depth and breadth of your study.

For example, if your objective is to understand user satisfaction with a new software interface, frame questions that specifically address different aspects of the UX design, such as navigation, font readability, and functionality.

2. Clarity and comprehension

Ambiguity in questions can hinder the quality of responses. Participants should easily comprehend the intent of each question, allowing them to provide insightful and relevant answers.

Always ensure that your questions are clear, concise, and free of jargon. Test your questions beforehand on a diverse audience to identify any potential confusion and refine them accordingly.

3. Maintain neutrality

A neutral tone in your questions is essential to minimise bias. Participants should feel free to express their genuine opinions without worrying about the researcher’s judgment.

Avoid injecting personal opinions, judgements, or assumptions into your questions. Instead, present inquiries in an objective and non-directive manner to foster an open and honest exchange.

4. Encourage openness

Creating an environment that encourages participants to open up is vital for qualitative research. Open-ended questions should invite participants to share their thoughts and experiences freely.

Begin questions with phrases that signal openness, such as “Tell me about…” or “Describe your experience with…” Such prompts set the stage for participants to share their perspectives openly.

5. Use probing questions

While open-ended questions provide an initial exploration, supplementing them with probing questions allows researchers to delve deeper into specific aspects.

Probing questions guide participants to elaborate on their initial responses.

After receiving an open-ended response, follow up with probing questions that seek clarification, ask for examples, or explore the participant’s feelings in more detail.

This layered approach enriches the data collected.

6. Frame questions that encourage respondents to share stories

Human experiences are often best expressed through stories. Crafting questions that invite participants to share narratives can provide a deeper understanding of their perspectives.

Furthermore, always ask questions that prompt participants to recount specific experiences or share anecdotes related to the topic. Remember, stories add context, emotion, and a human touch to the research data.

All things considered, the effectiveness of open-ended questions lies not only in their form but in the thoughtful application of these tips.

Common mistakes to avoid with open-ended questions

Pitfalls lurk along this path of crafting and using open-ended questions. It is important to be mindful of the common mistakes to ensure the authenticity and reliability of the data collected.

Let’s explore these potential pitfalls and learn how to navigate around them, shall we?

1. Leading questions

Leading questions subtly guide participants toward a particular response, often unintentionally injecting the researcher’s bias into the inquiry.

These questions can steer participants away from expressing their genuine thoughts and experiences.

Craft open-ended questions with a neutral tone, avoiding any language that may suggest a preferred answer. By maintaining objectivity, researchers create a safe space for participants to share their perspectives without feeling influenced.

Example of a Leading Question:

Leading: “Don’t you think the new feature significantly improved your user experience?”

Revised: “How has the new feature impacted your user experience?”

2. Double-barreled questions

Double-barreled questions address more than one issue in a single inquiry, potentially causing confusion for participants. This can lead to ambiguous or unreliable responses as participants may not clearly distinguish between the two issues presented.

Always break down complex inquiries into single-issue questions, as this not only enhances clarity but also allows participants to provide specific and focused responses to each component of the question.

Example of a Double-Barreled Question:

Double-barreled: “How satisfied are you with the product’s functionality and design?”

Revised: “How satisfied are you with the product’s functionality? How about its design?”

3. Overly complex questions

Complex questions, laden with jargon or convoluted language, can overwhelm participants. When faced with complexity, participants may struggle to comprehend the question, leading to vague or incomplete responses that do not truly reflect their experiences.

Frame questions in clear and straightforward language to ensure participants easily grasp the intent. A well-understood question encourages participants to provide thoughtful and meaningful responses.

Example of an Overly Complex Question:

Complex: “In what ways do the multifaceted functionalities of the application contribute to your overall user satisfaction?”

Revised: “How do the application’s features contribute to your overall satisfaction?”

In summary, open-ended questions are indispensable tools in qualitative research.

They allow UX researchers to explore the complexity and diversity of human experiences, thoughts, and perspectives.

Open-ended questions provide valuable insights that go beyond mere numerical data. They encourage detailed and personalised responses.

Remember to align the questions with your research objectives, ensuring clarity and neutrality and encouraging openness and storytelling.

Researchers often learn more about their subjects and find valuable insights that drive meaningful research outcomes when they use open-ended questions.

83 Qualitative Research Questions & Examples

Qualitative research questions help you understand consumer sentiment. They’re strategically designed to show organizations how and why people feel the way they do about a brand, product, or service. This approach looks beyond the numbers and is one of the most telling types of market research a company can do.

The UK Data Service describes this perfectly, saying, “The value of qualitative research is that it gives a voice to the lived experience .”

Read on to see seven use cases and 83 qualitative research questions, with the added bonus of examples that show how to get similar insights faster with Similarweb Research Intelligence.

What is a qualitative research question?

A qualitative research question explores a topic in-depth, aiming to better understand the subject through interviews, observations, and other non-numerical data. Qualitative research questions are open-ended, helping to uncover a target audience’s opinions, beliefs, and motivations.

How to choose qualitative research questions?

Choosing the right qualitative research questions can be instrumental to the success of your research and the findings you uncover. Here’s my six-step process for choosing the best qualitative research questions.

  • Start by understanding the purpose of your research. What do you want to learn? What outcome are you hoping to achieve?
  • Consider who you are researching. What are their experiences, attitudes, and beliefs? How can you best capture these in your research questions?
  • Keep your questions open-ended. Qualitative research questions should not be too narrow or too broad. Aim to ask questions specific enough to provide meaningful answers but broad enough to allow for exploration.
  • Balance your research questions. You don’t want all of your questions to be the same type. Aim to mix up your questions to get a variety of answers.
  • Ensure your research questions are ethical and free from bias. Always have a second (and third) person check for unconscious bias.
  • Consider the language you use. Your questions should be written in a way that is clear and easy to understand. Avoid using jargon, acronyms, or overly technical language.

Types of qualitative research questions

For a question to be considered qualitative, it usually needs to be open-ended. However, as I’ll explain, there can sometimes be a slight cross-over between quantitative and qualitative research questions.

Open-ended questions

These allow for a wide range of responses and can be formatted with multiple-choice answers or a free-text box to collect additional details. The next two types of qualitative questions are considered open questions, but each has its own style and purpose.

  • Probing questions are used to delve deeper into a respondent’s thoughts, such as “Can you tell me more about why you feel that way?”
  • Comparative questions ask people to compare two or more items, such as “Which product do you prefer and why?” These qualitative questions are highly useful for understanding brand awareness, competitive analysis, and more.

Closed-ended questions

These ask respondents to choose from a predetermined set of responses, such as “On a scale of 1-5, how satisfied are you with the new product?” While they’re traditionally quantitative, adding a free text box that asks for extra comments on why a specific rating was chosen will provide qualitative insights alongside their respective quantitative research question responses; a minimal sketch of analysing such paired responses follows after the list below.

  • Ranking questions get people to rank items in order of preference, such as “Please rank these products in terms of quality.” They’re advantageous in many scenarios, like product development, competitive analysis, and brand awareness.
  • Likert scale questions ask people to rate items on a scale, such as “On a scale of 1-5, how satisfied are you with the new product?” Ideal for placement on websites and emails to gather quick, snappy feedback.
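
Here is the minimal sketch promised above: each fabricated record pairs a 1-to-5 Likert rating with an optional free-text comment, so the closed score can be summarised quantitatively while the comments are pulled out for qualitative coding. The field names and data are illustrative assumptions only.

```python
from collections import Counter
from statistics import mean

# Fabricated survey records: a Likert rating plus an optional comment field.
records = [
    {"rating": 5, "comment": "Setup was quick and the defaults made sense."},
    {"rating": 2, "comment": "The export kept failing on large files."},
    {"rating": 4, "comment": ""},
    {"rating": 2, "comment": "Support took three days to reply."},
]

ratings = [r["rating"] for r in records]
print("Distribution:", dict(sorted(Counter(ratings).items())))
print("Mean rating:", round(mean(ratings), 2))

# Pull the comments behind the low ratings for qualitative coding of the "why".
low_score_comments = [r["comment"] for r in records if r["rating"] <= 2 and r["comment"]]
print("Reasons behind low scores:", low_score_comments)
```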

Qualitative research question examples

There are many applications of qualitative research and lots of ways you can put your findings to work for the success of your business. Here’s a summary of the most common use cases for qualitative questions and examples to ask.

Qualitative questions for identifying customer needs and motivations

These types of questions help you find out why customers choose products or services and what they are looking for when making a purchase.

  • What factors do you consider when deciding to buy a product?
  • What would make you choose one product or service over another?
  • What are the most important elements of a product that you would buy?
  • What features do you look for when purchasing a product?
  • What qualities do you look for in a company’s products?
  • Do you prefer localized or global brands when making a purchase?
  • How do you determine the value of a product?
  • What do you think is the most important factor when choosing a product?
  • How do you decide if a product or service is worth the money?
  • Do you have any specific expectations when purchasing a product?
  • Do you prefer to purchase products or services online or in person?
  • What kind of customer service do you expect when buying a product?
  • How do you decide when it is time to switch to a different product?
  • Where do you research products before you decide to buy?
  • What do you think is the most important customer value when making a purchase?

Qualitative research questions to enhance customer experience

Use these questions to reveal insights into how customers interact with a company’s products or services and how those experiences can be improved.

  • What aspects of our product or service do customers find most valuable?
  • How do customers perceive our customer service?
  • What factors are most important to customers when purchasing?
  • What do customers think of our brand?
  • What do customers think of our current marketing efforts?
  • How do customers feel about the features and benefits of our product?
  • How do customers feel about the price of our product or service?
  • How could we improve the customer experience?
  • What do customers think of our website or app?
  • What do customers think of our customer support?
  • What could we do to make our product or service easier to use?
  • What do customers think of our competitors?
  • What is your preferred way to access our site?
  • How do customers feel about our delivery/shipping times?
  • What do customers think of our loyalty programs?

Qualitative research question example for customer experience

  • Question: What is your preferred way to access our site?
  • Insight sought: How mobile-dominant are consumers? Should you invest more in mobile optimization or mobile marketing?
  • Challenges with traditional qualitative research methods: While this type of question is ideal if you have a large database to survey, when placed on a site or sent to a limited customer list it only gives you a point-in-time perspective from a limited group of people.
  • A new approach: You can get better, broader insights quicker with Similarweb Digital Research Intelligence. To fully inform your research, you need to know preferences at the industry or market level.
  • ⏰ Time to insight: 30 seconds
  • ✅ How it’s done: Similarweb offers multiple ways to answer this question without going through a lengthy qualitative research process.

First, I’m going to do a website market analysis of the banking credit and lending market in the finance sector to get a clearer picture of industry benchmarks.

Here, I can view device preferences across any industry or market instantly. It shows me the device distribution for any country across any period. This clearly answers the question of how mobile-dominant my target audience is, with 59.79% opting to access the site via desktop vs. 40.21% via mobile.

I then use the trends section to show me the exact split between mobile and web traffic for each key player in my space. Let’s say I’m about to embark on a competitive campaign that targets customers of Chase and Bank of America; I can see both their audiences are highly desktop-dominant compared with others in their space.

Qualitative question examples for developing new products or services

Research questions like this can help you understand customer pain points and give you insights to develop products that meet those needs.

  • What is the primary reason you would choose to purchase a product from our company?
  • How do you currently use products or services that are similar to ours?
  • Is there anything that could be improved with products currently on the market?
  • What features would you like to see added to our products?
  • How do you prefer to contact a customer service team?
  • What do you think sets our company apart from our competitors?
  • What other product or service offerings would like to see us offer?
  • What type of information would help you make decisions about buying a product?
  • What type of advertising methods are most effective in getting your attention?
  • What is the biggest deterrent to purchasing products from us?

Qualitative research question example for service development

  • Question: What type of advertising methods are most effective in getting your attention?
  • Insight sought: The marketing channels and/or content that performs best with a target audience.
  • Challenges with traditional qualitative research methods: When using qualitative research surveys to answer questions like this, the sample size is limited, and bias could be at play.
  • A better approach: The most authentic insights come from viewing real actions and results that take place in the digital world. No questions or answers are needed to uncover this intel, and the information you seek is readily available in less than a minute.
  • ⏰ Time to insight: 5 minutes
  • ✅ How it’s done: There are a few ways to approach this. You can either take an industry-wide perspective or hone in on specific competitors to unpack their individual successes. Here, I’ll quickly show a snapshot with a whole-market perspective.

Using the market analysis element of Similarweb Digital Intelligence, I select my industry or market, which I’ve kept as banking and credit. A quick click into marketing channels shows me which channels drive the highest traffic in my market. Taking direct traffic out of the equation, for now, I can see that referrals and organic traffic are the two highest-performing channels in this market.

Similarweb allows me to view the specific referral partners and pages across these channels. 

Looking closely at referrals in this market, I’ve chosen chase.com and its five closest rivals. I select referrals in the channel traffic element of marketing channels. I see that Capital One is a clear winner, gaining almost 25 million visits due to referral partnerships.

Next, I get to see exactly who is referring traffic to Capital One and the total traffic share for each referrer. I can see the growth as a percentage and how that has changed, along with an engagement score that rates the average engagement level of that audience segment. This is particularly useful when deciding on which new referral partnerships to pursue.  

Once I’ve identified the channels and campaigns that yield the best results, I can then use Similarweb to dive into the various ad creatives and content that have the greatest impact.

These ads are just a few of those listed in the creatives section from my competitive website analysis of Capital One. You can filter this list by the specific campaign, publishers, and ad networks to view those that matter to you most. You can also discover video ad creatives in the same place too.

In just five minutes ⏰ 

  • I’ve captured audience loyalty statistics across my market
  • Spotted the most competitive players
  • Identified the marketing channels my audience is most responsive to
  • I know which content and campaigns are driving the highest traffic volume
  • I’ve created a target list for new referral partners and have been able to prioritize this based on results and engagement figures from my rivals
  • I can see the types of creatives that my target audience is responding to, giving me ideas for ways to generate effective copy for future campaigns

Qualitative questions to determine pricing strategies

Companies need to make sure pricing stays relevant and competitive. Use these questions to determine customer perceptions on pricing and develop pricing strategies to maximize profits and reduce churn.

  • How do you feel about our pricing structure?
  • How does our pricing compare to other similar products?
  • What value do you feel you get from our pricing?
  • How could we make our pricing more attractive?
  • What would be an ideal price for our product?
  • Which features of our product that you would like to see priced differently?
  • What discounts or deals would you like to see us offer?
  • How do you feel about the amount you have to pay for our product?

Qualitative research question example for determining pricing strategies

  • Question: What discounts or deals would you like to see us offer?
  • Insight sought: The promotions or campaigns that resonate with your target audience.
  • Challenges with traditional qualitative research methods: Consumers don’t always recall the types of ads or campaigns they respond to. Over time, their needs and habits change. Your sample size is limited to those you ask, leaving a huge pool of unknowns at play.
  • A better approach: While qualitative insights are good to know, you get the most accurate picture of the highest-performing promotions and campaigns by looking at data collected directly from the web. These analytics are real-world, real-time, and based on the collective actions of many, instead of the limited survey group you approach. By getting a complete picture across an entire market, your decisions are better informed and more aligned with current market trends and behaviors.
  • ✅ How it’s done: Similarweb’s Popular Pages feature shows the content, products, campaigns, and pages with the highest growth for any website. So, if you’re trying to unpack the successes of others in your space and find out what content resonates with a target audience, there’s a far quicker way to get answers to these questions with Similarweb.

Here, I’m using Capital One as an example site. I can see trending pages on their site showing the largest increase in page views. Other filters include campaign, best-performing, and new, each of which shows you page URLs, share of traffic, and growth as a percentage. This page is particularly useful for staying on top of trending topics, campaigns, and new content being pushed out in a market by key competitors.

Qualitative research questions for product development teams

It’s vital to stay in touch with changing consumer needs. These questions can also be used for new product or service development, but this time, it’s from the perspective of a product manager or development team. 

  • What are customers’ primary needs and wants for this product?
  • What do customers think of our current product offerings?
  • What is the most important feature or benefit of our product?
  • How can we improve our product to meet customers’ needs better?
  • What do customers like or dislike about our competitors’ products?
  • What do customers look for when deciding between our product and a competitor’s?
  • How have customer needs and wants for this product changed over time?
  • What motivates customers to purchase this product?
  • What is the most important thing customers want from this product?
  • What features or benefits are most important when selecting a product?
  • What do customers perceive to be our product’s pros and cons?
  • What would make customers switch from a competitor’s product to ours?
  • How do customers perceive our product in comparison to similar products?
  • What do customers think of our pricing and value proposition?
  • What do customers think of our product’s design, usability, and aesthetics?

Qualitative question examples to understand customer segments

Market segmentation seeks to create groups of consumers with shared characteristics. Use these questions to learn more about different customer segments and how to target them with tailored messaging.

  • What motivates customers to make a purchase?
  • How do customers perceive our brand in comparison to our competitors?
  • How do customers feel about our product quality?
  • How do customers define quality in our products?
  • What factors influence customers’ purchasing decisions?
  • What are the most important aspects of customer service?
  • What do customers think of our customer service?
  • What do customers think of our pricing?
  • How do customers rate our product offerings?
  • How do customers prefer to make purchases (online, in-store, etc.)?

Qualitative research question example for understanding customer segments

  • Question: Which social media channels are you most active on?
  • Insight sought: The social media channels most likely to succeed with a target audience, to inform your social media strategy.
  • Challenges with traditional qualitative research methods: Responses are limited to the people you ask, giving you a small sample size. Questions like this are also prone to bias and may not reflect real-world actions.
  • A better approach: Get a complete picture of social media preferences for an entire market or specific audience belonging to rival firms. Insights are available in real-time, and are based on the actions of many, not a select group of participants. Data is readily available, easy to understand, and expandable at a moment’s notice.
  • ✅ How it’s done: Using Similarweb’s website analysis feature, you can get a clear breakdown of social media stats for your audience using the marketing channels element. It shows the percentage of visits from each channel to your site, respective growth, and specific referral pages by each platform. All data is expandable, meaning you can select any platform, period, and region to drill down and get more accurate intel, instantly.

Qualitative question example social media

This example shows me Bank of America’s social media distribution, with YouTube, LinkedIn, and Facebook taking the top three spots and accounting for almost 80% of traffic driven from social media.

When doing any type of market research, it’s important to benchmark performance against industry averages and perform a social media competitive analysis to verify rival performance across the same channels.

Qualitative questions to inform competitive analysis

Organizations must assess market sentiment toward other players in order to compete with and beat rival firms. Whether you want to increase market share, challenge industry leaders, or reduce churn, understanding how people view you vs. the competition is key.

  • What is the overall perception of our competitors’ product offerings in the market?
  • What attributes do our competitors prioritize in their customer experience?
  • What strategies do our competitors use to differentiate their products from ours?
  • How do our competitors position their products in relation to ours?
  • How do our competitors’ pricing models compare to ours?
  • What do consumers think of our competitors’ product quality?
  • What do consumers think of our competitors’ customer service?
  • What are the key drivers of purchase decisions in our market?
  • What is the impact of our competitors’ marketing campaigns on our market share?
  • How do our competitors leverage social media to promote their products?

Qualitative research question example for competitive analysis

  • Question: What other companies do you shop with for x?
  • Insight sought: Who are your competitors? Which of your rivals’ sites do your customers visit? How loyal are consumers in your market?
  • Challenges with traditional qualitative research methods: Sample size is limited, and customers may be unwilling to reveal which competitors they shop with or how often they shop around. Where finances are involved, people can act with reluctance or bias and be unwilling to reveal other suppliers they do business with.
  • A better approach: Get a complete picture of your audience’s loyalty, see who else they shop with, and how many other sites they visit in your competitive group. Find out the size of the untapped opportunity and which players are doing a better job at attracting unique visitors – without having to ask people to reveal their preferences.
  • ✅ How it’s done: Similarweb website analysis shows you the competitive sites your audience visits, giving you access to data that shows cross-visitation habits, audience loyalty, and untapped potential in a matter of minutes.

Qualitative research example for audience analysis

Using the audience interests element of Similarweb website analysis, you can view the cross-browsing behaviors of a website’s audience instantly. You can see a matrix that shows the percentage of visitors on a target site and any rival site they may have visited.

Qualitative research question example for competitive analysis

With the Similarweb audience overlap feature, view the cross-visitation habits of an audience across specific websites. In this example, I chose chase.com and its four closest competitors to review. For each intersection, you see the number of unique visitors and the overall proportion of each site’s audience it represents. It also shows the volume of unreached potential visitors.

Qualitative question example for audience loyalty

Here, you can see a direct comparison of the audience loyalty represented in a bar graph. It shows a breakdown of each site’s audience based on how many other sites they have visited. Those sites with the highest loyalty show fewer additional sites visited.

From the perspective of chase.com, I can see that 47% of their visitors do not visit rival sites. 33% of their audience visited one other site in this group, 14% visited two, 4% visited three, and just 0.8% viewed all of the sites in this comparison.
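To make the loyalty arithmetic above concrete, here is a minimal sketch in Python. The visitor data are invented for illustration (they are not Similarweb figures); the point is simply how a breakdown like this can be computed once you know which rival sites each visitor of the target site also browsed.

```python
from collections import Counter

# Hypothetical data: for each visitor of the target site, the set of rival
# sites (from the comparison group) that the same visitor also browsed.
# These visitors are made up for illustration.
rival_sites_visited = [
    set(),                                    # target site only
    set(),
    set(),
    {"rival_a.com"},
    {"rival_a.com"},
    {"rival_b.com"},
    {"rival_a.com", "rival_b.com"},
    {"rival_a.com", "rival_b.com", "rival_c.com", "rival_d.com"},
]

total = len(rival_sites_visited)
by_count = Counter(len(sites) for sites in rival_sites_visited)

# Loyal share first (no rival sites visited), then the breakdown by how
# many other sites in the comparison group each visitor browsed.
for n_sites in sorted(by_count):
    share = by_count[n_sites] / total
    label = "no rival sites" if n_sites == 0 else f"{n_sites} other site(s)"
    print(f"Visited {label}: {share:.0%}")
```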

How to answer qualitative research questions with Similarweb

Similarweb Research Intelligence drastically improves market research efficiency and time to insight. Both of these can impact the bottom line and the pace at which organizations can adapt and flex when markets shift, and rivals change tactics.

Outdated practices, while still useful, take time. And with a quicker, more efficient way to garner similar insights, opting for the fast lane puts you at a competitive advantage.

With a bird’s-eye view of the actions and behaviors of companies and consumers across a market, you can answer certain research questions without the need to plan, do, and review extensive qualitative market research.

Wrapping up

Qualitative research methods have been around for centuries. From designing the questions to finding the best distribution channels, collecting and analyzing findings takes time before you get the insights you need. Similarweb Digital Research Intelligence drastically improves efficiency and time to insight, both of which impact the bottom line and the pace at which organizations can adapt and flex when markets shift.

Similarweb’s suite of digital intelligence solutions offers unbiased, accurate, honest insights you can trust for analyzing any industry, market, or audience.

  • Methodologies used for data collection are robust, transparent, and trustworthy.
  • Clear presentation of data via an easy-to-use, intuitive platform.
  • It updates dynamically, giving you the freshest data about an industry or market.
  • Data is available via an API, so you can plug into platforms like Tableau or Power BI to streamline your analyses (a minimal illustrative sketch follows this list).
  • Filter and refine results according to your needs.
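As a loose illustration of the API bullet above: the endpoint, parameters, and response fields below are placeholders rather than Similarweb’s actual API, but the general workflow of pulling JSON into a DataFrame and exporting a file that Tableau or Power BI can read looks roughly like this.

```python
import pandas as pd
import requests

# Hypothetical endpoint, key, and parameters: substitute the values from
# your provider's API documentation.
API_URL = "https://api.example.com/v1/traffic-overview"
API_KEY = "your-api-key"

response = requests.get(
    API_URL,
    params={"domain": "example.com", "granularity": "monthly"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# Assume the response body contains a list of records such as
# {"month": "2024-01", "visits": 123456}.
records = response.json()["records"]
df = pd.DataFrame(records)

# Export a flat file that Tableau or Power BI can ingest directly.
df.to_csv("traffic_overview.csv", index=False)
print(df.head())
```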

Are quantitative or qualitative research questions best?

Both have their place and purpose in market research. Qualitative research questions seek to provide details, whereas quantitative market research gives you numerical statistics that are easier and quicker to analyze. You get more flexibility with qualitative questions, and they’re non-directional.

What are the advantages of qualitative research?

Qualitative research is advantageous because it allows researchers to better understand their subject matter by exploring people’s attitudes, behaviors, and motivations in a particular context. It also allows researchers to uncover new insights that may not have been discovered with quantitative research methods.

What are some of the challenges of qualitative research?

Qualitative research can be time-consuming and costly, typically involving in-depth interviews and focus groups. Additionally, there are challenges associated with the reliability and validity of the collected data, as there is no universal standard for interpreting the results.


Analyzing Open-ended Questions for Qualitative Research

Mokhtaria Lahmer

Handling the results of open-ended questions can be a frustrating task for novice researchers analyzing qualitative data, as it requires deliberate effort. As teachers in the EFL department at Ibn Khaldoun University, we receive complaints from our students about how challenging the analysis of qualitative questions can be as part of their Master’s final research. They also complain of not receiving proper guidance from their respective supervisors. The purpose of this case study is to investigate the reasons behind such complaints. It also attempts to determine the extent to which the role of supervisors can influence the qualitative research process. The findings from an online survey indicated that the students’ confusion in selecting a proper method for the analysis of qualitative data stemmed from the fact that they were not taught how the process is carried out after the data collection phase. It was also revealed that the teacher-centered way the supervision process was carried out prevented their supervisors from prioritizing students’ needs at this stage of their postgraduate course. Explaining the coding process and the steps for relating the raw ideas in their findings to grounded theories could answer their queries and make their work more fruitful. Consequently, the end of this paper offers some practical guidelines that could help remedy the problem for novice researchers.


Open-Ended Questions: Examples & Advantages


When designing surveys, we often need to decide whether to use open-ended questions or closed-ended questions to get specific information. Yet we need to be aware that open-ended and closed-ended questions have their own strengths and weaknesses and perform in different ways.

Open-ended questions are those a sender asks to encourage one or several recipients to provide information in response. For example: Where is my wallet?


Open-Ended Questions: Definition

Open-ended questions are free-form survey questions that allow and encourage respondents to answer in an open-text format based on their complete knowledge, feelings, and understanding. The detailed response to this type of question is not limited to a set of options.

Unlike a closed-ended question that leaves survey responses limited and narrow to the given options, an open-ended question allows you to probe deep into the respondent’s detailed answers, gaining valuable information about the subject or project. The responses to these qualitative research questions can be used to attain detailed and descriptive information on a subject.


They are an integral part of Qualitative Market Research. This research process depends heavily on open and subjective questions and answers on a given topic of discussion or conversation, with room for further probing by the researcher based on the answer given by the respondent. In a typical scenario, open-ended questions are used to gather qualitative data from respondents.


Learn more: Qualitative Research – Definition, Types, Methods and Examples

Examples of Open-Ended Questions

Respondents like open-ended questions as they get 100% control over what they want to respond to, and they don’t feel restricted by a limited number of options. The beauty of the format is that there can never be a one-word answer; responses come in the form of lists, sentences, or something longer, such as a paragraph.

So, to understand this more, here are some open-ended question examples:


  • Interview method : How do you plan to use your existing skills to improve organizational growth, if hired by the company?
  • Customer-facing: Please describe a scenario where our online marketplace helps a person make day-to-day purchases in daily life.
  • Technical: Can you please explain the back-end JavaScript code template used for this webpage or blog post?
  • Demographic: What is your age? (asked without survey options)
  • Personal / Psychographic: How do you typically deal with stress and anxiety in your life?

In a study conducted by Pew Research, respondents were asked, “What mattered most to you while deciding how you voted for president?” One group was asked this question in a closed-ended format, while the other was asked in an open-ended one. The results are displayed below:


In the closed-ended format, 58% of respondents chose “The economy.” In the open-ended format, only 35% wrote an answer indicating the economy. Note that only 8% of respondents selected “Other” in the closed-ended format, while in the open-ended format, 43% of respondents wrote a response that would have been categorized as “Other.”
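To show what sits behind figures like these, here is a minimal sketch with invented answers and a deliberately crude keyword-based coding scheme (real qualitative coding is done by analysts reading each response, not by keyword matching). It illustrates how open-ended responses are rolled up into categories before they can be compared with closed-ended results.

```python
from collections import Counter

# A handful of hypothetical open-ended answers to
# "What mattered most to you while deciding how you voted?"
open_answers = [
    "jobs and the economy",
    "the economy, mostly gas prices",
    "healthcare costs",
    "foreign policy",
    "honesty of the candidates",
]

def code_answer(text: str) -> str:
    """Toy coding scheme: map an open answer to a reporting category."""
    lowered = text.lower()
    if "economy" in lowered or "jobs" in lowered:
        return "The economy"
    if "health" in lowered:
        return "Health care"
    return "Other"

coded = Counter(code_answer(answer) for answer in open_answers)
total = sum(coded.values())
for category, count in coded.most_common():
    print(f"{category}: {count / total:.0%}")
```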

Open-Ended Questions vs. Closed-Ended Questions

Open-ended questions motivate respondents to put their feedback into words without restricting their thoughts. They aren’t as rigid and directive as closed-ended questions.

By using these questions, the researcher understands the respondents’ true feelings. They give you information about the different thought processes across your clientele, troubleshooting suggestions, and a peek into respondents’ inhibitions too.

  • The open-ended and closed-ended questions are different tasks for respondents. In the open-ended task, respondents write down what is readily available in their minds. In the closed-ended task, respondents focus their “attention on specific responses chosen by the investigator” (Converse and Presser, 1986).
  • Asking the same question in these two different formats will almost always produce different results. Many investigators have demonstrated this over several decades.
  • Few respondents are going to select the “Other” category and enter responses that are different from the answer choices that are listed.

So what does this mean for us? If you can, do qualitative research first and ensure your closed-ended questions represent the items in people’s heads. We need the list of items to be complete, since few respondents will select the “Other” category. It may also be necessary to list items not readily available to respondents if they are important to you.



When presenting results, I have found it helpful to explain the fundamental differences between open-ended and closed-ended questions in a sentence or two. It helps stakeholders understand that these are not necessarily precise measurements, but measurements that require some interpretation relative to other questions in the survey and to additional information from other steps in the qualitative research. Hence the need for an analyst like you or me!

Why Use Open-Ended Questions?

Unrestricted Opinions:

Customers need a platform to voice their opinions, happy or unhappy, without limits on their answers. As answer options aren’t provided, respondents have the liberty to include details about their lives, feelings, attitudes, and views that they usually wouldn’t get to submit in single-word answers.

Creative Expression:

These questions are more appreciative of respondents than closed-ended questions, as users aren’t expected to just “fill” them out for the sake of it.

Spellbinding Vision and Creativity:

Respondents may stun you with the vision and creativity they show with their more detailed answers. Links to their blogs or a verse or two of their poetry will leave you spellbound.

Embracing Freedom of Response:

If there are only closed-ended questions in a microsurvey, users tend to disengage and fill it out without giving it much thought. With the kind of freedom that open-ended questions offer, users can respond the way they’d like to, whether in the number of words, the level of detail, or the tone of the message.


Driving Marketing and Innovation:

These responses may include marketing tips for improving the organization’s branding or creative ideas that can lead to monetary gains in the future.

Tackling Complexity:

Knotty situations need more than mere yes/no feedback. Single-select or multiple-choice questions cannot do justice to the detail or scrutiny required for critical and complex situations.

Exploring Feedback and Troubles:

These questions work best in situations where the respondents are expected to explain their feedback or describe the troubles they’re facing with the products.

Unveiling Customer Insights:

You can learn from your respondents. Open-ended questions give respondents the freedom to be vocal about opinions that can be insightful for a company.

Revealing Thought Patterns:

Respondents’ logic, thoughts, language, and choices of reference can be learned from these questions, revealing a lot about how they think.

Always think about your objective before designing a survey. Scrutinize the purpose and evaluate the positives and negatives of using open or closed answers for your research study. Then try it: send the survey to a selected database, analyze the results, and plan improvements for the next round of surveys.


How to Ask an Open-Ended Question?

Everything, easy or complicated, requires competence, and asking the right question is no exception. It requires the ability to understand and segment the target audience, determine the kinds of questions that will work well with that audience, and evaluate how effective they are.

Here are four ways to create effective open-ended questions:

Understand the difference between open-ended and closed-ended questions:

Before you start putting questions to paper, you need absolute clarity on open-ended vs. closed-ended questions. Your objective in sending out an online survey should be clear, and based on that, you can evaluate the kind of questions you want to use. Open-ended questions are usually used where the feelings and feedback of the customer are highly valued. To receive fully transparent feedback on these questions, make sure that you don’t lead the respondents with your questions and give them complete liberty to fill in whatever they want.

Create a list of open-ended questions before curating the survey:

Once you have clarity on what open-ended questions are and how to implement them, draw up a list of survey questions you’d want to use. You can include a fair share of open-ended questions in your survey, and that share can fluctuate depending on the responses you receive.


Examples of open-ended questions like these are extremely popular and give you more value-added insights:

  • Why do you think competitive market research is important before launching a new business?
  • How do you think you’ll overcome these obstacles in our project?
  • Tell us about your experience with our onboarding process.
  • What are your professional priorities at the moment?
  • What domain of work motivates you?
You can make a list of similar questions before you start executing the survey.

Reconstruct any question into an open-ended question:

Observation is the key here. Observe what kinds of questions you usually ask your customers, prospects, and every other person you come across. Analyze whether your questions are closed-ended or open-ended. Then try to convert closed questions into open-ended ones wherever you think the latter would fetch better results and more valuable insights.

Follow up a closed-ended question with an open-ended question:

This trick works wonders. It’s not always possible to convert a closed question into an open one, but you can follow it up with an open-ended question once the closed one is answered.

For example, if you have a closed question like “Do you think the product was efficient?” with the options “Yes” and “No”, you can follow it up with an open question like “How do you think we can make the product better in the future?”
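As a rough sketch of how that pairing might be represented in a survey definition (the field names are illustrative, not any particular platform’s schema), the open-ended follow-up can simply be attached to the closed question so it is asked immediately afterwards; the same structure extends to conditional follow-ups if you only want the open question shown for certain answers.

```python
# Hypothetical survey definition pairing a closed question with an
# open-ended follow-up; field names are illustrative only.
survey = [
    {
        "id": "q1",
        "type": "closed",
        "text": "Do you think the product was efficient?",
        "options": ["Yes", "No"],
    },
    {
        "id": "q1_followup",
        "type": "open",
        "text": "How do you think we can make the product better in the future?",
        "follows": "q1",  # always asked after q1; could be made conditional
    },
]

def next_question(survey, answered_ids):
    """Return the first unanswered question whose prerequisite is met."""
    for question in survey:
        if question["id"] in answered_ids:
            continue
        if question.get("follows") and question["follows"] not in answered_ids:
            continue
        return question
    return None

print(next_question(survey, answered_ids=set())["text"])    # the closed question
print(next_question(survey, answered_ids={"q1"})["text"])   # then the open follow-up
```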

Regarding surveys, the advantages of open questions surpass those of closed ones.

How to Add Open-Ended Questions?

1. Go to: Login » Surveys » Edit » Workspace

2. Click on the Add Question button to add a question.

3. Select Basic, then go to the Text section and select Comment Box.

4. Enter the question text.


5. Select the data type: Single Row Text, Multiple Rows Text, Email address, or Numeric Data.


6. Select the Text Box Location (below or next to question text). Enabling “next to question text” will put the text box to the right of the question.

How to View the Data Collected by an Open-Ended Question?

1. Click on Login » Surveys » Analytics » Text Analytics » Text Report


Please note that analysis for open-ended text questions is not included in the Real-Time Summary or Analysis Report. To view the analysis of open-ended questions, you can see the Word Cloud report.
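For context on what a text report or word cloud summarizes, here is a generic sketch (plain Python, not QuestionPro’s implementation) that counts the most frequent words across open-text answers after dropping common stop words; a word cloud simply sizes each term by a count like this.

```python
import re
from collections import Counter

# Hypothetical open-text responses exported from a survey.
responses = [
    "The onboarding process was smooth but the pricing page is confusing",
    "Pricing is too high compared to similar tools",
    "Loved the onboarding, support team was quick to respond",
]

STOP_WORDS = {"the", "was", "but", "is", "to", "too", "and", "a", "of"}

words = []
for text in responses:
    words.extend(
        word for word in re.findall(r"[a-z']+", text.lower())
        if word not in STOP_WORDS
    )

# The most frequent terms are what a word cloud renders, scaled by count.
for word, count in Counter(words).most_common(5):
    print(word, count)
```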


Can You Limit The Number of Characters in a Text Question?

You can set a limit on the number of characters that respondents can enter in the text box.

How to Mark The Question as Mandatory?

To make the question mandatory, toggle the validation on and select ‘Force Response’; it is off by default. When ‘Force Response’ is not enabled, respondents can continue with the survey without answering, and if they go through all the pages of the online questionnaire without selecting answers, the response is still considered complete. Enable this option to make a question required, so that respondents can continue with the survey only after responding to it.



Closed-ended questions, like open questions, are used in both spoken and written language and in formal and informal situations. It is common to find questions of this type in school or academic evaluations, interrogations, and job interviews, among many other options.


Whether you need a simple survey tool or a collaborative research solution, with our Academic licenses for universities and educational institutions, you get access to all the best features used by our Enterprise research clients.

  • Advanced logic and workflows for smarter surveys
  • Over 5000 universities & colleges and over 1 million+ students use QuestionPro
  • Academic license supports multi-admin role environment

It is essential to note that crafting effective open-ended questions requires skill and careful consideration. Questions should be clear, concise, and relevant to the topic. They should avoid leading or biased language, allowing individuals to express their views without undue influence.

Overall, open-ended questions are a powerful way to gather information, foster communication, and gain deeper insights. Whether used in research, professional settings, or personal conversations, they enable individuals to explore ideas, share perspectives, exercise critical thinking, and engage in meaningful discussions. By embracing the openness and curiosity of open-ended questions, we can uncover new knowledge, challenge assumptions, and broaden our understanding of the world.



The Use of Closing Questions in Qualitative Research: Results of a Web-Based Survey

Timothy Joseph Sowicz, The University of North Carolina at Greensboro School of Nursing, Greensboro, NC

Justine S. Sefcik, University of Pennsylvania School of Nursing, Philadelphia, PA

Helen L. Teng, University of Pennsylvania School of Nursing, Philadelphia, PA

Elliane Irani, Case Western Reserve University Frances Payne Bolton School of Nursing, Cleveland, OH

Terri-Ann Kelly, Rutgers School of Nursing – Camden, Camden, NJ

Christine Bradway, University of Pennsylvania School of Nursing, Philadelphia, PA

Author Note

Justine S. Sefcik, Ph.D., RN , is Postdoctoral Fellow and Helen L. Teng, MSN, RN , is Doctoral Student and Christine Bradway, Ph.D., CRNP, FAAN, AGSF , is Associate Professor of Gerontological Nursing, University of Pennsylvania School of Nursing, Philadelphia, PA.

Elliane Irani, Ph.D., RN , is Postdoctoral Fellow, Case Western Reserve University Frances Payne Bolton School of Nursing, Cleveland, OH.

Terri-Ann Kelly, Ph.D., RN , is Assistant Professor, Rutgers School of Nursing – Camden, Camden, NJ.

Scarce and differing reasons for including closing questions in qualitative research exist, but how data generated from these questions are used remains uncertain.

The purpose of the study was to understand if and how researchers use closing questions in qualitative research; specifically, the research questions were: (a) “Why do qualitative researchers include or exclude closing questions during interviews?”; and (b) “How do qualitative researchers use data from closing questions?”

A qualitative descriptive design using a single, asynchronous, web-based, investigator-designed survey containing 14 items was used to collect data. Convenience and snowball sampling were used to recruit participants. Data were analyzed using descriptive statistics and qualitative content analysis. Codes were developed from the qualitative data. Subcategories were derived from similar codes, and these subcategories were further scrutinized and were used to create broad categories.

The number of respondents per question ranged from 76 to 99; most identified nursing and sociology as their academic disciplines, lived in the United States, and were involved in qualitative research for 1 to 10 years. Data, the interview, the interviewee, and the interviewer were broad categories to emerge as reasons for including closing questions. Only one respondent reported a reason for excluding closing questions. The uses of closing question data were described in four broad categories: analysis; data; the interview guide; and inquiry.

Researchers frequently included closing questions in qualitative studies. The reasons for including these questions and how the data are used vary, and support for closing questions is limited in previously published literature. One unique reason, adding “new breath” to the interview, emerged. Study findings can aid qualitative researchers in deciding whether to include closing questions.

Qualitative researchers use various approaches to conclude interviews. The absence of standard procedures or rules contributes to methodological variability ( Brinkmann & Kvale, 2014 ). Some researchers terminate an interview by thanking participants and acknowledging their contributions ( Baumbusch, 2010 ; Whiting, 2008 ); others add closing questions. Limited and divergent accounts exist regarding researchers’ choice of executing closing questions, question format, and rationale for including these.

A closing question may allow research participants time to reflect, share additional information, and decompress; however, how this information informs the research is unclear. King and Horrocks (2010) suggest using closing questions to give more control to participants and ask if they want to share anything else or inquire about the research. Rationale for this approach is to minimize tension or anxiety from discussing personal and emotional experiences, and/or concerns about the research process. Similarly, Brinkmann and Kvale (2014) add a final debriefing stage to allow participants to process emotions and share further information. During debriefing, the interviewer may summarize the main points and ask participants to comment and provide feedback. The authors provide no guidance on how the additional information and feedback are used. In contrast, other authors use closing questions to signal the conclusion of the interview ( Castillo-Montoya, 2016 ; Wengraf, 2001 ). Although not considered primary research questions, reflective closing questions may add valuable information or raise additional issues for the researcher to consider ( Castillo-Montoya, 2016 ).

Few authors describe how they use or analyze data generated from closing questions. Krueger and Casey (2015) include three types of closing questions with focus groups, which they believe are critical for data analysis and directing future interviews. Participants are asked an all-things-considered question (e.g., “Of all the needs we discussed, which one is most important to you?”) allowing participants time to reflect, comment on critical areas of concern, and clarify their positions. Responses inform interpretation of conflicting comments and assign weight of importance. A summary question (e.g., “Is this an adequate summary?”) and then an insurance question (e.g., “Have we missed anything?”) are used for considering topic importance, and can inform modifications of interview guides. Lincoln and Guba (1985) describe terminating an interview by completing a member check where the interviewer summarizes what they believe they just learned, and the interviewee is given time to react and comment on the validity of the constructs made. All information learned, including additional thoughts sparked by the summary, is available for possible triangulation and further member checking as the study proceeds ( Lincoln & Guba, 1985 ).

Due to limited and variable literature regarding closing questions, the purpose of our study was to understand if and how researchers use closing questions during qualitative interviews. Specifically, our questions were: (a) “Why do qualitative researchers include or exclude closing questions during interviews?”; and (b) “How do qualitative researchers use data from closing questions?”

Study Design

A qualitative descriptive design was utilized for this study. No a priori conceptual framework was used in this study; however, the study was informed by the axioms of the naturalistic paradigm as described by Lincoln and Guba (1985) . The [institution 1] and [institution 2; blinded for review] institutional review boards deemed this study exempt.

Participants

A convenience sample of researchers known to the authors as having engaged in qualitative research were recruited initially; snowball sampling was used to recruit additional participants. First, the corresponding author emailed 15 colleagues to describe the study and included a link to complete the web-based survey. Additionally, coauthors sent emails to 66 others, and these 81 (15 + 66) initial contacts were asked to forward the survey to others engaged in qualitative research. Given the recruitment strategy, it is unknown how many emails were sent inviting people to complete the survey; therefore, we are unable to calculate a response rate.

Measurements

A single, asynchronous, web-based, investigator-designed survey was used to collect data. Before launching the survey, coauthors pilot tested it to improve the questions’ structure and sequence. The final survey included 14 questions: Nine multiple-choice and five free text responses. Respondents were asked about themselves, their experiences with conducting qualitative research and, specifically, if and how they used closing questions during qualitative interviews. The survey was administered via SurveyMonkey® and was available from December 11, 2017 until January 8, 2018.

Data Analysis

Descriptive statistics were used to analyze data from the multiple-choice questions; however, the focus of this article is on the responses to two of the free text questions: (a) “Please tell us why you include or do not include closing questions in your interviews.”; and (b) “Please tell us how you use the data generated from closing questions.” Individual responses to these questions were analyzed using qualitative content analysis ( Graneheim & Lundman, 2004 ; Sandelowski, 1995 ). Individual codes that were similar in content and meaning were subcategorized. Subcategories were further scrutinized and placed into broader categories ( Morse, 2008 ). Table 1 provides an example of how similar codes were collapsed into subcategories, which were then collapsed into a category. Separate codebooks were created for each question given that the purpose of each differed (i.e., reasons for excluding/including closing questions and how data from them are used). Data were analyzed by the corresponding author and final categories were reviewed and approved by all authors. Techniques to establish trustworthiness included the development of a coding system, peer debriefing, and maintaining an audit trail of decisions related to analysis ( Abboud et al., 2017 ; Lincoln & Guba, 1985 ; Morse, 2015 ).
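As a rough illustration of the collapsing described above, here is a small sketch in Python. The codes and subcategory labels are invented for the example (they are not the study’s actual codebook), although the broad category names mirror those the authors report; the point is only the mechanical rollup from codes to subcategories to categories.

```python
# Hypothetical codebook: raw code -> subcategory, and subcategory -> category.
# Labels are invented for illustration; they are not the study's codebook.
code_to_subcategory = {
    "thanked participant": "courtesy",
    "signaled end of interview": "closure",
    "asked if anything was missed": "completeness",
    "asked to clarify an earlier answer": "completeness",
}

subcategory_to_category = {
    "courtesy": "the interviewee",
    "closure": "the interview",
    "completeness": "data",
}

def categorize(code: str) -> tuple[str, str]:
    """Collapse a raw code into its subcategory and broad category."""
    subcategory = code_to_subcategory[code]
    return subcategory, subcategory_to_category[subcategory]

for code in code_to_subcategory:
    subcategory, category = categorize(code)
    print(f"{code!r} -> subcategory {subcategory!r} -> category {category!r}")
```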

Example of Category Development

Respondents per question ranged from 76 to 99. Academic discipline/profession was reported by 99 respondents. Most identified with nursing (49%) and sociology (17%); 17% were not explicit (e.g., “social science” and “researcher”). Current country of residence varied; the United States was the most frequent answer (83%). Other reported locations were Canada (7%), the United Kingdom (2%), and Brazil, India, Ireland, Nepal, Nigeria, Peru, Switzerland, and Turkey (1% each). Years involved in qualitative research was reported by 96 respondents; 1–5 years was the most frequent (35%), followed by 6–10 years (27%), more than 15 years (25%), less than one year (7%), and 11–15 years (5%).

The frequency of including closing questions was reported by 95 respondents; 81% reported doing so “always”, 15% “sometimes”, and 4% “never”. The inclusion of closing questions by interview type varied among the 91 respondents to this question; 91% for semistructured interviews; 34% for structured interviews; and 29% for unstructured interviews (this question allowed respondents to choose all answers that apply).

Reasons for Excluding or Including Closing Questions

Free text answers from 85 participants regarding reasons for including closing questions during interviews were analyzed and four broad categories emerged: data; the interview; the interviewee; and the interviewer. Table 2 includes representative quotes from each of these categories. Only one respondent offered a reason for excluding closing questions: “I feel these kinds of closing questions are often unhelpful, as most will just respond ‘no’ or go on a tangent that is not necessarily related to your research questions.”

Reasons for Including Closing Questions

Data

Respondents described using closing questions as a means of collecting data, including adding to existing data, uncovering new areas or topics to explore, and modifying interview guides.

The Interview

Most responses in this category indicated that closing questions are included as a means of signaling the close of the interview. Additionally, respondents offered that these questions are a way of indicating the existence of the relationship between the interviewee and the interviewer. Notably, one respondent indicated, “i [ sic ] always include closing questions because they can give an [ sic ] new breath to the interview”, which contrasts with most reasons for including them.

The Interviewee

Many respondents reported asking closing questions to acknowledge the interviewee as a person. These acknowledgments took many forms, including conferring respect and honoring participants, conveying that participants were heard, and that their experiences were valued. Closing questions were also noted to give interviewees “a feeling of agency” and voice as they expand on interview topics, particularly those most important to them.

Responses in this category moved beyond simply garnering interviewees’ replies to questions; rather, they provided opportunities for researchers to hear additional information free of interviewers’ assumptions and removed from their own research agendas. Such questions were also described as benefiting interviewees by providing an outlet for reconsidering or rethinking topics discussed during interviews, summarizing and finalizing thoughts, and gaining closure on the experience of being interviewed or of speaking about their experiences.

The Interviewer

This category concerns use of closing questions for the purpose of serving the interviewer. Closing questions provide them opportunities to obtain feedback for instrument development, clarify interviewees’ responses, and terminate the relationship. In addition, closing questions are used to ensure data needed for the research endeavor have been obtained, and to attain information to support an argument or stance related to the area being studied.

The Uses of Closing Question Data

In addition to understanding why closing questions are used, we were also interested in learning how respondents use the data. Free-text answers from 81 respondents were gathered, and four broad categories emerged from these data: analysis, data, the interview guide, and inquiry. Table 3 includes representative quotes from each of these categories.

Uses of Closing Question Data

Analysis

Some respondents described using closing question data similarly to how they use all other study data, while others reported separate analysis of these questions. Respondents also described using data from closing questions in all phases of the analytic process, including generating codes and themes and for higher levels of analysis, such as data interpretation. Data also served as a source of reflection for researchers relative to what they are learning from the data and how they come to understand them.

Data

Data are used as new information or to supplement and clarify other existing data, or, more generally, to add to the overall body of data. Some respondents noted they rarely or never use these data, offering that closing questions generate little or no data, and that use is dependent on some other condition (e.g., if relevant to the purpose of the interview).

The Interview Guide

Some respondents reported using data to inform and modify interview guides, such as via addition of questions and prompts to be included in subsequent versions.

Inquiry

Finally, data are used to discover new areas for future inquiry. Examples include generating new research questions or components of future studies (e.g., informing the development of a survey).

To our knowledge this is the first study to describe why researchers include closing questions and how data from them are used. We found most respondents use closing questions; however, the rationale for inclusion and how the data are used varies. As with Castillo-Montoya’s (2016) suggestion, some respondents had a functional intention for the closing question: Signaling the end of the interview and validating the relationship between the interviewer and interviewee. Our study unearthed a previously unreported reason for including a closing question: Adding “an [ sic ] new breath” to the interview. This runs counter to ending an interview ( Wengraf, 2001 ) which is inherent in the name “closing” question.

Respondents shared they intended for the closing question to augment or expand what had been shared by the interviewee, striving for completeness in the data in case something pertinent was not captured previously in the interview. Echoing King and Horrocks (2010) , some respondents felt closing questions provide a space where the interviewee may have a stronger sense of agency and autonomy. The open-endedness of the questions does not impose the theoretical or philosophical lens that may have informed the interview guide, giving interviewees more control. Interestingly no respondents used closing questions as an opportunity for the interviewee to emotionally debrief from the interview as suggested by Brinkmann and Kvale (2014) .

Respondents included closing questions to generate data to inform future topics of inquiry and modify interview guides, and analyzed data produced by closing questions jointly and separately from the rest of the data. Closing question data augments existing data when analyzed jointly. Reflecting on Krueger and Casey’s (2015) and Lincoln and Guba’s (1985) use of the closing question, data analyzed separately from the rest of the data were treated as new information, supplemental, or used for triangulation. This strategy further strengthens the trustworthiness of the data and rigor of the study ( Lincoln & Guba, 1985 ). Limitations include convenience and snowball sampling and a design that did not allow us to ask follow-up or probing questions to further explore answers to free text response questions. Additionally, our survey was created by and for those whose primary language is English.

We found that researchers include closing questions in qualitative research, which is consistent with the limited existing literature and reinforces that valuable data can be collected via closing questions. We recommend building on our results to further the discussion regarding alternative views, usefulness of established criteria researchers use to guide closing question development and use, and as a process of sparking continued conversations about this commonly used component of data collection.

Acknowledgement

Research reported in this publication was supported by The Center for Health Equity Research and Promotion, VA Pittsburgh Healthcare System (Dr. Sowicz) and the National Institute of Nursing Research of the National Institutes of Health under Award Numbers T32NR015433 (Dr. Irani) and T32NR009356 (Dr. Sefcik). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Veterans Health Administration. The authors thank the peer reviewers for their thoughtful critique of previous versions of our article.

The authors have no conflicts of interest to report.

Ethical Conduct of Research : The VA Pittsburgh Healthcare System and The University of North Carolina at Greensboro institutional review boards deemed this study exempt.

Clinical Trial Registration : Not applicable.

Contributor Information

Timothy Joseph Sowicz, The University of North Carolina at Greensboro School of Nursing, Greensboro, NC.

Justine S. Sefcik, University of Pennsylvania School of Nursing, Philadelphia, PA.

Helen L. Teng, University of Pennsylvania School of Nursing, Philadelphia, PA.

Elliane Irani, Case Western Reserve University Frances Payne Bolton School of Nursing, Cleveland, OH.

Terri-Ann Kelly, Rutgers School of Nursing – Camden, Camden, NJ.

Christine Bradway, University of Pennsylvania School of Nursing, Philadelphia, PA.

References

  • Abboud S, Kim SK, Jacoby S, Mooney-Doyle K, Waite T, Froh E, . . . Kagan S (2017). Co-creation of a pedagogical space to support qualitative inquiry: An advanced qualitative collective. Nurse Education Today, 50, 8–11. doi:10.1016/j.nedt.2016.12.001
  • Baumbusch J (2010). Semi-structured interviewing in practice-close research. Journal for Specialists in Pediatric Nursing, 15, 255–258. doi:10.1111/j.1744-6155.2010.00243.x
  • Brinkmann S, & Kvale S (2014). Interviews: Learning the craft of qualitative research interviewing (3rd ed.). Thousand Oaks, CA: Sage Publications.
  • Castillo-Montoya M (2016). Preparing for interview research: The interview protocol refinement framework. The Qualitative Report, 21, 811–831.
  • Graneheim UH, & Lundman B (2004). Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today, 24, 105–112. doi:10.1016/j.nedt.2003.10.001
  • King N, & Horrocks C (2010). Interviews in qualitative research. Thousand Oaks, CA: Sage Publications.
  • Krueger RA, & Casey MA (2015). Focus groups: A practical guide for applied research (5th ed.). Thousand Oaks, CA: Sage Publications.
  • Lincoln YS, & Guba EG (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.
  • Morse JM (2008). Confusing categories and themes. Qualitative Health Research, 18, 727–728. doi:10.1177/1049732308314930
  • Morse JM (2015). Critical analysis of strategies for determining rigor in qualitative inquiry. Qualitative Health Research, 25, 1212–1222. doi:10.1177/1049732315588501
  • Sandelowski M (1995). Sample size in qualitative research. Research in Nursing & Health, 18, 179–183. doi:10.1002/nur.4770180211
  • Wengraf T (2001). Qualitative research interviewing. London, UK: Sage Publications.
  • Whiting LS (2008). Semi-structured interviews: Guidance for novice researchers. Nursing Standard, 22, 35–41.



  12. OPEN-ENDED QUESTIONS IN QUALITATIVE RESEARCH:

    According to the National Center for Education Statistics, in 2015, "5,954,121 students [were] enrolled in any distance education courses at degree-granting postsecondary institutions," and that number has continued to rise (National Center for Education Statistics, 2018, para. 1). Rapid growth within online programs requires rapid revision and ...

  13. Writing Survey Questions

    We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, ... One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics ...

  14. PDF Interviewing in Qualitative Research

    There several types of questions that can be asked in a qualitative in-depth interview: Introducing questions: Broad, open-ended questions to start a conversation. They should be general and non-threatening, to start the conversation on the friendly note; e.g., "I want

  15. LibGuides: Qualitative study design: Surveys & questionnaires

    Qualitative surveys use open-ended questions to produce long-form written/typed answers. Questions will aim to reveal opinions, experiences, narratives or accounts. Often a useful precursor to interviews or focus groups as they help identify initial themes or issues to then explore further in the research.

  16. Qualitative Study

    Qualitative research at its core, ask open-ended questions whose answers are not easily put into numbers such as 'how' and 'why'. Due to the open-ended nature of the research questions at hand, qualitative research design is often not linear in the same way quantitative design is. [2]

  17. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  18. 6 Main Qualitative Questions Examples

    Qualitative research uncovers the details of human behavior, beliefs, and feelings. It gives us insights that numbers can't always tell. These research questions help us understand the "how" and "why" of things. In this article, we'll look at six examples of good qualitative questions. We aim to highlight how picking the right ...

  19. Open-Ended vs. Closed Questions in User Research

    Open-Ended vs. Closed Questions. There are two types of questions we can use in research studies: open-ended and closed. Open-ended questions allow participants to give a free-form text answer. Closed questions (or closed-ended questions) restrict participants to one of a limited set of possible answers.. Open-ended questions encourage exploration of a topic; a participant can choose what to ...

  20. Importance Of Open-Ended Questions In Qualitative Research

    It allows researchers to explore the richness and depth of individuals' thoughts, feelings, decision making process and motivations. One of the critical tools in qualitative research is the use of open-ended questions. Open-ended questions invite respondents to provide detailed and personalised responses—allowing for a more nuanced ...

  21. Qualitative Research: Getting Started

    Qualitative research was historically employed in fields such as sociology, history, ... Patton 12 has described an interview as "open-ended questions and probes yielding in-depth responses about people's experiences, perceptions, opinions, feelings, and knowledge. Data consists of verbatim quotations and sufficient content/context to be ...

  22. 83 Qualitative Research Questions & Examples

    What is a qualitative research question? A qualitative research question explores a topic in-depth, aiming to better understand the subject through interviews, observations, and other non-numerical data. Qualitative research questions are open-ended, helping to uncover a target audience's opinions, beliefs, and motivations.

  23. Analyzing Open-ended Questions for Qualitative Research

    Download Free PDF. View PDF. 101 Analyzing Open-ended Questions for Qualitative Research Mokhtaria Lahmar Abstract Handling open-ended questions' results as part of novice researchers' background in analyzing qualitative data can be a frustrating task as it requires deliberate effort. As teachers at Ibn Khaldoun university EFL department ...

  24. Open-Ended Questions: Examples & Advantages

    Overall, open-ended questions are powerful to gather information, foster communication, and gain deeper insights. Whether used in research, professional settings, or personal conversations, they enable individuals to explore ideas, share perspectives, critical thinking of a person, and engage in meaningful discussions.

  25. The Use of Closing Questions in Qualitative Research: Results of a Web

    Qualitative researchers use various approaches to conclude interviews. The absence of standard procedures or rules contributes to methodological variability (Brinkmann & Kvale, 2014).Some researchers terminate an interview by thanking participants and acknowledging their contributions (Baumbusch, 2010; Whiting, 2008); others add closing questions.. Limited and divergent accounts exist ...