
Chapter 2. Research Design

Getting started.

When I teach undergraduates qualitative research methods, the final product of the course is a “research proposal” that incorporates all they have learned and enlists that knowledge in an original design addressing a particular research question. I highly recommend you think about designing your own research study as you progress through this textbook. Even if you don’t have a study in mind yet, it can be a helpful exercise. But how to start? How can you design a research study before you even know what research looks like? This chapter will serve as a brief overview of the research design process to orient you to what will be coming in later chapters. Think of it as a “skeleton” of what you will read in more detail later. Ideally, you will read this chapter both now (in sequence) and again during your reading of the remainder of the text. Do not worry if you have questions the first time you read it. Many things will become clearer as the text advances and as you gain a deeper understanding of all the components of good qualitative research. This is just a preliminary map to get you on the right road.


Research Design Steps

Before you even get started, you will need to have a broad topic of interest in mind. [1] In my experience, students can confuse this broad topic with the actual research question, so it is important to clearly distinguish the two. And the place to start is the broad topic. It might be, as was the case with me, working-class college students. But what about working-class college students? What’s it like to be one? Why are there so few compared to others? How do colleges assist (or fail to assist) them? What interested me was something I could barely articulate at first and went something like this: “Why was it so difficult and lonely to be me?” And by extension, “Did others share this experience?”

Once you have a general topic, reflect on why this is important to you. Sometimes we connect with a topic and we don’t really know why. Even if you are not willing to share the real underlying reason you are interested in a topic, it is important that you know the deeper reasons that motivate you. Otherwise, it is quite possible that at some point during the research, you will find yourself turned around facing the wrong direction. I have seen it happen many times. The reason is that the research question is not the same thing as the general topic of interest, and if you don’t know the reasons for your interest, you are likely to design a study answering a research question that is beside the point—to you, at least. And this means you will be much less motivated to carry your research to completion.

Researcher Note

Why do you employ qualitative research methods in your area of study? What are the advantages of qualitative research methods for studying mentorship?

Qualitative research methods are a huge opportunity to increase access, equity, inclusion, and social justice. Qualitative research allows us to engage and examine the uniquenesses/nuances within minoritized and dominant identities and our experiences with these identities. Qualitative research allows us to explore a specific topic, and through that exploration, we can link history to experiences and look for patterns or offer up a unique phenomenon. There’s such beauty in being able to tell a particular story, and qualitative research is a great mode for that! For our work, we examined the relationships we typically use the term mentorship for but didn’t feel that was quite the right word. Qualitative research allowed us to pick apart what we did and how we engaged in our relationships, which then allowed us to more accurately describe what was unique about our mentorship relationships, which we ultimately named liberationships (McAloney and Long 2021). Qualitative research gave us the means to explore, process, and name our experiences; what a powerful tool!

How do you come up with ideas for what to study (and how to study it)? Where did you get the idea for studying mentorship?

Coming up with ideas for research, for me, is kind of like Googling a question I have, not finding enough information, and then deciding to dig a little deeper to get the answer. The idea to study mentorship actually came up in conversation with my mentorship triad. We were talking in one of our meetings about our relationship—kind of meta, huh? We discussed how we felt that mentorship was not quite the right term for the relationships we had built. One of us asked what was different about our relationships and mentorship. This all happened when I was taking an ethnography course. During the next session of class, we were discussing auto- and duoethnography, and it hit me—let’s explore our version of mentorship, which we later went on to name liberationships (McAloney and Long 2021). The idea and questions came out of being curious and wanting to find an answer. As I continue to research, I see opportunities in questions I have about my work or during conversations that, in our search for answers, end up exposing gaps in the literature. If I can’t find the answer already out there, I can study it.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

When you have a better idea of why you are interested in what it is that interests you, you may be surprised to learn that the obvious approaches to the topic are not the only ones. For example, let’s say you think you are interested in preserving coastal wildlife. And as a social scientist, you are interested in policies and practices that affect the long-term viability of coastal wildlife, especially around fishing communities. It would be natural then to consider designing a research study around fishing communities and how they manage their ecosystems. But when you really think about it, you realize that what interests you the most is how people whose livelihoods depend on a particular resource act in ways that deplete that resource. Or, even deeper, you contemplate the puzzle, “How do people justify actions that damage their surroundings?” Now, there are many ways to design a study that gets at that broader question, and not all of them are about fishing communities, although that is certainly one way to go. Maybe you could design an interview-based study that includes and compares loggers, fishers, and desert golfers (those who golf in arid lands that require a great deal of wasteful irrigation). Or design a case study around one particular example where resources were completely used up by a community. Without knowing what it is you are really interested in, what motivates your interest in a surface phenomenon, you are unlikely to come up with the appropriate research design.

These first stages of research design are often the most difficult, but have patience. Taking the time to consider why you are going to go through a lot of trouble to get answers will prevent a lot of wasted energy in the future.

There are distinct reasons for pursuing particular research questions, and it is helpful to distinguish between them. First, you may be personally motivated. This is probably the most important and the most often overlooked. What is it about the social world that sparks your curiosity? What bothers you? What answers do you need in order to keep living? For me, I knew I needed to get a handle on what higher education was for before I kept going at it. I needed to understand why I felt so different from my peers and whether this whole “higher education” thing was “for the likes of me” before I could complete my degree. That is the personal motivation question. Your personal motivation might also be political in nature, in that you want to change the world in a particular way. It’s all right to acknowledge this. In fact, it is better to acknowledge it than to hide it.

There are also academic and professional motivations for a particular study.  If you are an absolute beginner, these may be difficult to find. We’ll talk more about this when we discuss reviewing the literature. Simply put, you are probably not the only person in the world to have thought about this question or issue and those related to it. So how does your interest area fit into what others have studied? Perhaps there is a good study out there of fishing communities, but no one has quite asked the “justification” question. You are motivated to address this to “fill the gap” in our collective knowledge. And maybe you are really not at all sure of what interests you, but you do know that [insert your topic] interests a lot of people, so you would like to work in this area too. You want to be involved in the academic conversation. That is a professional motivation and a very important one to articulate.

Practical and strategic motivations are a third kind. Perhaps you want to encourage people to take better care of the natural resources around them. If this is also part of your motivation, you will want to design your research project in a way that might have an impact on how people behave in the future. There are many ways to do this, one of which is using qualitative research methods rather than quantitative research methods, as the findings of qualitative research are often easier to communicate to a broader audience than the results of quantitative research. You might even be able to engage the community you are studying in the collecting and analyzing of data, something taboo in quantitative research but actively embraced and encouraged by qualitative researchers. But there are other practical reasons, such as getting “done” with your research in a certain amount of time or having access (or no access) to certain information. There is nothing wrong with considering constraints and opportunities when designing your study. Or maybe one of the practical or strategic goals is about learning competence in this area so that you can demonstrate the ability to conduct interviews and focus groups with future employers. Keeping that in mind will help shape your study and prevent you from getting sidetracked using a technique that you are less invested in learning about.

STOP HERE for a moment

I recommend you write a paragraph (at least) explaining your aims and goals. Include a sentence about each of the following: personal/political goals, professional/academic goals, and practical/strategic goals. Think through how all of these goals are related and can be achieved by this particular research study. If they can’t, have a rethink. Perhaps this is not the best way to go about it.

You will also want to be clear about the purpose of your study. “Wait, didn’t we just do this?” you might ask. No! Your goals are not the same as the purpose of the study, although they are related. You can think about purpose lying on a continuum from “theory” to “action” (figure 2.1). Sometimes you are doing research to discover new knowledge about the world, while other times you are doing a study because you want to measure an impact or make a difference in the world.

Figure 2.1. Purpose types along the continuum from theory to action: Basic Research, Applied Research, Summative Evaluation, Formative Evaluation, Action Research.

Basic research involves research that is done for the sake of “pure” knowledge—that is, knowledge that, at least at this moment in time, may not have any apparent use or application. Often, and this is very important, knowledge of this kind is later found to be extremely helpful in solving problems. So one way of thinking about basic research is that it is knowledge for which no use is yet known but will probably one day prove to be extremely useful. If you are doing basic research, you do not need to argue its usefulness, as the whole point is that we just don’t know yet what this might be.

Researchers engaged in basic research want to understand how the world operates. They are interested in investigating a phenomenon to get at the nature of reality with regard to that phenomenon. The basic researcher’s purpose is to understand and explain (Patton 2002:215).

Basic research is interested in generating and testing hypotheses about how the world works. Grounded Theory is one approach to qualitative research methods that exemplifies basic research (see chapter 4). Most academic journal articles publish basic research findings. If you are working in academia (e.g., writing your dissertation), the default expectation is that you are conducting basic research.

Applied research in the social sciences is research that addresses human and social problems. Unlike in basic research, the researcher expects that the research will help contribute to resolving a problem, if only by identifying its contours, history, or context. From my experience, most students have this as their baseline assumption about research. Why do a study if not to make things better? But assuming this is the only purpose is a common mistake. Students and their committee members are often working with different default assumptions here—the former thinking about applied research as their purpose, the latter thinking about basic research: “The purpose of applied research is to contribute knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment. While in basic research the source of questions is the tradition within a scholarly discipline, in applied research the source of questions is in the problems and concerns experienced by people and by policymakers” (Patton 2002:217).

Applied research is less geared toward theory in two ways. First, its questions do not derive from previous literature. For this reason, applied research studies have much more limited literature reviews than those found in basic research (although they make up for this by having much more “background” about the problem). Second, it does not generate theory in the same way as basic research does. The findings of an applied research project may not be generalizable beyond the boundaries of this particular problem or context. The findings are more limited. They are useful now but may be less useful later. This is why basic research remains the default “gold standard” of academic research.

Evaluation research is research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. We already know the problems, and someone has already come up with solutions. There might be a program, say, for first-generation college students on your campus. Does this program work? Are first-generation students who participate in the program more likely to graduate than those who do not? These are the types of questions addressed by evaluation research. There are two types of research within this broader frame, one more action-oriented than the other. In summative evaluation, an overall judgment about the effectiveness of a program or policy is made. Should we continue our first-gen program? Is it a good model for other campuses? Because the purpose of such summative evaluation is to measure success and to determine whether this success is scalable (capable of being generalized beyond the specific case), quantitative data is more often used than qualitative data. In our example, we might have “outcomes” data for thousands of students, and we might run various tests to determine if the better outcomes of those in the program are statistically significant so that we can generalize the findings and recommend similar programs elsewhere. Qualitative data in the form of focus groups or interviews can then be used for illustrative purposes, providing more depth to the quantitative analyses. In contrast, formative evaluation attempts to improve a program or policy (to help “form” or shape its effectiveness). Formative evaluations rely more heavily on qualitative data—case studies, interviews, focus groups. The findings are meant not to generalize beyond the particular but to improve this program. If you are a student seeking to improve your qualitative research skills and you do not care about generating basic research, formative evaluation studies might be an attractive option to pursue, as there are always local programs that need evaluation and suggestions for improvement. Again, be very clear about your purpose when talking through your research proposal with your committee.

Action research takes a further step beyond evaluation, even formative evaluation, to being part of the solution itself. This is about as far from basic research as one could get and definitely falls beyond the scope of “science,” as conventionally defined. The distinction between action and research is blurry, the research methods are often in constant flux, and the only “findings” are specific to the problem or case at hand, often findings about the process of intervention itself. Rather than evaluate a program as a whole, action research often seeks to change and improve some particular aspect that may not be working: maybe there is not enough diversity in an organization, or maybe women’s voices are muted during meetings and the organization wonders why and would like to change this. Participatory action research goes a step further still: those women would become part of the research team, attempting to amplify their voices in the organization through participation in the action research. Because action research employs methods that involve people in the process, focus groups are quite common.

If you are working on a thesis or dissertation, chances are your committee will expect you to be contributing to fundamental knowledge and theory (basic research). If your interests lie more toward the action end of the continuum, however, it is helpful to talk to your committee about this before you get started. Knowing your purpose in advance will help avoid misunderstandings during the later stages of the research process!

The Research Question

Once you have written your paragraph and clarified your purpose and truly know that this study is the best study for you to be doing right now, you are ready to write and refine your actual research question. Know that research questions are often moving targets in qualitative research; they can be refined up to the very end of data collection and analysis. But you do have to have a working research question at all stages. This is your “anchor” when you get lost in the data. What are you addressing? What are you looking at and why? Your research question guides you through the thicket. It is common to have a whole host of questions about a phenomenon or case, both at the outset and throughout the study, but you should be able to pare them down to no more than two or three sentences when asked. These sentences should both clarify the intent of the research and explain why this is an important question to answer. More on refining your research question can be found in chapter 4.

Chances are, you will have already done some prior reading before coming up with your interest and your questions, but you may not have conducted a systematic literature review. This is the next crucial stage to be completed before venturing further. You don’t want to start collecting data and then realize that someone has already beaten you to the punch. A review of the literature that is already out there will let you know (1) if others have already done the study you are envisioning; (2) if others have done similar studies, which can help you out; and (3) what ideas or concepts are out there that can help you frame your study and make sense of your findings. More on literature reviews can be found in chapter 9.

In addition to reviewing the literature for studies similar to the one you are proposing, it can be extremely helpful to find a study that inspires you. This may have absolutely nothing to do with the topic you are interested in but is written so beautifully or organized so interestingly or otherwise speaks to you in such a way that you want to post it somewhere to remind you of what you want to be doing. You might not understand this in the early stages—why would a study that has nothing to do with yours be helpful? But trust me, when you are deep into analysis and writing, having an inspirational model in view can help you push through. If you are motivated to do something that might change the world, you probably have read something somewhere that inspired you. Go back to that original inspiration and read it carefully to see how its authors managed to convey the passion that you so appreciate.

At this stage, you are still just getting started. There are a lot of things to do before setting forth to collect data! You’ll want to consider and choose a research tradition and a set of data-collection techniques that both help you answer your research question and match all your aims and goals. For example, if you really want to help migrant workers speak for themselves, you might draw on feminist theory and participatory action research models. Chapters 3 and 4 will provide you with more information on epistemologies and approaches.

Next, you have to clarify your “units of analysis.” What is the level at which you are focusing your study? Often, the unit in qualitative research methods is individual people, or “human subjects.” But your units of analysis could just as well be organizations (colleges, hospitals) or programs or even whole nations. Think about what it is you want to be saying at the end of your study—are the insights you are hoping to make about people or about organizations or about something else entirely? A unit of analysis can even be a historical period! Every unit of analysis will call for a different kind of data collection and analysis and will produce different kinds of “findings” at the conclusion of your study. [2]

Regardless of what unit of analysis you select, you will probably have to consider the “human subjects” involved in your research. [3] Who are they? What interactions will you have with them—that is, what kind of data will you be collecting? Before answering these questions, define your population of interest and your research setting. Use your research question to help guide you.

Let’s use an example from a real study. In Geographies of Campus Inequality, Benson and Lee (2020) list three related research questions: “(1) What are the different ways that first-generation students organize their social, extracurricular, and academic activities at selective and highly selective colleges? (2) how do first-generation students sort themselves and get sorted into these different types of campus lives; and (3) how do these different patterns of campus engagement prepare first-generation students for their post-college lives?” (3).

Note that we are jumping into this a bit late, after Benson and Lee have described previous studies (the literature review) and what is and is not known about first-generation college students. They want to know about differences within this group, and they are interested in students attending certain kinds of colleges because those colleges will be sites where academic and extracurricular pressures compete. That is the context for their three related research questions. What is the population of interest here? First-generation college students. What is the research setting? Selective and highly selective colleges. But a host of questions remain. Which students in the real world, and which colleges? What about gender, race, and other identity markers? Will the students be asked questions? Are the students still in college, or will they be asked about what college was like for them? Will they be observed? Will they be shadowed? Will they be surveyed? Will they be asked to keep diaries of their time in college? How many students? How many colleges? For how long will they be observed?

Recommendation

Take a moment and write down suggestions for Benson and Lee before continuing on to what they actually did.

Have you written down your own suggestions? Good. Now let’s compare those with what they actually did. Benson and Lee drew on two sources of data: in-depth interviews with sixty-four first-generation students and survey data from a preexisting national survey of students at twenty-eight selective colleges. Let’s ignore the survey for our purposes here and focus on those interviews. The interviews were conducted between 2014 and 2016 at a single selective college, “Hilltop” (a pseudonym). They employed a “purposive” sampling strategy to ensure an equal number of male-identifying and female-identifying students as well as equal numbers of White, Black, and Latinx students. Each student was interviewed once. Hilltop is a selective liberal arts college in the northeast that enrolls about three thousand students.

How did your suggestions match up to those actually used by the researchers in this study? Is it possible your suggestions were too ambitious? Beginning qualitative researchers often make that mistake. You want a research design that is both effective (it matches your question and goals) and doable. You will never be able to collect data from your entire population of interest (unless your research question is really so narrow as to be relevant to very few people!), so you will need to come up with a good sample. Define the criteria for this sample, as Benson and Lee did when deciding to interview an equal number of students by gender and race categories. Define the criteria for your sample setting too. Benson and Lee judged Hilltop to be typical of selective colleges; that, too, was a research choice. For more on sampling and sampling choices, see chapter 5.

Benson and Lee chose to employ interviews. If you also would like to include interviews, you have to think about what will be asked in them. Most interview-based research involves an interview guide, a set of questions or question areas that will be asked of each participant. The research question helps you create a relevant interview guide. You want to ask questions whose answers will provide insight into your research question. Again, your research question is the anchor you will continually come back to as you plan for and conduct your study. It may be that once you begin interviewing, you find that people are telling you something totally unexpected, and this makes you rethink your research question. That is fine. Then you have a new anchor. But you always have an anchor. More on interviewing can be found in chapter 11.

Let’s imagine Benson and Lee also observed college students as they went about doing the things college students do, both in the classroom and in the clubs and social activities in which they participate. They would have needed a plan for this. Would they sit in on classes? Which ones and how many? Would they attend club meetings and sports events? Which ones and how many? Would they participate themselves? How would they record their observations? More on observation techniques can be found in both chapters 13 and 14.

At this point, the design is almost complete. You know why you are doing this study, you have a clear research question to guide you, you have identified your population of interest and research setting, and you have a reasonable sample of each. You also have put together a plan for data collection, which might include drafting an interview guide or making plans for observations. And so you know exactly what you will be doing for the next several months (or years!). To put the project into action, there are a few more things necessary before actually going into the field.

First, you will need to make sure you have any necessary supplies, including recording technology. These days, many researchers use their phones to record interviews. Second, you will need to draft a few documents for your participants. These include informed consent forms and recruiting materials, such as posters or email texts, that explain what this study is in clear language. Third, you will draft a research protocol to submit to your institutional review board (IRB); this research protocol will include the interview guide (if you are using one), the consent form template, and all examples of recruiting material. Depending on your institution and the details of your study design, it may take weeks or even, in some unfortunate cases, months before you secure IRB approval. Make sure you plan on this time in your project timeline. While you wait, you can continue to review the literature and possibly begin drafting a section on the literature review for your eventual presentation/publication. More on IRB procedures can be found in chapter 8 and more general ethical considerations in chapter 7.

Once you have approval, you can begin!

Research Design Checklist

Before data collection begins, do the following:

  • Write a paragraph explaining your aims and goals (personal/political, practical/strategic, professional/academic).
  • Define your research question; write two to three sentences that clarify the intent of the research and why this is an important question to answer.
  • Review the literature for similar studies that address your research question or similar research questions; think laterally about some literature that might be helpful or illuminating but is not exactly about the same topic.
  • Find a written study that inspires you—it may or may not be on the research question you have chosen.
  • Consider and choose a research tradition and set of data-collection techniques that (1) help answer your research question and (2) match your aims and goals.
  • Define your population of interest and your research setting.
  • Define the criteria for your sample (How many? Why these? How will you find them, gain access, and acquire consent?).
  • If you are conducting interviews, draft an interview guide.
  • If you are making observations, create a plan for observations (sites, times, recording, access).
  • Acquire any necessary technology (recording devices/software).
  • Draft consent forms that clearly identify the research focus and selection process.
  • Create recruiting materials (posters, email, texts).
  • Apply for IRB approval (proposal plus consent form plus recruiting materials).
  • Block out time for collecting data.
Notes

1. At the end of the chapter, you will find a “Research Design Checklist” that summarizes the main recommendations made here.
2. For example, if your focus is society and culture, you might collect data through observation or a case study. If your focus is individual lived experience, you are probably going to be interviewing some people. And if your focus is language and communication, you will probably be analyzing texts (written or visual) (Marshall and Rossman 2016:16).
3. You may not have any “live” human subjects. There are qualitative research methods that do not require interactions with live human beings (see chapter 16, “Archival and Historical Sources”). But for the most part, you are probably reading this textbook because you are interested in doing research with people. The rest of the chapter will assume this is the case.

One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography. 

A methodological tradition of inquiry and research design that focuses on an individual case (e.g., setting, institution, or sometimes an individual) in order to explore its complexity, history, and interactive parts. As an approach, it is particularly useful for obtaining a deep appreciation of an issue, event, or phenomenon of interest in its particular context.

The controlling force in research; can be understood as lying on a continuum from basic research (knowledge production) to action research (effecting change).

In its most basic sense, a theory is a story we tell about how the world works that can be tested with empirical evidence.  In qualitative research, we use the term in a variety of ways, many of which are different from how they are used by quantitative researchers.  Although some qualitative research can be described as “testing theory,” it is more common to “build theory” from the data using inductive reasoning , as done in Grounded Theory .  There are so-called “grand theories” that seek to integrate a whole series of findings and stories into an overarching paradigm about how the world works, and much smaller theories or concepts about particular processes and relationships.  Theory can even be used to explain particular methodological perspectives or approaches, as in Institutional Ethnography , which is both a way of doing research and a theory about how the world works.

Research that is interested in generating and testing hypotheses about how the world works.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter.  This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them.  Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives." ( Denzin and Lincoln 2005:2 ). Contrast with quantitative research .

Research that contributes knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment.

Research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems.  There are two kinds: summative and formative .

Research in which an overall judgment about the effectiveness of a program or policy is made, often for the purpose of generalizing to other cases or programs.  Generally uses qualitative research as a supplement to primary quantitative data analyses.  Contrast formative evaluation research .

Research designed to improve a program or policy (to help “form” or shape its effectiveness); relies heavily on qualitative research methods.  Contrast summative evaluation research .

Research carried out at a particular organizational or community site with the intention of affecting change; often involves research subjects as participants of the study.  See also participatory action research .

Research in which both researchers and participants work together to understand a problematic situation and change it for the better.

The level of the focus of analysis (e.g., individual people, organizations, programs, neighborhoods).

The large group of interest to the researcher.  Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken.  For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.”  In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample.  In qualitative research, defining the population is conceptually important for clarity.

A fictional name assigned to give anonymity to a person, group, or place.  Pseudonyms are important ways of protecting the identity of research participants while still providing a “human element” in the presentation of qualitative data.  There are ethical considerations to be made in selecting pseudonyms; some researchers allow research participants to choose their own.

A requirement for research involving human participants; the documentation of informed consent.  In some cases, oral consent or assent may be sufficient, but the default standard is a single-page easy-to-understand form that both the researcher and the participant sign and date.   Under federal guidelines, all researchers "shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative.  No informed consent, whether oral or written, may include any exculpatory language through which the subject or the representative is made to waive or appear to waive any of the subject's rights or releases or appears to release the investigator, the sponsor, the institution, or its agents from liability for negligence" (21 CFR 50.20).  Your IRB office will be able to provide a template for use in your study .

An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenology. They often take similar approaches to data collection but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
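As an illustration of the probability side of this distinction, simple random sampling draws from a known sampling frame so that every member has an equal chance of selection. The sketch below is a minimal example; the sampling frame (student ID numbers) and sample size are hypothetical.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Probability sampling: every member of the frame has an equal,
    known chance of selection, which is what lets you generalise
    statistically from sample to population."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(population, n)  # sampling without replacement

# Hypothetical sampling frame: student ID numbers 1..500
frame = list(range(1, 501))
sample = simple_random_sample(frame, 20)

print(len(sample))       # 20 participants drawn
print(len(set(sample)))  # 20 — all distinct (without replacement)
```

Non-probability methods (e.g., convenience or snowball sampling) skip the frame entirely, which is exactly why their results generalise less confidently.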

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
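As a toy illustration of operationalisation, a verbal satisfaction item can be mapped to numeric scores. The Likert wording and scoring below are hypothetical, not taken from any established instrument.

```python
# Hypothetical operationalisation of "satisfaction" as a 5-point Likert item.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score_responses(responses):
    """Turn verbal responses into numeric indicators; unrecognised
    answers are flagged for review rather than silently dropped."""
    scores, invalid = [], []
    for r in responses:
        key = r.strip().lower()
        if key in LIKERT:
            scores.append(LIKERT[key])
        else:
            invalid.append(r)
    return scores, invalid

scores, invalid = score_responses(["Agree", "neutral", "Strongly agree", "n/a"])
print(scores)   # [4, 3, 5]
print(invalid)  # ['n/a']
```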

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
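One common reliability check on a piloted multi-item scale is Cronbach's alpha, a measure of internal consistency. The sketch below uses hypothetical pilot data; by convention, values above roughly 0.7 are taken as acceptable.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is a list of per-item score lists, with respondents
    in the same order in every list."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # each respondent's total score
    item_var = sum(statistics.variance(i) for i in items)
    return (k / (k - 1)) * (1 - item_var / statistics.variance(totals))

# Hypothetical 3-item questionnaire answered by 5 pilot participants
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```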

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.
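One data management task mentioned above, anonymising records, can be sketched as assigning stable pseudonyms so the same participant always receives the same fictional name. The names and pseudonym pool here are hypothetical; in practice the mapping ("key") must be stored separately from the data and kept secure.

```python
import itertools

def pseudonymise(records, names=("Alex", "Blair", "Casey", "Devon")):
    """Replace real participant names with stable pseudonyms.
    Note: if there are more participants than pool names, the cycle
    repeats and pseudonyms collide — a real pool must be large enough."""
    pool = itertools.cycle(names)  # hypothetical pseudonym pool
    mapping = {}
    out = []
    for real_name, text in records:
        if real_name not in mapping:
            mapping[real_name] = next(pool)
        out.append((mapping[real_name], text))
    return out, mapping

records = [("Maria", "interview 1"), ("Tom", "interview 2"), ("Maria", "follow-up")]
anon, key = pseudonymise(records)
print(anon)  # [('Alex', 'interview 1'), ('Blair', 'interview 2'), ('Alex', 'follow-up')]
```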

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
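The three descriptive summaries listed above (distribution, central tendency, variability) can be computed directly with Python's standard library; the test scores below are hypothetical.

```python
import statistics
from collections import Counter

scores = [72, 85, 85, 90, 64, 78, 85, 70]  # hypothetical test scores

distribution = Counter(scores)   # frequency of each score
mean = statistics.mean(scores)   # central tendency: the average score
stdev = statistics.stdev(scores) # variability: sample standard deviation

print(distribution[85])   # 3 — the score 85 occurs three times
print(mean)               # 78.625
print(round(stdev, 2))    # 9.13
```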

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
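As a minimal sketch of a comparison test, Welch's t statistic compares two group means without assuming equal variances. The group scores below are hypothetical, and a full analysis would also compute degrees of freedom and a p-value (e.g., with a statistics library).

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for comparing the means of two groups."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical outcome scores for two groups
control   = [70, 68, 75, 72, 69]
treatment = [78, 80, 74, 82, 77]

t = welch_t(treatment, control)
print(round(t, 1))  # a clearly positive t: the treatment mean exceeds control
```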

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.
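Thematic analysis is interpretive work done by a human analyst, but a first descriptive pass is sometimes sketched computationally, for instance by tagging transcript segments whose text contains codebook keywords. The codebook and excerpts below are hypothetical, and keyword matching is only a crude stand-in for real coding.

```python
# Hypothetical codebook mapping candidate themes to indicator keywords.
CODEBOOK = {
    "belonging": ["belong", "fit in", "welcome"],
    "finances":  ["money", "tuition", "work"],
}

def code_segments(segments):
    """First-pass descriptive coding: tag each segment with every theme
    whose keywords appear in it. A human analyst would refine these."""
    tagged = []
    for seg in segments:
        low = seg.lower()
        themes = [t for t, kws in CODEBOOK.items() if any(k in low for k in kws)]
        tagged.append((seg, themes))
    return tagged

data = [
    "I never felt like I belonged on campus.",
    "Tuition kept me working two jobs.",
]
for seg, themes in code_segments(data):
    print(themes)
# ['belonging']
# ['finances']
```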

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 14 May 2024, from https://www.scribbr.co.uk/research-methods/research-design/


Qualitative Research Design: Start

Qualitative Research Design


What is Qualitative research design?

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. Rather than collecting numerical data points or introducing interventions and treatments, as quantitative research does, qualitative research helps generate hypotheses and further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much . It can be structured as a stand-alone study, relying purely on qualitative data, or it can form part of mixed-methods research that combines qualitative and quantitative data.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research. Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis. Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

While qualitative and quantitative approaches are different, they are not necessarily opposites, and they are certainly not mutually exclusive. For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. Say a quantitative analysis has determined that there is a correlation between length of stay and level of patient satisfaction; qualitative research could then explore why that correlation exists. This dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.

Research Paradigms 

  • Positivist versus Post-Positivist
  • Social Constructivist (the paradigm from which most qualitative studies arise)

Events Relating to the Qualitative Research and Community Engagement Workshops @ CMU Libraries

CMU Libraries is committed to helping members of our community become data experts. To that end, CMU is offering public facing workshops that discuss Qualitative Research, Coding, and Community Engagement best practices.

The following workshops are a part of a broader series on using data. Please follow the links to register for the events. 

Qualitative Coding

Using Community Data to improve Outcome (Grant Writing)

Survey Design  

Upcoming Event: March 21st, 2024 (12:00 pm to 1:00 pm)

Community Engagement and Collaboration Event 

Join us for an event to improve, build on and expand the connections between Carnegie Mellon University resources and the Pittsburgh community. CMU resources such as the Libraries and Sustainability Initiative can be leveraged by users not affiliated with the university, but barriers can prevent them from fully engaging.

The conversation features representatives from CMU departments and local organizations about the community engagement efforts currently underway at CMU and opportunities to improve upon them. Speakers will highlight current and ongoing projects and share resources to support future collaboration.

Event Moderators:

Taiwo Lasisi, CLIR Postdoctoral Fellow in Community Data Literacy,  Carnegie Mellon University Libraries

Emma Slayton, Data Curation, Visualization, & GIS Specialist,  Carnegie Mellon University Libraries

Nicky Agate , Associate Dean for Academic Engagement, Carnegie Mellon University Libraries

Chelsea Cohen , The University’s Executive fellow for community engagement, Carnegie Mellon University

Sarah Ceurvorst , Academic Pathways Manager, Program Director, LEAP (Leadership, Excellence, Access, Persistence) Carnegie Mellon University

Julia Poepping , Associate Director of Partnership Development, Information Systems, Carnegie Mellon University 

Scott Wolovich , Director of New Sun Rising, Pittsburgh 

Additional workshops and events will be forthcoming. Watch this space for updates. 


Qualitative Research Methods

What are Qualitative Research methods?

Qualitative research adopts numerous methods or techniques, including interviews, focus groups, and observation. Interviews may be unstructured, with open-ended questions on a topic to which the interviewer adapts, or structured, with a predetermined set of questions asked of every participant. Interviews are usually one-on-one and are appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8-12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can be participant observers, sharing the experiences of their subjects, or non-participant (detached) observers.

What constitutes a good research question? Does the question drive research design choices?

According to Doody and Bailey (2014):

  • A good research question is developed by consulting relevant literature as well as colleagues and supervisors experienced in the area of research.
  • It helps to have a directed research aim and objective.
  • Researchers should not simply follow research trends; they need sufficient evidence for the question, which is why research objectives are important. Clear objectives also help take time and resources into consideration.

Research questions can be developed from theoretical knowledge, previous research or experience, or a practical need at work (Parahoo 2014). They have numerous roles, such as identifying the importance of the research and providing clarity of purpose for the research, in terms of what the research intends to achieve in the end.

Qualitative Research Questions

What constitutes a good Qualitative research question?

A good qualitative research question addresses the hows and whys of a phenomenon rather than how many or how much. It directs attention to participants' experiences, perceptions, and behavior, whether it anchors a stand-alone study relying purely on qualitative data or forms part of mixed-methods research that combines qualitative and quantitative data.

Examples of good Qualitative Research Questions:

What are people's thoughts on the new library? 

How does it feel to be a first-generation student attending college?

Difference example (between Qualitative and Quantitative research questions):

How many college students signed up for the new semester? (Quan) 

How do college students feel about the new semester? What are their experiences so far? (Qual)



  • Last Updated: Feb 14, 2024 4:25 PM
  • URL: https://guides.library.cmu.edu/c.php?g=1346006

Qualitative Research Design: An Interactive Approach

  • Joseph A. Maxwell - George Mason University, VA



New to the Third Edition

  • Provides new and expanded coverage of key topics such as paradigms in qualitative research, conceptual frameworks and using theory, doing literature reviews, and writing research proposals

Key Features

  • Offers an original, innovative model of design based on a systemic rather than a linear or typological structure, well suited for designing studies and writing research proposals
  • Includes many exercises that help readers to design their study
  • Provides guidance in a clear, direct writing style, offering practical advice on research design

A major impetus for a new edition of this book was the opportunity to expand it somewhat beyond the page limits of the earlier series on Applied Social Research Methods, for which it was originally written. However, many readers of the previous editions have said that they appreciated the conciseness of the book, so I didn't want to lose this virtue. Consequently, much of the new material in this edition consists of additional examples of my students' work, including a second example of a dissertation proposal (Appendix B).

Another impetus has been the ongoing development of qualitative research, [1] with a flourishing of new approaches, including arts-based approaches, to how it is conducted and presented. I haven't attempted to deal comprehensively with these, which would have ballooned the book well past what I felt was an appropriate length, as well as taking it beyond an introductory level. If you want to investigate these developments, the SAGE Encyclopedia of Qualitative Research (Given, 2008), the SAGE Handbook of Qualitative Research, 4th edition (Denzin & Lincoln, 2011), and the journal Qualitative Inquiry are good places to start. I've tried to indicate, in Chapters 1 and 3, how I see my approach to design as compatible with some of these developments, in particular with aspects of postmodernism and with the approach known as bricolage, and I have substantially rewritten and expanded my discussion of research paradigms, in Chapter 2.

However, I am also sceptical of some of these developments, particularly those that adopt a radical constructivist and relativist stance that denies the existence of any "reality" that our research attempts to understand, and that rejects any conception of "validity" (or related terms) that addresses the relationship between our research conclusions and the phenomena that we study. While I am enough of a postmodernist to believe that every theory and conclusion is our own construction, with no claim to "objective" or absolute truth, and argue in Chapter 2 that no theory can capture the full complexity of the things we study, I refuse to abandon the goal of gaining a better understanding of the physical, social, and cultural world in which we live, or the possibility of developing credible explanations for these phenomena.

This position is grounded in my third impetus for revising this book: my increasing awareness of how my own perspective on qualitative research has been informed by a philosophical realism about the things we study. I have developed this perspective at length in my book A Realist Approach for Qualitative Research (2011), arguing that the "critical realist" position I have taken is not only compatible with most qualitative researchers' actual practices, but can be valuable in helping researchers with some difficult theoretical, methodological, and political issues that they face. However, I offer this as a useful perspective among other perspectives, not as the single correct paradigm for qualitative research. As the writing teacher Peter Elbow (1973, 2006) argued, it is important to play both the "believing game" and the "doubting game" with any theory or position you encounter, trying to see both its advantages and its distortions or blind spots. For this reason, I want the present book to be of practical value to students and researchers who hold a variety of positions on these issues. The model of qualitative research design that I develop here is compatible with a range of philosophical perspectives, and I believe it is broadly applicable to most qualitative research.

My greater awareness of the implications of a critical realist stance has led me to revise or expand other parts of the book—in particular, the discussion of theory in Chapter 3; developing (and revising) research questions in Chapter 4; research relationships and ethics, developing interview questions, and data analysis, in Chapter 5; the concept of validity in Chapter 6; and the appropriate functions and content of a literature review in a research proposal, in Chapter 7. I've also continued to compulsively tinker with the language of the book, striving to make what I say clearer. I would be grateful for any feedback you can give me on how the book could be made more useful to you.

Finally, I realized in revising this work that I had said almost nothing explicitly about how I define qualitative research—what I see as most essential about a qualitative approach. I say more about this in Chapter 2. However, a brief definition would be that qualitative research is research that is intended to help you better understand 1) the meanings and perspectives of the people you study—seeing the world from their point of view, rather than simply from your own; 2) how these perspectives are shaped by, and shape, their physical, social, and cultural contexts; and 3) the specific processes that are involved in maintaining or altering these phenomena and relationships. All three of these aspects of qualitative research, but particularly the last one, contrast with most quantitative approaches to research, which are based on seeing the phenomena studied in terms of variables — properties of things that can vary, and can thus be measured and compared across contexts. I see most of the more obvious aspects of qualitative research—its inductive, open-ended approach, its reliance on textual or visual rather than numerical data, its primary goal of particular understanding rather than generalization across persons and settings—as due to these three main features of qualitative inquiry. (For a more detailed discussion of these issues, see Maxwell, 2011b.)

1. Some qualitative practitioners prefer the term "inquiry" to "research," seeing the latter as too closely associated with a quantitative or positivist approach. I agree with their concerns (see Maxwell, 2004a, b), and I understand that some types of qualitative inquiry are more humanistic than scientific, but I prefer to argue for a broader definition of "research" that includes a range of qualitative approaches.


Qualitative study design


Introduction

The effective evaluation of research involves assessing how a study has been designed and conducted, and whether the methods used were the most appropriate for addressing the aims of the study. In contrast to quantitative studies, which are about breadth, qualitative research focuses on depth.

Whereas quantitative research aims to develop objective theories by generating quantifiable numerical data, qualitative research aims to understand meaning. This might be the meanings that people attribute to their work, their behaviours or beliefs, or their attitudes or perceptions. Qualitative research is often based on methods of observation and enquiry; it “explores the meaning of human experiences and creates the possibilities of change through raised awareness and purposeful action” ( Taylor & Francis, 2013 ). Qualitative research focuses on life experiences; it is more about the “why” and “how” than the “how many” or “how often”.

Qualitative study designs might be chosen for any number of reasons. In health, you might be interested in finding out how nurses feel or experience care in the ICU; or you might want to find out how people engaged in heavy substance use found the experience of connecting with a support agency. Qualitative study designs are beneficial for certain types of research questions such as those looking to provide unique insights into specific contexts or social situations. However, they are not as strong when wanting to find direct cause and effect links or where a statistically significant result is required ( Taylor et al., 2006 ). 

Attribution and acknowledgement

Crediting creators and attributing content is a core part of both academic integrity and of being a digital citizen more broadly. This guide was created by Deakin Library.

The text and layout of Qualitative study design is © Deakin University 2023 and licensed under a CC BY NC 4.0


Qualitative Research – Methods, Analysis Types and Guide

Qualitative research is a type of research methodology that focuses on exploring and understanding people’s beliefs, attitudes, behaviors, and experiences through the collection and analysis of non-numerical data. It seeks to answer research questions through the examination of subjective data, such as interviews, focus groups, observations, and textual analysis.

Qualitative research aims to uncover the meaning and significance of social phenomena, and it typically involves a more flexible and iterative approach to data collection and analysis compared to quantitative research. Qualitative research is often used in fields such as sociology, anthropology, psychology, and education.

Qualitative Research Methods

Qualitative Research Methods are as follows:

One-to-One Interview

This method involves conducting an interview with a single participant to gain a detailed understanding of their experiences, attitudes, and beliefs. One-to-one interviews can be conducted in-person, over the phone, or through video conferencing. The interviewer typically uses open-ended questions to encourage the participant to share their thoughts and feelings. One-to-one interviews are useful for gaining detailed insights into individual experiences.

Focus Groups

This method involves bringing together a group of people to discuss a specific topic in a structured setting. The focus group is led by a moderator who guides the discussion and encourages participants to share their thoughts and opinions. Focus groups are useful for generating ideas and insights, exploring social norms and attitudes, and understanding group dynamics.

Ethnographic Studies

This method involves immersing oneself in a culture or community to gain a deep understanding of its norms, beliefs, and practices. Ethnographic studies typically involve long-term fieldwork and observation, as well as interviews and document analysis. Ethnographic studies are useful for understanding the cultural context of social phenomena and for gaining a holistic understanding of complex social processes.

Text Analysis

This method involves analyzing written or spoken language to identify patterns and themes. Text analysis can be quantitative or qualitative. Qualitative text analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Text analysis is useful for understanding media messages, public discourse, and cultural trends.

Case Studies

This method involves an in-depth examination of a single person, group, or event to gain an understanding of complex phenomena. Case studies typically involve a combination of data collection methods, such as interviews, observations, and document analysis, to provide a comprehensive understanding of the case. Case studies are useful for exploring unique or rare cases, and for generating hypotheses for further research.

Process of Observation

This method involves systematically observing and recording behaviors and interactions in natural settings. The observer may take notes, use audio or video recordings, or use other methods to document what they see. Process of observation is useful for understanding social interactions, cultural practices, and the context in which behaviors occur.

Record Keeping

This method involves keeping detailed records of observations, interviews, and other data collected during the research process. Record keeping is essential for ensuring the accuracy and reliability of the data, and for providing a basis for analysis and interpretation.

Surveys

This method involves collecting data from a large sample of participants through a structured questionnaire. Surveys can be conducted in person, over the phone, through mail, or online. Surveys are useful for collecting data on attitudes, beliefs, and behaviors, and for identifying patterns and trends in a population.

Qualitative data analysis is the process of turning unstructured data into meaningful insights. It involves extracting and organizing information from sources like interviews, focus groups, and surveys. The goal is to understand people’s attitudes, behaviors, and motivations.

Qualitative Research Analysis Methods

Qualitative Research analysis methods involve a systematic approach to interpreting and making sense of the data collected in qualitative research. Here are some common qualitative data analysis methods:

Thematic Analysis

This method involves identifying patterns or themes in the data that are relevant to the research question. The researcher reviews the data, identifies keywords or phrases, and groups them into categories or themes. Thematic analysis is useful for identifying patterns across multiple data sources and for generating new insights into the research topic.
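The mechanics of that first pass — marking excerpts with candidate themes and grouping them — can be illustrated with a toy sketch in Python. This is not how dedicated packages such as NVivo or ATLAS.ti work, and it is no substitute for interpretive close reading; the themes, keywords, and excerpts below are invented for the example:

```python
# Toy sketch of a first-pass thematic tagger: mark interview excerpts
# with candidate themes when theme keywords appear, then group excerpts
# by theme for later close reading. Simple substring matching will
# over- and under-match (e.g. "work" inside "network"); a researcher
# would review every grouping by hand. All data here is invented.
from collections import defaultdict

THEMES = {
    "belonging": {"fit in", "belong", "lonely", "isolated"},
    "finances": {"tuition", "loan", "work", "afford"},
}

def tag_excerpt(excerpt: str) -> list[str]:
    """Return the candidate themes whose keywords appear in the excerpt."""
    text = excerpt.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in text for kw in keywords)]

def group_by_theme(excerpts: list[str]) -> dict[str, list[str]]:
    """Group excerpts under each theme they touch."""
    groups = defaultdict(list)
    for excerpt in excerpts:
        for theme in tag_excerpt(excerpt):
            groups[theme].append(excerpt)
    return dict(groups)

excerpts = [
    "I never felt like I could fit in with the other students.",
    "Between tuition and my loan payments, I had to work two jobs.",
]
print(group_by_theme(excerpts))
# → {'belonging': [first excerpt], 'finances': [second excerpt]}
```

In practice the keyword lists themselves emerge from reading the data, so a pass like this is only a way of organizing material the researcher has already begun to interpret.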

Content Analysis

This method involves analyzing the content of written or spoken language to identify key themes or concepts. Content analysis can be quantitative or qualitative. Qualitative content analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Content analysis is useful for identifying patterns in media messages, public discourse, and cultural trends.

Discourse Analysis

This method involves analyzing language to understand how it constructs meaning and shapes social interactions. Discourse analysis can involve a variety of methods, such as conversation analysis, critical discourse analysis, and narrative analysis. Discourse analysis is useful for understanding how language shapes social interactions, cultural norms, and power relationships.

Grounded Theory Analysis

This method involves developing a theory or explanation based on the data collected. Grounded theory analysis starts with the data and uses an iterative process of coding and analysis to identify patterns and themes in the data. The theory or explanation that emerges is grounded in the data, rather than preconceived hypotheses. Grounded theory analysis is useful for understanding complex social phenomena and for generating new theoretical insights.

Narrative Analysis

This method involves analyzing the stories or narratives that participants share to gain insights into their experiences, attitudes, and beliefs. Narrative analysis can involve a variety of methods, such as structural analysis, thematic analysis, and discourse analysis. Narrative analysis is useful for understanding how individuals construct their identities, make sense of their experiences, and communicate their values and beliefs.

Phenomenological Analysis

This method involves analyzing how individuals make sense of their experiences and the meanings they attach to them. Phenomenological analysis typically involves in-depth interviews with participants to explore their experiences in detail. Phenomenological analysis is useful for understanding subjective experiences and for developing a rich understanding of human consciousness.

Comparative Analysis

This method involves comparing and contrasting data across different cases or groups to identify similarities and differences. Comparative analysis can be used to identify patterns or themes that are common across multiple cases, as well as to identify unique or distinctive features of individual cases. Comparative analysis is useful for understanding how social phenomena vary across different contexts and groups.

Applications of Qualitative Research

Qualitative research has many applications across different fields and industries. Here are some examples of how qualitative research is used:

  • Market Research: Qualitative research is often used in market research to understand consumer attitudes, behaviors, and preferences. Researchers conduct focus groups and one-on-one interviews with consumers to gather insights into their experiences and perceptions of products and services.
  • Health Care: Qualitative research is used in health care to explore patient experiences and perspectives on health and illness. Researchers conduct in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education: Qualitative research is used in education to understand student experiences and to develop effective teaching strategies. Researchers conduct classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work : Qualitative research is used in social work to explore social problems and to develop interventions to address them. Researchers conduct in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology : Qualitative research is used in anthropology to understand different cultures and societies. Researchers conduct ethnographic studies and observe and interview members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology : Qualitative research is used in psychology to understand human behavior and mental processes. Researchers conduct in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy : Qualitative research is used in public policy to explore public attitudes and to inform policy decisions. Researchers conduct focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

How to Conduct Qualitative Research

Here are some general steps for conducting qualitative research:

  • Identify your research question: Qualitative research starts with a research question or set of questions that you want to explore. This question should be focused and specific, but also broad enough to allow for exploration and discovery.
  • Select your research design: There are different types of qualitative research designs, including ethnography, case study, grounded theory, and phenomenology. You should select a design that aligns with your research question and that will allow you to gather the data you need to answer your research question.
  • Recruit participants: Once you have your research question and design, you need to recruit participants. The number of participants you need will depend on your research design and the scope of your research. You can recruit participants through advertisements, social media, or through personal networks.
  • Collect data: There are different methods for collecting qualitative data, including interviews, focus groups, observation, and document analysis. You should select the method or methods that align with your research design and that will allow you to gather the data you need to answer your research question.
  • Analyze data: Once you have collected your data, you need to analyze it. This involves reviewing your data, identifying patterns and themes, and developing codes to organize your data. You can use different software programs to help you analyze your data, or you can do it manually.
  • Interpret data: Once you have analyzed your data, you need to interpret it. This involves making sense of the patterns and themes you have identified, and developing insights and conclusions that answer your research question. You should be guided by your research question and use your data to support your conclusions.
  • Communicate results: Once you have interpreted your data, you need to communicate your results. This can be done through academic papers, presentations, or reports. You should be clear and concise in your communication, and use examples and quotes from your data to support your findings.

Examples of Qualitative Research

Here are some real-time examples of qualitative research:

  • Customer Feedback: A company may conduct qualitative research to understand the feedback and experiences of its customers. This may involve conducting focus groups or one-on-one interviews with customers to gather insights into their attitudes, behaviors, and preferences.
  • Healthcare : A healthcare provider may conduct qualitative research to explore patient experiences and perspectives on health and illness. This may involve conducting in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education : An educational institution may conduct qualitative research to understand student experiences and to develop effective teaching strategies. This may involve conducting classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work: A social worker may conduct qualitative research to explore social problems and to develop interventions to address them. This may involve conducting in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology : An anthropologist may conduct qualitative research to understand different cultures and societies. This may involve conducting ethnographic studies and observing and interviewing members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology : A psychologist may conduct qualitative research to understand human behavior and mental processes. This may involve conducting in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy: A government agency or non-profit organization may conduct qualitative research to explore public attitudes and to inform policy decisions. This may involve conducting focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

Purpose of Qualitative Research

The purpose of qualitative research is to explore and understand the subjective experiences, behaviors, and perspectives of individuals or groups in a particular context. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research aims to provide in-depth, descriptive information that can help researchers develop insights and theories about complex social phenomena.

Qualitative research can serve multiple purposes, including:

  • Exploring new or emerging phenomena : Qualitative research can be useful for exploring new or emerging phenomena, such as new technologies or social trends. This type of research can help researchers develop a deeper understanding of these phenomena and identify potential areas for further study.
  • Understanding complex social phenomena : Qualitative research can be useful for exploring complex social phenomena, such as cultural beliefs, social norms, or political processes. This type of research can help researchers develop a more nuanced understanding of these phenomena and identify factors that may influence them.
  • Generating new theories or hypotheses: Qualitative research can be useful for generating new theories or hypotheses about social phenomena. By gathering rich, detailed data about individuals’ experiences and perspectives, researchers can develop insights that may challenge existing theories or lead to new lines of inquiry.
  • Providing context for quantitative data: Qualitative research can be useful for providing context for quantitative data. By gathering qualitative data alongside quantitative data, researchers can develop a more complete understanding of complex social phenomena and identify potential explanations for quantitative findings.

When to use Qualitative Research

Here are some situations where qualitative research may be appropriate:

  • Exploring a new area: If little is known about a particular topic, qualitative research can help to identify key issues, generate hypotheses, and develop new theories.
  • Understanding complex phenomena: Qualitative research can be used to investigate complex social, cultural, or organizational phenomena that are difficult to measure quantitatively.
  • Investigating subjective experiences: Qualitative research is particularly useful for investigating the subjective experiences of individuals or groups, such as their attitudes, beliefs, values, or emotions.
  • Conducting formative research: Qualitative research can be used in the early stages of a research project to develop research questions, identify potential research participants, and refine research methods.
  • Evaluating interventions or programs: Qualitative research can be used to evaluate the effectiveness of interventions or programs by collecting data on participants’ experiences, attitudes, and behaviors.

Characteristics of Qualitative Research

Qualitative research is characterized by several key features, including:

  • Focus on subjective experience: Qualitative research is concerned with understanding the subjective experiences, beliefs, and perspectives of individuals or groups in a particular context. Researchers aim to explore the meanings that people attach to their experiences and to understand the social and cultural factors that shape these meanings.
  • Use of open-ended questions: Qualitative research relies on open-ended questions that allow participants to provide detailed, in-depth responses. Researchers seek to elicit rich, descriptive data that can provide insights into participants’ experiences and perspectives.
  • Sampling based on purpose and diversity: Qualitative research often involves purposive sampling, in which participants are selected based on specific criteria related to the research question. Researchers may also seek to include participants with diverse experiences and perspectives to capture a range of viewpoints.
  • Data collection through multiple methods: Qualitative research typically involves the use of multiple data collection methods, such as in-depth interviews, focus groups, and observation. This allows researchers to gather rich, detailed data from multiple sources, which can provide a more complete picture of participants’ experiences and perspectives.
  • Inductive data analysis: Qualitative research relies on inductive data analysis, in which researchers develop theories and insights based on the data rather than testing pre-existing hypotheses. Researchers use coding and thematic analysis to identify patterns and themes in the data and to develop theories and explanations based on these patterns.
  • Emphasis on researcher reflexivity: Qualitative research recognizes the importance of the researcher’s role in shaping the research process and outcomes. Researchers are encouraged to reflect on their own biases and assumptions and to be transparent about their role in the research process.

Advantages of Qualitative Research

Qualitative research offers several advantages over other research methods, including:

  • Depth and detail: Qualitative research allows researchers to gather rich, detailed data that provides a deeper understanding of complex social phenomena. Through in-depth interviews, focus groups, and observation, researchers can gather detailed information about participants’ experiences and perspectives that may be missed by other research methods.
  • Flexibility : Qualitative research is a flexible approach that allows researchers to adapt their methods to the research question and context. Researchers can adjust their research methods in real-time to gather more information or explore unexpected findings.
  • Contextual understanding: Qualitative research is well-suited to exploring the social and cultural context in which individuals or groups are situated. Researchers can gather information about cultural norms, social structures, and historical events that may influence participants’ experiences and perspectives.
  • Participant perspective : Qualitative research prioritizes the perspective of participants, allowing researchers to explore subjective experiences and understand the meanings that participants attach to their experiences.
  • Theory development: Qualitative research can contribute to the development of new theories and insights about complex social phenomena. By gathering rich, detailed data and using inductive data analysis, researchers can develop new theories and explanations that may challenge existing understandings.
  • Validity : Qualitative research can offer high validity by using multiple data collection methods, purposive and diverse sampling, and researcher reflexivity. This can help ensure that findings are credible and trustworthy.

Limitations of Qualitative Research

Qualitative research also has some limitations, including:

  • Subjectivity: Qualitative research relies on the subjective interpretation of researchers, which can introduce bias into the research process. The researcher’s perspective, beliefs, and experiences can influence the way data is collected, analyzed, and interpreted.
  • Limited generalizability: Qualitative research typically involves small, purposive samples that may not be representative of larger populations. This limits the generalizability of findings to other contexts or populations.
  • Time-consuming: Qualitative research can be a time-consuming process, requiring significant resources for data collection, analysis, and interpretation.
  • Resource-intensive: Qualitative research may require more resources than other research methods, including specialized training for researchers, specialized software for data analysis, and transcription services.
  • Limited reliability: Qualitative research may be less reliable than quantitative research, as it relies on the subjective interpretation of researchers. This can make it difficult to replicate findings or compare results across different studies.
  • Ethics and confidentiality: Qualitative research involves collecting sensitive information from participants, which raises ethical concerns about confidentiality and informed consent. Researchers must take care to protect the privacy and confidentiality of participants and obtain informed consent.


About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Qualitative Design Research Methods

  • Michael Domínguez, San Diego State University
  • https://doi.org/10.1093/acrefore/9780190264093.013.170
  • Published online: 19 December 2017

Emerging in the learning sciences field in the early 1990s, qualitative design-based research (DBR) is a relatively new methodological approach to social science and education research. As its name implies, DBR is focused on the design of educational innovations, and the testing of these innovations in the complex and interconnected venue of naturalistic settings. As such, DBR is an explicitly interventionist approach to conducting research, situating the researcher as a part of the complex ecology in which learning and educational innovation take place.

With this in mind, DBR is distinct from more traditional methodologies, including laboratory experiments, ethnographic research, and large-scale implementation. The goal of DBR is not to prove the merits of any particular intervention, nor to reflect passively on a context in which learning occurs, but to examine the practical application of theories of learning themselves in specific, situated contexts. By designing purposeful, naturalistic, and sustainable educational ecologies, researchers can test, extend, or modify their theories and innovations based on their pragmatic viability. This process offers the prospect of generating theory-developing, contextualized knowledge claims that can complement the claims produced by other forms of research.

Because of this interventionist, naturalistic stance, DBR has also been the subject of ongoing debate concerning the rigor of its methodology. In many ways, these debates obscure the varied ways DBR has been practiced, the varied types of questions being asked, and the theoretical breadth of researchers who practice DBR. With this in mind, DBR research may involve a diverse range of methods as researchers from a variety of intellectual traditions within the learning sciences and education research design pragmatic innovations based on their theories of learning, and document these complex ecologies using the methodologies and tools most applicable to their questions, focuses, and academic communities.

DBR has gained increasing interest in recent years. While it remains a popular methodology for developmental and cognitive learning scientists seeking to explore theory in naturalistic settings, it has also grown in importance to cultural psychology and cultural studies researchers as a methodological approach that aligns in important ways with the participatory commitments of liberatory research. As such, internal tension within the DBR field has also emerged. Yet, though approaches vary, and have distinct genealogies and commitments, DBR might be seen as the broad methodological genre in which Change Laboratory, design-based implementation research (DBIR), social design-based experiments (SDBE), participatory design research (PDR), and research-practice partnerships might be categorized. These critically oriented iterations of DBR have important implications for educational research and educational innovation in historically marginalized settings and the Global South.

  • design-based research
  • learning sciences
  • social-design experiment
  • qualitative research
  • research methods

Educational research, perhaps more than many other disciplines, is a situated field of study. Learning happens around us every day, at all times, in both formal and informal settings. Our worlds are replete with complex, dynamic, diverse communities, contexts, and institutions, many of which are actively seeking guidance and support in the endless quest for educational innovation. Educational researchers—as a source of potential expertise—are necessarily implicated in this complexity, linked to the communities and institutions through their very presence in spaces of learning, poised to contribute with possible solutions, yet often positioned as separate from the activities they observe, creating dilemmas of responsibility and engagement.

So what are educational scholars and researchers to do? These tensions invite a unique methodological challenge for the contextually invested researcher, begging them to not just produce knowledge about learning, but to participate in the ecology, collaborating on innovations in the complex contexts in which learning is taking place. In short, for many educational researchers, our backgrounds as educators, our connections to community partners, and our sociopolitical commitments to the process of educational innovation push us to ensure that our work is generative, and that our theories and ideas—our expertise—about learning and education are made pragmatic, actionable, and sustainable. We want to test what we know outside of laboratories, designing, supporting, and guiding educational innovation to see if our theories of learning are accurate, and useful to the challenges faced in schools and communities where learning is messy, collaborative, and contested. Through such a process, we learn, and can modify our theories to better serve the real needs of communities. It is from this impulse that qualitative design-based research (DBR) emerged as a new methodological paradigm for education research.

Qualitative design-based research will be examined, documenting its origins, the major tenets of the genre, implementation considerations, and methodological issues, as well as variance within the paradigm. As a relatively new methodology, much tension remains over what constitutes DBR, what design should mean, and for whom. These tensions and questions, as well as broad perspectives and emergent iterations of the methodology, will be discussed, along with considerations for researchers looking toward the future of this paradigm.

The Origins of Design-Based Research

Qualitative design-based research (DBR) first emerged in the learning sciences field among a group of scholars in the early 1990s, with the first articulation of DBR as a distinct methodological construct appearing in the work of Ann Brown (1992) and Allan Collins (1992). For learning scientists in the 1970s and 1980s, the traditional methodologies of laboratory experiments, ethnographies, and large-scale educational interventions were the only methods available. During these decades, a growing community of learning science and educational researchers (e.g., Bereiter & Scardamalia, 1989; Brown, Campione, Webber, & McGilley, 1992; Cobb & Steffe, 1983; Cole, 1995; Scardamalia & Bereiter, 1991; Schoenfeld, 1982, 1985; Scribner & Cole, 1978) interested in educational innovation and classroom interventions in situated contexts began to find the prevailing methodologies insufficient for the types of learning they wished to document, the roles they wished to play in research, and the kinds of knowledge claims they wished to explore. Laboratory and laboratory-like settings, where research on learning was happening at the time, were divorced from the complexity of real life and necessarily limiting. Alternatively, most ethnographic research, while more attuned to capturing these complexities and dynamics, regularly assumed a passive stance 1 and avoided interceding in the learning process, and so did not allow researchers to see what possibilities for innovation might exist from enacting nascent learning theories. Finally, large-scale interventions could test innovations in practice but lost sight of the nuance of development and implementation in local contexts (Brown, 1992; Collins, Joseph, & Bielaczyc, 2004).

Dissatisfied with these options, and recognizing that new methods were required in order to study and understand learning in the messiness of socially, culturally, and historically situated settings, Brown (1992) proposed an alternative: Why not involve ourselves in the messiness of the process, taking an active, grounded role in disseminating our theories and expertise by becoming designers and implementers of educational innovations? Rather than observing from afar, DBR researchers could trace their own iterative processes of design, implementation, tinkering, redesign, and evaluation, as they unfolded in shared work with teachers, students, learners, and other partners in lived contexts. This premise, initially articulated as “design experiments” (Brown, 1992), would be variously discussed over the next decade as “design research” (Edelson, 2002), “developmental research” (Gravemeijer, 1994), and “design-based research” (Design-Based Research Collective, 2003), all of which reflect the original, interventionist, design-oriented concept. The latter term, “design-based research” (DBR), is used here, recognizing this as the prevailing terminology used to refer to this research approach at present. 2

Regardless of the evolving moniker, the prospects of such a methodology were extremely attractive to researchers. Learning scientists acutely aware of various aspects of situated context, and interested in studying the applied outcomes of learning theories—a task of inquiry into situated learning for which canonical methods were rather insufficient—found DBR a welcome development (Bell, 2004 ). As Barab and Squire ( 2004 ) explain: “learning scientists . . . found that they must develop technological tools, curriculum, and especially theories that help them systematically understand and predict how learning occurs” (p. 2), and DBR methodologies allowed them to do this in proactive, hands-on ways. Thus, rather than emerging as a strict alternative to more traditional methodologies, DBR was proposed to fill a niche that other methodologies were ill-equipped to cover.

Effectively, while its development is indeed linked to an inherent critique of previous research paradigms, neither Brown nor Collins saw DBR in opposition to other forms of research. Rather, by providing a bridge from the laboratory to the real world, where learning theories and proposed innovations could interact and be implemented in the complexity of lived socio-ecological contexts (Hoadley, 2004 ), new possibilities emerged. Learning researchers might “trace the evolution of learning in complex, messy classrooms and schools, test and build theories of teaching and learning, and produce instructional tools that survive the challenges of everyday practice” (Shavelson, Phillips, Towne, & Feuer, 2003 , p. 25). Thus, DBR could complement the findings of laboratory, ethnographic, and large-scale studies, answering important questions about the implementation, sustainability, limitations, and usefulness of theories, interventions, and learning when introduced as innovative designs into situated contexts of learning. Moreover, while studies involving these traditional methodologies often concluded by pointing toward implications—insights subsequent studies would need to take up—DBR allowed researchers to address implications iteratively and directly. No subsequent research was necessary, as emerging implications could be reflexively explored in the context of the initial design, offering considerable insight into how research is translated into theory and practice.

Since its emergence in 1992 , DBR as a methodological approach to educational and learning research has quickly grown and evolved, used by researchers from a variety of intellectual traditions in the learning sciences, including developmental and cognitive psychology (e.g., Brown & Campione, 1996 , 1998 ; diSessa & Minstrell, 1998 ), cultural psychology (e.g., Cole, 1996 , 2007 ; Newman, Griffin, & Cole, 1989 ; Gutiérrez, Bien, Selland, & Pierce, 2011 ), cultural anthropology (e.g., Barab, Kinster, Moore, Cunningham, & the ILF Design Team, 2001 ; Polman, 2000 ; Stevens, 2000 ; Suchman, 1995 ), and cultural-historical activity theory (e.g., Engeström, 2011 ; Espinoza, 2009 ; Espinoza & Vossoughi, 2014 ; Gutiérrez, 2008 ; Sannino, 2011 ). Given this plurality of epistemological and theoretical fields that employ DBR, it might best be understood as a broad methodology of educational research, realized in many different, contested, heterogeneous, and distinct iterations, and engaging a variety of qualitative tools and methods (Bell, 2004 ). Despite tensions among these iterations, and substantial and important variances in the ways they employ design-as-research in community settings, there are several common, methodological threads that unite the broad array of research that might be classified as DBR under a shared, though pluralistic, paradigmatic umbrella.

The Tenets of Design-Based Research

Why design-based research.

As we turn to the core tenets of the design-based research (DBR) paradigm, it is worth considering an obvious question: Why use DBR as a methodology for educational research? To answer this, it is helpful to reflect on the original intentions for DBR, particularly, that it is not simply the study of a particular, isolated intervention. Rather, DBR methodologies were conceived of as the complete, iterative process of designing, modifying, and assessing the impact of an educational innovation in a contextual, situated learning environment (Barab & Kirshner, 2001 ; Brown, 1992 ; Cole & Engeström, 2007 ). The design process itself—inclusive of the theory of learning employed, the relationships among participants, contextual factors and constraints, the pedagogical approach, any particular intervention, as well as any changes made to various aspects of this broad design as it proceeds—is what is under study.

Considering this, DBR offers a compelling framework for the researcher interested in having an active and collaborative hand in designing for educational innovation, and interested in creating knowledge about how particular theories of learning, pedagogical or learning practices, or social arrangements function in a context of learning. It is a methodology that can put the researcher in the position of engineer , actively experimenting with aspects of learning and sociopolitical ecologies to arrive at new knowledge and productive outcomes, as Cobb, Confrey, diSessa, Lehrer, and Schauble ( 2003 ) explain:

Prototypically, design experiments entail both “engineering” particular forms of learning and systematically studying those forms of learning within the context defined by the means of supporting them. This designed context is subject to test and revision, and the successive iterations that result play a role similar to that of systematic variation in experiment. (p. 9)

This being said, how directive the engineering role the researcher takes on varies considerably among iterations of DBR. Indeed, recent approaches have argued strongly for researchers to take on more egalitarian positionalities with respect to the community partners with whom they work (e.g., Zavala, 2016 ), acting as collaborative designers, rather than authoritative engineers.

Method and Methodology in Design-Based Research

Now, having established why we might use DBR, a recurring question that has faced the DBR paradigm is whether DBR is a methodology at all. Given the variety of intellectual and ontological traditions that employ it, and thus the pluralism of methods used in DBR to enact the “engineering” role (whatever shape that may take) that the researcher assumes, it has been argued that DBR is not, in actuality, a methodology at all (Kelly, 2004). The proliferation and diversity of approaches, methods, and types of analysis purporting to be DBR have been described as a lack of coherence that shows there is no “argumentative grammar” or methodology present in DBR (Kelly, 2004).

Now, the conclusions one will eventually draw in this debate will depend on one’s orientations and commitments, but it is useful to note that these demands for “coherence” emerge from previous paradigms in which methodology was largely marked by a shared, coherent toolkit for data collection and data analysis. These previous paradigmatic rules make for an odd fit when considering DBR. Yet, even if we proceed—within the qualitative tradition from which DBR emerges—defining methodology as an approach to research that is shaped by the ontological and epistemological commitments of the particular researcher, and methods as the tools for research, data collection, and analysis that are chosen by the researcher with respect to said commitments (Gutiérrez, Engeström, & Sannino, 2016 ), then a compelling case for DBR as a methodology can be made (Bell, 2004 ).

Effectively, despite the considerable variation in how DBR has been and is employed, and tensions within the DBR field, we might point to considerable, shared epistemic common ground among DBR researchers, all of whom are invested in an approach to research that involves engaging actively and iteratively in the design and exploration of learning theory in situated, natural contexts. This common epistemic ground, even in the face of pluralistic ideologies and choices of methods, invites in a new type of methodological coherence, marked by “intersubjectivity without agreement” (Matusov, 1996), that links traditional developmental and cognitive psychology models of DBR (e.g., Brown, 1992; Brown & Campione, 1998; Collins, 1992) to more recent critical and sociocultural manifestations (e.g., Bang & Vossoughi, 2016; Engeström, 2011; Gutiérrez, 2016), and everything in between.

Put in other terms, even as DBR researchers may choose heterogeneous methods for data collection, data analysis, and reporting results complementary to the ideological and sociopolitical commitments of the particular researcher and the types of research questions that are under examination (Bell, 2004 ), a shared epistemic commitment gives the methodology shape. Indeed, the common commitment toward design innovation emerges clearly across examples of DBR methodological studies ranging in method from ethnographic analyses (Salvador, Bell, & Anderson, 1999 ) to studies of critical discourse within a design (Kärkkäinen, 1999 ), to focused examinations of metacognition of individual learners (White & Frederiksen, 1998 ), and beyond. Rather than indicating a lack of methodology, or methodological weakness, the use of varying qualitative methods for framing data collection and retrospective analyses within DBR, and the tensions within the epistemic common ground itself, simply reflects the scope of its utility. Learning in context is complex, contested, and messy, and the plurality of methods present across DBR allow researchers to dynamically respond to context as needed, employing the tools that fit best to consider the questions that are present, or may arise.

All this being the case, it is useful to look toward the coherent elements—the “argumentative grammar” of DBR, if you will—that can be identified across the varied iterations of DBR. Understanding these shared features, in the context and terms of the methodology itself, helps us to appreciate what is involved in developing robust and thorough DBR research, and how DBR seeks to make strong, meaningful claims around the types of research questions it takes up.

Coherent Features of Design-Based Research

Several scholars have provided comprehensive overviews and listings of what they see as the cross-cutting features of DBR, both in the context of more traditional models of DBR (e.g., Cobb et al., 2003 ; Design-Based Research Collective, 2003 ), and in regards to newer iterations (e.g., Gutiérrez & Jurow, 2016 ; Bang & Vossoughi, 2016 ). Rather than try to offer an overview of each of these increasingly pluralistic classifications, the intent here is to attend to three broad elements that are shared across articulations of DBR and reflect the essential elements that constitute the methodological approach DBR offers to educational researchers.

Design research is concerned with the development, testing, and evolution of learning theory in situated contexts

This first element is perhaps most central to what DBR of all types is, anchored in what Brown ( 1992 ) was initially most interested in: testing the pragmatic validity of theories of learning by designing interventions that engaged with, or proposed, entire, naturalistic, ecologies of learning. Put another way, while DBR studies may have various units of analysis, focuses, and variables, and may organize learning in many different ways, it is the theoretically informed design for educational innovation that is most centrally under evaluation. DBR actively and centrally exists as a paradigm that is engaged in the development of theory, not just the evaluation of aspects of its usage (Bell, 2004 ; Design-Based Research Collective, 2003 ; Lesh & Kelly, 2000 ; van den Akker, 1999 ).

Effectively, where DBR is taking place, theory as a lived possibility is under examination. Specifically, in most DBR, this means a focus on “intermediate-level” theories of learning, rather than “grand” ones. In essence, DBR does not contend directly with “grand” learning theories (such as developmental or sociocultural theory writ large) (diSessa, 1991 ). Rather, DBR seeks to offer constructive insights by directly engaging with particular learning processes that flow from these theories on a “grounded,” “intermediate” level. This is not, however, to say DBR is limited in what knowledge it can produce; rather, tinkering in this “intermediate” realm can produce knowledge that informs the “grand” theory (Gravemeijer, 1994 ). For example, while cognitive and motivational psychology provide “grand” theoretical frames, interest-driven learning (IDL) is an “intermediate” theory that flows from these and can be explored in DBR to both inform the development of IDL designs in practice and inform cognitive and motivational psychology more broadly (Joseph, 2004 ).

Crucially, however, DBR entails putting the theory in question under intense scrutiny, or, “into harm’s way” (Cobb et al., 2003 ). This is an especially core element to DBR, and one that distinguishes it from the proliferation of educational-reform or educational-entrepreneurship efforts that similarly take up the discourse of “design” and “innovation.” Not only is the reflexive, often participatory element of DBR absent from such efforts—that is, questioning and modifying the design to suit the learning needs of the context and partners—but the theory driving these efforts is never in question, and in many cases, may be actively obscured. Indeed, it is more common to see educational-entrepreneur design innovations seek to modify a context—such as the way charter schools engage in selective pupil recruitment and intensive disciplinary practices (e.g., Carnoy et al., 2005 ; Ravitch, 2010 ; Saltman, 2007 )—rather than modify their design itself, and thus allow for humility in their theory. Such “innovations” and “design” efforts are distinct from DBR, which must, in the spirit of scientific inquiry, be willing to see the learning theory flail and struggle, be modified, and evolve.

This growth and evolution of theory and knowledge is of course central to DBR as a rigorous research paradigm; moving it beyond simply the design of local educational programs, interventions, or innovations. As Barab and Squire ( 2004 ) explain:

Design-based research requires more than simply showing a particular design works but demands that the researcher (move beyond a particular design exemplar to) generate evidence-based claims about learning that address contemporary theoretical issues and further the theoretical knowledge of the field. (pp. 5–6)

DBR as a research paradigm offers a design process through which theories of learning can be tested and modified; by allowing them to operate with humility in situated conditions, new insights and knowledge, even new theories, may emerge that might inform the field, as well as the efforts and directions of other types of research inquiry. These productive, theory-developing outcomes, or “ontological innovations” (diSessa & Cobb, 2004), represent the culmination of an effective program of DBR—the production of new ways to understand, conceptualize, and enact learning as a lived, contextual process.

Design research works to understand learning processes, and the design that supports them in situated contexts

As a research methodology that operates by tinkering with “grounded” learning theories, DBR is itself grounded, and seeks to develop its knowledge claims and designs in naturalistic, situated contexts (Brown, 1992 ). This is, again, a distinguishing element of DBR—setting it apart from laboratory research efforts involving design and interventions in closed, controlled environments. Rather than attempting to focus on singular variables, and isolate these from others, DBR is concerned with the multitude of variables that naturally occur across entire learning ecologies, and present themselves in distinct ways across multiple planes of possible examination (Rogoff, 1995 ; Collins, Joseph, & Bielaczyc, 2004 ). Certainly, specific variables may be identified as dependent, focal units of analysis, but identifying (while not controlling for) the variables beyond these, and analyzing their impact on the design and learning outcomes, is an equally important process in DBR (Collins et al., 2004 ; Barab & Kirshner, 2001 ). In practice, this of course varies across iterations in its depth and breadth. Traditional models of developmental or cognitive DBR may look to account for the complexity and nuance of a setting’s social, developmental, institutional, and intellectual characteristics (e.g., Brown, 1992 ; Cobb et al., 2003 ), while more recent, critical iterations will give increased attention to how historicity, power, intersubjectivity, and culture, among other things, influence and shape a setting, and the learning that occurs within it (e.g., Gutiérrez, 2016 ; Vakil, de Royston, Nasir, & Kirshner, 2016 ).

Beyond these variations, what counts as “design” in DBR varies widely, and so too will what counts as a naturalistic setting. It has been well documented that learning occurs all the time, every day, and in every space imaginable, both formal and informal (Leander, Phillips, & Taylor, 2010 ), and in ways that span strictly defined setting boundaries (Engeström, Engeström, & Kärkkäinen, 1995 ). DBR may take place in any number of contexts, based on the types of questions asked, and the learning theories and processes that a researcher may be interested in exploring. DBR may involve one-to-one tutoring and learning settings, single classrooms, community spaces, entire institutions, or even holistically designed ecologies (Design-Based Research Collective, 2003 ; Engeström, 2008 ; Virkkunen & Newnham, 2013 ). In all these cases, even the most completely designed experimental ecology, the setting remains naturalistic and situated because DBR actively embraces the uncontrollable variables that participants bring with them to the learning process for and from their situated worlds, lives, and experiences—no effort is made to control for these complicated influences of life, simply to understand how they operate in a given ecology as innovation is attempted. Thus, the extent of the design reflects a broader range of qualitative and theoretical study, rather than an attempt to control or isolate some particular learning process from outside influence.

While there is much variety in what design may entail, where DBR takes place, what types of learning ecologies are under examination, and what methods are used, situated ecologies are always the setting of this work. In this way, conscious of naturalistic variables, and the influences that culture, historicity, participation, and context have on learning, researchers can use DBR to build on prior research, and extend knowledge around the learning that occurs in the complexity of situated contexts and lived practices (Collins et al., 2004 ).

Design-based research is iterative; it changes, grows, and evolves to meet the needs and emergent questions of the context, and this tinkering process is part of the research

The final shared element undergirding models of DBR is that it is an iterative, active, and interventionist process, interested in and focused on producing educational innovation by actually and actively putting design innovations into practice (Brown, 1992; Collins, 1992; Gutiérrez, 2008). Given this interventionist, active stance, tinkering with the design and the theory of learning informing the design is as much a part of the research process as the outcome of the intervention or innovation itself—we learn what impacts learning as much as, if not more than, what was learned. In this sense, DBR involves a focus on analyzing the theory-driven design itself, and its implementation as an object of study (Edelson, 2002; Penuel, Fishman, Cheng, & Sabelli, 2011), and is ultimately interested in the improvement of the design—of how it unfolds, how it shifts, how it is modified, and made to function productively for participants in their contexts and given their needs (Kirshner & Polman, 2013).

While DBR is iterative and contextual as a foundational methodological principle, what this means varies across conceptions of DBR. For instance, in more traditional models, Brown and Campione ( 1996 ) pointed out the dangers of “lethal mutation” in which a design, introduced into a context, may become so warped by the influence, pressures, incomplete implementation, or misunderstanding of participants in the local context, that it no longer reflects or tests the theory under study. In short, a theory-driven intervention may be put in place, and then subsumed to such a degree by participants based on their understanding and needs, that it remains the original innovative design in name alone. The assertion here is that in these cases, the research ceases to be DBR in the sense that the design is no longer central, actively shaping learning. We cannot, they argue, analyze a design—and the theory it was meant to reflect—as an object of study when it has been “mutated,” and it is merely a banner under which participants are enacting their idiosyncratic, pragmatic needs.

While the potential for settings and individuals to disrupt designs intended to produce robust learning is certainly a tension to be cautious of in DBR, it is also worth noting that in many critical approaches to DBR, such mutations—whether “lethal” to the original design or not—are seen as compelling and important moments. Here, where collaboration and community input are more central to the design process, what counts as iterative is understood differently. Thus, a “mutation” becomes a point where reflexivity, tension, and contradiction might open the door for change, for new designs, for reconsiderations of researcher and collaborative partner positionalities, or for ethnographic exploration into how a context takes up, shapes, and ultimately engages innovations in a particular sociocultural setting. In short, accounting for and documenting changes in design is a vital part of the DBR process, allowing researchers to respond to context in a variety of ways, always striving for their theories and designs to act with humility, and in the interest of usefulness.

With this in mind, the iterative nature of DBR means that the relationships researchers have with other design partners (educators and learners) in the ecology are incredibly important, and vital to consider (Bang et al., 2016 ; Engeström, 2007 ; Engeström, Sannino, & Virkkunen, 2014 ). Different iterations of DBR might occur in ways in which the researcher is more or less intimately involved in the design and implementation process, both in terms of actual presence and intellectual ownership of the design. Regarding the former, in some cases, a researcher may hand a design off to others to implement, periodically studying and modifying it, while in other contexts or designs, the researcher may be actively involved, tinkering in every detail of the implementation and enactment of the design. With regard to the latter, DBR might similarly range from a somewhat prescribed model, in which the researcher is responsible for the original design, and any modifications that may occur based on their analyses, without significant input from participants (e.g., Collins et al., 2004 ), to incredibly participatory models, in which all parties (researchers, educators, learners) are part of each step of the design-creation, modification, and research process (e.g., Bang, Faber, Gurneau, Marin, & Soto, 2016 ; Kirshner, 2015 ).

Considering the wide range of ideological approaches and models for DBR, we might acknowledge that DBR can be gainfully conducted through many iterations of “openness” to the design process. However, the strength of the research—focused on analyzing the design itself as a unit of study reflective of learning theory—will be bolstered by thoughtfully accounting for how involved the researcher will be, and how open to participation the modification process is. These answers should match the types of questions, and conceptual or ideological framing, with which researchers approach DBR, allowing them to tinker with the process of learning as they build on prior research to extend knowledge and test theory (Barab & Kirshner, 2001 ), while thoughtfully documenting these changes in the design as they go.

Implementation and Research Design

As with the overarching principles of design-based research (DBR), even amid the pluralism of conceptual frameworks of DBR researchers, it is possible, and useful, to trace the shared contours in how DBR research design is implemented. Though texts provide particular road maps for undertaking various iterations of DBR consistent with the specific goals, types of questions, and ideological orientations of these scholarly communities (e.g., Cole & Engeström, 2007 ; Collins, Joseph, & Bielaczyc, 2004 ; Fishman, Penuel, Allen, Cheng, & Sabelli, 2013 ; Gutiérrez & Jurow, 2016 ; Virkkunen & Newnham, 2013 ), certain elements, realized differently, can be found across all of these models, and may be encapsulated in five broad methodological phases.

Considering the Design Focus

DBR begins by considering what the focus of the design, the situated context, and the units of analysis for research will be. Prospective DBR researchers will need to survey broader research on the “grand” theory of learning with which they work, determine what theoretical questions they have or identify “intermediate” aspects of the theories that might be studied and strengthened by a design process in situated contexts, and decide what planes of analysis (Rogoff, 1995 ) will be most suitable for examination. This process allows for the identification of the critical theoretical elements of a design, and articulation of initial research questions.

Given the conceptual framework, theoretical and research questions, and sociopolitical interests at play, researchers may undertake this, and subsequent steps in the process, on their own, or in close collaboration with the communities and individuals in the situated contexts in which the design will unfold. As such, across iterations of DBR, and with respect to the ways DBR researchers choose to engage with communities, the origin of the design will vary, and might begin in some cases with theoretical questions, or arise in others as a problem of practice (Coburn & Penuel, 2016 ), though as has been noted, in either case, theory and practice are necessarily linked in the research.

Creating and Implementing a Designed Innovation

From the consideration and identification of the critical elements, planned units of analysis, and research questions that will drive a design, researchers can then actively create (either on their own or in conjunction with potential design partners) a designed intervention reflecting these critical elements, and the overarching theory.

Here, the DBR researcher should consider what partners exist in the process and what ownership exists around these partnerships, determine exactly what the pragmatic features of the intervention/design will be and who will be responsible for them, and consider when checkpoints for modification and evaluation will be undertaken, and by whom. Additionally, researchers should at this stage consider questions of timeline and of recruiting participants, as well as what research materials will be needed to adequately document the design, its implementation, and its outcomes, and how and where collected data will be stored.

Once a design (the planned, theory-informed innovative intervention) has been produced, the DBR researcher and partners can begin the implementation process, putting the design into place and beginning data collection and documentation.

Assessing the Impact of the Design on the Learning Ecology

Chronologically, the next two methodological steps happen recursively in the iterative process of DBR. The researcher must assess the impact of the design, and then, make modifications as necessary, before continuing to assess the impact of these modifications. In short, these next two steps are a cycle that continues across the life and length of the research design.

Once a design has been created and implemented, the researcher begins to observe and document the learning, the ecology, and the design itself. Guided by and in conversation with the theory and critical elements, the researcher should periodically engage in ongoing data analysis, assessing the success of the design, and of learning, paying equal attention to the design itself, and how its implementation is working in the situated ecology.

Within the realm of qualitative research, measuring or assessing variables of learning and assessing the design may look vastly different, require vastly different data-collection and data-analysis tools, and involve vastly different research methods among different researchers.

Modifying the Design

Modification, based on ongoing assessment of the design, is what makes DBR iterative, helping the researcher extend the field’s knowledge about the theory, design, learning, and the context under examination.

Modification of the design can take many forms, from complete changes in approach or curriculum, to introducing an additional tool or mediating artifact into a learning ecology. Moreover, how modification unfolds involves careful reflection from the researcher and any co-designing participants, deciding whether modification will be an ongoing, reflexive, tinkering process, or if it will occur only at predefined checkpoints, after formal evaluation and assessment. Questions of ownership, issues of resource availability, technical support, feasibility, and communication are all central to the work of design modification, and answers will vary given the research questions, design parameters, and researchers’ epistemic commitments.

Each moment of modification indicates a new phase in a DBR project, and a new round of assessing—through data analysis—the impact of the design on the learning ecology, either to guide continued or further modification, report the results of the design, or in some cases, both.

Reporting the Results of the Design

The final step in DBR methodology is to report on the results of the designed intervention, how it contributed to understandings of theory, and how it impacted the local learning ecology or context. The format, genre, and final data analysis methods used in reporting data and research results will vary across iterations of DBR. However, it is largely understood that to avoid methodological confusion, DBR researchers should situate themselves clearly in the DBR paradigm by describing and detailing the design itself; articulating the theory, central elements, and units of analysis under scrutiny, what modifications occurred and what precipitated these changes, and what local effects were observed; and exploring any potential contributions to learning theory, while accounting for the context and their interventionist role and positionality in the design. As such, careful documentation of pragmatic and design decisions for retrospective data analysis, as well as research findings, should be done at each stage of this implementation process.

Methodological Issues in the Design-Based Research Paradigm

Because of its pluralistic nature, its interventionist, nontraditional stance, and the fact that it remains in its conceptual infancy, design-based research (DBR) is replete with ongoing methodological questions and challenges, both external and internal. While many more exist, this section addresses several of the most pressing issues a prospective DBR researcher may encounter, or want to consider, in understanding the paradigm and beginning a research design.

Challenges to Rigor and Validity

Perhaps the place to begin this reflection on tensions in the DBR paradigm is the recurrent and ongoing challenge to the rigor and validity of DBR, which has asked: Is DBR research at all? Given the interventionist and activist way in which DBR invites the researcher to participate, and the shift in orientation from long-accepted research paradigms, such critiques are hardly surprising, and fall in line with broader challenges to the rigor and objectivity of qualitative social science research in general. Historically, such complaints about DBR are linked to decades of critique of any research that does not adhere to the post-positivist approach set out as the U.S. Department of Education began to prioritize laboratory and large-scale randomized control-trial experimentation as the “gold standard” of research design (e.g., Mosteller & Boruch, 2002 ).

From the outset, DBR, as an interventionist, local, situated, non-laboratory methodology, was bound to run afoul of such conservative trends. While some researchers involved in (particularly traditional developmental and cognitive) DBR have found broader acceptance within these constraints, the rigor of DBR remains contested. It has been suggested that DBR is under-theorized and over-methodologized, a haphazard way for researchers to do activist work without engaging in the development of robust knowledge claims about learning (Dede, 2004 ), and an approach lacking in coherence that has sheltered interventionist projects of little impact on the development of learning theory and allowed researchers to make subjective, pet claims through selective analysis of large bodies of collected data (Kelly, 2003 , 2004 ).

These critiques, however, impose an external set of criteria on DBR, desiring it to fit into the molds of rigor and coherence as defined by canonical methodologies. Bell ( 2004 ) and Bang and Vossoughi ( 2016 ) have made compelling cases for the wide variety of methods and approaches present in DBR not as a fracturing, but as a generative proliferation of different iterations that can offer powerful insights around the different types of questions that exist about learning in the infinitely diverse settings in which it occurs. Essentially, researchers have argued that within the DBR paradigm, and indeed within educational research more generally, the practical impact of research on learning, context, and practices should be a necessary component of rigor (Gutiérrez & Penuel, 2014 ), and the pluralism of methods and approaches available in DBR ensures that the practical impacts and needs of the varied contexts in which the research takes place will always drive the design and research tools.

These moves are emblematic of the way in which DBR is innovating and pushing on paradigms of rigor in educational research altogether, reflecting how DBR fills a complementary niche with respect to other methodologies and attends to elements and challenges of learning in lived, real environments that other types of research have consistently and historically missed. Beyond this, Brown ( 1992 ) was conscious of the concerns around data collection, validity, rigor, and objectivity from the outset, identifying this dilemma—the likelihood that an incredible amount of data will be collected in a design, only a small fraction of which can be reported and shared, potentially leading to selective data analysis and use—as the Bartlett Effect (Brown, 1992 ). Since that time, DBR researchers have been aware of this challenge, actively seeking ways to mitigate this threat to validity by making data sets broadly available, documenting their design, tinkering, and modification processes, clearly situating and describing disconfirming evidence and their own position in the research, and otherwise presenting the broad scope of human and learning activity that occurs within designs in large learning ecologies as comprehensively as possible.

Ultimately, however, these responses are likely to always be insufficient as evidence of rigor to some, for the root dilemma is around what “counts” as education science. While researchers interested and engaged in DBR ought rightly to continue to push themselves to ensure the methodological rigor of their work and chosen methods, it is also worth noting that DBR should seek to hold itself to its own criteria of assessment. This reflects broader trends in qualitative educational research that push back on narrow constructions of what “counts” as science, recognizing the ways in which new methodologies and approaches to research can help us examine aspects of learning, culture, and equity that have continued to be blind spots for traditional education research; invite new voices and perspectives into the process of achieving rigor and validity (Erickson & Gutiérrez, 2002 ); bolster objectivity by bringing it into conversation with the positionality of the researcher (Harding, 1993 ); and perhaps most important, engage in axiological innovation (Bang, Faber, Gurneau, Marin, & Soto, 2016 ), or the exploration of and design for what is “good, right, true, and beautiful . . . in cultural ecologies” (p. 2).

Questions of Generalizability and Usefulness

The generalizability of research results in DBR has been an ongoing and contentious issue in the development of the paradigm. Indeed, by the standards of canonical methods (e.g., laboratory experimentation, ethnography), these local, situated interventions should lack generalizability. While there is reason to discuss and question the merit of generalizability as a goal of qualitative research at all, researchers in the DBR paradigm have long been conscious of this issue. Understanding the question of generalizability around DBR, and how the paradigm has responded to it, can be done in two ways.

First, one can distinguish questions specific to a particular design from questions about the generalizability of the theory. Cole’s (Cole & Underwood, 2013 ) 5th Dimension work, with its nationwide network of linked, theoretically similar sites operating with vastly different designs, is a powerful example of this approach to generalizability. Rather than focus on a single, unitary, potentially generalizable design, the project is more interested in the variability and sustainability of designs across local contexts (e.g., Cole, 1995 ; Gutiérrez, Bien, Selland, & Pierce, 2011 ; Jurow, Tracy, Hotchkiss, & Kirshner, 2012 ). Through attention to sustainable, locally effective innovations, conscious of the wide variation in culture and context that accompanies any and all learning processes, 5th Dimension sites each derive their idiosyncratic structures from sociocultural theory, sharing some elements, but varying others, while seeking their own “ontological innovations” based on the affordances of their contexts. This pattern reflects a key element of much of the DBR paradigm: questions of generalizability in DBR may be about the generalizability of the theory of learning, and the variability of learning and design in distinct contexts, rather than about the particular design itself.

A second means of addressing generalizability in DBR has been to embrace the pragmatic impacts of designing innovations. This response stems from Messick’s ( 1992 ) and Schoenfeld’s ( 1992 ) arguments, early in the development of DBR, that the consequentialness and validity of DBR efforts as potentially generalizable research depend on the “usefulness” of the theories and designs that emerge. Effectively, because DBR is the examination of situated theory, a design must be able to show pragmatic impact—it must succeed at showing the theory to be useful. If there is evidence of usefulness to both the context in which it takes place, and the field of educational research more broadly, then the DBR researcher can stake some broader knowledge claims that might be generalizable. As a result, the DBR paradigm tends to “treat changes in [local] contexts as necessary evidence for the viability of a theory” (Barab & Squire, 2004 , p. 6). This of course does not mean that DBR is only interested in successful efforts. A design that fails or struggles can provide important information and knowledge to the field. Ultimately, though, DBR tends to privilege work that proves the usefulness of designs, whose pragmatic or theoretical findings can then be generalized within the learning science and education research fields.

With this said, the question of usefulness is not always straightforward, and is hardly unitary. While many DBR efforts—particularly those situated in developmental and cognitive learning science traditions—are interested in the generalizability of their useful educational designs (Barab & Squire, 2004 ; Cobb, Confrey, diSessa, Lehrer, & Schauble, 2003 ; Joseph, 2004 ; Steffe & Thompson, 2000 ), not all are. Critical DBR researchers have noted that if usefulness remains situated in the extant sociopolitical and sociocultural power structures—dominant conceptual and popular definitions of what useful educational outcomes are—the result will be a bar for research merit that inexorably bends toward the positivist spectrum (Booker & Goldman, 2016 ; Dominguez, 2015 ; Zavala, 2016 ). This could, and likely would, result in excluding the non-normative interventions and innovations that are vital for historically marginalized communities, but which might have vastly different-looking outcomes that are nonetheless useful in the sociopolitical contexts in which they occur. Alternative framings push on and extend this idea of usefulness, seeking to involve the perspectives and agency of situated community partners and their practices in what “counts” as generative and rigorous research outcomes (Gutiérrez & Penuel, 2014 ). An example in this regard is the idea of consequential knowledge (Hall & Jurow, 2015 ; Jurow & Shea, 2015 ), which suggests that outcomes that are consequential will be taken up by participants in and across their networks, and over time—thus a goal of consequential knowledge certainly meets the standard of being useful, but it also implicates the needs and agency of communities in determining the success and merit of a design or research endeavor in important ways that strict usefulness may miss.

Thus, the bar of usefulness that characterizes the DBR paradigm should not be approached without critical reflection. Certainly designs that accomplish little for local contexts should be subject to intense questioning and critique, but the sociopolitical and systemic factors that might influence what “counts” as useful, in local contexts and in education science more generally, should be kept firmly in mind when designing, choosing methods, and evaluating impacts (Zavala, 2016 ). Researchers should think deeply about their goals, whether they are reaching for generalizability at all, and in what ways they are constructing contextual definitions of success, and should be clear about these ideologically influenced answers in their work, such that generalizability and the usefulness of designs can be adjudicated based on, and in conversation with, the intentions and conceptual framework of the research and researcher.

Ethical Concerns of Sustainability, Participation, and Telos

While there are many external challenges to the rigor and validity of DBR, another set of tensions comes from within the DBR paradigm itself. These internal critiques are not unrelated to the earlier question of the contested definition of usefulness, but rather than concerns about rigor or validity, they reflect questions of research ethics, growing from ideological concerns with how an intentional, interventionist stance is taken up in research as it interacts with situated communities.

Given that the nature of DBR is to design and implement some form of educational innovation, the DBR researcher will in some way be engaging with an individual or community, becoming part of a situated learning ecology, complete with a sociopolitical and cultural history. As with any research that involves providing an intervention or support, the question of what happens when the research ends is as much an ethical as a methodological one. Concerns then arise given how traditional models of DBR seem intensely focused on creating and implementing a “complete” cycle of design, while giving little attention to what happens to the community and context afterward (Engeström, 2011 ). In contrast to this privileging of “completeness,” sociocultural and critical approaches to DBR have suggested that if research is actually happening in naturalistic, situated contexts that authentically recognize and allow social and cultural dimensions to function (i.e., avoid laboratory-type controls to mitigate independent variables), there can never be such a thing as “complete,” for the design will, and should, live on as part of the ecology of the space (Cole, 2007 ; Engeström, 2000 ). Essentially, these internal critiques push DBR to consider sustainability, and sustainable scale, as concerns equally important as the completeness of an innovation. Not only are ethical questions involved, but accounting for the unbounded and ongoing nature of learning as a social and cultural activity can help strengthen the viability of knowledge claims made, and the degree of generalizability that is reasonably justified.

Related to this question of sustainability are internal concerns regarding the nature and ethics of participation in DBR, whether partners in a design are being adequately invited to engage in the design and modification processes that will unfold in their situated contexts and lived communities (Bang et al., 2016 ; Engeström, 2011 ). DBR has actively sought to examine multiple planes of analysis in learning that might be occurring in a learning ecology but has rarely attended to the subject-subject dynamics (Bang et al., 2016 ), or “relational equity” (DiGiacomo & Gutiérrez, 2015 ) that exists between researchers and participants as a point of focus. Participatory design research (PDR) (Bang & Vossoughi, 2016 ) models have recently emerged as a way to better attend to these important dimensions of collective participation (Engeström, 2007 ), power (Vakil et al., 2016 ), positionality (Kirshner, 2015 ), and relational agency (Edwards, 2007 , 2009 ; Sannino & Engeström, 2016 ) as they unfold in DBR.

Both of these ethical questions—around sustainability and participation—reflect challenges to what we might call the telos —or direction—that DBR takes to innovation and research. These are questions related to whose voices are privileged, in what ways, for what purposes, and toward what ends. While DBR, like many other forms of educational research, has involved work with historically marginalized communities, it has, like many other forms of educational research, not always done so in humanizing ways. Put another way, there are ethical and political questions surrounding whether the designs, goals, and standards of usefulness we apply to DBR efforts should be purposefully activist, and have explicitly liberatory ends. To this point, critical and decolonial perspectives have pushed on the DBR paradigm, suggesting that DBR should situate itself as a space of liberatory innovation and potential, in which communities and participants can become designers and innovators of their own futures (Gutiérrez, 2005 ). This perspective is reflected in the social design experiment (SDE) approach to DBR (Gutiérrez, 2005 , 2008 ; Gutiérrez & Vossoughi, 2010 ; Gutiérrez, 2016 ; Gutiérrez & Jurow, 2016 ), which begins in participatory fashion, engaging a community in identifying its own challenges and desires, and reflecting on the historicity of learning practices, before proleptic design efforts are undertaken that ensure that research is done with, not on, communities of color (Arzubiaga, Artiles, King, & Harris-Murri, 2008 ), and intentionally focused on liberatory goals.

Global Perspectives and Unique Iterations

While design-based research (DBR) has been a methodology principally associated with educational research in the United States, its development is hardly limited to the U.S. context. Rather, while DBR emerged in U.S. settings, similar methods of situated, interventionist research focused on design and innovation were emerging in parallel in European contexts (e.g., Gravemeijer, 1994 ), most significantly in the work of Vygotskian scholars both in Europe and the United States (Cole, 1995 ; Cole & Engeström, 1993 , 2007 ; Engeström, 1987 ).

Particularly, where DBR began in the epistemic and ontological terrain of developmental and cognitive psychology, this vein of design-based research work began deeply grounded in cultural-historical activity theory (CHAT). This ontological and epistemic grounding meant that the approach to design that was taken was more intensively conscious of context, historicity, hybridity, and relational factors, and framed around understanding learning as a complex, collective activity system that, through design, could be modified and transformed (Cole & Engeström, 2007 ). The models of DBR that emerged in this context abroad were the formative intervention (Engeström, 2011 ; Engeström, Sannino, & Virkkunen, 2014 ), which relies heavily on Vygotskian double stimulation to approach learning in nonlinear, unbounded ways, accounting for the role of learner, educator, and researcher in a collective process, shifting and evolving and tinkering with the design as the context needs and demands; and the Change Laboratory (Engeström, 2008 ; Virkkunen & Newnham, 2013 ), which similarly relies on the principle of double stimulation, while presenting a holistic way to approach transforming—or changing—entire learning activity systems in fundamental ways through designs that encourage collective “expansive learning” (Engeström, 2001 ), through which participants can produce wholly new activity systems as the object of learning itself.

Elsewhere in the United States, still parallel to the developmental- or cognitive-oriented DBR work that was occurring, American researchers employing CHAT began to leverage the tools and aims of expansive learning in conversation with the tensions and complexity of the U.S. context (Cole, 1995 ; Gutiérrez, 2005 ; Gutiérrez & Rogoff, 2003 ). Like the CHAT design research of the European context, there was a focus on activity systems, historicity, nonlinear and unbounded learning, and collective learning processes and outcomes. Rather than a simple replication, however, these researchers put further attention on questions of equity, diversity, and justice in this work, as Gutiérrez, Engeström, and Sannino ( 2016 ) note:

The American contribution to a cultural historical activity theoretic perspective has been its attention to diversity, including how we theorize, examine, and represent individuals and their communities. (p. 276)

Effectively, CHAT scholars in parts of the United States brought critical and decolonial perspectives to bear on their design-focused research, focusing explicitly on the complex cultural, racial, and ethnic terrain in which they worked, and ensuring that diversity, equity, justice, and non-dominant perspectives would become central principles to the types of design research conducted. The result was the emergence of the aforementioned social design experiments (e.g., Gutiérrez, 2005 , 2016 ), and participatory design research (Bang & Vossoughi, 2016 ) models, which attend intentionally to historicity and relational equity, tailor their methods to the liberation of historically marginalized communities, aim intentionally for liberatory outcomes as key elements of their design processes, and seek to produce outcomes in which communities of learners become designers of new community futures (Gutiérrez, 2016 ). While these approaches emerged in the United States, their origins reflect ontological and ideological perspectives quite distinct from more traditional learning science models of DBR, and dominant U.S. ontologies in general. Indeed, these iterations of DBR are linked genealogically to the ontologies, ideologies, and concerns of peoples in the Global South, offering some promise for the method in those regions, though DBR has yet to broadly take hold among researchers beyond the United States and Europe.

There is, of course, much more nuance to these models, and each of them (formative interventions, Change Laboratories, social design experiments, and participatory design research) might merit independent exploration and review well beyond the scope here. Indeed, there is some question as to whether all adherents of these CHAT design-based methodologies, with their unique genealogies and histories, would even consider themselves under the umbrella of DBR. Yet, despite significant ontological divergences, these iterations share many of the foundational tenets of the traditional models (though realized differently), and it is reasonable to argue that they do indeed share the same broad methodological paradigm (DBR), or, at the very least, are so intimately related that any discussion of DBR, particularly one with a global view, should consider the contributions CHAT iterations have made to the DBR methodology in the course of their somewhat distinct, but parallel, development.

Possibilities and Potentials for Design-Based Research

Since its emergence in 1992, the DBR methodology for educational research has continued to grow in popularity, ubiquity, and significance. Its use has begun to expand beyond the confines of the learning sciences, and it has been taken up by researchers in a variety of disciplines and across a breadth of theoretical and intellectual traditions. While still not as widely recognized as more traditional and well-established research methodologies, DBR as a methodology for rigorous research is unquestionably here to stay.

With this in mind, the field ought still to be cautious about the ways in which the discourse of design is used. Not all design is DBR, and preserving the integrity, rigor, and research ethics of the paradigm (on its own terms) will continue to require thoughtful reflection as its pluralistic parameters come into clearer focus. Yet the proliferation of methods in the DBR paradigm should be seen as a positive. Far too many theories of learning and ideological perspectives have meaningful contributions to make to our knowledge of the world, communities, and learning to limit ourselves to a unitary approach to DBR or a single set of methods. The paradigm has shown itself to have some core methodological principles, but there is no reason not to expect these to grow, expand, and evolve over time.

In an increasingly globalized, culturally diverse, and dynamic world, there is tremendous potential for innovation couched in this proliferation of DBR. Particularly in historically marginalized communities and across the Global South, we will need to know how learning theories can be lived out in productive ways in communities that have been understudied and under-engaged. The DBR paradigm generally, and its critical and CHAT iterations particularly, can fill an important need for participatory, theory-developing research in these contexts that simultaneously creates lived impacts. Participatory design research (PDR), social design experiments (SDE), and Change Laboratory models of DBR should be of particular interest moving forward, as current trends toward culturally sustaining pedagogies and learning will need to be explored in depth and in close collaboration with communities, as participatory design partners, in the press toward liberatory educational innovations.

Bibliography

The following special issues of journals are recommended starting points for engaging more deeply with current and past trends in design-based research.

  • Bang, M. , & Vossoughi, S. (Eds.). (2016). Participatory design research and educational justice: Studying learning and relations within social change making [Special issue]. Cognition and Instruction , 34 (3).
  • Barab, S. (Ed.). (2004). Design-based research [Special issue]. Journal of the Learning Sciences , 13 (1).
  • Cole, M. , & The Distributed Literacy Consortium. (2006). The Fifth Dimension: An after-school program built on diversity . New York, NY: Russell Sage Foundation.
  • Kelly, A. E. (Ed.). (2003). Special issue on the role of design in educational research [Special issue]. Educational Researcher , 32 (1).

References

  • Arzubiaga, A. , Artiles, A. , King, K. , & Harris-Murri, N. (2008). Beyond research on cultural minorities: Challenges and implications of research as situated cultural practice. Exceptional Children , 74 (3), 309–327.
  • Bang, M. , Faber, L. , Gurneau, J. , Marin, A. , & Soto, C. (2016). Community-based design research: Learning across generations and strategic transformations of institutional relations toward axiological innovations. Mind, Culture, and Activity , 23 (1), 28–41.
  • Bang, M. , & Vossoughi, S. (2016). Participatory design research and educational justice: Studying learning and relations within social change making. Cognition and Instruction , 34 (3), 173–193.
  • Barab, S. , Kinster, J. G. , Moore, J. , Cunningham, D. , & The ILF Design Team. (2001). Designing and building an online community: The struggle to support sociability in the Inquiry Learning Forum. Educational Technology Research and Development , 49 (4), 71–96.
  • Barab, S. , & Squire, K. (2004). Design-based research: Putting a stake in the ground. Journal of the Learning Sciences , 13 (1), 1–14.
  • Barab, S. A. , & Kirshner, D. (2001). Methodologies for capturing learner practices occurring as part of dynamic learning environments. Journal of the Learning Sciences , 10 (1–2), 5–15.
  • Bell, P. (2004). On the theoretical breadth of design-based research in education. Educational Psychologist , 39 (4), 243–253.
  • Bereiter, C. , & Scardamalia, M. (1989). Intentional learning as a goal of instruction. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 361–392). Hillsdale, NJ: Lawrence Erlbaum.
  • Booker, A. , & Goldman, S. (2016). Participatory design research as a practice for systemic repair: Doing hand-in-hand math research with families. Cognition and Instruction , 34 (3), 222–235.
  • Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences , 2 (2), 141–178.
  • Brown, A. , & Campione, J. C. (1996). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations in learning: New environments for education (pp. 289–325). Mahwah, NJ: Lawrence Erlbaum.
  • Brown, A. L. , & Campione, J. C. (1998). Designing a community of young learners: Theoretical and practical lessons. In N. M. Lambert & B. L. McCombs (Eds.), How students learn: Reforming schools through learner-centered education (pp. 153–186). Washington, DC: American Psychological Association.
  • Brown, A. , Campione, J. , Webber, L. , & McGilley, K. (1992). Interactive learning environments—A new look at learning and assessment. In B. R. Gifford & M. C. O’Connor (Eds.), Future assessment: Changing views of aptitude, achievement, and instruction (pp. 121–211). Boston, MA: Academic Press.
  • Carnoy, M. , Jacobsen, R. , Mishel, L. , & Rothstein, R. (2005). The charter school dust-up: Examining the evidence on enrollment and achievement . Washington, DC: Economic Policy Institute.
  • Carspecken, P. (1996). Critical ethnography in educational research . New York, NY: Routledge.
  • Cobb, P. , Confrey, J. , diSessa, A. , Lehrer, R. , & Schauble, L. (2003). Design experiments in educational research. Educational Researcher , 32 (1), 9–13.
  • Cobb, P. , & Steffe, L. P. (1983). The constructivist researcher as teacher and model builder. Journal for Research in Mathematics Education , 14 , 83–94.
  • Coburn, C. , & Penuel, W. (2016). Research-practice partnerships in education: Outcomes, dynamics, and open questions. Educational Researcher , 45 (1), 48–54.
  • Cole, M. (1995). From Moscow to the Fifth Dimension: An exploration in romantic science. In M. Cole & J. Wertsch (Eds.), Contemporary implications of Vygotsky and Luria (pp. 1–38). Worcester, MA: Clark University Press.
  • Cole, M. (1996). Cultural psychology: A once and future discipline . Cambridge, MA: Harvard University Press.
  • Cole, M. (2007). Sustaining model systems of educational activity: Designing for the long haul. In J. Campione , K. Metz , & A. S. Palinscar (Eds.), Children’s learning in and out of school: Essays in honor of Ann Brown (pp. 71–89). New York, NY: Routledge.
  • Cole, M. , & Engeström, Y. (1993). A cultural historical approach to distributed cognition. In G. Saloman (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 1–46). Cambridge, U.K.: Cambridge University Press.
  • Cole, M. , & Engeström, Y. (2007). Cultural-historical approaches to designing for development. In J. Valsiner & A. Rosa (Eds.), The Cambridge handbook of sociocultural psychology , Cambridge, U.K.: Cambridge University Press.
  • Cole, M. , & Underwood, C. (2013). The evolution of the 5th Dimension. In The Story of the Laboratory of Comparative Human Cognition: A polyphonic autobiography . https://lchcautobio.ucsd.edu/polyphonic-autobiography/section-5/chapter-12-the-later-life-of-the-5th-dimension-and-its-direct-progeny/ .
  • Collins, A. (1992). Toward a design science of education. In E. Scanlon & T. O’Shea (Eds.), New directions in educational technology (pp. 15–22). New York, NY: Springer-Verlag.
  • Collins, A. , Joseph, D. , & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences , 13 (1), 15–42.
  • Dede, C. (2004). If design-based research is the answer, what is the question? A commentary on Collins, Joseph, and Bielaczyc; DiSessa and Cobb; and Fishman, Marx, Blumenthal, Krajcik, and Soloway in the JLS special issue on design-based research. Journal of the Learning Sciences , 13 (1), 105–114.
  • Design-Based Research Collective . (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher , 32 (1), 5–8.
  • DiGiacomo, D. , & Gutiérrez, K. D. (2015). Relational equity as a design tool within making and tinkering activities. Mind, Culture, and Activity , 22 (3), 1–15.
  • diSessa, A. A. (1991). Local sciences: Viewing the design of human-computer systems as cognitive science. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 162–202). Cambridge, U.K.: Cambridge University Press.
  • diSessa, A. A. , & Cobb, P. (2004). Ontological innovation and the role of theory in design experiments. Journal of the Learning Sciences , 13 (1), 77–103.
  • diSessa, A. A. , & Minstrell, J. (1998). Cultivating conceptual change with benchmark lessons. In J. G. Greeno & S. Goldman (Eds.), Thinking practices (pp. 155–187). Mahwah, NJ: Lawrence Erlbaum.
  • Dominguez, M. (2015). Decolonizing teacher education: Explorations of expansive learning and culturally sustaining pedagogy in a social design experiment (Doctoral dissertation). University of Colorado, Boulder.
  • Edelson, D. (2002). Design research: What we learn when we engage in design. Journal of the Learning Sciences , 11 (1), 105–121.
  • Edwards, A. (2007). Relational agency in professional practice: A CHAT analysis. Actio: An International Journal of Human Activity Theory , 1 , 1–17.
  • Edwards, A. (2009). Agency and activity theory: From the systemic to the relational. In A. Sannino , H. Daniels , & K. Gutiérrez (Eds.), Learning and expanding with activity theory (pp. 197–211). Cambridge, U.K.: Cambridge University Press.
  • Engeström, Y. (1987). Learning by expanding . Helsinki, Finland: University of Helsinki, Department of Education.
  • Engeström, Y. (2000). Can people learn to master their future? Journal of the Learning Sciences , 9 , 525–534.
  • Engeström, Y. (2001). Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work , 14 (1), 133–156.
  • Engeström, Y. (2007). Enriching the theory of expansive learning: Lessons from journeys toward co-configuration. Mind, Culture, and Activity , 14 (1–2), 23–39.
  • Engeström, Y. (2008). Putting Vygotsky to work: The Change Laboratory as an application of double stimulation. In H. Daniels , M. Cole , & J. Wertsch (Eds.), Cambridge companion to Vygotsky (pp. 363–382). New York, NY: Cambridge University Press.
  • Engeström, Y. (2011). From design experiments to formative interventions. Theory & Psychology , 21 (5), 598–628.
  • Engeström, Y. , Engeström, R. , & Kärkkäinen, M. (1995). Polycontextuality and boundary crossing in expert cognition: Learning and problem solving in complex work activities. Learning and Instruction , 5 (4), 319–336.
  • Engeström, Y. , & Sannino, A. (2010). Studies of expansive learning: Foundations, findings and future challenges. Educational Research Review , 5 (1), 1–24.
  • Engeström, Y. , & Sannino, A. (2011). Discursive manifestations of contradictions in organizational change efforts: A methodological framework. Journal of Organizational Change Management , 24 (3), 368–387.
  • Engeström, Y. , Sannino, A. , & Virkkunen, J. (2014). On the methodological demands of formative interventions. Mind, Culture, and Activity , 21 (2), 118–128.
  • Erickson, F. , & Gutiérrez, K. (2002). Culture, rigor, and science in educational research. Educational Researcher , 31 (8), 21–24.
  • Espinoza, M. (2009). A case study of the production of educational sanctuary in one migrant classroom. Pedagogies: An International Journal , 4 (1), 44–62.
  • Espinoza, M. L. , & Vossoughi, S. (2014). Perceiving learning anew: Social interaction, dignity, and educational rights. Harvard Educational Review , 84 (3), 285–313.
  • Fine, M. (1994). Dis-tance and other stances: Negotiations of power inside feminist research. In A. Gitlin (Ed.), Power and method (pp. 13–25). New York, NY: Routledge.
  • Fishman, B. , Penuel, W. , Allen, A. , Cheng, B. , & Sabelli, N. (2013). Design-based implementation research: An emerging model for transforming the relationship of research and practice. National Society for the Study of Education , 112 (2), 136–156.
  • Gravemeijer, K. (1994). Educational development and developmental research in mathematics education. Journal for Research in Mathematics Education , 25 (5), 443–471.
  • Gutiérrez, K. (2005). Intersubjectivity and grammar in the third space . Scribner Award Lecture.
  • Gutiérrez, K. (2008). Developing a sociocritical literacy in the third space. Reading Research Quarterly , 43 (2), 148–164.
  • Gutiérrez, K. (2016). Designing resilient ecologies: Social design experiments and a new social imagination. Educational Researcher , 45 (3), 187–196.
  • Gutiérrez, K. , Bien, A. , Selland, M. , & Pierce, D. M. (2011). Polylingual and polycultural learning ecologies: Mediating emergent academic literacies for dual language learners. Journal of Early Childhood Literacy , 11 (2), 232–261.
  • Gutiérrez, K. , Engeström, Y. , & Sannino, A. (2016). Expanding educational research and interventionist methodologies. Cognition and Instruction , 34 (2), 275–284.
  • Gutiérrez, K. , & Jurow, A. S. (2016). Social design experiments: Toward equity by design. Journal of Learning Sciences , 25 (4), 565–598.
  • Gutiérrez, K. , & Penuel, W. R. (2014). Relevance to practice as a criterion for rigor. Educational Researcher , 43 (1), 19–23.
  • Gutiérrez, K. , & Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher , 32 (5), 19–25.
  • Gutiérrez, K. , & Vossoughi, S. (2010). Lifting off the ground to return anew: Mediated praxis, transformative learning, and social design experiments. Journal of Teacher Education , 61 (1–2), 100–117.
  • Hall, R. , & Jurow, A. S. (2015). Changing concepts in activity: Descriptive and design studies of consequential learning in conceptual practices. Educational Psychologist , 50 (3), 173–189.
  • Harding, S. (1993). Rethinking standpoint epistemology: What is “strong objectivity”? In L. Alcoff & E. Potter (Eds.), Feminist epistemologies (pp. 49–82). New York, NY: Routledge.
  • Hoadley, C. (2002). Creating context: Design-based research in creating and understanding CSCL. In G. Stahl (Ed.), Computer support for collaborative learning 2002 (pp. 453–462). Mahwah, NJ: Lawrence Erlbaum.
  • Hoadley, C. (2004). Methodological alignment in design-based research. Educational Psychologist , 39 (4), 203–212.
  • Joseph, D. (2004). The practice of design-based research: Uncovering the interplay between design, research, and the real-world context. Educational Psychologist , 39 (4), 235–242.
  • Jurow, A. S. , & Shea, M. V. (2015). Learning in equity-oriented scale-making projects. Journal of the Learning Sciences , 24 (2), 286–307.
  • Jurow, S. , Tracy, R. , Hotchkiss, J. , & Kirshner, B. (2012). Designing for the future: How the learning sciences can inform the trajectories of preservice teachers. Journal of Teacher Education , 63 (2), 147–160.
  • Kärkkäinen, M. (1999). Teams as breakers of traditional work practices: A longitudinal study of planning and implementing curriculum units in elementary school teacher teams . Helsinki, Finland: University of Helsinki, Department of Education.
  • Kelly, A. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences , 13 (1), 115–128.
  • Kelly, A. E. , & Sloane, F. C. (2003). Educational research and the problems of practice. Irish Educational Studies , 22 , 29–40.
  • Kirshner, B. (2015). Youth activism in an era of education inequality . New York: New York University Press.
  • Kirshner, B. , & Polman, J. L. (2013). Adaptation by design: A context-sensitive, dialogic approach to interventions. National Society for the Study of Education Yearbook , 112 (2), 215–236.
  • Leander, K. M. , Phillips, N. C. , & Taylor, K. H. (2010). The changing social spaces of learning: Mapping new mobilities. Review of Research in Education , 34 , 329–394.
  • Lesh, R. A. , & Kelly, A. E. (2000). Multi-tiered teaching experiments. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 197–230). Mahwah, NJ: Lawrence Erlbaum.
  • Matusov, E. (1996). Intersubjectivity without agreement. Mind, Culture, and Activity , 3 (1), 29–45.
  • Messick, S. (1992). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher , 23 (2), 13–23.
  • Mosteller, F. , & Boruch, R. F. (Eds.). (2002). Evidence matters: Randomized trials in education research . Washington, DC: Brookings Institution Press.
  • Newman, D. , Griffin, P. , & Cole, M. (1989). The construction zone: Working for cognitive change in school . London, U.K.: Cambridge University Press.
  • Penuel, W. R. , Fishman, B. J. , Cheng, B. H. , & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational Researcher , 40 (7), 331–337.
  • Polman, J. L. (2000). Designing project-based science: Connecting learners through guided inquiry . New York, NY: Teachers College Press.
  • Ravitch, D. (2010). The death and life of the great American school system: How testing and choice are undermining education . New York, NY: Basic Books.
  • Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context . New York, NY: Oxford University Press.
  • Rogoff, B. (1995). Observing sociocultural activity on three planes: Participatory appropriation, guided participation, and apprenticeship. In J. V. Wertsch , P. D. Rio , & A. Alvarez (Eds.), Sociocultural studies of mind (pp. 139–164). Cambridge U.K.: Cambridge University Press.
  • Saltman, K. J. (2007). Capitalizing on disaster: Taking and breaking public schools . Boulder, CO: Paradigm.
  • Salvador, T. , Bell, G. , & Anderson, K. (1999). Design ethnography. Design Management Journal , 10 (4), 35–41.
  • Sannino, A. (2011). Activity theory as an activist and interventionist theory. Theory & Psychology , 21 (5), 571–597.
  • Sannino, A. , & Engeström, Y. (2016). Relational agency, double stimulation and the object of activity: An intervention study in a primary school. In A. Edwards (Ed.), Working relationally in and across practices: Cultural-historical approaches to collaboration (pp. 58–77). Cambridge, U.K.: Cambridge University Press.
  • Scardamalia, M. , & Bereiter, C. (1991). Higher levels of agency for children in knowledge building: A challenge for the design of new knowledge media. Journal of the Learning Sciences , 1 , 37–68.
  • Schoenfeld, A. H. (1982). Measures of problem solving performance and of problem solving instruction. Journal for Research in Mathematics Education , 13 , 31–49.
  • Schoenfeld, A. H. (1985). Mathematical problem solving . Orlando, FL: Academic Press.
  • Schoenfeld, A. H. (1992). On paradigms and methods: What do you do when the ones you know don’t do what you want them to? Issues in the analysis of data in the form of videotapes. Journal of the Learning Sciences , 2 (2), 179–214.
  • Scribner, S. , & Cole, M. (1978). Literacy without schooling: Testing for intellectual effects. Harvard Educational Review , 48 (4), 448–461.
  • Shavelson, R. J. , Phillips, D. C. , Towne, L. , & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher , 32 (1), 25–28.
  • Steffe, L. P. , & Thompson, P. W. (2000). Teaching experiment methodology: Underlying principles and essential elements. In A. Kelly & R. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 267–307). Mahwah, NJ: Erlbaum.
  • Stevens, R. (2000). Divisions of labor in school and in the workplace: Comparing computer and paper-supported activities across settings. Journal of the Learning Sciences , 9 (4), 373–401.
  • Suchman, L. (1995). Making work visible. Communications of the ACM , 38 (9), 57–64.
  • Vakil, S. , de Royston, M. M. , Nasir, N. , & Kirshner, B. (2016). Rethinking race and power in design-based research: Reflections from the field. Cognition and Instruction , 34 (3), 194–209.
  • van den Akker, J. (1999). Principles and methods of development research. In J. van den Akker , R. M. Branch , K. Gustafson , N. Nieveen , & T. Plomp (Eds.), Design approaches and tools in education and training (pp. 1–14). Boston, MA: Kluwer Academic.
  • Virkkunen, J. , & Newnham, D. (2013). The Change Laboratory: A tool for collaborative development of work and education . Rotterdam, The Netherlands: Sense.
  • White, B. Y. , & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction , 16 , 3–118.
  • Zavala, M. (2016). Design, participation, and social change: What design in grassroots spaces can teach learning scientists. Cognition and Instruction , 34 (3), 236–249.

1. The reader should note the emergence of critical ethnography (e.g., Carspecken, 1996 ; Fine, 1994 ), and other more participatory models of ethnography that deviated from this traditional paradigm during this same time period. These new forms of ethnography comprised part of the genealogy of the more critical approaches to DBR, described later in this article.

2. The reader will also note that the adjective “qualitative” largely drops away from the acronym “DBR.” This is largely because DBR, as described, is an exploration of naturalistic ecologies with multitudes of variables and social and learning dynamics, and so necessarily demands a move beyond what can be captured by quantitative measurement alone. The qualitative nature of the research is thus implied and embedded as part of what makes DBR a unique and distinct methodology.

Related Articles

  • Qualitative Data Analysis
  • The Entanglements of Ethnography and Participatory Action Research (PAR) in Educational Research in North America
  • Writing Educational Ethnography
  • Qualitative Data Analysis and the Use of Theory
  • Comparative Case Study Research
  • Use of Qualitative Methods in Evaluation Studies
  • Writing Qualitative Dissertations
  • Ethnography in Early Childhood Education
  • A History of Qualitative Research in Education in China
  • Qualitative Research in the Field of Popular Education
  • Qualitative Methodological Considerations for Studying Undocumented Students in the United States
  • Culturally Responsive Evaluation as a Form of Critical Qualitative Inquiry
  • Participatory Action Research in Education
  • Complexity Theory as a Guide to Qualitative Methodology in Teacher Education
  • Observing Schools and Classrooms

Printed from Oxford Research Encyclopedias, Education. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 16 May 2024


Qualitative Research: Characteristics, Design, Methods & Examples

Lauren McCall

MSc Health Psychology Graduate

MSc, Health Psychology, University of Nottingham

Lauren obtained an MSc in Health Psychology from The University of Nottingham with a distinction classification.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


“Not everything that can be counted counts, and not everything that counts can be counted” (Albert Einstein)

Qualitative research is a process used for the systematic collection, analysis, and interpretation of non-numerical data (Punch, 2013). 

Qualitative research can be used to (i) gain deep contextual understandings of the subjective social reality of individuals and (ii) answer questions about experience and meaning from the participant’s perspective (Hammarberg et al., 2016).

Unlike quantitative research, which focuses on gathering and analyzing numerical data for statistical analysis, qualitative research focuses on thematic and contextual information.

Characteristics of Qualitative Research 

Reality is socially constructed.

Qualitative research aims to understand how participants make meaning of their experiences, individually or in social contexts. It assumes there is no single objective reality and that the social world is interpreted (Yilmaz, 2013).

The primacy of subject matter 

The primary aim of qualitative research is to understand the perspectives, experiences, and beliefs of individuals who have experienced the phenomenon selected for research rather than the average experiences of groups of people (Minichiello, 1990).

Variables are complex, interwoven, and difficult to measure

Factors such as experiences, behaviors, and attitudes are complex and interwoven; they cannot be reduced to isolated variables and are therefore difficult to measure quantitatively.

However, a qualitative approach enables participants to describe what, why, or how they were thinking and feeling during the phenomenon being studied (Yilmaz, 2013).

Emic (insider’s point of view)

The phenomenon being studied is centered on the participants’ point of view (Minichiello, 1990).

Emic is used to describe how participants interact, communicate, and behave in the context of the research setting (Scarduzio, 2017).

Why Conduct Qualitative Research? 

To gain a deeper understanding of how people experience the world, researchers study individuals in their natural settings. This enables the researcher to understand a phenomenon close to how participants experience it.

Qualitative research allows researchers to gain an in-depth understanding, which is difficult to attain using quantitative methods. 

An in-depth understanding is attained since qualitative techniques allow participants to freely disclose their experiences, thoughts, and feelings without constraint (Tenny et al., 2022). 

This helps to further investigate and understand quantitative data by discovering reasons for the outcome of a study – answering the why question behind statistics. 

The exploratory nature of qualitative research helps to generate hypotheses that can then be tested quantitatively (Busetto et al., 2020).

Before hypotheses can be designed, qualitative methods must be used to find out what is important, and thus where the research should begin.

For example, a researcher might conduct interviews or focus groups with key stakeholders to discover what is important to them.

Examples of qualitative research questions include: 

  • How does stress influence young adults’ behavior?
  • What factors influence students’ school attendance rates in developed countries?
  • How do adults interpret binge drinking in the UK?
  • What are the psychological impacts of cervical cancer screening in women?
  • How can mental health lessons be integrated into the school curriculum? 

Collecting Qualitative Data

There are four main research design methods used to collect qualitative data: observations, interviews, focus groups, and ethnography.

Observations

This method involves watching and recording phenomena as they occur in nature. Observation can be divided into two types: participant and non-participant observation.

In participant observation, the researcher actively participates in the situation/events being observed.

In non-participant observation, the researcher is not an active part of the observation and tries not to influence the behaviors they are observing (Busetto et al., 2020). 

Observations can be covert (participants are unaware that a researcher is observing them) or overt (participants are aware of the researcher’s presence and know they are being observed).

However, awareness of an observer’s presence may influence participants’ behavior. 

Interviews

Interviews give researchers a window into the world of a participant by seeking their account of an event, situation, or phenomenon. They are usually conducted on a one-to-one basis and can be distinguished according to the level at which they are structured (Punch, 2013). 

Structured interviews involve predetermined questions and sequences to ensure replicability and comparability. However, they are unable to explore emerging issues.

Informal interviews consist of spontaneous, casual conversations which are closer to the truth of a phenomenon. However, information is gathered using quick notes made by the researcher and is therefore subject to recall bias. 

Semi-structured interviews have a flexible structure: the phrasing and ordering of questions can be adapted so that emerging issues can be explored (Denny & Weckesser, 2022).

The use of probing questions and clarification can lead to a detailed understanding, but semi-structured interviews can be time-consuming and subject to interviewer bias. 

Focus groups 

Similar to interviews, focus groups elicit a rich and detailed account of an experience. However, focus groups are more dynamic since participants with shared characteristics construct this account together (Denny & Weckesser, 2022).

A shared narrative is built between participants to capture a group experience shaped by a shared context. 

The researcher takes on the role of a moderator, who will establish ground rules and guide the discussion by following a topic guide to focus the group discussions.

Typically, focus groups have 4–10 participants: a discussion can be difficult to facilitate with more than this, and this number allows everyone time to speak.

Ethnography

Ethnography is a methodology used to study a group of people’s behaviors and social interactions in their environment (Reeves et al., 2008).

Data are collected using methods such as observations, field notes, or structured/unstructured interviews.

The aim of ethnography is to provide detailed, holistic insights into people’s behavior and perspectives within their natural setting. In order to achieve this, researchers immerse themselves in a community or organization. 

Due to the flexibility and real-world focus of ethnography, researchers are able to gather an in-depth, nuanced understanding of people’s experiences, knowledge and perspectives that are influenced by culture and society.

In order to develop a representative picture of a particular culture or context, researchers must conduct extensive fieldwork.

This can be time-consuming, as researchers may need to immerse themselves in a community or culture for anywhere from a few days to a few years.

Qualitative Data Analysis Methods

Different methods can be used for analyzing qualitative data. The researcher chooses based on the objectives of their study. 

The researcher plays a key role in the interpretation of data, making decisions about the coding, theming, decontextualizing, and recontextualizing of data (Starks & Trinidad, 2007). 

Grounded theory

Grounded theory is a qualitative method specifically designed to inductively generate theory from data. It was developed by Glaser and Strauss in 1967 (Glaser & Strauss, 2017).

 This methodology aims to develop theories (rather than test hypotheses) that explain a social process, action, or interaction (Petty et al., 2012). To inform the developing theory, data collection and analysis run simultaneously. 

There are three key types of coding used in grounded theory: initial (open), intermediate (axial), and advanced (selective) coding. 

Throughout the analysis, memos should be created to document methodological and theoretical ideas about the data. Data should be collected and analyzed until data saturation is reached and a theory is developed. 

Content analysis

Content analysis was first used in the early twentieth century to analyze textual materials such as newspapers and political speeches.

Content analysis is a research method used to identify and analyze the presence and patterns of themes, concepts, or words in data (Vaismoradi et al., 2013). 

This research method can be used to analyze data in different formats, which can be written, oral, or visual. 

The goal of content analysis is to develop themes that capture the underlying meanings of data (Schreier, 2012). 

Qualitative content analysis can be used to validate existing theories, support the development of new models and theories, and provide in-depth descriptions of particular settings or experiences.

The following six steps provide a guideline for how to conduct qualitative content analysis.
  • Define a Research Question : To start content analysis, a clear research question should be developed.
  • Identify and Collect Data : Establish the inclusion criteria for your data. Find the relevant sources to analyze.
  • Define the Unit or Theme of Analysis : Decide what will be categorized into themes. A unit of analysis can be a word, phrase, or sentence.
  • Develop Rules for Coding your Data : Define a set of coding rules to ensure that all data are coded consistently.
  • Code the Data : Follow the coding rules to categorize data into themes.
  • Analyze the Results and Draw Conclusions : Examine the data to identify patterns and draw conclusions in relation to your research question.
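The coding and tallying steps above can be sketched in a few lines of code. This is a hypothetical illustration, not part of any cited method: the coding rules, keywords, and survey responses are invented for demonstration.

```python
# A minimal sketch of rule-based qualitative content analysis.
# Coding rules, themes, and responses are hypothetical examples.

# Step 4: coding rules map keywords to themes so all data are coded consistently.
coding_rules = {
    "cost": ["price", "expensive", "afford"],
    "access": ["distance", "travel", "appointment"],
}

def code_unit(text, rules):
    """Step 5: assign every theme whose keywords appear in a unit of text."""
    text = text.lower()
    return sorted(
        theme for theme, keywords in rules.items()
        if any(kw in text for kw in keywords)
    )

# Step 6: tally themes across all units to identify patterns.
responses = [
    "The clinic was too expensive for me.",
    "Travel distance made every appointment a struggle.",
    "I could not afford the travel costs.",
]
counts = {}
for response in responses:
    for theme in code_unit(response, coding_rules):
        counts[theme] = counts.get(theme, 0) + 1

print(counts)  # theme frequencies across the dataset
```

In practice the coding rules would come from the codebook developed in step 4, and frequencies would feed the interpretation in step 6.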

Discourse analysis

Discourse analysis is a research method used to study written or spoken language in relation to its social context (Wood & Kroger, 2000).

In discourse analysis, the researcher interprets the details of language materials and the context in which they are situated.

Discourse analysis aims to understand the functions of language (how language is used in real life) and how meaning is conveyed by language in different contexts. Researchers use discourse analysis to investigate social groups and how language is used to achieve specific communication goals.

Different methods of discourse analysis can be used depending on the aims and objectives of a study. However, the following steps provide a guideline on how to conduct discourse analysis.
  • Define the Research Question : Develop a relevant research question to frame the analysis.
  • Gather Data and Establish the Context : Collect research materials (e.g., interview transcripts, documents). Gather factual details and review the literature to construct a theory about the social and historical context of your study.
  • Analyze the Content : Closely examine various components of the text, such as the vocabulary, sentences, paragraphs, and structure of the text. Identify patterns relevant to the research question to create codes, then group these into themes.
  • Review the Results : Reflect on the findings to examine the function of the language, and the meaning and context of the discourse. 
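One small part of the "Analyze the Content" step, closely examining vocabulary across contexts, can be sketched in code. This is a hypothetical illustration: the texts and the word list are invented, and real discourse analysis involves far richer interpretation than word counting.

```python
# A minimal sketch of examining vocabulary by context in discourse analysis:
# counting hedging words in texts from two settings. Texts and the hedge
# list are hypothetical illustrations.

texts = {
    "patient leaflet": "you may feel some discomfort and could need rest",
    "clinical note": "patient reports pain and requires rest",
}

hedges = {"may", "could", "might", "some"}

def hedge_count(text):
    """Count hedging words, one marker of how tentatively language is used."""
    return sum(word in hedges for word in text.lower().split())

for context, text in texts.items():
    print(context, hedge_count(text))
```

A pattern like this (more hedging in one context than another) would then be interpreted against the social setting of each text, per the "Review the Results" step.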

Thematic analysis

Thematic analysis is a method used to identify, interpret, and report patterns in data, such as commonalities or contrasts. 

Although the origins of thematic analysis can be traced back to the early twentieth century, its modern formulation and clarity are attributed to Braun and Clarke (2006).

Thematic analysis aims to develop themes (patterns of meaning) across a dataset to address a research question. 

In thematic analysis, qualitative data is gathered using techniques such as interviews, focus groups, and questionnaires. Audio recordings are transcribed. The dataset is then explored and interpreted by a researcher to identify patterns. 

This occurs through the rigorous process of data familiarization, coding, theme development, and revision. These identified patterns provide a summary of the dataset and can be used to address a research question.

Themes are developed by exploring the implicit and explicit meanings within the data. Two different approaches are used to generate themes: inductive and deductive. 

An inductive approach allows themes to emerge from the data. In contrast, a deductive approach uses existing theories or knowledge to apply preconceived ideas to the data.

Phases of Thematic Analysis

Braun and Clarke (2006) provide a guide to the six phases of thematic analysis: (1) familiarization with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report. These phases can be applied flexibly to fit research questions and data.

Template analysis

Template analysis refers to a specific method of thematic analysis which uses hierarchical coding (Brooks et al., 2014).

Template analysis is used to analyze textual data, for example, interview transcripts or open-ended responses on a written questionnaire.

To conduct template analysis, a coding template must be developed (usually from a subset of the data) and subsequently revised and refined. This template represents the themes identified by researchers as important in the dataset. 

Codes are ordered hierarchically within the template, with the highest-level codes demonstrating overarching themes in the data and lower-level codes representing constituent themes with a narrower focus.

A guideline for the main procedural steps for conducting template analysis is outlined below.
  • Familiarization with the Data : Read (and reread) the dataset in full. Engage, reflect, and take notes on data that may be relevant to the research question.
  • Preliminary Coding : Identify initial codes, guided by any a priori codes (themes identified before the analysis as likely to be relevant and beneficial to it).
  • Organize Themes : Organize themes into meaningful clusters. Consider the relationships between the themes both within and between clusters.
  • Produce an Initial Template : Develop an initial template. This may be based on a subset of the data.
  • Apply and Develop the Template : Apply the initial template to further data and make any necessary modifications. Refinements of the template may include adding themes, removing themes, or changing the scope/title of themes. 
  • Finalize Template : Finalize the template, then apply it to the entire dataset. 
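The hierarchical coding template described above can be sketched as a nested structure, with highest-level codes containing narrower constituent themes. All theme and code names here are hypothetical illustrations.

```python
# A minimal sketch of a hierarchical coding template for template analysis.
# Theme and code names are hypothetical; nesting mirrors the code levels.

template = {
    "Barriers to care": {                                # highest-level code
        "Practical barriers": ["cost", "transport"],     # lower-level codes
        "Emotional barriers": ["fear", "embarrassment"],
    },
    "Sources of support": {
        "Family": [],
        "Peers": [],
    },
}

def flatten(template, prefix=()):
    """List every code with its full hierarchical path, top level first."""
    paths = []
    for name, children in template.items():
        path = prefix + (name,)
        paths.append(path)
        if isinstance(children, dict):
            paths.extend(flatten(children, path))
        else:
            paths.extend(path + (leaf,) for leaf in children)
    return paths

for path in flatten(template):
    print("  " * (len(path) - 1) + path[-1])  # indented outline of the template
```

In template analysis the structure above would be drafted from a subset of the data, applied to further transcripts, and revised (themes added, removed, or rescoped) until finalized.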

Frame analysis

Frame analysis is a comparative form of thematic analysis which systematically analyzes data using a matrix output.

Ritchie and Spencer (1994) developed this set of techniques to analyze qualitative data in applied policy research. Frame analysis aims to generate theory from data.

Frame analysis encourages researchers to organize and manage their data using summarization.

This results in a flexible and unique matrix output, in which individual participants (or cases) are represented by rows and themes are represented by columns. 

Each intersecting cell is used to summarize findings relating to the corresponding participant and theme.

Frame analysis has five distinct phases which are interrelated, forming a methodical and rigorous framework.
  • Familiarization with the Data : Familiarize yourself with all the transcripts. Immerse yourself in the details of each transcript and start to note recurring themes.
  • Develop a Theoretical Framework : Identify recurrent or important themes and add them to a chart. Provide a framework or structure for the analysis.
  • Indexing : Apply the framework systematically to the entire study data.
  • Summarize Data in Analytical Framework : Reduce the data into brief summaries of participants’ accounts.
  • Mapping and Interpretation : Compare themes and subthemes and check against the original transcripts. Group the data into categories and provide an explanation for them.
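The matrix output described above can be sketched as a mapping from (participant, theme) pairs to brief cell summaries. All participant IDs, theme names, and summaries below are hypothetical illustrations.

```python
# A minimal sketch of a frame analysis matrix: rows are participants, columns
# are themes, and each cell summarizes that participant's account for that
# theme. Participant IDs, themes, and summaries are hypothetical.

themes = ["Motivation to quit", "Barriers", "Support"]
participants = ["P01", "P02"]

# Summarization phase: reduce each account to a brief summary per cell.
matrix = {
    ("P01", "Motivation to quit"): "health scare after diagnosis",
    ("P01", "Barriers"): "smokes with colleagues on breaks",
    ("P02", "Motivation to quit"): "cost of cigarettes",
    ("P02", "Support"): "partner quit last year",
}

def column(theme):
    """Mapping phase: read down a column to compare one theme across cases."""
    return {p: matrix.get((p, theme), "") for p in participants}

for theme in themes:
    print(theme, "->", column(theme))
```

Reading across a row summarizes one participant's whole account; reading down a column supports the cross-case comparison of the mapping and interpretation phase.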

Preventing Bias in Qualitative Research

To evaluate qualitative studies, the CASP (Critical Appraisal Skills Programme) checklist for qualitative studies can be used to ensure all aspects of a study have been considered (CASP, 2018).

The quality of research can be enhanced and assessed using criteria such as checklists, reflexivity, co-coding, and member-checking. 

Co-coding 

Relying on only one researcher to interpret rich and complex data may risk key insights and alternative viewpoints being missed. Therefore, coding is often performed by multiple researchers.

A common strategy must be defined at the beginning of the coding process  (Busetto et al., 2020). This includes establishing a useful coding list and finding a common definition of individual codes.

Transcripts are initially coded independently by each researcher, then compared and consolidated to minimize error or bias and to corroborate findings.
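The comparison step in co-coding can be sketched as a simple agreement check between two coders' independent codings of the same segments. The segment IDs and codes below are hypothetical illustrations.

```python
# A minimal sketch of comparing two coders' independent codings before
# consolidation. Segment IDs and codes are hypothetical examples.

coder_a = {"seg1": "cost", "seg2": "access", "seg3": "cost"}
coder_b = {"seg1": "cost", "seg2": "support", "seg3": "cost"}

segments = sorted(coder_a)
agreements = [s for s in segments if coder_a[s] == coder_b[s]]
disagreements = [s for s in segments if coder_a[s] != coder_b[s]]

percent_agreement = len(agreements) / len(segments)
print(f"agreement: {percent_agreement:.0%}")
print("segments to reconcile:", disagreements)  # discussed until consolidated
```

In practice, flagged disagreements are discussed against the shared code definitions agreed at the start of coding; formal agreement statistics (e.g., Cohen's kappa) can supplement the raw percentage.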

Member checking

Member checking (or respondent validation) involves checking back with participants to see if the research resonates with their experiences (Russell & Gregory, 2003).

Data can be returned to participants after data collection or when results are first available. For example, participants may be provided with their interview transcript and asked to verify whether this is a complete and accurate representation of their views.

Participants may then clarify or elaborate on their responses to ensure they align with their views (Shenton, 2004).

This feedback becomes part of data collection and ensures accurate descriptions and interpretations of phenomena (Mays & Pope, 2000).

Reflexivity in qualitative research

Reflexivity typically involves examining your own judgments, practices, and belief systems during data collection and analysis. It aims to identify any personal beliefs which may affect the research. 

Reflexivity is essential in qualitative research to ensure methodological transparency and complete reporting. This enables readers to understand how the interaction between the researcher and participant shapes the data.

Depending on the research question and the population being researched, factors to consider include the researcher's experience, age, gender, and ethnicity, and how contact with participants was established and maintained.

These details are important because, in qualitative research, the researcher is a dynamic part of the research process and actively influences the outcome of the research (Boeije, 2014). 

Reflexivity Example

Who you are and your characteristics influence how you collect and analyze data. Here is an example of a reflexivity statement for research on smoking:

“I am a 30-year-old white female from a middle-class background. I live in the southwest of England and have been educated to master’s level. I have been involved in two research projects on oral health. I have never smoked, but I have witnessed how smoking can cause ill health from my volunteering in a smoking cessation clinic. My research aspirations are to help to develop interventions to help smokers quit.”

Establishing Trustworthiness in Qualitative Research

Trustworthiness is a concept used to assess the quality and rigor of qualitative research. Four criteria are used to assess a study’s trustworthiness: credibility, transferability, dependability, and confirmability.

Credibility in Qualitative Research

Credibility refers to how accurately the results represent the reality and viewpoints of the participants.

To establish credibility in research, participants’ views and the researcher’s representation of their views need to align (Tobin & Begley, 2004).

To increase the credibility of findings, researchers may use data source triangulation, investigator triangulation, peer debriefing, or member checking (Lincoln & Guba, 1985). 

Transferability in Qualitative Research

Transferability refers to how generalizable the findings are: whether the findings may be applied to another context, setting, or group (Tobin & Begley, 2004).

Transferability can be enhanced by giving thorough and in-depth descriptions of the research setting, sample, and methods (Nowell et al., 2017). 

Dependability in Qualitative Research

Dependability is the extent to which the study could be replicated under similar conditions and the findings would be consistent.

Researchers can establish dependability using methods such as audit trails so readers can see the research process is logical and traceable (Koch, 1994).

Confirmability in Qualitative Research

Confirmability is concerned with establishing that there is a clear link between the researcher’s interpretations/findings and the data.

Researchers can achieve confirmability by demonstrating how conclusions and interpretations were arrived at (Nowell et al., 2017).

This enables readers to understand the reasoning behind the decisions made. 

Audit Trails in Qualitative Research

An audit trail provides evidence of the decisions made by the researcher regarding theory, research design, and data collection, as well as the steps they have chosen to manage, analyze, and report data. 

The researcher must provide a clear rationale to demonstrate how conclusions were reached in their study.

A clear description of the research path must be provided to enable readers to trace through the researcher’s logic (Halpern, 1983).

Researchers should maintain records of the raw data, field notes, transcripts, and a reflective journal in order to provide a clear audit trail. 

Discovery of unexpected data

Open-ended questions in qualitative research mean the researcher can probe an interview topic and enable the participant to elaborate on responses in an unrestricted manner.

This allows unexpected data to emerge, which can lead to further research into that topic. 

Flexibility

Data collection and analysis can be modified and adapted to take the research in a different direction if new ideas or patterns emerge in the data.

This enables researchers to investigate new opportunities while firmly maintaining their research goals. 

Naturalistic settings

The behaviors of participants are recorded in real-world settings. Studies that use real-world settings have high ecological validity since participants behave more authentically. 

Limitations

Time-consuming

Qualitative research results in large amounts of data which often need to be transcribed and analyzed manually.

Even when software is used, transcription can be inaccurate, and using software for analysis can result in many codes which need to be condensed into themes. 

Subjectivity 

The researcher has an integral role in collecting and interpreting qualitative data. Therefore, the conclusions reached are from their perspective and experience.

Consequently, interpretations of data from another researcher may vary greatly. 

Limited generalizability

The aim of qualitative research is to provide a detailed, contextualized understanding of an aspect of the human experience from a relatively small sample size.

Despite rigorous analysis procedures, conclusions drawn cannot be generalized to the wider population since data may be biased or unrepresentative.

Therefore, results are only applicable to a small group of the population. 

Extraneous variables

Qualitative research is often conducted in real-world settings. This may cause results to be unreliable since extraneous variables may affect the data, for example:

  • Situational variables : different environmental conditions may influence participants’ behavior in a study. The random variation in factors (such as noise or lighting) may be difficult to control in real-world settings.
  • Participant characteristics : this includes any characteristics that may influence how a participant answers or behaves in a study, such as their mood, gender, age, ethnicity, sexual identity, IQ, etc.
  • Experimenter effect : this refers to how a researcher’s unintentional influence can change the outcome of a study. It occurs when (i) their interactions with participants unintentionally change participants’ behaviors or (ii) they make errors in observation, interpretation, or analysis.

What sample size should qualitative research be?

A minimum of around 12 participants has been recommended for qualitative studies to reach data saturation (Clarke & Braun, 2013).

Are surveys qualitative or quantitative?

Surveys can be used to gather information from a sample qualitatively or quantitatively. Qualitative surveys use open-ended questions to gather detailed information from a large sample using free text responses.

The use of open-ended questions allows for unrestricted responses where participants use their own words, enabling the collection of more in-depth information than closed-ended questions.

In contrast, quantitative surveys consist of closed-ended questions with multiple-choice answer options. Quantitative surveys are ideal to gather a statistical representation of a population.

What are the ethical considerations of qualitative research?

Before conducting a study, you must think about any risks that could occur and take steps to prevent them.

  • Participant Protection : Researchers must protect participants from physical and mental harm. This means you must not embarrass, frighten, offend, or harm participants.
  • Transparency : Researchers are obligated to clearly communicate how they will collect, store, analyze, use, and share the data.
  • Confidentiality : You need to consider how to maintain the confidentiality and anonymity of participants’ data.

What is triangulation in qualitative research?

Triangulation refers to the use of several approaches in a study to comprehensively understand phenomena. This method helps to increase the validity and credibility of research findings. 

Types of triangulation include method triangulation (using multiple methods to gather data), investigator triangulation (using multiple researchers to collect or analyze data), theory triangulation (comparing several theoretical perspectives to explain a phenomenon), and data source triangulation (using data from various times, locations, and people; Carter et al., 2014).

Why is qualitative research important?

Qualitative research allows researchers to describe and explain the social world. The exploratory nature of qualitative research helps to generate hypotheses that can then be tested quantitatively.

In qualitative research, participants are able to express their thoughts, experiences, and feelings without constraint.

Additionally, researchers are able to follow up on participants’ answers in real-time, generating valuable discussion around a topic. This enables researchers to gain a nuanced understanding of phenomena which is difficult to attain using quantitative methods.

What is coding data in qualitative research?

Coding data is a qualitative data analysis strategy in which a section of text is assigned a label that describes its content.

These labels may be words or phrases which represent important (and recurring) patterns in the data.

This process enables researchers to identify related content across the dataset. Codes can then be used to group similar types of data to generate themes.
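The step of grouping coded data into themes can be sketched as a mapping from codes to broader themes. The segments, codes, and theme groupings below are hypothetical illustrations.

```python
# A minimal sketch of grouping coded segments into themes. The segments,
# codes, and theme groupings are hypothetical examples.

coded_segments = [
    ("I couldn't pay for the sessions", "cost"),
    ("the bus ride took two hours", "transport"),
    ("my sister drove me every week", "family help"),
]

# Related codes are grouped under a broader theme.
code_to_theme = {
    "cost": "Practical barriers",
    "transport": "Practical barriers",
    "family help": "Support networks",
}

themes = {}
for text, code in coded_segments:
    themes.setdefault(code_to_theme[code], []).append(text)

for theme, texts in themes.items():
    print(theme, "->", texts)  # related content gathered across the dataset
```
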

What is the difference between qualitative and quantitative research?

Qualitative research involves the collection and analysis of non-numerical data in order to understand experiences and meanings from the participant’s perspective.

This can provide rich, in-depth insights on complicated phenomena. Qualitative data may be collected using interviews, focus groups, or observations.

In contrast, quantitative research involves the collection and analysis of numerical data to measure the frequency, magnitude, or relationships of variables. This can provide objective and reliable evidence that can be generalized to the wider population.

Quantitative data may be collected using closed-ended questionnaires or experiments.

What is trustworthiness in qualitative research?

Trustworthiness is a concept used to assess the quality and rigor of qualitative research. Four criteria are used to assess a study’s trustworthiness: credibility, transferability, dependability, and confirmability. 

Credibility refers to how accurately the results represent the reality and viewpoints of the participants. Transferability refers to whether the findings may be applied to another context, setting, or group.

Dependability is the extent to which the findings are consistent and reliable. Confirmability refers to the objectivity of findings (not influenced by the bias or assumptions of researchers).

What is data saturation in qualitative research?

Data saturation is a methodological principle used to guide the sample size of a qualitative research study.

Data saturation is proposed as a necessary methodological component in qualitative research (Saunders et al., 2018) as it is a vital criterion for discontinuing data collection and/or analysis. 

The intention of data saturation is to find “no new data, no new themes, no new coding, and ability to replicate the study” (Guest et al., 2006). At that point, enough data have been gathered to draw conclusions.
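One common way to monitor saturation is to track how many new codes each successive interview contributes and stop collecting once that count stays at zero. The per-interview code sets below are hypothetical illustrations.

```python
# A minimal sketch of monitoring data saturation: count new codes per
# successive interview. The code sets are hypothetical examples.

interviews = [
    {"cost", "access"},          # interview 1: two new codes
    {"cost", "support"},         # interview 2: one new code
    {"access", "support"},       # interview 3: nothing new
    {"cost"},                    # interview 4: nothing new
]

seen = set()
new_per_interview = []
for codes in interviews:
    new_codes = codes - seen     # codes not encountered in earlier interviews
    new_per_interview.append(len(new_codes))
    seen |= new_codes

print(new_per_interview)  # trailing zeros suggest saturation is being reached
```
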

Why is sampling in qualitative research important?

In quantitative research, large sample sizes are used to provide statistically significant quantitative estimates.

This is because quantitative research aims to provide generalizable conclusions that represent populations.

However, the aim of sampling in qualitative research is to gather data that will help the researcher understand the depth, complexity, variation, or context of a phenomenon. The small sample sizes in qualitative studies support the depth of case-oriented analysis.

Boeije, H. (2014). Analysis in qualitative research. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research in psychology , 3 (2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Brooks, J., McCluskey, S., Turley, E., & King, N. (2014). The utility of template analysis in qualitative psychology research. Qualitative Research in Psychology , 12 (2), 202–222. https://doi.org/10.1080/14780887.2014.955224

Busetto, L., Wick, W., & Gumbinger, C. (2020). How to use and assess qualitative research methods. Neurological research and practice , 2 (1), 14-14. https://doi.org/10.1186/s42466-020-00059-z 

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology nursing forum , 41 (5), 545–547. https://doi.org/10.1188/14.ONF.545-547

Critical Appraisal Skills Programme. (2018). CASP checklist: 10 questions to help you make sense of qualitative research. https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf Accessed March 15, 2023.

Clarke, V., & Braun, V. (2013). Successful qualitative research: A practical guide for beginners. Sage.

Denny, E., & Weckesser, A. (2022). How to do qualitative research?: Qualitative research methods. BJOG : an international journal of obstetrics and gynaecology , 129 (7), 1166-1167. https://doi.org/10.1111/1471-0528.17150 

Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory. The Discovery of Grounded Theory , 1–18. https://doi.org/10.4324/9780203793206-1

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18 (1), 59-82. doi:10.1177/1525822X05279903

Halpern, E. S. (1983). Auditing naturalistic inquiries: The development and application of a model (Unpublished doctoral dissertation). Indiana University, Bloomington.

Hammarberg, K., Kirkman, M., & de Lacey, S. (2016). Qualitative research methods: When to use them and how to judge them. Human Reproduction , 31 (3), 498–501. https://doi.org/10.1093/humrep/dev334

Koch, T. (1994). Establishing rigour in qualitative research: The decision trail. Journal of Advanced Nursing, 19, 976–986. doi:10.1111/ j.1365-2648.1994.tb01177.x

Lincoln, Y., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320(7226), 50–52.

Minichiello, V. (1990). In-Depth Interviewing: Researching People. Longman Cheshire.

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic Analysis: Striving to Meet the Trustworthiness Criteria. International Journal of Qualitative Methods, 16 (1). https://doi.org/10.1177/1609406917733847

Petty, N. J., Thomson, O. P., & Stew, G. (2012). Ready for a paradigm shift? part 2: Introducing qualitative research methodologies and methods. Manual Therapy , 17 (5), 378–384. https://doi.org/10.1016/j.math.2012.03.004

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. London: Sage

Reeves, S., Kuper, A., & Hodges, B. D. (2008). Qualitative research methodologies: Ethnography. BMJ , 337 (aug07 3). https://doi.org/10.1136/bmj.a1020

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: exploring its conceptualization and operationalization. Quality & quantity , 52 (4), 1893–1907. https://doi.org/10.1007/s11135-017-0574-8

Scarduzio, J. A. (2017). Emic approach to qualitative research. The International Encyclopedia of Communication Research Methods, 1–2 . https://doi.org/10.1002/9781118901731.iecrm0082

Schreier, M. (2012). Qualitative content analysis in practice. Sage.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

Starks, H., & Trinidad, S. B. (2007). Choose your method: a comparison of phenomenology, discourse analysis, and grounded theory. Qualitative health research , 17 (10), 1372–1380. https://doi.org/10.1177/1049732307307031

Tenny, S., Brannan, J. M., & Brannan, G. D. (2022). Qualitative Study. In StatPearls. StatPearls Publishing.

Tobin, G. A., & Begley, C. M. (2004). Methodological rigour within a qualitative framework. Journal of Advanced Nursing, 48, 388–396. doi:10.1111/j.1365-2648.2004.03207.x

Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & health sciences , 15 (3), 398-405. https://doi.org/10.1111/nhs.12048

Wood L. A., Kroger R. O. (2000). Doing discourse analysis: Methods for studying action in talk and text. Sage.

Yilmaz, K. (2013). Comparison of Quantitative and Qualitative Research Traditions: epistemological, theoretical, and methodological differences. European journal of education , 48 (2), 311-325. https://doi.org/10.1111/ejed.12014



Qualitative Research

What is qualitative research?

Qualitative research is the methodology researchers use to gain deep contextual understandings of users via non-numerical means and direct observations. Researchers focus on smaller user samples—e.g., in interviews—to reveal data such as user attitudes, behaviors and hidden factors: insights which guide better designs.

“ There are also unknown unknowns, things we don’t know we don’t know.” — Donald Rumsfeld, Former U.S. Secretary of Defense

See how you can use qualitative research to expose hidden truths about users and iteratively shape better products.

Qualitative Research Focuses on the “Why”

Qualitative research is a subset of user experience (UX) research and user research. By doing qualitative research, you aim to gain narrowly focused but rich information about why users feel and think the ways they do. Unlike its more statistics-oriented counterpart, quantitative research, qualitative research can help expose hidden truths about your users’ motivations, hopes, needs, pain points and more, helping you keep your project’s focus on track throughout development. UX design professionals typically do qualitative research from early on in projects because the insights it reveals can alter product development dramatically and prevent costly design errors from arising later. The table below compares and contrasts qualitative with quantitative research:

Qualitative vs. quantitative research:

  • You Aim to Determine : Qualitative – the “why”, to get behind how users approach their problems in their world. Quantitative – the “what”, “where” & “when” of the users’ needs & problems, to help keep your project’s focus on track during development.
  • Typical Structure : Qualitative – loosely structured (e.g., contextual inquiries), to learn why users behave how they do & explore their opinions. Quantitative – highly structured (e.g., surveys), to gather data about what users do & find patterns in large user groups.
  • Number of Representative Users : Qualitative – often around 5. Quantitative – ideally 30+.
  • Level of Contact with Users : Qualitative – more direct & less remote (e.g., usability testing to examine users’ stress levels when they use your design). Quantitative – less direct & more remote (e.g., analytics).
  • Statistically : Qualitative – you need to take great care with handling non-numerical data (e.g., opinions), as your own opinions might influence findings. Quantitative – reliable, given enough test users.

Regarding care with opinions, it’s easy to be subjective about qualitative data, which isn’t as comprehensively analyzable as quantitative data. That’s why design teams also apply quantitative research methods, to reinforce the “why” with the “what”.

Qualitative Research Methods You Can Use to Get Behind Your Users

You have a choice of many methods to help gain the clearest insights into your users’ world – which you might want to complement with quantitative research methods. In iterative processes such as user-centered design , you/your design team would use quantitative research to spot design problems, discover the reasons for these with qualitative research, make changes and then test your improved design on users again. The best method/s to pick will depend on the stage of your project and your objectives. Here are some:

Diary studies – You ask users to document their activities, interactions, etc. over a defined period. This empowers users to deliver context-rich information. Although such studies can be subjective—since users will inevitably be influenced by in-the-moment human issues and their emotions—they’re helpful tools to access generally authentic information.

User interviews – You ask users about their experiences directly. Interview formats include:

Structured – You ask users specific questions and compare their responses with those of other users.

Semi-structured – You have a more free-flowing conversation with users, but still follow a prepared script loosely.

Ethnographic – You interview users in their own environment to appreciate how they perform tasks and view aspects of tasks.


Usability testing

Moderated – In-person testing in, e.g., a lab.

Unmoderated – Users complete tests remotely: e.g., through a video call.

Guerrilla – “Down-the-hall”/“down-and-dirty” testing on a small group of random users or colleagues.


User observation – You watch users get to grips with your design and note their actions, words and reactions as they attempt to perform tasks.


Qualitative research can be more or less structured depending on the method.

Qualitative Research – How to Get Reliable Results

Some helpful points to remember are:

Participants – Select a small number of test users carefully (typically around 5). Observe the finer points such as body language. Remember the difference between what they do and what they say they do.

Moderated vs. unmoderated – You can obtain the richest data from moderated studies, but these can involve considerable time and practice. You can usually conduct unmoderated studies more quickly and cheaply, but you should plan these carefully to ensure instructions are clear, etc.

Types of questions – You’ll learn far more by asking open-ended questions. Avoid leading users’ answers – ask about their experience during, say, the “search for deals” process rather than how easy it was. Try to frame questions so users respond honestly: i.e., so they don’t withhold grievances about their experience because they don’t want to seem impolite. Distorted feedback may also arise in guerrilla testing, as test users may be reluctant to sound negative or to discuss fine details if they lack time.

Location – Think how where users are might affect their performance and responses. If, for example, users’ tasks involve running or traveling on a train, select the appropriate method (e.g., diary studies for them to record aspects of their experience in the environment of a train carriage and the many factors impacting it).

Overall, no single research method can help you answer all your questions. Nevertheless, the Nielsen Norman Group advises that if you only conduct one kind of user research, you should pick qualitative usability testing, since a small sample size can yield many cost- and project-saving insights. Always treat users and their data ethically. Finally, remember the importance of complementing qualitative methods with quantitative ones: you gain insights from the former; you test those using the latter.







Qualitative Study

Affiliations:

  • 1 University of Nebraska Medical Center
  • 2 GDB Research and Statistical Consulting
  • 3 GDB Research and Statistical Consulting/McLaren Macomb Hospital

PMID: 29262162; Bookshelf ID: NBK470395

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. Instead of collecting numerical data points or intervening and introducing treatments as in quantitative research, qualitative research helps generate hypotheses to further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a standalone study, relying purely on qualitative data, or as part of mixed-methods research that combines qualitative and quantitative data. This review introduces readers to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as "how" and "why." Due to the open-ended nature of the research questions, qualitative research design is often not linear like quantitative design. One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. Phenomena such as experiences, attitudes, and behaviors can be complex to capture accurately and quantitatively. In contrast, a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a particular time or during an event of interest. Quantifying qualitative data certainly is possible, but at its core, qualitative data is looking for themes and patterns that can be difficult to quantify, and it is essential to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

Qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches necessarily "compete" against each other along with the philosophical paradigms associated with each. While qualitative and quantitative approaches are different, they are not opposites and certainly not mutually exclusive. For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. Say a quantitative analysis has determined a correlation between length of stay and level of patient satisfaction: why does this correlation exist? This dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.

Qualitative Research Approaches

Ethnography

Ethnography as a research design originates in social and cultural anthropology and involves the researcher being directly immersed in the participant’s environment. Through this immersion, the ethnographer can use a variety of data collection techniques to produce a comprehensive account of the social phenomena that occurred during the research period. That is to say, the researcher’s aim with ethnography is to immerse themselves into the research population and come out of it with accounts of actions, behaviors, events, etc, through the eyes of someone involved in the population. Direct involvement of the researcher with the target population is one benefit of ethnographic research because it can then be possible to find data that is otherwise very difficult to extract and record.

Grounded theory

Grounded Theory is the "generation of a theoretical model through the experience of observing a study population and developing a comparative analysis of their speech and behavior." Unlike quantitative research, which is deductive and tests or verifies an existing theory, grounded theory research is inductive and, therefore, lends itself to research aimed at social interactions or experiences. In essence, Grounded Theory’s goal is to explain how and why an event occurs or how and why people might behave a certain way. Through observing the population, a researcher using the Grounded Theory approach can then develop a theory to explain the phenomena of interest.

Phenomenology

Phenomenology is the "study of the meaning of phenomena or the study of the particular.” At first glance, it might seem that Grounded Theory and Phenomenology are pretty similar, but the differences can be seen upon careful examination. At its core, phenomenology looks to investigate experiences from the individual's perspective. Phenomenology is essentially looking into the "lived experiences" of the participants and aims to examine how and why participants behaved a certain way from their perspective. Herein lies one of the main differences between Grounded Theory and Phenomenology. Grounded Theory aims to develop a theory for social phenomena through an examination of various data sources. In contrast, Phenomenology focuses on describing and explaining an event or phenomenon from the perspective of those who have experienced it.

Narrative research

One of qualitative research’s strengths lies in its ability to tell a story, often from the perspective of those directly involved in it. Reporting on qualitative research involves including details and descriptions of the setting involved and quotes from participants. This detail is called a "thick" or "rich" description and is a strength of qualitative research. Narrative research is rife with the possibilities of "thick" description as this approach weaves together a sequence of events, usually from just one or two individuals, hoping to create a cohesive story or narrative. While it might seem like a waste of time to focus on such a specific, individual level, understanding one or two people’s narratives for an event or phenomenon can help to inform researchers about the influences that helped shape that narrative. The tension or conflict of differing narratives can be "opportunities for innovation."

Research Paradigm

Research paradigms are the assumptions, norms, and standards underpinning different research approaches. Essentially, research paradigms are the "worldviews" that inform research. It is valuable for qualitative and quantitative researchers to understand what paradigm they are working within because understanding the theoretical basis of research paradigms allows researchers to understand the strengths and weaknesses of the approach being used and adjust accordingly. Different paradigms have different ontologies and epistemologies. Ontology is defined as the "assumptions about the nature of reality,” whereas epistemology is defined as the "assumptions about the nature of knowledge" that inform researchers' work. It is essential to understand the ontological and epistemological foundations of the research paradigm researchers are working within to allow for a complete understanding of the approach being used and the assumptions that underpin the approach as a whole. Further, researchers must understand their own ontological and epistemological assumptions about the world in general because their assumptions about the world will necessarily impact how they interact with research. A discussion of the research paradigm is not complete without describing positivist, postpositivist, and constructivist philosophies.

Positivist versus postpositivist

To further understand qualitative research, we must discuss positivist and postpositivist frameworks. Positivism is a philosophy that the scientific method can and should be applied to social and natural sciences. Essentially, positivist thinking insists that the social sciences should use natural science methods in their research. It stems from positivist ontology, that there is an objective reality that exists that is wholly independent of our perception of the world as individuals. Quantitative research is rooted in positivist philosophy, which can be seen in the value it places on concepts such as causality, generalizability, and replicability.

Conversely, postpositivists argue that social reality can never be one hundred percent explained, but could be approximated. Indeed, qualitative researchers have been insisting that there are “fundamental limits to the extent to which the methods and procedures of the natural sciences could be applied to the social world,” and therefore, postpositivist philosophy is often associated with qualitative research. An example of positivist versus postpositivist values in research might be that positivist philosophies value hypothesis-testing, whereas postpositivist philosophies value the ability to formulate a substantive theory.

Constructivist

Constructivism is a subcategory of postpositivism. Most researchers invested in postpositivist research are also constructivist, meaning they think there is no objective external reality; instead, reality is constructed. Constructivism is a theoretical lens that emphasizes the dynamic nature of our world. "Constructivism contends that individuals' views are directly influenced by their experiences, and it is these individual experiences and views that shape their perspective of reality." Constructivist thought focuses on how "reality" is not a fixed certainty and how experiences, interactions, and backgrounds give people a unique view of the world. Constructivism contends, unlike positivist views, that there is not necessarily an "objective" reality we all experience. This is the "relativist" ontological view that reality and our world are dynamic and socially constructed. Therefore, qualitative scientific knowledge can be inductive as well as deductive.

So why is it important to understand the differences in assumptions that different philosophies and approaches to research have? Fundamentally, the assumptions underpinning the research tools a researcher selects provide an overall base for the assumptions the rest of the research will have. It can even change the role of the researchers. For example, is the researcher an "objective" observer, such as in positivist quantitative work? Or is the researcher an active participant in the research, as in postpositivist qualitative work? Understanding the philosophical base of the study undertaken allows researchers to fully understand the implications of their work and their role within the research and reflect on their positionality and bias as it pertains to the research they are conducting.

Data Sampling

The better the sample represents the intended study population, the more likely the study is to capture the full range of relevant perspectives. The following are examples of participant sampling and selection:

Purposive sampling – selection based on the researcher's rationale for which participants will be most informative.

Criterion sampling – selection based on pre-identified factors.

Convenience sampling – selection based on availability.

Snowball sampling – selection by referral from other participants or people who know potential participants.

Extreme case sampling – targeted selection of rare cases.

Typical case sampling – selection based on regular or average participants.
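To make the referral-driven strategy above concrete, here is a minimal sketch of snowball sampling as a procedure. The referral network, names, and sample size are entirely invented for illustration; in a real study, referrals come from participants themselves, not a precomputed dictionary.

```python
# Hypothetical referral network: participant -> people they can refer.
# The names and links are illustrative, not from any real study.
referrals = {
    "ana": ["ben", "cal"],
    "ben": ["cal", "dee"],
    "cal": ["eve"],
    "dee": [],
    "eve": ["ana"],
}

def snowball_sample(seeds, max_size):
    """Grow a sample by following referrals from an initial seed set,
    stopping when referrals run out or the target size is reached."""
    sample, frontier = list(seeds), list(seeds)
    while frontier and len(sample) < max_size:
        person = frontier.pop(0)
        for referred in referrals.get(person, []):
            if referred not in sample and len(sample) < max_size:
                sample.append(referred)
                frontier.append(referred)
    return sample

print(snowball_sample(["ana"], max_size=4))  # -> ['ana', 'ben', 'cal', 'dee']
```

Note how the final sample depends heavily on who the seed participants are, which is exactly the bias researchers must weigh when choosing snowball sampling.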

Data Collection and Analysis

Qualitative research uses several techniques, including interviews, focus groups, and observation. [1] [2] [3] Interviews may be unstructured, with open-ended questions on a topic to which the interviewer adapts responsively, or structured, with a predetermined set of questions that every participant is asked. Interviews are usually one-on-one and are appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8-12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can act as participant-observers, sharing the experiences of the subjects, or as non-participant, detached observers.

While quantitative research design prescribes a controlled environment for data collection, qualitative data collection may take place in a central location or in the participants' environment, depending on the study goals and design. Qualitative research can generate a large amount of data. Data is transcribed and then coded, either manually or using computer-assisted qualitative data analysis software (CAQDAS) such as ATLAS.ti or NVivo.

After coding, qualitative research results can take various formats: a synthesis and interpretation presented with excerpts from the data, or themes and the development of a theory or model.
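As a toy illustration of what coding produces, the sketch below tags invented interview excerpts with codes from a small keyword codebook and tallies how often each code appears. Real coding (manual or via CAQDAS) is interpretive and develops codes inductively from the data; the codebook, excerpts, and keyword matching here are all simplifying assumptions.

```python
from collections import Counter

# Illustrative codebook mapping codes to indicator keywords; a real
# analysis would develop codes from the transcripts, not fix them upfront.
codebook = {
    "peer_pressure": ["friends", "pressure", "everyone"],
    "health": ["health", "lungs", "sick"],
    "cost": ["expensive", "money", "cost"],
}

# Hypothetical interview excerpts, not real data.
excerpts = [
    "All my friends smoked, so there was a lot of pressure to join in.",
    "I worried about my lungs and my health in general.",
    "Cigarettes got too expensive; the money added up fast.",
    "Everyone at the park smoked after school.",
]

def code_excerpt(text):
    """Return the set of codes whose keywords appear in the excerpt."""
    lowered = text.lower()
    return {code for code, words in codebook.items()
            if any(w in lowered for w in words)}

# Count how many excerpts each code appears in, suggesting themes.
theme_counts = Counter(code for e in excerpts for code in code_excerpt(e))
print(theme_counts.most_common())
```

The tallied codes are only the starting point: the researcher still has to interpret the coded excerpts and preserve their context and narrative.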

Dissemination

The healthcare team can use two reporting standards to standardize and facilitate the dissemination of qualitative research outcomes. The Consolidated Criteria for Reporting Qualitative Research or COREQ is a 32-item checklist for interviews and focus groups. The Standards for Reporting Qualitative Research (SRQR) is a checklist covering a more comprehensive range of qualitative research.

Applications

Many times, a research question will start with qualitative research. The qualitative research helps generate the research hypothesis, which can be tested with quantitative methods. After the data is collected and analyzed with quantitative methods, qualitative methods can be used to dive deeper into the data to better understand what the numbers truly mean and their implications. The qualitative techniques can then help clarify the quantitative data and refine the hypothesis for future research. Furthermore, with qualitative research, researchers can explore subjects that are poorly studied with quantitative methods, such as opinions, individual actions, and social science questions.

An excellent qualitative study design starts with a clearly defined goal or objective. The target population needs to be specified, and the method for obtaining information from the study population must be carefully detailed so that no part of the target population is omitted. A collection method should be selected that will obtain the desired information without overly limiting the data collected, because the information sought is often not neatly categorized. Finally, the design should ensure adequate methods for analyzing the data. An example may help clarify some of the various aspects of qualitative research.

A researcher wants to decrease the number of teenagers who smoke in their community. The researcher could begin by asking current teen smokers why they started smoking through structured or unstructured interviews (qualitative research). The researcher can also get together a group of current teenage smokers and conduct a focus group to help brainstorm factors that may have prevented them from starting to smoke (qualitative research).

In this example, the researcher has used qualitative research methods (interviews and focus groups) to generate a list of ideas of why teens start to smoke and factors that may have prevented them from starting to smoke. Next, the researcher compiles this data. The researcher finds that, hypothetically, peer pressure, health issues, cost, being considered "cool," and rebellious behavior all might increase or decrease the likelihood of teens starting to smoke.

The researcher creates a survey asking teen participants to rank how important each of the above factors is in either starting smoking (for current smokers) or not smoking (for current nonsmokers). This survey provides specific numbers (ranked importance of each factor) and is thus a quantitative research tool.
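The ranking survey above can be summarized with a simple mean-rank calculation: average each factor's rank across respondents, with lower values meaning greater importance. The responses below are invented for illustration; a real analysis would also consider rank-order statistics rather than plain means.

```python
# Hypothetical survey responses: each teen ranks the five factors from
# 1 (most important) to 5 (least important). Data invented for illustration.
factors = ["peer pressure", "health", "cost", "being cool", "rebellion"]
responses = [
    {"peer pressure": 1, "health": 2, "cost": 4, "being cool": 3, "rebellion": 5},
    {"peer pressure": 1, "health": 3, "cost": 5, "being cool": 2, "rebellion": 4},
    {"peer pressure": 2, "health": 1, "cost": 4, "being cool": 3, "rebellion": 5},
]

# Lower mean rank = more important on average across respondents.
mean_rank = {f: sum(r[f] for r in responses) / len(responses) for f in factors}
for factor in sorted(mean_rank, key=mean_rank.get):
    print(f"{factor}: {mean_rank[factor]:.2f}")
```

With these made-up numbers, peer pressure comes out as the top-ranked factor, which is the kind of result that would send the researcher back to qualitative methods to explore it further.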

The researcher can use the survey results to focus efforts on the one or two highest-ranked factors. Let us say the researcher found that health was the primary factor that keeps teens from starting to smoke, and peer pressure was the primary factor that contributed to teens starting smoking. The researcher can go back to qualitative research methods to dive deeper into these for more information. The researcher wants to focus on keeping teens from starting to smoke, so they focus on the peer pressure aspect.

The researcher can conduct interviews and focus groups (qualitative research) about what types and forms of peer pressure are commonly encountered, where the peer pressure comes from, and where smoking starts. The researcher hypothetically finds that peer pressure often occurs after school at the local teen hangouts, mostly in the local park. The researcher also hypothetically finds that peer pressure comes from older, current smokers who provide the cigarettes.

The researcher could further explore this observation made at the local teen hangouts (qualitative research) and take notes regarding who is smoking, who is not, and what observable factors are at play for peer pressure to smoke. The researcher finds a local park where many local teenagers hang out and sees that the smokers tend to hang out in a shady, overgrown area of the park. The researcher notes that smoking teenagers buy their cigarettes from a local convenience store adjacent to the park, where the clerk does not check identification before selling cigarettes. These observations fall under qualitative research.

If the researcher returns to the park and counts how many individuals smoke in each region, this numerical data would be quantitative research. Based on the researcher's efforts thus far, they conclude that local teen smoking and teenagers who start to smoke may decrease if there are fewer overgrown areas of the park and the local convenience store does not sell cigarettes to underage individuals.

The researcher could try to have the parks department reassess the shady areas to make them less conducive to smokers or identify how to limit the sales of cigarettes to underage individuals by the convenience store. The researcher would then cycle back to qualitative methods of asking at-risk populations their perceptions of the changes and what factors are still at play, and quantitative research that includes teen smoking rates in the community and the incidence of new teen smokers, among others.

Copyright © 2024, StatPearls Publishing LLC.


Volume 14, Issue 5

Medical researchers’ perceptions regarding research evaluation: a web-based survey in Japan

  • Akira Minoura (1)
  • Yuhei Shimada (2, 3)
  • Keisuke Kuwahara (4, 5, 6)
  • Makoto Kondo (7)
  • Hiroko Fukushima (8, 9)
  • Takehiro Sugiyama (3, 10) – http://orcid.org/0000-0001-5391-682X

Affiliations:

  • 1 Department of Hygiene, Public Health and Preventive Medicine, Showa University School of Medicine, Shinagawa-ku, Japan
  • 2 Department of Law and Politics, The University of Tokyo, Bunkyo-ku, Japan
  • 3 Diabetes and Metabolism Information Center, Research Institute, National Center for Global Health and Medicine, Shinjuku-ku, Japan
  • 4 Department of Epidemiology and Prevention, Center for Clinical Sciences, National Center for Global Health and Medicine, Shinjuku-ku, Japan
  • 5 Department of Public Health, Yokohama City University School of Medicine, Yokohama, Japan
  • 6 Department of Health Data Science, Graduate School of Data Science, Yokohama City University, Yokohama, Japan
  • 7 Department of Anatomy and Neuroscience, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
  • 8 Department of Pediatrics, University of Tsukuba Hospital, Tsukuba, Japan
  • 9 Department of Child Health, Institute of Medicine, University of Tsukuba, Tsukuba, Japan
  • 10 Department of Health Services Research, Institute of Medicine, University of Tsukuba, Tsukuba, Japan
  • Correspondence to Dr Takehiro Sugiyama; tsugiyama{at}hosp.ncgm.go.jp

Objectives Japanese medical academia continues to depend on quantitative indicators, contrary to the general trend in research evaluation. To understand this situation better and facilitate discussion, this study aimed to examine how Japanese medical researchers perceive quantitative indicators and qualitative factors of research evaluation and their differences by the researchers’ characteristics.

Design We employed a web-based cross-sectional survey and distributed the self-administered questionnaire to academic society members via the Japanese Association of Medical Sciences.

Participants We received 3139 valid responses representing Japanese medical researchers in any medical research field (basic, clinical and social medicine).

Outcomes The subjective importance of quantitative indicators and qualitative factors in evaluating researchers (eg, the journal impact factor (IF) or the originality of the research topic) was assessed on a four-point scale, with 1 indicating ‘especially important’ and 4 indicating ‘not important’. The attitude towards various opinions in quantitative and qualitative research evaluation (eg, the possibility of research misconduct or susceptibility to unconscious bias) was also evaluated on a four-point scale, ranging from 1, ‘strongly agree’, to 4, ‘completely disagree’.

Results Notably, 67.4% of the medical researchers, particularly men, younger and basic medicine researchers, responded that the journal IF was important in researcher evaluation. Most researchers (88.8%) agreed that some important studies do not get properly evaluated in research evaluation using quantitative indicators. The respondents perceived quantitative indicators as possibly leading to misconduct, especially in basic medicine (strongly agree—basic, 22.7%; clinical, 11.7%; and social, 16.1%). According to the research fields, researchers consider different qualitative factors, such as the originality of the research topic (especially important—basic, 46.2%; social, 39.1%; and clinical, 32.0%) and the contribution to solving clinical and social problems (especially important—basic, 30.4%; clinical, 41.0%; and social, 52.0%), as important. Older researchers tended to believe that qualitative research evaluation was unaffected by unconscious bias.

Conclusion Despite recommendations from the Declaration on Research Assessment and the Leiden Manifesto to de-emphasise quantitative indicators, this study found that Japanese medical researchers still tend to prioritise the journal IF and other quantitative indicators based on English-language publications in research evaluation. Therefore, it is crucial to review research evaluation methods continually while respecting the viewpoints of researchers from different research fields, generations and genders.

  • MEDICAL EDUCATION & TRAINING
  • Health policy
  • GENERAL MEDICINE (see Internal Medicine)

Data availability statement

Data are available upon reasonable request. Data used in the analysis will be made available to researchers upon request, in compliance with ethical guidelines and with the ethics committee’s approval.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2023-079269


STRENGTHS AND LIMITATIONS OF THIS STUDY

A web-based survey of Japanese medical researchers across research fields was conducted with the cooperation of various medical societies and the Japanese Association of Medical Sciences.

The questionnaire was developed through focus group interviews with 22 medical researchers from various backgrounds.

The subjective importance of quantitative indicators and qualitative factors in evaluating researchers, stratified by the respondents’ characteristics, was demonstrated.

The number of responses was limited when compared with the total number of medical researchers in Japan.

The design of a web-based self-administered survey could possibly result in bias.

Introduction

Evaluating research is essential for the continuous advancement of scientific progress nationally and internationally. 1 Although there is no universal definition, research evaluation refers to the assessment of all research project processes, from the planning of a research project to the dissemination of its results and the development of subsequent research areas. 2 Regardless of whether the evaluation is quantitative or qualitative, research evaluation assesses performance in relation to the research missions or objectives. 3 Researcher evaluation, in turn, assesses the researchers themselves; depending on the evaluation objective, it may include their cumulative research activities as well as non-research activities such as education, professional practice and administration. 4

However, some quantitative metrics of scientific output, such as the number of English-language publications and the number of citations, are considered important in the allocation of funds and the recruitment of researchers at universities. 5 In particular, the journal impact factor (IF), originally a measure of journals rather than of individual papers, has occasionally been used to evaluate the quality of an article or the productivity of a researcher. As a countermeasure to this trend, actions have been taken worldwide to promote responsible research assessment, symbolised by the numerous researchers and organisations signing the Declaration on Research Assessment (DORA), which primarily opposes the use of the IF in research evaluation. 6 Furthermore, the Leiden Manifesto for Research Metrics warns against the pervasive misapplication not only of the IF but of quantitative indicators in general to the evaluation of scientific performance. 3 Recently, the Science Council of Japan issued a recommendation regarding research evaluation, stating that quantitative assessment methods should not be overemphasised; it hoped to introduce international trends and help Japanese researchers develop appropriate ways to conduct research evaluations. 7

However, the current state of research evaluation has not yet achieved the stated goal. Indeed, the fourth Medium-Term Plans of National University Corporations, which are required by law to establish Key Performance Indicators to achieve the ministry’s Medium-Term Goals, state that performance will be measured with a focus on quantitative indicators, including the number of published articles. 8 This is also true in the field of medicine: combined with the fierce competition for positions as medical researchers, publishing in journals with high IFs is encouraged regardless of differences between fields. 9 Consequently, this merit-based evaluation, combined with an overcompetitive environment, puts pressure on researchers to publish, potentially making them more susceptible to research misconduct. 10–12

To address this contentious situation and find a solution, it is important to understand how medical researchers internalise the evaluation axes applied to their research and to themselves as researchers, and how they interpret the evaluations they receive. Internationally, in addition to studies aimed at the entire research community, 13–15 some studies have investigated medical researchers’ perceptions. 16–19 Similarly, in Japan, researchers’ perspectives on the evaluation system have been discussed, 20 21 and the problems with current evaluation practices have been highlighted among domestic medical researchers. 22 23 To our knowledge, however, no previous research has measured medical researchers’ perceptions of research/researcher evaluation using a large-scale questionnaire.

This study aims to clarify the perceptions of Japanese researchers in medicine regarding research evaluation and extract the problems they face. We conducted a questionnaire survey among medical researchers in the fields of basic, clinical and social medicine to examine the characteristics and issues in the current evaluation axis of medical researchers and identify the evaluation methods that can be considered in the future. Specifically, the study identifies the current state of how medical research and researchers should be evaluated.

Development of the questionnaire

Figure 1 presents an overview of the survey, and the Method Detail in the online supplemental material describes the details. A team of volunteer junior faculty members of the Scientific Committee for the 31st General Assembly of the Japanese Association of Medical Sciences worked on this study. Because no established measures exist for this topic, we developed a preliminary questionnaire and refined it through focus group interviews (FGIs) with researchers affiliated with member societies of the Japanese Association of Medical Sciences (eg, the Japanese Society of Internal Medicine, the Japan Surgical Society, the Japanese Association of Anatomists and the Japanese Society of Public Health), (non-medical) experts in research evaluation, senior researchers, and early-career researchers and students. 24 The FGIs allowed us to extract opinions on research evaluation across disciplines and career stages in the Japanese medical field.

Figure 1 Overview of the survey. This study was conducted in two phases: a survey to improve and finalise the questionnaire (based on focus group interviews and receiving comments) and the implementation of our web-based survey. The details are described in the Method Detail in the online supplemental material.

Survey design

After each interview, we reviewed and revised the questionnaire based on the participants’ opinions. Using the revised questionnaire, we conducted a web-based survey of medical researchers. We requested, through the Japanese Association of Medical Sciences, that medical academic societies announce and distribute the survey to their members. However, the announcement was voluntary; the method varied between societies (eg, email newsletter or notification on the society’s website), and some societies may not have sent the announcement to their members. The organisation represents the entire Japanese medical research community, ensuring the broadest possible reach to the medical researchers who are the focus of our research. In Japan, in anticipation of the increasing sophistication of medical care and a shrinking medical workforce due to the declining birthrate, the way doctors work will undergo major legal changes in 2024, and medical researchers are becoming increasingly interested in research evaluation. On the survey website, after an explanation of the present survey, those who did not consent to the study and those whose daily work (ie, work before going on maternity or childcare leave) was not related to research were asked to leave the website and were therefore excluded. The survey period was from December 14, 2022, to January 17, 2023.

Statistical analyses

The survey used a self-administered questionnaire to obtain responses on the current status of, and issues with, how medical research and researchers are evaluated. In addition to descriptive statistics, cross-tabulations stratified by factors such as age, gender, position and family situation were calculated to reveal differences across subgroups; the other variables indicate respondents’ characteristics. To analyse the results efficiently, we summarised the characteristics of respondents into fewer classifications. Questionnaires with the same answer to all questions were considered invalid and were excluded. To confirm the robustness of the cross-tabulation results, we additionally examined adjusted values: ordered logistic regression was used to adjust for gender, research field and age and to calculate the predicted percentage of each answer. We excluded ‘I do not know’ responses from the adjusted analyses. The methods are detailed in the online supplemental material . Descriptive analyses were conducted using QuickCross (Macromill, Inc., Minato-ku, Tokyo, Japan), and statistical analyses were conducted using Stata 17.0 (Stata Corp., TX, USA).
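The adjusted percentages come from an ordered (proportional-odds) logistic model. As a rough illustration of how such a model turns a linear predictor into predicted percentages for a four-point answer scale, here is a minimal pure-Python sketch; the coefficient and cutpoint values are invented for illustration and are not the study's estimates (the authors used Stata 17.0).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(x_beta, cutpoints):
    """Predicted category probabilities under a proportional-odds model.

    x_beta: linear predictor (x . beta) for one respondent.
    cutpoints: increasing thresholds theta_1 < ... < theta_{K-1}.
    Returns K probabilities, one per ordinal answer category.
    """
    # Cumulative probabilities P(Y <= k) = sigmoid(theta_k - x.beta).
    cdf = [sigmoid(t - x_beta) for t in cutpoints] + [1.0]
    # Category probabilities are consecutive differences of the CDF.
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]

# A four-point scale needs three cutpoints (values here are made up).
probs = ordered_logit_probs(x_beta=0.5, cutpoints=[-1.0, 0.3, 1.8])
assert abs(sum(probs) - 1.0) < 1e-9  # probabilities sum to one
```

Averaging such predicted probabilities over respondents, at fixed values of the adjustment covariates, yields adjusted answer percentages of the kind reported in the supplemental tables.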

Ethical considerations

The study protocol was approved by the Institutional Review Board of the National Centre for Global Health and Medicine (NCGM-S-004530-01).

Results

A total of 3169 researchers answered the questionnaire during the survey period; 386 respondents either did not consent to participate or declined because research activity was not a part of their job. Among the responses, 30 were excluded because they had invalid answers; thus, the analysis included 3139 researchers (2244 men, 852 women and 43 others). The response rate could not be calculated because the number of potential respondents (ie, medical researchers who received the survey announcement) was unknown. Table 1 shows the characteristics of the participants, whereas online supplemental table S1 presents more comprehensive descriptive analyses of the survey answers. By academic rank, professor level (eg, professors, directors of clinical departments or directors of research laboratories) was predominant (n=1213, 38.6%), and by employment status, full-time (tenured) employees were the most common (n=2009, 64.0%). Regarding effort for research, 33.3% (n=1048) of researchers answered that research work accounted for half or more of their work time.

Table 1 Characteristics of survey participants

For quantitative indicators in evaluating researchers ( figure 2 for stratified results, online supplemental table S1 Q3-1 for overall results and online supplemental table S3 Q3-1-1, Q3-1-2 and Q3-1-4 for details of stratified results), 67.4% answered that the journal IF was important (especially important, n=616, 19.6%; important, n=1501, 47.8%), notably more in basic medicine than in clinical and social medicine, among younger researchers than older researchers and among men than women. Compared with respondents with no medical license, physicians and other healthcare professionals were more likely to respond that IFs are important. The number of papers published in English-language journals was considered more important (especially important, n=1045, 33.3%; important, n=1625, 51.8%) than the number published in Japanese-language journals (especially important, n=106, 3.4%; important, n=947, 30.2%). The preference for English-language journals over Japanese-language journals was more pronounced in basic medicine than in clinical and social medicine, in younger researchers (39 years old or younger) than older researchers (60 years old or older) and in men than in women. Online supplemental table S4 shows the results of cross-tabulations stratified by license and education, to observe how the effect of education differed by medical profession. Physicians and dentists, both those with only an MD or doctor of dental surgery and those with a PhD, placed more importance on the IF in research evaluation ( online supplemental table S4 Q3-1-4). For respondents with other medical licenses, quantitative indicators were valued more highly with higher educational attainment, in the order doctoral, master’s and undergraduate.
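The headline percentages combine the top two answer categories; for instance, the 67.4% figure for the journal IF can be reproduced from the counts stated above (a simple check, using only numbers reported in the text):

```python
# Counts reported for the journal impact factor question.
n_total = 3139       # valid responses analysed
n_especially = 616   # 'especially important' (19.6%)
n_important = 1501   # 'important' (47.8%)

share = (n_especially + n_important) / n_total
print(round(100 * share, 1))  # 67.4
```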

Figure 2 Quantitative indicators in evaluating the surrounding researchers. Each colour represents the percentage of answers. The left, middle and right columns are the cross-tabulation results by research field, age and gender, respectively. The exact values are shown in online supplemental table S3 (Q3-1-1, Q3-1-2 and Q3-1-4 for tables stratified by research field, age and gender). Note: the terms ‘clinical’, ‘basic’ and ‘social’ refer to clinical, basic and social medicine; ‘yo’ means ‘years old’.

Regarding the qualitative factors in evaluating researchers ( figure 3 for stratified results, online supplemental table S1 Q3-2 for overall results and online supplemental table S3 Q3-2-2, Q3-2-3 and Q3-2-5 for details of stratified results), the originality of the research topic (especially important, n=1159, 37.0%; important, n=1614, 51.5%) and contribution to the advancement of science (especially important, n=1172, 37.4; important, n=1485, 47.4%) were considered more important than the exhaustiveness of analyses (ie, the degree that necessary analyses are thoroughly performed) (especially important, n=392, 12.6%; important, n=1612, 51.6%). The originality of the research topic was considered more important in basic medicine than in clinical and social medicine (especially important—basic, n=321, 46.2%; social, n=223, 39.1%; and clinical, n=541, 32.0%), whereas the contribution to solving clinical and social problems was considered more important in social medicine than in basic and clinical medicine (especially important—basic, n=211, 30.4%; clinical, n=692, 41.0%; and social, n=295, 52.0%).

Figure 3 Qualitative factors in evaluating the surrounding researchers. Each colour represents the percentages of answers. The left, middle and right columns are the cross-tabulation results by research field, age and gender, respectively. The exact values are shown in online supplemental table S3 (Q3-2-2, Q3-2-3 and Q3-2-5 for tables stratified by research field, age and gender). Note: the terms ‘clinical’, ‘basic’ and ‘social’ refer to clinical, basic and social medicine; ‘yo’ means ‘years old’.

Figure 4 illustrates the researchers’ perceptions of quantitative indicators and qualitative factors for research evaluation. Most researchers (88.8%) agreed that some important studies do not get properly evaluated in research evaluation using quantitative indicators, especially in basic and social medicine, among the 40–49 age group and men. The use of quantitative indicators was perceived to possibly lead to misconduct for researchers in basic medicine compared with those in clinical and social medicine (strongly agree—basic: 22.7%, clinical: 11.7%, social: 16.1%). Older researchers tended to consider that qualitative research evaluation was not affected by unconscious bias compared with younger researchers. Furthermore, online supplemental table S3 stratified by academic rank (Q3-4-8, Q3-5-2) revealed that respondents at the professor level were less likely to believe that focusing on quantitative indicators would lead to underestimation of non-research activities (eg, education, clinical practice and social activities) and were unaware of susceptibility of qualitative evaluation to biases caused by interpersonal relationships or unconscious biases.

Figure 4 Researchers’ perceptions of quantitative indicators and qualitative factors for research evaluation. Each colour represents the percentages of answers. The left, middle and right columns are the cross-tabulation results by research field, age and gender, respectively. The exact values are shown in online supplemental table S3 (Q3-4-5, Q3-4-7 and Q3-5-2 for tables stratified by research field, age and gender). Note: the terms ‘clinical’, ‘basic’ and ‘social’ refer to clinical, basic and social medicine; ‘yo’ means ‘years old’.

Online supplemental table S5 shows the proportion for each choice, adjusted for gender, research field and age category, using ordered logistic regression analysis. These tables suggest that the main results shown in figures 2–4 were not explained solely by confounding.

Regarding DORA recognition, only 10.1% of the respondents knew its contents, whereas 28.8% knew the name but not the contents ( online supplemental table S1 Q4-3). In other words, 61.1% were unfamiliar with the name DORA. Given that DORA recognition represents evaluation knowledge, this variable’s stratified results can be interpreted as the effect of evaluation literacy. Researchers who recognised the DORA tended to place slightly less emphasis on the importance of the IF ( online supplemental table S3 stratified by research evaluation literacy Q3-1-4). Among them, those who knew its contents were likely to value the qualitative factors such as the originality of the topic or methodology (Q3-2-1 and Q3-2-2), contribution to the advancement of science (Q3-2-3) and contribution to clinical and social problem-solving (Q3-2-4). Furthermore, they also agreed that some important studies do not get properly evaluated by quantitative indicators (Q3-4-5), quantitative indicators may lead to research misconduct (Q3-4-7), and the validity of the qualitative evaluator’s (ie, reviewer’s) assessments should be evaluated (Q3-5-4).

We obtained a total of 645 responses for open-ended inquiries. We classified these responses into seven categories. Out of the 232 survey responses, 37 recommended public reporting of results, 34 suggested incorporating results into policy, 77 highlighted survey problems and criticisms, and 91 expressed positive attitudes towards the survey. The responses that mentioned work activities other than research (n=84) included education, clinical practice, social activities, administrative work and peer review. We also received responses regarding institutional-environmental conditions for research evaluation (n=77), and problems were identified, such as differences in fields, evaluators’ abilities and the amount of available financial and human resources. Regarding the nature of the indicators (n=69), opinions were divided into two groups: 32 criticised and 26 supported the quantitative indicators. Online supplemental table S6 contains additional categories (activities outside of work (n=11), structural conflicts between valuable research and evaluation (n=11) and evaluation fatigue (n=7)), subcategories and examples.

Summary findings

The web-based survey yielded two major findings. First, it was discovered that the majority of medical researchers in Japan, particularly those in basic medicine, young researchers and men, believe that IF and other quantitative indicators based on English publications are appropriate for assessing researchers. Second, medical researchers’ perceptions of quantitative and qualitative indicators in evaluating medical research and researchers varied depending on the participants’ characteristics, such as research field, age and gender. When evaluating researchers, basic medicine researchers were more likely to consider the number of articles published in English-language journals, the journal’s IF and the originality of the research topic. Meanwhile, more social medicine researchers than other medical researchers believed that the number of articles published in Japanese-language journals and the contribution to the resolution of clinical and social problems were important. To our knowledge, this is the first study to clarify perceptions of research/researcher evaluation among medical researchers in Japan.

Reliance on quantitative indicators derived from English-language publications

The general tendency to emphasise the IF across disciplines deviates significantly from the DORA, which recommends against using IFs for research evaluation. Not only the IF but also other quantitative indicators based on English-language publications were regarded as significant factors for evaluation, precisely the situation the Leiden Manifesto is concerned with. This result was widely observed among respondents, as 67.4% placed importance on the IF and 85.1% placed importance on the number of papers in English-language journals ( online supplemental table S3 Q3-1). It demonstrates that metrics play an important role in the evaluation system across the Japanese medical research community as a whole. To advance current evaluation practices, we must approach the entire medical community rather than a specific group. Although many researchers and research institutes in Japan use the IF as a metric of a study’s importance or a researcher’s productivity (eg, by adding up the IFs of the journals in which they published papers), it should be noted that the IF was originally designed to measure the influence of a scientific journal, rather than the quality of the research or the researchers. 5

In addition to the general over-reliance on these metrics, there are attribute-specific trends in the preference for quantitative measures. Younger researchers were more likely to refer to the IF and other quantitative indicators based on English-language publications, perhaps because many are in fiercely competitive positions and tended to internalise the widely used evaluation metrics. This tendency to place importance on evaluation axes expressed through published papers is consistent with the study’s results: among the qualitative factors for evaluating researchers, the exhaustiveness of analyses was rated lower than the originality of the research topic and the contribution to the advancement of science. This may be due in part to the requirement to present findings through the medium of a paper, which demands conciseness rather than exhaustiveness. 25 The importance of the IF in research evaluation was not significantly affected by knowledge of DORA; such knowledge was, however, linked to a higher valuation of qualitative factors such as research topic originality. As only 10% of participants claimed to be familiar with DORA and its contents, advocating for and supporting these activities and statements may influence perceptions of research evaluation.

Variation in research evaluation axes by attributes

Remarkably, the evaluation axes differed among research fields, age groups, genders and other subcategories. Researchers in basic medicine tend to rate IFs and the number of papers published in English-language journals higher and the number of papers in Japanese-language journals lower. This may be because the research products of basic medicine are often applicable in any country; thus, it is reasonable to publish in English. In addition, basic medicine is more susceptible to funding shortages due to the maintenance costs of laboratory equipment, so researchers in this field may need to generate well-evaluated outputs. However, this may not be the case in clinical and social medicine, where the main readers of research products may be clinicians or policymakers who are not necessarily well versed in English. 26 Clinical and social medicine, in contrast to basic medicine, sometimes focus on the domestic context, which is separate from international journals. 23 Furthermore, researchers in clinical or social medicine are expected to engage in a variety of tasks in addition to writing papers in English (eg, clinical practice, guideline development and social practice). Therefore, it is not easy to establish a universal evaluation axis across research fields.

Furthermore, the difference in perception by age and academic rank may partly represent the contrast between evaluators and those who are evaluated. For example, while older and professor-level researchers placed less importance on quantitative indicators, they tended to be unaware of the risk of unconscious biases derived from qualitative evaluation. Although senior researchers frequently evaluate junior researchers and therefore have the authority to determine evaluation axes, discussions about research evaluation between age groups can foster mutual understanding and enhance young researchers’ capacity and responsibility to take on future research fields. 27 28

Mild differences were also observed by gender, such as women placing less emphasis on the IF than men, which persisted after adjustment for covariates; thus, gender diversity should be considered when discussing research evaluation. Meanwhile, the low number of women in management positions 29 and difficulties in maintaining work-life balance, which are expected to improve in the near future, may have contributed to reducing the observed differences based on gender in these results. Regarding profession and education, respondents with a medical license generally tended to respond that IFs are important; it is interesting that the effect of graduate education was heterogeneous between physicians/dentists and other medical professionals.

Implication of the study for further consensus

This study did not set out to find a new indicator for research evaluation. However, since we found that the situation in Japan differs from what the DORA and the Leiden Manifesto aim for, many researchers may be unsure which evaluation axis to use and may require alternative research evaluation axes to rely on.

One possible solution is to develop more reliable evaluation criteria. In fact, the ‘field-weighted citation impact (FWCI)’ and the ‘top 10% of highly cited papers’ compensate for differences between disciplines better than the (unadjusted) journal IF. 30 31 This may help alleviate differences between disciplines, such as basic, clinical and social medicine. The H-index takes into account both the number and the impact of articles written by a researcher. 32 Additionally, advanced metrics known as altmetrics, which measure social impact, are being developed. In the age of open-access journals and social networking services, efforts to establish metrics for medical research evaluation should continue. However, it is difficult to develop a definitive metric. For example, the FWCI or the top 10% of highly cited papers cannot fully account for the differences between research fields; citations may not be the best indicator of impact in some fields. The H-index, which is influenced by the researcher’s academic age and research field, should be used with caution. Indeed, its excessive use was questioned by scientometricians and contributed to the publication of the Leiden Manifesto. 3 33 Moreover, even the development of a definitive metric would not mean that we could stop thinking about evaluation, because once a metric is established, it becomes an objective in itself, undermining efforts towards overall optimisation.
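Of the metrics mentioned, the H-index has a particularly simple definition: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch (the citation counts below are hypothetical):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times give h = 4: four papers
# have at least four citations each, but not five with at least five.
assert h_index([10, 8, 5, 4, 3]) == 4
```

The definition makes clear why the H-index grows with academic age and varies across fields, as noted above: it can never exceed the number of papers published.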

Rather than looking for better metrics, we must accept the limitations of quantitative indicators and share the understanding that they should only be used in conjunction with qualitative evaluation. Even though it is difficult to evaluate research and researchers solely in a qualitative manner as academic disciplines become increasingly specialised and subdivided, it is important to conduct research/researcher evaluation based on a deeper qualitative assessment in a balanced manner. 3 6 One limitation of qualitative evaluation is its time-consuming nature; it would be advantageous to reach an agreement on the importance of this time-consuming process and to shorten the time required for qualitative evaluation. Another limitation concerns the transparency of the basis for assessments. It is desirable to reconsider which aspects of research and researchers should be valued, namely, the research/researcher’s mission, within each community or organisation, as well as to clarify the evaluation objective. 3 34

Strengths and limitations

This study conducted a nationwide survey with the assistance of the Japanese Association of Medical Sciences, the umbrella organisation for all medical academic societies in Japan. This design allowed us to reach our target population (ie, medical researchers in Japan) as fully as possible. In total, we received 3139 valid responses from medical researchers in Japan, which improved the robustness of the analysis.

The study’s limitations include the use of a self-administered web-based survey. Furthermore, although the present sample of over 3000 responses produced robust results, the survey was completed by only a subset of medical researchers. Those already interested in research evaluation are more likely to complete the survey, which may bias the results. The respondents’ gender imbalance was obvious and appeared to reflect the underlying gender gap among Japanese doctors and medical researchers, 29 35 which we and others regard as a problem in and of itself. 36 37 Despite these sampling limitations, this study is the first to examine how medical researchers across Japan perceive the evaluation of research and researchers. Its results are expected to improve researcher evaluation methods and, in turn, research performance.

The primary analyses (shown in figures 2–4) focused on stratification by age group, gender and research field, three of the eight characteristics listed in table 1. Cross-tabulation results for all variables are provided in online supplemental table S3. Future studies should explore the relationships between these variables in greater depth; for example, the effects of academic rank and age cannot be completely separated.

Conclusions

Although most medical researchers in Japan refer to IF and other quantitative indicators based on English paper publications for evaluating researchers, the ideal evaluation axes differ across research fields, generations and genders. We believe it is important to assess research evaluations and constantly review whether there is room for improvement while respecting different ideas from every research field, generation and gender.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

This study involves human participants and was approved by the Institutional Review Board of the National Center for Global Health and Medicine (NCGM-S-004530-01), and the study protocol was approved on 9 December 2022. Participants gave informed consent to participate in the study before taking part.

Acknowledgments

The authors appreciate the interview and survey respondents’ time and effort. For their unwavering support of the research, the authors thank the members of the committee of junior faculties (U40 Committee) of the Scientific Committee and executive members of the 31st General Assembly of the Japanese Association of Medical Sciences. The authors appreciate the questionnaire’s review and distribution by committee members of the Japanese Association of Medical Sciences and the Japanese Association of Medical Sciences Coalitions. The authors are grateful to the members of the Young Academy of Japan and the Science Council of Japan for several productive discussions. The authors also thank Dr Kenjiro Imai, Dr Noriko Ihana-Sugiyama and Ms Akiko Kimura-Wakui for supporting project administration. The authors extend their gratitude to Dr Takahiro Higashi, Dr Yoshiharu Fukuda and Dr Hideaki Shiroyama for their insightful advice. Finally, the authors appreciate the assistance provided by Dr Kenkichi Takase, Dr Amane Koizumi and Dr Kazuhiro Hayashi throughout the research.

  • Declaration on Research Assessment (DORA). Available: https://sfdora.org/ [Accessed 23 Aug 2023].

Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Contributors Conceptualisation: AM, KK, MK, HF, TS. Methodology: AM, YS, KK, MK, HF, TS. Investigation: AM, YS, KK, MK, HF, TS. Visualisation: AM, YS, KK, MK, HF, TS. Funding acquisition: AM, KK, MK, HF, TS. Project administration: AM, TS. Supervision: TS. Writing – original draft: AM, YS, TS. Writing – review and editing: AM, YS, KK, MK, HF, TS. All authors have conducted the following: (1) substantial contributions to the conception or design of the work or the acquisition, analysis or interpretation of data for the work; (2) drafting the work or reviewing it critically for important intellectual content; (3) final approval of the version to be published; and (4) agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

TS is the guarantor of this work and, as such, has full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Funding This work was supported by the 53rd Kurata Grants (the Hitachi Global Foundation) in Humanities and Social Sciences 'Reconsideration of evaluation criteria for medical research and researchers aiming for better medical care from the standpoint of young medical researchers' (No. 1523).

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Int J Prev Med

Qualitative Methods in Health Care Research

Vishnu Renjith

School of Nursing and Midwifery, Royal College of Surgeons Ireland - Bahrain (RCSI Bahrain), Al Sayh Muharraq Governorate, Bahrain

Renjulal Yesodharan

1 Department of Mental Health Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India

Judith A. Noronha

2 Department of OBG Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India

Elissa Ladd

3 School of Nursing, MGH Institute of Health Professions, Boston, USA

Anice George

4 Department of Child Health Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India

Healthcare research is a systematic inquiry intended to generate robust evidence about important issues in the fields of medicine and healthcare. Qualitative research has ample possibilities within the arena of healthcare research. This article aims to inform healthcare professionals regarding qualitative research, its significance, and applicability in the field of healthcare. A wide variety of phenomena that cannot be explained using the quantitative approach can be explored and conveyed using a qualitative method. The major types of qualitative research designs are narrative research, phenomenological research, grounded theory research, ethnographic research, historical research, and case study research. The greatest strength of the qualitative research approach lies in the richness and depth of the healthcare exploration and description it makes. In health research, these methods are considered as the most humanistic and person-centered way of discovering and uncovering thoughts and actions of human beings.

Introduction

Healthcare research is a systematic inquiry intended to generate trustworthy evidence about issues in the field of medicine and healthcare. The three principal approaches to health research are the quantitative, the qualitative, and the mixed methods approach. The quantitative research method uses data, which are measures of values and counts and are often described using statistical methods, which in turn aid the researcher in drawing inferences. Qualitative research incorporates the recording, interpreting, and analyzing of non-numeric data in an attempt to uncover the deeper meanings of human experiences and behaviors. Mixed methods research, the third methodological approach, involves the collection and analysis of both qualitative and quantitative information with the objective of solving different but related questions, or at times the same questions.[ 1 , 2 ]

In healthcare, qualitative research is widely used to understand patterns of health behaviors, describe lived experiences, develop behavioral theories, explore healthcare needs, and design interventions.[ 1 , 2 , 3 ] Because of its ample applications in healthcare, there has been a tremendous increase in the number of health research studies undertaken using qualitative methodology.[ 4 , 5 ] This article discusses qualitative research methods, their significance, and applicability in the arena of healthcare.

Qualitative Research

Diverse academic and non-academic disciplines utilize qualitative research as a method of inquiry to understand human behavior and experiences.[ 6 , 7 ] According to Munhall, “Qualitative research involves broadly stated questions about human experiences and realities, studied through sustained contact with the individual in their natural environments and producing rich, descriptive data that will help us to understand those individual's experiences.”[ 8 ]

Significance of Qualitative Research

The qualitative method of inquiry examines the 'how' and 'why' of decision making, rather than the 'when,' 'what,' and 'where.'[ 7 ] Unlike quantitative methods, the objective of qualitative inquiry is to explore, narrate, and explain the phenomena and make sense of the complex reality. Health interventions, explanatory health models, and medical-social theories could be developed as an outcome of qualitative research.[ 9 ] Understanding the richness and complexity of human behavior is the crux of qualitative research.

Differences between Quantitative and Qualitative Research

The quantitative and qualitative forms of inquiry vary based on their underlying objectives. They are in no way opposed to each other; instead, these two methods are like two sides of a coin. The critical differences between quantitative and qualitative research are summarized in Table 1 .[ 1 , 10 , 11 ]

Differences between quantitative and qualitative research

Qualitative Research Questions and Purpose Statements

Qualitative questions are exploratory and open-ended. A well-formulated study question forms the basis for developing a protocol and guides the selection of the design and data collection methods. Qualitative research questions generally involve two parts: a central question and related subquestions. The central question is directed towards the primary phenomenon under study, whereas the subquestions explore subareas of focus. It is advised not to have more than five to seven subquestions. A commonly used framework for designing a qualitative research question is the 'PCO framework', wherein P stands for the population under study, C stands for the context of exploration, and O stands for the outcome/s of interest.[ 12 ] The PCO framework guides researchers in crafting a focused study question.

Example: In the question, “What are the experiences of mothers on parenting children with Thalassemia?”, the population is “mothers of children with Thalassemia,” the context is “parenting children with Thalassemia,” and the outcome of interest is “experiences.”

The purpose statement specifies the broad focus of the study, identifies the approach, and provides direction for the overall goal of the study. The major components of a purpose statement include the central phenomenon under investigation, the study design, and the population of interest. Qualitative research does not require an a priori hypothesis.[ 13 , 14 , 15 ]

Example: Borimnejad et al . undertook a qualitative research on the lived experiences of women suffering from vitiligo. The purpose of this study was, “to explore lived experiences of women suffering from vitiligo using a hermeneutic phenomenological approach.” [ 16 ]

Review of the Literature

In quantitative research, researchers do an extensive review of the scientific literature prior to the commencement of the study. In qualitative research, however, only a minimal literature search is conducted at the beginning of the study. This is to ensure that the researcher is not influenced by the existing understanding of the phenomenon under study. The minimal literature review helps researchers avoid conceptual pollution of the phenomenon being studied. Nonetheless, an extensive review of the literature is conducted after data collection and analysis.[ 15 ]

Reflexivity

Reflexivity refers to critical self-appraisal about one's own biases, values, preferences, and preconceptions about the phenomenon under investigation. Maintaining a reflexive diary/journal is a widely recognized way to foster reflexivity. According to Creswell, “Reflexivity increases the credibility of the study by enhancing more neutral interpretations.”[ 7 ]

Types of Qualitative Research Designs

The qualitative research approach encompasses a wide array of research designs. Terms such as types, traditions, designs, strategies of inquiry, varieties, and methods are used interchangeably. The major types of qualitative research designs are narrative research, phenomenological research, grounded theory research, ethnographic research, historical research, and case study research.[ 1 , 7 , 10 ]

Narrative research

Narrative research focuses on exploring the life of an individual and is ideally suited to telling the stories of individual experiences.[ 17 ] The purpose of narrative research is to utilize 'storytelling' as a method of communicating an individual's experience to a larger audience.[ 18 ] The roots of narrative inquiry extend to the humanities, including anthropology, literature, psychology, education, history, and sociology. Narrative research encompasses the study of individual experiences and learning the significance of those experiences. The data collection procedures mainly include interviews, field notes, letters, photographs, diaries, and documents collected from one or more individuals. Data analysis involves analyzing the stories or experiences through the "re-storying of stories" and developing themes, usually in chronological order of events. Rolls and Payne argued that narrative research is a valuable approach in healthcare research for gaining deeper insight into patients' experiences.[ 19 ]

Example: Karlsson et al . undertook a narrative inquiry to “explore how people with Alzheimer's disease present their life story.” Data were collected from nine participants. They were asked to describe about their life experiences from childhood to adulthood, then to current life and their views about the future life. [ 20 ]

Phenomenological research

Phenomenology is a philosophical tradition developed by the German philosopher Edmund Husserl. His student Martin Heidegger further developed the methodology. Phenomenology defines the 'essence' of individuals' experiences regarding a certain phenomenon.[ 1 ] The methodology has its origins in philosophy, psychology, and education. The purpose of phenomenological research is to understand people's everyday life experiences and reduce them to the central meaning, or the 'essence of the experience'.[ 21 , 22 ] The unit of analysis in phenomenology is individuals who have had similar experiences of the phenomenon. Interviews with individuals are the main method of data collection, though documents and observations are also useful. Data analysis includes identification of significant meaning elements, textural description (what was experienced), structural description (how it was experienced), and description of the 'essence' of the experience.[ 1 , 7 , 21 ] The phenomenological approach is further divided into descriptive and interpretive phenomenology. Descriptive phenomenology focuses on understanding the essence of experiences and is best suited to situations that call for describing the lived phenomenon. Hermeneutic or interpretive phenomenology moves beyond description to uncover meanings that are not explicitly evident. The researcher tries to interpret the phenomenon based on their judgment rather than just describing it.[ 7 , 21 , 22 , 23 , 24 ]

Example: A phenomenological study conducted by Cornelio et al . aimed at describing the lived experiences of mothers in parenting children with leukemia. Data from ten mothers were collected using in-depth semi-structured interviews and were analyzed using Husserl's method of phenomenology. Themes such as “pivotal moment in life”, “the experience of being with a seriously ill child”, “having to keep distance with the relatives”, “overcoming the financial and social commitments”, “responding to challenges”, “experience of faith as being key to survival”, “health concerns of the present and future”, and “optimism” were derived. The researchers reported the essence of the study as “chronic illness such as leukemia in children results in a negative impact on the child and on the mother.” [ 25 ]

Grounded Theory Research

Grounded theory has its base in sociology and was propagated by two sociologists, Barney Glaser and Anselm Strauss.[ 26 ] The primary purpose of grounded theory is to discover or generate theory in the context of the social process being studied. The major difference between grounded theory and other approaches lies in its emphasis on theory generation and development. The name grounded theory comes from its ability to induce a theory grounded in the reality of the study participants.[ 7 , 27 ] Data collection in grounded theory research involves recording interviews with many individuals until data saturation is reached. Constant comparative analysis, theoretical sampling, theoretical coding, and theoretical saturation are unique features of grounded theory research.[ 26 , 27 , 28 ] Data analysis proceeds through 'open coding,' 'axial coding,' and 'selective coding.'[ 1 , 7 ] Open coding is the first level of abstraction and refers to the creation of a broad initial range of categories; axial coding is the procedure of understanding connections between the open codes; and selective coding relates to the process of connecting the axial codes to formulate a theory.[ 1 , 7 ] Results of the grounded theory analysis are supplemented with a visual representation of major constructs, usually in the form of flow charts or framework diagrams. Quotations from the participants are used in a supportive capacity to substantiate the findings. Strauss and Corbin highlight that “the value of the grounded theory lies not only in its ability to generate a theory but also to ground that theory in the data.”[ 27 ]

Example: Williams et al . conducted a grounded theory study to explore the nature of the relationship between the sense of self and eating disorders. Data were collected from 11 women with a lifetime history of Anorexia Nervosa and were analyzed using the grounded theory methodology. Analysis led to the development of a theoretical framework on the nature of the relationship between the self and Anorexia Nervosa. [ 29 ]

Ethnographic research

Ethnography has its base in anthropology, where anthropologists used it to understand culture-specific knowledge and behaviors. In health sciences research, ethnography focuses on narrating and interpreting the health behaviors of a culture-sharing group. A 'culture-sharing group' in ethnography represents any 'group of people who share common meanings, customs or experiences.' In health research, it could be a group of physicians working in rural care, a group of medical students, or a group of patients receiving home-based rehabilitation. To understand the cultural patterns, researchers primarily observe the individual or group of individuals over a prolonged period of time.[ 1 , 7 , 30 ] The scope of ethnography can be broad or narrow depending on the aim. The study of more general cultural groups is termed macro-ethnography, whereas micro-ethnography focuses on more narrowly defined cultures. Ethnography is usually conducted in a single setting. Ethnographers collect data using a variety of methods such as observation, interviews, audio-video records, and document reviews. A written report includes a detailed description of the culture-sharing group from both emic and etic perspectives. When the researcher reports the views of the participants, it is called an emic perspective; when the researcher reports his or her own views about the culture, it is called an etic perspective.[ 7 ]

Example: The aim of the ethnographic study by LeBaron et al . was to explore the barriers to opioid availability and cancer pain management in India. The researchers collected data from fifty-nine participants using in-depth semi-structured interviews, participant observation, and document review. The researchers identified significant barriers by open coding and thematic analysis of the formal interview. [ 31 ]

Historical research

Historical research is the “systematic collection, critical evaluation, and interpretation of historical evidence”.[ 1 ] The purpose of historical research is to gain insights from the past, and it involves interpreting past events in the light of the present. The data for historical research are usually collected from primary and secondary sources. Primary sources mainly include diaries, first-hand accounts, and original writings. Secondary sources include textbooks, newspapers, second- or third-hand accounts of historical events, and medical/legal documents. The data gathered from these various sources are synthesized and reported as biographical narratives or developmental perspectives in chronological order. The ideas are interpreted in terms of their historical context and significance. The written report describes 'what happened', 'how it happened', 'why it happened', and its significance and implications for current clinical practice.[ 1 , 10 ]

Example: Lubold (2019) analyzed breastfeeding trends in three countries (Sweden, Ireland, and the United States) using a historical qualitative method. Through analysis of historical data, the researcher found that strong family policies, adherence to international recommendations, and adoption of the baby-friendly hospital initiative could greatly enhance breastfeeding rates. [ 32 ]

Case study research

Case study research focuses on the description and in-depth analysis of the case(s) or issues illustrated by the case(s). The design has its origins in psychology, law, and medicine. Case studies are best suited for understanding the case(s), thus reducing the unit of analysis to an event, a program, an activity, or an illness. Observations, one-to-one interviews, artifacts, and documents are used for collecting the data, and the analysis is done through description of the case, from which themes and cross-case themes are derived. A written case study report includes a detailed description of one or more cases.[ 7 , 10 ]

Example: Perceptions of poststroke sexuality in a woman of childbearing age were explored using a qualitative case study approach by Beal and Millenbrunch. A semi-structured interview was conducted with a 36-year-old mother of two children with a history of acute ischemic stroke. The data were analyzed using an inductive approach. The authors concluded that “stroke during childbearing years may affect a woman's perception of herself as a sexual being and her ability to carry out gender roles”. [ 33 ]

Sampling in Qualitative Research

Qualitative researchers widely use non-probability sampling techniques such as purposive sampling, convenience sampling, quota sampling, snowball sampling, homogeneous sampling, maximum variation sampling, extreme (deviant) case sampling, typical case sampling, and intensity sampling. The selection of a sampling technique depends on the nature and needs of the study.[ 34 , 35 , 36 , 37 , 38 , 39 , 40 ] The four widely used sampling techniques are convenience sampling, purposive sampling, snowball sampling, and intensity sampling.

Convenience sampling

Convenience sampling, otherwise called accidental sampling, is a technique in which researchers collect data from subjects selected on the basis of accessibility, geographical proximity, ease, speed, and/or low cost.[ 34 ] It offers the significant benefit of convenience but often raises issues of sample representativeness.

Purposive sampling

Purposive or purposeful sampling is a widely used sampling technique.[ 35 ] It involves identifying a population based on already established sampling criteria and then selecting subjects who fulfill those criteria, to increase credibility. However, choosing information-rich cases is the key to determining the power and logic of purposive sampling in a qualitative study.[ 1 ]

Snowball sampling

The method is also known as 'chain referral sampling' or 'network sampling.' The sampling starts with a few initial participants, and the researcher relies on these early participants to identify additional study participants. It is best adopted when the researcher wishes to study a stigmatized group, or in cases where finding participants is likely to be difficult by ordinary means. Respondent-driven sampling is an improvised version of snowball sampling used to recruit participants from a hard-to-find or hard-to-study population.[ 37 , 38 ]
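The chain-referral process can be pictured as a breadth-first walk over a referral network. The sketch below is purely illustrative (the network, the names, and the function are invented); in practice, snowball sampling is shaped by consent, refusals, and recruitment judgment rather than a fixed traversal:

```python
from collections import deque

def snowball_sample(referrals, seeds, max_n):
    """Recruit participants by chain referral: start from seed
    participants and follow who-refers-whom until max_n is reached."""
    sampled, seen = [], set(seeds)
    queue = deque(seeds)
    while queue and len(sampled) < max_n:
        person = queue.popleft()
        sampled.append(person)
        for peer in referrals.get(person, []):
            if peer not in seen:  # avoid re-recruiting the same person
                seen.add(peer)
                queue.append(peer)
    return sampled

# Hypothetical referral network: A refers B and C; B refers D.
network = {"A": ["B", "C"], "B": ["D"]}
print(snowball_sample(network, ["A"], max_n=3))  # → ['A', 'B', 'C']
```

The breadth-first structure makes the method's known limitation visible: the final sample is entirely determined by who the seeds can reach, which is why representativeness is a concern.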

Intensity sampling

Intensity sampling is the process of identifying information-rich cases that manifest the phenomenon of interest. It requires prior information and considerable judgment about the phenomenon of interest, and the researcher should do some preliminary investigation to determine the nature of the variation. Intensity sampling is done once the researcher identifies the variation across cases (extreme, average, and intense) and picks the intense cases from them.[ 40 ]

Deciding the Sample Size

An a priori sample size calculation is not undertaken in qualitative research. Researchers collect data from as many participants as possible until they reach the point of data saturation. Data saturation, or the point of redundancy, is the stage at which the researcher no longer sees or hears any new information. Data saturation indicates that the researcher has captured all possible information about the phenomenon of interest. Since no further information is being uncovered once redundancy is achieved, data collection can be stopped at this point. The objective is to obtain a rich overall picture of the phenomenon under study rather than generalization.[ 1 , 7 , 41 ]
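The stopping rule described above can be sketched as a simple loop: keep interviewing while new codes keep appearing, and stop after several consecutive interviews add nothing new. The data, the codes, and the three-interview window below are all hypothetical; in practice, saturation is a researcher's judgment, not a mechanical threshold:

```python
def interviews_until_saturation(interview_codes, window=3):
    """Return the interviews used before saturation: stop once `window`
    consecutive interviews contribute no previously unseen codes."""
    seen, runs_without_new, used = set(), 0, []
    for codes in interview_codes:
        used.append(codes)
        new_codes = set(codes) - seen
        seen |= set(codes)
        runs_without_new = 0 if new_codes else runs_without_new + 1
        if runs_without_new >= window:
            break
    return used, seen

# Hypothetical codes elicited from six successive interviews:
data = [["stigma", "cost"], ["cost", "family"], ["stigma"],
        ["family"], ["cost"], ["family"]]
used, codes = interviews_until_saturation(data, window=3)
print(len(used), sorted(codes))  # → 5 ['cost', 'family', 'stigma']
```

Here the second interview is the last to yield a new code, so after three further interviews with nothing new, collection stops at interview five.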

Data Collection in Qualitative Research

The various strategies used for data collection in qualitative research includes in-depth interviews (individual or group), focus group discussions (FGDs), participant observation, narrative life history, document analysis, audio materials, videos or video footage, text analysis, and simple observation. Among all these, the three popular methods are the FGDs, one to one in-depth interviews and the participant observation.

FGDs are useful for eliciting data from a group of individuals. They are normally built around a specific topic and are considered the best approach to gather data on the entire range of responses to a topic.[ 42 ] Group size in an FGD ranges from 6 to 12. Depending upon the nature of the participants, FGDs can be homogeneous or heterogeneous.[ 1 , 14 ] One-to-one in-depth interviews are best suited to obtaining individuals' life histories, lived experiences, perceptions, and views, particularly while exploring topics of a sensitive nature. In-depth interviews can be structured, unstructured, or semi-structured; however, semi-structured interviews are most widely used in qualitative research. Participant observation is suitable for gathering data on naturally occurring behaviors.[ 1 ]

Data Analysis in Qualitative Research

Various strategies are employed by researchers to analyze data in qualitative research. Data analytic strategies differ according to the type of inquiry. A general content analysis approach is described herewith. Data analysis begins by transcription of the interview data. The researcher carefully reads data and gets a sense of the whole. Once the researcher is familiarized with the data, the researcher strives to identify small meaning units called the 'codes.' The codes are then grouped based on their shared concepts to form the primary categories. Based on the relationship between the primary categories, they are then clustered into secondary categories. The next step involves the identification of themes and interpretation to make meaning out of data. In the results section of the manuscript, the researcher describes the key findings/themes that emerged. The themes can be supported by participants' quotes. The analytical framework used should be explained in sufficient detail, and the analytic framework must be well referenced. The study findings are usually represented in a schematic form for better conceptualization.[ 1 , 7 ] Even though the overall analytical process remains the same across different qualitative designs, each design such as phenomenology, ethnography, and grounded theory has design specific analytical procedures, the details of which are out of the scope of this article.
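The coding hierarchy described above (codes grouped into primary categories, categories clustered into themes) is, structurally, a pair of mappings. A toy sketch with an entirely invented codebook, just to make the levels of abstraction concrete:

```python
# Invented example codebook: code -> primary category -> theme.
code_to_category = {
    "fear of relapse": "emotional burden",
    "sadness": "emotional burden",
    "cost of treatment": "practical strain",
    "travel for care": "practical strain",
    "support from family": "coping resources",
    "faith": "coping resources",
}
category_to_theme = {
    "emotional burden": "living under strain",
    "practical strain": "living under strain",
    "coping resources": "sources of resilience",
}

def themes_for(codes):
    """Group raw codes under the themes they ultimately support."""
    themes = {}
    for code in codes:
        category = code_to_category.get(code)
        if category is None:
            continue  # uncoded fragment; the analyst would revisit it
        themes.setdefault(category_to_theme[category], []).append(code)
    return themes

print(themes_for(["sadness", "faith", "cost of treatment"]))
```

Keeping the two mappings separate mirrors the two analytic passes: grouping codes into primary categories first, then clustering categories into themes.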

Computer-Assisted Qualitative Data Analysis Software (CAQDAS)

Until recently, qualitative analysis was done either manually or with the help of a spreadsheet application. Currently, various software programs are available that help researchers manage qualitative data. CAQDAS packages are essentially data management tools; they cannot analyze qualitative data themselves, as they lack the ability to think, reflect, and conceptualize. Nonetheless, CAQDAS helps researchers to manage, shape, and make sense of unstructured information. Open Code, MAXQDA, NVivo, Atlas.ti, and HyperRESEARCH are some of the widely used qualitative data analysis software packages.[ 14 , 43 ]

Reporting Guidelines

The Consolidated Criteria for Reporting Qualitative Research (COREQ) is the most widely used reporting guideline for qualitative research. This 32-item checklist helps researchers report all the major aspects of a study. The three major domains of COREQ are 'research team and reflexivity', 'study design', and 'analysis and findings'.[ 44 , 45 ]

Critical Appraisal of Qualitative Research

Various scales are available for the critical appraisal of qualitative research. The most widely used is the Critical Appraisal Skills Programme (CASP) Qualitative Checklist, developed by the CASP network, UK. This 10-item checklist evaluates the quality of a study in areas such as aims, methodology, research design, ethical considerations, data collection, data analysis, and findings.[ 46 ]

Ethical Issues in Qualitative Research

A qualitative study must be grounded in the principles of bioethics: beneficence, non-maleficence, autonomy, and justice. Protecting the participants is of utmost importance, and the greatest care must be taken when collecting data from a vulnerable research population. The researcher must respect individuals, families, and communities, and must ensure that participants cannot be identified from the quotations included when the data are published. Consent for audio/video recordings must be obtained, as must participants' approval to take part in focus group discussions (FGDs). Researchers must ensure the confidentiality and anonymity of the transcripts, audio/video records, photographs, and other data collected as part of the study. Researchers must affirm their role as advocates and proceed in the best interest of all participants.[ 42 , 47 , 48 ]

Rigor in Qualitative Research

Demonstrating rigor, or quality, in the conduct of a study is essential for every research method. However, the criteria used to evaluate the rigor of quantitative studies are not appropriate for qualitative methods. Lincoln and Guba (1985) first outlined criteria for evaluating qualitative research, often referred to as the "standards of trustworthiness of qualitative research".[ 49 ] The four components are credibility, transferability, dependability, and confirmability.

Credibility refers to confidence in the 'truth value' of the data and its interpretation; it is used to establish that the findings are true, credible, and believable. Credibility is analogous to internal validity in quantitative research.[ 1 , 50 , 51 ]

The second criterion is transferability, the degree to which qualitative results are applicable to other settings, populations, or contexts. This is analogous to external validity in quantitative research.[ 1 , 50 , 51 ] Lincoln and Guba recommend that authors provide enough detail for readers to evaluate the applicability of the findings to other contexts.[ 49 ]

Dependability refers to the repeatability or replicability of the study findings and is similar to reliability in quantitative research. The dependability question is: 'Would the study findings be repeated if the study were replicated with the same (or similar) cohort of participants, data coders, and context?'[ 1 , 50 , 51 ]

Confirmability, the fourth criterion, is analogous to objectivity and refers to the degree to which the study findings could be confirmed or corroborated by others. To ensure confirmability, the findings should directly reflect the participants' experiences and not the biases, motivations, or imaginations of the inquirer.[ 1 , 50 , 51 ] Qualitative researchers should conduct the study with sufficient rigor and should report the measures undertaken to enhance its trustworthiness.

Conclusions

Qualitative research studies are widely acknowledged and recognized in health care practice. This overview illustrates various qualitative methods and shows how they can be used to generate evidence that informs clinical practice. Qualitative research helps to understand patterns of health behaviors, describe illness experiences, design health interventions, and develop healthcare theories. The ultimate strength of the qualitative approach lies in the richness of its data and the depth of description and exploration it affords. Hence, qualitative methods are considered the most humanistic and person-centered way of discovering and uncovering the thoughts and actions of human beings.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.
