MS in Nursing (MSN)

A Guide To Qualitative Rigor In Research

Advances in technology have made quantitative data more accessible than ever before, but in the human-centric discipline of nursing, qualitative research still brings vital insight to the health care industry. It can be difficult to derive viable findings from qualitative research, but in the article below, the authors identify four criteria for developing acceptable qualitative studies.

Qualitative rigor in research explained

Qualitative rigor. It’s one of those terms you either understand or you don’t. And it seems that many of us fall into the latter of those two categories. From novices to experienced qualitative researchers, qualitative rigor is a concept that can be challenging. However, it also happens to be one of the most critical aspects of qualitative research, so it’s important that we all start getting to grips with what it means.

Rigor, in qualitative terms, is a way to establish trust or confidence in the findings of a research study. It allows the researcher to establish consistency in the methods used over time, and it provides an accurate representation of the population studied. As a nurse, you want to build your practice on the best evidence you can, and to do so you need to have confidence in those research findings.

This article will look in more detail at the unique components of qualitative research in relation to qualitative rigor. These are: truth-value (credibility); applicability (transferability); consistency (dependability); and neutrality (confirmability).

Credibility

Credibility allows others to recognize the experiences contained within the study through the interpretation of participants’ experiences. In order to establish credibility, a researcher must review the individual transcripts, looking for similarities within and across all participants.

A study is considered credible when it presents an interpretation of an experience in such a way that people sharing that experience immediately recognize it. Examples of strategies used to establish credibility include:

  • Reflexivity
  • Member checking (aka informant feedback)
  • Peer examination
  • Peer debriefing
  • Prolonged time spent with participants
  • Using the participants’ words in the final report

Transferability

The ability to transfer research findings or methods from one group to another is called transferability; it is the qualitative counterpart of external validity. One way of establishing transferability is to provide a dense description of the population studied, describing the demographics and geographic boundaries of the study.

Ways in which transferability can be applied by researchers include:

  • Using the same data collection methods with different demographic groups or geographical locations
  • Providing a range of experiences that lets the reader build interventions and understanding and decide whether the research is applicable to practice

Dependability

Related to reliability in quantitative terms, dependability occurs when another researcher can follow the decision trail used by the researcher. This trail is achieved by:

  • Describing the specific purpose of the study
  • Discussing how and why the participants were selected for the study
  • Describing how the data was collected and how long collection lasted
  • Explaining how the data was reduced or transformed for analysis
  • Discussing the interpretation and presentation of the findings
  • Explaining the techniques used to determine the credibility of the data

Strategies used to establish dependability include:

  • Having peers participate in the analysis process
  • Providing a detailed description of the research methods
  • Conducting a step-by-step repetition of the study to identify similarities in results or to enhance findings

Confirmability

Confirmability occurs once credibility, transferability and dependability have been established. Qualitative research must be reflective, maintaining a sense of awareness and openness to the study and results. The researcher needs a self-critical attitude, taking into account how his or her preconceptions affect the research.

Techniques researchers use to achieve confirmability include:

  • Taking notes regarding personal feelings, biases and insights immediately after an interview
  • Following, rather than leading, the direction of interviews by asking for clarifications when needed

Reflective research produces new insights, which lead the reader to trust the credibility of the findings and the applicability of the study.

Become a Champion of Qualitative Rigor

Clinical Nurse Leaders, or CNLs, work with interdisciplinary teams to improve care for populations of patients. CNLs can impact quality and safety by assessing risks and utilizing research findings to develop quality improvement strategies and evidence-based solutions.

As a student in Queens University of Charlotte’s online Master of Science in Nursing program, you will solidify your skills in research and analysis, allowing you to make informed, strategic decisions that drive measurable results for your patients.

Request more information to learn more about how this degree can improve your nursing practice, or call 866-313-2356.

Adapted from: Thomas, E. and Magilvy, J. K. (2011). Qualitative rigor or research validity in qualitative research. Journal for Specialists in Pediatric Nursing, 16: 151–155. http://onlinelibrary.wiley.com/doi/10.1111/j.1744-6155.2011.00283.x (accessed 2 July 2014).




A Reviewer’s Guide to Qualitative Rigor


Branda Nowell, Kate Albrecht, A Reviewer’s Guide to Qualitative Rigor, Journal of Public Administration Research and Theory, Volume 29, Issue 2, April 2019, Pages 348–363, https://doi.org/10.1093/jopart/muy052


Institutions are useful for advancing methodologies within disciplines. Through required coursework, doctoral students are indoctrinated into basic guidelines and frameworks that provide a common foundation for scholars to interact with one another. Lacking such forums in many of our doctoral granting institutions ( Stout 2013 ), the field of public management continues to struggle with an ambivalence toward qualitative approaches. Lack of shared understanding concerning basic tenets of qualitative methodology abounds. This article is intended for qualitative consumers: those not formally trained in qualitative methods but who serve as peer reviewers, content experts, and advisors in arenas where qualitative methods are encountered. Adopting the postpositivistic stance dominant in the field, we seek to offer a pragmatic perspective on qualitative methods with regard to some basic tenets of rigor appropriate (and inappropriate) for assessing the contribution of qualitative research. We argue that the first step in this effort is to stop conflating data type (qualitative versus quantitative) with inductive versus deductive modes of inquiry. Using deductive modes as the basis for comparison, we discuss both common and diverging criteria of quality and rigor for inductive modes of inquiry. We conclude with a discussion of rigor in emerging methods that utilize qualitative data but from within a deductive, mixed, or hybrid mode of inquiry.

The field of public management continues to have a rocky relationship with qualitative methods. Like most methods, qualitative research has both its champions and its critics in the field. However, it is our sense that the majority of the field sits somewhere a bit right of center: open to a discussion, but still unsure what to do with findings from any study consisting of a small, unrepresentative sample and no standard error. Much of this stems from fundamental misunderstandings about what qualitative inquiry is, and is not, designed to do. The cost of this to our discipline is significant. In a recent review, Ospina and colleagues (2017) reported only 7.5% of the articles published in top PA journals over the past 5 years relied solely on qualitative methods. This is not particularly surprising, as our doctoral training institutions allow graduates to remain largely uninformed about qualitative approaches ( Stout 2013 ). However, there are many questions germane to our discipline that are best suited to qualitative inquiry (for discussion, see Brower, Abolafia, and Carr 2000 ; Milward forthcoming ; Ospina et al. 2017 ). In order to advance the contribution qualitative methods can make to the field, some foundational understanding about qualitative rigor is needed.

In embarking on this effort, we join an esteemed cadre of scholars who have grappled with the issue of qualitative rigor in public management (e.g., Brower et al. 2000 ; Dodge et al. 2005 ; Lowery and Evans 2004 ; Ospina et al. 2017 ). However, we seek a very specific audience. This is not an article written for the initiated qualitative scholar; we are not seeking to offer advancements in qualitative techniques or further the discourse on the precepts of qualitative inquiry. Nor is this an article particularly aimed at the edification of the novice qualitative scholar looking to embark upon qualitative inquiry for the first time; there are many excellent texts that deal with the issues contained in this article in a much more thorough manner. Rather, this article was conceptualized and written primarily for the qualitative consumer who, at present, represents the overwhelming majority in the field of public management.

As we are envisioning our intended audience, three general categories of consumers come to mind. First, this article is for the quantitatively trained peer reviewer who finds themselves asked to assess the quality and contribution of a qualitative study brought to them for review. These folks serve as the gatekeepers and a quality assurance mechanism critical to the advancement of the discipline. Second, this article is for the scholar reviewing the literature within a content domain populated by both qualitative and quantitative studies. If we want qualitative research to have a greater substantive impact on the discipline, we need to give non-qualitatively trained scholars the tools to assess the contribution of qualitative research within their own research paradigm. Otherwise, citations will inevitably trend into methodological silos. Finally, this article is written for the quantitatively trained professor who finds themselves on a committee trying to support a student pursuing a qualitative or mixed method dissertation. We have a beloved colleague who routinely asks students whether their dissertations are going to be empirical or qualitative. Her intent is not to be pejorative; she simply has no frame of reference for how to think about quotations as data.

A Brief Note on Epistemology

We recognize that the writing of this article requires the adoption of some normative stances linked to the philosophy of science; namely, an epistemological stance that is primarily postpositivist in nature. We have intentionally deviated from normative practice in qualitative scholarship in minimizing our discussion of epistemology (for further discussions, see Creswell 2018 ; Creswell and Miller 2000 ; Raadschelders 2011 ; Riccucci 2010 ). This is not because we do not appreciate the value and relevance of alternative epistemological stances for the field of public management. However, many methods associated with qualitative rigor can be applied across different epistemological stances, varying in intention and orientation rather than practical execution. For example, Lincoln and Guba’s (1986) criteria of trustworthiness are useful regardless of whether you are utilizing those practices because you believe in the postpositivistic limitations of humans to fully comprehend social processes present in natural settings, or because you believe these social processes are co-constructed in an inseparable relationship between the researcher and the participant. In a similar way, reflexivity is relevant to the postpositivist as well as the interpretivist, regardless of whether you embrace the inseparability between the knower and knowledge (constructivism) or just view humans as fallible in part because they cannot fully and objectively separate who they are from the questions they ask and the answers they find (postpositivism; Guba 1990 ).

In this paper, we seek to offer a pragmatic perspective on qualitative inquiry with a focus on how to conceptualize and assess quality and rigor within a postpositivistic framework; the dominant philosophical stance of most qualitative consumers within public management. We do this with the aim of widening the pathways through which qualitative studies might influence scholarship in public management. We recognize that such an endeavor may be highly controversial within some branches of the qualitative community which maintain a strong allegiance to advancing a constructivist philosophy of science (e.g., Carter and Little 2007 ; Rolfe 2006 ). However, we argue it is neither reasonable nor necessary for qualitative consumers to suspend their fundamental view of reality in order to appreciate and assess the contribution of qualitative work to the broader field. There is a rich history of the integration of qualitative research within postpositivism (e.g., Clark 1998 ; Glaser and Strauss 2017 ; Prasad 2015 ; Yin 2017 ), particularly in the organizational sciences ( Eisenhardt and Graebner 2007 ).

We do not foresee a reconciliation between constructivism and postpositivist philosophies occurring any time soon. However, we do see sizable opportunity for naturalistic, inductive qualitative inquiry to have a broader impact in the field of public management if we start from the perspective that both qualitative and quantitative methods are compatible and complementary buckets of tools within social science. Different tools are best suited for different jobs and there is almost as much variation within each bucket as there is between them. Regardless, the world is an increasingly complex place. As a discipline that routinely trudges off into some really messy domains of inquiry and holds itself accountable to informing practice as well as advancing theory ( Brooks 2002 ; Denhardt 2001 ; Gill and Meier 2000 ; Head 2010 ; Weber and Khademian 2008 ), we need every tool we can get.

To address building this toolbox for qualitative consumers, we present first an overview of critical domains of inquiry in the field of public management where we see qualitative methods as being particularly well suited to advancing scholarship. This review highlights some of the most cited and celebrated theories of our field that have been initially shaped or meaningfully re-imagined from qualitative approaches. Next, we argue for a reframing of the question of qualitative rigor, asserting the more productive distinction lies in differentiating inductive versus deductive modes of inquiry. Leveraging this perspective, we discuss both commonalities and points of departure in appropriate criteria of quality and rigor between deductive versus inductive models. Finally, we discuss issues of rigor in three emerging methods in public management that use qualitative data in deductive, mixed and hybrid models of inquiry.

Where Qualitative Methods Shine

If qualitative methods are viewed as a category of tools, it is relevant to next consider some of the functionality one maximizes through the use of such tools. Although this list is not exhaustive, it is intended to provide a general grounding in the types of situations where qualitative approaches are particularly well equipped to make a contribution to the field of public management.

Advancing New Theory and Discovering Nuance in Existing Theory

Quantitative hypothesis testing requires a priori theory. Arbitrarily searching for significant correlations between variables in a dataset without a theoretically grounded hypothesis to direct the analysis is infamously problematic for well-documented reasons ( Kuhn 1996 ; Steiner 1988 ). Theory is a combination of a premise as well as a well-explicated mechanism that explains the why behind the premise or proposition.

Cross-sectional quantitative designs can test the strength and nature of association between two or more constructs. Longitudinal quantitative designs can examine the patterning of association over time, and experimental designs can even narrow in on causality. These are powerful tools, but none are well equipped to discover the mechanisms by which these observed patterns operate or to identify intervening factors that explain inconsistencies across cases. We use existing theory to infer the mechanism associated with an observed pattern, but this is generally not an empirical exercise; it is a conceptual one. Further, it often requires the extrapolation of theoretical mechanisms conceptualized in one organizational context (e.g., private firms) to be applied in a completely different organizational context (e.g., public organizations). When the hypothesized association holds, we generally conclude that the same mechanisms are in operation in the same manner. How critically do we look at this assumption? What else might be going on? Qualitative methods offer tools specifically designed to empirically shed light on these questions.

Qualitative methods are particularly useful in the theory development process because they are able to provide detailed description of a phenomenon as it occurs in context. These methods do not require the scholar to guess in advance the most important factors and their relationship to each other. Mechanisms associated with the co-occurrence of two phenomena can be observed in real time or described by first hand informants who experienced it. For example, Feldman’s (e.g. Feldman 2000 ; Feldman and Pentland 2003 ) seminal work on the role of routines as sources of change and innovation in organizations was based on organizational ethnography. Some other classic examples of theory development in public management that began as qualitative research can be found in organizational culture and sense making case studies ( Schein 2003 ; Weick 1993 ). Toepler’s (2005) case study of a CEO in crisis, the phenomena of iron triangles ( Freeman 1965 ), and the social construction of target populations ( Schneider and Ingram 1993 ) are also illustrations of theoretical advances through qualitative inquiry. Additionally, a major contribution to theory of both formal and informal accountability in the public sector and multi-sector collaboration was a direct result of a grounded theory qualitative approach ( Romzek and Dubnick 1987 ; Romzek, LeRoux, and Blackmar 2012 ; Romzek et al. 2014 ). All of these examples leverage a qualitative researcher’s ability to harness an inductive approach that allows for the emergence of our understanding of the nature of phenomena from those organizations and people who experienced it.

Beyond advancing new theories, qualitative methods have a strong tradition of clarifying and expanding upon existing theory. Underpinning many public management research areas is the ever-present politics-administration dichotomy. Maynard-Moody and Kelly’s (1993) foundational piece used a phenomenological approach to present the views of public workers who must navigate their administrative and political responsibilities every day. Agency and stewardship theories have also been examined and further delineated using qualitative methods ( Schillemans 2013 ). Theories of goal-directed networks and managerial tensions around unity and diversity have been expanded through qualitative studies ( Saz-Carranza and Ospina 2010 ). Finally, the nature of public participation has been theorized and statistically tested, but along the way the notion of authentic engagement—described as “deep and continuous involvement…with the potential for all involved to have an effect on the situation” (p. 320) was introduced to clarify theories, in part as a result of King et al.’s (1998) qualitative study.

Developing New Constructs, Frameworks, and Typologies

Quantitative hypothesis testing and construct validation requires the conceptualization and suggested operationalization of a construct. The development or usage of a new measure is aptly treated with skepticism if it is not empirically and theoretically grounded. In this way, many variables that we quantitatively leverage could not exist without prior development through qualitative research. For example, a foundational idea, and the basis for subsequent quantitative considerations of the differences between managers and front-line workers, is rooted in Lipsky’s (1971) examination and discussion of the street-level bureaucrat. Drawing from case studies and interviews, Lipsky highlights the nature of front-line worker discretion and challenges public management scholars to include this important context in future research.

Public Service Motivation (PSM), public management’s very own and home-grown construct, was born from Perry and Wise’s (1990) discussion citing both cases and quotes from public servants. Their argument for PSM to be more fully operationalized and then measured is rooted in their content analysis. Although they do not explicitly state the qualitative nature of their article, their argument for, and legacy of PSM scale measures, is drawn directly from the words and actions of public servants themselves.

Defining Mechanisms Underlying Statistical Associations

Although some quantitative articles do include mechanisms in their discussion sections, many simply rehash results and which hypotheses were or were not supported. Indeed, quantitative research in public management gives considerable weight to well-documented statistical association, even when the causal mechanism is ambiguous. In this world, how then do mechanisms get clarified when an association is found? This is an area where qualitative researchers have been working, with too little recognition of the importance of their research, to answer “how” and “why” questions. The literature mentioned here is again not an exhaustive list, but it offers prime examples of how our field’s understanding of a statistical result has been given more texture and a much richer application to both theory and practice through qualitative methods.

In the area of government contracting, Dias and Maynard-Moody (2007) further examine past quantitative findings that turn on Transaction Cost Economics (TCE) ( Williamson 1981 ) by explicating how and why the competing contracting philosophies of agencies and service providers underpin the nature of the transaction itself. Another qualitative piece examining the deeper mechanisms behind TCE is Van Slyke’s (2003) discussion of the “mythology” of contracting. In his research, data from semi-structured interviews suggest competition is not a simple construct in testing TCE interactions between governments and service providers, because environmental constraints, actions by nonprofit organizations, networked relationships, and government-enacted barriers all have important dynamics. Honig (2018) offers another apt example in a mixed method study in which he demonstrates how comparative case study designs can reveal insights about the role of the environment in moderating the relationship between managerial control and success that were not possible to capture through quantitative modeling.

Forget Qualitative Versus Quantitative

We have observed many scholars get conceptually hung up on the numbers versus text dichotomy associated with qualitative versus quantitative traditions. Although it is true that qualitative methods generally involve the analysis of some form of text and quantitative methods always involve the analysis of numbers, this focus on data type is largely a distraction from the more important distinction of inductive versus deductive forms of inquiry ( Eisenhardt and Graebner 2007 ). Deductive approaches to inquiry start with a general premise or proposition and then investigate whether this premise holds within a specific sample intended to represent a broader population. Inductive approaches start with a specific case or set of cases of theoretical importance and seek to describe a phenomenon of interest within that case in such a manner as to draw rich insight into that phenomenon (for discussion, see Eisenhardt and Graebner 2007 ; McNabb 2014 ). Although there are a handful of qualitative and/or hybrid qualitative/quantitative methods intended for deductive inquiry (more on this below), the bulk of tools in the qualitative bucket are intended for inductive inquiry.

Overarching Principles of Quality Between Inductive and Deductive Inquiry

Before we get into differences, it is important to first consider similarities. Although inductive and deductive traditions of scholarship differ in many important respects, they also share some commonalities that form the mutual basis of considerations of quality in terms of assessing their contribution to the literature. In our exuberance to elaborate their differences, we can forget what these forms of inquiry can hold in common. We argue that inductive and deductive approaches share in common three core values that are foundational to the notion of quality scholarship in public management: 1) the importance of scholarship that advances theory, 2) the principle of inquiry-driven design, and 3) the criticality of gap-driven inquiry.

Relevance of Scholarship for Advancing Theory

In public management, our focus is to inform practice as well as advance theory ( Kettl 2000 ). As a result, we give the greatest currency to knowledge that has relevance beyond the boundaries of the specific case, individual, or instance. Thus, within our field, the degree to which findings can have relevance beyond the study case or sample is foundational to conceptualizations of quality regardless of inductive or deductive approach ( Dubnick 1999 ). Inductive scholarship, different from most deductive studies, allows for a plurality of truths and an equifinality of pathways to the same outcome ( Eisenhardt, Graebner, and Sonenshein 2016 ), but the same standards of quality still apply. In other words, in inductive approaches, one need not argue an observed finding is the only explanation for a given outcome observed in another space or time, but it must be a plausible explanation for a similar outcome given a similar set of circumstances ( Holland 1986 ; Lewis 1973 ).

As such, both inductive and deductive studies are in the same boat of trying to figure out the extent to which and ways in which their limited study has broader implications for the field. The criteria and processes used to establish this element of quality certainly differ, but the precept that findings must have relevance beyond the scope of the data analyzed is common to both qualitative and quantitative scholarship in the field of public management ( McNabb 2015 ).

Inquiry-Driven Design

Both inductive and deductive traditions are inquiry driven. This means that evaluating the quality of any design—qualitative or quantitative—is inseparable from understanding the research question the study is designed to address. It is possible to hammer a nail with a screwdriver, but it is not considered good practice as you are likely to just make a mess of it. In the same way, different research questions are more or less appropriate to different designs. Thus, while it is possible to attempt to describe the different ways in which people experience transformational leadership with an exploratory survey or use a series of focus groups to examine the relative prevalence of public service motivation among different groups, it is not a good practice as you are likely to just make a mess of it.

A common misconception is that inductive qualitative methods seek to ask and answer the same questions as quantitative methods, just using different types of data and standards of rigor. This is not the case. Inductive approaches are designed to pose and address fundamentally different kinds of questions that necessitate different types of data and criteria of rigor. However, methodological appropriateness ( Haverland and Yanow 2012 ), or using the right tool for the job, is a value common to both inductive and deductive traditions and a key element of quality for all public management scholarship.

Gap-Driven Inquiry

Both inductive and deductive traditions recognize that knowledge does not advance in isolation—it takes a community of scholars to build a body of knowledge ( Kuhn 1996 ; Gill and Meier 2000 ). The practice of positioning a research question in terms of its relevance within broader conversations that are taking place within the literature is mainstream to both traditions ( McNabb 2015 ). In the field of public management—as elsewhere—the greatest currency is given to studies that clearly identify and address a significant gap within the literature; we seek to investigate something overlooked, under-appreciated, or potentially misunderstood in our current understanding of a given phenomenon. The extent to which a study accomplishes such a contribution is a shared element of quality for both deductive and inductive traditions.

In the previous section, we have argued that inductive and deductive approaches in public management share a common foundation in conceptualizing the quality of inquiry. Specifically, we suggest quality can be conceptualized as inquiry that addresses a significant gap in the literature in a manner that advances our general understanding of a broader phenomenon through the use of a method appropriate to the nature of the research question. Rigor, then, can be conceptualized as the appropriate execution of that method. Put simply, if quality is the what, rigor for our purposes becomes the how. It is here that inductive and deductive traditions diverge in a significant way.

It is useful to start with the negative case. Two criteria appropriate for deductive research but NOT appropriate for inductive inquiry are:

1) Is there evidence that the causal factors, processes, nature, meaning, and/or significance of the phenomenon generalize to the broader population?

2) Are the findings able to be replicated in the sense that two researchers asking the same question would come to the same interpretation of the data?

These two criteria, held sacred as cornerstones of rigor in deductive inquiry, seem to cause the greatest amount of heartburn within the field of public management and its relationship to inductive qualitative inquiry. If it is not generalizable and it does not replicate, how is that possibly science? This results in ongoing frustration among qualitative scholars as they attempt to respond to criticisms of their design by reviewers, colleagues, and advisors in terms of the lack of representative sampling and/or inter-rater reliability measures. Such criticism is rooted in some fundamental misunderstandings about what inductive inquiry is and what it seeks to accomplish.

Generalizability

In deductive methods, when there are more cases that conform to an a priori hypothesis than do not, relative to the standard error and controlling for all other factors in the model, we reject the null hypothesis that this pattern could have been observed merely by random chance. However, in every deductive sample, there can be numerous observations which do not conform to our models. These we vaguely disregard as “error.” When cases deviate substantially, we call them “outliers” and may remove them from consideration entirely. This is reasonable because the aim of deductive inquiry is to test the presence of an a priori relationship in the population based on a limited, representative sample ( Neuman and Robson 2014 ). Associations deal with probabilities and likelihoods; not all cases must conform to a pattern to conclude that an association exists as long as the sample is reasonably representative and sufficient to detect differences ( Wasserman 2013 ).

Inductive research is attempting to do something quite different. The sample of an inductive study is neither purely random nor merely convenient. Instead, each case or participant should be purposively selected because they represent a theoretically interesting exemplar of, or key informant about, a phenomenon of interest ( Patton 2014 ). In other words, by nature of being selected for inclusion in an inductive study, the scholar is making the argument that we should care about understanding the experience of this person(s) or the events of this case. Whether a pattern discerned in an inductive study is common in the general population is not the question an inductive scholar is seeking to answer. In fact, the case may have been selected specifically because it represents something rare or unusual. Rather, they are seeking to use a systematic method to interpret and represent, in rich detail, what is true for a particular set of individual(s) and/or cases, identifying themes and patterns across cases that add insight into the phenomenon of interest. Cases with divergent patterns or informants with contradictory experiences are not ignored or discounted as measurement error or outliers. Rather, the inductive scholar seeks to understand the factors and mechanisms that explain these points of divergence ( Eisenhardt et al. 2016 ).

Although the inductive scholar does not empirically test the extent to which an association or experience is common in the general population, this does not mean that inductive findings are not intended to have relevance for advancing general theory and practice. If done well, an inductive study should provide a detailed, contextualized, and empirically grounded interpretation of what was true in one or more cases of interest. Just as one experience in one setting should never be assumed to dictate what one might experience in another setting, it would likewise be absurd to assume prior experience is totally irrelevant if a similar set of conditions is present. In this way, qualitative inductive scholarship seeks to systematically describe and interpret what is occurring in a finite set of cases in sufficient detail as to lend insight into what might be going on in cases like these. Discerning the quantitative prevalence of these key patterns or experiences within populations is where deductive methods can pick up where inductive methods leave off. However, it is only by also gaining a grounded and detailed understanding of phenomena of theoretical interest that we generate new insights and can hope to develop understanding and theory with relevance to the field of practice.

Replication

As mentioned previously, inductive methods are seeking to develop a deep understanding of causal factors, processes, nature, meaning, and/or significance of a particular phenomenon ( Creswell and Poth 2018 ; Denzin and Lincoln 2012 ; Patton 2014 ). This understanding generally comes from asking a lot of questions, observing settings and behavior, and collecting stories, images, and other artifacts that aid the scholar in gaining insight into the phenomenon of interest. Different approaches have been created to narrow in on specific types of phenomena. For example, phenomenology looks at how individuals experience and ascribe meaning to a given phenomenon ( Giorgi 1997 ; Moran 2002 ; Waugh and Waugh 2003 ). Grounded theory seeks to identify the causal relationships that give rise to, and result from, a given phenomenon ( Glaser and Strauss 2017 ; Morse et al. 2016 ). Ethnography seeks to uncover the cultural elements within human systems ( Agar 1996 ; Hammersley 1983 ; Preissle and Le Compte 1984 ).

Each tradition has its own systematic process of data collection and analysis. However, regardless of the tradition, it is always the analyst who must draw inference and interpretation from the vast array of qualitative information in front of them. Just as there are some doctors who can observe the same patient information to diagnose root causes while others focus on first order symptomology, multiple analysts working independently on the same data sources may also come to different interpretations of what is going on ( Langley 1999 ). One doctor is not necessarily right and the others wrong; rather the same thing can be many things at once (e.g., structural, psychological, cultural). Therefore, the appropriate criterion of rigor is not whether the same interpretation would be independently arrived upon by different analysts. Rather, in inductive analysis, the criterion is: based on the evidence provided, is a given interpretation credible ( Patton 1999 )? In other words, if an independent analyst were informed of another analyst’s interpretation and then given all the same source information, would the interpretation stand up to scrutiny as being a justified, empirically grounded, exposition of the phenomenon?

Elements of Rigor

If we cannot assess inductive studies in terms of generalizability and replication, what are valid criteria upon which they might be evaluated? In very global terms, rigorous inductive research in public management can be judged on two core criteria:

1) Does the research design and its execution generate new insight into the causal factors, processes, nature, meaning, and/or significance of a phenomenon of interest to the field? (reviewed in Table 1 ) and

2) Is the account of these causal factors, processes, nature, meaning, and/or significance within these cases trustworthy? (reviewed in Table 3 )

Relevant and Inappropriate Criteria of Rigor for Inductive Research

The trustworthiness and depth of insight of an inductive study is manifest in its research design, execution, and reporting.

Research Design

Because the contribution of inductive qualitative research fundamentally hinges on the theoretical relevance of the units (e.g., individuals, cases, texts) selected for study, sampling is of paramount importance. Different approaches of qualitative analysis have specific guidance on sampling consistent with that approach. For example, grounded theory uses a protocol of proposition-driven sampling in which the investigator strategically chooses cases iteratively in conjunction with data analysis in an effort to examine variation in patterns observed in the previous cases (for discussion, see Corbin and Strauss 1990 ; Glaser 2002 ). However, regardless of which analysis tradition an inductive scholar is using, the inductive qualitative sample must always be justified in terms of why the informants, texts, and/or cases selected should be considered of theoretical interest to the field. This description should be situated in terms of who these informants are in the broader population of possible informants relevant to the research question. Inductive scholarship should include a clear explication of why these individuals were chosen specifically and what they represent. What qualifies them as key informants of this phenomenon? Why would we expect them to have insight into this question that is particularly information rich and/or relevant to the field? How might their position likely influence the perspective they offer about the phenomenon (for discussion, see Marshall 1996 ; for exemplars, see Saz-Carranza and Ospina 2010 and Romzek et al. 2012 for their justification of both case and informant selection)?

As outlined in most introductory texts in qualitative analysis (e.g., Denzin and Lincoln 2012 ; Miles, Huberman, and Saldana 2013 ; Patton 2014 ), there are numerous sampling strategies that may guide participant or case selection in an inductive study. Common approaches include efforts to capture the “typical case,” the “extreme case,” the disconfirming case, or the “unusual case.” Sampling is also often purposefully stratified to represent informants from theoretically important sub-populations. In studies of individual-level phenomena, this may include stratifying samples to include men and women, young/middle age/old, more or less experience, or different ethnicities/racial groups. In studies of higher-order phenomena such as at the organizational, coalition, group, or network level, the scholar may choose to stratify cases across geographic region or based on some developmental phase (e.g., new versus old organizations). Although there are numerous potential sampling strategies for an inductive study, they all share a common criterion: whomever or whatever is chosen for inclusion in, or exclusion from, an inductive study, sampling decisions must be inquiry driven, theoretically justified, and information rich.

How Many Is Enough?

The question of sample size in inductive qualitative research is less straightforward than it is in deductive research. In the deductive world, the sample size criterion turns primarily on the power to detect differences given the model applied to the data ( Wasserman 2013 ). In inductive research, the sample size question focuses on the sources of variability of the phenomenon of interest that are of theoretical importance to the field given the research question. However, inductive studies complicate the sample size question because numerous and varied sources of data can be, and often are, integrated. For example, in several qualitative approaches, triangulation of findings among multiple data sources is one of the elements of rigor (for review, see Jonsen and Jehn 2009 ).
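The deductive side of this contrast can be made concrete. The sketch below computes the required per-group sample size for a simple two-group comparison using the standard normal approximation; the function name and default values are illustrative assumptions, not anything specified in the article.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group needed to detect a standardized effect size
    (Cohen's d) in a two-group comparison, via the normal approximation
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect (d = 0.5) requires roughly 63 cases per group.
print(n_per_group(0.5))  # → 63
```

No analogous formula exists on the inductive side, which is precisely the point the passage above makes: inductive sample size is a theoretical judgment, not a power calculation.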

Just as with deductive research, no one inductive study can address every dimension or perspective that might be relevant to understanding a phenomenon of interest. Therefore, in addition to clearly articulating the criteria upon which individuals or other data sources were sampled for inclusion into the study, there is a need to explicate the boundary criteria that set the limits for who or what is not considered within the scope of the inquiry. Following this, the authors must clearly articulate the unit or units of analysis that define the phenomenon of inquiry. Is it case-based such as an inquiry into the factors that hindered the international NGO community from being effective contributors to the response phase of Hurricane Katrina (e.g., Eikenberry, Arroyave, and Cooper 2007 )? Is it organizational such as a study of service providers’ usage of monitoring tools based on agency theory (e.g., Lambright 2008 )? Is it focused on the individual, such as examining public service motivation and transformational leadership (e.g., Andersen et al. 2016 )? Or is it episodic such as a study of the process through which routines can serve as a source of innovation within an organization (e.g., Feldman 2003 )? Higher-order phenomena (i.e., case-level, coalition-level, organizational-level, etc.) often require multiple data sources or informants associated with that case, group, or organization to gain sufficient depth of understanding of the dynamics present. This will necessarily place limits on the number of cases that can be studied comparatively. Alternatively, a single informant may be able to reflect on multiple episodic units based on varied experiences over time.

Qualitative Saturation

Qualitative saturation is a technique commonly referenced in inductive research to demonstrate that the dataset is robust in terms of capturing the important variability that exists around the phenomenon of interest ( O’Reilly and Parker 2013 ). However, we advise caution in the use of saturation in defending the characteristics of a qualitative sample. Qualitative saturation refers to a point at which the analyst has obtained a sort of information redundancy such that continued analysis has revealed no new insight not already captured by previous cases ( Morse 1995 ). Generally, during analysis, scholars do reach a point at which no new themes or propositions emerge and analysis of new transcripts leads only to additional instances of existing themes or relationships. However, this standard is problematic as a criterion for rigor in public management for two reasons.

First, in order to be used as a condition of sampling rigor, it requires that the scholar analyze their data as it is being collected so as to recognize the point at which no additional data collection is needed. Although this design feature is integral to grounded theory, it is uncommon in other qualitative traditions, which often mimic deductive models by having a distinct data collection phase preceding a data analysis phase ( Miles, Huberman, and Saldana 2013 ). Second, the methods by which a scholar determines saturation are generally difficult to standardize or demonstrate as a criterion of rigor ( Morse 1995 ; O’Reilly and Parker 2013 ). Therefore, while saturation is an important heuristic in guiding data analysis (for example, informing the analyst when to transition from open coding to axial coding), we do not find it is a particularly useful concept for qualitative consumers to evaluate the suitability of a dataset in terms of whether it should be considered theoretically robust.
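The saturation heuristic described above can be sketched in code. Everything here is a hypothetical illustration: the code labels, the toy transcripts, and the rule that two consecutive transcripts contributing no new codes signal saturation are assumptions for demonstration, not a methodological standard.

```python
from typing import List, Optional, Set

def saturation_point(coded: List[Set[str]], run: int = 2) -> Optional[int]:
    """Return the index of the first transcript in a run of `run` consecutive
    transcripts that contributed no codes not already seen, or None if the
    analysis never reaches that point."""
    seen: Set[str] = set()
    quiet = 0
    for i, codes in enumerate(coded):
        new_codes = codes - seen   # codes this transcript adds
        seen |= codes
        quiet = 0 if new_codes else quiet + 1
        if quiet >= run:
            return i - run + 1     # the quiet run began here
    return None                    # saturation not reached

transcripts = [
    {"role ambiguity", "burnout"},
    {"burnout", "red tape"},
    {"red tape"},   # nothing new
    {"burnout"},    # nothing new: second quiet transcript in a row
]
print(saturation_point(transcripts))  # → 2
```

Note that the choice of `run` is exactly the kind of judgment call the passage above argues is hard to standardize, which is why saturation works better as an analyst's heuristic than as a reviewer's criterion.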

Consequently, qualitative consumers generally must rely on qualitative as opposed to quantitative benchmarks for determining the suitability of a given dataset for addressing an inductive research question. The questions qualitative consumers need to answer are these: 1) is the dataset analyzed information rich and 2) does it have a reasonable chance of representing variability of the phenomena of interest that are of theoretical importance given the research question ( Brower, Abolafia, and Carr 2000 )? In efforts to orient new inductive scholars into the general ballpark of sample expectations, some scholars have cautiously made heavily caveated recommendations (for review, see Onwuegbuzie and Leech 2007 ). Qualitative studies of 100 or more units are unusual and generally unnecessary for most inductive analytic traditions unless some type of quantification is desired (see below discussion on hybrid designs; Gentles et al. 2015 ). Studies of ten or fewer units would require a unique justification in terms of how such a data set provides a theoretically robust perspective on the phenomenon of interest. Within that sizeable range, qualitative consumers will have to make a subjective call about the theoretical robustness of a given dataset in relation to the research question asked, the phenomenon of interest, the analytic tradition used, and the interpretive claims made. Benchmarking sampling approaches against existing literature utilizing the same analytic approach is helpful for creating consistency within the field. Additionally, qualitative consumers may find the following questions a useful rubric for determining how theoretically robust a given dataset may be considered:

1) Is the phenomenon rare or infrequently encountered?

2) Are the data rare or particularly difficult to obtain?

3) Is the phenomenon simple or complex?

4) Is the phenomenon new or well investigated in the literature?

5) How information rich is each unit in relation to the phenomenon of interest?

6) Is the same unit being engaged at multiple points in time?

Data Collection Protocols and Procedures

In deductive research, constructs and relationships are articulated prior to analysis, and what one can discover is therefore necessarily constrained to what one looks to find. In inductive research, constructs and relationships are articulated through analysis, and the scholar seeks to minimize constraint on what can be discovered ( Lincoln and Guba 1986 ). However, because in most inductive studies the data must still be collected from individuals, the actions of the investigator will inevitably constrain and shape what the data looks like. This is done a priori through the creation of protocols that guide the types of questions the investigator asks informants, or the elements the investigator observes and how those observations are recorded. While these protocols can, and often should, evolve over the course of the study, it is the execution of these protocols that creates the data used in analysis. Consequently, the quality of these protocols and their execution is an important consideration in determining the rigor of an inductive study ( Miles, Huberman, and Saldana 2013 ).

In demonstrating the rigor of an inductive research design, the investigator should be able to clearly describe what data was considered relevant for a given research question and how this data was obtained. Data collection protocol design should be consistent with the specific methodological tradition embraced by the study (see Table 2 ). Vague descriptors such as “data were obtained through open-ended interviews” are not sufficient description to determine rigor. Just as in deductive research the same construct can be operationalized in multiple ways, two inductive investigators may be interested in the same research question but ask very different types of interview questions of their informants. Researchers should be able to describe the types of questions the investigator asked informants related to the phenomenon of interest. These questions should have a clear conceptual linkage to the research question of concern, the analytic tradition embraced, and be a key consideration in the analysis and interpretation of the findings (for exemplar, see Rerup and Feldman’s ( 2011 ) description of the interview protocol used to elicit espoused schemas of staff in a tech start-up). It is also important for the qualitative consumer to recognize that data looks different depending on the different analytic tradition one uses. Table 2 outlines some of the more prevalent qualitative traditions.

Qualitative Data Collection and Analysis Traditions

Data Analysis and Interpretation

Like deductive approaches, inductive qualitative data analyses come in many forms linked to different analytic traditions and are more or less appropriate to different types of research questions. These traditions carry with them specific guidance on design, sampling, and analysis. Methodological deviations or qualitative “mixology” ( Kahlke 2014 ), in which design elements from multiple traditions are combined or certain elements omitted, should be well-justified and evaluated carefully by the qualitative consumer to ensure the resulting design remains robust. Just as with deductive designs, robust inductive designs should have a clear logical flow from the research question, to the data collection protocol, to the description of the analysis procedure, to the explication of the findings. There should be no black curtain behind which hundreds of pages of transcripts are magically transformed into seven key findings. Rather, the scholar should be able to provide a clear and concise description of their analysis process and its relationship to the reported findings (for exemplar, see Rivera’s ( 2017 ) case analysis description in her study of gender discrimination in academic hiring committees).

As discussed, the overarching criterion of rigor associated with an inductive study is not reliability or replication. Rather, rigorous analysis is based on 1) whether the interpretation is credible in light of the data, 2) whether it was the result of a robust and systematic analytical process designed to move beyond superficial findings and minimize and/or account for investigator bias, and 3) whether it is reported with sufficient attention to context so as to facilitate the potential relevance of insights to similar contexts. These features were first described by Lincoln and Guba (1986) as the criteria of qualitative trustworthiness. They developed an initial set of practices designed to achieve these elements of rigor that have since been expanded upon by various qualitative scholars. Although these elements remain under development and debate, especially in public management (for discussion see, Lowery and Evans 2004 ), Table 3 offers a broad overview of some of the more commonly advocated strategies and associated aims that qualitative consumers might consider when evaluating the rigor of an inductive study. However, it is important to note that these elements represent strategies. They are not a checklist, and certain strategies may be more or less appropriate in certain study designs. As such, we argue rigor is best conceptualized in terms of its functionality. Was the design logically coherent in relation to the research question? Was the analysis systematically designed to move beyond superficial findings and minimize and/or account for investigator bias? Did the design result in both credible and insightful findings? Were the findings reported with sufficient attention to context so as to facilitate empirically grounded theory building?

Elements of Qualitative Rigor (Adapted From Creswell and Poth 2018 ; Denzin and Lincoln 2003 ; Lincoln and Guba 1986 ; Morse 2015 )

We have argued that the distinction between inductive versus deductive approaches is the most relevant delineation for identifying appropriate criteria of rigor. Up to this point, we have focused primarily on inductive applications of qualitative data. However, as noted previously, not all qualitative data analysis is inductive. In this final section, we give special consideration to qualitative approaches in the field of public management that are deductive, mixed, or hybrid.

Narrative Policy Framework

In policy process research, the Narrative Policy Framework (NPF) has more recently emerged as an approach for quantifying qualitative data that has been coded from policy documents and various media of public comment ( Shanahan, Jones, and McBeth 2013 ). The NPF was designed to address postpositivist challenges to policy process theories by taking into account the critical role that narratives play in generating and facilitating meaning for people and how those narratives then relate to the politics of constructing reality ( Shanahan, Jones, and McBeth 2013 ). Within the NPF, narratives are considered to be constructed of important elements that include many traditional parts of stories like a hero, a villain, a plot, and a moral. These narrative elements are applied as codes in a more directly deductive approach and then often used for hypothesis testing at micro , meso , and macro levels ( McBeth, Jones, and Shanahan 2014 ).

Despite being derived from qualitative data, much of the work on NPF embraces a deductive model of hypothesis testing ( Shanahan, Jones, and McBeth 2017 ). In deductive applications, the standards of rigor as they relate to representative sampling, construct validity, reliability, statistical power, and generalizability apply. These methods require the development of a stable coding framework that can be applied by multiple coders with a high degree of reliability. As such, metrics such as inter-rater reliability are appropriate tools for demonstrating that the coding framework is being applied in a consistent manner. Another design challenge with NPF is the fact that its core propositions are associated with discrete “narratives” as the unit of analysis, which can be difficult to isolate in a standardized way across different types of policy documents which may contain multiple narratives (for discussion, see Shanahan, Jones, and McBeth 2018 ). Further, the representative sampling of policy documents relative to a defined population can be difficult to conceptualize ( Shanahan, Jones, and McBeth 2018 ). Despite these challenges, NPF is valuable in its ability to examine whether specific narrative patterns have a stable and generalizable influence on different outcomes of the policy process ( McBeth, Jones, and Shanahan 2014 ), a question ill-suited to an inductive narrative analysis approach.
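As a concrete illustration of the inter-rater reliability metric mentioned above, Cohen's kappa for two coders can be computed directly. The narrative-element codes and coder ratings below are made up for the sketch; they are not drawn from any NPF study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # observed agreement: fraction of items coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected agreement by chance, from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical narrative-element codes assigned by two coders to six passages.
a = ["hero", "villain", "plot", "hero", "villain", "hero"]
b = ["hero", "villain", "hero", "hero", "villain", "plot"]
print(round(cohens_kappa(a, b), 3))  # → 0.455
```

A kappa this low would signal that the coding framework is not yet being applied consistently enough for the deductive hypothesis testing the NPF envisions.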

Mixed Methods

Another development that has gained popularity in public management and applied social sciences more generally is the mixed methods study (see Honig, this issue). A mixed methods study is often characterized as one that uses a combination of both qualitative and quantitative data ( Creswell and Clark 2018 ; for alternative definitions see Johnson et al. 2007 ). It is generally assumed that mixed methods studies will also utilize a combination of inductive and deductive approaches. The ordering of the inductive/deductive mixture can vary. For example, the scholar may use an inductive qualitative phase aimed at gaining a greater insight about a poorly understood phenomenon. Constructs, dimensions, and propositions resulting in the findings from this first inductive phase of analysis can then be translated into a second confirmatory phase in the form of survey measure development, psychometrics, and hypothesis testing. In a second variation, a scholar may use existing literature and theory to deductively create measures and propose and test hypotheses. The scholar may then design an inductive phase in which the mechanisms and contextual factor underlying these hypotheses are explored in great depth through qualitative methods (for discussion of various design options, see Mele & Belardinelli, this issue; Creswell, Clark, Gutmann, and Hanson 2003 ).

Considerations of rigor in a mixed methods study are two pronged. First, mixed methods studies have the dual burden of adhering to all the requirements of rigorous design associated with both inductive and deductive models. For example, the sample for the inductive phase must meet the criteria of offering an information rich, inquiry-driven sample while the sample for the deductive phase must have still sufficient power to detect differences and be a reasonably representative sample of the population. This generally makes such studies relatively large and ambitious. Second, a rigorous mixed methods study should ideally reflect some degree of complementarity between the approaches, maximizing the different advantages in inductive versus deductive designs. Each design element should reflect thoughtful attention to the points at which the findings from the different phases of analysis co-inform one another ( Johnson, Burke, and Onwuegbuzie 2004 ).

Qualitative Comparative Analysis

Qualitative Comparative Analysis (QCA; Ragin 1998 ; Ragin and Rihoux 2004 ) represents a hybrid approach, being neither fully inductive nor deductive. QCA has an established presence in public management ( Cristofoli and Markovic 2016 ; Hudson and Kuhner 2013 ; Malatesta and Carboni 2015 ; Pattyn, Molenveld, and Befani 2017 ; Raab, Mannak, and Cambré 2015 ; Sanger 2013 ; Thomann 2015 ). Like NPF, QCA involves the quantification of qualitative data and the application of mathematical models. However, different from NPF, which is principally deductive in its approach, QCA can use inductive qualitative methods to identify outcomes of interest and factors of relevance to explaining that outcome. These interpretations of the data are then quantified into a numeric data table, often using binary codes, and entered into mathematical models designed to examine pathways of necessary and sufficient conditions.

QCA, first introduced by Ragin (1987) , is intended to unify aspects of qualitative, case-based research, and quantitative, variable-based, approaches ( Fischer 2011 ). QCA is rooted in the assumption of equifinality: that different causal conditions can lead to the same outcome, and that the effect of each condition is dependent on how it is combined with other conditions ( Fischer 2011 ; Ragin 1987 ). Accordingly, QCA is not hindered by the assumptions of homogeneous effects that encumber many quantitative approaches. Rather, it enables the researcher to consider multiple pathways and combinations that may lead to the same outcome. Also unique to QCA is its combinatorial logic, which assumes that cases should be viewed holistically within the context of all conditions combined. As such, QCA can reveal patterns across cases that might be difficult to discern through purely qualitative approaches (for discussion see Rihoux and Ragin 2008 ).
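The combinatorial logic described above can be sketched as a crisp-set truth table: cases are grouped by their configuration of binary conditions, and a configuration counts as sufficient when every case exhibiting it displays the outcome (consistency of 1). The case names, conditions, and outcome below are hypothetical, and real QCA software performs Boolean minimization well beyond this sketch.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def sufficient_configs(cases: Dict[str, Dict[str, int]],
                       conditions: List[str],
                       outcome: str) -> List[Tuple[int, ...]]:
    """Return the condition configurations whose cases all display the outcome."""
    table = defaultdict(list)  # configuration -> outcomes of matching cases
    for name, values in cases.items():
        config = tuple(values[c] for c in conditions)
        table[config].append(values[outcome])
    return sorted(cfg for cfg, outs in table.items() if all(outs))

# Hypothetical network cases coded on two binary conditions and one outcome.
cases = {
    "A": {"trust": 1, "funding": 1, "success": 1},
    "B": {"trust": 1, "funding": 0, "success": 1},
    "C": {"trust": 0, "funding": 1, "success": 0},
    "D": {"trust": 1, "funding": 0, "success": 1},
}
# In this toy data, trust=1 is sufficient regardless of funding.
print(sufficient_configs(cases, ["trust", "funding"], "success"))  # → [(1, 0), (1, 1)]
```

Equifinality shows up directly in the output: more than one configuration can appear in the list of sufficient pathways to the same outcome.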

One of the challenges to assessing QCA from a rigor perspective stems from its inherently hybrid nature. The samples in QCA are generally small and presumably inductively selected ( Hug 2013 ). As such, inductive criteria of rigor could apply. However, the results of a QCA have a distinctive deductive flavor in both the style of analysis and interpretation. For example, the process by which the specific constructs are identified for inclusion is often not well explicated and may contain a mixture of a priori theory and inductively derived theory. Some authors embrace a fully deductive, hypothesis-driven approach based on theory and using predetermined codebooks (e.g., Raab, Mannak, and Cambré 2015 ; Thomann 2015 ). Cases that do not fit into one of the identified pathways are excluded from the output due to criteria like relevancy and consistency that enable the Boolean algebra of QCA to more readily converge on causal pathways.3 Publications of QCA findings generally focus primarily on the pathways identified with little or no attention to the cases that deviated from these patterns.

It is our belief that as QCA applications evolve, scholars will need to, metaphorically, pick a horse to ride in their use of this technique so that a study can be associated with the appropriate standards of rigor. In other words, QCA is a descriptive tool that can be used either inductively or deductively. Is a study a deductive effort to examine possible configurations of pathways toward a predefined outcome, using a priori factors examined within a representative sample of a population? If so, deductive criteria of rigor would apply to the QCA as they relate to construct validity, inter-rater reliability, and representative sampling. On the other hand, QCA could also be a powerful tool used within an inductive model of research, with the associated inductive criteria of rigor. In this model, cases would be purposively justified as theoretically important to understanding a given phenomenon. The QCA would represent one tool within a broader process of inquiry for examining complex patterning across cases that may be difficult to otherwise discern. The inductive process by which coding categories were generated, and the qualitative variability that exists within coding delineations, would be central concerns of the analysis. The analysis would include an empirically grounded contextualization and interpretation of the cases that conform to, as well as deviate from, the identified patterns, so as to inform the mechanisms by which one pattern versus another may emerge. Either application of QCA, whether deductive or inductive, holds promise as a technique, but murky applications that do not fully commit to either standard of rigor seem problematic (for additional discussion, see Hug 2013).

We began this article with the assertion that qualitative methods are poised to make a greater contribution in shaping our understanding of public management. We view this as a good thing, one with the potential to inject new insight and depth of understanding into the questions that define the field. We perceive a general openness in the discipline to advancing bodies of literature through the integration of contributions from both inductive and deductive styles of inquiry. However, much of the discipline lacks even basic training in inductive approaches to research (see Stout 2013), which serves as a barrier. Deductive models, by virtue of the more structured task they are designed to accomplish, coupled with the greater duration of time this approach has had to institutionalize, are simply more straightforward in their precepts of rigor. However, advancing the contribution of qualitative methods in public management will not happen without some shared construction of rigor that is compatible with a postpositivistic stance on science. We argue that the first step in advancing this agenda is to stop conflating data type (qualitative versus quantitative) with methodological approach (inductive versus deductive).

Beyond this, the article is positioned as a conversation-starter and as a resource for breaking down the barriers to meaningful interaction that have put qualitative and quantitative methods at odds. We argue here that these past misunderstandings have less to do with the analysis of text-based versus number-based data, and more to do with murky or altogether misunderstood differences between the requirements of quality and rigor for inductive versus deductive methods. In clearing some of the air on the quality and rigor of both kinds of methods, we put forth a postpositivist stance with the understanding that not all scholars will agree, but that this perspective offers a productive pathway for broadly engaging the most common public management researcher today.

Agar , Michael H . 1996 . The professional stranger: An informal introduction to ethnography . 2nd ed. UK : Emerald Publishing Group .


Andersen , Lotte Bøgh , Bente Bjørnholt , Louise Ladegaard Bro , and Christina Holm-Petersen . 2016 . Leadership and motivation: A qualitative study of transformational leadership and public service motivation . International Review of Administrative Sciences : 1 – 17 .

Brooks , Arthur C . 2002 . Can nonprofit management help answer public management’s “big questions” ? Public Administration Review , 62 ( 3 ): 259 – 66 .

Brower , Ralph S. , Mitchel Y. Abolafia , and Jered B. Carr . 2000 . On improving qualitative methods in public administration research . Administration & Society 32 ( 4 ): 363 – 97 . doi: 10.1177/00953990022019470 .

Carter , Stacy M. , and Miles Little . 2007 . Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research . Qualitative Health Research 17 ( 10 ): 1316 – 28 .

Caelli , Kate , Lynne Ray , and Judy Mill . 2003 . ‘Clear as mud’: Toward greater clarity in generic qualitative research . International Journal of Qualitative Methods 2 ( 2 ): 1 – 13 .

Clark , Alexander M . 1998 . The qualitative-quantitative debate: Moving from positivism and confrontation to post-positivism and reconciliation . Journal of Advanced Nursing 27 ( 6 ): 1242 – 9 .

Corbin , Juliet M. , and Anselm Strauss . 1990 . Grounded theory research: Procedures, canons, and evaluative criteria . Qualitative sociology 13 ( 1 ): 3 – 21 .

Corbin , Juliet , and Anselm L. Strauss . 2014 . Basics of qualitative research . Thousand Oaks, CA : Sage .

Creswell , John W. , and Dana L. Miller . 2000 . “ Determining validity in qualitative inquiry .” Theory into Practice . 39 ( 3 ): 124 – 130 .

Creswell , John W. , and Vicki L. Plano Clark . 2018 . Designing and conducting mixed methods research , 3rd ed. Thousand Oaks, CA : Sage .

Creswell , John W. and Cheryl Poth . 2018 . Qualitative inquiry and research design: Choosing among five approaches . 4th ed. Thousand Oaks, CA : Sage .

Creswell , John W. , Vicki L. Plano Clark , Michelle L. Gutmann , and William E. Hanson . 2003 . “ Advanced mixed methods research designs .” Handbook of Mixed Methods in Social and Behavioral Research 209 : 240 .

Cristofoli , Daniela and Josip Markovic . 2016 . How to make public networks really work: A qualitative comparative analysis . Public Administration 94 : 89 – 110 . doi: 10.1111/padm.12192

Dias , Janice Johnson , and Steven Maynard-Moody . 2007 . For-Profit Welfare: Contracts, Conflicts, and the Performance Paradox . Journal of Public Administration Research and Theory 17 : 189 – 211 .

Denhardt , Robert B . 2001 . The big questions of public administration education . Public Administration Review . 61 ( 5 ): 526 – 34 .

Denzin , Norman K. , and Yvonna S. Lincoln . 2003 . The landscape of qualitative research: Theories and issues . 2nd ed. Thousand Oaks, CA : Sage .

Denzin , Norman K. , and Yvonna S. Lincoln . 2012 . Strategies of qualitative inquiry . Vol. 4 . Thousand Oaks, CA : Sage .

Dodge , J. , S. M. Ospina , and E. G. Foldy . 2005 . Integrating rigor and relevance in public administration scholarship: The contribution of narrative inquiry . Public Administration Review 65 ( 3 ): 286 – 300 .

Dubnick , M. J . 1999 . Demons, spirits, and elephants: Reflections on the failure of public administration theory . Paper presented at the annual meeting of the American Political Science Association , Atlanta, GA .

Eikenberry , Angela M. , Verónica Arroyave , and Tracy Cooper . 2007 . Administrative failure and the international NGO response to Hurricane Katrina . Public Administration Review 67 ( 1 ): 160 – 70 .

Eisenhardt , Kathleen M. , and Melissa E. Graebner . 2007 . Theory building from cases: Opportunities and challenges . Academy of Management Journal . 50 ( 1 ): 25 – 32 .

Eisenhardt , Kathleen M. , Melissa E. Graebner , and Scott Sonenshein . 2016 . “ Grand challenges and inductive methods: Rigor without rigor mortis .” Academy Of Management Journal 59 ( 4 ): 1113 – 1123 .

Feldman , Martha S . 2000 . “ Organizational routines as a source of continuous change .” Organization Science 11 ( 6 ): 611 – 629 .

Feldman , Martha S. , and B. T. Pentland . 2003 . Reconceptualizing organizational routines as a source of flexibility and change . Administrative Science Quarterly 48 ( 1 ): 94 – 118 .

Feldman , Martha S. , Kaj Sköldberg , Ruth Nicole Brown , and Debra Horner . 2004 . Making sense of stories: A rhetorical approach to narrative analysis . Journal of Public Administration Research and Theory 14 ( 2 ): 147 – 70 .

Fischer , Manuel . 2011 . Social Network Analysis and Qualitative Comparative Analysis: Their mutual benefit for the explanation of policy network structures . Methodological Innovations Online 6 ( 2 ): 27 – 51 .

Freeman , John Leiper . 1965 . The Political Process: Executive Bureau-Legislative, Committee Relations . Vol. 13 . New York : Random House .

Glaser , Barney G . 2002 . Conceptualization: On theory and theorizing using grounded theory . International Journal of Qualitative Methods 1 ( 2 ): 23 – 38 .

Glaser , Barney G. , and Anselm L. Strauss . 2017 . Discovery of grounded theory: Strategies for qualitative research . London : Routledge .

Gentles , Stephen J. , Cathy Charles , Jenny Ploeg , and K. Ann McKibbon . 2015 . Sampling in qualitative research: Insights from an overview of the methods literature . The Qualitative Report 20 ( 11 ): 1772 .

Gill , J. , and K. J. Meier . 2000 . Public administration research and practice: A methodological manifesto . Journal of Public Administration Research and Theory 10 ( 1 ): 157 – 99 .

Giorgi , Amedeo . 1997 . The theory, practice, and evaluation of the phenomenological method as a qualitative research procedure . Journal of Phenomenological Psychology 28 ( 2 ): 235 – 60 .

Guba , Egon G ., ed. 1990 . The paradigm dialog . Newbury Park, CA : Sage .

Hammersley , Martyn . 1983 . Ethnography . San Francisco, CA : John Wiley & Sons .

Haverland , Markus , and Dvora Yanow . 2012 . A hitchhiker's guide to the public administration research universe: Surviving conversations on methodologies and methods . Public Administration Review 72 ( 3 ): 401 – 8 .

Head , Brian William . 2010 . Public management research: Towards relevance . Public Management Review 12 ( 5 ): 571 – 85 .

Holland , P. W . 1986 . Statistics and causal inference . Journal of the American Statistical Association . 81 ( 396 ): 945 – 960 .

Honig , Dan . 2018 . Case study design and analysis as a complementary empirical strategy to econometric analysis in the study of public agencies: deploying mutually supportive mixed methods . Current issue.

Hudson , John , and Stefan Kühner . 2013 . Qualitative comparative analysis and applied public policy analysis: New applications of innovative methods . Policy and Society 32 ( 4 ): 279 – 87 .

Hug , Simon . 2013 . Qualitative comparative analysis: How inductive use and measurement error lead to problematic inference . Political Analysis 21 ( 2 ): 252 – 65 .

Johnson , R. Burke , and Anthony J. Onwuegbuzie . 2004 . Mixed methods research: A research paradigm whose time has come . Educational Researcher 33 ( 7 ): 14 – 26 .

Johnson , R. Burke , Anthony J. Onwuegbuzie , and Lisa A. Turner . 2007 . Toward a definition of mixed methods research . Journal of Mixed Methods Research 1 ( 2 ): 112 – 33 .

Jonsen , K. , and K. A. Jehn . 2009 . Using triangulation to validate themes in qualitative studies . Qualitative Research in Organizations and Management: An International Journal 4 ( 2 ): 123 – 50 .

Kahlke , Renate M . 2014 . Generic qualitative approaches: Pitfalls and benefits of methodological mixology . International Journal of Qualitative Methods 13 ( 1 ): 37 – 52 .

Kettl , Donald F . 2000 . Public administration at the millennium: The state of the field . Journal of Public Administration Research and Theory 10 ( 1 ): 7 – 34 .

King , Cheryl Simrell , Kathryn M. Feltey , and Bridget O’Neill Susel . 1998 . The Question of Participation: Toward Authentic Public Participation in Public Administration . Public Administration Review 58 ( 4 ): 317 . doi: 10.2307/977561 .

Kuhn , Thomas S . 1996 . The nature and necessity of scientific revolutions . In The structure of scientific revolutions , ed. T. S. Kuhn , 3rd ed. Chicago : University of Chicago Press .

Lambright , Kristina T . 2008 . Agency theory and beyond: Contracted providers’ motivations to properly use service monitoring tools . Journal of Public Administration Research and Theory 19 ( 2 ): 207 – 27 .

Langley , Ann . 1999 . Strategies for theorizing from process data . Academy of Management review 24 ( 4 ): 691 – 710 .

Lewis , D . 1973 . Causation . The Journal of Philosophy 70 ( 17 ): 556 – 67 .

Lincoln , Yvonna S. , and Egon G. Guba . 1986 . But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation . New Directions for Evaluation 30 : 73 – 84 .

Lipsky , Michael . 1971 . Street-level bureaucracy and the analysis of urban reform . Urban Affairs Quarterly 6 ( 4 ): 391 – 409 . doi: 10.1177/107808747100600401

Lowery , Daniel , and Karen G. Evans . 2004 . The iron cage of methodology: The vicious circle of means limiting ends limiting means ... Administration & Society 36 ( 3 ): 306 – 27 .

Malatesta , Deanna , and Julia L. Carboni . 2015 . The public–private distinction: Insights for public administration from the state action doctrine . Public Administration Review 75 ( 1 ): 63 – 74 .

Marshall , M. N . 1996 . Sampling for qualitative research . Family Practice 13 ( 6 ): 522 – 6 .

Maynard-Moody , Steven , and Marisa Kelly . 1993 . Stories managers tell about elected officials: Making sense of the politics-administration dichotomy . In Public management: The state of the art , ed. B. Bozeman, 71 – 90 . San Francisco : Jossey-Bass .

McNabb , David E . 2014 . Case research in public management . London : Routledge .

McNabb , David E . 2015 . Research methods in public administration and nonprofit management . London : Routledge .

McBeth , Mark K. , Michael D. Jones , and Elizabeth A. Shanahan . 2014 . “ The narrative policy framework .” In Theories of the policy process , edited by Sabatier , Paul A. , and Weible Christopher M , 225 – 266 . Boulder : Westview Press .

Mertens , Donna M. , and Amy T. Wilson . 2012 . Program evaluation theory and practice: A comprehensive guide . New York : Guilford Press .

Milward , H. Brinton. Forthcoming. Toward a theory of organizational networks: Structure, process, and people . Perspectives on Public Management and Governance .

Miles , Matthew B. , A. Michael Huberman , and Johnny Saldana . 2013 . Qualitative data analysis . Thousand Oaks, CA : Sage .

Moran , Dermot . 2002 . Introduction to phenomenology . London : Routledge .

Morse , Janice M . 1995 . The significance of saturation . Qualitative Health Research 5 ( 2 ): 147 – 9 .

Morse , Janice M . 2015 . Critical analysis of strategies for determining rigor in qualitative inquiry . Qualitative Health Research 25 ( 9 ): 1212 – 22 .

Morse , Janice M. , Phyllis Noerager Stern , Juliet Corbin , Barbara Bowers , Kathy Charmaz , and Adele E. Clarke . 2016 . Developing grounded theory: The second generation . London : Routledge .

Neuman , William Lawrence , and Karen Robson . 2014 . Basics of social research . Canada : Pearson .

Onwuegbuzie , A. J. , and N. L. Leech . 2007 . A call for qualitative power analyses . Quality & Quantity 41 ( 1 ): 105 – 21 .

O’Reilly , Michelle , and Nicola Parker . 2013 . ‘Unsatisfactory Saturation’: A critical exploration of the notion of saturated sample sizes in qualitative research . Qualitative Research 13 ( 2 ): 190 – 7 .

Ospina , Sonia M. , and Jennifer Dodge . 2005 . It’s about time: Catching method up to meaning—the usefulness of narrative inquiry in public administration research . Public Administration Review 65 ( 2 ): 143 – 57 .

Ospina , Sonia M. , Marc Esteve , and Seulki Lee . 2017 . Assessing Qualitative Studies in Public Administration Research . Public Administration Review 78 ( 4 ): 593 – 605 . doi: 10.1111/puar.12837

Patton , Michael Quinn . 1999 . Enhancing the quality and credibility of qualitative analysis . Health Services Research 34 ( 5 ): 1189 – 208 .

Patton , Michael Quinn . 2014 . Qualitative Research & Evaluation Methods: Integrating Theory and Practice . Thousand Oaks, CA : Sage .

Pattyn , Valérie , Astrid Molenveld , and Barbara Befani . 2017 . Qualitative comparative analysis as an evaluation tool: Lessons from an application in development cooperation . American Journal of Evaluation .

Perry , James L. , and Lois Recascino Wise . 1990 . The motivational bases of public service . Public Administration Review 50 ( 3 ): 367 . doi: 10.2307/976618

Pillow , W . 2003 . Confession, catharsis, or cure? Rethinking the uses of reflexivity as methodological power in qualitative research . International Journal of Qualitative Studies in Education 16 ( 2 ): 175 – 96 .

Prasad , Pushkala . 2015 . Crafting qualitative research: Working in the postpositivist traditions . London : Routledge .

Preissle , Judith , and Margaret D. Le Compte . 1984 . Ethnography and qualitative design in educational research . New York : Academic Press .

Raab , Jörg. , R. S. Mannak , and B. Cambre . 2015 . Combining structure, governance, and context: A configurational approach to network effectiveness . Journal of Public Administration Research and Theory 25 ( 2 ): 479 – 511 . doi: 10.1093/jopart/mut039

Raadschelders , J. C . 2011 . The future of the study of public administration: Embedding research object and methodology in epistemology and ontology . Public Administration Review 71 ( 6 ): 916 – 24 .

Ragin , Charles . 1987 . The comparative method: Moving beyond qualitative and quantitative methods . Berkeley : University of California Press .

Ragin , Charles C . 1998 . The logic of qualitative comparative analysis . International Review of Social History 43 ( S6 ): 105 – 24 . doi: 10.1017/S0020859000115111

Ragin , Charles C. , and Benoit Rihoux . 2004 . Qualitative Comparative Analysis (QCA): State of the Art and Prospects . Qualitative Methods 2 ( 2 ): 3 – 13 .

Rerup , C. , and M. S. Feldman . 2011 . Routines as a source of change in organizational schemata: The role of trial-and-error learning . Academy of Management Journal 54 ( 3 ): 577 – 610 .

Riccucci , Norma M . 2010 . Public administration: Traditions of inquiry and philosophies of knowledge . Washington, DC : Georgetown University Press .

Riessman , Catherine Kohler . 1993 . Narrative analysis . Vol. 30 . Thousand Oaks, CA : Sage .

Rihoux , Benoît , and Charles C. Ragin . 2008 . Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques . Vol. 51 . Thousand Oaks, CA : Sage .

Rivera , Lauren A . 2017 . “ When two bodies are (not) a problem: Gender and relationship status discrimination in academic hiring .” American Sociological Review 82 ( 6 ): 1111 – 1138 .

Rolfe , Gary . 2006 . Validity, trustworthiness and rigour: Quality and the idea of qualitative research . Journal of Advanced Nursing 53 ( 3 ): 304 – 10 .

Romzek , B. , and Melvin J. Dubnick . 1987 . Accountability in the Public Sector: Lessons from the Challenger Disaster . Public Administration Review 47 ( 3 ): 227 – 38 .

Romzek , B. , Kelly LeRoux , and Jeannette M. Blackmar . 2012 . A preliminary theory of informal accountability among network organizational actors . Public Administration Review 72 ( 3 ): 442 – 53 .

Romzek , B. , K. LeRoux , J. Johnston , R. J. Kempf , and J. S. Piatak . 2014 . Informal accountability in multisector service delivery collaborations . Journal of Public Administration Research and Theory 24 ( 4 ): 813 – 42 . doi: 10.1093/jopart/mut027

Sanger , Mary Bryna . 2013 . Does measuring performance lead to better performance ? Journal of Policy Analysis and Management 32 ( 1 ): 185 – 203 .

Saz-Carranza , Angel , and Sonia M. Ospina . 2010 . The behavioral dimension of governing interorganizational goal-directed networks—Managing the unity-diversity tension . Journal of Public Administration Research and Theory . 21 ( 2 ): 327 – 65 .

Schein , Edgar H . 2003 . On dialogue, culture, and organizational learning . Reflections: The SoL Journal 4 ( 4 ): 27 – 38 . doi: 10.1162/152417303322004184

Schillemans , Thomas . 2013 . Moving beyond the clash of interests: On stewardship theory and the relationships between central government departments and public agencies . Public Management Review 15 ( 4 ): 541 – 62 .

Shanahan , Elizabeth A. , Michael D. Jones , and Mark K. McBeth . 2018 . How to conduct a Narrative Policy Framework study . The Social Science Journal . 55 ( 3 ): 332 – 345 .

Shanahan , Elizabeth A. , Michael D. Jones , Mark K. McBeth , and Ross R. Lane . 2013 . An angel on the wind: How heroic policy narratives shape policy realities: Narrative policy framework . Policy Studies Journal 41 ( 3 ): 453 – 83 . doi: 10.1111/psj.12025

Schneider , Anne , and Helen Ingram . 1993 . Social construction of target populations: Implications for politics and policy . American Political Science Review 87 ( 2 ): 334 – 47 .

Steiner , Elizabeth . 1988 . Methodology of theory building . Sydney : Educology Research Associates .

Stout , Margaret . 2013 . Preparing public administration scholars for qualitative inquiry: A status report . Public Administration Research 2 ( 1 ): 11 – 28 .

Thomann , Eva . 2015 . Is output performance all about the resources? A fuzzy-set qualitative comparative analysis of street-level bureaucrats in Switzerland . Public Administration 93 ( 1 ): 177 – 94 .

Toepler , Stefan . 2005 . Called to order: A board president in trouble . Nonprofit Management and Leadership 15 ( 4 ): 469 – 76 .

Van Slyke , David M . 2003 . The mythology of privatization in contracting for social services . Public Administration Review 63 ( 3 ): 296 – 315 . doi: 10.1111/1540-6210.00291

Wasserman , Larry . 2013 . All of statistics: A concise course in statistical inference . New York : Springer Science & Business Media .

Waugh , William L. Jr , and Wesley W. Waugh . 2003 . Phenomenology and public administration . International Journal of Organization Theory & Behavior . 7 ( 3 ): 405 – 31 .

Weber , E. P. , and A. M. Khademian . 2008 . Wicked problems, knowledge challenges, and collaborative capacity builders in network settings . Public administration review 68 ( 2 ): 334 – 49 .

Weick , Karl E . 1993 . The collapse of sensemaking in organizations: The mann gulch disaster . Administrative Science Quarterly 38 ( 4 ): 628 . doi: 10.2307/2393339

Williamson , O. E . 1981 . The economics of organization: The transaction cost approach . American Journal of Sociology 87 ( 3 ): 548 – 77 .

Yin , Robert K . 2017 . Case study research and applications: Design and methods . Thousand Oaks, CA : Sage .

The method, methodology, epistemology coupling is a topic of considerable debate and concern in the field of qualitative methods ( Corbin and Strauss 2014 ; Haverland and Yanow 2012 ; Ospina et al. 2017 ). Whether certain methods can or should be implemented by scholars embracing diverging epistemological stances is a topic warranting further discourse in the field of public management.

Reflexivity refers to the practice of being intentionally reflective about who you are, both as a person situated within society and as a scholar professionally socialized within a cultural and institutional milieu. Specifically, reflexive practice calls upon scholars to consider how the totality of who they are as individuals influences the manner in which they approach scholarship, the questions they ask, the ways the subjects of their inquiry may react or respond, and how they interpret what they observe. This is done with an eye toward critically examining how these factors may shape and constrain what one "finds" (for discussion, see Pillow 2003 ).

Within the QCA lexicon, results are referred to as causal pathways, although researchers are cautioned against the use of terms like causation, as QCA uses a combinatorial logic of conjunctural causation rather than a main-effect/parameter-estimate logic.


  • Online ISSN 1477-9803
  • Print ISSN 1053-1858
  • Copyright © 2024 Public Management Research Association



20. Quality in qualitative studies: Rigor in research design

Chapter outline.

  • Introduction to qualitative rigor (13 minute read)
  • Ethical responsibility and cultural respectfulness (4 minute read)
  • Critical considerations (6 minute read)
  • Data capture: Striving for accuracy in our raw data (6 minute read)
  • Data management: Keeping track of our data and our analysis (8 minute read)
  • Tools to account for our influence (22 minute read)

Content warning: Examples in this chapter contain references to fake news, mental health treatment, peer-support, misrepresentation, equity and (dis)honesty in research.

We hear a lot about fake news these days. Fake news has to do with the quality of the journalism that we are consuming. It raises questions like: Does it contain misinformation? Is it skewed or biased in its portrayal of stories? Does it leave out certain facts while inflating others? If we take this news at face value, our opinions and actions may be intentionally manipulated by poor-quality information. So, how do we avoid or challenge this? The oversimplified answer is: we find ways to check for quality. While this isn't a chapter dedicated to fake news, it does offer an important comparison for the focus of this chapter, rigor in qualitative research. Rigor is concerned with the quality of the research that we are designing and consuming. While I devote a considerable amount of time in my clinical class to talking about the importance of adopting a non-judgmental stance in practice, that is not the case here; I want you to be judgmental, critical thinkers about research! As a social worker who will hopefully be producing research (we need you!) and will definitely be consuming research, you need to be able to differentiate good science from rubbish science. Rigor will help you to do this.

This chapter will introduce you to the concept of rigor and, specifically, what it looks like in qualitative research. We will begin by considering how rigor relates to issues of ethics and how thoughtfully involving community partners in our research can add additional dimensions in planning for rigor. Next, we will look at rigor in how we capture and manage qualitative data, essentially helping to ensure that we have quality raw data to work with for our study. Then, we will devote time to discussing how researchers, as human instruments, need to maintain accountability throughout the research process. Finally, we will examine tools that encourage this accountability and how they can be integrated into your research design. Our hope is that by the end of this chapter, you will begin to be able to identify some of the hallmarks of quality in qualitative research, and, if you are designing a qualitative research proposal, that you will consider how to build these into your design.

20.1 Introduction to qualitative rigor

Learning Objectives

Learners will be able to…

  • Identify the role of rigor in qualitative research and important concepts related to qualitative rigor
  • Discuss why rigor is an important consideration when conducting, critiquing and consuming qualitative research
  • Differentiate between quality in quantitative and qualitative research studies

In Chapter 10 we talked about quality in quantitative studies, but we built our discussion around concepts like reliability and validity. With qualitative studies, we generally think about quality in terms of the concept of rigor. The difference between quality in quantitative research and qualitative research extends beyond the type of data (numbers vs. words/sounds/images). If you sneak a peek all the way back to Chapter 5, we discussed the idea of different paradigms, or fundamental frameworks for how we can think about the world. These frameworks value different kinds of knowledge, arrive at knowledge in different ways, and evaluate the quality of knowledge with different criteria. These differences are essential in differentiating qualitative and quantitative work.

Quantitative research generally falls under a positivist paradigm, seeking to uncover knowledge that holds true across larger groups of people.  To accomplish this, we need to have tools like reliability and validity to help produce internally consistent and externally generalizable findings (i.e. was our study design dependable and do our findings hold true across our population).

In contrast, qualitative research is generally considered to fall into an alternative paradigm (other than positivist), such as the interpretive paradigm, which focuses on the subjective experiences of individuals and their unique perspectives. To accomplish this, we are often asking participants to expand on their ideas and interpretations. A positivist tradition requires the information collected to be very focused and discretely defined (i.e., closed questions with prescribed categories). With qualitative studies, we instead need to look across the unique experiences reflected in the data and determine how these experiences develop a richer understanding of the phenomenon we are studying, often across numerous perspectives.

Rigor is a concept that reflects the quality of the process used in capturing, managing, and analyzing our data as we develop this rich understanding. Rigor helps to establish standards through which qualitative research is critiqued and judged, both by the scientific community and by the practitioner community.

For the scientific community, people who review qualitative research studies submitted for publication in scientific journals or for presentations at conferences will specifically look for indications of rigor, such as the tools we will discuss in this chapter.  This confirms for them that the researcher(s) put safeguards in place to ensure that the research took place systematically and that consumers can be relatively confident that the findings are not fabricated and can be directly connected back to the primary sources of data that was gathered or the secondary data that was analyzed.

As a note here, as we critique the research of others or develop our own studies, we also need to recognize the limitations of rigor. No research design is flawless, and every researcher faces limitations and constraints. We aren't looking for a researcher to adopt every tool we discuss below in their design. In fact, one of my mentors speaks explicitly about "misplaced rigor": using techniques to support rigor that don't really fit what you are trying to accomplish with your research design. Suffice it to say that we can go overboard in the area of rigor, and it might not serve our study's best interest. As a consumer or evaluator of research, you want to look for steps being taken to reflect quality and transparency throughout the research process, but those steps should fit within the overall framework of the study and what it is trying to accomplish.

From the perspective of a practitioner, we also need to be acutely concerned with the quality of research. Social work has made a commitment, outlined in our Code of Ethics (NASW, 2017), to competent practice in service to our clients based on “empirically based knowledge” (subsection 4.01). When I think about my own care providers, I want them to be using “good” research – research that we can be confident was conducted in a credible way and whose findings are honestly and clearly represented. Don’t our clients deserve the same from us?


As providers, we will be looking to qualitative research studies to provide us with information that helps us better understand our clients, their experiences, and the problems they encounter.  As such, we need to look for research that accurately represents:

  • Who is participating in the study
  • Under what circumstances the study is being conducted
  • What the research is attempting to determine

Further, we want to ensure that:

  • Findings are presented accurately and reflect what was shared by participants (the raw data)
  • A reasonably good explanation of how the researcher got from the raw data to their findings is presented
  • The researcher adequately considered and accounted for their potential influence on the research process

As we talk about different tools we can use to help establish qualitative rigor, I will try to point out tips for what to look for as you read qualitative studies. While rigor can’t “prove” quality, it can demonstrate steps taken that reflect thoughtfulness and attention on the part of the researcher(s). The American Psychological Association offers a resource on reviewing qualitative research manuscripts. It’s a bit beyond the level of critiquing that I would expect from a beginning qualitative research student; however, it does provide a really nice overview of this process. Even if you aren’t familiar with all the terms, I think it can be helpful in giving an overview of the general thought process that should be taking place.

To begin breaking down how to think about rigor, I find it helpful to have a framework for understanding the different concepts that support or are associated with rigor. Lincoln and Guba (1985) have suggested such a framework for thinking about qualitative rigor, one that has widely contributed to the standards often employed for qualitative projects. The overarching concept around which this framework is centered is trustworthiness. Trustworthiness reflects how much stock we should put in a given qualitative study – is it really worth our time, headspace, and intellectual curiosity? A study that isn’t trustworthy suggests poor quality resulting from inadequate forethought, planning, and attention to detail in how the study was carried out, and we should have little confidence in its findings.

According to Lincoln and Guba (1985),[1] trustworthiness is grounded in four key ideas, each paired with questions to help you conceptualize how it relates to your study. Each of these concepts is discussed below with some considerations to help you compare and contrast these ideas with more positivist or quantitative constructs of research quality.

Truth value

You have already been introduced to the concept of internal validity. As a reminder, establishing internal validity is a way to ensure that the change we observe in the dependent variable is the result of variation in our independent variable – did we actually design a study that truly tests our hypothesis? In most qualitative studies we don’t have hypotheses or independent and dependent variables, but we do still want to design a study where our audience (and we ourselves) can be relatively sure that we arrived at our findings through a systematic and scientific process, and that those findings can be clearly linked back to the data we used rather than to some fabrication or falsification of that data; in other words, we are concerned with the truth value of the research process and its findings. We want to give our readers confidence that we didn’t just make up our findings or “see what we wanted to see”.


Applicability

Applicability parallels the quantitative concept of external validity and is often discussed as transferability: readers can only judge whether our findings transfer to their own settings if we clearly and thoroughly describe:

  • who we were studying
  • how we went about studying them
  • what we found

Consistency

Consistency, often discussed as dependability, asks whether the research process was logical, systematic, and clearly documented – could another researcher follow the path we took and understand the decisions we made along the way?

These concepts reflect a set of standards that help to determine the integrity of qualitative studies. At the end of this chapter you will be introduced to a range of tools to help support or reflect these various standards in qualitative research. Because different qualitative designs (e.g., phenomenology, narrative, ethnography), which you will learn more about in Chapter 22, emphasize or prioritize different aspects of quality, certain tools will be more appropriate for certain designs. Since this chapter is intended to give you a general overview of rigor in qualitative studies, exploring additional resources will be necessary to best understand which of these concepts are prioritized in each type of design and which tools best support them.

Key Takeaways

  • Qualitative research is generally conducted within an interpretivist paradigm. This differs from the post-positivist paradigm in which most quantitative research originates. This fundamental difference means that the overarching aims of these approaches to knowledge building differ and, consequently, so do our standards for judging the quality of research within these paradigms.
  • Assessing the quality of qualitative research is important, both from a researcher and a practitioner perspective.  On behalf of our clients and our profession, we are called to be critical consumers of research. To accomplish this, we need strategies for assessing the scientific rigor with which research is conducted.
  • Trustworthiness and its associated concepts – credibility, transferability, dependability, and confirmability – provide a framework for assessing rigor or quality in qualitative research.

20.2 Ethical responsibility and cultural respectfulness

  • Discuss the connection between rigor and ethics as they relate to the practice of qualitative research
  • Explain how the concepts of accountability and transparency lay an ethical foundation for rigorous qualitative research

The two concepts of rigor and ethics in qualitative research are closely intertwined. It is a commitment to ethical research that leads us to conduct research in rigorous ways, so as not to put forth research that is of poor quality, misleading, or altogether false. Furthermore, the tools that demonstrate rigor in our research are reinforced by solid ethical practices. For instance, as we build a rigorous protocol for collecting interview data, part of this protocol must include a well-executed, ethical informed consent process; otherwise, we hold little hope that our efforts will lead to trustworthy data. Both ethics and rigor shine a light on our behaviors as researchers. These concepts offer standards by which others can critique our commitment to quality in the research we produce. They are both tools for accountability in the practice of research.

Related to this idea of accountability, rigor requires that we promote a sense of transparency in the qualitative research process. We will talk extensively in this chapter about tools to help support this sense of transparency, but first, I want to explore why transparency is so important for ethical qualitative research. As social workers, our own knowledge, skills, and abilities to help serve our clients are our tools. Similarly, qualitative research demands that the social work researcher be an actively involved human instrument in the research process.

While quantitative researchers also make a commitment to transparency, they may have an easier job of demonstrating it. Let’s just think about the data analysis stage of research. The quantitative researcher has a data set, and based on that data set there are certain tests that they can run. Those tests are mathematically defined and computed by statistical software packages, and we have established guidelines for interpreting the results and reporting the findings. There is most certainly tremendous skill and knowledge exhibited in the many decisions that go into this analysis process; however, the rules and requirements that lay the foundation for these mathematical tests mean that much of the process is prescribed for us. These prescribed procedures offer quantitative researchers a shorthand for talking about their transparency.

In comparison, the qualitative researcher sitting down with their data for analysis will engage in a process that requires them to make hundreds or thousands of decisions about what pieces of data mean, what labels they should have, how they relate to other ideas, and what their larger significance is for the final results. That isn’t to say that we don’t have procedures and processes as qualitative researchers; we just can’t rely on mathematics to make these decisions precise and clear. We have to rely on ourselves as human instruments. Adopting a commitment to transparency in our research as qualitative researchers means that we actively describe for our audience the role we play as human instruments and consider how this shapes the research process. This allows us to avoid unethically representing what we did in our research process and what we found.

I think that as researchers we can sometimes think of data as an object that is not inherently valuable, but rather a means to an end.  But if we see qualitative data as part of sacred stories that are being shared with us, doesn’t it feel like a more precious resource? Something worthy of thoughtfully and even gently gathering, something that needs protecting and safe-keeping. Adhering to a rigorous research process can help to honor these commitments and avoid the misuse of data as a precious resource. Thinking like this will hopefully help us to demonstrate greater cultural humility as social work researchers.

  • Ethics and rigor are interdependent; both call attention to our behaviors as researchers and to the quality and care with which our research is conducted.
  • Accountability and transparency in qualitative research help to demonstrate that, as researchers, we are acting with integrity. This means that we are clear about how we are conducting our research, what decisions we are making during the research process, and how we have arrived at these decisions.

While this activity comes early in the chapter, I want you to consider for a few moments how accountability relates to your research proposal.

  • Who are you accountable to as you plan and carry out your research?
  • In what ways are you accountable to each of the people you listed in the previous question?

20.3 Critical considerations

  • Identify some key questions for a critical critique of research planning and design
  • Differentiate some alternative standards for rigor according to more participatory research approaches

As I discussed above, rigor shines a spotlight on our actions as researchers. A critical perspective is one that challenges traditional arrangements of power and control and the role of structural forces in maintaining oppression and inequality in society. From this perspective, rigor takes on additional meaning beyond the internal integrity of the qualitative processes used by you or me as researchers; it suggests that standards of quality need to address accountability to our participants and the communities they represent, NOT just the scientific community. There are many evolving dialogues about what criteria constitute “good” research in critical traditions, including participatory and empowerment approaches that have their roots in the critical perspective. These discussions could easily stand as their own chapter; however, for our purposes, we will borrow some questions from these critical debates to consider how they might inform the work we do as qualitative researchers.

Who gets to ask the questions?


In the case of your research proposal, chances are you are outlining your research question. Because our research question truly drives our research process, it carries a lot of weight in the planning and decision-making process of research. In many instances, we bring our fully-formed research projects to participants, and they are only involved in the collection of data. But critical approaches would challenge us to involve people who are impacted by the issues we are studying from the onset. How can they be involved in the early stages of study development, even in defining our question? If we treat their lived experience as expertise on the topic, why not start early using this channel to guide how we think about the issue? This challenges us to give up some of our control and to listen for the “right” question before we ask it.

Who owns the data and the findings?

Answering this question from a traditional research approach is relatively clear – the researcher or rather, the university or research institution they represent. However, critical approaches question this.  Think about this specifically in terms of qualitative research. Should we be “owning” pieces of other people’s stories, since that is often the data we are working with? What say do people get in what is done with their stories and the findings that are derived from them? Unfortunately, there aren’t clear answers. These are some critical questions that we need to struggle with as qualitative researchers.

  • How can we disrupt or challenge current systems of data ownership, empowering participants to maintain greater rights?
  • What could more reciprocal research ownership arrangements look like?
  • What are the benefits and consequences of disrupting this system? 
  • What are the benefits and consequences of perpetuating our current system?


What is the sustained impact of what I’m doing?

As qualitative researchers, our aim is often exploring meaning and developing understanding of social phenomena. However, criteria from more critical traditions challenge us to think more tangibly and with more immediacy. They require us to answer questions about how our involvement with this specific group of people within the context of this project may directly benefit or harm the people involved. This not only applies in the present but also in the future.

We need to consider questions like:

  • How has our interaction shaped participants’ perceptions of research?
  • What ripple effects have the questions raised by our study left behind?
  • What thoughts or feelings have been reinforced or challenged, both within the community but also for outsiders?
  • Have we built/strengthened/damaged relationships?
  • Have we expanded/depleted resources for participants?

We need to reflect on these topics in advance, carefully considering the potential ramifications of our research before we begin. This helps to demonstrate critical rigor in our approach to research planning. Furthermore, research that is conducted in participatory traditions should actively involve participants and other community members in defining what the immediate impacts of the research should be. We need to ask early and often: what do they need as a community, and how can research be a tool for accomplishing this? Their answers to these questions then become the criteria on which our research is judged. In designing research for direct and immediate change and benefit to the community, we also need to think about how well we are designing for sustainable change. Have we crafted a research project that creates lasting transformation, or something that will be only short-lived?

As students and as scholars we are often challenged by constraints as we address issues of rigor, especially some of the issues raised here. One of the biggest constraints is time.  As a student, you are likely building a research proposal while balancing many demands on your time. To actively engage community members and to create sustainable research projects takes considerable time and commitment. Furthermore, we often work in highly structured systems that have many rules and regulations that can make doing things differently or challenging convention quite hard.  However, we can begin to make a more equity-informed research agenda by:

  • Reflecting on issues of power and control in our own projects
  • Learning from research that models more reciprocal relationships between researcher and researched
  • Finding new and creative ways to actively involve participants in the process of research and in sharing the benefits of research

In the resource box below, you will find links for a number of sources to learn more about participatory research methods that embody the critical perspective in research that we have been discussing.

As we turn our attention to rigor in the various aspects of the qualitative research process, continue to think about what critical criteria might also apply to each of these areas.

  • Traditional research methods, including many qualitative approaches, may fail to challenge the ways that the practice of research can disenfranchise and disempower individuals and communities.
  • Researchers working from critical perspectives often question the power arrangements, roles, and objectives of more traditional research methods and have been developing alternatives such as participatory research approaches. These participatory approaches engage participants in much more active ways and, furthermore, evaluate the quality of research by the direct and sustained benefit it brings to participants and their communities.

Bergold, J., & Thomas, S. (2012). Participatory research methods: A methodological approach in motion. Historical Social Research/Historische Sozialforschung, 13(1), 191-222. https://www.jstor.org/stable/41756482

Center for Community Health and Development, University of Kansas. (n.d.). Community toolbox: Section.2 Community-based participatory research [webpage]. https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/intervention-research/main

New Tactics in Human Rights. (n.d.). Participatory research for action [webpage]. https://www.newtactics.org/conversation/participatory-research-action

Pain, R., Whitman, G., Milledge, D., & Lune Rivers Trust. (2010). Participatory action research toolkit: An introduction to using PAR as an approach to learning, research and action. http://www.communitylearningpartnership.org/wp-content/uploads/2017/01/PARtoolkit.pdf

Participate. (n.d.). Participatory research methods [webpage]. https://participatesdgs.org/methods/

20.4 Data capture: Striving for accuracy in our raw data

  • Explain the importance of paying attention to the data collection process for ensuring rigor in qualitative research
  • Identify key points that you will need to consider and address in developing a plan for gathering data to support rigor in your study

It is very hard to make a claim that research was conducted in a rigorous way if we don’t start with quality raw data. That is to say, if we botch our data collection, we really can’t produce trustworthy findings, no matter how good our analysis is. So what is quality raw data? From a qualitative research perspective, quality raw data means that the data we capture provides an accurate representation of what was shared with us by participants or through other data sources, such as documents. This section is meant to help you consider rigor as it pertains to how you capture your data. This might mean how you document the information from your interviews or focus groups, how you record your field notes as you are conducting observations, what you track down and access for other artifacts, or how you produce your entries in your reflexive journal (as this can become part of your data, as well).

This doesn’t mean that all your data will look the same. However, you will want to anticipate the type(s) of data you will be collecting and what format they will be in. In addition, whenever possible and appropriate, you will want the data you collect to be in a consistent format. So, if you are conducting interviews and you decide that you will capture this data by taking field notes, you will use a similar strategy for gathering information at each interview. You would avoid using field notes for some, recording and transcribing others, and then having emailed responses from the remaining participants. You might be wondering why this matters; after all, you are asking them the same questions. However, using these different formats to capture your data can make your data less comparable, because different formats may lead to different information being shared by the participant and different information being captured by the researcher. For instance, if you rely on email responses, you lose the ability to follow up with the probing questions you might have introduced in an in-person interview. Participants who were recorded may not have felt as free to share information as those whose interviews you captured with field notes. It becomes harder to know whether variation in your data is due to diversity in peoples’ experiences or just differences in how you went about capturing your data. Now we will turn our attention to quality in different types of data.

As qualitative researchers, we often are dealing with written data. At times, it may be participants who are doing the writing.  We may ask participants to provide written responses to questions or we may use writing samples as artifacts that are produced for some other purpose that we have permission to include in our study. In either case, ideally we are including this written data with as little manipulation as possible. If we do things like take passages or ideas out of context or interpret segments in our own words, we run a much greater risk of misrepresenting the data that is being shared with us. This is a direct threat to rigor, compromising the quality of the raw data we are collecting.   If we need to clarify what a participant means by one of their responses and we have the opportunity to follow up with them, we want to capture their own words as closely as we can when they provide their explanation.  This is also true if we ask participants to provide us with drawings.  For instance, we may ask youth to provide a drawn response to a question as an age-appropriate way to respond to a question, but we might follow-up by asking them to explain their drawing to us.  We would want to capture their description as close to their own words as possible, including both the drawing and the description in our data.


Researchers may also be responsible for producing written data. Rigorous field notes strive to capture participants’ words as accurately as possible, which usually means quoting more and paraphrasing less.  Of course we can’t avoid paraphrasing altogether (unless you have incredible shorthand skills, which I definitely do not), but the more interpreting or filtering we do as we capture our data, the less trustworthy it becomes. You also want to stick to a consistent method of recording your field notes.  It becomes much harder to analyze your data if you have one system one day and another system another day.  The quality of the notes may differ greatly and differences in organization may make it challenging to compare across your data. Finally, rigorous field notes usually capture context, as well.  If you are gathering field notes for an interview or during a focus group, this may mean that you take note of non-verbal information during the exchange. If you are conducting an observation, your field notes might contain detailed information about the setting and circumstances of the observation.

As qualitative researchers, we may also be working with audio, video, or other forms of media data.  Much of what we have already discussed in respect to written data also applies to these data formats, as well. The less we manipulate or change the original data source, the better. For example, if you have an audio recording of your focus group, you want your transcript to be as close to verbatim as possible. Also, if we are working with a visual or aural medium, like a performance, capturing context and description – including audience reactions – with as much detail as possible is vital if we are looking to analyze the meaning of such an event or experience.

This topic shouldn’t require more than a couple of sentences as you write up your research proposal. However, those sentences should reflect some careful forethought and planning. Remember, this is the hand-off! If research were a relay, this is the point where the baton gets passed, as the participant or source transfers information to the study. You also want to ensure that you select a strategy that can be applied consistently as part of a systematic process. Now we need to come up with a plan for managing our data.

Data will be collected using semi-structured interviews. Interviews will be digitally recorded and transcribed verbatim. In addition, the researcher will take field notes during each interview (see field note template, appendix A).  

As they are gathered, documents will be assigned a study identification number. Along with their study ID, a brief description of the document, its source, and any other historical information will be kept in the data tracking log (see data tracking log, appendix B).   
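A data tracking log like the one described above can also be kept programmatically. The sketch below is only an illustration: the field names mirror the example (study ID, description, source, date), but the function name, file name, and sample entry are hypothetical, not part of any prescribed template.

```python
import csv
from datetime import date

# Illustrative field names for a data tracking log; not a prescribed format.
LOG_FIELDS = ["study_id", "description", "source", "date_logged"]

def log_document(log_path, study_id, description, source):
    """Append one document's metadata to the tracking log (a CSV file)."""
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # brand-new log: write the header row first
            writer.writeheader()
        writer.writerow({
            "study_id": study_id,
            "description": description,
            "source": source,
            "date_logged": date.today().isoformat(),
        })

# Hypothetical example entry, logged as the document is gathered
log_document("tracking_log.csv", "DOC-001",
             "Agency annual report, 2019", "Community partner website")
```

Appending one row per document as it arrives keeps the log current and gives each artifact a stable study ID you can cite later in your analysis.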

  • Anticipating and planning for how you will systematically and consistently gather your data is crucial for a rigorous qualitative research project.
  • When conducting qualitative research, we not only need to consider the data that we collect from other sources, but the data that we produce ourselves.  As human instruments in the research process, our reaction to the data also becomes a form of data that can shape our findings.  As such, we need to think about how we can capture this as well.

How will you ensure that you use a consistent and systematic approach for qualitative data collection in your proposal?

20.5 Data management: Keeping track of our data and our analysis

  • Explain how data management and data analysis in qualitative projects can present unique challenges or opportunities for demonstrating quality in the research process
  • Plan for key elements to address or include in a data management plan that supports qualitative rigor in the study

Elements to think about

Once data collection begins, we need a plan for what we are going to do with the data. As we discussed in the chapter devoted to qualitative data collection, this is often an important point of departure between quantitative and qualitative methods. Quantitative research tends to be much more sequential: first we collect all the data, then we analyze the data. If we didn’t do it this way, we wouldn’t know what numbers we were dealing with. With qualitative data, however, we are usually collecting and beginning to analyze our data simultaneously. This offers us a great opportunity to learn from our data as we gather it. However, it also means that if you don’t have a plan for how you are going to manage these dual processes of data collection and data analysis, you are going to get overwhelmed twice as fast! A rigorous approach will include a clearly defined process for labeling and tracking your data artifacts, whether they are text documents (e.g., transcripts, newspaper clippings, advertisements), photos, videos, or audio recordings. These may be physical documents, but more often than not they are electronic. In either case, a clear, documented labeling system is required. This becomes very important because you are going to need to come back to each artifact at some point during your analysis, and you need a way of tracking it down. Let’s talk a bit more about this.

You were introduced to the term iterative process in our previous discussions about qualitative data analysis. As a reminder, an iterative process is one that involves repetition; in the case of working with qualitative data, it means that we will be engaging in a repeating and evolving cycle of reviewing our data, noting our initial thoughts and reactions about what the data means, collecting more data, and going back to review the data again. Figure 20.1 depicts this iterative process. To adopt a rigorous approach to qualitative analysis, we need to think about how we will capture and document each point of this iterative process. This is how we demonstrate transparency in our data analysis process and how we detail the work that we are doing as human instruments.

Figure 20.1. Visual representation of the qualitative data analysis process: interconnecting gears labeled “gathering data,” “review,” and “develop understanding.”

During this process, we need to consider:

  • How will we capture our thoughts about the data, including what we are specifically responding to in the data?
  • How do we introduce new data into this process? 
  • How do we record our evolving understanding of the data and what those changes are prompted by?

So we have already talked about the importance of labeling our artifacts, but each artifact is likely to contain many ideas. For instance, think about the many ideas that are shared in a single interview. Because of this, we also need a clear and standardized way of labeling smaller segments of data within each artifact that represent discrete or separate ideas. If you recall from our analysis chapter, these labeled segments are called units. You are likely to have many, many units in each artifact. Additionally, as suggested above, you need a way to capture your thought process as you respond to the data. This documentation is called memoing, a term you were also introduced to in our analysis chapter. These various components – labeling your artifacts, labeling your units, and memoing – come together as you produce a rigorous plan for how you document your data analysis. Again, rigor here is closely associated with transparency. This means that you are using these tools to document a clear road map for how you got from your raw data to your findings. The term for this road map is an audit trail, and we will speak more about it in the next section. The test of this aspect of rigor becomes your ability to work backwards, or better yet, for someone else to work backwards. Could someone not connected with your project look at your findings and, using your audit trail, trace these ideas all the way back to specific points in your raw data? The term for this is an external audit, which will also be further explained below. If someone can do this, we sometimes say that your findings are clearly “grounded in your data”.
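As a toy illustration of this “working backwards” test – the data structure, theme name, and labels here are my own invention, not a standard audit-trail format – if each theme keeps the labels of the units that support it, anyone can mechanically trace a finding back to specific lines of raw data.

```python
# Toy audit trail: each theme records the unit labels that support it,
# and each label points back to (transcript, line) in the raw data.
# The theme and labels are hypothetical examples.
themes = {"feeling unheard": ["1.042", "2.115"]}

def trace(theme):
    """Work backwards from a theme to the raw-data locations behind it."""
    sources = []
    for label in themes[theme]:
        transcript, line = label.split(".")
        sources.append((int(transcript), int(line)))
    return sources

print(trace("feeling unheard"))  # [(1, 42), (2, 115)]
```

An external auditor could run exactly this kind of lookup by hand: pick a finding, follow its unit labels, and open the named transcripts at the named lines.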

What our plan for data management might look like.

If you are working with physical data, you will need a system for logging and storing your artifacts. In addition, as you break your artifacts down into units, you may well be copying pieces of these artifacts onto small note cards or post-its that serve as your data units. These smaller units become easier to manipulate and move around as you think about which ideas go together and what they mean collectively. However, each of these smaller units needs a label that links it back to its artifact. But why go through all this? Well, it isn’t just for the sake of transparency and being able to link your findings back to the original raw data, although that is certainly important. You will also likely reach a point in your analysis where themes are coming together and you are starting to make sense of things. When this occurs, you will have a pile of units from various artifacts under each of these themes. At this point you will want to know where the information in the units came from. If it was verbal data, you will want to know who said it or what source it came from. This offers us important information about the context of our findings and who or what they are connected to. We can’t determine this unless we have a good labeling system.


You will need to come up with a system that makes sense to you and fits your data. As an example, I’m often working with transcripts from interviews or focus groups. As I am collecting my data, each transcript is numbered as I obtain it. Also, the transcripts themselves have continuous line numbers on them. When I start to break up or deconstruct my data, each unit gets a label that consists of two numbers separated by a period. The number before the period is the transcript that the unit came from, and the number after the period is the line number within that transcript, so that I can find exactly where the information is. So, if I have a unit labeled 3.658, it means that this data can be found in my transcript labeled 3, on line 658.
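To make this labeling scheme concrete, here is a minimal sketch in Python of the transcript.line convention described above. The function names are my own invention for illustration; this is not part of any qualitative analysis software.

```python
# Sketch of the transcript.line labeling scheme: "3.658" means the
# unit came from transcript 3, line 658.

def make_unit_label(transcript: int, line: int) -> str:
    """Build a label that points back to the exact spot in the raw data."""
    return f"{transcript}.{line}"

def parse_unit_label(label: str) -> tuple[int, int]:
    """Split a label like '3.658' into (transcript_number, line_number)."""
    transcript, line = label.split(".")
    return int(transcript), int(line)

print(make_unit_label(3, 658))    # 3.658
print(parse_unit_label("3.658"))  # (3, 658)
```

The point of the sketch is simply that the label is reversible: given any unit, you (or an external auditor) can mechanically recover the transcript and line it came from.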

Now, I often use electronic versions of my transcripts when I break them up. As I showed in our data analysis chapter, I create an Excel file where I can cut and paste the data units, their labels, and the preliminary code I am assigning to each idea. I find Excel useful because I can easily sort my data by codes and start to look for emerging themes. Furthermore, above I mentioned memoing, or recording my thoughts and responses to the data. I can easily do this in Excel by adding an additional column for memoing, where I can put my thoughts/responses next to a particular unit and date them, so I know when I was having that thought. Generally speaking, I find that Excel makes it pretty easy for me to manipulate or move my data around while I’m making sense of it, while also documenting this. Of course, the qualitative data analysis software packages that I mentioned in our analysis chapter all have their own systems for activities such as assigning labels, coding, and memoing. And if you choose to use one of these, you will want to be well acquainted with how to do this before you start collecting data. That being said, you don’t need software or even Excel to do this work. I know many qualitative researchers who prefer having physical data in front of them, allowing them to shift note cards around and more clearly visualize their emerging themes. If you elect for this, you just need to make sure you track the moves you are making and your thought process during the analysis. And be careful if you have a cat; mine would have a field day with piles of note cards left on my desk!
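The spreadsheet described above can be pictured as a simple table with one row per data unit. Here is a minimal sketch in Python; the sample quotes, codes, and memos are entirely invented for illustration and do not come from any actual study.

```python
# One row per data unit: its label (transcript.line), the excerpt itself,
# a preliminary code, and a dated memo recording the analyst's thinking.
units = [
    {"label": "1.042", "text": "I felt heard for the first time.",
     "code": "validation", "memo": "2020-03-02: ties to respect?"},
    {"label": "3.658", "text": "Staff kept changing, so I gave up.",
     "code": "continuity", "memo": "2020-03-05: possible negative case"},
    {"label": "2.117", "text": "She remembered my story each visit.",
     "code": "continuity", "memo": "2020-03-03: contrast with 3.658"},
]

# Sorting by code groups related units together -- the same move as
# sorting the spreadsheet by its code column to spot emerging themes.
for unit in sorted(units, key=lambda u: u["code"]):
    print(unit["code"], unit["label"], unit["text"])
```

Sorting by code here mirrors sorting the Excel sheet by its code column: rows with the same preliminary code land next to each other, and each still carries the label that points back to its transcript and line.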

  • Due to the dynamic and often iterative nature of qualitative research, we need to proactively consider how we will store and analyze our qualitative data, often at the same time we are collecting it.
  • Whatever data management system we plan for, it needs to have consistent ways of documenting our evolving understanding of what our data mean. This documentation acts as an important bridge between our raw qualitative data and our qualitative research findings, helping to support rigor in our design.

20.6 Tools to account for our influence

  • Identify key tools for enhancing qualitative rigor at various stages of the research process
  • Begin to critique the quality of existing qualitative studies based on the use of these tools
  • Determine which tools may strengthen the quality of our own qualitative research designs

So I’ve saved the best for last. This is a concrete discussion of tools that you can utilize to demonstrate qualitative rigor in your study. The previous sections in this chapter suggested topics you need to think about related to rigor; this section offers strategies to actually accomplish it. Remember, these are tools you should also be looking for as you examine other qualitative research studies. As I previously mentioned, you won’t be looking to use all of these in any one study, but rather determining which tools make the most sense based on your study design.

Some of these tools apply throughout the research process, while others are more specifically applied at one stage of research. For instance, an audit trail is created during your analysis phase, while peer debriefing can take place throughout all stages of your research process. These come to us from the work of Lincoln and Guba (1985) [2]. Along with the argument that we need separate criteria for judging the quality of research from the Interpretivist paradigm (as opposed to the Positivist criteria of reliability and validity), they also proposed a compendium of tools to help meet these criteria. We will review each of these tools, and an example will be provided after each description.

Observer triangulation

Observer triangulation involves including more than one member of your research team to aid in analyzing the data. Essentially, you will have at least two sets of eyes looking at the data, independently drawing findings out of it, and then comparing those findings, converging on agreement about what the final results should be. This helps us to ensure that we aren’t just seeing what we want to see.

Data triangulation

Data triangulation is a strategy that you build into your research design where you include data from multiple sources to help enhance your understanding of a topic.  This might mean that you include a variety of groups of people to represent different perspectives on the issue. This can also mean that you collect different types of data. The main idea here is that by incorporating different sources of data (people or types), you are seeking to get a more well-rounded or comprehensive understanding of the focus of your study.

People: Instead of just interviewing mental health consumers about their treatment, you also include family members and providers.

Types: I have conducted a case study where we included interviews and the analysis of multiple documents, such as emails, agendas, and meeting minutes.

Peer debriefing

Peer debriefing means that you intentionally plan for and meet with a qualitative researcher outside of your team to discuss your process and findings and to help examine the decisions you are making, the logic behind them, and your potential influence and accountability in the research process. You will often meet with a peer debriefer multiple times during your research process and may do things like: review your reflexive journal ; review certain aspects of your project, such as preliminary findings; discuss current decisions you are considering; and review the current status of your project. The main focus here is building in some objectivity to what can become a very subjective process. We can easily become very involved in this research and it can be hard for us to step back and thoughtfully examine the decisions we are making.

Member-checking

Member-checking has to do with incorporating research participants into the data analysis process. This may mean actively including them throughout the analysis, either as co-researchers or as consultants. It can also mean that once you have the findings from your analysis, you take these to your participants (or a subset of your participants) and ask them to review the findings and provide you feedback about their accuracy. When I member-check, I will often ask participants: can you hear your voice in these findings? Do you recognize what you shared with me in these results? We often need to preface member-checking by saying that we are bringing together many people’s ideas, so we are trying to represent multiple perspectives, but we want to make sure that their perspective is included in there. This can be a very important step in ensuring that we did a reasonable job getting from our raw data to our findings… did we get it right? It also gives some power back to participants, as we are giving them some say in what our findings look like.


Thick description

Providing a thick description means that you are giving your audience a rich, detailed description of your findings and the context in which they exist.  As you read a thick description, you walk away feeling like you have a very vivid picture of what the research participants felt, thought, or experienced, and that you now have a more complete understanding of the topic being studied. Of course, a thick description can’t just be made up at the end. You can’t hope to produce a thick description if you haven’t done work early on to collect detailed data and performed a thorough analysis.  Our main objective with a thick description is being accountable to our audience in helping them to understand what we learned in the most comprehensive way possible.

Reflexivity

Reflexivity pertains to how we understand and account for our influence, as researchers, on the research process. In social work practice, we talk extensively about our “use of self” as social workers, meaning that we work to understand how our unique personhood (who we are) impacts or influences how we work with our clients. Reflexivity is about applying this to the process of research, rather than practice. It assumes that our values, beliefs, understanding, and experiences all may influence the decisions that we make as we engage in research. By engaging in qualitative research with reflexivity, we are attempting to be transparent about how we are shaping and being shaped by the research we are conducting.

Prolonged engagement

Prolonged engagement means that we spend extensive time with participants or in the community we are studying. We visit on multiple occasions during the study in an attempt to get the most complete picture or understanding possible. This can be very important for us as we attempt to analyze and interpret our data. If we haven’t spent enough time getting to know our participants and their community, we may miss the meaning of data that is shared with us because we don’t understand the cultural subtext in which this data exists. The main idea here is that we don’t know what we don’t know; furthermore, we can’t know it unless we invest time getting to know it! There’s no short-cut here; you have to put in the time.

Audit trail

Creating an audit trail is something we do during our data analysis process as qualitative researchers. An audit trail is essentially a map of how you got from your raw data to your research findings. This means that you should be able to work backwards, starting with your research findings and tracing them back to your raw data. It starts with labeling our data as we begin to break it apart (deconstruction) and then reassemble it (reconstruction). It allows us to determine where ideas came from and how/why we put ideas together to form broader themes. An audit trail offers transparency in our data analysis process. It is the opposite of the “black box” we spoke about in our qualitative analysis chapter, making it clear how we got from point A to point B.
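One way to picture an audit trail is as a simple nested mapping: each theme points to the codes grouped under it, and each code points to the unit labels that locate the supporting excerpts in the raw transcripts. The sketch below assumes the transcript.line labeling scheme from earlier in the chapter; the theme, code, and label names are invented for illustration.

```python
# A toy audit trail: themes -> codes -> unit labels (transcript.line).
audit_trail = {
    "themes": {"relationship quality": ["validation", "continuity"]},
    "codes": {"validation": ["1.042"], "continuity": ["2.117", "3.658"]},
}

def trace_back(theme: str) -> list[str]:
    """Work backwards from a finding (theme) to its raw-data locations."""
    labels = []
    for code in audit_trail["themes"][theme]:
        labels.extend(audit_trail["codes"][code])
    return labels

print(trace_back("relationship quality"))  # ['1.042', '2.117', '3.658']
```

An external auditor performing the backward trace described above is, in effect, running `trace_back` by hand: starting from a finding and ending at specific transcript lines.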


External audit

An external audit is when we bring in a qualitative researcher not connected to our project, once the study has been completed, to examine the research project and the findings and “evaluate the accuracy and evaluate whether or not the findings, interpretations and conclusions are supported by the data” (Robert Wood Johnson Foundation, External Audits). An external auditor will look at all of our research materials, but will make extensive use of our audit trail to ensure that an outside observer can establish a clear link between our findings and the raw data we collected. Much like a peer debriefer, an external auditor can offer an outside critique of the study, thereby helping us to reflect on the work we are doing and how we are going about it.

Negative case analysis

Negative case analysis involves including data that contrasts, contradicts, or challenges the majority of evidence that we have found or expect to find. This may come into play in our sampling, meaning that we may seek to recruit or include a specific participant or group of participants because they represent a divergent opinion. Or, as we begin our analysis, we may identify a unique or contrasting idea or opinion that seems to contradict the majority of what our other data seem to point to. In this case, we choose to intentionally analyze and work to understand this unique perspective in our data. As with a thick description, a negative case analysis is attempting to offer the most comprehensive and complete understanding of the phenomenon we are studying, including divergent or contradictory ideas that may be held about it.

Now let’s take some time to think through each of the stages of the design process and consider how we might apply some of these strategies. Again, these tools are to help us, as human instruments, better account for our role in the qualitative research process and also to enhance the trustworthiness of our research when we share it with others. It is unrealistic that you would apply all of these, but attention to some will indicate that you have been thoughtful in your design, concerned about the quality of your work, and attentive to the confidence others can place in your findings.

First let’s discuss sampling. We have already discussed that qualitative research generally relies on non-probability sampling and have reviewed some specific non-probability strategies you might use.  However, along with selecting a strategy, you might also include a couple of the rigor-related tools discussed above.  First, you might choose to employ data triangulation.  For instance, maybe you are conducting an ethnography studying the culture of a peer-support clubhouse.  As you are designing your study, along with extensive observations you plan to make in the clubhouse, you are also going to conduct interviews with staff, board members, and focus groups with members.  In this way you are combining different types of data (i.e. observations, focus groups, interviews) and perspectives (i.e. yourself as the researcher, members, staff, board). In addition, you might also consider using negative case analysis. At the planning stage, this could involve you intentionally sampling a case or set of cases that are likely to provide an alternative view or perspective compared to what you might expect to find. Finally, specifically articulating your sampling rationale can also enhance the rigor of your research (Barusch, Gringeri, & George, 2011) [3] . While this isn’t listed in our tools table, it is generally a good practice when reporting your research (qualitative or quantitative) to outline your sampling strategy with a brief rationale for the choices you made. This helps to improve the transparency of your study.

Next, we can progress to data gathering. The main rigor-related tool that directly applies to this stage of your design is likely prolonged engagement. Here we build in, or plan for, extensive time with participants gathering data. This might mean that we return for repeated interviews with the same participants or that we go back numerous times to make observations and take field notes. While this can take many forms, the overarching idea here is that you build in time to immerse yourself in the context and culture that you are studying. Again, there is no short-cut here; it demands time in the field getting to know people, places, significance, history, etc. You need to appreciate the context and the culture of the situation you are studying.

Something special to consider here is insider/outsider status. If you would consider yourself an “outsider”, that is to say someone who does not belong to the same group or community of people you are studying, it may be quite obvious that you will need to spend time getting to know this group and take considerable time observing and reflecting on the significance of what you see. However, if you are a researcher who is a member of the particular community you are studying, or an “insider”, I would suggest that you still need to work objectively to take a step back, make observations, and try to reflect on what you see, what you thought you knew, and what you come to know about the community you belong to. In both cases, prolonged engagement requires good self-reflection and observation skills.

A number of these tools may be applied during the data analysis process. First, if you have a research team, you might use observer triangulation, although this might not be an option as a student unless you are building a proposal as a group. As explained above, observer triangulation means that more than one of you will be examining the data that has been collected and drawing results from it. You will then compare these results and ultimately converge on your findings.

Example. I’m currently using the following strategy on a project where we are analyzing data collected over a number of focus groups. We have a team of four researchers and our process involves:

  • reviewing our initial focus group transcripts
  • individually identifying important categories that were present
  • collectively processing these together and identifying specific labels we would use for a second round of coding
  • individually returning to the transcripts with our codes and coding all the transcripts
  • collectively meeting again to discuss what subthemes fell under each of the codes and if the codes fit or needed to be changed/merged/expanded

While the process was complex, I do believe this triangulation of observers enriched our analysis process. It helped us to gain a clearer understanding of our results as we collectively discussed and debated what each theme meant based on our individual understandings of the data.

While we did discuss negative case analysis above in the sampling phase, it is also worth mentioning here. Contradictory findings may crop up during our analysis. One of our participants may share something, or we may find something in a document, that is seemingly at odds with the majority of the rest of our data. Rather than ignoring this, negative case analysis would seek to understand this perspective and what might be behind this contradiction.

In addition, we may choose to construct an audit trail as we move from raw data to our research findings during our data analysis. This means that we will institute a strategy for tracking our analysis process. I imagine that most researchers develop their own variation on this tracking process, but at its core, you need to find a way to label your segments of data so that you know where they came from once you start to break them up. Furthermore, you will be making decisions about what groups of data belong together and what they mean. Your tracking process for your audit trail will also have to provide a way to document how you arrived at these decisions.

Often towards the end of an analysis process, researchers may choose to employ member checking (although you may also implement this throughout your analysis). In the example above where I was discussing our focus group project, we plan to take our findings back to some of our focus group participants to see if they feel that we captured the important information based on what they shared with us.

As discussed in sampling, it is also a good practice to make sure to articulate your qualitative analysis process clearly. Unfortunately, I’ve read a number of qualitative studies where the researchers provide little detail regarding what their analysis looked like and how they arrived at their results. This often leaves me with questions about the quality of what was done.

Now we need to share our research with others. The most relevant tool specific to this phase is providing a thick description of our results. As indicated in the table, a thick description means that we offer our audience a very detailed, rich narrative to help them interpret and make sense of our results. Remember, the main aim of qualitative research is not necessarily to produce results that generalize to a large group of people. Rather, we are seeking to enhance understanding about a particular experience, issue, or phenomenon by studying it very extensively in a relatively small sample. This produces a deep, as opposed to a broad, understanding. A thick description can be very helpful by offering detailed information about the sample, the context in which the study takes place, and a thorough explanation of findings and often how they relate to each other. As consumers of research, a thick description can help us to make our own judgments about the implications of these results and what other situations or populations these findings might apply to.


You may have noticed that a few of the tools in our table haven’t yet been discussed. This is because some of these rigor-related tools are meant to span the research process. To begin with, reflexivity is a tool that is best applied throughout qualitative research. I encourage students in my social work practice classes to find ways to build reflexivity into their professional lives as a way of improving their professional skills. This is no less true of qualitative research students. Throughout our research process, we need to consider how our use-of-self is shaping the decisions we are making and how the research may be transforming us during the process. What led you to choose your research question? Why did you group those ideas together? What caused you to label your theme that? What words do you use to talk about your study at a conference? The qualitative researcher has much influence throughout this process, and self-examination of that influence can be an important piece of rigor. As an example, one step that I sometimes build into qualitative projects is reflexively journaling before and after interviews. I’m often driving to these interviews, so I’ll turn my Bluetooth on in the car and capture my thoughts before and after, transcribing them later. This helps me to check in with myself during data collection and can help me illuminate insights I might otherwise miss. I have also found this to be helpful in my peer debriefing.

Peer debriefing can likewise be used throughout the research process. Meeting with a peer debriefer at regular points can be a good way to consistently reflect on your progress and the decisions you are making throughout a project. A peer debriefer can make connections that we may otherwise miss and question aspects of our project that may be important for us to explore. As I mentioned, combining reflexivity with peer debriefing can be a powerful tool for processing our self-reflection in connection with the progress of our project.

Finally, the use of an external audit really doesn’t come into play until the end of the research process, but an external auditor will look extensively at the whole research process. Again, this is a researcher who is unattached to the project and seeking to follow the path of the project in hopes of providing an external perspective on the trustworthiness of the research process and its findings. Often, these auditors will begin at the end, starting with the findings, and attempt to trace backwards to the beginning of the project. This is often quite a laborious task and some qualitative scholars debate whether the attention to objectivity in this strategy may be at odds with the aims of qualitative research in illuminating the uniquely subjective experiences of participants by inherently subjective researchers. However, it can be a powerful tool for demonstrating that a systematic approach was used.

As you are thinking about designing your qualitative research proposal, consider how you might use some of these tools to strengthen the quality of your proposed research.  Again, you might be using these throughout the entire research process, or applying them more specifically to one stage of the process (e.g. data collection, data analysis).  In addition, as you are reviewing qualitative studies to include in your literature review or just in developing your understanding of the topic, make sure to look out for some of these tools being used.  They are general indicators that we can use to assess the attention and care that was given to using a scientific approach to producing the knowledge that is being shared.

  • As qualitative researchers there are a number of tools at your disposal to help support quality and rigor. These tools can aid you in assessing the quality of others’ work and in supporting the quality of your own design.
  • Qualitative rigor is not a box we can tick off somewhere along our research project’s timeline. It is something that needs to be attended to thoughtfully throughout the research process; it is a commitment we make to our participants and to our potential audience.

List out 2-3 tools that seem like they would be a good fit for supporting the rigor of your qualitative proposal. Also, provide a justification as to why they seem relevant to the design of your research and what you are trying to accomplish.

  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.
  • Barusch, A., Gringeri, C., & George, M. (2011). Rigor in qualitative social work research: A review of strategies used in published articles. Social Work Research, 35(1), 11-19.

Rigor is the process through which we demonstrate, to the best of our ability, that our research is empirically sound and reflects a scientific approach to knowledge building.

The ability of a measurement tool to measure a phenomenon the same way, time after time. Note: Reliability does not imply validity.

The extent to which the scores from a measure represent the variable they are intended to.

Findings from a research study that apply to a larger group of people (beyond the sample). Producing generalizable findings requires starting with a representative sample.

in a literature review, a source that describes primary data collected and analyzed by the author, rather than only reviewing what other researchers have found

Data someone else has collected that you have permission to use in your research.

Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.

a single truth, observed without bias, that is universally applicable

one truth among many, bound within a social and cultural context

The idea that qualitative researchers attempt to limit or at the very least account for their own biases, motivations, interests and opinions during the research process.

The process of research is recorded and described in such a way that the steps the researcher took throughout the research process are clear.

A research journal that helps the researcher to reflect on and consider their thoughts and reactions to the research process and how it may be shaping the study

Notes that are taken by the researcher while we are in the field, gathering data.

An iterative approach means that after planning, and once we begin collecting data, we begin analyzing the data as it comes in. This early analysis of our (incomplete) data then impacts our planning, ongoing data gathering, and future analysis as the study progresses.

Memoing is the act of recording your thoughts, reactions, quandaries as you are reviewing the data you are gathering.

An audit trail is a system of documenting in qualitative research analysis that allows you to link your final results with your original raw data. Using an audit trail, an independent researcher should be able to start with your results and trace the research process backwards to the raw data. This helps to strengthen the trustworthiness of the research.

Context is the circumstances surrounding an artifact, event, or experience.

A code is a label that we place on a segment of data that seems to represent the main idea of that segment.

Part of the qualitative data analysis process where we begin to interpret and assign meaning to the data.

including more than one member of your research team to aid in analyzing the data

Including data from multiple sources to help enhance your understanding of a topic

Member checking involves taking your results back to participants to see if we “got it right” in our analysis. While our findings bring together many different people’s data into one set of findings, participants should still be able to recognize their input and feel like their ideas and experiences have been captured adequately.

A thick description is a very complete, detailed, and illustrative account of the subject that is being described.

How we understand and account for our influence, as researchers, on the research process.

As researchers, this means we spend extensive time with participants or in the community we are studying.

Including data that contrasts, contradicts, or challenges the majority of evidence that we have found or expect to find

sampling approaches for which a person’s likelihood of being selected for membership in the sample is unknown

Ethnography is a qualitative research design that is used when we are attempting to learn about a culture by observing people in their natural environment.

Graduate research methods in social work Copyright © 2020 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

The twelve lessons for SOWK 621.01: Research I: Basic Research Methodology as previously taught by Dr. Matthew DeCarlo at Radford University. Dr. DeCarlo and his team developed a complete package of materials that includes a textbook, ancillary materials, and a student workbook as part of a VIVA Open Course Grant.
20.1 Introduction to qualitative rigor

We hear a lot about fake news these days. Fake news has to do with the quality of journalism that we are consuming. It raises questions like: does it contain misinformation? Is it skewed or biased in its portrayal of stories? Does it leave out certain facts while inflating others? If we take this news at face value, our opinions and actions may be intentionally manipulated by poor quality information. So, how do we avoid or challenge this? The oversimplified answer is: we find ways to check for quality. While this isn’t a chapter dedicated to fake news, it does offer an important comparison for the focus of this chapter, rigor in qualitative research. Rigor is concerned with the quality of the research that we are designing and consuming. While I devote a considerable amount of time in my clinical class to the importance of adopting a non-judgmental stance in practice, that is not the case here; I want you to be judgmental, critical thinkers about research! As a social worker who will hopefully be producing research (we need you!) and definitely consuming research, you need to be able to differentiate good science from rubbish science. Rigor will help you to do this.


This chapter will introduce you to the concept of rigor and, specifically, what it looks like in qualitative research. We will begin by considering how rigor relates to issues of ethics and how thoughtfully involving community partners in our research can add additional dimensions in planning for rigor. Next, we will look at rigor in how we capture and manage qualitative data, essentially helping to ensure that we have quality raw data to work with for our study. Then we will devote time to discussing how researchers, as human instruments, need to maintain accountability throughout the research process. Finally, we will examine tools that encourage this accountability and how they can be integrated into your research design. Our hope is that by the end of this chapter, you will be able to identify some of the hallmarks of quality in qualitative research and, if you are designing a qualitative research proposal, consider how to build these into your design.


Learning objectives.

Learners will be able to…

  • Identify the role of rigor in qualitative research and important concepts related to qualitative rigor
  • Discuss why rigor is an important consideration when conducting, critiquing and consuming qualitative research
  • Differentiate between quality in quantitative and qualitative research studies

In Chapter 11 we talked about quality in quantitative studies, but we built our discussion around concepts like reliability and validity. With qualitative studies, we generally think about quality in terms of the concept of rigor. The difference between quality in quantitative research and qualitative research extends beyond the type of data (numbers vs. words/sounds/images). If you sneak a peek all the way back to Chapter 7, we discussed the idea of different paradigms, or fundamental frameworks for how we can think about the world. These frameworks value different kinds of knowledge, arrive at knowledge in different ways, and evaluate the quality of knowledge with different criteria. These differences are essential in differentiating qualitative and quantitative work.

Quantitative research generally falls under a positivist paradigm, seeking to uncover knowledge that holds true across larger groups of people. To accomplish this, we need tools like reliability and validity to help produce internally consistent and externally generalizable findings (i.e., was our study design dependable, and do our findings hold true across our population?).

In contrast, qualitative research is generally considered to fall into an alternative paradigm (other than positivist), such as the interpretive paradigm which is focused on the subjective experiences of individuals and their unique perspectives. To accomplish this, we are often asking participants to expand on their ideas and interpretations. A positivist tradition requires the information collected to be very focused and discretely defined (i.e. closed questions with prescribed categories). With qualitative studies, we need to look across unique experiences reflected in the data and determine how these experiences develop a richer understanding of the phenomenon we are studying, often across numerous perspectives.

Rigor is a concept that reflects the quality of the process used in capturing, managing, and analyzing our data as we develop this rich understanding. Rigor helps to establish standards through which qualitative research is critiqued and judged, both by the scientific community and by the practitioner community.


For the scientific community, people who review qualitative research studies submitted for publication in scientific journals or for presentations at conferences will specifically look for indications of rigor, such as the tools we will discuss in this chapter. This confirms for them that the researcher(s) put safeguards in place to ensure that the research took place systematically and that consumers can be relatively confident that the findings are not fabricated and can be directly connected back to the primary data that were gathered or the secondary data that were analyzed.

As a note here, as we critique the research of others or develop our own studies, we also need to recognize the limitations of rigor. No research design is flawless, and every researcher faces limitations and constraints. We aren’t looking for a researcher to adopt every tool we discuss below in their design. In fact, one of my mentors speaks explicitly about “misplaced rigor”: using techniques to support rigor that don’t really fit what you are trying to accomplish with your research design. Suffice it to say that we can go overboard in the area of rigor, and it might not serve our study’s best interest. As a consumer or evaluator of research, you want to look for steps taken to reflect quality and transparency throughout the research process, but they should fit within the overall framework of the study and what it is trying to accomplish.

From the perspective of a practitioner, we also need to be acutely concerned with the quality of research. Social work has made a commitment, outlined in our Code of Ethics (NASW, 2017), to competent practice in service to our clients based on “empirically based knowledge” (subsection 4.01). When I think about my own care providers, I want them to be using “good” research—research that we can be confident was conducted in a credible way and whose findings are honestly and clearly represented. Don’t our clients deserve the same from us?


As providers, we will be looking to qualitative research studies to provide us with information that helps us better understand our clients, their experiences, and the problems they encounter. As such, we need to look for research that accurately represents:

  • Who is participating in the study
  • Under what circumstances the study is being conducted
  • What the research is attempting to determine

Further, we want to ensure that:

  • Findings are presented accurately and reflect what was shared by participants (raw data)
  • A reasonably good explanation of how the researcher got from the raw data to their findings is presented
  • The researcher adequately considered and accounted for their potential influence on the research process

As we talk about different tools we can use to help establish qualitative rigor, I will try to point out tips for what to look for as you are reading qualitative studies. While rigor can’t “prove” quality, it can demonstrate steps taken that reflect thoughtfulness and attention on the part of the researcher(s). The American Psychological Association offers guidance on reviewing qualitative research manuscripts. It’s a bit beyond the level of critiquing that I would expect from a beginning qualitative research student; however, it does provide a really nice overview of this process. Even if you aren’t familiar with all the terms, it can be helpful in giving an overview of the general thought process that should be taking place.

To begin breaking down how to think about rigor, I find it helpful to have a framework to help understand different concepts that support or are associated with rigor. Lincoln and Guba (1985) have suggested such a framework for thinking about qualitative rigor that has widely contributed to standards that are often employed for qualitative projects. The overarching concept around which this framework is centered is trustworthiness. Trustworthiness is reflective of how much stock we should put in a given qualitative study—is it really worth our time, headspace, and intellectual curiosity? A study that isn’t trustworthy suggests poor quality resulting from inadequate forethought, planning, and attention to detail in how the study was carried out. This suggests that we should have little confidence in the findings of a study that is not trustworthy.

According to Lincoln and Guba (1985),[1] trustworthiness is grounded in responding to four key ideas and related questions that help you conceptualize how they relate to your study. Each of these concepts is discussed below with some considerations to help you compare and contrast these ideas with more positivist or quantitative constructs of research quality.

Truth value

You have already been introduced to the concept of internal validity. As a reminder, establishing internal validity is a way to ensure that the change we observe in the dependent variable is the result of variation in our independent variable—did we actually design a study that truly tests our hypothesis? In most qualitative studies we don’t have hypotheses or independent and dependent variables, but we still want to design a study where our audience (and we ourselves) can be relatively sure that we arrived at our findings through a systematic and scientific process, and that those findings can be clearly linked back to the data we used, not to some fabrication or falsification of that data—in other words, the truth value of the research process and its findings. We want to give our readers confidence that we didn’t just make up our findings or “see what we wanted to see.”


Applicability

Applicability (transferability) concerns whether findings may apply to other situations or with other people outside of the study itself. Judging this requires a clear account of:

  • who we were studying
  • how we went about studying them
  • what we found


Consistency

Consistency is the idea that we use a systematic (and potentially repeatable) process when conducting our research.

Neutrality

Neutrality is the idea that qualitative researchers attempt to limit, or at the very least account for, their own biases, motivations, interests, and opinions during the research process.

These concepts reflect a set of standards that help to determine the integrity of qualitative studies. At the end of this chapter you will be introduced to a range of tools to help support or reflect these various standards in qualitative research. Because different qualitative designs (e.g., phenomenology, narrative, ethnography), which you will learn more about in Chapter 22, emphasize or prioritize different aspects of quality, certain tools will be more appropriate for certain designs. Since this chapter is intended to give you a general overview of rigor in qualitative studies, exploring additional resources will be necessary to best understand which of these concepts are prioritized in each type of design and which tools best support them.

Key Takeaways

  • Qualitative research is generally conducted within an interpretivist paradigm. This differs from the post-positivist paradigm in which most quantitative research originates. This fundamental difference means that the overarching aims of these approaches to knowledge building differ, and consequently, so do our standards for judging the quality of research within these paradigms.
  • Assessing the quality of qualitative research is important, both from a researcher and a practitioner perspective. On behalf of our clients and our profession, we are called to be critical consumers of research. To accomplish this, we need strategies for assessing the scientific rigor with which research is conducted.
  • Trustworthiness and associated concepts, including credibility, transferability, dependability, and confirmability, provide a framework for assessing rigor or quality in qualitative research.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Rigor is the process through which we demonstrate, to the best of our ability, that our research is empirically sound and reflects a scientific approach to knowledge building.

The degree to which an instrument reflects the true score rather than error. In statistical terms, reliability is the portion of observed variability in the sample that is accounted for by the true variability, not by error. Note: Reliability is necessary, but not sufficient, for measurement validity.

The extent to which the scores from a measure represent the variable they are intended to.

a paradigm guided by the principles of objectivity, knowability, and deductive logic

Findings from a research study that apply to a larger group of people (beyond the sample). Producing generalizable findings requires starting with a representative sample.

a paradigm based on the idea that social context and interaction frame our realities

in a literature review, a source that describes primary data collected and analyzed by the author, rather than only reviewing what other researchers have found

Data someone else has collected that you have permission to use in your research.

unprocessed data that researchers can analyze using quantitative and qualitative methods (e.g., responses to a survey or interview transcripts)

Trustworthiness is a quality reflected by qualitative research that is conducted in a credible way; a way that should produce confidence in its findings.

Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.

The level of confidence that research is obtained through a systematic and scientific process and that findings can be clearly connected to the data they are based on (and not some fabrication or falsification of that data).

The ability to apply research findings beyond the study sample to some broader population.

This is a synonymous term for generalizability - the ability to apply the findings of a study beyond the sample to a broader population.

The potential for qualitative research findings to be applicable to other situations or with other people outside of the research study itself.

Consistency is the idea that we use a systematic (and potentially repeatable) process when conducting our research.

a single truth, observed without bias, that is universally applicable

one truth among many, bound within a social and cultural context

The idea that qualitative researchers attempt to limit or at the very least account for their own biases, motivations, interests and opinions during the research process.

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.



The Ultimate Guide to Qualitative Research - Part 3: Presenting Qualitative Data



Transparency and rigor in research

Qualitative researchers face particular challenges in convincing their target audience of the value and credibility of their analysis. Numbers and quantifiable concepts in quantitative studies are relatively easier to understand than their counterparts in qualitative methods. Think about how easy it is to draw conclusions about the value of items at a store based on their prices, then imagine trying to compare those items based on their design, function, and effectiveness.


The goal of qualitative data analysis is to allow a qualitative researcher and their audience to make determinations about the value and impact of the research. Still, before the audience can reach these determinations, the process of conducting research that produces the qualitative analysis must first be perceived as credible. It is the responsibility of the researcher to persuade their audience that their data collection process and subsequent analysis are rigorous.

Qualitative rigor refers to the meticulousness, consistency, and transparency of the research. It is the application of systematic, disciplined, and stringent methods to ensure the credibility, dependability, confirmability, and transferability of research findings. In qualitative inquiry, these attributes ensure the research accurately reflects the phenomenon it is intended to represent, that its findings can be used by others, and that its processes and results are open to scrutiny and validation.

Credibility

Credibility refers to the extent to which the results accurately represent the participants' experiences. To achieve credibility, qualitative researchers, especially those working with human participants, employ a number of strategies to bolster the credibility of the data and the subsequent analysis. Prolonged engagement and persistent observation, for example, involve spending significant time in the field to gain a deep understanding of the research context and to continuously observe the phenomenon under study. Peer debriefing involves discussing the research and findings with knowledgeable peers to assess their validity. Member checking involves sharing the findings with the research participants to confirm that they accurately reflect their experiences. These and other methods ensure an abundantly rich data set from which the researcher describes in vivid detail the phenomenon under study, and which other scholars can audit to challenge the strength of the findings if necessary.

Dependability

Dependability refers to the consistency of the research process such that it is logical and clearly documented. It addresses the potential for others to build on the research through subsequent studies. To achieve dependability, researchers should provide a 'decision trail' detailing all decisions made during the course of the study. This allows others to understand how conclusions were reached and to replicate the study if necessary. Ultimately, the documentation of a researcher's process while collecting and analyzing data provides a clear record not only for other scholars to consider but also for those conducting the study and refining their methods for future research.
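The idea of a 'decision trail' can be made concrete with a small sketch. The Python example below (the record fields, dates, and helper names are our own illustrative choices, not a standard format) logs each methodological decision with its stage and rationale so the trail can be reported chronologically for an audit:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Decision:
    """One entry in a study's decision trail (fields are illustrative)."""
    made_on: date
    stage: str      # e.g. "sampling", "coding", "analysis"
    decision: str   # what was decided
    rationale: str  # why, so others can audit the reasoning

@dataclass
class DecisionTrail:
    entries: list = field(default_factory=list)

    def record(self, stage, decision, rationale, made_on):
        self.entries.append(Decision(made_on, stage, decision, rationale))

    def report(self):
        """Return the trail in chronological order, one line per decision."""
        ordered = sorted(self.entries, key=lambda e: e.made_on)
        return "\n".join(
            f"{e.made_on} [{e.stage}] {e.decision} -- {e.rationale}" for e in ordered
        )

trail = DecisionTrail()
trail.record("coding", "Merged codes 'stigma' and 'shame'",
             "Participants used the terms interchangeably", date(2023, 4, 12))
trail.record("sampling", "Added a second recruitment site",
             "First site under-represented rural participants", date(2023, 3, 1))
print(trail.report())
```

Whether kept in software, a spreadsheet, or a notebook, the point is the same: every consequential decision carries a date, a stage, and a stated rationale.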

Confirmability

Confirmability requires the research findings to be directly linked to the data. While it is important to acknowledge researcher positionality (e.g., through reflexive memos) in social science research, researchers still have a responsibility to make assertions and identify insights rooted in the data for the resulting knowledge to be considered confirmable. By transparently communicating how the data was analyzed and conclusions were reached, researchers can allow their audience to perform a sort of audit of the study. This practice helps remind researchers about the importance of ensuring that there are sufficient connections to the raw data collected from the field and the findings that are presented as consequential developments of theory.

Transferability

Transferability refers to the applicability of the research findings in other contexts or with other participants. While dependability is more relevant to the application of research within its own situated context, transferability is determined by how findings generated in one set of circumstances (e.g., a geographic location or a culture) apply to another set of circumstances. This is a significant challenge: given the unique focus on context in qualitative research, researchers can't usually claim that their findings are universally applicable. Instead, they provide a rich, detailed description of the context and participants, enabling others to determine if the findings may apply to their own context. Such detail, in turn, depends on transparency in research, which is discussed later in this section.

Reflexivity

The concept of reflexivity also contributes to rigor in qualitative research. Reflexivity involves the researcher critically reflecting on the research and their own role in it, including how their biases, values, experiences, and presence may influence the research. Any discussion of reflexivity necessitates a recognition that knowledge about the social world is never objective, but always from a particular perspective. Reflexivity begins with an acknowledgment that those who conduct qualitative research do so while perceiving the social world through an analytical lens that is unique and distinct from that of others. As subjectivity is an inevitable circumstance in any research involving humans as sources or instruments of data collection, the researcher is responsible for providing a thick description of the environment in which they are collecting data as well as a detailed description of their own place in the research. Subjectivity can be considered an asset, whereby researchers acknowledge and indicate how their positionality informed the analysis in ways that were insightful and productive.

Triangulation

Triangulation is another key aspect of rigor, referring to the use of multiple data sources, researchers, or methods to cross-check and validate findings. This can increase the depth and breadth of the research, improve its quality, and decrease the likelihood of researcher bias influencing the findings. Particularly given the complexity and dynamic nature of the social world, one method or one analytical approach will seldom be sufficient in holistically understanding the phenomenon or concept under study. Instead, a researcher benefits from examining the world through multiple methods and multiple analytical approaches, not to garner perfectly consistent results but to gather as much rich detail as possible to strengthen the analysis and subsequent findings.
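As a rough illustration of triangulating across data sources, the sketch below checks which themes are corroborated by more than one kind of data. The theme names, source names, and the two-source threshold are all hypothetical; real triangulation is a matter of analytic judgment, not just counting:

```python
from collections import defaultdict

# Hypothetical evidence log: (data source, theme) pairs noted during analysis.
evidence = [
    ("interviews",   "distrust of providers"),
    ("focus_groups", "distrust of providers"),
    ("field_notes",  "distrust of providers"),
    ("interviews",   "transport barriers"),
    ("field_notes",  "transport barriers"),
    ("focus_groups", "cost concerns"),
]

def triangulate(evidence, min_sources=2):
    """Group themes by the distinct sources supporting them; keep only those
    corroborated by at least `min_sources` independent sources."""
    sources_by_theme = defaultdict(set)
    for source, theme in evidence:
        sources_by_theme[theme].add(source)
    return {theme: sorted(srcs) for theme, srcs in sources_by_theme.items()
            if len(srcs) >= min_sources}

corroborated = triangulate(evidence)
for theme, sources in sorted(corroborated.items()):
    print(f"{theme}: seen in {len(sources)} sources ({', '.join(sources)})")
```

A theme supported by only one source (here, "cost concerns") is not discarded in practice; it is simply flagged as needing further evidence or a closer look.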

In qualitative research, rigor is not about seeking a single truth or reality, but rather about being thorough, transparent, and critical in the research to ensure the integrity and value of the study. Rigor can be seen as a commitment to best practices in research, with researchers consistently questioning their methods and findings, checking for alternative interpretations, and remaining open to critique and revision. This commitment to rigor helps ensure that qualitative research provides valid, reliable, and meaningful contributions to our understanding of the complex social world.



When you read a story in a newspaper or watch a news report on television, do you ever get the feeling that you may not be receiving all the information or context necessary to understand the overarching messages being conveyed? Perhaps a salesperson is trying to convince you to buy something from them by explaining all the benefits of a product but doesn't tell you how they know these benefits are real. When you're choosing a movie to watch, you might look at a critic's review or a score in an online movie database without knowing how that review or score was actually determined.


In all of these situations, it is easier to trust the information presented to you if there is a rigorous analysis process behind that information and if that process is explicitly detailed. The same is true for qualitative research results, making transparency a key element in qualitative research methodologies. Transparency is a fundamental aspect of rigor in qualitative research. It involves the clear, detailed, and explicit documentation of all stages of the research process. This allows other researchers to understand, evaluate, transfer, and build upon the study. The key aspects of transparency in qualitative research include methodological transparency, analytical transparency, and reflexive transparency.

Methodological transparency involves providing a comprehensive description of the research methods and procedures used in the study. This includes detailing the research design, sampling strategy, data collection methods, and ethical considerations. For example, researchers should thoroughly describe how participants were selected, how and where data were collected (e.g., interviews, focus groups, observations), and how ethical issues such as consent, confidentiality, and potential harm were addressed. They should also clearly articulate the theoretical and conceptual frameworks that guided the study. Methodological transparency allows other researchers to understand how the study was conducted and assess its trustworthiness.


Analytical transparency refers to the clear and detailed documentation of the data analysis process. This involves explaining how the raw data were transformed into findings, including the coding process, theme/category development, and interpretation of results. Researchers should describe the specific analytical strategies they used, such as thematic analysis, grounded theory, or discourse analysis. They should provide evidence to support their findings, such as direct quotes from participants. They may also describe any software they used to assist with analyzing data. Analytical transparency allows other researchers to understand how the findings were derived and assess their credibility and confirmability.
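One way to support analytical transparency is to keep every coded segment linked to its verbatim quote, so each finding can be traced back to raw data. Here is a minimal sketch of such a code-to-quote index; the codes, participant IDs, and quotes are invented for illustration:

```python
# Each coded segment keeps the participant ID and the verbatim excerpt,
# so any theme in the write-up can be audited against the raw data.
coded_segments = [
    {"code": "isolation", "participant": "P01",
     "quote": "After the diagnosis I stopped calling friends."},
    {"code": "isolation", "participant": "P04",
     "quote": "Weeks would go by without seeing anyone."},
    {"code": "coping", "participant": "P01",
     "quote": "Gardening was the one thing that kept me steady."},
]

def evidence_for(code, segments):
    """Return the verbatim quotes (with participant IDs) supporting a code."""
    return [(s["participant"], s["quote"]) for s in segments if s["code"] == code]

for pid, quote in evidence_for("isolation", coded_segments):
    print(f'{pid}: "{quote}"')
```

Qualitative analysis software maintains exactly this kind of linkage automatically; the sketch only shows the underlying idea of keeping codes tied to their evidence.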

Reflexive transparency involves the researcher reflecting on and disclosing their own role in the research, including their potential biases, assumptions, and influences. This includes recognizing and discussing how the researcher's background, beliefs, and interactions with participants may have shaped the data collection and analysis. Reflexive transparency may be achieved through the use of a reflexivity journal, where the researcher regularly records their thoughts, feelings, and reactions during research. This aspect of transparency ensures that the researcher is open about their subjectivity and allows others to assess the potential impact of the researcher's positionality on the findings.


Transparency in qualitative research is essential for maintaining rigor, trustworthiness, and ethical integrity. By being transparent, researchers allow their work to be scrutinized, critiqued, and improved upon, contributing to the ongoing development and refinement of knowledge in their field.

Rigorous, trustworthy research is research that applies the appropriate research tools to meet the stated objectives of the investigation. For example, to determine if an exploratory investigation was rigorous, the investigator would need to answer a series of methodological questions: Do the data collection tools produce appropriate information for the level of precision required in the analysis? Do the tools maximize the chance of identifying the full range of what there is to know about the phenomenon? To what degree are the collection techniques likely to generate the appropriate level of detail needed for addressing the research question(s)? To what degree do the tools maximize the chance of producing data with discernible patterns?


Once the data are collected, to what degree are the analytic techniques likely to ensure the discovery of the full range of relevant and salient themes and topics? To what degree do the analytic strategies maximize the potential for finding relationships among themes and topics? What checks are in place to ensure that the discovery of patterns and models are relevant to the research question? Finally, what standards of evidence are required to ensure readers that results are supported by the data?

The clear challenge is to identify what questions are most important for establishing research rigor (trustworthiness) and to provide examples of how such questions could be answered for those using qualitative data. Clearly, rigorous research must be both transparent and explicit; in other words, researchers need to be able to describe to their colleagues and their audiences what they did (or plan to do) in clear, simple language. Much of the confusion that surrounds qualitative data collection and analysis techniques comes from practitioners who shroud their behaviors in mystery and jargon. For example, clearly describing how themes are identified, how codebooks are built and applied, and how models were induced helps to bring more rigor to qualitative research.
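To illustrate "building and applying a codebook" in the simplest possible terms, here is a hypothetical sketch in which each code carries a definition and keyword-based inclusion criteria. Real codebooks state much richer criteria and rely on human judgment; the keyword match below is only a naive first pass a coder would then review:

```python
# A codebook makes coding decisions explicit: each code has a definition and
# inclusion criteria (keywords here stand in for richer written criteria).
# All entries are illustrative, not from any real study.
codebook = {
    "access_barrier": {
        "definition": "Obstacles to reaching or affording care",
        "keywords": ["appointment", "afford", "insurance", "transport"],
    },
    "social_support": {
        "definition": "Help or encouragement from family, friends, community",
        "keywords": ["family", "friend", "neighbor", "church"],
    },
}

def apply_codebook(text, codebook):
    """Naive keyword-based first pass; a human coder reviews every hit."""
    text_lower = text.lower()
    return sorted(code for code, entry in codebook.items()
                  if any(kw in text_lower for kw in entry["keywords"]))

segment = "My neighbor drives me because I can't afford the bus to appointments."
print(apply_codebook(segment, codebook))  # ['access_barrier', 'social_support']
```

Writing the criteria down, whatever their form, is what lets a reader see how a segment of text came to carry a particular code.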


Researchers also must become more familiar with the broad range of methodological techniques available, such as content analysis, grounded theory, and discourse analysis. Cross-fertilization across methodological traditions can also be extremely valuable to generate meaningful understanding rather than attacking all problems with the same type of methodological tool.

Methodologically neutral and highly flexible qualitative analysis software like ATLAS.ti can be extremely helpful here, both in supporting interdisciplinary cross-pollination and in building trust in the presented results. By allowing the researcher to combine both the source material and their findings in a structured, interactive platform while producing both quantifiable reports and intuitive visual renderings of their results, ATLAS.ti adds new levels of trustworthiness to qualitative research. Moreover, it permits the researcher to apply multiple approaches to their research and to collaborate across philosophical boundaries, significantly enhancing the level of rigor in qualitative research. Dedicated research software like ATLAS.ti helps the researcher to catalog, explore, and competently analyze the data generated in a given research project.

Ultimately, transparency and rigor are indispensable elements of any robust research study. Achieving transparency requires a systematic, deliberate, and thoughtful approach. It revolves around clarity in the formulation of research objectives, comprehensiveness in methods, and conscientious reporting of the results. Here are several key strategies for achieving transparency and rigor in research:

Clear research objectives and methods

Transparency begins with the clear and explicit statement of research objectives and questions. Researchers should explain why they are conducting the study, what they hope to learn, and how they plan to achieve their objectives. This involves identifying and articulating the study's theoretical or conceptual framework and delineating the key research questions. Ensuring clarity at this stage sets the groundwork for transparency throughout the rest of the study.

Transparent research includes a comprehensive and detailed account of the research design and methodology. Researchers should describe all stages of their research process, including the selection and recruitment of participants, the data collection methods, the setting of the research, and the timeline. Each step should be explained in enough detail that another researcher could replicate the study. Furthermore, any modifications to the research design or methodology over the course of the study should be clearly documented and justified.

Thorough data documentation and analysis

In the data collection phase, researchers should provide thorough documentation, including original data records such as transcripts, field notes, or images. The specifics of how data was gathered, who was involved, and when and where it took place should be meticulously recorded.
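A minimal provenance record for each piece of raw data might look like the following sketch; the field names, IDs, and values are illustrative only, and any structured log (spreadsheet, database, software project) serves the same purpose:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class CollectionRecord:
    """Provenance metadata for one piece of raw data (fields are illustrative)."""
    record_id: str
    kind: str          # "interview", "field_notes", "image", ...
    collected_on: date
    collected_by: str
    setting: str
    source_file: str   # transcript, notes, or image file

log = [
    CollectionRecord("INT-007", "interview", date(2023, 5, 9),
                     "J. Rivera", "community clinic, room 2",
                     "int007_transcript.txt"),
    CollectionRecord("FN-003", "field_notes", date(2023, 5, 9),
                     "J. Rivera", "clinic waiting area",
                     "fieldnotes_2023-05-09.txt"),
]

# A simple tabular view that another researcher could audit:
for rec in log:
    row = asdict(rec)
    print(f"{row['record_id']:8} {row['kind']:12} {row['collected_on']} {row['setting']}")
```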


During the data analysis phase, researchers should clearly describe the steps taken to analyze the data, including coding processes, theme identification, and how conclusions were drawn. Researchers should provide evidence to support their findings and interpretations, such as verbatim quotes or detailed examples from the data. They should also describe any analytic software or tools used, including how they were used and why they were chosen.

Reflexivity and acknowledgment of bias

Transparent research involves a process of reflexivity, where researchers critically reflect on their own role in the research process. This includes considering how their own beliefs, values, experiences, and relationships with participants may have influenced the data collection and analysis. Researchers should maintain reflexivity journals to document these reflections, which can then be incorporated into the final research report. Researchers should also explicitly acknowledge potential biases and conflicts of interest that could influence the research. This includes personal, financial, or institutional interests that could affect the conduct or reporting of the research.

Transparent reporting and publishing

Transparency also involves the open sharing of research materials and data, where ethical and legal guidelines permit. This may include providing access to interview guides, survey instruments, data analysis scripts, raw data, and other research materials. Open sharing allows others to scrutinize, transfer, or extend the research, thereby enhancing its transparency and trustworthiness.
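Where raw data such as transcripts are shared, identifying details must first be removed. A minimal sketch of pseudonymizing a transcript excerpt before release follows; the names, pseudonym mapping, and excerpt are hypothetical, and a real study would agree such a mapping during ethics review and store it separately from the shared data.

```python
import re

# Hypothetical transcript excerpt and name-to-pseudonym mapping.
transcript = ("Interviewer: How did Maria support you? "
              "P01: Maria and Dr. Okafor checked in daily.")
pseudonyms = {"Maria": "Nurse A", "Okafor": "Physician B"}

def pseudonymize(text: str, mapping: dict) -> str:
    # Replace whole-word occurrences only, so e.g. "Marianne" is untouched.
    for name, alias in mapping.items():
        text = re.sub(rf"\b{re.escape(name)}\b", alias, text)
    return text

print(pseudonymize(transcript, pseudonyms))
```

Automated replacement of this kind is only a first pass; shared transcripts should still be read through manually for indirect identifiers such as workplaces or unusual events.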


Finally, the reporting and publishing phase should adhere to the principles of transparency. Researchers should follow the relevant reporting guidelines for their field. Such guidelines provide a framework for reporting research in a comprehensive, systematic, and transparent manner.

Furthermore, researchers should choose to publish in open-access journals or other accessible formats whenever possible, to ensure the research is publicly accessible. They should also be open to critique and engage in post-publication discussion and debate about their findings.

By adhering to these strategies, researchers can ensure the transparency of their research, enhancing its credibility, trustworthiness, and contribution to their field. Transparency is more than just a good research practice—it's a fundamental ethical obligation to the research community, participants, and wider society.






A Review of the Quality Indicators of Rigor in Qualitative Research

  • Jessica L. Johnson, PharmD (corresponding author), William Carey University School of Pharmacy, Biloxi, Mississippi
  • Donna Adkins, PharmD, William Carey University School of Pharmacy, Biloxi, Mississippi
  • Sheila Chauvin, PhD, Louisiana State University School of Medicine, New Orleans, Louisiana
  • Keywords: qualitative research design; standards of rigor; best practices

INTRODUCTION


BEST PRACTICES: STEP-WISE APPROACH

Step 1: Identifying a Research Topic


Step 2: Qualitative Study Design


Step 3: Data Analysis

Step 4: Drawing Valid Conclusions


Step 5: Reporting Research Results



DOI: https://doi.org/10.5688/ajpe7120



Rigor in Qualitative Research: The Assessment of Trustworthiness


Despite a growing interest in qualitative research in occupational therapy, little attention has been placed on establishing its rigor. This article presents one model that can be used for the assessment of trustworthiness or merit of qualitative inquiry. Guba's (1981) model describes four general criteria for evaluation of research and then defines each from both a quantitative and a qualitative perspective. Several strategies for the achievement of rigor in qualitative research useful for both researchers and consumers of research are described.



Rigor in academic research

Apr 01, 2019

Presentation Transcript

Rigor in academic research Dan Remenyi PhD

We are in the age of systematic research • Miracles or unexplained revelations are simply not allowed in academic research • Archimedes, Galileo and Newton would not get a degree for at least some of their discoveries • Rigor is a concept which spans a spectrum or perhaps a continuum. Research needs to be “rigorous enough”.

A definition of rigor • A piece of research can be said to be rigorous if there is no doubt that it has been conducted in terms of all the rules associated with the research paradigm under which it was produced. • The degree to which research methods are scrupulously and meticulously applied • Is there a trade-off between Relevance and Rigor? Some researchers suggest that there is a Heisenberg-type relationship, but this does not have to be the case.

Another definition of rigor • Academic research has to end up as an argument that something of value has been added to the body of theoretical knowledge. Rigor is a question of the force of the argument i.e. how persuasive is this argument. The force of the argument is a function of an appropriate research question/s, useful data, and analysis thereof and a logical/rational and clear argument. • Some researchers argue that “rigor is the strength of inference made possible by the given research study”. Staw, B. M. (1985) Reports on the road to relevance and rigor: Some unexplored issues in publishing organizational research. In L. L. Cummings & P. J. Frost, eds. Publishing in the Organizational Sciences (pp 96-107). Homewood Illinois, Richard D. Irwin, Inc.

Definitions of Rationality ….1 • A principal aim of this chapter has been to build the foundation upon which a clear understanding of the concept of “rationality” could be erected. Clarity does not necessarily imply simplicity, however. Roughly speaking, rationality is concerned with the selection of preferred behaviour alternatives in terms of some system of values whereby the consequences of behaviour can be evaluated. Does this mean that the process of adaption must be conscious, or are unconscious processes included as well? It has been shown that many of the steps in mathematical invention – than which there can presumably be nothing more rational – are subconscious; and this is certainly true of the simpler processes of equation-solving.

Definitions of Rationality ….2 • Moreover, if consciousness is not stipulated as an element of rationality, are only deliberate processes of adaption admitted, or non-deliberate ones as well? The typist trains herself to strike a particular key in response to the stimulus of a particular letter. Once learned, the act is unconscious, but deliberate. On the other hand, any person instinctively withdraws a finger that has been burned. This is “rational” in the sense that it serves a useful purpose, but is certainly neither a conscious nor deliberate adaption.

Definitions of Rationality ….3 • Shall we moreover, call a behaviour “rational” when it is in error, but only because the information on which it is based is faulty? When a subjective test is applied, it is rational for an individual to take medicine for a disease if he believes the medicine will cure the disease. When an objective test is applied, the behaviour is rational only if the medicine is in fact efficacious. • Simon, H. A, (1997) Administrative Behaviours: A Study of Decision-Making Processes in Administrative Organizations. New York: The Free Press.

Clarity of argument http://www.online-utility.org/english/readability_test_and_improve.jsp

Getting Academic Papers Published Fog Index 25.2 (p63)

Getting Academic Papers Published
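The Fog Index cited above is the Gunning Fog readability score: 0.4 × (average sentence length + percentage of words with three or more syllables). A minimal sketch of computing it follows; the syllable counter is a crude vowel-group approximation, so scores will differ slightly from published calculators, and the sample text is hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    # Crude approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_len = len(words) / len(sentences)
    pct_complex = 100 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_len + pct_complex)

sample = ("Rigorous methodology demands meticulous documentation. "
          "Researchers should justify every methodological decision.")
print(round(gunning_fog(sample), 1))
```

A score above 17 is conventionally read as graduate-level prose, which is why a Fog Index of 25.2 signals an argument whose clarity is suffering.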

Two major approaches to rigor • Rigor may be seen as a function of process • This is the tick-the-box approach where we are checking that all the necessary tasks have been appropriately addressed • Rigor may be seen as a product of evaluation • To apply rigor as an evaluation you first have to perform a rigor checklist type approach • Then you need a standard against which to compare the research under review

Rigorous websites • http://www.wjh.harvard.edu/nsfqual/Ryan%20Paper.pdf --- What Are Standards Of Rigor For Qualitative Research? Gery W. Ryan, RAND Corporation • http://www.nova.edu/ssss/QR/QR8-4/golafshani.pdf --- Understanding Reliability and Validity in Qualitative Research, Nahid Golafshani, University of Toronto, Toronto, Ontario, Canada • http://www.nova.edu/ssss/QR/QR7-4/pare.html --- Enhancing the Rigor of Qualitative Research: Application of a Case Methodology to Build Theories of IT Implementation, Guy Paré • http://faculty.babson.edu/krollag/org_site/craft_articles/staw_rel_rigor.html --- Staw, B. M. (1985) Reports on the road to relevance and rigor: Some unexplored issues in publishing organizational research. In L. L. Cummings & P. J. Frost, eds. Publishing in the Organizational Sciences (pp 96-107). Homewood Illinois, Richard D. Irwin

Some more useful rigorous websites • http://www.youtube.com/watch?v=SW9N34LH7KQ a different use of the word rigour • http://www.youtube.com/watch?v=RZ4a8pQIwfE general remarks • http://www.youtube.com/watch?v=Nnp4qZX9H1U in mathematics

Some questions about the context of rigour • Has an adequate case been made for the conceptual framework of the research? • Is there a convincing case for the importance and relevance of the research question? • Does the data specified by the researcher resonate with the research question? • Has a well established methodology/method been chosen or has a methodology/method been created by the researcher?

Characteristics of rigor • Careful and precise use of definitions • Making any assumptions used visible • Avoiding any obfuscation • Step-by-step logical path • Exposition demonstrated by examples • Heightened sense of being critical • Has constant comparison been a characteristic of the study?

Rigor as a process----1 • What are the processes of academic research? • Producing a suitable research question • Demonstrating the position of the research question within the extant literature • Establishing the connection to theory or arguing for grounded theory • Identifying an appropriate research methodology and discussing its advantages over other possibilities • Collecting/gathering/developing data/evidence

Detectives are Researchers

Rigor as a process----2 • Managing the data and the software required (if any) • Analysing the data and producing the results • Postulating our findings and conclusions • Discussing the research through reflection, including using the audit trail and arguing for its relevance and rigor • Developing management and/or business guidelines

Producing a suitable research question • We are not concerned here whether the research question is interesting, topical or useful. • We are concerned to see if the research question is clear, unambiguous and answerable and that it is associated with some theory. • The same thinking will also apply to sub-questions.

The purpose of literature in research • A knowledge of the literature is of paramount importance to academic research for two key reasons. • The literature positions the research in terms of what is known in the field of study about the topic and the way of researching in that field. • The literature is used to call on authority to support the arguments made by the researcher. Be careful of the authority you invoke as some of the material published is wrong.

Demonstrating the position of the research question within the extant literature • All academic research is underpinned by the extant literature. • The key rigor issues here are:- • Peer reviewed journals need to dominate the reference list. • A significant number of the papers need to be drawn from highly rated journals (ABS) http://www.the-abs.org.uk/?id=257 • The majority of the papers should be contemporary • Do not rely too heavily on one author or one journal • Other sources such as books and less formal publications may be used sparingly

Identifying an appropriate research methodology and discussing its advantages over other possibilities • Research methodology needs to be clearly articulated and rooted in the researcher's philosophical viewpoint • The chosen research methodology needs to be justified on its own and • It has to be justified relative to other methodological options.

Collecting/gathering/developing data/evidence • For the purposes of your research, data needs to be defined • A philosophical stance on the nature of data collection is important • At least two philosophical understandings of data need to be considered:- • Data as a product of mining – this has nothing to do with data mining. • Data as a product of travelling

Managing the data and the software required (if any) • There are issues related to the storing and management of the data and keeping hard and soft copies • Data will normally be coded and transferred to a computer • Version control is imperative, as are backup copies • Software selection is important, as are the skills required to use the chosen software
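The version-control and backup points above can be made concrete with a small sketch using only the Python standard library; the file names are hypothetical, and many projects would simply use git or the versioning built into their analysis software instead.

```python
import hashlib
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def sha256(path: Path) -> str:
    # Checksum used to verify that a backup copy is byte-identical.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(data_file: Path, backup_dir: Path) -> Path:
    # Timestamped copies give a crude version history.
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{data_file.stem}-{stamp}{data_file.suffix}"
    shutil.copy2(data_file, dest)
    assert sha256(data_file) == sha256(dest), "backup corrupted"
    return dest

# Demonstration in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "interview-P01.txt"
    src.write_text("Transcript of interview with participant P01.")
    copy = backup(src, Path(tmp) / "backups")
    print(copy.name, sha256(copy) == sha256(src))
```

Verifying the checksum after each copy is the key habit: a backup that was never checked can fail silently, which is exactly the kind of gap an audit trail is meant to expose.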

Analysing the data and producing the results • Rigor relies on a demonstration of being able to use appropriate data analysis to explore the research question/s • What was the nature of the data used? • How much data was used? • How were data suppliers chosen? • From whom was it collected and how many? • What was the stance of the data suppliers? • Have the assumptions underpinning the analysis been complied with? • Are the results significant and in what way?

Postulating our findings and conclusions • The findings of the data analysis need to be clearly articulated • The findings need to be interpreted • The researcher needs to point out what the findings suggest and what they do not suggest • This requires skill at rhetoric

Developing management and/or business guidelines • Research findings are not the same as the results of research • One of the problems which sometimes occur in the presenting of business guidelines is the apparent attractiveness of motherhoods. Motherhoods have to be avoided. They are superficial generalities which have no place in academic research.

Experimental and Quasi-experimental • The so-called gold standard of rigor in experimental research is to use a random sample and to have a control group. • A quasi-experiment does not fully comply with the rigor required for an experiment. The difference is that in a quasi-experiment the participants/informants will not have been randomly chosen.
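The random-assignment requirement that separates a true experiment from a quasi-experiment can be sketched as follows; the participant IDs and even split into two groups are hypothetical.

```python
import random

def randomize(participants, seed=None):
    # Shuffle a copy, then split in half: first half treatment,
    # second half control. Seeding makes the allocation reproducible,
    # which supports the audit trail.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

participants = [f"P{i:02d}" for i in range(1, 21)]
treatment, control = randomize(participants, seed=42)
print(len(treatment), len(control))  # 20 participants split into two groups of 10
```

In a quasi-experiment this shuffle step is exactly what is missing: group membership is determined by circumstance rather than by chance.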

Rigor as a product of evaluation • How do we do this? • Any evaluation means comparing what is being evaluated to a standard---- Here we are looking for a gap between what we have and an ideal of what we want i.e. standard----We need the “application of precise and exacting standards” • We need to avoid shallow analysis • What issues in academic research need to be evaluated?

Issues in academic research to be evaluated • Each chapter has a list of issues to be examined • Each issue needs to be checked for completeness and for depth and where possible the conclusions need to be supported by authority

Analytical rigor • The need to avoid shallow analysis • How do we tell what is rigorous? • Rigor is following the rule -- too simple to be useful • How do you assess the quality of an analysis? Rigor starts with getting the research question right. The research question leads to the hypotheses or propositions.

Some issues • The breadth and depth of exploration of alternative analysis. • How much critiquing has been done. • 8 different kinds of characteristics of a meta-judgement of the rigor of an analysis • The facts are nothing more than ………reading the news • Reconsidered from diverse points of view • Build a chain of reasoning • How has the reasoning and logic been checked? • Quickly generating a hypothesis is an indication of low rigor • Avoid closing the issue too soon. Premature closure • Are you going to have the best hypothesis on the topic? • Find on-topic material ---broadening check

A multi-attribute measure of sufficiency • Rigor is a product of the culture of the school • A revised definition of rigor, reframing it as an emergent multi-attribute measure of sufficiency rather than as a measure of process deviation.

What is meant by shallow analysis? • Shallow analysis is not adequate to deliver a fuller meaning of the work which has been performed. • Shallow analysis is incomplete. • Shallow analysis may miss the point of the whole research or findings.

What is meant by shallow analysis?..1 • A poor set of initial papers. The most appropriate papers were not read and considered. • An inadequate critique of the literature? • There is an unsatisfactory research question/s and thus hypothesis • The importance of the research was not adequately argued • The methodology was not adequately argued • There are clear rules about this issue • More subjective • The test here is to seek support from supervisor, colleagues and outside influencers • The force of the argument is the main issue here • There are clear rules about this issue

Hypothesis -- the world is getting warmer – Global warming?

Support • Make a list of all those who support your research besides yourself and your supervisor. • Have you encountered any paradoxes or contradictions in your research so far? • Have you any lurking doubts about the way your research has gone so far? • Lurking doubts are a central issue in academic research. If you have no lurking doubts you do not understand the problems/challenges you are facing. Remember the aphorism: “if you are not confused and fearful you do not understand the issues you are facing”.

What is meant by shallow analysis?...2 • An unsatisfactory sample was used – no cherry picking • Have the analytical procedures been properly conducted? Any biases? • Have the results of the analysis been understood correctly? What alternative candidate explanations have been considered? • Is there a convincing discussion? Has there been any supporting information? • Have the results been convincingly converted to management guidelines or policy options? • There are clear rules about this issue • There are clear rules about this issue • This is a more creative issue • This is a function of rhetoric and support from supervisor, colleagues etc • This is a creative issue and there is no cookbook answer

What is meant by shallow analysis?....3 • Has an adequate suggestion been made concerning future research? • Has adequate attention been given to the principle of parsimony? • Is the written language at an acceptable standard? • Have all the university's requirements with regard to presentation been complied with? • This is a more creative issue • There are clear rules about this issue • There are clear rules about this issue • There are clear rules about this issue

Exploring challenges • The researcher needs to have thought about a list of potential criticisms which examiners/reviewers may use

Introduction • Is the topic of the research clear? • Is the objective of the research clear? • Are all the terms adequately defined? • Has a possible theoretical context been highlighted? • Has the importance of the research been argued? • Has the course of the research been described?

Literature • Have all the important authors in the field been included? • Is the literature being explored on a concept or timeline basis or a combination of both and why has the approach been chosen? • Is there an adequately critical approach to the literature? • What gaps are there and how does the research question relate to the gaps?

The methodology was not adequately discussed and argued • What type of investigations is being proposed and why? • What is the unit of analysis? • What time horizon is being suggested? • What data/evidence is required? • What type of instrument is needed? • How will the data be analysed? • How will bias be minimised? • Alternatives?

The research • What did the research protocol look like? • How were the ethical issues addressed? • What was actually done? • How were informants chosen? • How many? • Where? • How was the data/evidence collected? • What data management issues arose?

Findings and conclusions • Was the analysis conducted with appropriate knowledge and care? • What do the findings mean? • What are the limitations of the analysis? • How were the conclusions drawn from the research?

Audit trail and reflections • How did the research process develop? • What were the major milestones in the thinking? • How has the researcher developed over the period of the research? • Looking over the whole experience how does the researcher feel about what has been achieved?

Management guidelines and future research • So what? • What’s new? • Who cares? • How can the research finding be put to use? • What are the other interesting issues encountered during the research process?

Zen and the Archer • Many years ago in Japan, there was a warrior – one of those itinerant Samurai known as ronin. He had an ambition to find fame and fortune as the finest archer in the land. In pursuit of his dream, he travelled the length and breadth of the country looking for a master-bowman to help him improve his technique. Few of those he encountered had much to teach him but he continued searching and eventually, just as he was about to give up, he chanced upon some dilapidated farm buildings far from other habitation. All over the buildings, in the most unlikely and inaccessible places, were hundreds of hand-painted targets, each with an arrow at the exact centre of the bulls-eye.


Encouraging Students to Pursue Academic Rigor. Presented by: Scott Power New Hampshire Scholars Director Deb Connell NH Department of Education. What is NH Scholars?. Challenge Your Students.

361 views • 22 slides

The “Tween” Years — Increasing Academic Rigor

The “Tween” Years — Increasing Academic Rigor

STUDENT SUPPORT SERVICES. The “Tween” Years — Increasing Academic Rigor. Administrators’ Management Meeting for Exceptional Education and Student Services Personnel November 2004. Presenters. Helen Lancashire School Guidance Consultant,Bureau of Exceptional Education and Student Services

460 views • 28 slides

Academic Rigor: Where Are We Now

Academic Rigor: Where Are We Now

Academic Rigor: Where Are We Now. Looking at Student Reading Achievement and Increasing Rigor Using Grade 4 NAEP Item Maps and Percentile Graphs. Prepared by Jeanne Foy Alaska State NAEP Coordinator. NAEP: the Common Measurement of Student Achievement among States.

450 views • 31 slides

Succeeding in Academic Research

Succeeding in Academic Research

Succeeding in Academic Research. David Winter The Careers Group Consultancy. What forces are changing the academic research world?. Political – policy / legislation / ideology Economic – financial pressures /priorities Sociological – human / environmental needs

146 views • 4 slides

Academic Rigor in the Classroom

Academic Rigor in the Classroom. Objectives. Assess our current understanding of rigor in the classroom Develop a set of best mgt practices for promoting academic excellence through rigor in the classroom

367 views • 15 slides

Academic Rigor in the Classroom

Academic Rigor in the Classroom. “Do not confine your children to your own learning, for they were born in another time.”. Hebrew Proverb. Objectives. Assess our current understanding of rigor in the classroom

606 views • 8 slides

Academic Rigor

Academic Rigor

Academic Rigor. High Expectations and Standards for Students And Staff. TOPICS TO COVER. What is academic rigor? How do we know when it is happening? How can we cultivate it?. Now these are high standards!. WHAT IS ACADEMIC RIGOR?.

535 views • 52 slides

IMAGES

  1. PPT

    rigor in qualitative research ppt

  2. PPT

    rigor in qualitative research ppt

  3. PPT

    rigor in qualitative research ppt

  4. How to ensure rigour in qualitative research [quality, trustworthiness and examples]

    rigor in qualitative research ppt

  5. PPT

    rigor in qualitative research ppt

  6. Qualitative Research Powerpoint Ppt Template Bundles

    rigor in qualitative research ppt

VIDEO

  1. RIGOR IN QUALITATIVE RESEARCH

  2. Karen Albright, PhD -- Qualitative and Mixed Methods Mini-Series

  3. Qualitative Research Designs (PPT COPY) (NO SOUND)

  4. CHARACTERISTICS, STRENGTHS, WEAKNESSES of QUALITATIVE RESEARCH (PPT COPY) (NO SOUND)

  5. Critical Methodologies in Qualitative Research Part 1 with Dr. Christian Chan

  6. Clinical Research Industry Insight via PPT Presentation

COMMENTS

  1. Research 101: Rigor in Qualitative Research

    5. Qualitative Transferability Dependability Credibility Confirmability Presence of accurate description or interpretation of human experience (that people who also share the same experience would immediately recognize) How one determines the extent to which the findings of a particular inquiry have applicability in other contexts or with other subjects When another researcher can follow the ...

  2. Achieving Appropriate Rigor in Qualitative Research

    Achieving Appropriate Rigor in Qualitative Research Research Day, February 4, 2011 Mary Katherine O'Connor, Ph.D. School of Social Work, Virginia Commonwealth University [email protected]. Presentation Goals • Propose a multi-paradigmatic heuristic for understanding variety in qualitative research • Detail differential standards for research quality depending upon paradigmatic perspective ...

  3. PDF Quality in Qualitative Research 1

    Quality in Qualitative Research 1 Elizabeth M. Pope (2019) - for teaching purposes only; do not cite or redistribute Hello! I want to welcome everyone to this presentation on Quality and Rigor in Qualitative Research. I'm Dr. Elizabeth Pope, an Assistant Professor of Educational Research and I specialize in qualitative research.

  4. A Guide To Qualitative Rigor In Research

    Rigor, in qualitative terms, is a way to establish trust or confidence in the findings of a research study. It allows the researcher to establish consistency in the methods used over time. It also provides an accurate representation of the population studied. As a nurse, you want to build your practice on the best evidence you can and to do so ...

  5. A Review of the Quality Indicators of Rigor in Qualitative Research

    Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...

  6. PDF Achieving Rigor in Qualitative Analysis: the Role of Active

    Qualitative analysis is a central tool for developing new theory (Edmondson & McManus, 2007; Eisen-hardt, 1989). In recent years, there has been a call for increasing the rigor of qualitative research (Lamont & White,2008;Lubet,2017;Pratt,Kaplan,&Whittington, 2020; Small, 2013). This debate on rigor has led some

  7. PDF Achieving Rigor in Qualitative Analysis: The Role of Active

    To pre-empt skepticism about qualitative rigor, scholars have articulated broad guidelines on how to conduct qualitative research properly by calling on scholars to develop "a coding scheme and, insofar as possible, provide a sample of likely coding categories" (Lamont & White, 2008: 143), yet they have been less explicit about how to do so.

  8. Quality Indicators of Rigor in Qualitative Methods & Analysis Dr

    28 Rigor Outline what you did in your data analysis Research journal for reflexivity on the whole research process Coding book - codes and possible initial interpretation Audit trail Download ppt "Quality Indicators of Rigor in Qualitative Methods & Analysis Dr. Louise McCuaig Dr. Sue Sutherland."

  9. A Reviewer's Guide to Qualitative Rigor

    Branda Nowell, Kate Albrecht, A Reviewer's Guide to Qualitative Rigor, Journal of Public Administration Research and Theory, Volume 29, Issue 2, April 2019, Pages 348-363, ... If we want qualitative research to have a greater substantive impact on the discipline, we need to give non-qualitatively trained scholars the tools to assess the ...

  10. Rigor or Reliability and Validity in Qualitative Research: P ...

    Rigor of qualitative research continues to be challenged even now in the 21st century—from the very idea that qualitative research alone is open to ... and presentation. 56 First is the design consideration. Developing a self-conscious design, the paradigm assumption, the purposeful choice of small sample of informants relevant to the study ...

  11. A Review of the Quality Indicators of Rigor in Qualitative Research

    Effective presentation - presented in a way that others can emulate and/or build upon the work 6. Reflective critique - regular, systematic approach to question and learn from and during research process ... While a conceptual framework is important to rigor in qualitative research, Huberman and Miles caution qualitative researchers about ...

  12. 17 20. Quality in qualitative studies: Rigor in research design

    In Chapter 10 we talked about quality in quantitative studies, but we built our discussion around concepts like reliability and validity.. With qualitative studies, we generally think about quality in terms of the concept of rigor.The difference between quality in quantitative research and qualitative research extends beyond the type of data (numbers vs. words/sounds/images).

  13. PowerPoint Slides: SOWK 621.01: Research I: Basic Research Methodology

    Lesson 10: Sampling in Qualitative Research Download. PPTX Lesson 11: Qualitative Measurement & Rigor Download. PPTX Lesson 12: Qualitative Design & Data Gathering Download. PPTX Lesson 1: Introduction to Research ... PowerPoint Slides: SOWK 621.01: Research I: Basic Research Methodology.

  14. PowerPoint Presentations

    Chapter 23: Sampling in Qualitative Research, PowerPoint Presentation; Chapter 24: Data Collection in Qualitative Research, PowerPoint Presentation; Chapter 25: Qualitative Data Analysis, PowerPoint Presentation; Chapter 26: Trustworthiness and Rigor in Qualitative Research, PowerPoint Presentation; Chapter 27: Basics of Mixed Methods Research ...

  15. Ensuring Rigor in Qualitative Data Analysis: A Design Research Approach

    To ensure the research process was trustworthy, Guba and Lincoln's (1989) criteria for ensuring rigor in qualitative research were addressed by employing the following strategies. For the purpose of credibility and to affirm the research measured a design researchers understanding of and approach to research, Charmaz, well-established methods ...

  16. Rigor With or Without Templates? The Pursuit of Methodological Rigor in

    The domain of qualitative research is replete with templates, standard protocols for the analysis of qualitative data. ... The use of such templates has sometimes been considered as automatically enhancing the rigor of qualitative research. In this article, we challenge the view that in the context of qualitative research, rigor is tied into ...

  17. 20.1 Introduction to qualitative rigor

    In Chapter 11 we talked about quality in quantitative studies, but we built our discussion around concepts like reliability and validity.With qualitative studies, we generally think about quality in terms of the concept of rigor.The difference between quality in quantitative research and qualitative research extends beyond the type of data (numbers vs. words/sounds/images).

  18. Rigor and Transparency in Research

    Reflexive transparency refers to the qualitative researcher being clear about their position in the research. Photo by Timon Studler. Transparency in qualitative research is essential for maintaining rigor, trustworthiness, and ethical integrity. By being transparent, researchers allow their work to be scrutinized, critiqued, and improved upon ...

  19. Improving Qualitative Research Findings Presentations:

    Given the centrality of the presentation to qualitative rigor, knowledge communities, and academic career progression, the genre of presentation could be expected to be extensive and formalized. However, doctoral programs remain focused on developing substantive knowledge and methodological expertise (League of European Research Universities, 2010

  20. A Review of the Quality Indicators of Rigor in Qualitative Research

    Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework, both of which contribute to the selection of appropriate ...

  21. Rigor in Qualitative Research: The Assessment of Trustworthiness

    SeveraL strategies for the achievement of rigor in qualitative research usefuL for both researchers and consumers of research are described. T he worth of any research endeavor, regardless of the approach taken, is evaluated by peers, grant reviewers, and readers. ... 1986; Leininger, 1985), this presentation will be based on Guba's model ...

  22. PPT

    Presentation Transcript. We are in the age of systematic research • Miracles or unexplained revelations are simply not allow in academic research • Archimedes, Galileo and Newton would not get a degree for at least some of their discovers • Rigor is a concept which spans a spectrum or perhaps a continuum. Research needs to be "rigorous ...

  23. Psychometric properties of the TACT framework—Determining rigor in

    The verification of the TACT started with the development of a rating tool. The initial TACT scale contained 16 items to assess four dimensions of rigor in qualitative research studies: trustworthiness, auditability, credibility and transferability. Participants rated each item on a 5-point Likert scale, where 1 = very important, 2 = important ...