  • Volume 22, Issue 1
  • How to appraise qualitative research

  • Calvin Moorley1,
  • Xabi Cathala2
  • 1Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • 2Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc{at}lsbu.ac.uk

https://doi.org/10.1136/ebnurs-2018-103044


Introduction

In order to make a decision about implementing evidence into practice, nurses need to be able to critically appraise research. Nurses also have a professional responsibility to maintain up-to-date practice. 1 This paper provides a guide on how to critically appraise a qualitative research paper.

What is qualitative research?


Useful terms

Some of the qualitative approaches used in nursing research include grounded theory, phenomenology, ethnography, case study (which can lend itself to mixed methods) and narrative analysis. The data collection methods used in qualitative research include in-depth interviews, focus groups, observations and stories in the form of diaries or other documents. 3

Authenticity

Title, keywords, authors and abstract.

In a previous paper, we discussed how the title, keywords, authors’ positions and affiliations, and abstract can influence the authenticity and readability of quantitative research papers; 4 the same applies to qualitative research. However, other areas such as the purpose of the study and the research question, theoretical and conceptual frameworks, sampling and methodology also need consideration when appraising a qualitative paper.

Purpose and question

The topic under investigation in the study should be guided by a clear research question or a statement of the problem or purpose. An example of a statement can be seen in table 2. Unlike most quantitative studies, qualitative research does not seek to test a hypothesis. The research statement should be specific to the problem and should be reflected in the design. This will inform the reader of what will be studied and justify the purpose of the study. 5

Example of research question and problem statement

An appropriate literature review should have been conducted and summarised in the paper. It should be linked to the subject, using peer-reviewed primary research that is up to date. We suggest papers with an age limit of 5–8 years, excluding original work. The literature review should give the reader a balanced view of what has been written on the subject. It is worth noting that for some qualitative approaches the literature review is conducted after the data collection to minimise bias, for example, in grounded theory studies. In phenomenological studies, the review sometimes occurs after the data analysis. If this is the case, the author(s) should make this clear.

Theoretical and conceptual frameworks

Most authors use the terms theoretical and conceptual frameworks interchangeably. Usually, a theoretical framework is used when research is underpinned by one theory that aims to help predict, explain and understand the topic investigated. A theoretical framework is the blueprint that can hold or scaffold a study’s theory. Conceptual frameworks are based on concepts from various theories and findings which help to guide the research. 6 It is the researcher’s understanding of how different variables are connected in the study, for example, the literature review and research question. Theoretical and conceptual frameworks connect the researcher to existing knowledge and these are used in a study to help to explain and understand what is being investigated. A framework is the design or map for a study. When you are appraising a qualitative paper, you should be able to see how the framework helped with (1) providing a rationale and (2) the development of research questions or statements. 7 You should be able to identify how the framework, research question, purpose and literature review all complement each other.

Sampling

There remains an ongoing debate about what an appropriate sample size for a qualitative study should be. We hold the view that qualitative research does not seek statistical power, and a sample size can be as small as one (eg, a single case study) or any number above one (eg, a grounded theory study), provided that it is appropriate and answers the research problem. Shorten and Moorley 8 explain that three main types of sampling exist in qualitative research: (1) convenience, (2) judgement or (3) theoretical. In the paper, the sample size should be stated and a rationale for how it was decided should be clear.

Methodology

Qualitative research encompasses a variety of methods and designs. Based on the chosen method or design, the findings may be reported in a variety of different formats. Table 3 provides the main qualitative approaches used in nursing with a short description.

Different qualitative approaches

The authors should make it clear why they are using a qualitative methodology and the chosen theoretical approach or framework. The paper should provide details of participant inclusion and exclusion criteria as well as recruitment sites where the sample was drawn from, for example, urban, rural, hospital inpatient or community. Methods of data collection should be identified and be appropriate for the research statement/question.

Data collection

Overall there should be a clear trail of data collection. The paper should explain when and how the study was advertised and when participants were recruited and consented. It should also state when and where the data collection took place. Data collection methods include interviews; these can be structured or unstructured, and in-depth one-to-one or group. 9 Group interviews are often referred to as focus group interviews; these are often voice recorded and transcribed verbatim. It should be clear whether these were conducted face to face, by telephone or via any other type of media. Table 3 includes some data collection methods. Other collection methods not included in table 3 are observation, diaries, video recording, photographs, documents or objects (artefacts). The schedule of questions for interview, or the protocol for non-interview data collection, should be provided, available or discussed in the paper. Some authors may use the phrase ‘recruitment ended once data saturation was reached’. This simply means that the researchers were not gaining any new information at subsequent interviews, so they stopped data collection.

The data collection section should include details of the ethical approval gained to carry out the study, for example, the strategies used to gain participants’ consent to take part. The authors should make clear whether any ethical issues arose and how these were resolved or managed.

The approach to data analysis (see ref 10) needs to be clearly articulated, for example, was there more than one person responsible for analysing the data? How were any discrepancies in findings resolved? An audit trail of how the data were analysed, including their management, should be documented. If member checking was used, this should also be reported. This level of transparency contributes to the trustworthiness and credibility of qualitative research. Some researchers provide a diagram of how they approached data analysis to demonstrate the rigour applied (figure 1).


Figure 1 Example of data analysis diagram.

Validity and rigour

The study’s validity relies on the statement of the question/problem, the theoretical/conceptual framework, design, method, sample and data analysis. When critiquing qualitative research, these elements will help you to determine the study’s reliability. Noble and Smith 11 explain that validity is the integrity of the data and methods applied, and that findings should accurately reflect the data. Rigour should acknowledge the researcher’s role and involvement, as well as any biases. Essentially, it should focus on truth value, consistency, neutrality and applicability. 11 The authors should discuss whether they used triangulation (see table 2) to develop the best possible understanding of the phenomena.

Themes and interpretations and implications for practice

In qualitative research no hypothesis is tested, therefore, there is no specific result. Instead, qualitative findings are often reported in themes based on the data analysed. The findings should be clearly linked to, and reflect, the data. This contributes to the soundness of the research. 11 The researchers should make it clear how they arrived at the interpretations of the findings. The theoretical or conceptual framework used should be discussed aiding the rigour of the study. The implications of the findings need to be made clear and where appropriate their applicability or transferability should be identified. 12

Discussions, recommendations and conclusions

The discussion should relate to the research findings as the authors seek to make connections with the literature reviewed earlier in the paper to contextualise their work. A strong discussion will connect the research aims and objectives to the findings and will be supported with literature if possible. A paper that seeks to influence nursing practice will have a recommendations section for clinical practice and research. A good conclusion will focus on the findings and discussion of the phenomena investigated.

Qualitative research has much to offer nursing and healthcare in terms of understanding patients’ experience of illness, treatment and recovery; it can also help us to better understand areas of healthcare practice. However, it must be done with rigour, and this paper provides some guidance for appraising such research. To help you critique a qualitative research paper, some guidance is provided in table 4.

Some guidance for critiquing qualitative research

  • ↵ Nursing and Midwifery Council. The code: standards of conduct, performance and ethics for nurses and midwives. 2015. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21 Aug 2018).

Patient consent for publication Not required.

Competing interests None declared.

Provenance and peer review Commissioned; internally peer reviewed.


Critically appraising qualitative research

  • Ayelet Kuper , assistant professor 1 ,
  • Lorelei Lingard , associate professor 2 ,
  • Wendy Levinson , Sir John and Lady Eaton professor and chair 3
  • 1 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
  • 2 Department of Paediatrics and Wilson Centre for Research in Education, University of Toronto and SickKids Learning Institute; BMO Financial Group Professor in Health Professions Education Research, University Health Network, 200 Elizabeth Street, Eaton South 1-565, Toronto
  • 3 Department of Medicine, Sunnybrook Health Sciences Centre
  • Correspondence to: A Kuper ayelet94{at}post.harvard.edu

Six key questions will help readers to assess qualitative research

Summary points

Appraising qualitative research is different from appraising quantitative research

Qualitative research papers should show appropriate sampling, data collection, and data analysis

Transferability of qualitative research depends on context and may be enhanced by using theory

Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power

Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.

In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.

Box 1 Key questions to ask when reading qualitative research studies

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?

One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a sample of such survivors (rather than, for example, healthcare providers, social services workers, or academics in the field). The sample should be broad enough to capture the many facets of a phenomenon, and limitations to the sample should be clearly justified. Since the answers to questions of experience and meaning also relate to people’s social affiliations (culture, religion, socioeconomic group, profession, etc), it is also important that the researcher acknowledges these contexts in the selection of a study sample.

In contrast with quantitative approaches, qualitative studies do not usually have predetermined sample sizes. Sampling stops when a thorough understanding of the phenomenon under study has been reached, an end point that is often called saturation. Researchers consider samples to be saturated when encounters (interviews, observations, etc) with new participants no longer elicit trends or themes not already raised by previous participants. Thus, to sample to saturation, data analysis has to happen while new data are still being collected. Multiple sampling methods may be used to broaden the understanding achieved in a study (box 2). These sampling issues should be clearly articulated in the methods section.

Box 2 Qualitative sampling methods for interviews and focus groups 9

Examples are for a hypothetical study of financial concerns among adult patients with chronic renal failure receiving ongoing haemodialysis in a single hospital outpatient unit.

Typical case sampling —sampling the most ordinary, usual cases of a phenomenon

The sample would include patients likely to have had typical experiences for that haemodialysis unit and patients who fit the profile of patients in the unit for factors found on literature review. Other typical cases could be found via snowball sampling (see below)

Deviant case sampling —sampling the most extreme cases of a phenomenon

The sample would include patients likely to have had different experiences of relevant aspects of haemodialysis. For example, if most patients in the unit are 60-70 years old and recently began haemodialysis for diabetic nephropathy, researchers might sample the unmarried university student in his 20s on haemodialysis since childhood, the 32 year old woman with lupus who is now trying to get pregnant, and the 90 year old who newly started haemodialysis due to an adverse reaction to radio-opaque contrast dye. Other deviant cases could be found via theoretical and/or snowball sampling (see below)

Critical case sampling —sampling cases that are predicted (based on theoretical models or previous research) to be especially information-rich and thus particularly illuminating

The nature of this sample depends on previous research. For example, if research showed that marital status was a major determinant of financial concerns for haemodialysis patients, then critical cases might include patients whose marital status changed while on haemodialysis

Maximum-variation sampling —sampling as wide a range of perspectives as possible to capture the broadest set of information and experiences

The sample would include typical, deviant, and critical cases (as above), plus any other perspectives identified

Confirming-disconfirming sampling —sampling both individuals or texts whose perspectives are likely to confirm the researcher’s developing understanding of the phenomenon under study and those whose perspectives are likely to challenge that understanding

The sample would include patients whose experiences would likely either confirm or disconfirm what the researchers had already learnt (from other patients) about financial concerns among patients in the haemodialysis unit. This could be accomplished via theoretical and/or snowball sampling (see below)

Snowball sampling —sampling participants found by asking current participants in a study to recommend others whose experiences would be relevant to the study

Current participants could be asked to provide the names of others in the unit who they thought, when asked about financial concerns, would either share their views (confirming), disagree with their views (disconfirming), have views typical of patients on their unit (typical cases), or have views different from most other patients on their unit (deviant cases)

Theoretical sampling —sampling individuals or texts whom the researchers predict (based on theoretical models or previous research) would add new perspectives to those already represented in the sample

Researchers could use their understanding of known issues for haemodialysis patients that would, in theory, relate to financial concerns to ensure that the relevant perspectives were represented in the study. For example, if, as the research progressed, it turned out that none of the patients in the sample had had to change or leave a job in order to accommodate haemodialysis scheduling, the researchers might (based on previous research) choose to intentionally sample patients who had left their jobs because of the time commitment of haemodialysis (but who could not do peritoneal dialysis) and others who had switched to jobs with more flexible scheduling because of their need for haemodialysis

It is important that a qualitative study carefully describes the methods used in collecting data. The appropriateness of the method(s) selected for the specific research question should be justified, ideally with reference to the research literature. It should be clear that methods were used systematically and in an organised manner. Attention should be paid to specific methodological challenges such as the Hawthorne effect, 1 whereby the presence of an observer may influence participants’ behaviours. By using a technique called thick description, qualitative studies often aim to include enough contextual information to provide readers with a sense of what it was like to have been in the research setting.

Another technique that is often used is triangulation, with which a researcher uses multiple methods or perspectives to help produce a more comprehensive set of findings. A study can triangulate data, using different sources of data to examine a phenomenon in different contexts (for example, interviewing palliative patients who are at home, those who are in acute care hospitals, and those who are in specialist palliative care units); it can also triangulate methods, collecting different types of data (for example, interviews, focus groups, observations) to increase insight into a phenomenon.

Another common technique is the use of an iterative process, whereby concurrent data analysis is used to inform data collection. For example, concurrent analysis of an interview study about lack of adherence to medications among a particular social group might show that early participants seem to be dismissive of the efforts of their local pharmacists; the interview script might then be changed to include an exploration of this phenomenon. The iterative process constitutes a distinctive qualitative tradition, in contrast to the tradition of stable processes and measures in quantitative studies. Iterations should be explicit and justified with reference to the research question and sampling techniques so that the reader understands how data collection shaped the resulting insights.

Qualitative studies should include a clear description of a systematic form of data analysis. Many legitimate analytical approaches exist; regardless of which is used, the study should report what was done, how, and by whom. If an iterative process was used, it should be clearly delineated. If more than one researcher analysed the data (which depends on the methodology used) it should be clear how differences between analyses were negotiated. Many studies make reference to a technique called member checking, wherein the researcher shows all or part of the study’s findings to participants to determine if they are in accord with their experiences. 2 Studies may also describe an audit trail, which might include researchers’ analysis notes, minutes of researchers’ meetings, and other materials that could be used to follow the research process.

The contextual nature of qualitative research means that careful thought must be given to the potential transferability of its results to other sociocultural settings. Though the study should discuss the extent of the findings’ resonance with the published literature, 3 much of the onus of assessing transferability is left to readers, who must decide if the setting of the study is sufficiently similar for its results to be transferable to their own context. In doing so, the reader looks for resonance—the extent that research findings have meaning for the reader.

Transferability may be helped by the study’s discussion of how its results advance theoretical understandings that are relevant to multiple situations. For example, a study of patients’ preferences in palliative care may contribute to theories of ethics and humanity in medicine, thus suggesting relevance to other clinical situations such as the informed consent exchange before treatment. We have explained elsewhere in this series the importance of theory in qualitative research, and there are many who believe that a key indicator of quality in qualitative research is its contribution to advancing theoretical understanding as well as useful knowledge. This debate continues in the literature, 4 but from a pragmatic perspective most qualitative studies in health professions journals emphasise results that relate to practice; theoretical discussions tend to be published elsewhere.

Reflexivity is particularly important within the qualitative paradigm. Reflexivity refers to recognition of the influence a researcher brings to the research process. It highlights potential power relationships between the researcher and research participants that might shape the data being collected, particularly when the researcher is a healthcare professional or educator and the participant is a patient, client, or student. 5 It also acknowledges how a researcher’s gender, ethnic background, profession, and social status influence the choices made within the study, such as the research question itself and the methods of data collection. 6 7

Research articles written in the qualitative paradigm should show evidence both of reflexive practice and of consideration of other relevant ethical issues. Ethics in qualitative research should extend beyond prescriptive guidelines and research ethics boards into a thorough exploration of the ethical consequences of collecting personal experiences and opening those experiences to public scrutiny (a detailed discussion of this problem within a research report may, however, be limited by the practicalities of word count limitations). 8 Issues of confidentiality and anonymity can become quite complex when data constitute personal reports of experience or perception; the need to minimise harm may involve not only protection from external scrutiny but also mechanisms to mitigate potential distress to participants from sharing their personal stories.

In conclusion: is what the researchers did clear?

The qualitative paradigm includes a wide range of theoretical and methodological options, and qualitative studies must include clear descriptions of how they were conducted, including the selection of the study sample, the data collection methods, and the analysis process. The list of key questions for beginning readers to ask when reading qualitative research articles (see box 1) is intended not as a finite checklist, but rather as a beginner’s guide to a complex topic. Critical appraisal of particular qualitative articles may differ according to the theories and methodologies used, and achieving a nuanced understanding in this area is fairly complex.

Further reading

Crabtree F, Miller WL, eds. Doing qualitative research . 2nd ed. Thousand Oaks, CA: Sage, 1999.

Denzin NK, Lincoln YS, eds. Handbook of qualitative research . 2nd ed. Thousand Oaks, CA: Sage, 2000.

Finlay L, Ballinger C, eds. Qualitative research for allied health professionals: challenging choices . Chichester: Wiley, 2006.

Flick U. An introduction to qualitative research . 2nd ed. London: Sage, 2002.

Green J, Thorogood N. Qualitative methods for health research . London: Sage, 2004.

Lingard L, Kennedy TJ. Qualitative research in medical education . Edinburgh: Association for the Study of Medical Education, 2007.

Mauthner M, Birch M, Jessop J, Miller T, eds. Ethics in Qualitative Research . Thousand Oaks, CA: Sage, 2002.

Seale C. The quality of qualitative research . London: Sage, 1999.

Silverman D. Doing qualitative research . Thousand Oaks, CA: Sage, 2000.

Journal articles

Greenhalgh T. How to read a paper: papers that go beyond numbers. BMJ 1997;315:740-3.

Mays N, Pope C. Qualitative research: Rigour and qualitative research. BMJ 1995;311:109-12.

Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000;320:50-2.

Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res 1998;8:341-51.

Internet resources

National Health Service Public Health Resource Unit. Critical appraisal skills programme: qualitative research appraisal tool . 2006. www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf

Cite this as: BMJ 2008;337:a1035

  • Related to doi:10.1136/bmj.a288
  • doi:10.1136/bmj.39602.690162.47
  • doi:10.1136/bmj.a1020
  • doi:10.1136/bmj.a879
  • doi:10.1136/bmj.a949

This is the last in a series of six articles that aim to help readers to critically appraise the increasing number of qualitative research articles in clinical journals. The series editors are Ayelet Kuper and Scott Reeves.

For a definition of general terms relating to qualitative research, see the first article in this series.

Contributors: AK wrote the first draft of the article and collated comments for subsequent iterations. LL and WL made substantial contributions to the structure and content, provided examples, and gave feedback on successive drafts. AK is the guarantor.

Funding: None.

Competing interests: None declared.

Provenance and peer review: Commissioned; externally peer reviewed.

  • ↵ Holden JD. Hawthorne effects and research into professional practice. J Evaluation Clin Pract 2001;7:65-70.
  • ↵ Hammersley M, Atkinson P. Ethnography: principles in practice. 2nd ed. London: Routledge, 1995.
  • ↵ Silverman D. Doing qualitative research. Thousand Oaks, CA: Sage, 2000.
  • ↵ Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000;320:50-2.
  • ↵ Lingard L, Kennedy TJ. Qualitative research in medical education. Edinburgh: Association for the Study of Medical Education, 2007.
  • ↵ Seale C. The quality of qualitative research. London: Sage, 1999.
  • ↵ Wallerstein N. Power between evaluator and community: research relationships within New Mexico’s healthier communities. Soc Sci Med 1999;49:39-54.
  • ↵ Mauthner M, Birch M, Jessop J, Miller T, eds. Ethics in qualitative research. Thousand Oaks, CA: Sage, 2002.
  • ↵ Kuzel AJ. Sampling in qualitative inquiry. In: Crabtree F, Miller WL, eds. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage, 1999:33-45.



  • Teesside University Student & Library Services
  • Learning Hub Group

Critical Appraisal for Health Students


Appraisal of a Qualitative paper: Top tips


Critical appraisal of a qualitative paper

This guide, aimed at health students, provides basic-level support for appraising qualitative research papers. It is designed for students who have already attended lectures on critical appraisal. One framework for appraising qualitative research (based on four aspects of trustworthiness) is provided, and there is an opportunity to practise the technique on a sample article.

Support Materials

  • Framework for reading qualitative papers
  • Critical appraisal of a qualitative paper PowerPoint

To practise following this framework for critically appraising a qualitative article, please look at the following article:

Schellekens, M.P.J. et al. (2016) 'A qualitative study on mindfulness-based stress reduction for breast cancer patients: how women experience participating with fellow patients', Support Care Cancer, 24(4), pp. 1813-1820.

Critical appraisal of a qualitative paper: practical example.

  • Credibility
  • Transferability
  • Dependability
  • Confirmability

How to use this practical example

Using the framework, you can have a go at appraising a qualitative paper; we are going to look at the article by Schellekens et al. (2016) given above.

Step 1. Take a quick look at the article.
Step 2. Click on the Credibility tab above; there are questions to help you appraise the trustworthiness of the article. Read the questions and look for the answers in the article.
Step 3. Click on each question and our answers will appear.
Step 4. Repeat with the other aspects of trustworthiness: transferability, dependability and confirmability.

Questioning the credibility:

  • Who is the researcher? What has been their experience? How well do they know this research area?
  • Was the best method chosen? What method did they use? Was there any justification? Was the method scrutinised by peers? Is it a recognisable method? Was there triangulation (more than one method used)?
  • How was the data collected? Was data collected from the participants at more than one time point? How long were the interviews? Were questions asked to the participants in different ways?
  • Is the research reporting what the participants actually said? Were the participants shown transcripts/notes of the interviews/observations to ‘check’ for accuracy? Are direct quotes used from a variety of participants?
  • How would you rate the overall credibility?

Questioning the transferability:

  • Was a meaningful sample obtained? How many people were included? Is the sample diverse? How were they selected? Are the demographics given?
  • Does the research cover diverse viewpoints? Do the results include negative cases? Was data saturation reached?
  • What is the overall transferability? Can the research be transferred to other settings?

Questioning the dependability:

  • How transparent is the audit trail? Can you follow the research steps? Are the decisions made transparent? Is the whole process explained in enough detail? Did the researcher keep a field diary? Is there a clear limitations section?
  • Was there peer scrutiny of the research? Was the research plan shown to peers/colleagues for approval and/or feedback? Did two or more researchers independently judge data?
  • How would you rate the overall dependability? Would the results be similar if the study was repeated? How consistent are the data and findings?

Questioning the confirmability:

  • Is the process of analysis described in detail? Is a method of analysis named or described? Is there sufficient detail? Have any checks taken place? Was there cross-checking of themes? Was there a team of researchers?
  • Has the researcher reflected on possible bias? Is there a reflexive diary, giving a detailed log of thoughts, ideas and assumptions?
  • How do you rate the overall confirmability? Has the researcher attempted to limit bias?

Questioning the overall trustworthiness:

  • Overall, how trustworthy is the research?

Further information

See Useful resources for links, books and LibGuides to help with critical appraisal.

  • Last Updated: Aug 25, 2023 2:48 PM
  • URL: https://libguides.tees.ac.uk/critical_appraisal

Qualitative Studies


Qualitative Research Studies: Introduction


Research design determines how research materials will be collected. One or more research methods, for example experiment, survey, interview or observation, are chosen depending on the research objectives. In some research contexts a survey may be suitable; in others, interviews, case studies or observation might be more appropriate. Research design provides insight into "how" to conduct research using a particular methodology: every researcher has a set of research questions, and the research design sets out how they will be addressed.

Research design can therefore be defined as a framework of research methods and techniques applied by a researcher to combine the different elements and components of research in a systematic manner.

Qualitative methods try to gather detailed, rich data allowing an in-depth understanding of research phenomena; they seek the "why" rather than the "how."

Qualitative Data Collection

Data obtained using qualitative data collection methods can be used to find new ideas, opportunities, and problems, test their value and accuracy, formulate predictions, explore a certain field in more detail, and explain the numbers obtained using quantitative data collection techniques.

Since qualitative data collection methods usually do not involve numbers and mathematical calculations, qualitative data is often seen as more subjective, but at the same time, it allows a greater depth of understanding.

Aspers, P., Corte, U. What is Qualitative in Qualitative Research .  Qual Sociol   42 , 139–160 (2019). 

Types of Qualitative Studies

Qualitative study methods are semi-structured or unstructured, usually involve small sample sizes and lack strong scientific controls.

Qualitative Study Methods

Qualitative study methods employ many of the same methods as quantitative data collection, except that instead of structured or closed, they are semi- or unstructured and open-ended.  Some of the most common qualitative  study techniques include open-ended surveys and questionnaires, interviews, focus groups, observation, case studies, and so on.

There are generally five types of qualitative data collection:

  • Ethnography research: Involves semi-structured or unstructured interviews with open-ended questions; participant and non-participant observation; and collected materials including documents, books, papers, audio, images, videos, etc.
  • Phenomenological research: In-depth interviewing, which involves conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program, or situation. The participant interviews may be structured, semi-structured or unstructured; data may also include reflective journals, written or oral self-reports, and participants' aesthetic expressions.
  • Grounded theory research: Data collection methods often include in-depth interviews using open-ended questions. Questions can be adjusted as theory emerges. Participant observation and focus groups may also be used, as well as collecting and studying materials … including documents, books, papers, audio, images, artifacts, videos, etc. used by participants in their daily lives.
  • Narrative: Participant or non-participant interviews; aesthetic expressions; one's own and others' observations; storytelling; letter writing; autobiographical writing; collected materials …; personal information such as values. Narrative analysis focuses on different elements to make diverse but equally substantial and meaningful interpretations and conclusions. It is a genre of analytical frames used by researchers to interpret information within the context of research shared by all in daily life.
  • Case study : Focus groups; semi-structured or unstructured interviews with open-ended questions; participant and non-participant observation; collected materials

Nayar, S., & Stanley, D. M. (Eds.). (2015).  Qualitative research methodologies for occupational science and therapy . London: Routledge.

Frank, G., & Polkinghorne, D. (2010). Qualitative Research in Occupational Therapy: From the First to the Second Generation . OTJR (Thorofare, N.J.), 30(2), 51-57.

How To Search for Qualitative Studies

Databases categorize their records using subject terms or controlled vocabularies. These Subject Headings vary for each database.

Medline/PubMed : MeSH Subject Headings

  • Qualitative Research: Any type of research that employs nonnumeric information to explore individual or group characteristics, producing findings not arrived at by statistical procedures or other quantitative means. Includes Document Analysis & Hermeneutics.
  • Interviews as Topic:  Works about conversations with an individual or individuals held in order to obtain information about their background and other personal biographical data, their attitudes and opinions, etc. It includes works about school admission or job interviews.
  • Focus Groups : A method of data collection and a QUALITATIVE RESEARCH tool in which a small group of individuals are brought together and allowed to interact in a discussion of their opinions about topics, issues, or questions.
  • Grounded Theory : The generation of theories from analysis of empirical data.
  • Nursing Methodology Research :  Research carried out by nurses concerning techniques and methods to implement projects and to document information, including methods of interviewing patients, collecting data, and forming inferences. The concept includes exploration of methodological issues such as human subjectivity and human experience.
  • Anecdotes As Topic : Works about brief accounts or narratives of an incident or event.
  • Narration : The act, process, or an instance of narrating, i.e., telling a story. In the context of MEDICINE or ETHICS, narration includes relating the particular and the personal in the life story of an individual.
  • Personal Narratives As Topic:  Works about accounts of individual experience in relation to a particular field or of participation in related activities.
  • Observational Studies As Topic : Works about clinical studies in which participants may receive diagnostic, therapeutic, or other types of interventions, but the investigator does not assign participants to specific interventions (as in an interventional study).

CINAHL (Cumulative Index to Nursing & Allied Health) : CINAHL Subject Headings 

  • Action Research: Research in which problem definition, data collection, factor formulation, planned change, data analysis, and problem redefinition continue in an ongoing cycle.
  • Ethnographic Research: Research which seeks to uncover the symbols and categories that members of a given culture use to interpret their world.
  • Ethnological Research: Comparison and contrasting of cultures and societies as a whole.
  • Ethnonursing Research: The study and analysis of a designated culture's viewpoints, beliefs, and practices about nursing care behavior.
  • Grounded Theory: A qualitative method developed by Glaser and Strauss to unite theory construction and data analysis.
  • Naturalist Inquiry: The use of the natural setting in research to enable understanding the whole rather than only part of the reality being studied.
  • Phenomenological Research: Research designed to discover and understand the meaning of human life experiences.
  • Focus Groups : Small groups of individuals brought together to discuss their opinions regarding specific issues, topics, and questions.
  • Interviews:  Face-to-face or telephone meetings with subjects for the purpose of gathering information.
  • Narratives : Descriptions or interpretations of events, usually in an informal manner. Often used as a data collection method for research. Do not confuse with STORYTELLING, a form of literature or telling a real or imagined story to an audience or listener.
  • Descriptive Research : Research studies that have as their main objective the accurate portrayal of the characteristics of persons, situations, or groups, and the frequency with which certain phenomena occur.
  • Observational Methods:  Methods of data collection in which the investigator witnesses and records behaviors of interest.
  • Projective Techniques : A variety of methods for measuring by providing respondents with unstructured stimuli to which to respond.

In CINAHL, on the Advanced Search page, there are Search Options. Scroll down to the Clinical Queries drop-down box and choose to limit the search to Qualitative-High Sensitivity, Qualitative-High Specificity, or Qualitative-Best Balance. High Sensitivity is the broadest search, intended to include ALL relevant material, but it may also include less relevant materials. High Specificity is the most targeted search, returning only the most relevant result set, but it may miss some relevant materials. Best Balance retrieves the best balance between sensitivity and specificity.

PsycINFO: Subject Headings

  • Grounded Theory
  • Narrative Analysis
  • Thematic Analysis : A qualitative research strategy for identifying, analyzing, and reporting identifiable patterns or clusters within data.
  • Focus Group
  • Focus Group Interview
  • Semi-Structured Interview
  • Interpretive Phenomenological Analysis: A systematic qualitative approach in which a researcher explores how individuals make sense of particular experiences, events, and states, primarily through the analysis of data from structured and semi-structured interviews.
  • Qualitative Measures : Measures or tests employing qualitative methods and/or data, such as narratives, interviews, and focus groups.

As with CINAHL, you can limit to Methodology.  Click on Additional Limits, scroll down to "Methodology" and choose "Qualitative Study", "Focus Groups" or "Interview".

NOTE: Be aware of inconsistent indexing. The above subject headings are not always indexed (i.e., added to articles) for qualitative research, nor is the publication type/methodology. So, to successfully find qualitative articles, also add keywords to your search strategy, or, if you are getting too few results, leave off the Clinical Queries or Methodology filters.

Free text keywords

Use selective free text keywords to search in Titles, Abstracts or Keywords of records held in the databases to identify Qualitative Research.  Examples:

When searching, do a combination of subject terms and keywords depending on the type of qualitative study you are looking for:

Qualitative Research [MeSH] OR (qualitative AND (research OR study OR method))

(Grounded Theory[MeSH] OR "grounded theory")

then combine it with your topic of interest

post-traumatic stress disorder OR PTSD

brain injury OR TBI OR "traumatic brain injury"
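The combination of a qualitative-methods filter with a topic block, as described above, can be sketched programmatically. The helper functions and term lists below are illustrative assumptions, not an official search filter:

```python
# Hypothetical sketch: assembling a PubMed-style boolean search string
# from a qualitative-methods block AND a topic-of-interest block.

def or_group(terms):
    """Join search terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

def build_query(method_terms, topic_terms):
    """Combine the methods filter with the topic block using AND."""
    return or_group(method_terms) + " AND " + or_group(topic_terms)

# Illustrative term lists only; adapt to the database you are searching.
methods = ['Qualitative Research[MeSH]', '"grounded theory"', 'interview*']
topic = ['"post-traumatic stress disorder"', 'PTSD']

print(build_query(methods, topic))
# (Qualitative Research[MeSH] OR "grounded theory" OR interview*) AND ("post-traumatic stress disorder" OR PTSD)
```

Building the string this way makes it easy to swap topic blocks in and out while reusing the same methods filter across searches.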

How to Critically Analyze Qualitative Studies

 A critical analysis of a qualitative study considers the “fit” of the research question with the qualitative method used in the study. There are many checklists available for the assessment of qualitative research studies.  Here are a few:

  • The Joanna Briggs Institute: The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews: Checklist for Qualitative Research
  • CASP: CASP Checklist: 10 questions to help you make sense of qualitative research
  • McMaster University:  Guidelines for Critical Review Form:  Qualitative Studies (Version 2.0) © Letts, L., Wilkins, S., Law, M., Stewart, D., Bosch, J., & Westmorland, M., 2007  

NOTE:  When using these checklists, be sure to use them critically and with careful consideration of the research context.  In other words, use the checklists as the beginning point in assessing the article and then re-assess the article based on whether the findings can be applied in your setting/population/disease/condition.

Additional Resources

Moorley, C., & Cathala, X. (2019). How to appraise qualitative research .  Evidence-Based Nursing ,  22 (1), 10-13.    ( open access)

Stenfors, T., Kajamaa, A. and Bennett, D. (2020), How to … assess the quality of qualitative research . Clin Teach, 17: 596-599.

Greenhalgh, T., & Taylor, R. (1997). How to read a paper: Papers that go beyond numbers (qualitative research).   BMj ,  315 (7110), 740-743. 

Jeanfreau, S. G., & Jack, L., Jr (2010). Appraising qualitative research in health education: guidelines for public health educators.   Health promotion practice ,  11 (5), 612–617. 

Research Series - Critical appraisal of qualitative research when reading papers Jul 22, 2022 Virtual Tutor; Research Series (Elsevier Health Education) YouTube Video 10:04 min [ This episode Professor Dall'Ora will be looking at qualitative research in more detail. In particular how to critically appraise qualitative studies.]

Hannes K. Chapter 4: Critical appraisal of qualitative research. In: Noyes J, Booth A, Hannes K, Harden A, Harris J, Lewin S, Lockwood C (editors), Supplementary Guidance for Inclusion of Qualitative Research in Cochrane Systematic Reviews of Interventions. Version 1 (updated August 2011). Cochrane Collaboration Qualitative Methods Group, 2011.

David Tod, Andrew Booth & Brett Smith (2022)  Critical appraisal ,  International Review of Sport and Exercise Psychology, 15:1, 52-72  (open access)

Validity & Reliability in Qualitative Studies

Validity & Reliability

Validity in qualitative research means the "appropriateness" of the tools, processes, and data: are they measuring what they are intended to measure in order to answer the research question? Assessing validity means checking whether the research question is "valid" for the desired outcome, whether the choice of methodology was appropriate for answering the research question, whether the study design was valid for the methodology, whether appropriate sampling and data analysis were used, and finally whether the results and conclusions were valid for the sample and within the context of the research question.

In contrast, reliability concerns the degree to which the results are consistent when the study is repeated using the same methodology.

The Basics of Validity and Reliability in Research by Joe O'Brian & Anders Orn, Research Collective.com

Brewer, M., & Crano, W. (2014). Research Design and Issues of Validity. In H. Reis & C. Judd (Eds.),  Handbook of Research Methods in Social and Personality Psychology  (pp. 11-26). Cambridge: Cambridge University Press. 

Golafshani, N. (2003). Understanding Reliability and Validity in Qualitative Research.   The Qualitative Report ,  8 (4), 597-606. 

Cypress, Brigitte S. EdD, RN, CCRN. Rigor or Reliability and Validity in Qualitative Research: Perspectives, Strategies, Reconceptualization, and Recommendations . Dimensions of Critical Care Nursing 36(4):p 253-263, 7/8 2017. 

Leung L. (2015). Validity, reliability, and generalizability in qualitative research .  Journal of family medicine and primary care ,  4 (3), 324–327. 

Understanding Reliability and Validity . Writing@CSU

Rosumeck, S., Wagner, M., Wallraf, S., & Euler, U. (2020). A validation study revealed differences in design and performance of search filters for qualitative research in PsycINFO and CINAHL.   Journal of clinical epidemiology ,  128 , 101–108. 

Wagner, M., Rosumeck, S., Küffmeier, C., Döring, K., & Euler, U. (2020). A validation study revealed differences in design and performance of MEDLINE search filters for qualitative research .  Journal of clinical epidemiology ,  120 , 17–24.

Franzel, B., Schwiegershausen, M., Heusser, P.  et al.   How to locate and appraise qualitative research in complementary and alternative medicine.   BMC Complement Altern Med   13 , 125 (2013). 

Finfgeld-Connett, D. and Johnson, E.D. (2013), Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing, 69: 194-204. 

Rogers, M, Bethel, A, Abbott, R.  Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: A comparison of search strategies.   Res Syn Meth . 2018; 9: 579– 586. 

Booth, A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review .  Syst Rev   5 , 74 (2016). 

Noyes, J., Hannes, K., Booth, A., Harris, J., Harden, A., Popay, J., ... & Pantoja, T. (2015). Qualitative research and Cochrane reviews .

Citing Sources

Citations are brief notations in the body of a research paper that point to a source in the bibliography or references cited section.

If your paper quotes, paraphrases, summarizes the work of someone else, you need to use citations.

Citation style guides such as APA, Chicago and MLA provide detailed instructions on how citations and bibliographies should be formatted.

Health Sciences Research Toolkit

Resources, tips, and guidelines to help you through the research process.

Finding Information

Library Research Checklist Helpful hints for starting a library research project.

Search Strategy Checklist and Tips Helpful tips on how to develop a literature search strategy.

Boolean Operators: A Cheat Sheet Boolean logic (named after mathematician George Boole) is a system of logic designed to yield optimal search results. The Boolean operators AND, OR, and NOT help you construct a logical search. Boolean operators act on sets: groups of records containing a particular word or concept.
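Since Boolean operators act on sets of records, their behaviour can be illustrated with ordinary set operations. The record IDs below are made up for demonstration:

```python
# Illustrative sketch: Boolean search operators as set operations
# on (hypothetical) database record IDs.

nursing = {101, 102, 103, 104}   # records containing "nursing"
qualitative = {103, 104, 105}    # records containing "qualitative"

# AND -> intersection: records containing BOTH terms
print(sorted(nursing & qualitative))   # [103, 104]

# OR -> union: records containing EITHER term
print(sorted(nursing | qualitative))   # [101, 102, 103, 104, 105]

# NOT -> difference: records with "nursing" but NOT "qualitative"
print(sorted(nursing - qualitative))   # [101, 102]
```

This is why AND narrows a search (smaller intersection), OR broadens it (larger union), and NOT excludes records.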

Literature Searching Overview and tips on how to conduct a literature search.

Health Statistics and Data Sources Health related statistics and data sources are increasingly available on the Internet. They can be found already neatly packaged, or as raw data sets. The most reliable data comes from governmental sources or health-care professional organizations.

Evaluating Information

Primary, Secondary and Tertiary Sources in the Health Sciences Understand what are considered primary, secondary and tertiary sources.

Scholarly vs Popular Journals/Magazines How to determine what are scholarly journals vs trade or popular magazines.

Identifying Peer-Reviewed Journals A “peer-reviewed” or “refereed” journal is one in which the articles it contains have been examined by people with credentials in the article’s field of study before it is published.

Evaluating Web Resources When searching for information on the Internet, it is important to be aware of the quality of the information presented to you. Keep in mind that anyone can host a web site, so check that the information you are looking at is credible and of value.

Conducting Research Through An Anti-Racism Lens This guide is for students, staff, and faculty who are incorporating an anti-racist lens at all stages of the research life cycle.

Understanding Research Study Designs Covers case studies, randomized control trials, systematic reviews and meta-analysis.

Qualitative Studies Overview of what is a qualitative study and how to recognize, find and critically appraise.

Writing and Publishing

Citing Sources Citations are brief notations in the body of a research paper that point to a source in the bibliography or references cited section.

Structure of a Research Paper Reports of research studies usually follow the IMRAD format. IMRAD (Introduction, Methods, Results, [and] Discussion) is a mnemonic for the major components of a scientific paper. These elements are included in the overall structure of a research paper.

Top Reasons for Non-Acceptance of Scientific Articles Avoid these mistakes when preparing an article for publication.

Annotated Bibliographies Guide on how to create an annotated bibliography.

Writing guides, Style Manuals and the Publication Process in the Biological and Health Sciences Style manuals, citation guides as well as information on public access policies, copyright and plagiarism.

J Clin Diagn Res. 2017 May; 11(5)

Critical Appraisal of Clinical Research

Azzam Al-Jundi

1 Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.

Salah Sakka

2 Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, KSA.

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, and with patients' values and expectations, into the decision-making process for patient care. The ability to identify and appraise the best available evidence, in order to integrate it with your own clinical experience and patients' values, is a fundamental skill. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

Introduction

Decisions related to patient care are carefully made through an essential process that integrates the best existing evidence, clinical experience and patient preference. Critical appraisal is the process of carefully and systematically examining research to assess its reliability, value and relevance, in order to guide professionals in their clinical decision making [1].

Critical appraisal is essential to:

  • Combat information overload;
  • Identify papers that are clinically relevant;
  • Support Continuing Professional Development (CPD).

Carrying out Critical Appraisal:

Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.

Standard Common Questions:

  • What is the research question?
  • What is the study type (design)?
  • Selection issues.
  • What are the outcome factors and how are they measured?
  • What are the study factors and how are they measured?
  • What important potential confounders are considered?
  • What is the statistical method used in the study?
  • Statistical results.
  • What conclusions did the authors reach about the research question?
  • Are ethical issues considered?

The Critical Appraisal starts by double checking the following main sections:

I. Overview of the paper:

  • The publishing journal and the year
  • The article title: Does it state key trial objectives?
  • The author (s) and their institution (s)

The presence of a peer review process in journal acceptance protocols also adds robustness to the assessment criteria for research papers and hence would indicate a reduced likelihood of publication of poor quality research. Other areas to consider may include authors’ declarations of interest and potential market bias. Attention should be paid to any declared funding or the issue of a research grant, in order to check for a conflict of interest [ 2 ].

II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.

  • Aim of the study: It should be well and clearly written.
  • Materials and Methods: The study design and type of groups, type of randomization process, sample size, gender, age, and procedure rendered to each group and measuring tool(s) should be evidently mentioned.
  • Results: The measured variables with their statistical analysis and significance.
  • Conclusion: It must clearly answer the question of interest.

III. Introduction/Background section:

An excellent introduction will thoroughly reference earlier work related to the area under discussion and express the importance and limitations of what is already known [2].

-Why is this study considered necessary? What is its purpose? Was the purpose identified before the study, or was a chance result revealed as part of 'data searching'?

-What has already been achieved, and how does this study differ?

-Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section: Full details on how the study was actually carried out should be mentioned. Precise information is given on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [3].

V. Results section: This section should clearly reveal what actually happened to the subjects. The results may contain raw data and explain the statistical analysis. These can be shown in related tables, diagrams and graphs.

VI. Discussion section: This section should include a thorough comparison of what is already known on the topic of interest and the clinical relevance of what has been newly established. Possible limitations and the need for further studies should also be indicated.

Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?

  • Does it address any source of potential bias?
  • Are interpretations consistent with the results?
  • How are null findings interpreted?
  • Does it mention how do the findings of this study relate to previous work in the area?
  • Can they be generalized (external validity)?
  • Does it mention their clinical implications/applicability?
  • What are the results/outcomes/findings applicable to and will they affect a clinical practice?
  • Does the conclusion answer the study question?
  • Is the conclusion convincing?
  • Does the paper indicate ethics approval?
  • Can you identify potential ethical issues?
  • Do the results apply to the population in which you are interested?
  • Will you use the results of the study?

Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1-What is the research question?

For a study to be valuable, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in the article is the Patient/Problem, Intervention, Comparison, Outcome (PICO) method [3].

P = Patient/Problem/Population:

It involves identifying whether the research has a focused question. What is the chief complaint? E.g., disease status, previous ailments, current medications.

I = Intervention: An appropriately and clearly stated management strategy, e.g., a new diagnostic test, treatment or adjunctive therapy.

C = Comparison: A suitable control or alternative, e.g., specific and limited to one alternative choice.

O = Outcomes: The desired results or patient-related consequences have to be identified, e.g., eliminating symptoms, improving function or esthetics.
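As a compact illustration, the four PICO elements can be captured in a small data structure and assembled into an answerable question. The class name and the clinical example below are hypothetical, chosen only for demonstration:

```python
# Illustrative sketch: a PICO question as a simple data structure.
from dataclasses import dataclass

@dataclass
class PICO:
    patient: str       # P: patient/problem/population
    intervention: str  # I: management strategy under study
    comparison: str    # C: control or alternative
    outcome: str       # O: desired result

# Hypothetical example question:
question = PICO(
    patient="adults with chronic periodontitis",
    intervention="adjunctive systemic antibiotics",
    comparison="scaling and root planing alone",
    outcome="reduction in probing pocket depth",
)

summary = (f"In {question.patient}, does {question.intervention} "
           f"compared with {question.comparison} improve {question.outcome}?")
print(summary)
```

Writing the question out in this template form makes it easy to check that each element is focused and specific before searching for evidence.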

The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [ Table/Fig-1 ].

[Table/Fig-1]:

Categories of clinical questions and the related study designs.

2- What is the study type (design)?

The study design of the research is fundamental to the usefulness of the study.

In a clinical paper the methodology employed to generate the results is fully explained. In general, all questions about the related clinical query, the study design, the subjects and the correlated measures to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample Population:

Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.

The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [ 4 ].

Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [ 5 ].
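As a rough illustration of the calculation statisticians perform before a trial begins, the standard two-sample formula for comparing means can be sketched as follows. The function name and example numbers are assumptions for demonstration; real trials should use dedicated statistical software:

```python
# Illustrative sketch: approximate sample size PER GROUP for detecting a
# difference `delta` between two means with common SD `sigma`, using the
# standard formula  n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2.
from math import ceil
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return ceil(n)  # round up to the next whole participant

# E.g., detecting a 5-point difference when the SD is 10:
print(sample_size_two_means(sigma=10, delta=5))  # 63 per group
```

Note how the required sample size grows rapidly as the expected difference `delta` shrinks, which is why trials chasing small effects must be large.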

  • Is the sample defined? Human, Animals (type); what population does it represent?
  • Does it mention eligibility criteria with reasons?
  • Does it mention where and how the sample were recruited, selected and assessed?
  • Does it mention where was the study carried out?
  • Is the sample size justified? Correctly calculated? Is it adequate to detect statistically and clinically significant results?
  • Does it mention a suitable study design/type?
  • Is the study type appropriate to the research question?
  • Is the study adequately controlled? Does it mention type of randomization process? Does it mention the presence of control group or explain lack of it?
  • Are the samples similar at baseline? Is sample attrition mentioned?
  • All studies report the number of participants/specimens at the start of a study, together with details of how many of them completed the study and reasons for incomplete follow up if there is any.
  • Does it mention who was blinded? Are the assessors and participants blind to the interventions received?
  • Is it mentioned how the data were analysed?
  • Are any measurements taken likely to be valid?

Researchers use measuring techniques and instruments that have been shown to be valid and reliable.

Validity refers to the extent to which a test measures what it is supposed to measure.

(i.e., the extent to which the value obtained represents the object of interest)

  • Soundness and effectiveness of the measuring instrument;
  • What does the test measure?
  • Does it measure what it is supposed to measure?
  • How well and how accurately does it measure?

Reliability: In research, the term reliability means “repeatability” or “consistency”

Reliability refers to how consistent a test is on repeated measurements. It is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method for assessing the reliability of any measurements taken and what the intra-examiner reliability was [6].

3-Selection issues:

The following questions should be raised:

  • - How were subjects chosen or recruited? If not random, are they representative of the population?
  • - Type of blinding (masking): single, double, or triple?
  • - Is there a control group? How was it chosen?
  • - How are patients followed up? Who are the dropouts? Why and how many are there?
  • - Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?
  • - Is there a statement about sample size issues or statistical power (especially important in negative studies)?
  • - If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
  • - Are there selection biases?
  • • In a case-control study, if exercise habits are to be compared:
  • - Are the controls appropriate?
  • - Were records of cases and controls reviewed blindly?
  • - How were possible selection biases controlled (Prevalence bias, Admission Rate bias, Volunteer bias, Recall bias, Lead Time bias, Detection bias, etc.,)?
  • • Cross Sectional Studies:
  • - Was the sample selected in an appropriate manner (random, convenience, etc.,)?
  • - Were efforts made to ensure a good response rate or to minimize the occurrence of missing data?
  • - Were reliability (reproducibility) and validity reported?
  • • In an intervention study, how were subjects recruited and assigned to groups?
  • • In a cohort study, how many reached final follow-up?
  • - Are the subjects representative of the population to which the findings will be applied?
  • - Is there evidence of volunteer bias? Was there adequate follow-up time?
  • - What was the drop-out rate?
  • - Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of these results, patients could be harmed.

Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [ 7 ].

Bias is the term used to describe an error, at any stage of the study, that is not due to chance. Bias leads to results that deviate systematically from the truth. As bias cannot be measured, researchers need to rely on good research design to minimize it [ 8 ]. To minimize bias within a study, the sample should be representative of the population. It is also imperative to consider the sample size and identify whether the study is adequately powered to produce statistically significant results, i.e., p-values quoted are <0.05 [ 9 ].
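A power/sample-size check of the kind referred to above can be sketched as follows. This is a standard normal-approximation formula for comparing two proportions at a two-sided alpha of 0.05 and 80% power (z values 1.96 and 0.8416); the event rates are hypothetical:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Participants needed per arm to detect a difference between rates p1 and p2
    (normal approximation, two-sided alpha = 0.05, power = 80%)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting an improvement in success rate from 60% to 75%:
print(n_per_group(0.60, 0.75))
```

Note how quickly the required sample grows as the difference to be detected shrinks; a reported sample size can be checked against a calculation like this one.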

4-What are the outcome factors and how are they measured?

  • -Are all relevant outcomes assessed?
  • -Is measurement error an important source of bias?

5-What are the study factors and how are they measured?

  • -Are all the relevant study factors included in the study?
  • -Have the factors been measured using appropriate tools?

Data Analysis and Results:

- Were the tests appropriate for the data?

- Are confidence intervals or p-values given?

  • How strong is the association between intervention and outcome?
  • How precise is the estimate of the risk?
  • Does it clearly mention the main finding(s), and do the data support them?
  • Does it mention the clinical significance of the result?
  • Are adverse events, or the lack of them, mentioned?
  • Are all relevant outcomes assessed?
  • Was the sample size adequate to detect a clinically/socially significant result?
  • Are the results presented in a way to help in health policy decisions?
  • Is there measurement error?
  • Is measurement error an important source of bias?

Confounding Factors:

A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [ 9 ].

6- What important potential confounders are considered?

  • -Are potential confounders examined and controlled for?
  • -Is confounding an important source of bias?

7- What is the statistical method in the study?

  • -Are the statistical methods described appropriate to compare participants for primary and secondary outcomes?
  • -Are the statistical methods specified in sufficient detail (if I had access to the raw data, could I reproduce the analysis)?
  • -Were the tests appropriate for the data?
  • -Are confidence intervals or p-values given?
  • -Are results presented as absolute risk reduction as well as relative risk reduction?
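The distinction between absolute and relative risk reduction can be illustrated with hypothetical 2×2 trial data (a sketch, not taken from any study cited here):

```python
def risk_measures(events_treat, n_treat, events_ctrl, n_ctrl):
    """Absolute risk reduction, relative risk reduction and number needed to treat."""
    cer = events_ctrl / n_ctrl    # control event rate
    eer = events_treat / n_treat  # experimental event rate
    arr = cer - eer               # absolute risk reduction
    rrr = arr / cer               # relative risk reduction
    nnt = 1 / arr                 # number needed to treat
    return arr, rrr, nnt

# 10/100 events on treatment vs 20/100 on control:
arr, rrr, nnt = risk_measures(10, 100, 20, 100)
print(f"ARR={arr:.2f}  RRR={rrr:.0%}  NNT={nnt:.0f}")
```

A "50% relative risk reduction" sounds impressive, but the absolute reduction here is only 10 percentage points (10 patients treated per event prevented), which is why both measures should be reported.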

Interpretation of p-value:

The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. A p-value of less than 1 in 20 (p<0.05) is conventionally taken as statistically significant.

  • When the p-value is less than the significance level, usually 0.05, we reject the null hypothesis and the result is considered statistically significant. Conversely, when the p-value is greater than 0.05, we conclude that the result is not statistically significant and we fail to reject the null hypothesis.
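As an illustration of how a p-value arises from a test statistic, the sketch below computes a two-sided p-value from a z statistic using only the standard normal tail (the z values shown are hypothetical):

```python
import math

def two_sided_p(z):
    """P(|Z| >= |z|) under the null hypothesis, for a standard normal Z."""
    return math.erfc(abs(z) / math.sqrt(2))

print(round(two_sided_p(1.96), 4))  # ~0.05, the conventional threshold
print(round(two_sided_p(2.80), 4))  # well below 0.05: reject the null
```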

Confidence interval:

Repeating the same trial many times would not yield exactly the same result every time; however, on average the results would fall within a certain range. A 95% confidence interval means that, if the study were repeated many times, about 95% of the intervals so constructed would contain the true size of the effect.
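A 95% confidence interval for a mean can be computed as in the following sketch (normal approximation with z = 1.96; the blood-pressure readings are hypothetical):

```python
import math

def ci95(values):
    """95% confidence interval for the mean (normal approximation)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    se = sd / math.sqrt(n)  # standard error of the mean
    return mean - 1.96 * se, mean + 1.96 * se

readings = [118, 122, 125, 130, 121, 119, 127, 124]
low, high = ci95(readings)
print(f"95% CI: {low:.1f} to {high:.1f}")
```

A narrow interval indicates a precise estimate; an interval for a difference that crosses zero (or a risk ratio that crosses 1) indicates a result that is not statistically significant.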

8- Statistical results:

  • -Do statistical tests answer the research question?

Are statistical tests performed and comparisons made (data searching)?

Correct statistical analysis of results is crucial to the reliability of the conclusions drawn from the research paper. Depending on the study design and sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.

It is important to identify if this is appropriate for the study [ 9 ].

  • -Was the sample size adequate to detect a clinically/socially significant result?
  • -Are the results presented in a way to help in health policy decisions?

Clinical significance:

Statistical significance as shown by p-value is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful improvement clinically. The following questions should always be on mind:

  • -If the results are statistically significant, do they also have clinical significance?
  • -If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
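One common way to gauge clinical (as opposed to statistical) significance is a standardised effect size such as Cohen's d; the sketch below uses hypothetical outcome scores for a treated and a control group:

```python
import math

def cohens_d(group1, group2):
    """Standardised mean difference (Cohen's d) using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

treated = [7.1, 6.8, 7.4, 7.0, 6.9, 7.2]
control = [6.9, 6.7, 7.0, 6.8, 6.6, 7.1]
print(round(cohens_d(treated, control), 2))
```

By the usual rules of thumb, d around 0.2 is a small effect, 0.5 medium and 0.8 large; a tiny d can still reach p<0.05 in a large enough sample, which is precisely the gap between statistical and clinical significance.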

9- What conclusions did the authors reach about the study question?

Conclusions should ensure that recommendations stated are suitable for the results attained within the capacity of the study. The authors should also concentrate on the limitations in the study and their effects on the outcomes and the proposed suggestions for future studies [ 10 ].

  • -Are the questions posed in the study adequately addressed?
  • -Are the conclusions justified by the data?
  • -Do the authors extrapolate beyond the data?
  • -Are shortcomings of the study addressed and constructive suggestions given for future research?
Bibliography/References:

Do the citations follow one of the Council of Biology Editors’ (CBE) standard formats?

10- Are ethical issues considered?

If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [ 10 , 11 ].

Critical appraisal of RCTs: Factors to look for:

  • Allocation (randomization, stratification, confounders).
  • Follow up of participants (intention to treat).
  • Data collection (bias).
  • Sample size (power calculation).
  • Presentation of results (clear, precise).
  • Applicability to local population.

[ Table/Fig-2 ] summarizes the guidelines for Consolidated Standards of Reporting Trials CONSORT [ 12 ].

[Table/Fig-2]:

Summary of the CONSORT guidelines.

Critical appraisal of systematic reviews: systematic reviews provide an overview of all the primary studies on a topic and aim to obtain an overall picture of the results.

In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from selected studies may be included. Factors to look for:

  • Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).
  • Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).
  • Homogeneity of studies.

[ Table/Fig-3 ] summarizes the guidelines for Preferred Reporting Items for Systematic reviews and Meta-Analyses PRISMA [ 13 ].

[Table/Fig-3]:

Summary of PRISMA guidelines.

Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and indicating its relevance to the profession. It is a skill set developed throughout a professional career that, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.


