Critical Appraisal of Quantitative Research

  • Reference work entry
  • First Online: 13 January 2019
  • pp 1027–1049


  • Rocco Cavaleri,
  • Sameer Bhole &
  • Amit Arora


Critical appraisal skills are important for anyone wishing to make informed decisions or improve the quality of healthcare delivery. A good critical appraisal provides information regarding the believability and usefulness of a particular study. However, the appraisal process is often overlooked, and critically appraising quantitative research can be daunting for both researchers and clinicians. This chapter introduces the concept of critical appraisal and highlights its importance in evidence-based practice. Readers are then introduced to the most common quantitative study designs and key questions to ask when appraising each type of study. These studies include systematic reviews, experimental studies (randomized controlled trials and non-randomized controlled trials), and observational studies (cohort, case-control, and cross-sectional studies). This chapter also provides the tools most commonly used to appraise the methodological and reporting quality of quantitative studies. Overall, this chapter serves as a step-by-step guide to appraising quantitative research in healthcare settings.



Author information

Authors and Affiliations

School of Science and Health, Western Sydney University, Campbelltown, NSW, Australia

Rocco Cavaleri

Sydney Dental School, Faculty of Medicine and Health, The University of Sydney, Surry Hills, NSW, Australia

Sameer Bhole

School of Science and Health, Western Sydney University, Sydney, NSW, Australia

Discipline of Paediatrics and Child Health, Sydney Medical School, Sydney, NSW, Australia

Oral Health Services, Sydney Local Health District and Sydney Dental Hospital, NSW Health, Sydney, NSW, Australia

COHORTE Research Group, Ingham Institute of Applied Medical Research, Liverpool, NSW, Australia

Oral Health Services, Sydney Local Health District and Sydney Dental Hospital, NSW Health, Surry Hills, NSW, Australia


Corresponding author

Correspondence to Rocco Cavaleri.

Editor information

Editors and Affiliations

School of Science and Health, Western Sydney University, Penrith, NSW, Australia

Pranee Liamputtong


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry

Cavaleri, R., Bhole, S., Arora, A. (2019). Critical Appraisal of Quantitative Research. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_120


DOI: https://doi.org/10.1007/978-981-10-5251-4_120

Published: 13 January 2019

Publisher Name: Springer, Singapore

Print ISBN: 978-981-10-5250-7

Online ISBN: 978-981-10-5251-4


Appraise the Quality of the Evidence: CASP & PRISMA

Appraise the Information

The Critical Appraisal Skills Programme (CASP) checklists are tools designed for critically appraising different types of evidence in the health sciences. Each CASP tool helps you judge the trustworthiness, results, and relevance of the information in an article.
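If you keep your appraisals in electronic form, the Python sketch below shows one way such CASP-style judgements might be recorded. It is a minimal, hypothetical structure: the question wording is paraphrased rather than the official CASP text, and it deliberately reports item-by-item judgements instead of a numeric total, since CASP does not recommend scoring.

```python
from dataclasses import dataclass, field
from typing import List

# Allowed responses on CASP-style checklists; items are judged individually
# rather than being summed into a numeric score.
RESPONSES = {"Yes", "No", "Can't tell"}

@dataclass
class AppraisalItem:
    question: str        # paraphrased appraisal question (illustrative wording only)
    response: str        # "Yes", "No", or "Can't tell"
    notes: str = ""      # free-text justification for the judgement

    def __post_init__(self):
        if self.response not in RESPONSES:
            raise ValueError(f"response must be one of {RESPONSES}")

@dataclass
class CaspStyleAppraisal:
    study: str
    checklist: str                                     # e.g. "RCT", "Cohort Study"
    items: List[AppraisalItem] = field(default_factory=list)

    def summary(self) -> str:
        """Summarise judgements item by item, without reducing them to a score."""
        lines = [f"{self.study} ({self.checklist} checklist)"]
        for item in self.items:
            note = f" ({item.notes})" if item.notes else ""
            lines.append(f"- {item.question}: {item.response}{note}")
        return "\n".join(lines)

# Hypothetical appraisal of an invented trial, for illustration only.
appraisal = CaspStyleAppraisal(
    study="Hypothetical cranberry-juice trial",
    checklist="RCT",
    items=[
        AppraisalItem("Did the trial address a clearly focused question?", "Yes"),
        AppraisalItem("Was allocation to intervention and control randomised?", "Yes"),
        AppraisalItem("Were participants and assessors blinded?", "Can't tell",
                      "blinding procedure not described"),
    ],
)
print(appraisal.summary())
```

Keeping the free-text notes alongside each judgement makes it easier to defend or revisit an appraisal later, which matters more than any single headline number.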

Watch the two videos below to understand how to appraise information, and use the tools listed below.

  • CASP – Randomized Clinical Study (13:24 min)
  • CASP – Qualitative Study (12:20 min)
  • Clinical Prediction
  • Cohort Study
  • Case-Control Study
  • Randomised Controlled Trial
  • Systematic Review
  • Synthesis of Evidence – EBP Template

Record What You Do

You should always document how you found information. Why? Because you may be asked how you found it, other researchers may contact you with questions, and you may want to publish your results.

It is good practice to record the following (a minimal example of such a search log follows this list):

  • What sources you searched
  • The dates you searched and the results (some databases are updated weekly)
  • The names of those who developed and conducted the searches
  • The keywords used and the limiters
  • PRISMA FLOW DIAGRAM
  • PRISMA CHECKLIST – a 27-item checklist to use with meta-analyses and systematic reviews
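The Python snippet below is a minimal sketch of such a search log. All sources, counts, and field names are invented for illustration; it simply captures the details listed above in one place, together with the running totals you would later transfer to a PRISMA flow diagram.

```python
from datetime import date

# Hypothetical record of a literature search: sources, dates, searchers,
# keywords, and limiters, as recommended above. Every value is invented.
search_log = {
    "sources": ["MEDLINE (Ovid)", "CINAHL", "Cochrane CENTRAL"],
    "search_date": date(2024, 4, 26).isoformat(),
    "searchers": ["A. Researcher", "B. Librarian"],
    "keywords": '("urinary tract infection*" OR UTI) AND cranberr*',
    "limiters": {"language": "English", "years": "2010-2024", "humans_only": True},
    "records_retrieved": {"MEDLINE (Ovid)": 412, "CINAHL": 168, "Cochrane CENTRAL": 57},
}

# PRISMA-style running totals: identified, deduplicated, screened, included.
identified = sum(search_log["records_retrieved"].values())   # 637
duplicates_removed = 94
screened = identified - duplicates_removed                    # 543
excluded_on_title_abstract = 501
full_text_assessed = screened - excluded_on_title_abstract    # 42
full_text_excluded = 35
included = full_text_assessed - full_text_excluded            # 7

print(f"Identified: {identified}, screened: {screened}, "
      f"assessed full text: {full_text_assessed}, included: {included}")
```

Recording the counts as you go means the PRISMA flow diagram can be filled in directly rather than reconstructed from memory.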

CASP Tools for Randomised Controlled Trials (RCT)

Posted on 19th June 2013 by Abu Abioye


The CASP (Critical Appraisal Skills Programme) tool for randomised controlled trials is designed to help you and me make sense of evidence from clinical trials. It does this through a series of 11 questions that assess the validity, results, and applicability of an RCT (randomised controlled trial).

Before delving into the questions, it is worth clarifying what an RCT actually is. When someone tells you that drinking cranberry juice prevents urinary tract infections (UTIs), you may wonder how they know this. You might be satisfied with anecdotal evidence, e.g. they say that their grandmother drinks a glass of cranberry juice every morning and has never had a UTI. Scrupulous scientists amongst us will not accept anecdotal evidence and will look to an RCT for evidence. An RCT is the gold-standard experimental paradigm in which participants are randomly allocated to drinking cranberry juice or a placebo (of course, not every RCT compares cranberry juice with a placebo; the interventions compared depend on the question being asked). The participants and those conducting the trial should be blinded (not literally, but rather they should not know what is being drunk by whom). The outcomes of the two groups (cranberry juice vs placebo) will be analysed without the analyst knowing which results relate to which group. The aim of an RCT is to test a hypothesis in a way that does not bias the findings.
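As a toy illustration of those ideas (random allocation, blinding, and comparison of group outcomes), here is a short Python sketch of a hypothetical cranberry-juice trial. The sample size and event rates are invented and carry no clinical meaning; the point is only to show how allocation and coded group labels work.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Randomly allocate 200 hypothetical participants to cranberry juice or placebo.
participants = [f"P{i:03d}" for i in range(1, 201)]
random.shuffle(participants)
groups = {"cranberry": participants[:100], "placebo": participants[100:]}

# Blinding: the analyst only ever sees coded labels, not what each group drank.
blind_codes = {"cranberry": "Group A", "placebo": "Group B"}

def simulate_utis(group):
    """Invented outcome: UTI count per participant over four follow-up periods."""
    rate = 0.15 if group == "cranberry" else 0.25   # made-up risks, illustration only
    return [sum(random.random() < rate for _ in range(4)) for _ in groups[group]]

# Outcomes are stored and analysed under the blinded codes.
outcomes = {blind_codes[g]: simulate_utis(g) for g in groups}
for code, counts in sorted(outcomes.items()):
    print(code, "mean UTIs per participant:", round(statistics.mean(counts), 2))
```

Only after the analysis is complete would the codes be broken to reveal which group drank the cranberry juice.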

So you’ve got a question in mind and have found an RCT that looks like it answers your question, but how do you work out whether it is any good? This is where this nifty tool comes in.

This resource appears to be aimed at anyone who wants a systematic way to review an RCT. It is easy enough to use if you are new to the world of evidence-based medicine, however even those with lots of experience can find it to be a useful reminder of what to be looking out for.

The list of 11 questions is short and simple, so the time it takes to use the resource will be determined mostly by the length and complexity of the RCT you apply it to.

I used this tool to assess the PACE trial and found it surprisingly effective. I am familiar with PICO, which is the method of evaluating papers that most people seem to use. However, using these 11 questions I was able to evaluate the paper systematically. I could consider its strengths and weaknesses without having to take the authors’ word that the evidence was valid. It also forces you to note explicitly the patient group to which the results are applicable, which is something that can easily be glossed over. I really like the tool and from now on I will be using it to evaluate trials.

https://casp-uk.net/images/checklist/documents/CASP-Randomised-Controlled-Trial-Checklist/CASP-RCT-Checklist-PDF-Fillable-Form.pdf


Hello, I had a question about using CASP to evaluate a study. How do I get a score? I have an example of a matrix of an article by a student. That student used CASP and gave a score of 8/11 for the article they critiqued. How did they get that score?


Dear Puja, thank you for your question. The CASP team specifically mention that the checklists were designed to be used as educational/teaching tools in a workshop setting, so they do not recommend using a scoring system. It may be that the student was not aware of that particular aspect of the checklist, so gave a score based on the critique they carried out. I hope this helps you. I notice that there may be a newer checklist on the CASP website, which can be found here: https://casp-uk.net/casp-tools-checklists/ so please do go and take a look. Many thanks and best wishes. Emma.


Health Research Reporting Guidelines, Study Execution Manuals, Critical Appraisal, Risk of Bias, & Non-reporting Biases


For additional information, contact:

Helena VonVille, MLS, MPH

  • HSLS Pitt Public Health Liaison 
  • HSLS Research & Instruction Librarian

What is critical appraisal?

Definitions of critical appraisal revolve around a similar set of criteria:

What is Critical Appraisal (Bandolier)

  • "Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context."

Glossary definition (Critical Appraisal Skills Programme (CASP))

  • "Critical Appraisal is the process of assessing and interpreting evidence, by systematically considering its  validity , results and relevance to your own context."

Critical Appraisal Tools (Centre for Evidence-Based Medicine (CEBM))

  • Does this study address a clearly focused question?
  • Did the study use valid methods to address this question?
  • Are the valid results of this study important?
  • Are these valid, important results applicable to my patient or population?

While critical appraisal can highlight bias in a study, the current version of the Cochrane Handbook points out:

"Methodological quality refers to critical appraisal of a study or systematic review and the extent to which study authors conducted and reported their research to the highest possible standard. Bias refers to systematic deviation of results or inferences from the truth. These deviations can occur as a result of flaws in design, conduct, analysis, and/or reporting. It is not always possible to know whether an estimate is biased even if there is a flaw in the study; further, it is difficult to quantify and at times to predict the direction of bias. For these reasons, reviewers refer to ‘risk of bias’ (Chapter 8)." Chapter V: Overviews of Reviews (below the table).

A separate page has been created for Risk of Bias.

Why critical appraisal?

Critical appraisal and, more specifically, critical appraisal tools provide us with a mechanism to evaluate the research methodology of a study with a critical, objective, and systematic lens. This appraisal is essential when evaluating a study for a systematic review, for determining new guidelines for patient care, or for choosing appropriate interventions. 

Uses for Critical Appraisal Tools

CA tools can be used in multiple ways and in different settings.

  • Use the appropriate critical appraisal tool (and reporting guideline) to ensure you engage in good study conduct and are prepared to practice clear and transparent reporting.
  • Use the appropriate critical appraisal tool (and reporting guideline) to ensure the author(s) engaged in good study conduct as well as clear and transparent reporting.
  • Use the appropriate critical appraisal tool (and reporting guideline) as a framework for evaluating manuscripts and providing feedback in a clear and objective manner.
  • Critical appraisal of included studies is a necessity when conducting a systematic review, even when the studies are non-randomized or observational. 

Checklist for Systematic Reviews and Research Syntheses

Produced by: Joanna Briggs Institute
Part of: The JBI Critical Appraisal Tools collection

  • The checklist is a Word document.

AMSTAR 2: A MeaSurement Tool to Assess systematic Reviews  

There were 2 goals in the development of AMSTAR:

  • To create valid, reliable and useable instruments that would help users differentiate between systematic reviews, focusing on their methodological quality and expert consensus.
  • To facilitate the development of high-quality reviews.

Critical Appraisal Tools

Produced by: Centre for Evidence Based Medicine, University of Oxford, UK
Part of: The CEBM Critical Appraisal Tools collection

  • Systematic Reviews Critical Appraisal Sheet
  • Chapter 26 of the Cochrane Handbook describes an IPD review
  • The appraisal worksheets are in PDF format.
  • The appraisal worksheets are in English as well as Chinese, German, Lithuanian, Persian, and Spanish; languages other than English can be found on the home page.

CASP Checklists

Produced by:   Critical Appraisal Skills Programme, UK

CASP  Systematic Review  Checklist

  • Print & Fill

CAT HPPR

About: Heise, T.L., Seidler, A., Girbig, M., et al. CAT HPPR: a critical appraisal tool to assess the quality of systematic, rapid, and scoping reviews investigating interventions in health promotion and prevention. BMC Med Res Methodol 22, 334 (2022). https://doi.org/10.1186/s12874-022-01821-4

Supplementary information:

  • File 1 Includes the checklist and user manual
  • File 2 includes methodological supporting documentation

Randomized Controlled Trials

Study Quality Assessment Tools

Produced by:  US National Heart, Lung, and Blood Institute (NHLBI)

This site has six assessment tools, covering controlled intervention studies, systematic reviews and meta-analyses, observational cohort and cross-sectional studies (a single combined tool), case-control studies, pre-post studies with no control group, and case series.

CASP has 8 critical appraisal tools for SRs, RCTs, cohort studies, case control studies, economic evaluations, diagnostic studies, qualitative studies, and clinical prediction. Each item in the individual checklists provides a series of questions.

Randomised Controlled Trials (RCT) Critical Appraisal Worksheet

Checklist for Quasi-Experimental Studies (Non-Randomized Experimental Studies)

This site has an assessment tool for pre-post studies (no control group).

Methodological index for non-randomized studies (MINORS): development and validation of a new instrument (requires U Pitt authentication)  

  • "Background: Because of specific methodological difficulties in conducting randomized trials, surgical research remains dependent predominantly on observational or non‐randomized studies. Few validated instruments are available to determine the methodological quality of such studies either from the reader's perspective or for the purpose of meta‐analysis. The aim of the present study was to develop and validate such an instrument."

Observational Studies

  • Checklist for Analytical Cross-Sectional Studies
  • Checklist for Case Control Studies
  • Checklist for Cohort Studies
  • Checklist for Prevalence Studies

  • The checklists are all Word documents.
  • This site has assessment tools for: observational cohort studies, cross-sectional studies, and case control studies.

Produced by:  Critical Appraisal Skills Programme, UK

  • CASP has a critical appraisal tool for cohort studies and case control studies. Each item in the individual checklists provides a series of questions.

Newcastle-Ottawa Scale: Case Control Studies & Cohort Studies

Produced by:  University of Newcastle, Australia and University of Ottawa, Canada

  • Instructions for both case control and cohort studies
  • Instruments for both case control and cohort studies

Diagnostic Test Accuracy Studies

Produced by: Joanna Briggs Institute
Part of: The JBI Critical Appraisal Tools collection

CASP has a critical appraisal tool for diagnostic studies. Each item in the individual checklists provides a series of questions.

Diagnostics Critical Appraisal Sheet

CHARMS: Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies

Moons KG, de Groot JA, Bouwmeester W, Vergouwe Y, Mallett S, Altman DG, Reitsma JB, Collins GS. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med. 2014 Oct 14;11(10):e1001744. doi: 10.1371/journal.pmed.1001744 . PMID: 25314315 ; PMCID: PMC4196729 .

CHARMS-PF: Checklist for critical appraisal and data extraction for systematic reviews of prognostic factors studies

Found in:  Riley RD, Moons KGM, Snell KIE, Ensor J, Hooft L, Altman DG, Hayden J, Collins GS, Debray TPA. A guide to systematic review and meta-analysis of prognostic factor studies. BMJ. 2019 Jan 30;364:k4597. doi: 10.1136/bmj.k4597 . PMID: 30700442 .

CASP has 8 critical appraisal tools including one for clinical prediction. Each item in the individual checklists provides a series of questions.

Prognosis Critical Appraisal Sheet

Checklist for Economic Evaluations

CASP has a critical appraisal tool for economic evaluations. Each item in the individual checklists provides a series of questions.

Checklist for Qualitative Research

  • The checklist is in PDF format
  • CASP has a critical appraisal tool for qualitative studies. Each item in the individual checklists provides a series of questions.

Critical Appraisal of Qualitative Studies Sheet 

Evaluation Tool for Mixed Methods Studies

Produced by:  A. Long, U of Leeds

The ‘mixed method’ evaluation tool was developed from the evaluation tools for ‘quantitative’ and ‘qualitative’ studies, themselves created within the context of a project exploring the feasibility of undertaking systematic reviews of research literature on effectiveness and outcomes in social care.

Checklist for Case Reports  &  Checklist for Case Series

  • The checklists are both Word documents.

CHAMP (Checklist for statistical Assessment of Medical Papers)  (2021)

About:  "While CHAMP is primarily aimed at editors and peer reviewers during the statistical assessment of a medical paper, we believe it will serve as a useful reference to improve authors' and readers' practice in their use of statistics in medical research."

  • CHecklist for statistical Assessment of Medical Papers: the CHAMP statement
  • A CHecklist for statistical Assessment of Medical Papers (the CHAMP statement): explanation and elaboration

A tool to assess the quality of a meta-analysis (2013) (Not open access)

Higgins JP, Lane PW, Anagnostelis B, Anzures-Cabrera J, Baker NF, Cappelleri JC, Haughie S, Hollis S, Lewis SC, Moneuse P, Whitehead A. A tool to assess the quality of a meta-analysis. Res Synth Methods. 2013 Dec;4(4):351-66. doi: 10.1002/jrsm.1092. Epub 2013 Oct 18. PMID: 26053948.


Produced by:  Joanna Briggs Institute

The link above points to all of the Critical Appraisal Tools from JBI. All are in the format of a Word document.

See  Chapter 3.2.7 of the JBI Manual for Evidence Synthesis for an explanation on how to conduct and describe the critical appraisal of studies in a systematic review. 

Produced by:  Centre for Evidence Based Medicine, Oxford, UK

The appraisal worksheets are in English as well as Chinese, German, Lithuanian, Persian, and Spanish; languages can be found on the home page.

  • Systematic Reviews  Critical Appraisal Sheet
  • Diagnostics  Critical Appraisal Sheet
  • Prognosis  Critical Appraisal Sheet
  • Randomised Controlled Trials  (RCT) Critical Appraisal Sheet
  • Critical Appraisal of Qualitative Studies  Sheet
  • Chapter 26  of the Cochrane Handbook describes an IPD review.

Downs & Black: The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions

Items from the Downs & Black checklist can be found in this article. 

Newcastle-Ottawa Scale

Produced by:   University of Newcastle, Australia and University of Ottawa, Canada

NOS was developed to assess the quality of nonrandomised studies with its design, content and ease of use directed to the task of incorporating the quality assessments in the interpretation of meta-analytic results. A 'star system' has been developed in which a study is judged on three broad perspectives: the selection of the study groups; the comparability of the groups; and the ascertainment of either the exposure or outcome of interest for case-control or cohort studies respectively.
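As a rough illustration of that star system, the Python sketch below records a hypothetical cohort-study appraisal. The domain maxima used (4 for selection, 2 for comparability, 3 for outcome, 9 in total) follow the scale's usual description, but treat them as an assumption and defer to the official NOS manuals for actual appraisals.

```python
# Illustrative (not official) representation of the Newcastle-Ottawa 'star system'
# for a cohort study: stars are awarded across three domains, conventionally up to
# 4 (selection), 2 (comparability) and 3 (outcome), for a maximum of 9.
MAX_STARS = {"selection": 4, "comparability": 2, "outcome": 3}

def total_stars(awarded: dict) -> int:
    """Sum the awarded stars after checking none exceed the domain maximum."""
    for domain, stars in awarded.items():
        if stars > MAX_STARS[domain]:
            raise ValueError(f"{domain}: {stars} exceeds maximum {MAX_STARS[domain]}")
    return sum(awarded.values())

# Hypothetical appraisal of an invented cohort study.
awarded = {"selection": 3, "comparability": 1, "outcome": 3}
print(f"NOS: {total_stars(awarded)}/{sum(MAX_STARS.values())} stars")  # NOS: 7/9 stars
```

Keeping the domain breakdown, rather than only the total, shows at a glance whether weaknesses lie in selection, comparability, or outcome ascertainment.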

Additional Links

  • The link goes to a PDF of the article and the QI checklist which can be found in the supplement.
  • The aim of this WIKI is to enable collaborative work for developing a Mixed Methods Appraisal Tool (MMAT).
  • The MMAT is intended to be used as a checklist for concomitantly appraising and/or describing studies included in systematic mixed studies reviews (reviews including original qualitative, quantitative and mixed methods studies).


  • The MetaQAT is a meta-tool for appraising public health evidence that orients users to the appropriate application of several critical appraisal tools and places them within a larger framework to guide their use. It is one of few critical appraisal tools designed specifically for public health evidence.

The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review

  • This study, from 2015, evaluated 21 assessment (critical appraisal) tools and provides recommendations for which tool to use, depending on the study methodology.
  • Note that several of the instruments evaluated have undergone some modifications since 2015.

Critical appraisal of qualitative research: necessity, partialities and the issue of bias

Volume 25, Issue 1

  • Veronika Williams,
  • Anne-Marie Boylan,
  • David Nunan
  • Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Oxford, UK
  • Correspondence to Dr Veronika Williams, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; veronika.williams{at}phc.ox.ac.uk

https://doi.org/10.1136/bmjebm-2018-111132


Introduction

Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the ‘how’ and ‘why’. As we have argued previously,1 qualitative research has an important place within evidence-based healthcare, contributing to, among other things, policy on patient safety,2 prescribing,3 4 and understanding of chronic illness.5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention or why an intervention might have ‘failed’ or ‘succeeded’ where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions.6

Critical appraisal of qualitative research

Is it necessary?

Although the importance of qualitative research to improve health services and care is now increasingly widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated.8 10 Despite a large body of literature focusing on appraisal and rigour,9 11–15 often referred to as ‘trustworthiness’16 in qualitative research, there remains debate about how to, and even whether to, critically appraise qualitative research.8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That being said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm (positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study; research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour) would be misplaced given the different epistemological underpinnings of the two types of data.

Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.

Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’. 20 However, without a deep understanding of concepts of quality in qualitative research or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare where end users may not be well versed in this paradigm.

How is appraisal currently performed?

Appraising the quality of qualitative research is not a new concept—there are a number of published appraisal tools, frameworks and checklists in existence.21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ)24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool.10

Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with ‘technical fixes’,21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing ‘quality’ and the assumption is that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias in order to judge aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP)26 checklist, which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument),27 presents an example that tends to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.

Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1 ). 11–13 15 16 20 28 However, unless the reader is familiar with these concepts—their meaning and impact, and how to interpret them—they will have difficulty applying them when critically appraising a paper.

The main issue concerning currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, Grounded Theory, Interpretative Phenomenology, Discourse Analysis) or different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. In the case of qualitative research, checklists therefore offer only a blunt and arguably ineffective tool and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, they also do not differentiate between different qualitative methodologies.

On the need for specific appraisal tools

Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.

Box 1: Concepts of rigour or trustworthiness within qualitative research31

Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.

Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.

Reflexivity: refers to the researchers’ engagement of continuous examination and explanation of how they have influenced a research project from choosing a research question to sampling, data collection, analysis and interpretation of data.

Transparency: making explicit the whole research process from sampling strategies, data collection to analysis. The rationale for decisions made is as important as the decisions themselves.

However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect these, for example, partialities related to:

Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.

Data collection: a poor interview guide with closed questions that encourage yes/no answers and/or leading questions.

Reflexivity and transparency: where researchers may focus their analysis on preconceived ideas rather than ground their analysis in the data and do not reflect on the impact of this in a transparent way.

The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the need for more robust quality appraisal tools that explicitly encompass not only the core design aspects of all qualitative research (sampling/data collection/analysis) but also the specific partialities that can present with different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.

As qualitative research becomes ever more embedded in health science research, and in order for that research to have better impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluations of the myriad of qualitative methodological approaches rather than continuing to treat qualitative research as a single unified approach.


Contributors VW and DN: conceived the idea for this article. VW: wrote the first draft. AMB and DN: contributed to the final draft. All authors approve the submitted article.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Correction notice This article has been updated since its original publication to include a new reference (reference 1.)


Psychology: Research and Review

  • Open access
  • Published: 19 March 2021

Appraising psychotherapy case studies in practice-based evidence: introducing Case Study Evaluation-tool (CaSE)

  • Greta Kaluzeviciute (ORCID: orcid.org/0000-0003-1197-177X)

Psicologia: Reflexão e Crítica, volume 34, Article number: 9 (2021)


Systematic case studies are often placed at the low end of evidence-based practice (EBP) due to lack of critical appraisal. This paper seeks to attend to this research gap by introducing a novel Case Study Evaluation-tool (CaSE). First, issues around knowledge generation and validity are assessed in both EBP and practice-based evidence (PBE) paradigms. Although systematic case studies are more aligned with the PBE paradigm, the paper argues for a complementary, third-way approach between the two paradigms and their ‘exemplary’ methodologies: case studies and randomised controlled trials (RCTs). Second, the paper argues that all forms of research can produce ‘valid evidence’, but the validity itself needs to be assessed against each specific research method and purpose. Existing appraisal tools for qualitative research (JBI, CASP, ETQS) are shown to have limited relevance for the appraisal of systematic case studies through a comparative tool assessment. Third, the paper develops purpose-oriented evaluation criteria for systematic case studies through the CaSE Checklist for Essential Components in Systematic Case Studies and the CaSE Purpose-based Evaluative Framework for Systematic Case Studies. The checklist approach aids reviewers in assessing the presence or absence of essential case study components (internal validity). The framework approach aims to assess the effectiveness of each case against its set out research objectives and aims (external validity), based on different systematic case study purposes in psychotherapy. Finally, the paper demonstrates the application of the tool with a case example and notes further research trajectories for the development of the CaSE tool.

Introduction

Due to growing demands of evidence-based practice, standardised research assessment and appraisal tools have become common in healthcare and clinical treatment (Hannes, Lockwood, & Pearson, 2010 ; Hartling, Chisholm, Thomson, & Dryden, 2012 ; Katrak, Bialocerkowski, Massy-Westropp, Kumar, & Grimmer, 2004 ). This allows researchers to critically appraise research findings on the basis of their validity, results, and usefulness (Hill & Spittlehouse, 2003 ). Despite the upsurge of critical appraisal in qualitative research (Williams, Boylan, & Nunan, 2019 ), there are no assessment or appraisal tools designed for psychotherapy case studies.

Although not without controversies (Michels, 2000 ), case studies remain central to the investigation of psychotherapy processes (Midgley, 2006 ; Willemsen, Della Rosa, & Kegerreis, 2017 ). This is particularly true of systematic case studies, the most common form of case study in contemporary psychotherapy research (Davison & Lazarus, 2007 ; McLeod & Elliott, 2011 ).

Unlike the classic clinical case study, systematic cases usually involve a team of researchers, who gather data from multiple different sources (e.g., questionnaires, observations by the therapist, interviews, statistical findings, clinical assessment, etc.), and involve a rigorous data triangulation process to assess whether the data from different sources converge (McLeod, 2010 ). Since systematic case studies are methodologically pluralistic, they have a greater interest in situating patients within the study of a broader population than clinical case studies (Iwakabe & Gazzola, 2009 ). Systematic case studies are considered to be an accessible method for developing research evidence-base in psychotherapy (Widdowson, 2011 ), especially since they correct some of the methodological limitations (e.g. lack of ‘third party’ perspectives and bias in data analysis) inherent to classic clinical case studies (Iwakabe & Gazzola, 2009 ). They have been used for the purposes of clinical training (Tuckett, 2008 ), outcome assessment (Hilliard, 1993 ), development of clinical techniques (Almond, 2004 ) and meta-analysis of qualitative findings (Timulak, 2009 ). All these developments signal a revived interest in the case study method, but also point to the obvious lack of a research assessment tool suitable for case studies in psychotherapy (Table 1 ).

To attend to this research gap, this paper first reviews issues around the conceptualisation of validity within the paradigms of evidence-based practice (EBP) and practice-based evidence (PBE). Although case studies are often positioned at the low end of EBP (Aveline, 2005), the paper suggests that systematic cases are a valuable form of evidence, capable of complementing large-scale studies such as randomised controlled trials (RCTs). However, there remains a difficulty in assessing the quality and relevance of case study findings to broader psychotherapy research.

As a way forward, the paper introduces a novel Case Study Evaluation-tool (CaSE) in the form of the CaSE Purpose-based Evaluative Framework for Systematic Case Studies and the CaSE Checklist for Essential Components in Systematic Case Studies. The long-term development of CaSE would contribute to psychotherapy research and practice in three ways.

Given the significance of methodological pluralism and diverse research aims in systematic case studies, CaSE will not seek to prescribe explicit case study writing guidelines, which has already been done by numerous authors (McLeod, 2010; Meganck, Inslegers, Krivzov, & Notaerts, 2017; Willemsen et al., 2017). Instead, CaSE will enable the retrospective assessment of systematic case study findings and their relevance (or lack thereof) to broader psychotherapy research and practice. However, there is no reason to assume that CaSE cannot be used prospectively (i.e. producing systematic case studies in accordance with the CaSE evaluative framework, as per point 3 in Table 2).

The development of a research assessment or appraisal tool is a lengthy, ongoing process (Long & Godfrey, 2004). It is particularly challenging to develop a comprehensive purpose-oriented evaluative framework suitable for the assessment of diverse methodologies, aims and outcomes. As such, this paper should be treated as an introduction to the broader development of the CaSE tool. It will introduce the rationale behind CaSE and lay out its main approach to evidence and evaluation, with further development in mind. A case example from the Single Case Archive (SCA) (https://singlecasearchive.com) will be used to demonstrate the application of the tool ‘in action’. The paper notes further research trajectories and discusses some of the limitations around the use of the tool.

Separating the wheat from the chaff: what is and is not evidence in psychotherapy (and who gets to decide?)

The common approach: evidence-based practice

In the last two decades, psychotherapy has become increasingly centred around the idea of an evidence-based practice (EBP). Initially introduced in medicine, EBP has been defined as ‘conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’ (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996 ). EBP revolves around efficacy research: it seeks to examine whether a specific intervention has a causal (in this case, measurable) effect on clinical populations (Barkham & Mellor-Clark, 2003 ). From a conceptual standpoint, Sackett and colleagues defined EBP as a paradigm that is inclusive of many methodologies, so long as they contribute towards clinical decision-making process and accumulation of best currently available evidence in any given set of circumstances (Gabbay & le May, 2011 ). Similarly, the American Psychological Association (APA, 2010 ) has recently issued calls for evidence-based systematic case studies in order to produce standardised measures for evaluating process and outcome data across different therapeutic modalities.

However, given EBP’s focus on establishing cause-and-effect relationships (Rosqvist, Thomas, & Truax, 2011 ), it is unsurprising that qualitative research is generally not considered to be ‘gold standard’ or ‘efficacious’ within this paradigm (Aveline, 2005 ; Cartwright & Hardie, 2012 ; Edwards, 2013 ; Edwards, Dattilio, & Bromley, 2004 ; Longhofer, Floersch, & Hartmann, 2017 ). Qualitative methods like systematic case studies maintain an appreciation for context, complexity and meaning making. Therefore, instead of measuring regularly occurring causal relations (as in quantitative studies), the focus is on studying complex social phenomena (e.g. relationships, events, experiences, feelings, etc.) (Erickson, 2012 ; Maxwell, 2004 ). Edwards ( 2013 ) points out that, although context-based research in systematic case studies is the bedrock of psychotherapy theory and practice, it has also become shrouded by an unfortunate ideological description: ‘anecdotal’ case studies (i.e. unscientific narratives lacking evidence, as opposed to ‘gold standard’ evidence, a term often used to describe the RCT method and the therapeutic modalities supported by it), leading to a further need for advocacy in and defence of the unique epistemic process involved in case study research (Fishman, Messer, Edwards, & Dattilio, 2017 ).

The EBP paradigm prioritises the quantitative approach to causality, most notably through its focus on high generalisability and the ability to deal with bias through randomisation process. These conditions are associated with randomised controlled trials (RCTs) but are limited (or, as some argue, impossible) in qualitative research methods such as the case study (Margison et al., 2000 ) (Table 3 ).

‘Evidence’ from an EBP standpoint hovers over the epistemological assumption of procedural objectivity : knowledge can be generated in a standardised, non-erroneous way, thus producing objective (i.e. with minimised bias) data. This can be achieved by anyone, as long as they are able to perform the methodological procedure (e.g. RCT) appropriately, in a ‘clearly defined and accepted process that assists with knowledge production’ (Douglas, 2004 , p. 131). If there is a well-outlined quantitative form for knowledge production, the same outcome should be achieved regardless of who processes or interprets the information. For example, researchers using Cochrane Review assess the strength of evidence using meticulously controlled and scrupulous techniques; in turn, this minimises individual judgment and creates unanimity of outcomes across different groups of people (Gabbay & le May, 2011 ). The typical process of knowledge generation (through employing RCTs and procedural objectivity) in EBP is demonstrated in Fig. 1 .

Figure 1. Typical knowledge generation process in evidence-based practice (EBP)

In EBP, the concept of validity remains somewhat controversial, with many critics stating that it limits rather than strengthens knowledge generation (Berg, 2019; Berg & Slaattelid, 2017; Lilienfeld, Ritschel, Lynn, Cautin, & Latzman, 2013). This is because efficacy research relies on internal validity. At a general level, this concept refers to the congruence between the research study and the research findings (i.e. the research findings were not influenced by anything external to the study, such as confounding variables, methodological errors and bias); at a more specific level, internal validity determines the extent to which a study establishes a reliable causal relationship between an independent variable (e.g. treatment) and a dependent variable (outcome or effect) (Margison et al., 2000). This approach to validity is demonstrated in Fig. 2.

Figure 2. Internal validity
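To make this concrete, the toy Python simulation below (all numbers invented) shows how a confounder can bias a naive treated-versus-untreated comparison in observational data, while random allocation recovers an estimate close to the true effect, which is what strong internal validity is meant to protect.

```python
import random
import statistics

random.seed(7)

# Toy data-generating process: a confounder (baseline severity) raises the chance
# of receiving treatment AND worsens the outcome. All numbers are invented.
def make_patient(randomised):
    severity = random.random()                      # 0 (mild) .. 1 (severe)
    if randomised:
        treated = random.random() < 0.5             # allocation ignores severity
    else:
        treated = random.random() < severity        # sicker patients seek treatment
    # True treatment effect: +2 points; severity costs up to 10 points.
    outcome = 50 + (2 if treated else 0) - 10 * severity + random.gauss(0, 1)
    return treated, outcome

def estimated_effect(randomised, n=20000):
    """Difference in mean outcome, treated minus untreated."""
    treated, untreated = [], []
    for _ in range(n):
        t, y = make_patient(randomised)
        (treated if t else untreated).append(y)
    return statistics.mean(treated) - statistics.mean(untreated)

print("Observational (confounded) estimate:", round(estimated_effect(False), 2))  # biased well away from 2
print("Randomised estimate:", round(estimated_effect(True), 2))                   # close to the true effect of 2
```

In the observational arm the sicker patients cluster in the treated group, so the naive comparison understates (and here even reverses) the benefit; randomisation breaks that link between severity and allocation.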

Social scientists have argued that there is a trade-off between research rigour and generalisability: the more specific the sample and the more rigorously defined the intervention, the less applicable the outcome is likely to be to everyday, routine practice. As such, there remains a tension between employing procedural objectivity, which increases the rigour of research outcomes, and applying such outcomes to routine psychotherapy practice, where scientific standards of evidence are not uniform.

According to McLeod ( 2002 ), inability to address questions that are most relevant for practitioners contributed to a deepening research–practice divide in psychotherapy. Studies investigating how practitioners make clinical decisions and the kinds of evidence they refer to show that there is a strong preference for knowledge that is not generated procedurally, i.e. knowledge that encompasses concrete clinical situations, experiences and techniques. A study by Stewart and Chambless ( 2007 ) sought to assess how a larger population of clinicians (under APA, from varying clinical schools of thought and independent practices, sample size 591) make treatment decisions in private practice. The study found that large-scale statistical data was not the primary source of information sought by clinicians. The most important influences were identified as past clinical experiences and clinical expertise ( M = 5.62). Treatment materials based on clinical case observations and theory ( M = 4.72) were used almost as frequently as psychotherapy outcome research findings ( M = 4.80) (i.e. evidence-based research). These numbers are likely to fluctuate across different forms of psychotherapy; however, they are indicative of the need for research about routine clinical settings that does not isolate or generalise the effect of an intervention but examines the variations in psychotherapy processes.

The alternative approach: practice-based evidence

In an attempt to dissolve or lessen the research–practice divide, an alternative paradigm of practice-based evidence (PBE) has been suggested (Barkham & Mellor-Clark, 2003 ; Fox, 2003 ; Green & Latchford, 2012 ; Iwakabe & Gazzola, 2009 ; Laska, Gurman, & Wampold, 2014 ; Margison et al., 2000 ). PBE represents a shift in how we think about evidence and knowledge generation in psychotherapy. PBE treats research as a local and contingent process (at least initially), which means it focuses on variations (e.g. in patient symptoms) and complexities (e.g. of the clinical setting) in the studied phenomena (Fox, 2003 ). Moreover, research and theory-building are seen as complementary to, rather than detached from, clinical practice. That is to say, PBE seeks to examine how and which treatments can be improved in everyday clinical practice by flagging up clinically salient issues and developing clinical techniques (Barkham & Mellor-Clark, 2003 ). For this reason, PBE is concerned with the effectiveness of research findings: it evaluates how well interventions work in real-world settings (Rosqvist et al., 2011 ). Therefore, although RCTs can also be used to generate practice-informed evidence (Horn & Gassaway, 2007 ), qualitative methods like the systematic case study are seen as ideal for demonstrating the effectiveness of therapeutic interventions with individual patients (van Hennik, 2020 ) (Table 4 ).

PBE’s epistemological approach to ‘evidence’ may be understood through the process of concordant objectivity (Douglas, 2004 ): ‘Instead of seeking to eliminate individual judgment, … [concordant objectivity] checks to see whether the individual judgments of people in fact do agree’ (p. 462). This does not mean that anyone can contribute to the evaluation process, as in procedural objectivity, where the main criterion is following a set quantitative protocol or knowing how to operate a specific research design. Concordant objectivity requires a set of competent observers who are closely familiar with the studied phenomenon (e.g. researchers and practitioners who are familiar with depression from a variety of therapeutic approaches).

Systematic case studies are a good example of PBE ‘in action’: they allow for the examination of the detailed unfolding of events in psychotherapy practice, making them arguably the most pragmatic and practice-oriented form of psychotherapy research (Fishman, 1999 , 2005 ). Furthermore, systematic case studies approach evidence and results through concordant objectivity (Douglas, 2004 ) by involving a team of researchers and rigorous data triangulation processes (McLeod, 2010 ). This means that, although systematic case studies remain focused on particular clinical situations and detailed subjective experiences (similar to classic clinical case studies; see Iwakabe & Gazzola, 2009 ), they still involve a series of validity checks and considerations of how findings from a single systematic case pertain to broader psychotherapy research (Fishman, 2005 ). The typical process of knowledge generation (through employing systematic case studies and concordant objectivity) in PBE is demonstrated in Fig. 3 . The figure exemplifies a bidirectional approach to research and practice, which includes the development of research-supported psychological treatments (through systematic reviews of existing evidence) as well as the perspectives of clinical practitioners in the research process (through the study of local and contingent patient and/or treatment processes) (Teachman et al., 2012 ; Westen, Novotny, & Thompson-Brenner, 2004 ).

Fig. 3 Typical knowledge generation process in practice-based evidence (PBE)

From a PBE standpoint, external validity is a desirable research condition: it measures the extent to which the impact of interventions applies to real patients and therapists in everyday clinical settings. As such, external validity is not based on the strength of causal relationships between treatment interventions and outcomes (as in internal validity); instead, the use of specific therapeutic techniques and problem-solving decisions is considered important for generalising findings onto routine clinical practice (even if the findings are explicated from a single case study; see Aveline, 2005 ). This approach to validity is demonstrated in Fig. 4 .

Fig. 4 External validity

Since effectiveness research is less focused on limiting the context of the studied phenomenon (indeed, explicating the context is often one of the research aims), there is more potential for confounding factors (e.g. bias and uncontrolled variables), which in turn can reduce the study’s internal validity (Barkham & Mellor-Clark, 2003 ). This is also an important challenge for research appraisal. Douglas ( 2004 ) argues that appraising research in terms of its effectiveness may produce significant disagreements or group illusions, since what might work for some practitioners may not work for others: ‘It cannot guarantee that values are not influencing or supplanting reasoning; the observers may have shared values that cause them to all disregard important aspects of an event’ (Douglas, 2004 , p. 462). Douglas further proposes that an interactive approach to objectivity may be employed as a more complex process for debating the evidential quality of a research study: it requires discussion among observers and evaluators in the form of peer review and scientific discourse, as well as research appraisal tools and instruments. While these processes of rigour are also applied in EBP, there appears to be much more space for debate, disagreement and interpretation in PBE’s approach to research evaluation, partly because the evaluation criteria themselves are a subject of methodological debate and are often employed in different ways by researchers (Williams et al., 2019 ). This issue will be addressed more explicitly in relation to CaSE development (‘Developing purpose-oriented evaluation criteria for systematic case studies’ section).

A third way approach to validity and evidence

The research–practice divide shows us that there may be something significant in establishing complementarity between EBP and PBE rather than treating them as mutually exclusive forms of research (Fishman et al., 2017 ). For one, EBP is not a sufficient condition for delivering research relevant to practice settings (Bower, 2003 ). While RCTs can demonstrate that an intervention works on average in a group, clinicians who are facing individual patients need to answer a different question: how can I make therapy work with this particular case ? (Cartwright & Hardie, 2012 ). Systematic case studies are ideal for filling this gap: they contain descriptions of microprocesses (e.g. patient symptoms, therapeutic relationships, therapist attitudes) in psychotherapy practice that are often overlooked in large-scale RCTs (Iwakabe & Gazzola, 2009 ). In particular, systematic case studies describing the use of specific interventions with less researched psychological conditions (e.g. childhood depression or complex post-traumatic stress disorder) can deepen practitioners’ understanding of effective clinical techniques before the results of large-scale outcome studies are disseminated.

Secondly, establishing a working relationship between systematic case studies and RCTs will contribute towards a more pragmatic understanding of validity in psychotherapy research. Indeed, the very tension and so-called trade-off between internal and external validity is based on the assumption that research methods are designed on an either/or basis: either they provide a sufficiently rigorous study design or they produce findings that can be applied to real-life practice. Jimenez-Buedo and Miller ( 2010 ) call this assumption into question: in their view, if a study is not internally valid, then ‘little, or rather nothing, can be said of the outside world’ (p. 302). In this sense, internal validity may be seen as a prerequisite for any form of applied research and its external validity, but it need not be constrained to the quantitative approach to causality. For example, Levitt, Motulsky, Wertz, Morrow, and Ponterotto ( 2017 ) argue that what is typically conceptualised as internal validity is, in fact, a much broader construct, involving the assessment of how well the research method (whether qualitative or quantitative) is suited to the research goal, and whether it obtains the relevant conclusions. Similarly, Truijens, Cornelis, Desmet, and De Smet ( 2019 ) suggest that we should think about validity in a broader epistemic sense: not just in terms of psychometric measures, but also in terms of the research design, procedure, goals (research questions), approaches to inquiry (paradigms, epistemological assumptions), etc.

The overarching argument from the research cited above is that all forms of research, qualitative and quantitative, can produce ‘valid evidence’, but validity itself needs to be assessed against each specific research method and purpose. For example, RCTs are accompanied by a variety of clearly outlined appraisal tools and instruments, such as CASP (Critical Appraisal Skills Programme), that are well suited to the assessment of RCT validity and their implications for EBP. Systematic case studies (or case studies more generally) currently have no appraisal tools in any discipline. The next section evaluates whether existing appraisal tools for qualitative research are relevant for systematic case studies in psychotherapy and specifies the missing evaluative criteria.

The relevance of existing appraisal tools for qualitative research to systematic case studies in psychotherapy

What is a research tool?

Currently, there are several research appraisal tools, checklists and frameworks for qualitative studies. It is important to note that tools, checklists and frameworks are not equivalent to one another but actually refer to different approaches to appraising the validity of a research study. As such, it is erroneous to assume that all forms of qualitative appraisal feature the same aims and methods (Hannes et al., 2010 ; Williams et al., 2019 ).

Generally, research assessment falls into two categories: checklists and frameworks . Checklist approaches are often associated with quantitative research, since the focus is on assessing the internal validity of the research (i.e. the researcher’s independence from the study). This involves the assessment of bias in sampling, participant recruitment, data collection and analysis. Framework approaches to research appraisal, on the other hand, revolve around traditional qualitative concepts such as transparency, reflexivity, dependability and transferability (Williams et al., 2019 ). Framework approaches to appraisal are often challenging to use because they depend on the reviewer’s familiarity with, and interpretation of, the qualitative concepts.

Because of these different approaches, there is some ambiguity in terminology, particularly between research appraisal instruments and research appraisal tools . These terms are often used interchangeably in the appraisal literature (Williams et al., 2019 ). In this paper, a research appraisal tool is defined as a method-specific form of appraisal (i.e. one that identifies a specific research method or component) that draws from both checklist and framework approaches. Furthermore, a research appraisal tool seeks to inform decision-making in EBP or PBE paradigms and provides explicit definitions of the tool’s evaluative framework (thus minimising, but by no means eliminating, the reviewers’ interpretation of the tool). This definition will be applied to CaSE (Table 5 ).

In contrast, research appraisal instruments are generally seen as a broader form of appraisal in the sense that they may evaluate a variety of methods (i.e. they are non-method specific or they do not target a particular research component), and are aimed at checking whether the research findings and/or the study design contain specific elements (e.g. the aims of research, the rationale behind design methodology, participant recruitment strategies, etc.).

There is often an implicit difference in audience between appraisal tools and instruments. Research appraisal instruments are often aimed at researchers who want to assess the strength of their own study; however, the process of appraisal may not be made explicit in the study itself (beyond mentioning that the instrument was used to appraise the study). Research appraisal tools are aimed at researchers who wish to explicitly demonstrate the evidential quality of the study to readers (which is particularly common in RCTs). All forms of appraisal used in the comparative exercise below are defined as ‘tools’, even though they have different appraisal approaches and aims.

Comparing different qualitative tools

Hannes et al. ( 2010 ) identified CASP (Critical Appraisal Skills Programme tool), JBI (Joanna Briggs Institute tool) and ETQS (Evaluation Tool for Qualitative Studies) as the critical appraisal tools most frequently used by qualitative researchers. All three instruments are available online free of charge, which means that any researcher or reviewer can readily apply the CASP, JBI or ETQS evaluative frameworks to their research. Furthermore, all three instruments were developed within the context of organisational, institutional or consortium support (Tables 6 , 7 and 8 ).

It is important to note that none of the three tools is specific to systematic case studies or psychotherapy case studies (which would include not only systematic but also experimental and clinical cases). This means that using CASP, JBI or ETQS for case study appraisal may come at the cost of overlooking elements and components specific to the systematic case study method.

Based on Hannes et al.’s ( 2010 ) comparative study of qualitative appraisal tools, as well as the different evaluation criteria explicated in the CASP, JBI and ETQS evaluative frameworks, I assessed how well each of the three tools is attuned to the methodological , clinical and theoretical aspects of systematic case studies in psychotherapy. The latter components were based on case study guidelines featured in the journal Pragmatic Case Studies in Psychotherapy , as well as components commonly used by published systematic case studies across a variety of other psychotherapy journals (e.g. Psychotherapy Research ; Research in Psychotherapy : Psychopathology , Process and Outcome ; etc.) (see Table 9 for detailed descriptions of each component).

The evaluation criteria for each tool in Table 9 follow Joanna Briggs Institute (JBI) ( 2017a , 2017b ); Critical Appraisal Skills Programme (CASP) ( 2018 ); and the ETQS Questionnaire (first published in 2004 but revised continuously since). Table 10 demonstrates how each tool should be used (i.e. recommended reviewer responses to checklists and questionnaires).

Using CASP, JBI and ETQS for systematic case study appraisal

Although JBI, CASP and ETQS were all developed to appraise qualitative research, it is evident from the above comparison that there are significant differences between the three tools. For example, JBI and ETQS are well suited to assess the researcher’s interpretations (Hannes et al. ( 2010 ) defined this as interpretive validity , a subcategory of internal validity ): the researcher’s ability to portray, understand and reflect on the research participants’ experiences, thoughts, viewpoints and intentions. JBI has an explicit requirement for participant voices to be clearly represented, whereas ETQS involves a set of questions about key characteristics of events, persons, times and settings that are relevant to the study. Furthermore, both JBI and ETQS seek to assess the researcher’s influence on the research, with ETQS particularly focusing on the evaluation of reflexivity (the researcher’s personal influence on the collection and interpretation of data). These elements are absent from, or addressed to a lesser extent in, the CASP tool.

The appraisal of transferability of findings (what this paper previously referred to as external validity ) is addressed only by ETQS and CASP. Both tools include detailed questions about the value of research to practice and policy, as well as its transferability to other populations and settings. Methodological research aspects are also extensively addressed by CASP and ETQS, but less so by JBI (which relies predominantly on congruity between research methodology and objectives, without any particular assessment criteria for other data sources and/or data collection methods). Finally, the evaluation of theoretical aspects (referred to by Hannes et al. ( 2010 ) as theoretical validity ) is addressed only by JBI and ETQS; there are no assessment criteria for the theoretical framework in CASP.

Given these differences, it is unsurprising that CASP, JBI and ETQS have limited relevance for systematic case studies in psychotherapy. First, it is evident that none of the three tools has specific evaluative criteria for the clinical component of systematic case studies. Although JBI and ETQS feature some relevant questions about participants and their context, the conceptualisation of patients (and/or clients) in psychotherapy involves other kinds of data elements (e.g. diagnostic tools and questionnaires as well as therapist observations) that go beyond the usual participant data. Furthermore, much of the clinical data is intertwined with the therapist’s clinical decision-making and thinking style (Kaluzeviciute & Willemsen, 2020 ). As such, there is a need to appraise patient data and therapist interpretations not only separately, but also as two forms of knowledge that are deeply intertwined in the case narrative.

Secondly, since systematic case studies involve various forms of data, there is a need to appraise how these data converge (or how different methods complement one another in the case context) and how they can be transferred or applied in broader psychotherapy research and practice. These systematic case study components are attended to, to a degree, by CASP (which is particularly attentive to methodological components) and ETQS (which has specific criteria for research transferability onto policy and practice); they are not addressed, or are addressed less explicitly, by JBI. Overall, none of the tools is attuned to all methodological, theoretical and clinical components of the systematic case study. Specifically, there are no clear evaluation criteria for the description of research teams (i.e. different data analysts and/or clinicians); the suitability of the systematic case study method; the description of the patient’s clinical assessment; the use of other methods or data sources; or the general data about therapeutic progress.

Finally, there is something to be said about the recommended reviewer responses (Table 10 ). Systematic case studies can vary significantly in their formulation and purpose. The methodological, theoretical and clinical components outlined in Table 9 follow guidelines made by case study journals; however, these are recommendations, not ‘set in stone’ case templates. For this reason, the straightforward checklist approaches adopted by JBI and CASP may be difficult to use for case study researchers and those reviewing case study research. The open-ended ETQS questionnaire approach suggested by Long and Godfrey ( 2004 ) enables a comprehensive, detailed and purpose-oriented assessment, suitable for the evaluation of systematic case studies. That said, there remains a challenge of ensuring that there is less space for the interpretation of evaluative criteria (Williams et al., 2019 ). A combination of checklist and framework approaches would, therefore, provide a more stable appraisal process across different reviewers.

Developing purpose-oriented evaluation criteria for systematic case studies

The starting point in developing evaluation criteria for the Case Study Evaluation-tool (CaSE) is addressing the significance of pluralism in systematic case studies. Unlike RCTs, systematic case studies are pluralistic in the sense that they employ divergent practices in methodological procedures (the research process ) and may include significantly different research aims and purposes (the end-goal ) (Kaluzeviciute & Willemsen, 2020 ). While some systematic case studies will have an explicit intention to conceptualise and situate a single patient’s experiences and symptoms within a broader clinical population, others will focus on the exploration of phenomena as they emerge from the data. It is therefore important that CaSE is positioned within a purpose-oriented evaluative framework , suitable for assessing what each systematic case is good for (rather than determining an absolute measure of ‘good’ and ‘bad’ systematic case studies). This approach to evidence and appraisal is in line with the PBE paradigm. PBE emphasises the study of clinical complexities and variations through local and contingent settings (e.g. single case studies) and promotes methodological pluralism (Barkham & Mellor-Clark, 2003 ).

CaSE checklist for essential components in systematic case studies

In order to conceptualise purpose-oriented appraisal questions, we must first look at what unites and differentiates systematic case studies in psychotherapy. The commonly used theoretical, clinical and methodological systematic case study components were identified earlier in Table 9 . These components are treated as essential and common to most systematic case studies in the CaSE evaluative criteria. If these essential components are missing in a systematic case study, this implies a lack of information, which in turn diminishes the evidential quality of the case. As such, the checklist serves as a tool for checking whether a case study is, indeed, systematic (as opposed to experimental or clinical; see Iwakabe & Gazzola, 2009 for further differentiation between methodologically distinct case study types) and should be used before the CaSE Purpose-based Evaluative Framework for Systematic Case Studies (which is designed for the appraisal of different purposes common to systematic case studies).

As noted earlier in the paper, checklist approaches to appraisal are useful when evaluating the presence or absence of specific information in a research study. This approach can be used to appraise the essential components in systematic case studies, as shown below. From a pragmatic point of view (Levitt et al., 2017 ; Truijens et al., 2019 ), the CaSE Checklist for Essential Components in Systematic Case Studies can be seen as a way to ensure the internal validity of a systematic case study: the reviewer assesses whether sufficient information is provided about the case design, procedure, approaches to inquiry, etc., and whether they are relevant to the researcher’s objectives and conclusions (Table 11 ).
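To make the presence/absence logic of the checklist approach concrete, the short sketch below shows one hypothetical way a reviewer’s checklist responses could be recorded and missing components flagged. It is an illustrative aid only, not part of CaSE itself; the component names are paraphrased examples of the essential components discussed in this paper, not the checklist’s exact wording.

```python
# Illustrative sketch only: CaSE does not prescribe any software.
# Component names below are paraphrased examples, not the official checklist items.

ESSENTIAL_COMPONENTS = [
    "rationale for using the systematic case study method",
    "research objectives and questions",
    "epistemological/philosophical paradigm",
    "description of the patient's history and clinical assessment",
    "description of the research team (data analysts and/or clinicians)",
    "data analysis and triangulation procedures",
]

def missing_components(responses):
    """Return the essential components marked absent (or not recorded) by the reviewer.

    `responses` maps each component to True (present) or False (absent); components
    with no recorded response are treated as absent, mirroring the checklist logic
    that missing information diminishes the evidential quality of the case.
    """
    return [c for c in ESSENTIAL_COMPONENTS if not responses.get(c, False)]

# Example: a case report that omits two essential components.
responses = {c: True for c in ESSENTIAL_COMPONENTS}
responses["description of the research team (data analysts and/or clinicians)"] = False
del responses["epistemological/philosophical paradigm"]
print(missing_components(responses))
```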

CaSE purpose-based evaluative framework for systematic case studies

Identifying differences between systematic case studies means identifying the different purposes systematic case studies have in psychotherapy. Based on the earlier work by social scientist Yin ( 1984 , 1993 ), we can differentiate between exploratory (hypothesis generating, indicating a beginning phase of research), descriptive (particularising case data as it emerges) and representative (a case that is typical of a broader clinical population, referred to as the ‘explanatory case’ by Yin) cases.

Another increasingly significant strand of systematic case study research comprises transferable (aggregating and transferring case study findings) cases. These cases are based on the process of meta-synthesis (Iwakabe & Gazzola, 2009 ): by examining processes and outcomes in many different case studies dealing with similar clinical issues, researchers can identify common themes and inferences. In this way, single case studies that have relatively little impact on clinical practice, research or health care policy (in the sense that they capture psychotherapy processes rather than produce generalisable claims as in Yin’s representative case studies) can contribute to the generation of a wider knowledge base in psychotherapy (Iwakabe, 2003 , 2005 ). However, there is an ongoing issue of assessing the evidential quality of such transferable cases. According to Duncan and Sparks ( 2020 ), although meta-synthesis and meta-analysis are considered to be ‘gold standard’ for assessing interventions across disparate studies in psychotherapy, they often contain case studies with significant research limitations, inappropriate interpretations and insufficient information. It is therefore important to have a research appraisal process in place for selecting transferable case studies.

Two other types of systematic case study research include: critical (testing and/or confirming existing theories) cases, which are described as an excellent method for falsifying existing theoretical concepts and testing whether therapeutic interventions work in practice with concrete patients (Kaluzeviciute, 2021 ), and unique (going beyond the ‘typical’ cases and demonstrating deviations) cases (Merriam, 1998 ). These two systematic case study types are often seen as less valuable for psychotherapy research given that unique/falsificatory findings are difficult to generalise. But it is clear that practitioners and researchers in our field seek out context-specific data, as well as detailed information on the effectiveness of therapeutic techniques in single cases (Stiles, 2007 ) (Table 12 ).

Each purpose-based case study contributes to PBE in different ways. Representative cases provide qualitatively rich, in-depth data about a clinical phenomenon within its particular context. This offers other clinicians and researchers access to a ‘closed world’ (Mackrill & Iwakabe, 2013 ) containing a wide range of attributes about a conceptual type (e.g. clinical condition or therapeutic technique). Descriptive cases generally seek to demonstrate a realistic snapshot of therapeutic processes, including complex dynamics in therapeutic relationships, and instances of therapeutic failure (Maggio, Molgora, & Oasi, 2019 ). Data in descriptive cases should be presented in a transparent manner (e.g. if there are issues in standardising patient responses to a self-report questionnaire, this should be made explicit). Descriptive cases are commonly used in psychotherapy training and supervision. Unique cases are relevant for both clinicians and researchers: they often contain novel treatment approaches and/or introduce new diagnostic considerations about patients who deviate from the clinical population. Critical cases demonstrate the application of psychological theories ‘in action’ with particular patients; as such, they are relevant to clinicians, researchers and policymakers (Mackrill & Iwakabe, 2013 ). Exploratory cases bring new insight and observations into clinical practice and research. This is particularly useful when comparing (or introducing) different clinical approaches and techniques (Trad & Raine, 1994 ). Findings from exploratory cases often include future research suggestions. Finally, transferable cases provide one solution to the generalisation issue in psychotherapy research through the previously mentioned process of meta-synthesis. Grouped together, transferable cases can contribute to theory building and development, as well as higher levels of abstraction about a chosen area of psychotherapy research (Iwakabe & Gazzola, 2009 ).

With this plurality in mind, it is evident that CaSE has a challenging task of appraising research components that are distinct across six different types of purpose-based systematic case studies. The purpose-specific evaluative criteria in Table 13 were developed in close consultation with the epistemological literature associated with each type of case study, including: Yin’s ( 1984 , 1993 ) work on establishing the typicality of representative cases; Duncan and Sparks’ ( 2020 ) and Iwakabe and Gazzola’s ( 2009 ) case selection criteria for meta-synthesis and meta-analysis; Stake’s ( 1995 , 2010 ) research on particularising case narratives; Merriam’s ( 1998 ) guidelines on distinctive attributes of unique case studies; Kennedy’s ( 1979 ) epistemological rules for generalising from case studies; Mahrer’s ( 1988 ) discovery-oriented case study approach; and Edelson’s ( 1986 ) guidelines for rigorous hypothesis generation in case studies.

Research on epistemic issues in case writing (Kaluzeviciute, 2021 ) and different forms of scientific thinking in psychoanalytic case studies (Kaluzeviciute & Willemsen, 2020 ) was also utilised to identify case study components that would help improve therapist clinical decision-making and reflexivity.

For the analysis of more complex research components (e.g. the degree of therapist reflexivity), the purpose-based evaluation utilises a framework approach, in line with the comprehensive and open-ended reviewer responses in ETQS (Evaluation Tool for Qualitative Studies) (Long & Godfrey, 2004 ) (Table 13 ). That is to say, the evaluation here is not so much about the presence or absence of information (as in the checklist approach), but about the degree to which the information helps the case achieve its unique purpose, whether that is generalisability or typicality. Therefore, although the purpose-oriented evaluation criteria below encompass comprehensive questions at a considerable level of generality (in the sense that not all components may be required or relevant for each case study), they nevertheless seek to engage with each type of purpose-based systematic case study on an individual basis (attending to research or clinical components that are unique to each type of case study).
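To make the contrast with the checklist approach concrete, the sketch below (illustrative only, not part of CaSE) shows how a framework-style review might be recorded: instead of boolean presence/absence judgments, the reviewer attaches an open-ended response and a qualitative judgment of degree to each purpose-specific core question. The question wording and rating labels are hypothetical stand-ins for the criteria in Table 13.

```python
# Illustrative sketch only: a hypothetical record structure for a framework-style
# (open-ended) review. Question texts and degree labels are invented stand-ins
# for the purpose-specific criteria in Table 13, not CaSE wording.

from dataclasses import dataclass

@dataclass
class FrameworkResponse:
    purpose: str   # e.g. "critical" (testing/confirming an existing theory)
    question: str  # a purpose-specific core question
    response: str  # the reviewer's open-ended, qualitative answer
    degree: str    # e.g. "substantially", "partially", "not at all"

review = [
    FrameworkResponse(
        purpose="critical",
        question="Does the case provide sufficient detail on the tested "
                 "therapeutic conditions to support or falsify the theory?",
        response="The authors describe the therapeutic conditions in depth and "
                 "relate them explicitly to the theoretical claim under test.",
        degree="substantially",
    ),
]

# Unlike the checklist sketch above, nothing here reduces to present/absent:
# the evidential weight lies in the reviewer's reasoning about degree.
for r in review:
    print(f"[{r.purpose}] {r.degree}: {r.question}")
```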

It is important to note that, as this is an introductory paper to CaSE, the evaluative framework is still preliminary: it involves some of the core questions that pertain to the nature of all six purpose-based systematic case studies. However, there is a need to develop a more comprehensive and detailed CaSE appraisal framework for each purpose-based systematic case study in the future.

Using CaSE on published systematic case studies in psychotherapy: an example

To illustrate the use of CaSE Purpose - based Evaluative Framework for Systematic Case Studies , a case study by Lunn, Daniel, and Poulsen ( 2016 ) titled ‘ Psychoanalytic Psychotherapy With a Client With Bulimia Nervosa ’ was selected from the Single Case Archive (SCA) and analysed in Table 14 . Based on the core questions associated with the six purpose-based systematic case study types in Table 13 (1 to 6), the purpose of Lunn et al.’s ( 2016 ) case was identified as critical (testing an existing theoretical suggestion).

Sometimes, case study authors explicitly define the purpose of their case in the form of research objectives (as was the case in Lunn et al.’s study); this helps identify which purpose-based questions are most relevant for the evaluation of the case. However, some case studies will require comprehensive analysis in order to identify their purpose (or multiple purposes). As such, it is recommended that CaSE reviewers first assess the degree and manner in which information about the studied phenomenon, patient data, clinical discourse and research is presented before deciding on the case purpose.

Although each purpose-based systematic case study will contribute to different strands of psychotherapy (theory, practice, training, etc.) and focus on different forms of data (e.g. theory testing vs extensive clinical descriptions), the overarching aim across all systematic case studies in psychotherapy is to study local and contingent processes, such as variations in patient symptoms and complexities of the clinical setting. The comprehensive framework approach will therefore allow reviewers to assess the degree of external validity in systematic case studies (Barkham & Mellor-Clark, 2003 ). Furthermore, assessing the case against its purpose will let reviewers determine whether the case achieves its set goals (research objectives and aims). The example below shows that Lunn et al.’s ( 2016 ) case is successful in functioning as a critical case as the authors provide relevant, high-quality information about their tested therapeutic conditions.

Finally, it is also possible to use CaSE to gather specific types of systematic case studies for one’s research, practice, training, etc., as sketched below. For example, a CaSE reviewer might want to identify as many descriptive case studies focusing on negative therapeutic relationships as possible for their clinical supervision. The reviewer will therefore only need to refer to the CaSE questions in Table 13 (2) on descriptive cases. If the reviewed cases do not align with the questions in Table 13 (2), then they are not suitable for a CaSE reviewer who is looking for ‘know-how’ knowledge and detailed clinical narratives.
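As a purely illustrative aid (again, not part of CaSE itself), the sketch below shows how such a ‘search’ use might look if a reviewer kept a small library of case studies tagged with their identified purpose and topics. The purpose labels follow this paper, while the case entries and topic tags are invented for illustration.

```python
# Illustrative sketch only: shortlisting case studies by CaSE purpose category.
# Case entries and topic tags are invented examples.

from dataclasses import dataclass

PURPOSES = {"representative", "descriptive", "unique",
            "critical", "exploratory", "transferable"}

@dataclass
class CaseStudy:
    citation: str
    purpose: str     # one of PURPOSES, identified via the Table 13 questions
    topics: set      # reviewer-assigned topic tags

def shortlist(cases, purpose, topic):
    """Select cases whose identified purpose and topic match the reviewer's aim."""
    assert purpose in PURPOSES
    return [c for c in cases if c.purpose == purpose and topic in c.topics]

library = [
    CaseStudy("Author A (2018)", "descriptive", {"negative therapeutic relationship"}),
    CaseStudy("Author B (2020)", "critical", {"bulimia nervosa"}),
]
print(shortlist(library, "descriptive", "negative therapeutic relationship"))
```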

Concluding comments

This paper introduces a novel Case Study Evaluation-tool (CaSE) for systematic case studies in psychotherapy. Unlike most appraisal tools in EBP, CaSE is positioned within purpose-oriented evaluation criteria, in line with the PBE paradigm. CaSE enables reviewers to assess what each systematic case is good for (rather than determining an absolute measure of ‘good’ and ‘bad’ systematic case studies). In order to explicate a purpose-based evaluative framework, six different systematic case study purposes in psychotherapy have been identified: representative cases (purpose: typicality), descriptive cases (purpose: particularity), unique cases (purpose: deviation), critical cases (purpose: falsification/confirmation), exploratory cases (purpose: hypothesis generation) and transferable cases (purpose: generalisability). Each case type was linked with an existing epistemological network, such as Iwakabe and Gazzola’s ( 2009 ) work on case selection criteria for meta-synthesis. The framework approach includes core questions specific to each purpose-based case study (Table 13 (1–6)). The aim is to assess the external validity and effectiveness of each case study against its set research objectives and aims. Reviewers are required to perform a comprehensive and open-ended data analysis, as shown in the example in Table 14 .

Along with the CaSE Purpose-based Evaluative Framework (Table 13 ), the paper also developed the CaSE Checklist for Essential Components in Systematic Case Studies (Table 11 ). The checklist approach is meant to aid reviewers in assessing the presence or absence of essential case study components, such as the rationale behind choosing the case study method and the description of the patient’s history. If essential components are missing in a systematic case study, this implies a lack of information, which in turn diminishes the evidential quality of the case. Following the broader definitions of validity set out by Levitt et al. ( 2017 ) and Truijens et al. ( 2019 ), it could be argued that the checklist approach allows for the assessment of (non-quantitative) internal validity in systematic case studies: does the researcher provide sufficient information about the case study design, rationale, research objectives, epistemological/philosophical paradigms, assessment procedures, data analysis, etc., to account for their research conclusions?

It is important to note that this paper is set as an introduction to CaSE; by extension, it is also set as an introduction to research evaluation and appraisal processes for case study researchers in psychotherapy. As such, it was important to provide a step-by-step epistemological rationale for, and account of, the development of the CaSE evaluative framework and checklist. However, this also means that further research needs to be conducted in order to develop the tool. While the CaSE Purpose-based Evaluative Framework involves some of the core questions that pertain to the nature of all six purpose-based systematic case studies, there is a need to develop individual and comprehensive CaSE evaluative frameworks for each of the purpose-based systematic case studies in the future. This line of research is likely to broaden CaSE’s target audience: clinicians interested in reviewing highly particular clinical narratives will attend to descriptive case study appraisal frameworks; researchers working with qualitative meta-synthesis will find transferable case study appraisal frameworks most relevant to their work; while teachers on psychotherapy and counselling modules may seek out unique case study appraisal frameworks.

Furthermore, although the CaSE Checklist for Essential Components in Systematic Case Studies and the CaSE Purpose-based Evaluative Framework for Systematic Case Studies are presented in a comprehensive, detailed manner, with definitions and examples that would enable reviewers to have a good grasp of the appraisal process, it is likely that different reviewers will have different interpretations or ideas of what counts as ‘substantial’ case study data. This, in part, is due to the methodologically pluralistic nature of the case study genre itself; what is relevant for one case study may not be relevant for another, and vice versa. To aid the review process, future research on CaSE should include a comprehensive paper on using the tool. This paper should involve evaluation examples with all six purpose-based systematic case studies, as well as a ‘search’ exercise (using CaSE to assess the relevance of case studies for one’s research, practice, training, etc.).

Finally, further research needs to be conducted on how (and, indeed, whether) systematic case studies should be reviewed with specific ‘grades’ or ‘assessments’ that go beyond the qualitative examination in Table 14 . This would be particularly significant for the processes of qualitative meta-synthesis and meta-analysis. These research developments will further enhance the CaSE tool and, in turn, enable psychotherapy researchers to appraise their findings within clear, purpose-based evaluative criteria appropriate for systematic case studies.

Availability of data and materials

Not applicable.

Almond, R. (2004). “I Can Do It (All) Myself”: Clinical technique with defensive narcissistic self–sufficiency. Psychoanalytic Psychology , 21 (3), 371–384. https://doi.org/10.1037/0736-9735.21.3.371 .

American Psychological Association (2010). Evidence-based case study. Retrieved from https://www.apa.org/pubs/journals/pst/evidence-based-case-study.

Aveline, M. (2005). Clinical case studies: Their place in evidence–based practice. Psychodynamic Practice: Individuals, Groups and Organisations , 11 (2), 133–152. https://doi.org/10.1080/14753630500108174 .

Barkham, M., & Mellor-Clark, J. (2003). Bridging evidence-based practice and practice-based evidence: Developing a rigorous and relevant knowledge for the psychological therapies. Clinical Psychology & Psychotherapy , 10 (6), 319–327. https://doi.org/10.1002/cpp.379 .

Berg, H. (2019). How does evidence–based practice in psychology work? – As an ethical demarcation. Philosophical Psychology , 32 (6), 853–873. https://doi.org/10.1080/09515089.2019.1632424 .

Berg, H., & Slaattelid, R. (2017). Facts and values in psychotherapy—A critique of the empirical reduction of psychotherapy within evidence-based practice. Journal of Evaluation in Clinical Practice , 23 (5), 1075–1080. https://doi.org/10.1111/jep.12739 .

Bower, P. (2003). Efficacy in evidence-based practice. Clinical Psychology and Psychotherapy , 10 (6), 328–336. https://doi.org/10.1002/cpp.380 .

Cartwright, N., & Hardie, J. (2012). What are RCTs good for? In N. Cartwright, & J. Hardie (Eds.), Evidence–based policy: A practical guide to doing it better . Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199841608.003.0008 .

Critical Appraisal Skills Programme (CASP). (2018). Qualitative checklist. Retrieved from https://casp-uk.net/wp-content/uploads/2018/01/CASP-Qualitative-Checklist-2018.pdf .

Davison, G. C., & Lazarus, A. A. (2007). Clinical case studies are important in the science and practice of psychotherapy. In S. O. Lilienfeld, & W. T. O’Donohue (Eds.), The great ideas of clinical science: 17 principles that every mental health professional should understand , (pp. 149–162). Routledge/Taylor & Francis Group.

Douglas, H. (2004). The irreducible complexity of objectivity. Synthese , 138 (3), 453–473. https://doi.org/10.1023/B:SYNT.0000016451.18182.91 .

Duncan, B. L., & Sparks, J. A. (2020). When meta–analysis misleads: A critical case study of a meta–analysis of client feedback. Psychological Services , 17 (4), 487–496. https://doi.org/10.1037/ser0000398 .

Edelson, M. (1986). Causal explanation in science and in psychoanalysis—Implications for writing a case study. Psychoanalytic Study of Child , 41 (1), 89–127. https://doi.org/10.1080/00797308.1986.11823452 .

Edwards, D. J. A. (2013). Collaborative versus adversarial stances in scientific discourse: Implications for the role of systematic case studies in the development of evidence–based practice in psychotherapy. Pragmatic Case Studies in Psychotherapy , 3 (1), 6–34.

Edwards, D. J. A., Dattilio, F. M., & Bromley, D. B. (2004). Developing evidence–based practice: The role of case–based research. Professional Psychology: Research and Practice , 35 (6), 589–597. https://doi.org/10.1037/0735-7028.35.6.589 .

Erickson, F. (2012). Comments on causality in qualitative inquiry. Qualitative Inquiry , 18 (8), 686–688. https://doi.org/10.1177/1077800412454834 .

Fishman, D. B. (1999). The case for pragmatic psychology . New York University Press.

Fishman, D. B. (2005). Editor’s introduction to PCSP––From single case to database: A new method for enhancing psychotherapy practice. Pragmatic Case Studies in Psychotherapy , 1 (1), 1–50.

Fishman, D. B., Messer, S. B., Edwards, D. J. A., & Dattilio, F. M. (Eds.) (2017). Case studies within psychotherapy trials: Integrating qualitative and quantitative methods . Oxford University Press.

Fox, N. J. (2003). Practice–based evidence: Towards collaborative and transgressive research. Sociology , 37 (1), 81–102. https://doi.org/10.1177/0038038503037001388 .

Gabbay, J., & le May, A. (2011). Practice–based evidence for healthcare: Clinical mindlines . Routledge.

Green, L. W., & Latchford, G. (2012). Maximising the benefits of psychotherapy: A practice–based evidence approach . Wiley–Blackwell. https://doi.org/10.1002/9781119967590 .

Hannes, K., Lockwood, C., & Pearson, A. (2010). A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qualitative Health Research , 20 (12), 1736–1743. https://doi.org/10.1177/1049732310378656 .

Hartling, L., Chisholm, A., Thomson, D., & Dryden, D. M. (2012). A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One , 7 (11), e49667. https://doi.org/10.1371/journal.pone.0049667 .

Hill, A., & Spittlehouse, C. (2003). What is critical appraisal? Evidence–Based Medicine , 3 (2), 1–8.

Hilliard, R. B. (1993). Single–case methodology in psychotherapy process and outcome research. Journal of Consulting and Clinical Psychology , 61 (3), 373–380. https://doi.org/10.1037/0022-006X.61.3.373 .

Horn, S. D., & Gassaway, J. (2007). Practice–based evidence study design for comparative effectiveness research. Medical Care , 45 (10), S50–S57. https://doi.org/10.1097/MLR.0b013e318070c07b .

Iwakabe, S. (2003, May). Common change events in stages of psychotherapy: A qualitative analysis of case reports. In Paper presented at the 19th Annual Conference of the Society for Exploration of Psychotherapy Integration, New York .

Iwakabe, S. (2005). Pragmatic meta–analysis of case studies. Annual Progress of Family Psychology , 23 , 154–169.

Iwakabe, S., & Gazzola, N. (2009). From single–case studies to practice–based knowledge: Aggregating and synthesizing case studies. Psychotherapy Research , 19 (4-5), 601–611. https://doi.org/10.1080/10503300802688494 .

Jimenez-Buedo, M., & Miller, L. (2010). Why a Trade–Off? The relationship between the external and internal validity of experiments. THEORIA: An International Journal for Theory History and Foundations of Science , 25 (3), 301–321.

Joanna Briggs Institute (JBI). (2017a). Critical appraisal checklist for qualitative research. Retrieved from https://joannabriggs.org/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Qualitative_Research2017_0.pdf

Joanna Briggs Institute (JBI). (2017b). Checklist for case reports. Retrieved from https://joannabriggs.org/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Case_Reports2017_0.pdf

Kaluzeviciute, G. (2021). Validity, Evidence and Appraisal in Systematic Psychotherapy Case Studies . Paper presented at the Research Forum of Department of Psychosocial and Psychoanalytic Studies, University of Essex, Colchester, UK. https://doi.org/10.13140/RG.2.2.33502.15683  

Kaluzeviciute, G., & Willemsen, J. (2020). Scientific thinking styles: The different ways of thinking in psychoanalytic case studies. The International Journal of Psychoanalysis , 101 (5), 900–922. https://doi.org/10.1080/00207578.2020.1796491 .

Katrak, P., Bialocerkowski, A. E., Massy-Westropp, N., Kumar, S. V. S., & Grimmer, K. (2004). A systematic review of the content of critical appraisal tools. BMC Medical Research Methodology , 4 (1), 22. https://doi.org/10.1186/1471-2288-4-22 .

Kennedy, M. M. (1979). Generalising from single case studies. Evaluation Quarterly , 3 (4), 661–678. https://doi.org/10.1177/0193841X7900300409 .

Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence–based practice in psychotherapy: A common factors perspective. Psychotherapy , 51 (4), 467–481. https://doi.org/10.1037/a0034332 .

Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., & Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: Promoting methodological integrity. Qualitative Psychology , 4 (1), 2–22. https://doi.org/10.1037/qup0000082 .

Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2013). Why many clinical psychologists are resistant to evidence–based practice: root causes and constructive remedies. Clinical Psychology Review , 33 (7), 883–900. https://doi.org/10.1016/j.cpr.2012.09.008 .

Long, A. F., & Godfrey, M. (2004). An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology , 7 (2), 181–196. https://doi.org/10.1080/1364557032000045302 .

Longhofer, J., Floersch, J., & Hartmann, E. A. (2017). Case for the case study: How and why they matter. Clinical Social Work Journal , 45 (3), 189–200. https://doi.org/10.1007/s10615-017-0631-8 .

Lunn, S., Daniel, S. I. F., & Poulsen, S. (2016). Psychoanalytic psychotherapy with a client with bulimia nervosa. Psychotherapy , 53 (2), 206–215. https://doi.org/10.1037/pst0000052 .

Mackrill, T., & Iwakabe, S. (2013). Making a case for case studies in psychotherapy training: A small step towards establishing an empirical basis for psychotherapy training. Counselling Psychotherapy Quarterly , 26 (3–4), 250–266. https://doi.org/10.1080/09515070.2013.832148 .

Maggio, S., Molgora, S., & Oasi, O. (2019). Analyzing psychotherapeutic failures: A research on the variables involved in the treatment with an individual setting of 29 cases. Frontiers in Psychology , 10 , 1250. https://doi.org/10.3389/fpsyg.2019.01250 .

Mahrer, A. R. (1988). Discovery–oriented psychotherapy research: Rationale, aims, and methods. American Psychologist , 43 (9), 694–702. https://doi.org/10.1037/0003-066X.43.9.694 .

Margison, F. B., et al. (2000). Measurement and psychotherapy: Evidence–based practice and practice–based evidence. British Journal of Psychiatry , 177 (2), 123–130. https://doi.org/10.1192/bjp.177.2.123 .

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher , 33 (2), 3–11. https://doi.org/10.3102/0013189X033002003 .

McLeod, J. (2002). Case studies and practitioner research: Building knowledge through systematic inquiry into individual cases. Counselling and Psychotherapy Research: Linking research with practice , 2 (4), 264–268. https://doi.org/10.1080/14733140212331384755 .

McLeod, J. (2010). Case study research in counselling and psychotherapy . SAGE Publications. https://doi.org/10.4135/9781446287897 .

McLeod, J., & Elliott, R. (2011). Systematic case study research: A practice–oriented introduction to building an evidence base for counselling and psychotherapy. Counselling and Psychotherapy Research , 11 (1), 1–10. https://doi.org/10.1080/14733145.2011.548954 .

Meganck, R., Inslegers, R., Krivzov, J., & Notaerts, L. (2017). Beyond clinical case studies in psychoanalysis: A review of psychoanalytic empirical single case studies published in ISI–ranked journals. Frontiers in Psychology , 8 , 1749. https://doi.org/10.3389/fpsyg.2017.01749 .

Merriam, S. B. (1998). Qualitative research and case study applications in education . Jossey–Bass Publishers.

Michels, R. (2000). The case history. Journal of the American Psychoanalytic Association , 48 (2), 355–375. https://doi.org/10.1177/00030651000480021201 .

Midgley, N. (2006). Re–reading “Little Hans”: Freud’s case study and the question of competing paradigms in psychoanalysis. Journal of the American Psychoanalytic Association , 54 (2), 537–559. https://doi.org/10.1177/00030651060540021601 .

Rosqvist, J., Thomas, J. C., & Truax, P. (2011). Effectiveness versus efficacy studies. In J. C. Thomas, & M. Hersen (Eds.), Understanding research in clinical and counseling psychology , (pp. 319–354). Routledge/Taylor & Francis Group.

Sackett, D. L., Rosenberg, W. M., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn’t. BMJ , 312 (7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71 .

Stake, R. E. (1995). The art of case study research . SAGE Publications.

Stake, R. E. (2010). Qualitative research: Studying how things work . The Guilford Press.

Stewart, R. E., & Chambless, D. L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology , 63 (3), 267–281. https://doi.org/10.1002/jclp.20347 .

Stiles, W. B. (2007). Theory–building case studies of counselling and psychotherapy. Counselling and Psychotherapy Research , 7 (2), 122–127. https://doi.org/10.1080/14733140701356742 .

Teachman, B. A., Drabick, D. A., Hershenberg, R., Vivian, D., Wolfe, B. E., & Goldfried, M. R. (2012). Bridging the gap between clinical research and clinical practice: introduction to the special section. Psychotherapy , 49 (2), 97–100. https://doi.org/10.1037/a0027346 .

Thorne, S., Jensen, L., Kearney, M. H., Noblit, G., & Sandelowski, M. (2004). Qualitative metasynthesis: Reflections on methodological orientation and ideological agenda. Qualitative Health Research , 14 (10), 1342–1365. https://doi.org/10.1177/1049732304269888 .

Timulak, L. (2009). Meta–analysis of qualitative studies: A tool for reviewing qualitative research findings in psychotherapy. Psychotherapy Research , 19 (4–5), 591–600. https://doi.org/10.1080/10503300802477989 .

Trad, P. V., & Raine, M. J. (1994). A prospective interpretation of unconscious processes during psychoanalytic psychotherapy. Psychoanalytic Psychology , 11 (1), 77–100. https://doi.org/10.1037/h0079522 .

Truijens, F., Cornelis, S., Desmet, M., & De Smet, M. (2019). Validity beyond measurement: Why psychometric validity is insufficient for valid psychotherapy research. Frontiers in Psychology , 10 . https://doi.org/10.3389/fpsyg.2019.00532 .

Tuckett, D. (Ed.) (2008). The new library of psychoanalysis. Psychoanalysis comparable and incomparable: The evolution of a method to describe and compare psychoanalytic approaches . Routledge/Taylor & Francis Group. https://doi.org/10.4324/9780203932551 .

van Hennik, R. (2020). Practice based evidence based practice, part II: Navigating complexity and validity from within. Journal of Family Therapy , 43 (1), 27–45. https://doi.org/10.1111/1467-6427.12291 .

Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin , 130 (4), 631–663. https://doi.org/10.1037/0033-2909.130.4.631 .

Widdowson, M. (2011). Case study research methodology. International Journal of Transactional Analysis Research & Practice , 2 (1). https://doi.org/10.29044/v2i1p25 .

Willemsen, J., Della Rosa, E., & Kegerreis, S. (2017). Clinical case studies in psychoanalytic and psychodynamic treatment. Frontiers in Psychology , 8 (108). https://doi.org/10.3389/fpsyg.2017.00108 .

Williams, V., Boylan, A., & Nunan, D. (2019). Critical appraisal of qualitative research: Necessity, partialities and the issue of bias. BMJ Evidence–Based Medicine . https://doi.org/10.1136/bmjebm-2018-111132 .

Yin, R. K. (1984). Case study research: Design and methods . SAGE Publications.

Yin, R. K. (1993). Applications of case study research . SAGE Publications.

Acknowledgments

I would like to thank Prof Jochem Willemsen (Faculty of Psychology and Educational Sciences, Université catholique de Louvain-la-Neuve), Prof Wayne Martin (School of Philosophy and Art History, University of Essex), Dr Femke Truijens (Institute of Psychology, Erasmus University Rotterdam) and the reviewers of Psicologia: Reflexão e Crítica / Psychology : Research and Review for their feedback, insight and contributions to the manuscript.

Funding: Arts and Humanities Research Council (AHRC) and Consortium for Humanities and the Arts South-East England (CHASE) Doctoral Training Partnership, Award Number [AH/L503861/1].

Author information

Authors and affiliations

Department of Psychosocial and Psychoanalytic Studies, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK

Greta Kaluzeviciute

Contributions

GK is the sole author of the manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Greta Kaluzeviciute .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Kaluzeviciute, G. Appraising psychotherapy case studies in practice-based evidence: introducing Case Study Evaluation-tool (CaSE). Psicol. Refl. Crít. 34 , 9 (2021). https://doi.org/10.1186/s41155-021-00175-y

Received : 12 January 2021

Accepted : 09 March 2021

Published : 19 March 2021

DOI : https://doi.org/10.1186/s41155-021-00175-y


Keywords

  • Systematic case studies
  • Psychotherapy research
  • Research appraisal tool
  • Evidence-based practice
  • Practice-based evidence
  • Research validity

Is tooth extraction as proxy for periodontal disease related to the development of RA? Lessons from a longitudinal study in the at-risk stage of clinically suspect arthralgia

  • Sarah J H Khidir 1 (http://orcid.org/0000-0001-5953-6844),
  • René E M Toes 1 (http://orcid.org/0000-0002-9618-6414),
  • Elise van Mulligen 1, 2 (http://orcid.org/0000-0003-1900-790X),
  • Annette H M van der Helm-van Mil 1, 2 (http://orcid.org/0000-0001-8572-1437)
  • 1 Rheumatology, Leiden University Medical Center, Leiden, The Netherlands
  • 2 Rheumatology, Erasmus Medical Center, Rotterdam, The Netherlands
  • Correspondence to Sarah J H Khidir, Leiden University Medical Center, Leiden, The Netherlands; s.j.h.khidir@lumc.nl

https://doi.org/10.1136/ard-2024-225688


Keywords

  • Anti-Citrullinated Protein Antibodies
  • Arthritis, Rheumatoid
  • Autoimmunity

Emerging evidence points to the involvement of periodontal disease (PD) in the pathogenesis of rheumatoid arthritis (RA), especially in anti-citrullinated protein antibodies (ACPA)-positive RA. The bacterium Porphyromonas gingivalis, involved in oral mucosal inflammation and PD, can citrullinate proteins via prokaryotic peptidylarginine deiminase. 1 Systemic translocation of oral bacteria has been found in RA-patients with PD. These bacterial translocations have been implicated in the generation of anti-modified protein antibodies (AMPAs), as ACPA can also recognise modified bacterial proteins. 2 Nevertheless, the ‘cause-consequence’ relation between PD and RA remains debatable, as PD may be a risk factor (PD→RA; scenario 1; figure 1A) but also a consequence of RA (RA→PD; scenario 2). 3 Additionally, the relation PD→RA may be confounded by related factors (eg, body mass index (BMI), smoking or other factors related to socioeconomic status (SES); scenario 3). An increased prevalence of periodontitis and P. gingivalis has been reported in ACPA-positive/autoantibody-positive at-risk individuals in case–control/cross-sectional studies. 4 5 Longitudinal studies on PD in at-risk individuals could elucidate temporal relationships and provide further insight into the role of PD in RA-development. Therefore, we longitudinally analysed the relation between tooth loss, as a proxy for (preceding) PD, and progression to clinically apparent inflammatory arthritis (IA) and RA in patients with clinically suspect arthralgia (CSA). We also studied whether this relation is independent of SES and SES-related factors.


Figure 1. Conceptual framework of hypotheses on periodontal disease and RA (A) and the development of inflammatory arthritis in ACPA-positive and ACPA-negative clinically suspect arthralgia according to tooth extraction (B). (A) The coloured circles represent the three hypotheses on the relation between PD and RA, inspired by de Pablo et al. 3 Scenario 1 in blue: PD is a risk factor for RA. Scenario 2 in orange: RA is a risk factor for PD. Scenario 3 in green: the association between PD and RA is driven by confounding factors such as smoking and SES. (B) Development of IA in ACPA-positive and ACPA-negative patients with clinically suspect arthralgia is shown according to tooth extraction, a late stage of periodontal disease, showing a relation between tooth extraction and IA-development in ACPA-positive CSA-patients. ACPA, anti-citrullinated protein antibody; CSA, clinically suspect arthralgia; IA, inflammatory arthritis; PD, periodontal disease; RA, rheumatoid arthritis; SES, socioeconomic status.
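To make scenario 3 concrete, the short simulation below shows how a shared risk factor such as smoking can create a crude association between PD and RA even when PD has no direct effect on RA, and how that apparent association shrinks once the confounder is adjusted for. This is an illustrative sketch only: the effect sizes, variable names and the Python/statsmodels tooling are assumptions chosen for demonstration and are not taken from the study.

```python
# Minimal simulation of confounding (scenario 3): smoking raises the risk of both
# periodontal disease (PD) and RA, but PD itself has no direct effect on RA.
# All probabilities below are hypothetical and chosen only for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100_000

smoking = rng.binomial(1, 0.4, size=n)               # hypothetical confounder
pd_status = rng.binomial(1, 0.10 + 0.25 * smoking)   # smoking raises the risk of PD
ra = rng.binomial(1, 0.02 + 0.06 * smoking)          # smoking raises the risk of RA; PD does not

# Crude logistic model: PD appears to be associated with RA.
crude = sm.Logit(ra, sm.add_constant(pd_status)).fit(disp=False)

# Adjusted model: the apparent PD effect largely disappears once smoking is included.
adjusted = sm.Logit(ra, sm.add_constant(np.column_stack([pd_status, smoking]))).fit(disp=False)

print("crude OR for PD:   ", round(float(np.exp(crude.params[1])), 2))
print("adjusted OR for PD:", round(float(np.exp(adjusted.params[1])), 2))
```

Running this produces a crude odds ratio well above 1 and an adjusted odds ratio close to 1, which is the pattern one would expect if an observed PD–RA association were driven entirely by confounding.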


At baseline, 306 CSA-patients (44%) had previous tooth extraction. These patients were older than patients without tooth extraction (48.7 vs 41.4 years; online supplemental S2), more often had low educational attainment (15% vs 8%), had a higher BMI (27.8 vs 26.3), more often smoked (66% vs 51%) and more often had subclinical joint-inflammation (57% vs 47%). ACPA-positive CSA-patients with tooth extraction more often progressed to IA than ACPA-positive patients without tooth extraction (HR=1.91, 95% CI 1.10–3.32, p=0.022), while this association was not significant in ACPA-negative CSA (HR=1.41, 95% CI 0.85–2.34, p=0.19; figure 1B). After correcting for SES, smoking, BMI and age, tooth extraction remained significantly associated with IA-development in ACPA-positive CSA-patients (HR=2.22, 95% CI 1.23–4.00, p=0.008). This association remained after additional adjustment for subclinical joint-inflammation (HR=3.10, 95% CI 1.57–6.10, p=0.001). The association between tooth extraction and RA-development was similar, as every ACPA-positive CSA-patient who developed IA also developed RA according to classification criteria. Within ACPA-positive CSA-patients (n=96), ACPA-levels, RF-positivity and number of AMPA-isotypes did not differ between individuals with and without tooth extraction (online supplemental S3). Analyses stratified for autoantibody-positivity/autoantibody-negativity (negative for ACPA and RF) showed similar findings (online supplemental S4).
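As an illustration of how covariate-adjusted hazard ratios of this kind can be estimated, the sketch below fits a Cox proportional-hazards model for progression to IA by tooth-extraction status, adjusted for age, BMI, smoking and educational attainment. It is a schematic only: the file name, column names and the lifelines library are hypothetical choices for demonstration, not the software or model specification used by the authors.

```python
# Schematic Cox proportional-hazards analysis with a hypothetical data layout:
# one row per ACPA-positive CSA patient, with follow-up time, an event indicator
# for progression to inflammatory arthritis (IA), and baseline covariates.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("csa_acpa_positive.csv")  # hypothetical file name

model_cols = ["followup_years", "progressed_to_ia", "tooth_extraction",
              "age", "bmi", "smoking", "low_education"]

cph = CoxPHFitter()
cph.fit(df[model_cols],
        duration_col="followup_years",   # time from CSA onset to IA or censoring
        event_col="progressed_to_ia")    # 1 = developed IA, 0 = censored

cph.print_summary()  # exp(coef) for tooth_extraction is the covariate-adjusted HR
```

Fitting the same model separately in ACPA-positive and ACPA-negative subsets would yield the kind of stratified comparison reported above.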

To the best of our knowledge, this is the first study that longitudinally evaluates individuals with arthralgia at-risk of RA. We show that tooth extraction is a risk factor for progression to ACPA-positive RA and that this association is not confounded by environmental or SES-related factors. We acknowledge that tooth extraction has different causes, among which is end-stage periodontitis. This means it is a proxy, but not a perfect proxy. The finding that the risk effect is only present in developing ACPA-positive RA and not in ACPA-negative RA suggests that tooth loss is partly related to prior PD (potentially present long before CSA-onset or more recently). Future clinical and translational studies are needed to substantiate this finding. Interestingly, there were no differences in ACPA-levels and number of AMPA-isotypes, as markers of autoantibody-maturation, between ACPA-positive patients with and without tooth extraction. Whether antigenic triggering of autoreactive B-cells by bacteraemia plays a role in the trajectory towards ACPA-positive RA remains to be determined. 7

In conclusion, this is the first longitudinal study with data on tooth extraction as a proxy for late-stage PD in both ACPA-positive and ACPA-negative at-risk patients. Tooth extraction, a late stage of PD, was associated with RA-development in ACPA-positive CSA-patients. Although our study does not show causality, it supports the hypothesis that PD could confer risk for ACPA-positive RA and provides clues for future clinical and translational studies.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval

References

1. Pisetsky DS
2. Brewer RC, Hale CR, et al
3. de Pablo P, Chapple ILC, Buckley CD, et al
4. Do T, et al
5. Mikuls TR, Thiele GM, Deane KD, et al
6. Broers DLM, de Lange J, et al
7. Kristyanto H, Blomberg NJ, Slot LM, et al

Supplementary materials

Supplementary data

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Handling editor Josef S Smolen

Contributors SJHK and AvdH-vM designed the study. SJHK and EvM accessed and verified the data. SJHK analysed the data and acted as guarantor. All authors interpreted the data and wrote the report. AvdH-vM was the principal investigator. All authors approved the final version of the manuscript and were responsible for the decision to submit the manuscript for publication.

Funding This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Starting grant, agreement No. 714312) and by the Dutch Arthritis Society.

Competing interests None declared.

Patient and public involvement Patient partners were involved in the design of the CSA-cohort, and in the design and execution of the TREAT EARLIER-trial.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

