How To Write a Critical Appraisal


A critical appraisal is an academic exercise: the systematic identification of the strengths and weaknesses of a research article, with the intent of evaluating the usefulness and validity of its research findings. As with all essays, you need to be clear, concise, and logical in your presentation of arguments, analysis, and evaluation. In a critical appraisal, however, there are some specific sections to consider, and these will form the main basis of your work.

Structure of a Critical Appraisal

Introduction

Your introduction should introduce the work to be appraised and explain how you intend to proceed: set out how you will assess the article and the criteria you will use. Focusing your introduction on these areas ensures that your readers understand your purpose and are interested in reading on. It should be clear that you are undertaking a scientific and literary dissection of the indicated work to assess its validity and credibility, expressed in an engaging way.

Body of the Work

The body of the work should be separated into clear paragraphs that cover each section of the work, with sub-sections for each point being covered. In every paragraph, your perspectives should be backed up with hard evidence from credible sources (fully cited and referenced at the end), not expressed as opinion or your own personal point of view. Remember that this is a critical appraisal, not a catalogue of the work's faults.

When appraising the introduction of the article, ask yourself whether the article answers the main question it poses. Alongside this, look at the date of publication: generally you want works published within the past five years, unless they are seminal works that have strongly influenced subsequent developments in the field. Identify whether the journal in which the article was published is peer reviewed and, importantly, whether a hypothesis has been presented. Be objective, concise, and coherent in your presentation of this information.

Once you have appraised the introduction, you can move on to the methods (or the body of the text if the work is not of a scientific or experimental nature). To appraise the methods effectively, examine whether the approach used to draw conclusions (i.e., the methodology) is appropriate for the research question or overall topic. If it is not, indicate why not in your appraisal, with evidence to back up your reasoning. Examine the sample population (if there is one), or the data gathered, and evaluate whether it is appropriate, sufficient, and viable, before considering the data collection methods and survey instruments used. Are they fit for purpose? Do they meet the needs of the paper? Again, your arguments should be backed up by strong, credible sources.

One of the most significant areas of appraisal is the results and conclusions presented by the authors. For the results, identify whether facts and figures are presented to confirm findings, and assess whether any statistical tests used are viable, reliable, and appropriate to the work conducted, and whether they have been clearly explained and introduced during the work. You also need to present evidence that the authors' results are unbiased and objective or, if not, evidence of how they are biased. In this section you should also dissect the results and identify whether any statistical significance reported is accurate, and whether the results presented and discussed align with the tables and figures.
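
Where an article reports only summary statistics (group means, standard deviations, and sample sizes), you can sanity-check a claimed p-value yourself. Below is a minimal sketch in Python; all the figures are hypothetical placeholders, and it assumes the authors ran an independent two-sample t-test (scipy's ttest_ind_from_stats works directly from summary figures).

    # Recompute a reported p-value from published summary statistics.
    # The numbers below are hypothetical placeholders for the values
    # reported in the article being appraised.
    from scipy import stats

    mean_treat, sd_treat, n_treat = 24.3, 6.1, 40   # treatment group: mean, SD, n
    mean_ctrl, sd_ctrl, n_ctrl = 21.0, 5.8, 42      # control group: mean, SD, n

    # Independent two-sample t-test from summary statistics.
    # equal_var=False gives Welch's t-test, which does not assume
    # equal variances between the two groups.
    t_stat, p_value = stats.ttest_ind_from_stats(
        mean_treat, sd_treat, n_treat,
        mean_ctrl, sd_ctrl, n_ctrl,
        equal_var=False,
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # compare with the reported p-value

If the recomputed p-value differs materially from the one reported, that discrepancy is itself evidence you can cite in your appraisal.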

The final element of the body text is the appraisal of the discussion and conclusion sections. Here you need to identify whether the authors have drawn realistic conclusions from their available data, whether they have acknowledged any clear limitations of their work, and whether their conclusions match those you would have drawn from the same findings.

The conclusion of the appraisal should not introduce any new information; it should be a concise summing up (or precis) of the key points identified in the body text. The aim is to bring the whole paper together and state an opinion, based on the evaluated evidence, of how valid and reliable the appraised paper can be considered within its subject area. In all cases, you should cite and reference all sources used. To help you achieve a first-class critical appraisal, we have put together some key phrases that can help lift your work above that of others.

Key Phrases for a Critical Appraisal

  • Whilst the title might suggest…
  • The focus of the work appears to be…
  • The author challenges the notion that…
  • The author makes the claim that…
  • The article makes a strong contribution through…
  • The approach provides the opportunity to…
  • The authors consider…
  • The argument is not entirely convincing because…
  • However, whilst it can be agreed that… it should also be noted that…
  • Several crucial questions are left unanswered…
  • It would have been more appropriate to have stated that…
  • This framework extends and increases…
  • The authors correctly conclude that…
  • The authors' efforts can be considered as…
  • Less convincing is the generalisation that…
  • This appears to mislead readers by indicating that…
  • This research proves to be timely and particularly significant in the light of…


How to Write Critical Reviews

When you are asked to write a critical review of a book or article, you will need to identify, summarize, and evaluate the ideas and information the author has presented. In other words, you will be examining another person’s thoughts on a topic from your point of view.

Your stand must go beyond your “gut reaction” to the work and be based on your knowledge (readings, lecture, experience) of the topic as well as on factors such as criteria stated in your assignment or discussed by you and your instructor.

Make your stand clear at the beginning of your review, in your evaluations of specific parts, and in your concluding commentary.

Remember that your goal should be to make a few key points about the book or article, not to discuss everything the author writes.

Understanding the Assignment

To write a good critical review, you will have to engage in the mental processes of analyzing (taking apart) the work: deciding what its major components are and determining how these parts (i.e., paragraphs, sections, or chapters) contribute to the work as a whole.

Analyzing the work will help you focus on how and why the author makes certain points and prevent you from merely summarizing what the author says. Assuming the role of an analytical reader will also help you to determine whether or not the author fulfills the stated purpose of the book or article and enhances your understanding or knowledge of a particular topic.

Be sure to read your assignment thoroughly before you read the article or book. Your instructor may have included specific guidelines for you to follow. Keeping these guidelines in mind as you read the article or book can really help you write your paper!

Also, note where the work connects with what you’ve studied in the course. You can make the most efficient use of your reading and notetaking time if you are an active reader; that is, keep relevant questions in mind and jot down page numbers as well as your responses to ideas that appear to be significant as you read.

Please note: The length of your introduction and overview, the number of points you choose to review, and the length of your conclusion should be proportionate to the page limit stated in your assignment and should reflect the complexity of the material being reviewed as well as the expectations of your reader.

Write the introduction

Below are a few guidelines to help you write the introduction to your critical review.

Introduce your review appropriately

Begin your review with an introduction appropriate to your assignment.

If your assignment asks you to review only one book and not to use outside sources, your introduction will focus on identifying the author, the title, the main topic or issue presented in the book, and the author’s purpose in writing the book.

If your assignment asks you to review the book as it relates to issues or themes discussed in the course, or to review two or more books on the same topic, your introduction must also encompass those expectations.

Explain relationships

For example, before you can review two books on a topic, you must explain to your reader in your introduction how they are related to one another.

Within this shared context (or under this “umbrella”) you can then review comparable aspects of both books, pointing out where the authors agree and differ.

In other words, the more complicated your assignment is, the more your introduction must accomplish.

Finally, the introduction to a book review is always the place for you to establish your position as the reviewer (your thesis about the author’s thesis).

As you write, consider the following questions:

  • Is the book a memoir, a treatise, a collection of facts, an extended argument, etc.? Is the article a documentary, a write-up of primary research, a position paper, etc.?
  • Who is the author? What does the preface or foreword tell you about the author’s purpose, background, and credentials? What is the author’s approach to the topic (as a journalist? a historian? a researcher?)?
  • What is the main topic or problem addressed? How does the work relate to a discipline, to a profession, to a particular audience, or to other works on the topic?
  • What is your critical evaluation of the work (your thesis)? Why have you taken that position? What criteria are you basing your position on?

Provide an overview

In your introduction, you will also want to provide an overview. An overview supplies your reader with certain general information not appropriate for including in the introduction but necessary to understanding the body of the review.

Generally, an overview describes your book’s division into chapters, sections, or points of discussion. An overview may also include background information about the topic, about your stand, or about the criteria you will use for evaluation.

The overview and the introduction work together to provide a comprehensive beginning for (a “springboard” into) your review.

  • What are the author’s basic premises? What issues are raised, or what themes emerge? What situation (e.g., racism on college campuses) provides a basis for the author’s assertions?
  • How informed is my reader? What background information is relevant to the entire book and should be placed here rather than in a body paragraph?

Write the body

The body is the center of your paper, where you draw out your main arguments. Below are some guidelines to help you write it.

Organize using a logical plan

Organize the body of your review according to a logical plan. Here are two options:

  • First, summarize, in a series of paragraphs, those major points from the book that you plan to discuss; incorporating each major point into a topic sentence for a paragraph is an effective organizational strategy. Second, discuss and evaluate these points in a following group of paragraphs. (There are two dangers lurking in this pattern: you may allot too many paragraphs to summary and too few to evaluation, or you may re-summarize too many points from the book in your evaluation section.)
  • Alternatively, you can summarize and evaluate the major points you have chosen from the book in a point-by-point schema. That means you will discuss and evaluate point one within the same paragraph (or in several if the point is significant and warrants extended discussion) before you summarize and evaluate point two, point three, etc., moving in a logical sequence from point to point to point. Here again, it is effective to use the topic sentence of each paragraph to identify the point from the book that you plan to summarize or evaluate.

Questions to keep in mind as you write

With either organizational pattern, consider the following questions:

  • What are the author’s most important points? How do these relate to one another? (Make relationships clear by using transitions: “In contrast,” “an equally strong argument,” “moreover,” “a final conclusion,” etc.)
  • What types of evidence or information does the author present to support his or her points? Is this evidence convincing, controversial, factual, one-sided, etc.? (Consider the use of primary historical material, case studies, narratives, recent scientific findings, statistics.)
  • Where does the author do a good job of conveying factual material as well as personal perspective? Where does the author fail to do so? If solutions to a problem are offered, are they believable, misguided, or promising?
  • Which parts of the work (particular arguments, descriptions, chapters, etc.) are most effective and which parts are least effective? Why?
  • Where (if at all) does the author convey personal prejudice, support illogical relationships, or present evidence out of its appropriate context?

Keep your opinions distinct and cite your sources

Remember, as you discuss the author’s major points, be sure to distinguish consistently between the author’s opinions and your own.

Keep the summary portions of your discussion concise, remembering that your task as a reviewer is to re-see the author’s work, not to re-tell it.

And, importantly, if you refer to ideas from other books and articles or from lecture and course materials, always document your sources, or else you might wander into the realm of plagiarism.

Include only that material which has relevance for your review and use direct quotations sparingly. The Writing Center has other handouts to help you paraphrase text and introduce quotations.

Write the conclusion

You will want to use the conclusion to state your overall critical evaluation.

You have already discussed the major points the author makes, examined how the author supports arguments, and evaluated the quality or effectiveness of specific aspects of the book or article.

Now you must make an evaluation of the work as a whole, determining such things as whether or not the author achieves the stated or implied purpose and if the work makes a significant contribution to an existing body of knowledge.

Consider the following questions:

  • Is the work appropriately subjective or objective according to the author’s purpose?
  • How well does the work maintain its stated or implied focus? Does the author present extraneous material? Does the author exclude or ignore relevant information?
  • How well has the author achieved the overall purpose of the book or article? What contribution does the work make to an existing body of knowledge or to a specific group of readers? Can you justify the use of this work in a particular course?
  • What is the most important final comment you wish to make about the book or article? Do you have any suggestions for the direction of future research in the area? What has reading this work done for you or demonstrated to you?


Systematic Reviews and Evidence Syntheses: Critical Appraisal

Introduction

“Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context” (Burls, 2009). Amanda Burls is Director of Postgraduate Programmes in Evidence-Based Health Care, University of Oxford.

Critical appraisal, or risk of bias assessment, is an integral part of the systematic review methodology.

Bias can be introduced at any point in the research process, from study design to publication, and as such, there are many different forms of bias (for descriptions and examples, see the University of Oxford’s Biases Archive). Bias, or systematic error, can lead to inaccurate or incomplete conclusions. It is therefore imperative to assess possible sources of bias in the research included in your review.

Hundreds of critical appraisal tools (CATs) have been developed to help you do so. Rather than providing a comprehensive list, this page provides a short list of CATs recommended by expert review groups and health technology assessment organizations. The list is organized by study design.

Defining terms

Critical appraisal includes the assessment of several related features: risk of bias, quality of reporting, precision, and external validity.

Critical appraisal has always been a defining feature of the systematic review methodology. However, early critical appraisal tools were structured as 'scales' that rolled many of the previously mentioned features into one combined score. More recently, consensus has emerged within the health sciences that, in the case of systematic reviews of interventions, critical appraisal should focus on risk of bias alone. Cochrane's risk of bias tools, RoB 2 and ROBINS-I, were developed for this purpose.

Due to the evolution of critical appraisal within the systematic review methodology, you may hear folks use the terms "critical appraisal" and "risk of bias" interchangeably. It is useful to recall the differences between these terms and other related terms.

Critical appraisal (also called: critical assessment or quality assessment) includes the assessment of several related features: risk of bias, quality of reporting, precision, and external validity.

Risk of bias is equivalent to internal validity.

Internal validity can be defined as "the extent to which the observed results represent the truth in the population we are studying and, thus, are not due to methodological errors" (Patino & Ferreira, 2018).

Quality of reporting refers to how accurately and thoroughly the study's methodology was reported.

Precision refers to random error. "Precision depends on the number of participants and (for dichotomous outcomes) the number of events in a study, and is reflected in the confidence interval around the intervention effect estimate from each study" (Cochrane Handbook).
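
As a concrete illustration (a standard large-sample formulation, not a quotation from the Handbook), a 95% confidence interval around an effect estimate \hat{\theta} takes the form

    \hat{\theta} \pm 1.96 \times \mathrm{SE}(\hat{\theta})

Because the standard error shrinks roughly in proportion to 1/\sqrt{n}, small studies produce wide intervals; a wide interval signals low precision even when the point estimate looks impressive.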

External validity refers to generalizability; "the extent to which the results of a study can be generalized to other populations and settings" (Cochrane Handbook).

Critical Appraisal Tools

Case Control

  • Creator: Critical Appraisal Skills Programme (CASP)
  • Creator: Joanna Briggs Institute

Clinical Prediction Rule

Cross-Sectional

  • Creator: Downes, Brennan, et al.
  • Creator: Wirsching, et al.
  • Creator: National Collaborating Centre for Environmental Health
  • Creator: Oxford University’s Centre for Evidence-Based Medicine

Economic Evaluations

Mixed Methods

  • Creator: National Collaborating Centre for Methods and Tools, McMaster University

Other Quantitative

  • Creator: Cochrane Methods

Qualitative

Randomized Controlled Trials, Systematic Reviews

  • Creator: Bruyere Research Institute, Ottawa Hospital Research Institute, et al.

Reviews of Critical Appraisal Tools

This section lists articles that have reviewed or inventoried CATs. These articles can serve as more comprehensive catalogs of previously developed CATs.

Buccheri RK, Sharifi C. Critical Appraisal Tools and Reporting Guidelines for Evidence-Based Practice. Worldviews Evid Based Nurs. 2017;14(6):463-472. doi: 10.1111/wvn.12258

Munthe-Kaas HM, Glenton C, Booth A, Noyes J, Lewin S. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Med Res Methodol. 2019;19(1):113. doi: 10.1186/s12874-019-0728-6


Systematic Reviews: Critical Appraisal by Study Design


Tools for Critical Appraisal of Studies


“The purpose of critical appraisal is to determine the scientific merit of a research report and its applicability to clinical decision making.” 1 Conducting a critical appraisal of a study is imperative to any well-executed evidence review, but the process can be time consuming and difficult. 2 As one guide puts it, “a methodological approach coupled with the right tools and skills to match these methods is essential for finding meaningful results.” 3 In short, critical appraisal is a method of differentiating good research from bad research.

Critical Appraisal by Study Design (featured tools)

  • Non-RCTs or Observational Studies
  • Diagnostic Accuracy
  • Animal Studies
  • Qualitative Research
  • Tool Repository
  • AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews): The original AMSTAR was developed to assess the risk of bias in systematic reviews that included only randomized controlled trials. AMSTAR 2 was published in 2017 and allows researchers to “identify high quality systematic reviews, including those based on non-randomised studies of healthcare interventions.” 5
  • ROBIS (Risk of Bias in Systematic Reviews): ROBIS is a tool designed specifically to assess the risk of bias in systematic reviews. “The tool is completed in three phases: (1) assess relevance (optional), (2) identify concerns with the review process, and (3) judge risk of bias in the review. Signaling questions are included to help assess specific concerns about potential biases with the review.” 6
  • BMJ Framework for Assessing Systematic Reviews: This framework provides a checklist that is used to evaluate the quality of a systematic review.
  • CASP (Critical Appraisal Skills Programme) Checklist for Systematic Reviews: This CASP checklist is not a scoring system, but rather a method of appraising systematic reviews by considering: 1. Are the results of the study valid? 2. What are the results? 3. Will the results help locally?
  • CEBM (Centre for Evidence-Based Medicine) Systematic Reviews Critical Appraisal Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • JBI Critical Appraisal Tools, Checklist for Systematic Reviews: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis.
  • NHLBI (National Heart, Lung, and Blood Institute) Study Quality Assessment of Systematic Reviews and Meta-Analyses: The NHLBI’s quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.
  • RoB 2 (revised tool to assess Risk of Bias in randomized trials): RoB 2 “provides a framework for assessing the risk of bias in a single estimate of an intervention effect reported from a randomized trial,” rather than the entire trial. 7
  • CASP Randomised Controlled Trials Checklist: This CASP checklist considers various aspects of an RCT that require critical appraisal: 1. Is the basic study design valid for a randomized controlled trial? 2. Was the study methodologically sound? 3. What are the results? 4. Will the results help locally?
  • CONSORT (Consolidated Standards of Reporting Trials) Statement: The CONSORT checklist includes 25 items to determine the quality of randomized controlled trials. “Critical appraisal of the quality of clinical trials is possible only if the design, conduct, and analysis of RCTs are thoroughly and accurately described in the report.” 8
  • NHLBI Study Quality Assessment of Controlled Intervention Studies: The NHLBI’s quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.
  • JBI Critical Appraisal Tools, Checklist for Randomized Controlled Trials: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis.
  • ROBINS-I (Risk Of Bias in Non-randomized Studies - of Interventions): ROBINS-I is a “tool for evaluating risk of bias in estimates of the comparative effectiveness… of interventions from studies that did not use randomization to allocate units… to comparison groups.” 9
  • NOS (Newcastle-Ottawa Scale): This tool is used primarily to evaluate and appraise case-control or cohort studies.
  • AXIS (Appraisal tool for Cross-Sectional Studies): Cross-sectional studies are frequently used as an evidence base for diagnostic testing, risk factors for disease, and prevalence studies. “The AXIS tool focuses mainly on the presented [study] methods and results.” 10
  • NHLBI Study Quality Assessment Tools for Non-Randomized Studies: The NHLBI’s quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study. They include the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, the Quality Assessment of Case-Control Studies, the Quality Assessment Tool for Before-After (Pre-Post) Studies With No Control Group, and the Quality Assessment Tool for Case Series Studies.
  • Case Series Studies Quality Appraisal Checklist: Developed by the Institute of Health Economics (Canada), the checklist comprises 20 questions to assess “the robustness of the evidence of uncontrolled, [case series] studies.” 11
  • Methodological Quality and Synthesis of Case Series and Case Reports: In this paper, Dr. Murad and colleagues “present a framework for appraisal, synthesis and application of evidence derived from case reports and case series.” 12
  • MINORS (Methodological Index for Non-Randomized Studies): The MINORS instrument contains 12 items and was developed for evaluating the quality of observational or non-randomized studies. 13 This tool may be of particular interest to researchers who would like to critically appraise surgical studies.
  • JBI Critical Appraisal Tools for Non-Randomized Trials: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis. Checklists are available for Analytical Cross Sectional Studies, Case Control Studies, Case Reports, Case Series, and Cohort Studies.
  • QUADAS-2 (a revised tool for the Quality Assessment of Diagnostic Accuracy Studies): The QUADAS-2 tool “is designed to assess the quality of primary diagnostic accuracy studies… [it] consists of 4 key domains that discuss patient selection, index test, reference standard, and flow of patients through the study and timing of the index tests and reference standard.” 14
  • JBI Critical Appraisal Tools, Checklist for Diagnostic Test Accuracy Studies: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis.
  • STARD 2015 (Standards for the Reporting of Diagnostic Accuracy Studies): The authors of the standards note that “[e]ssential elements of [diagnostic accuracy] study methods are often poorly described and sometimes completely omitted, making both critical appraisal and replication difficult, if not impossible.” The standards were developed “to help… improve completeness and transparency in reporting of diagnostic accuracy studies.” 15
  • CASP Diagnostic Study Checklist: This CASP checklist considers various aspects of diagnostic test studies, including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • CEBM Diagnostic Critical Appraisal Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • SYRCLE’s RoB (SYstematic Review Center for Laboratory animal Experimentation’s Risk of Bias): “[I]mplementation of [SYRCLE’s RoB tool] will facilitate and improve critical appraisal of evidence from animal studies. This may… enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the methodological quality of animal studies.” 16
  • ARRIVE 2.0 (Animal Research: Reporting of In Vivo Experiments): “The [ARRIVE 2.0] guidelines are a checklist of information to include in a manuscript to ensure that publications [on in vivo animal studies] contain enough information to add to the knowledge base.” 17
  • Critical Appraisal of Studies Using Laboratory Animal Models: This article provides “an approach to critically appraising papers based on the results of laboratory animal experiments,” and discusses various “bias domains” in the literature that critical appraisal can identify. 18
  • CEBM Critical Appraisal of Qualitative Studies Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • CASP Qualitative Studies Checklist: This CASP checklist considers various aspects of qualitative research studies, including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • Quality Assessment and Risk of Bias Tool Repository: Created by librarians at Duke University, this extensive listing contains over 100 commonly used risk of bias tools that may be sorted by study type.
  • Latitudes Network: A library of risk of bias tools for use in evidence syntheses that provides selection help and training videos.

References & Recommended Reading

1. Kolaski K, Logan LR, Ioannidis JP. Guidance to best tools and practices for systematic reviews. British Journal of Pharmacology. 2024;181(1):180-210.

2. Portney LG. Foundations of Clinical Research: Applications to Evidence-Based Practice. 4th ed. Philadelphia: F.A. Davis; 2020.

3. Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. BMJ (Clinical research ed). 1991;302(6785):1136-1140.

4. Singh S. Critical appraisal skills programme. Journal of Pharmacology and Pharmacotherapeutics. 2013;4(1):76-77.

5. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358:j4008.

6. Whiting P, Savovic J, Higgins JPT, et al. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology. 2016;69:225-234.

7. Sterne JAC, Savovic J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ (Clinical research ed). 2019;366:l4898.

8. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology. 2010;63(8):e1-37.

9. Sterne JA, Hernan MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ (Clinical research ed). 2016;355:i4919.

10. Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. 2016;6(12):e011458.

11. Guo B, Moga C, Harstall C, Schopflocher D. A principal component analysis is conducted for a case series quality appraisal checklist. Journal of Clinical Epidemiology. 2016;69:199-207.e192.

12. Murad MH, Sultan S, Haffar S, Bazerbachi F. Methodological quality and synthesis of case series and case reports. BMJ Evidence-Based Medicine. 2018;23(2):60-63.

13. Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J. Methodological index for non-randomized studies (MINORS): development and validation of a new instrument. ANZ Journal of Surgery. 2003;73(9):712-716.

14. Whiting PF, Rutjes AWS, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine. 2011;155(8):529-536.

15. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ (Clinical research ed). 2015;351:h5527.

16. Hooijmans CR, Rovers MM, de Vries RBM, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Medical Research Methodology. 2014;14:43.

17. Percie du Sert N, Ahluwalia A, Alam S, et al. Reporting animal research: Explanation and elaboration for the ARRIVE guidelines 2.0. PLoS Biology. 2020;18(7):e3000411.

18. O'Connor AM, Sargeant JM. Critical appraisal of studies using laboratory animal models. ILAR Journal. 2014;55(3):405-417.



Critical appraisal for medical and health sciences: 3. Checklists


Using the checklists



"The process of assessing and interpreting evidence by systematically considering its validity , results and relevance ."

The checklists will help you consider these three areas as part of your critical appraisal. The following sections give an overview.


There will be particular biases to look out for, depending on the study type.

For example, the checklists and guidance will help you to scrutinise: 

  • Was the study design appropriate for the research question?
  • How were participants selected? Has there been an attempt to minimise bias in this selection process?
  • Were potential ethical issues addressed? 
  • Was there any failure to account for subjects dropping out of the study?


  • How was data collected and analysed?
  • Are the results reliable?
  • Are the results statistically significant?

The following e-resources, developed by the University of Nottingham, may be useful when appraising quantitative studies (see the worked example after this list):

  • Confidence intervals
  • Numbers Needed to Treat (NNT)
  • Relative Risk Reduction (RRR) and Absolute Risk Reduction (ARR)
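
As a worked illustration of the last two measures (standard definitions; the event rates are hypothetical), suppose the outcome occurs in 20% of control patients (CER = 0.20) and 15% of treated patients (EER = 0.15):

    \mathrm{ARR} = \mathrm{CER} - \mathrm{EER} = 0.20 - 0.15 = 0.05
    \mathrm{RRR} = \mathrm{ARR} / \mathrm{CER} = 0.05 / 0.20 = 25\%
    \mathrm{NNT} = 1 / \mathrm{ARR} = 1 / 0.05 = 20

A "25% relative risk reduction" sounds dramatic, but the NNT of 20 shows that twenty patients must be treated for one additional patient to benefit; checking both framings guards against being misled by relative effects alone.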


Finally, the checklists will assist you in determining:

  • Can you use the results in your situation?
  • How applicable are they to your patient or research topic?
  • Was the study well conducted?
  • Are the results valid and reproducible?
  • What do the studies tell us about the current state of science?

Where do I look for this information?

Most articles follow the IMRAD format (Introduction, Methods, Results and Discussion; Greenhalgh, 2014, p. 28), with an abstract at the beginning.

Each part of the article helps answer different appraisal questions:

  • Introduction: what question did the authors ask, and why?
  • Methods: how was the study designed, and how were data collected and analysed?
  • Results: what was found?
  • Discussion: what do the findings mean, and what are the limitations?

  • Greenhalgh, Trisha. How to Read a Paper: The Basics of Evidence-Based Medicine and Healthcare.

Checklists and tools

  • AMSTAR checklist for systematic reviews
  • Cardiff University critical appraisal checklists
  • CEBM Critical appraisal worksheets
  • Scottish Intercollegiate Guidelines Network checklists
  • JBI Critical appraisal tools
  • CASP checklists

Checklists for different study types

  • Systematic Review
  • Randomised Controlled Trial (RCT)
  • Qualitative study
  • Cohort study
  • Case-control study
  • Case report
  • In vivo animal studies
  • In vitro studies
  • Grey literature


There are different checklists for different study types, as each is prone to different biases.

The following sections will give you an overview of some of the different study types you will come across, the sort of questions you will need to consider with each, and the checklists you can use.

Not sure what type of study you're looking at? See the Spotting the Study Design guide from the Centre for Evidence-Based Medicine for more help.

What is a systematic review?

A review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review. Statistical methods (meta-analysis) may or may not be used to analyse and summarise the results of the included studies.

From Cochrane Glossary
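
To make the meta-analysis step concrete: in the common fixed-effect, inverse-variance approach (a standard formulation, not part of the Cochrane glossary entry above), each study's effect estimate \hat{\theta}_i is weighted by the inverse of its variance, so larger and more precise studies contribute more to the pooled estimate:

    \hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\mathrm{SE}_i^2}

When appraising a review, check whether pooling was justified at all (were the studies clinically and statistically similar enough?) and whether a random-effects model was used where heterogeneity was substantial.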

Some questions to ask when critically appraising a systematic review:

  • Do you think all the important, relevant studies were included?
  • Did the review’s authors do enough to assess quality of the included studies?
  • If the results of the review have been combined, was it reasonable to do so?

From: Critical Appraisal Skills Programme (2018). CASP Systematic Review Checklist. [online] Available at: https://casp-uk.net/casp-tools-checklists/. Accessed: 22/08/2018.

Checklists you can use to critically appraise a systematic review:

What is a randomised controlled trial?

An experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants. In most trials one intervention is assigned to each individual, but sometimes assignment is to defined groups of individuals (for example, in a household) or interventions are assigned within individuals (for example, in different orders or to different parts of the body).

Some questions to ask when critically appraising RCTs:

  • Was the assignment of patients to treatments randomised?
  • Were patients, health workers and study personnel ‘blind’ to treatment, i.e. could they tell who was in each group?
  • Were all of the patients who entered the trial properly accounted for at its conclusion?
  • Were all participants analysed in the groups to which they were randomised, i.e. was an intention-to-treat analysis undertaken?

From: Critical Appraisal Skills Programme (2018). CASP Randomised Controlled Trial Checklist. [online] Available at: https://casp-uk.net/casp-tools-checklists/. Accessed: 22/08/2018.

Checklists you can use to critically appraise an RCT:

What is a qualitative study?

Qualitative research is designed to explore the human elements of a given topic, where specific methods are used to examine how individuals see and experience the world... Qualitative methods are best for addressing many of the why questions that researchers have in mind when they develop their projects. Where quantitative approaches are appropriate for examining who has engaged in a behavior or what has happened, and while experiments can test particular interventions, these techniques are not designed to explain why certain behaviors occur. Qualitative approaches are typically used to explore new phenomena and to capture individuals’ thoughts, feelings, or interpretations of meaning and process.

From Given, L. (2008) The SAGE Encyclopedia of Qualitative Research Methods. Sage: London.

Some questions to ask when critically appraising a qualitative study:

  • What was the selection process and was it appropriate? 
  • Were potential ethical issues addressed, such as the potential impact of the researcher on the participants? Has anything been done to limit the effects of this?
  • Was the data analysis done using explicit, rigorous, and justified methods?

From: Critical Appraisal Skills Programme (2018). CASP Qualitative Checklist. [online] Available at: https://casp-uk.net/casp-tools-checklists/. Accessed: 22/08/2018.

Checklists you can use to critically appraise a qualitative study:


What is a cohort study?

An observational study in which a defined group of people (the cohort) is followed over time. The outcomes of people in subsets of this cohort are compared, to examine people who were exposed or not exposed (or exposed at different levels) to a particular intervention or other factor of interest. A prospective cohort study assembles participants and follows them into the future. A retrospective (or historical) cohort study identifies subjects from past records and follows them from the time of those records to the present. Because subjects are not allocated by the investigator to different interventions or other exposures, adjusted analysis is usually required to minimise the influence of other factors (confounders).

Some questions to ask when critically appraising a cohort study

  • Have there been any attempts to limit selection bias or other types of bias?
  • Have the authors identified any confounding factors?
  • Are the results precise and reliable?

From: Critical Appraisal Skills Programme (2018). CASP Cohort Study Checklist. [online] Available at: https://casp-uk.net/casp-tools-checklists/. Accessed: 22/08/2018.

Checklists you can use to critically appraise a cohort study:

What is a case-control study?

A study that compares people with a specific disease or outcome of interest (cases) to people from the same population without that disease or outcome (controls), and which seeks to find associations between the outcome and prior exposure to particular risk factors. This design is particularly useful where the outcome is rare and past exposure can be reliably measured. Case-control studies are usually retrospective, but not always.

Some questions to ask  when critically appraising a case-control study:

  • Was the recruitment process appropriate? Is there any evidence of selection bias?
  • Have all confounding factors been accounted for?
  • How precise is the estimate of the effect? Were confidence intervals given?
  • Do you believe the results?

From: Critical Appraisal Skills Programme (2018). CASP Case Control Study Checklist. [online] Available at: https://casp-uk.net/casp-tools-checklists/. Accessed: 22/08/2018.

Checklists you can use to critically appraise a case-control study:

What is a case report?

A study reporting observations on a single individual.

Some questions to ask  when critically appraising a case report:

  • Is the researcher’s perspective clearly described and taken into account?
  • Are the methods for collecting data clearly described?
  • Are the methods for analysing the data likely to be valid and reliable?
  • Are quality control measures used?
  • Was the analysis repeated by more than one researcher to ensure reliability?
  • Are the results credible, and if so, are they relevant for practice? Are the results easy to understand?
  • Are the conclusions drawn justified by the results?
  • Are the findings of the study transferable to other settings?

From: Roever and Reis (2015), ‘Critical Appraisal of a Case Report’, Evidence Based Medicine and Practice, Vol. 1 (1).

Checklists you can use to critically appraise a case report:

  • CEBM critical appraisal of a case study

What are in vivo animal studies?

In vivo animal studies are experiments carried out using animals as models. These studies are usually pre-clinical, often bridging the gap between in vitro experiments (using cell or microorganisms) and research with human participants.

The ARRIVE guidelines provide suggested minimum reporting standards for in vivo experiments using animal models. You can use these to help you evaluate the quality and transparency of animal studies.

Some questions to ask when critically appraising in vivo studies:

  • Is the study/experimental design explained clearly?
  • Was the sample size clearly stated, with information about how sample size was decided?
  • Was randomisation used?
  • Who was aware of group allocation at each stage of the experiment?
  • Were outcome measures clearly defined and assessed?
  • Were the statistical methods used clearly explained?
  • Were all relevant details about the animals used in the experiment clearly outlined (species, strain and substrain, sex, age or developmental stage, and, if relevant, weight)?
  • Were experimental procedures explained in enough detail for them to be replicated?
  • Were the results clear, with relevant statistics included?

Adapted from:  The ARRIVE guidelines 2.0: author checklist

The ARRIVE guidelines 2.0: author checklist

While this checklist has been designed for authors to help while writing their studies, you can use the checklist to help you identify whether or not a study reports all of the required elements effectively.

SciRAP: evaluation of in vivo toxicity studies tool

The SciRAP method for evaluating the reliability of in vivo toxicity studies consists of criteria for evaluating both the reporting quality and the methodological quality of studies, separately. You can switch between evaluation of reporting and methodological quality.

Further guidance 

Hooijmans CR, Rovers MM, de Vries RB, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Med Res Methodol. 2014;14:43. doi: 10.1186/1471-2288-14-43. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4230647/

Kilkenny C, et al. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8:e1000412. doi: 10.1371/journal.pbio.1000412. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1000412

Moermond CT, Kase R, Korkaric M, Ågerstrand M. CRED: Criteria for reporting and evaluating ecotoxicity data. Environ Toxicol Chem. 2016;35(5):1297-309. doi: 10.1002/etc.3259.

What are in vitro studies?

In vitro studies involve tests carried out outside of a living organism, usually involving tissues, organs or cells.

Some questions to ask when critically appraising in vitro studies:

  • Is there a clear and detailed description of the results, the test conditions and the interpretation of the results? 
  • Do the authors clearly communicate the limitations of the method/s used?
  • Do the authors use a validated method? 

Adapted from:  https://echa.europa.eu/support/registration/how-to-avoid-unnecessary-testing-on-animals/in-vitro-methods 

Guidance and checklists

SciRAP: evaluation of in vitro toxicity studies tool

The SciRAP method for evaluating the reliability of in vitro toxicity studies consists of criteria for evaluating both the reporting quality and the methodological quality of studies, separately. You can switch between evaluation of reporting and methodological quality.

Development and validation of a risk-of-bias tool for assessing in vitro studies conducted in dentistry: The QUIN

A checklist designed to support the evaluation of in vitro dentistry studies, although it can be used to assess risk of bias in other types of in vitro studies.

What is grey literature?

The term grey literature is used to describe a wide range of different information that is produced outside of traditional publishing and distribution channels, and which is often not well represented in indexing databases.

A widely accepted definition in the scholarly community for grey literature is

"information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing" ie. where publishing is not the primary activity of the producing body." 

From: Third International Conference on Grey Literature in 1997 (ICGL Luxembourg definition, 1997  - Expanded in New York, 2004).


Some questions to ask when critically appraising grey literature:

  • Who is the author? Are they credible, and do they have appropriate qualifications to speak to the subject?
  • Does the source have a clearly stated aim or brief, and does it meet this?
  • Does the source reference credible and authoritative sources?
  • Is any data collection valid and appropriate for its purpose?
  • Are any limits stated (e.g., missing data, or information outside the scope or resources of the project)?
  • Is the source objective, or does it support a viewpoint that could be biased?
  • Does the source have an identifiable date?
  • Is the source appropriate and relevant to your chosen research area?

Adapted from the AACODS checklist

AACODS Checklist

The AACODS checklist has been designed to support the evaluation and critical appraisal of grey literature. 


Occupational Therapy and Rehabilitation Sciences


PRISMA, or Preferred Reporting Items for Systematic Reviews and Meta-Analyses, is an evidence-based protocol for reporting information in systematic reviews and meta-analyses.

  • The PRISMA Statement is a 27-item checklist and a four-phase flow diagram to help authors improve the reporting of systematic reviews and meta-analyses.
  • PRISMA also offers editable templates for the flow diagram as PDF and Word documents.

Appraising the Evidence: Getting Started

To appraise the quality of evidence, it is essential to understand the nature of the evidence source. Begin the appraisal process by considering these general characteristics:

  • Is the source primary, secondary or tertiary? (See University of Minnesota Library -  Primary, Secondary, and Tertiary Sources in the Health Sciences )
  • If the source is a journal article, what kind of article is it? (A report of original research? A review article? An opinion or commentary?)
  • If the source is reporting original research, what was the purpose of the research?
  • What is the date of publication?
  • Would the evidence presented in the source still be applicable today? (Consider: has technology changed? Have recommended best clinical practices changed? Has consensus understanding of a disease, condition, or treatment changed?)

Authority/Accuracy

  • Who is the author? What are the author's credentials and qualifications to write on the topic?
  • Was the source published by a credible entity? (A scholarly journal? A popular periodical, e.g., a newspaper or magazine? An association? An organization?)
  • Did the source go through a peer review or editorial process before being published? (See this section of the guide for more information about locating peer reviewed articles)

Determining Study Methodology

Understanding how a study was conducted (the methodology) is fundamental for determining the level of evidence that was generated by the study, as well as for assessing the quality of that evidence. While some papers state explicitly in the title what kind of method was used, it is often not so straightforward. When looking at a report of a study, there are a few techniques you can use to help classify the study design.

1. Notice Metadata in Database Records

In some bibliographic databases, there is information in the Subject field or the Publication Type field of the record that can provide information about a study's methodology. Try to locate the record for the article of interest in CINAHL, PubMed or PsycINFO and look for information describing the study (e.g., is it tagged as a "randomized controlled trial," a "case report," an "observational study," a "review" article, etc.).

  • A word of caution: A "review" article is not necessarily a "systematic review." Even if the title or abstract says "systematic review," carefully evaluate what type of review it is (a systematic review of interventions? a mixed methods SR? a scoping review? a narrative review?).

2. Read the Methods Section

While there may be some information in the abstract that indicates a study's design, it is often necessary to read the full methods section in order to truly understand how the study was conducted.  For help understanding the major types of research methodologies within the health sciences, see:

  • Understanding Research Study Designs  (University of Minnesota Library)
  • Study Designs  (Centre for Evidence Based Medicine)
  • Jeremey Howick's  Introduction to Study Designs  (Flow Chart) [PDF]
  • Quantitative Study Designs  (Deakin University Library)
  • Grimes, D. A., & Schulz, K. F. (2002). An overview of clinical research: the lay of the land .  Lancet (London, England) ,  359 (9300), 57–61. https://doi.org/10.1016/S0140-6736(02)07283-5
  • Deconstructing the Research Article (May/Jun 2022; 42(3): 138-140)
  • Background, Significance, and Literature Review (Jul/Aug 2022; 42(4): 203-205)
  • Purpose Statement, Research Questions, and Hypotheses (Sep/Oct 2022; 42(5): 249-257)
  • Quantitative Research Designs (Nov/Dec 2022; 42(6): 303-311)
  • Qualitative Research Designs (Jan/Feb 2023; 43(1): 41-45)
  • Non-Experimental Research Designs (Mar/Apr 2023; 43(2): 99-102)

Once the study methodology is understood, a tool or checklist can be selected to appraise the quality of the evidence that was generated by that study.  

Critical Appraisal Resources

In order to select a tool for critical appraisal (also known as quality assessment or "risk of bias" assessment), it is necessary to understand what methodology was used in the study. (For help understanding study design, see this section of the guide.)

The list below contains critical appraisal tools and checklists, with information about what types of studies those tools are meant for. Additionally, there are links to reporting guidelines for different types of studies, which can also be useful for quality assessment.


Checklists & Tools

  • AGREE II The AGREE II instrument is a valid and reliable tool that can be applied to any practice guideline in any disease area and can be used by health care providers, guideline developers, researchers, decision/policy makers, and educators. For help using the AGREE II instrument, see the AGREE II Training Tools.

  • AMSTAR 2 AMSTAR 2 is the revised version of the popular AMSTAR tool (a tool for critically appraising systematic reviews of RCTs). AMSTAR 2 can be used to critically appraise systematic reviews that include randomized or non-randomized studies of healthcare interventions, or both.

  • Greenhalgh checklists A collection of checklists for a number of purposes related to EBM, including finding, interpreting, and evaluating research evidence. Found in Appendix 1 of Greenhalgh, Trisha. (2010). How to Read a Paper: The Basics of Evidence Based Medicine, 4th edition.

  • CEBM Critical Appraisal Sheets CEBM offers Critical Appraisal Sheets for: systematic reviews, randomised controlled trials, qualitative research studies, economic evaluation studies, cohort studies, case control studies, and diagnostic test studies.

  • GRADE The GRADE working group has developed a common, sensible and transparent approach to grading the quality of a body of evidence and the strength of recommendations that can be drawn from randomized and non-randomized trials. GRADE is meant for use in systematic reviews and other evidence syntheses (e.g., clinical guidelines) where a recommendation impacting practice will be made.

  • JBI Critical Appraisal Tools JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance and results of published papers. Checklists are available for a wide range of study designs.

  • The Patient Education Materials Assessment Tool (PEMAT) and User’s Guide The Patient Education Materials Assessment Tool (PEMAT) is a systematic method to evaluate and compare the understandability and actionability of patient education materials. It is designed as a guide to help determine whether patients will be able to understand and act on information. Separate tools are available for use with print and audiovisual materials.
  • MMAT (Mixed Methods Appraisal Tool) 2018 "The MMAT is a critical appraisal tool that is designed for the appraisal stage of systematic mixed studies reviews, i.e., reviews that include qualitative, quantitative and mixed methods studies. It permits to appraise the methodological quality of five categories of studies: qualitative research, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed methods studies."
  • PEDro Scale (Physiotherapy Evidence Database) The PEDro scale was developed to help users rapidly identify trials that are likely to be internally valid and have sufficient statistical information to guide clinical decision-making.
  • Risk of Bias (RoB) Tools The RoB 2 tool is designed for assessing risk of bias in randomized trials , while the ROBINS-I tool is meant for assessing non-randomized studies of interventions .
  • CanChild / McMaster EBP Research Group - Evidence Review Forms Evidence review forms from the McMaster University Occupational Therapy Evidence-Based Practice Research Group for appraising quantitative and qualitative evidence.

Reporting Guidelines

  • CONSORT (CONsolidated Standards Of Reporting Trials) The CONSORT Statement is an evidence-based, minimum set of standards for reporting of randomized trials. It offers a standard way for authors to prepare reports of trial findings, facilitating their complete and transparent reporting, and aiding their critical appraisal and interpretation.
  • TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) The TREND statement has a 22-item checklist specifically developed to guide standardized reporting of non-randomized controlled trials .

  • PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA primarily focuses on the reporting of reviews evaluating the effects of interventions, but can also be used as a basis for reporting systematic reviews with objectives other than evaluating interventions. There are also extensions available for scoping reviews, as well as other aspects or types of systematic reviews.

  • SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence) The SQUIRE guidelines provide a framework for reporting new knowledge about how to improve healthcare (i.e., quality improvement). These guidelines are intended for reports that describe system-level work to improve the quality, safety, and value of healthcare, and that used methods to establish that observed outcomes were due to the intervention(s).

Searchable Registries of Appraisal Tools & Reporting Guidelines

  • Equator Network: Enhancing the QUAlity and Transparency Of health Research Comprehensive searchable database of reporting guidelines for main study types and also links to other resources relevant to research reporting.
  • The Registry of Methods and Tools for Evidence-Informed Decision Making The Registry of Methods and Tools for Evidence-Informed Decision Making ("the Registry") is a collection of resources to support evidence-informed decision making in practice, programs and policy. This curated, searchable resource offers a selection of methods and tools for each step in the evidence-informed decision-making process. It includes tools related to implementation science and to assessing the applicability and transferability of evidence.

For a list of additional tools, as well as some commentary on their use, see:

Ma, L.-L., Wang, Y.-Y., Yang, Z.-H., Huang, D., Weng, H., & Zeng, X.-T. (2020). Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: What are they and which is better? Military Medical Research, 7(1), 7. https://doi.org/10.1186/s40779-020-00238-8

Determining Level of Evidence

Determining the level of evidence for a particular study or information source depends on understanding the nature of the research question being investigated and the methodology that was used to collect the evidence. See these resources for help understanding study methodologies.

There are a number of evidence hierarchies that could be used to 'rank' evidence. Which hierarchy is applied often depends on disciplinary norms - students should refer to materials and guidance from their professors about which hierarchy is appropriate to use.

  • Oxford Centre for Evidence Based Medicine - Levels of Evidence The CEBM has put together a suite of documents to enable ranking of evidence into levels. Where a study falls in the ranking depends on the methodology of the study, and what kind of question (e.g., therapy, prognosis, diagnosis) is being addressed.
  • Joanna Briggs Levels of Evidence [PDF] The JBI Levels of Evidence and Grades of Recommendation are meant to be used alongside the supporting document (PDF) outlining their use.
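As a toy illustration of what "ranking by design" means in practice, the Python sketch below sorts a handful of hypothetical papers using a simplified composite hierarchy. The levels shown are not any official scheme (the CEBM and JBI documents above define the real ones), and the appropriate hierarchy always depends on the question type and disciplinary norms.

```python
# Simplified, unofficial hierarchy for illustration only: real schemes (CEBM,
# JBI, etc.) differ by discipline and by question type (therapy, prognosis...).
EVIDENCE_LEVELS = {
    "systematic review of RCTs": 1,
    "randomized controlled trial": 2,
    "cohort study": 3,
    "case-control study": 4,
    "cross-sectional study": 5,
    "case report": 6,
    "expert opinion": 7,
}

def rank(studies):
    """Sort studies from strongest to weakest design under this toy hierarchy."""
    return sorted(studies, key=lambda s: EVIDENCE_LEVELS[s["design"]])

# Hypothetical papers, for demonstration only
papers = [
    {"title": "Paper A", "design": "case-control study"},
    {"title": "Paper B", "design": "randomized controlled trial"},
    {"title": "Paper C", "design": "cohort study"},
]
for p in rank(papers):
    print(p["title"], "-", p["design"])
```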

Critical Appraisal for Health Students (Teesside University Student & Library Services)


Appraisal of a Qualitative Paper: Top Tips



Critical appraisal of a qualitative paper

This guide, aimed at health students, provides basic-level support for appraising qualitative research papers. It is designed for students who have already attended lectures on critical appraisal. One framework for appraising qualitative research (based on four aspects of trustworthiness) is provided, and there is an opportunity to practise the technique on a sample article.

Support Materials

  • Framework for reading qualitative papers
  • Critical appraisal of a qualitative paper PowerPoint

To practise following this framework for critically appraising a qualitative article, please look at the following article:

Schellekens, M.P.J. et al. (2016) 'A qualitative study on mindfulness-based stress reduction for breast cancer patients: how women experience participating with fellow patients', Support Care Cancer, 24(4), pp. 1813-1820.

Critical appraisal of a qualitative paper: practical example.

  • Credibility
  • Transferability
  • Dependability
  • Confirmability

How to use this practical example 

Using the framework, you can have a go at appraising a qualitative paper; we are going to work through the Schellekens et al. (2016) article cited above.

Step 1. Take a quick look at the article.

Step 2. Click on the Credibility tab above: there are questions to help you appraise the trustworthiness of the article. Read the questions and look for the answers in the article.

Step 3. Click on each question and our answers will appear.

Step 4. Repeat with the other aspects of trustworthiness: transferability, dependability and confirmability.

Questioning the credibility:

  • Who is the researcher? What has been their experience? How well do they know this research area?
  • Was the best method chosen? What method did they use? Was there any justification? Was the method scrutinised by peers? Is it a recognisable method? Was there triangulation (more than one method used)?
  • How was the data collected? Was data collected from the participants at more than one time point? How long were the interviews? Were questions asked to the participants in different ways?
  • Is the research reporting what the participants actually said? Were the participants shown transcripts/notes of the interviews/observations to 'check' for accuracy? Are direct quotes used from a variety of participants?
  • How would you rate the overall credibility?

Questioning the transferability:

  • Was a meaningful sample obtained? How many people were included? Is the sample diverse? How were they selected? Are the demographics given?
  • Does the research cover diverse viewpoints? Do the results include negative cases? Was data saturation reached?
  • What is the overall transferability? Can the research be transferred to other settings?

Questioning the dependability:

  • How transparent is the audit trail? Can you follow the research steps? Are the decisions made transparent? Is the whole process explained in enough detail? Did the researcher keep a field diary? Is there a clear limitations section?
  • Was there peer scrutiny of the research? Was the research plan shown to peers/colleagues for approval and/or feedback? Did two or more researchers independently judge data?
  • How would you rate the overall dependability? Would the results be similar if the study was repeated? How consistent are the data and findings?

Questioning the confirmability:

  • Is the process of analysis described in detail? Is a method of analysis named or described? Is there sufficient detail?
  • Have any checks taken place? Was there cross-checking of themes? Was there a team of researchers?
  • Has the researcher reflected on possible bias? Is there a reflexive diary, giving a detailed log of thoughts, ideas and assumptions?
  • How do you rate the overall confirmability? Has the researcher attempted to limit bias?

Questioning the overall trustworthiness:

  • Overall, how trustworthy is the research?

Further information

See Useful resources for links, books and LibGuides to help with critical appraisal.



Evidence Based Practice: Critical Appraisal


What is a critical appraisal and why should you use it?

The critical appraisal of the quality of clinical research is one of the keys to informed decision-making in healthcare.  Critical appraisal is the process of carefully and systematically examining research evidence to judge its trustworthiness, its value and relevance in a particular context.

Critical appraisal skills promote understanding of:

  • which treatments or interventions may really work;
  • whether research has been conducted properly and has been reported reliably;
  • which papers are clinically relevant;
  • which services or treatments are potentially worth funding;
  • whether the benefits of an intervention are likely to outweigh the harms or costs;
  • what to believe when making decisions when there is conflicting research.

Critical appraisals are done using checklists, depending on the type of study being appraised. Common questions asked are:

  • What is the research question?
  • What is the study type (design)?
  • What selection considerations were applied?
  • What are the outcome factors and how are they measured?
  • What are the study factors and how are they measured?
  • What important potential confounders are considered?
  • What is the statistical method used in the study?
  • How were statistical results used and applied?
  • What conclusions did the authors reach about the research question?
  • Are ethical issues considered?

Information from Al-Jundi & Sakka, 2017; CASP, 2018; Mhaskar et al., 2009.

Study Types


Different types of clinical questions are answered by different types of study design.

Randomised Controlled Trial (RCT)

Used to answer questions about effects. Participants are randomised into two (or more) different groups and each group receives a different intervention. At the end of the trial, the effects of the different interventions are measured. Blinding (patients and investigators should not know which group the patient belongs to) is used to minimise bias. 
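To make the idea of random allocation concrete, here is a minimal, hypothetical sketch in Python of simple (unstratified) randomisation. Real trials use pre-specified randomisation procedures, often with stratification and concealed allocation; this is only an illustration of the principle.

```python
import random

def randomise(participants, arms=("intervention", "control"), seed=42):
    """Randomly allocate participants to trial arms in roughly equal numbers."""
    rng = random.Random(seed)   # seeded here only so the sketch is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal the shuffled list out across the arms, round-robin style
    return {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

# Hypothetical participant IDs
allocation = randomise([f"P{n:02d}" for n in range(1, 21)])
print(len(allocation["intervention"]), len(allocation["control"]))  # 10 10
```

Blinding is a separate step on top of this: in a blinded trial, neither participants nor investigators can see which arm the allocation produced.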

Non-Randomised Controlled Trial

This type of study does not apply randomisation, or uses a method that does not meet randomisation standards, such as alternate assignment to groups, age-based groupings, etc. After the allocation of participants to groups, the non-randomised controlled trial resembles a cohort study.

(Grimes & Schulz, 2002; Public Health Action Support Team, 2017; Sut, 2014)

Cohort study

Participants or subjects (not patients) with specific characteristics are identified as a 'cohort' (cohort = group) and followed over a long time (years or decades). Differences between them, such as exposure to possible risk factor(s), are measured. Used to answer questions about aetiology or prognosis. Cohort studies are a form of longitudinal study design that follows participants from exposure to outcome. Prognostic cohort studies start with a group of patients with a specific condition and follow them up over time to see how the condition develops.

Case-control study

Looks at patients (cases) who already have a specific condition and matches them with a control group who are very similar except that they don't have the condition. Medical records and interviews are used to identify differences in exposure to risk factors in the two groups. Used to answer questions about aetiology, especially for rare conditions where a cohort study would not be feasible.
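As an illustration of the matching step, here is a toy Python sketch (all names and numbers hypothetical) that pairs each case with the closest-in-age control of the same sex. Real studies may match on many more characteristics and use dedicated matching methods.

```python
# Hypothetical cases and candidate controls
cases = [{"id": "C1", "age": 54, "sex": "F"}, {"id": "C2", "age": 61, "sex": "M"}]
controls = [
    {"id": "K1", "age": 53, "sex": "F"},
    {"id": "K2", "age": 60, "sex": "M"},
    {"id": "K3", "age": 47, "sex": "M"},
]

matched = []
available = list(controls)
for case in cases:
    # Restrict to controls of the same sex, then pick the closest in age
    candidates = [c for c in available if c["sex"] == case["sex"]]
    best = min(candidates, key=lambda c: abs(c["age"] - case["age"]))
    available.remove(best)          # each control is used at most once
    matched.append((case["id"], best["id"]))

print(matched)  # [('C1', 'K1'), ('C2', 'K2')]
```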

Cross-sectional study/survey

A representative sample of a population is identified and examined or interviewed to establish whether or not a specific outcome is present. Used to answer questions about prevalence and diagnosis. For diagnostic studies, the sensitivity and specificity of a new diagnostic test are measured against a 'gold standard' or reference test. Cross-sectional studies can be descriptive or analytical.
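Sensitivity and specificity are simple proportions computed from the 2x2 table of test results against the reference test. A small worked example in Python, with hypothetical counts:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity: the proportion of people WITH the condition the test correctly flags.
    Specificity: the proportion of people WITHOUT the condition the test correctly clears."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts for a new test scored against the gold standard
sens, spec = sensitivity_specificity(tp=90, fp=10, fn=10, tn=190)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # 0.90 and 0.95
```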

Critical Appraisal Tools


"Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context."  ( Burls, 2009 )

Choosing an appraisal tool

Critical appraisal tools are designed to be used when reading and evaluating published research.

For best results, match the tool against the type of study you want to appraise. Some of the common critical appraisal tools are included here.

Critical Appraisal Skills Programme (CASP) checklists

CASP provides a number of checklists covering RCTs, cohort studies, systematic reviews and more.

JBI Checklists

Joanna Briggs Institute Critical Appraisal Tools have been developed by the JBI and collaborators and approved by the JBI Scientific Committee, following extensive peer review. JBI offers a large number of appraisal checklists for both experimental and observational studies. Word and PDF versions of each are available.

STROBE Statement checklists

The STROBE checklists are designed for the reporting of observational (cohort, case-control, and cross-sectional) studies and can be applied to the critical appraisal of these types of study. Includes individual and mixed study checklists.


