
  • Teesside University Student & Library Services
  • Learning Hub Group

Critical Appraisal for Health Students

  • Critical Appraisal of a quantitative paper
  • Critical Appraisal: Help
  • Critical Appraisal of a qualitative paper
  • Useful resources

Appraisal of a Quantitative paper: Top tips

  • Introduction

Critical appraisal of a quantitative paper (RCT)

This guide, aimed at health students, provides basic level support for appraising quantitative research papers. It's designed for students who have already attended lectures on critical appraisal. One framework for appraising quantitative research (based on reliability, internal and external validity) is provided and there is an opportunity to practise the technique on a sample article.

Please note that this framework is for appraising one particular type of quantitative research, the Randomised Controlled Trial (RCT), which is defined as:

a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention. (CASP, 2020)

Support materials

  • Framework for reading quantitative papers (RCTs)
  • Critical appraisal of a quantitative paper PowerPoint

To practise following this framework for critically appraising a quantitative article, please look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

Critical Appraisal of a quantitative paper (RCT): practical example

  • Internal Validity
  • External Validity
  • Reliability Measurement Tool

How to use this practical example 

Using the framework, you can have a go at appraising a quantitative paper - we are going to look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

Step 1. Take a quick look at the article.

Step 2. Click on the Internal Validity tab above - there are questions to help you appraise the article. Read the questions and look for the answers in the article.

Step 3. Click on each question and our answers will appear.

Step 4. Repeat with the other aspects: external validity and reliability.

Questioning the internal validity:

  • Randomisation: how were participants allocated to each group? Did a randomisation process take place?
  • Comparability of groups: how similar were the groups (eg age, sex, ethnicity)? Is this made clear?
  • Blinding (none, single, double or triple): who was not aware of which group a patient was in (eg nobody; only the patient; patient and clinician; patient, clinician and researcher)? Was it feasible for more blinding to have taken place?
  • Equal treatment of groups: were both groups treated in the same way?
  • Attrition: what percentage of participants dropped out? Did this adversely affect one group? Has this been evaluated?
  • Overall internal validity: does the research measure what it is supposed to be measuring?

Questioning the external validity:

  • Attrition: was everyone accounted for at the end of the study? Was any attempt made to contact drop-outs?
  • Sampling approach: how was the sample selected? Was it based on probability or non-probability sampling? What was the approach (eg simple random, convenience)? Was this an appropriate approach?
  • Sample size (power calculation): how many participants? Was a sample size calculation performed? Did the study pass?
  • Exclusion/inclusion criteria: were the criteria set out clearly? Were they based on recognised diagnostic criteria?
  • Overall external validity: can the results be applied to the wider population?

Questioning the reliability (measurement tool):

  • Internal consistency reliability (Cronbach's alpha): has a Cronbach's alpha score of 0.7 or above been included? (A worked example follows below.)
  • Test-retest reliability (correlation): was the test repeated more than once? Were the same results received? Has a correlation coefficient been reported? Is it above 0.7?
  • Validity of measurement tool: is it an established tool? If not, what has been done to check whether it is reliable (pilot study, expert panel, literature review)? Criterion validity (test against other tools): has a criterion validity comparison been carried out? Was the score above 0.7?
  • Overall reliability: how consistent are the measurements?

Overall validity and reliability:

  • Overall, how valid and reliable is the paper?
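
For the Cronbach's alpha check in the reliability section, here is a minimal, hypothetical Python sketch. The item scores are invented for illustration (they are not from the Marrero et al. trial); it simply shows how the statistic is computed and compared against the 0.7 rule of thumb.

```python
# Hypothetical sketch: Cronbach's alpha for a questionnaire with made-up item
# scores, applying the >= 0.7 rule of thumb from the framework above.
import numpy as np

# rows = respondents, columns = questionnaire items (illustrative data only)
scores = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {alpha:.2f}")
print("Acceptable internal consistency" if alpha >= 0.7 else "Below the 0.7 threshold")
```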

  • << Previous: Critical Appraisal of a qualitative paper
  • Next: Useful resources >>
  • Last Updated: Aug 25, 2023 2:48 PM
  • URL: https://libguides.tees.ac.uk/critical_appraisal


How to appraise quantitative research

Volume 21, Issue 4

This article has a correction. Please see:

  • Correction: How to appraise quantitative research - April 01, 2019


  • Xabi Cathala 1, Calvin Moorley 2
  • 1 Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • 2 Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Mr Xabi Cathala, Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK; cathalax{at}lsbu.ac.uk and Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc{at}lsbu.ac.uk

https://doi.org/10.1136/eb-2018-102996


Introduction

Some nurses feel that they lack the necessary skills to read a research paper and to then decide if they should implement the findings into their practice. This is particularly the case when considering the results of quantitative research, which often contains the results of statistical testing. However, nurses have a professional responsibility to critique research to improve their practice, care and patient safety. 1 This article provides a step-by-step guide on how to critically appraise a quantitative paper.

Title, keywords and the authors

The authors’ names may not mean much, but knowing the following will be helpful:

Their position, for example, academic, researcher or healthcare practitioner.

Their qualifications, both professional (for example, nurse or physiotherapist) and academic (eg, degree, masters, doctorate).

This can indicate how the research has been conducted and the authors’ competence on the subject. Basically, do you want to read a paper on quantum physics written by a plumber?

The abstract is a resume of the article and should contain:

Introduction.

Research question/hypothesis.

Methods including sample design, tests used and the statistical analysis (of course! Remember we love numbers).

Main findings.

Conclusion.

The subheadings in the abstract will vary depending on the journal. An abstract should not usually be more than 300 words but this varies depending on specific journal requirements. If the above information is contained in the abstract, it can give you an idea about whether the study is relevant to your area of practice. However, before deciding if the results of a research paper are relevant to your practice, it is important to review the overall quality of the article. This can only be done by reading and critically appraising the entire article.

The introduction

Example: the effect of paracetamol on levels of pain.

My hypothesis is that A has an effect on B, for example, paracetamol has an effect on levels of pain.

My null hypothesis is that A has no effect on B, for example, paracetamol has no effect on pain.

My study will test the null hypothesis. If the null hypothesis is not rejected, there is no evidence that A has an effect on B; this means we cannot conclude that paracetamol has an effect on the level of pain. If the null hypothesis is rejected, the data support the hypothesis that A has an effect on B; this means that paracetamol does appear to have an effect on the level of pain.

Background/literature review

The literature review should include reference to recent and relevant research in the area. It should summarise what is already known about the topic and why the research study is needed and state what the study will contribute to new knowledge. 5 The literature review should be up to date, usually 5–8 years, but it will depend on the topic and sometimes it is acceptable to include older (seminal) studies.

Methodology

In quantitative studies, the data analysis varies between studies depending on the type of design used; descriptive, correlational and experimental studies all differ. A descriptive study will describe the pattern of a topic related to one or more variables. 6 A correlational study examines the link (correlation) between two variables 7 and focuses on how one variable reacts to a change in another. In experimental studies, the researchers manipulate variables to look at outcomes 8 and the sample is commonly assigned into different groups (known as randomisation) to determine the effect (causal) of a condition (independent variable) on a certain outcome. This is a common method used in clinical trials.
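
As a small illustration of the correlational design described above, the following Python sketch computes a correlation coefficient between two made-up variables; the data and variable names are invented for illustration only.

```python
# Hypothetical sketch: a correlational analysis between two made-up variables,
# e.g. hours of exercise per week and systolic blood pressure.
import numpy as np

hours_exercise = np.array([0, 1, 2, 3, 4, 5, 6, 7])
systolic_bp    = np.array([145, 142, 138, 137, 133, 131, 128, 126])

r = np.corrcoef(hours_exercise, systolic_bp)[0, 1]
print(f"Pearson correlation coefficient r = {r:.2f}")  # negative r: BP falls as exercise rises
```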

There should be sufficient detail provided in the methods section for you to replicate the study (should you want to). To enable you to do this, the following sections are normally included:

Overview and rationale for the methodology.

Participants or sample.

Data collection tools.

Methods of data analysis.

Ethical issues.

Data collection should be clearly explained and the article should discuss how this process was undertaken. Data collection should be systematic, objective, precise, repeatable, valid and reliable. Any tool (eg, a questionnaire) used for data collection should have been piloted (or pretested and/or adjusted) to ensure the quality, validity and reliability of the tool. 9 The participants (the sample) and any randomisation technique used should be identified. The sample size is central in quantitative research, as the findings should be generalisable to the wider population. 10 Data analysis can be done manually, or more complex analyses can be performed using computer software, sometimes with the advice of a statistician. From this analysis, results such as the mode, mean, median, p value and CI are presented in numerical format.
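
As a simple illustration of the descriptive statistics mentioned above, the sketch below computes the mean, median and mode for a small set of made-up pain scores (illustrative values only, not from any study).

```python
# Hypothetical sketch: descriptive statistics for a small set of made-up pain scores.
import statistics

pain_scores = [2, 3, 3, 4, 5, 5, 5, 6, 7, 8]

print("mean  :", statistics.mean(pain_scores))
print("median:", statistics.median(pain_scores))
print("mode  :", statistics.mode(pain_scores))
```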

The author(s) should present the results clearly. These may be presented in graphs, charts or tables alongside some text. You should perform your own critique of the data analysis process; just because a paper has been published does not mean it is perfect. Your findings may differ from the authors'. Through critical analysis, the reader may find an error in the study process that the authors have not seen or highlighted. Such errors can change the study's results, or turn a study you thought was strong into a weak one. To help you critique a quantitative research paper, some guidance on understanding statistical terminology is provided in table 1.


Table 1: Some basic guidance for understanding statistics

Quantitative studies examine the relationship between variables, and the p value illustrates this objectively. 11 If the p value is less than 0.05, the null hypothesis is rejected, the hypothesis is supported and the study will report a statistically significant difference. If the p value is more than 0.05, the null hypothesis is not rejected and the study will report no statistically significant difference. As a general rule, then, a p value of less than 0.05 supports the hypothesis, while a p value of more than 0.05 does not.
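
To make the decision rule concrete, this hypothetical Python sketch runs an independent-samples t test on simulated pain scores, echoing the paracetamol example earlier in the article, and applies the 0.05 cut-off. The data, group sizes and effect size are all invented for illustration.

```python
# Hypothetical sketch: applying the p < 0.05 decision rule to simulated data
# for the paracetamol-and-pain example used earlier in the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
paracetamol_group = rng.normal(loc=4.0, scale=1.5, size=40)  # simulated pain scores
placebo_group     = rng.normal(loc=5.0, scale=1.5, size=40)

t_stat, p_value = stats.ttest_ind(paracetamol_group, placebo_group)
print(f"p value = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis: a statistically significant difference")
else:
    print("Do not reject the null hypothesis: no statistically significant difference")
```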

The CI (confidence interval) is usually expressed as a percentage (most commonly 95%) and demonstrates the level of confidence the reader can have in the result. 12 A 95% CI means that if the study were repeated many times, 95% of the calculated intervals would be expected to contain the true value. The narrower the CI, the more precise the estimate; if a CI for a difference between groups includes the value of no effect (zero for a difference, one for a ratio), the result is not statistically significant at that level. Together, p values and CIs indicate the confidence and robustness of a result.
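
The sketch below, using the same kind of simulated groups (all numbers invented), shows how a 95% CI for a difference in means is computed with the standard two-sample formula and how to read it: if the interval excludes zero, the difference is statistically significant at the 5% level.

```python
# Hypothetical sketch: a 95% confidence interval for the difference between two
# group means, computed with the standard two-sample formula (equal-variance case).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=4.0, scale=1.5, size=40)
group_b = rng.normal(loc=5.0, scale=1.5, size=40)

diff = group_a.mean() - group_b.mean()
n1, n2 = len(group_a), len(group_b)
sp2 = ((n1 - 1) * group_a.var(ddof=1) + (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))        # standard error of the difference
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)  # critical t value for 95% confidence

ci_low, ci_high = diff - t_crit * se, diff + t_crit * se
print(f"95% CI for the difference in means: ({ci_low:.2f}, {ci_high:.2f})")
print("Significant at the 5% level" if ci_low > 0 or ci_high < 0 else "Not significant")
```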

Discussion, recommendations and conclusion

The final section of the paper is where the authors discuss their results and link them to other literature in the area (some of which may have been included in the literature review at the start of the paper). This reminds the reader of what is already known, what the study has found and what new information it adds. The discussion should demonstrate how the authors interpreted their results and how they contribute to new knowledge in the area. Implications for practice and future research should also be highlighted in this section of the paper.

A few other areas you may find helpful are:

Limitations of the study.

Conflicts of interest.

Table 2 provides a useful tool to help you apply the learning in this paper to the critiquing of quantitative research papers.

Table 2: Quantitative paper appraisal checklist

  • 1. Nursing and Midwifery Council (2015) The code: standard of conduct, performance and ethics for nurses and midwives. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21.8.18).

Competing interests None declared.

Patient consent Not required.

Provenance and peer review Commissioned; internally peer reviewed.

Correction notice This article has been updated since its original publication to update p values from 0.5 to 0.05 throughout.

Linked Articles

  • Correction: How to appraise quantitative research. Evidence-Based Nursing 2019;22:62. Published Online First: 31 Jan 2019. doi: 10.1136/eb-2018-102996corr1


  • Mayo Clinic Libraries

Systematic Reviews: Critical Appraisal by Study Design


Tools for Critical Appraisal of Studies


“The purpose of critical appraisal is to determine the scientific merit of a research report and its applicability to clinical decision making.” 1 Conducting a critical appraisal of a study is imperative to any well executed evidence review, but the process can be time consuming and difficult. 2 For the critical appraisal process, “a methodological approach coupled with the right tools and skills to match these methods is essential for finding meaningful results.” 3 In short, critical appraisal is a method of differentiating good research from bad research.

Critical Appraisal by Study Design (featured tools)

  • Non-RCTs or Observational Studies
  • Diagnostic Accuracy
  • Animal Studies
  • Qualitative Research
  • Tool Repository
  • AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews): The original AMSTAR was developed to assess the risk of bias in systematic reviews that included only randomized controlled trials. AMSTAR 2 was published in 2017 and allows researchers to “identify high quality systematic reviews, including those based on non-randomised studies of healthcare interventions.” 4
  • ROBIS (Risk of Bias in Systematic Reviews): ROBIS is a tool designed specifically to assess the risk of bias in systematic reviews. “The tool is completed in three phases: (1) assess relevance (optional), (2) identify concerns with the review process, and (3) judge risk of bias in the review. Signaling questions are included to help assess specific concerns about potential biases with the review.” 5
  • BMJ Framework for Assessing Systematic Reviews: This framework provides a checklist that is used to evaluate the quality of a systematic review.
  • CASP (Critical Appraisal Skills Programme) Checklist for Systematic Reviews: This CASP checklist is not a scoring system, but rather a method of appraising systematic reviews by considering: 1. Are the results of the study valid? 2. What are the results? 3. Will the results help locally?
  • CEBM (Centre for Evidence-Based Medicine) Systematic Reviews Critical Appraisal Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • JBI Critical Appraisal Tools, Checklist for Systematic Reviews: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.
  • NHLBI (National Heart, Lung, and Blood Institute) Study Quality Assessment of Systematic Reviews and Meta-Analyses: The NHLBI’s quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.
  • RoB 2 (revised tool to assess Risk of Bias in randomized trials): RoB 2 “provides a framework for assessing the risk of bias in a single estimate of an intervention effect reported from a randomized trial,” rather than the entire trial. 6
  • CASP Randomised Controlled Trials Checklist: This CASP checklist considers various aspects of an RCT that require critical appraisal: 1. Is the basic study design valid for a randomized controlled trial? 2. Was the study methodologically sound? 3. What are the results? 4. Will the results help locally?
  • CONSORT (Consolidated Standards of Reporting Trials) Statement: The CONSORT checklist includes 25 items to determine the quality of randomized controlled trials. “Critical appraisal of the quality of clinical trials is possible only if the design, conduct, and analysis of RCTs are thoroughly and accurately described in the report.” 7
  • NHLBI Study Quality Assessment of Controlled Intervention Studies: The NHLBI’s quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.
  • JBI Critical Appraisal Tools Checklist for Randomized Controlled Trials: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.
  • ROBINS-I (Risk Of Bias in Non-randomized Studies – of Interventions): ROBINS-I is a “tool for evaluating risk of bias in estimates of the comparative effectiveness… of interventions from studies that did not use randomization to allocate units… to comparison groups.” 8
  • NOS (Newcastle-Ottawa Scale): This tool is used primarily to evaluate and appraise case-control or cohort studies.
  • AXIS (Appraisal tool for Cross-Sectional Studies): Cross-sectional studies are frequently used as an evidence base for diagnostic testing, risk factors for disease, and prevalence studies. “The AXIS tool focuses mainly on the presented [study] methods and results.” 9
  • NHLBI Study Quality Assessment Tools for Non-Randomized Studies: The NHLBI’s quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study. They include: Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies; Quality Assessment of Case-Control Studies; Quality Assessment Tool for Before-After (Pre-Post) Studies With No Control Group; Quality Assessment Tool for Case Series Studies.
  • Case Series Studies Quality Appraisal Checklist: Developed by the Institute of Health Economics (Canada), the checklist comprises 20 questions to assess “the robustness of the evidence of uncontrolled, [case series] studies.” 10
  • Methodological Quality and Synthesis of Case Series and Case Reports: In this paper, Dr. Murad and colleagues “present a framework for appraisal, synthesis and application of evidence derived from case reports and case series.” 11
  • MINORS (Methodological Index for Non-Randomized Studies): The MINORS instrument contains 12 items and was developed for evaluating the quality of observational or non-randomized studies. 12 This tool may be of particular interest to researchers who would like to critically appraise surgical studies.
  • JBI Critical Appraisal Tools for Non-Randomized Trials: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis. Checklists are available for: Analytical Cross Sectional Studies; Case Control Studies; Case Reports; Case Series; Cohort Studies.
  • QUADAS-2 (a revised tool for the Quality Assessment of Diagnostic Accuracy Studies): The QUADAS-2 tool “is designed to assess the quality of primary diagnostic accuracy studies… [it] consists of 4 key domains that discuss patient selection, index test, reference standard, and flow of patients through the study and timing of the index tests and reference standard.” 13
  • JBI Critical Appraisal Tools Checklist for Diagnostic Test Accuracy Studies: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.
  • STARD 2015 (Standards for the Reporting of Diagnostic Accuracy Studies): The authors of the standards note that “[e]ssential elements of [diagnostic accuracy] study methods are often poorly described and sometimes completely omitted, making both critical appraisal and replication difficult, if not impossible.” The Standards for the Reporting of Diagnostic Accuracy Studies were developed “to help… improve completeness and transparency in reporting of diagnostic accuracy studies.” 14
  • CASP Diagnostic Study Checklist: This CASP checklist considers various aspects of diagnostic test studies including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • CEBM Diagnostic Critical Appraisal Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • SYRCLE’s RoB (SYstematic Review Center for Laboratory animal Experimentation’s Risk of Bias): “[I]mplementation of [SYRCLE’s RoB tool] will facilitate and improve critical appraisal of evidence from animal studies. This may… enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the methodological quality of animal studies.” 15
  • ARRIVE 2.0 (Animal Research: Reporting of In Vivo Experiments): “The [ARRIVE 2.0] guidelines are a checklist of information to include in a manuscript to ensure that publications [on in vivo animal studies] contain enough information to add to the knowledge base.” 16
  • Critical Appraisal of Studies Using Laboratory Animal Models: This article provides “an approach to critically appraising papers based on the results of laboratory animal experiments,” and discusses various “bias domains” in the literature that critical appraisal can identify. 17
  • CEBM Critical Appraisal of Qualitative Studies Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance and applicability of clinical evidence.
  • CASP Qualitative Studies Checklist: This CASP checklist considers various aspects of qualitative research studies including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • Quality Assessment and Risk of Bias Tool Repository: Created by librarians at Duke University, this extensive listing contains over 100 commonly used risk of bias tools that may be sorted by study type.
  • Latitudes Network: A library of risk of bias tools for use in evidence syntheses that provides selection help and training videos.

References & Recommended Reading

1.     Kolaski K, Logan LR, Ioannidis JP. Guidance to best tools and practices for systematic reviews. British Journal of Pharmacology. 2024;181(1):180-210.

2.    Portney LG. Foundations of Clinical Research: Applications to Evidence-Based Practice. 4th ed. Philadelphia: F.A. Davis; 2020.

3.     Fowkes FG, Fulton PM.  Critical appraisal of published research: introductory guidelines.   BMJ (Clinical research ed).  1991;302(6785):1136-1140.

4.     Singh S.  Critical appraisal skills programme.   Journal of Pharmacology and Pharmacotherapeutics.  2013;4(1):76-77.

5.     Shea BJ, Reeves BC, Wells G, et al.  AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both.   BMJ (Clinical research ed).  2017;358:j4008.

6.     Whiting P, Savovic J, Higgins JPT, et al.  ROBIS: A new tool to assess risk of bias in systematic reviews was developed.   Journal of clinical epidemiology.  2016;69:225-234.

7.     Sterne JAC, Savovic J, Page MJ, et al.  RoB 2: a revised tool for assessing risk of bias in randomised trials.  BMJ (Clinical research ed).  2019;366:l4898.

8.     Moher D, Hopewell S, Schulz KF, et al.  CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials.  Journal of clinical epidemiology.  2010;63(8):e1-37.

9.     Sterne JA, Hernan MA, Reeves BC, et al.  ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions.  BMJ (Clinical research ed).  2016;355:i4919.

10.     Downes MJ, Brennan ML, Williams HC, Dean RS.  Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS).   BMJ open.  2016;6(12):e011458.

11.   Guo B, Moga C, Harstall C, Schopflocher D.  A principal component analysis is conducted for a case series quality appraisal checklist.   Journal of clinical epidemiology.  2016;69:199-207.e192.

12.   Murad MH, Sultan S, Haffar S, Bazerbachi F.  Methodological quality and synthesis of case series and case reports.  BMJ evidence-based medicine.  2018;23(2):60-63.

13.   Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J.  Methodological index for non-randomized studies (MINORS): development and validation of a new instrument.   ANZ journal of surgery.  2003;73(9):712-716.

14.   Whiting PF, Rutjes AWS, Westwood ME, et al.  QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.   Annals of internal medicine.  2011;155(8):529-536.

15.   Bossuyt PM, Reitsma JB, Bruns DE, et al.  STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies.   BMJ (Clinical research ed).  2015;351:h5527.

16.   Hooijmans CR, Rovers MM, de Vries RBM, Leenaars M, Ritskes-Hoitinga M, Langendam MW.  SYRCLE's risk of bias tool for animal studies.   BMC medical research methodology.  2014;14:43.

17.   Percie du Sert N, Ahluwalia A, Alam S, et al.  Reporting animal research: Explanation and elaboration for the ARRIVE guidelines 2.0.  PLoS biology.  2020;18(7):e3000411.

18.   O'Connor AM, Sargeant JM.  Critical appraisal of studies using laboratory animal models.   ILAR journal.  2014;55(3):405-417.

  • << Previous: Minimize Bias
  • Next: GRADE >>
  • Last Updated: Apr 5, 2024 12:20 PM
  • URL: https://libraryguides.mayo.edu/systematicreviewprocess


How to Perform a Systematic Literature Review, pp. 51–68

Critical Appraisal: Assessing the Quality of Studies

  • Edward Purssell (ORCID: orcid.org/0000-0003-3748-0864)
  • Niall McCrae (ORCID: orcid.org/0000-0001-9776-7694)
  • First Online: 05 August 2020


There is great variation in the type and quality of research evidence. Having completed your search and assembled your studies, the next step is to critically appraise the studies to ascertain their quality. Ultimately you will be making a judgement about the overall evidence, but that comes later. You will see throughout this chapter that we make a clear differentiation between the individual studies and what we call the body of evidence, which is all of the studies and anything else that we use to answer the question or to make a recommendation. This chapter deals with only the first of these: the individual studies. Critical appraisal, like everything else in systematic literature reviewing, is a scientific exercise that requires individual judgement, and we describe some tools to help you.

  • Bias (MeSH)
  • Credibility
  • Critical appraisal
  • Dependability
  • Reliability
  • Reproducibility of results (MeSH)
  • Risk of bias



Author information

Authors and Affiliations

School of Health Sciences, City, University of London, London, UK

Edward Purssell

Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care, King’s College London, London, UK

Niall McCrae


Corresponding author

Correspondence to Edward Purssell.


Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter.

Purssell, E., McCrae, N. (2020). Critical Appraisal: Assessing the Quality of Studies. In: How to Perform a Systematic Literature Review. Springer, Cham. https://doi.org/10.1007/978-3-030-49672-2_6


DOI: https://doi.org/10.1007/978-3-030-49672-2_6

Published: 05 August 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-49671-5

Online ISBN: 978-3-030-49672-2

eBook Packages: Medicine, Medicine (R0)



UCL Library Services

LibrarySkills@UCL for NHS staff

Critical appraisal of a quantitative study (RCT)

The following video (5 mins, 36 secs.) helps to clarify the process of critical appraisal: how to systematically examine research (e.g. using checklists), the variety of tools and checklists available, and how to identify the type of research you are faced with (so you can select the most appropriate appraisal tool).

Critical appraisal of an RCT: introduction to use of CASP checklists

The following video (4 min. 58 sec.) introduces the use of CASP checklists, specifically for critical appraisal of a randomised controlled trial (RCT) study paper; how the checklist is structured, and how to effectively use it.

Webinar recording of critical appraisal of an RCT

The following video is a recording of a webinar in which the facilitator and participants use a CASP checklist to critically appraise a randomised controlled trial paper and determine whether it constitutes good practice.

'Focus on' videos

The following videos (all approx. 2-7 mins.) each focus on a particular aspect of critical appraisal methodology for quantitative studies.

  • << Previous: Evaluating information & critical appraisal
  • Next: Critical appraisal of a qualitative study >>
  • Last Updated: Apr 4, 2024 10:10 AM
  • URL: https://library-guides.ucl.ac.uk/nhs

Critical Appraisal of a Quantitative Study

Evidence-based practice is applied across numerous sciences, and the use of quantitative methods allows statistical analyses whose results can be replicated and either confirmed or refuted by the scientific community. To generate verifiable knowledge, the same research methods should be applied.

Corporate governance lies at the intersection of management and finance, since the composition of the top executive team and the balance of power between individual directors affect both managerial and financial issues and, ultimately, all other aspects of organisational performance. Results obtained by previous researchers can be applied in practice, so it is important that these results are objective and unbiased.

For the purposes of this paper, the article “Corporate governance and board of directors: The effect of a board composition on firm sustainability performance” by Naciti (2019) was chosen. To critically appraise this article, the framework suggested by Coughlan et al. (2007) was employed. This framework considers two types of elements: those influencing the believability of the research and those affecting its robustness.

The article is written in good English: it is concise, at only 8 journal pages; it is generally grammatically correct, containing neither obvious mistakes nor jargon; and the text is fluent and easy to understand. However, some minor mistakes and figures of speech not typical of native speakers can be noticed in the text.

The author’s qualifications appear appropriate for writing such an article, since Valeria Naciti represents the Department of Economics at the University of Messina, Italy. Her expertise is also supported by her numerous academic publications on corporate governance and the sustainable development of organisations, as evidenced by her ResearchGate profile. However, it is not clear whether the researcher has practical experience of working in the corporate governance sphere or whether her knowledge is purely academic.

The title of the article clearly identifies the subject of the research and the particular issue explored in the study. First, the sphere of interest is identified, namely corporate governance and the board of directors. The issue is then indicated, making it clear that the article examines how board structure, and which of its characteristics, influence the sustainability of firm performance. Some article titles report the sample under investigation (Khamis et al., 2015; Yameen et al., 2019); this article instead specifies the explored sample in the abstract.

The abstract of the article provides a clear and comprehensive overview of the study. In particular, the research problem, namely the effect of corporate governance characteristics on sustainability performance, is formulated; the sample and the chosen analytical method are indicated; and the main results of the analysis are provided. However, the abstract lacks recommendations that could be made based on the research outcomes.

The problem examined in the study is clearly identified in the introduction. The author narrows the research problem down from a wider field. First, the overall relationship between corporate governance characteristics and firm performance is proposed and substantiated with references to relevant literature. Next, firm performance is narrowed to sustainable performance, a narrower concept than overall performance. The researcher then shows that the association between board composition and sustainable performance has not been examined in detail in the existing literature and that there is no common viewpoint on how board of directors features influence the sustainable performance of the firm. The logic of setting the goal of investigating this relationship is therefore easy to follow.

The paper is logically consistent and has the following structure. First, the analysed problem is formulated based on the existing literature on the topic. Second, a review of both theoretical and empirical literature is conducted to find out what results were attained by other researchers. Third, the methodology of the analysis is determined based on these findings. Fourth, the outcomes of the analysis are presented. Fifth, the attained results are interpreted and discussed in the light of previous findings. Sixth, some limitations of the article, which can be addressed in future research, are indicated. This structure is logical and corresponds broadly with recommendations on writing research papers (Hoogenboom and Manske, 2012; Martin, 2014).

The research question is clearly formulated, namely whether board characteristics affect the sustainable performance of the firm. Three objectives are set to answer this question, in line with the three board characteristics explored. Accordingly, three hypotheses are formulated for testing. These hypotheses propose that each of the characteristics, namely board independence, board diversity and the separation of the CEO and board chairman roles, contributes positively to firm sustainable performance.

The literature review is structured quite logically, and the author combines the review with hypothesis development. First, the author shows that corporate governance procedures should comply with the aims of firm sustainable development. Next, the two main theories predicting opposing firm behaviour are outlined. In particular, agency theory predicts that managers can pursue their own interests and thus act against the interests of shareholders and long-term firm goals (Sami et al., 2011). Meanwhile, stakeholder theory notes that there are many more stakeholder groups around the firm, such as employees, customers, the local community and authorities, whose interests should be accounted for as well (Onakoya et al., 2014). After that, the evidence on the three explored characteristics of the board of directors is reviewed and the corresponding hypotheses are formulated. These three board features are board independence, board diversity and CEO independence.

The theories comprising the theoretical framework are relevant to the study’s purposes. Agency theory, which focuses on the potential conflict between managers and owners, mostly analyses the financial performance of the firm and might therefore ignore other aspects of performance, such as the firm’s environmental and social impacts. Stakeholder theory, by contrast, posits the equal importance of non-financial and financial aspects. Thus, the choice of the model of firm behaviour affects the degree of attention paid to non-financial aspects of firm performance. What is less logical is that these theories are not explained comprehensively; instead, their explanation is divided up according to the research hypotheses and explored variables, which makes the interpretation of the theories less clear. An advantage of the literature review, however, is that it provides empirical evidence showing different viewpoints on the impact of the aforementioned board characteristics. The empirical literature used in the review is a mix of recent and older papers, most of them using secondary data in the contexts of different countries.

The sample has not been appropriately explained by the author. The sample comprised 362 companies from the Fortune top 500 firms, that is, 72.4% of that list, but the criteria of selection were not mentioned. One may suggest that the choice depended on the availability of the data for the analysis, but this was not articulated by the researcher. Evidently, non-probability sampling was applied, although the sample size is still sufficient to conduct the analysis and draw generalisable conclusions. As for the data, secondary data from reliable databases, namely Sustainalytics and Compustat Global Vantage, were used.

In terms of ethical considerations, the research brought no harm to any party, as it employed only secondary data that are freely available in companies’ annual reports or accessible in databases for a charge. This means that no trade secrets or confidential information were revealed in the study. Moreover, the use of a positivist approach implies that these data can be used by other researchers to check and replicate the attained results.

The terms and theories used are clearly defined in the study. A drawback is that the research design has not been explicitly formulated. On the other hand, the research method is clear, as the sequence of procedures is explained. As mentioned above, the data were collected from reliable databases, but the criteria of data filtration have not been indicated. The data and the data collection technique are appropriate, since it would be very time- and resource-consuming to gather the data manually from open sources.

A further drawback is that, while the dependent variable and explanatory variables have been explained, the empirical model was not formulated, which inhibits understanding of the research method. Along with that, it remains unclear whether the independent variables are used as dummy variables, where 0 would represent one state of a characteristic and 100 the opposite state, or whether the variables can take intermediate values within the range 0-100. Moreover, unlike other studies dedicated to exploring corporate governance mechanisms (Afrifa and Tauringana, 2015; Buallay et al., 2017), the values of the independent variables are coded in an unusual manner: the value 100 is assigned to these variables if two thirds of the board consists of independent directors, female directors, or directors with a nationality distinct from the company’s country, respectively.

Moreover, the further presentation of results shows that the effects of board characteristics and control variables were estimated separately for sustainability performance, social performance and environmental performance, whereas the dependent variable was described as a weighted average of these dimensions. This ambiguity is another shortcoming of the study methodology.

The choice of the analysis method was explained in detail. Specifically, the author showed that some of the explanatory variables are determined jointly with the dependent variable, which could entail endogeneity of the independent variables and the appearance of unaccounted-for fixed effects. The use of a pooled OLS regression would produce biased estimators and thus distort the estimation results, so neither a pooled OLS regression nor a fixed-effects panel regression would be an appropriate method. To address the endogeneity of board characteristics and potential heteroskedasticity, autocorrelation and heterogeneity in the sample, the author applied the system generalised method of moments (system GMM), which was shown to be more suitable in this case. Moreover, Hermalin and Weisbach (2003) previously indicated that board features are endogenously associated with performance. Therefore, to account for this endogeneity, the researcher employed the t-2 and t-3 lags of all explanatory variables as instrumental variables.
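
To illustrate the general idea of instrumenting an endogenous regressor with its own t-2 and t-3 lags, here is a minimal, hypothetical Python sketch. It uses simulated panel data and simple two-stage least squares rather than the full Blundell-Bond system GMM applied in the appraised study, and all variable names (board_div, sustain_perf) and numbers are invented for illustration.

```python
# Hypothetical sketch (invented data and names): instrumenting an endogenous
# regressor with its own t-2 and t-3 lags, estimated by simple two-stage least
# squares rather than the system GMM used in the appraised study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
firms, years, rho = 50, 8, 0.7

def ar1(n):
    # autocorrelated series so that lagged values are relevant instruments
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "year": np.tile(np.arange(years), firms),
    "board_div": np.concatenate([ar1(years) for _ in range(firms)]),
})
df["sustain_perf"] = 0.5 * df["board_div"] + rng.normal(size=len(df))

# Build t-2 and t-3 lags of the endogenous regressor within each firm.
df = df.sort_values(["firm", "year"])
for lag in (2, 3):
    df[f"board_div_l{lag}"] = df.groupby("firm")["board_div"].shift(lag)
df = df.dropna()

# Stage 1: regress the endogenous variable on its lags (the instruments).
Z = np.column_stack([np.ones(len(df)), df["board_div_l2"], df["board_div_l3"]])
b1, *_ = np.linalg.lstsq(Z, df["board_div"].to_numpy(), rcond=None)
board_div_hat = Z @ b1

# Stage 2: regress the outcome on the instrumented (fitted) values.
X = np.column_stack([np.ones(len(df)), board_div_hat])
b2, *_ = np.linalg.lstsq(X, df["sustain_perf"].to_numpy(), rcond=None)
print(f"2SLS estimate of the board_div effect: {b2[1]:.3f}")  # true value is 0.5
```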

In addition, diagnostic tests were conducted in the study to check the validity of the results. The first is the Hansen-Sargan test, which examines the validity of the imposed restrictions: the employed instruments are considered appropriate if the moment conditions hold. The second is an autocorrelation test at lag t-2, used to check that the error terms are not serially correlated. The tests confirmed the validity of the restrictions and the absence of serial correlation in the error terms, respectively.

The analysis provided the following results. Board diversity and CEO duality significantly and positively affected all three explored aspects of firm performance; that is, greater diversity of board members in terms of nationality and gender, as well as separation of the CEO and board chairman roles, contributed positively to performance. Meanwhile, the effect of board independence on sustainability performance and social performance was negative. As for the control variables, higher profitability and larger firm size influenced firm performance positively.

The attained results appear to be in line with both considered theories. Namely, the positive effect of separating the CEO and chairman roles is in line with agency theory, as it mitigates potential conflicts of interest and provides better control over executives. Along with that, the positive impact of board diversity on performance accords with stakeholder theory, as diversity contributes to accounting for the interests of different stakeholder groups.

Other strengths of the study are the following. The author provides recommendations for further research which could improve and refine the attained results. Along with that, the limitations are also indicated, which allows them to be addressed in future studies. The references, made in the APA referencing style, are complete, as all cited sources are included in the reference list.

Afrifa, G. A. and Tauringana, V. (2015) Corporate governance and performance of UK listed small and medium enterprises, Corporate Governance: The International Journal of Business in Society, 15(5), pp. 719-733.

Buallay, A., Hamdan, A. and Zureigat, Q. (2017) Corporate Governance and Firm Performance: Evidence from Saudi Arabia, Australasian Accounting, Business and Finance Journal, 11(1), pp. 78-98.

Hermalin, B. E. and Weisbach, M. S. (2003) The Role of Boards of Directors in Corporate Governance: A Conceptual Framework and Survey, Journal of Economic Literature, 48(1), pp. 58-107.

Hoogenboom, B. J. and Manske, R. C. (2012) How to write a scientific article, International Journal of Sports Physical Therapy, 7(5), pp. 512-517.

Khamis, R., Hamdan, A. M. and Elali, W. (2015) The Relationship between Ownership Structure Dimensions and Corporate Performance: Evidence from Bahrain, Australasian Accounting, Business and Finance Journal, 9(4), pp. 38-56.

Martin, E. (2014) How to write a good article, Current Sociology, 62(7), pp. 949-955.

Naciti, V. (2019) Corporate governance and board of directors: the effect of a board composition on firm sustainability performance, Journal of Cleaner Production, 237(10), pp. 1-8.

Onakoya, A. B. O., Fasanya, I. O. and Ofoegbu, D. I. (2014) Corporate Governance as Correlate for Firm Performance: A Pooled OLS Investigation of Selected Nigerian Banks, IUP Journal of Corporate Governance, 13(1), pp. 7-15.

Sami, H., Wang, J. and Zhou, H. (2011) Corporate governance and operating performance of Chinese listed firms, Journal of International Accounting, Auditing and Taxation, 20(2), pp. 106-114.

Yameen, M., Farhan, N. H. and Tabash, M. I. (2019) The impact of corporate governance practices on firm’s performance: An empirical evidence from Indian tourism sector, Journal of International Studies, 12(1), pp. 208-228.


How to appraise quantitative research articles

Whatever their specialty or practice area, all nurses should strive to become more sophisticated consumers of nursing research by learning how to critically appraise, synthesize, and communicate research findings. Such critical appraisal shows your commitment to evidence-informed practice and empowers you to create a practice culture based on the best available evidence.

To critically appraise nursing research, you must ask focused, meaningful questions to determine the overall integrity and applicability of the research. This article will help you better understand—and undertake—the process of critical appraisal. With practice, even nurses who once were intimidated by the research process should be able to efficiently and effectively determine the clinical relevance of scientific studies.

Problem statement

The problem statement should appear at the beginning of the article and should include enough information for you to determine if the study results can be generalized to a specific patient population. Effective problem statements include independent and dependent variables, population of interest, and key concepts of the study.

The research problem should provide a clear rationale for the study. The problem statement may be in one of two forms:

• a research question that indicates the who, what, when, where, and why of the study
• a “purpose” statement that describes the researcher’s purpose in conducting the study.

Also, the research problem should fill a gap in the current body of nursing research or theory or should pinpoint a single, relevant nursing issue that’s meaningful to nurses and patients. A poorly worded or inappropriate problem statement can cause flaws in a study’s methods, protocols, samples, and analyses. Be wary of broadly stated or overly generalized problem statements, as well as those whose research questions can’t be answered by the methods proposed.

Literature review

The literature review is a systematic, critical review of the most important scholarly literature on a given topic. It should:

• highlight critical weaknesses in previous studies
• identify previously studied concepts or variables
• relate the current research project to historical research
• identify the current knowledge deficit about a particular phenomenon and state what more needs to be done to overcome that deficit.

Look for a broad range of references—for example, peer-reviewed journal articles, systematic reviews of relevant research, professional standards, position statements, dissertations, and conference proceedings. In general, references should be no more than 5 years old, unless the research cited is a classic or historically important work.

Conceptual framework

Depending on the nature of the study, a conceptual or theoretical framework may be presented near the beginning of the article.

• Theoretical frameworks are narrower in scope and can be tested directly.
• Conceptual frameworks express assumptions and can’t be tested directly.

A well-defined conceptual framework allows the reader to better understand the relationship between major concepts of the study and more fully explicates the relationship between the variables. Additionally, a conceptual framework may help you better understand how the researcher’s hypothesis and research question were developed.

Methods

The methods section of the article tells you how the principal investigator went about answering the research question. It includes information about the sample selection, study design, data collection, and data or statistical analysis. This section should also provide sufficient information to permit duplication of the study and should address the protection of human subjects.

Sample selection

Sample selection occurs on the basis of eligibility criteria that the researcher establishes in accordance with the study’s objectives. The sample’s eligibility and exclusion criteria, as well as its demographic composition, should be appropriate to achieve the study’s objectives. The study should have an adequate number of participants and a low dropout rate to protect against compositional and statistical bias and make the study more representative of the population.

Study design

The study design should be clearly stated and appropriate for the research question being asked. The most common design associated with quantitative research is experimental design—commonly considered the most rigorous design and, for many researchers, the gold standard. In this design, the researcher controls both the selection of study subjects and introduction of the independent variable.

Also, in experimental design, subjects are randomly assigned to treatment and control groups, thereby reducing study bias because the researcher can’t influence the assignment of subjects. Ideally, groups in an experimental study are similar in all respects, and any differences between them result from the intervention administered by the researcher. Experimental design is commonly used when new drugs or medical products are being studied. In contrast, the nonexperimental study design is more qualitative and is used when the researcher wants to observe a particular phenomenon but lacks the ability and desire to manipulate the independent variables. The quasi-experimental design is more closely related to experimental design but lacks random assignment of subjects and subsequently may introduce bias.
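As a concrete illustration of the random-assignment step described above, here is a minimal sketch in Python (the participant IDs, group sizes, and fixed seed are hypothetical assumptions for the example, not features of any particular study).

```python
import numpy as np

rng = np.random.default_rng(seed=2024)       # fixed seed so the example allocation is reproducible
participant_ids = np.arange(1, 41)           # 40 hypothetical participants

shuffled = rng.permutation(participant_ids)  # random order, independent of any subject characteristic
treatment_group = np.sort(shuffled[:20])     # first half of the shuffled list -> treatment
control_group = np.sort(shuffled[20:])       # second half -> control

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```

Because allocation depends only on the random shuffle, neither the researcher nor any subject characteristic influences which group a participant joins.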

Data collection

Data collection procedures should be fully explained in the methods section and should provide a clear understanding of how data were collected and who collected them. Issues such as inter-rater reliability, instrument reliability and validity, and training of data collectors should be addressed. A clear explanation of data collection lends credibility to the study.
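To show what one aspect of instrument reliability can look like in practice, the sketch below computes Cronbach's alpha for a hypothetical multi-item questionnaire. The data and the 0.70 threshold mentioned in the comment are illustrative assumptions; alpha is only one of several reliability statistics a study might report.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability; rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a 4-item Likert scale (1-5), six respondents.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])

# Values of roughly 0.70 or above are commonly cited as acceptable.
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```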

Statistical analysis

A thorough review of the statistical analysis is important. (If you’re uncomfortable or unfamiliar with basic statistics, consult a clinical nurse specialist, nurse researcher, or other advanced practice nurse for assistance.) Study results should be presented in a logical, systematic format. For quantitative studies, both descriptive and inferential statistics should be provided.

• Descriptive statistics, including measures of central tendency (mean, median, and mode) as well as measures of dispersion or variability (variance, standard deviation, and range), provide information about the characteristics of the subjects or phenomenon being studied.
• Inferential statistics allow researchers to make inferences about the population based on a sample. Through the use of significance tests and other measures, inferential statistics help researchers understand the probability that the results of their study occurred by chance. The level of significance, expressed as a “p” value, represents the probability of obtaining the observed results by chance alone, assuming there is no true difference between groups. If “p” is less than 0.01, that probability is less than 1%.
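To make the distinction concrete, this minimal sketch (hypothetical scores, using NumPy and SciPy) computes the descriptive statistics named above and then runs an independent-samples t-test as one example of an inferential test.

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for an intervention group and a control group.
intervention = np.array([72, 75, 78, 74, 80, 77, 73, 79])
control      = np.array([68, 70, 66, 71, 69, 65, 72, 67])

# Descriptive statistics: central tendency and variability.
print("mean:", intervention.mean())
print("median:", np.median(intervention))
print("sample SD:", intervention.std(ddof=1))
print("range:", intervention.max() - intervention.min())

# Inferential statistics: independent-samples t-test comparing the two groups.
result = stats.ttest_ind(intervention, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# Under the null hypothesis of no group difference, a p value below 0.01 means
# results at least this extreme would occur by chance less than 1% of the time.
```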

Protection of human subjects

To protect research subjects, the study must be approved by an institutional review board. The researcher also must obtain informed consent from participants after giving them oral and written information on the nature of the study and potential safety risks or conflicts of interest.

Results and discussion of findings

Here the author should succinctly relate any descriptive and inferential statistical findings to the study’s independent and dependent variables (as stated in the original research question or research hypothesis). This section should state clearly whether the data analysis supports, or fails to support, the research hypothesis. You should be able to read the study’s results and determine the relationship between the independent and dependent variables.

Some articles may contain a separate “discussion” section in which the author interprets, analyzes, and summarizes the study’s conclusions and its relevance to the larger theoretical framework. Expect the author to objectively state any limitations or weaknesses in the study’s design, method, sample, or data collection procedures. The discussion also should identify any conceptual or theoretical relationships in need of further investigation.

Selected references

Dunning M, Abi-Aad G, Gilbert D. Experience, Evidence and Everyday Practice: Creating Systems for Delivering Effective Health Care. London: King’s Fund; 1999.

Greenhalgh T. How to read a paper. The Medline database. BMJ. 1997;315(7101):180-183.

Griffin-Sobel JP. Research in practice: immersing yourself in research. Gastroenterol Nurs. 2003;26(5):219-220.

Hudson-Barr D. From research idea to research study: the how. J Spec Pediatr Nurs. 2005;10(3):147-150.

LoBiondo-Wood J, Haber J. Nursing Research: Methods, Critical Appraisal, and Utilization. St. Louis, Mo: C.V. Mosby; 2002.

Valente S. Critical analysis of research papers. J Nurs Staff Dev. 2003;19(3):130-142.

Kenneth J. Rempher, PhD, RN, MBA, CCRN, APRN, BC, is Director of Professional Nursing Practice at Sinai Hospital of Baltimore in Baltimore, Md. Cory Silkman, MAR, BSN, RN, C, is a Clinical Leader in the Comprehensive Inpatient Rehabilitation Unit at Sinai Hospital of Baltimore.


Appraising Quantitative Research in Health Education: Guidelines for Public Health Educators

Leonard jack, jr..

Associate Dean for Research and Endowed Chair of Minority Health Disparities, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971

Sandra C. Hayes

Central Mississippi Area Health Education Center, 350 West Woodrow Wilson, Suite 3320, Jackson, MS 39213; Telephone: 601-987-0272; Fax: 601-815-5388

Jeanfreau G. Scharalda

Louisiana State University Health Sciences Center School of Nursing, 1900 Gravier Street, New Orleans, Louisiana 70112; Telephone: 504-568-4140; Fax: 504-568-5853

Barbara Stetson

Department of Psychological and Brain Sciences, 317 Life Sciences Building, University of Louisville, Louisville, KY 40292; Telephone: 502-852-2540; Fax: 502-852-8904

Nkenge H. Jones-Jack

Epidemiologist & Evaluation Consultant, Metairie, Louisiana 70002. Telephone: 678-524-1147; Fax: 504-267-4080

Matthew Valliere

Chronic Disease Prevention and Control, Bureau of Primary Care and Rural Health, Office of the Secretary, 628 North 4th Street, Baton Rouge, LA 70821-3118; Telephone: 225-342-2655; Fax: 225-342-2652

William R. Kirchain

Division of Clinical and Administrative Sciences, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, Room 121, New Orleans, Louisiana 70125; Telephone: 504-520-5395; Fax: 504-520-7971

Michael Fagen

Co-Associate Editor for the Evaluation and Practice section of Health Promotion Practice , Department of Community Health Sciences, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., M/C 923, Chicago, IL 60608-1260, Telephone: 312-355-0647; Fax: 312-996-3551

Cris LeBlanc

Centers of Excellence Scholar, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971

Many practicing health educators do not feel they possess the skills necessary to critically appraise quantitative research. This article is designed to provide practicing health educators with basic tools that facilitate a better understanding of quantitative research. It describes the major components of a quantitative research publication (title, introduction, methods, analyses, results and discussion sections), introduces the various types of study designs, and presents seven key questions health educators can use to facilitate the appraisal process. After reading it, health educators will be in a better position to determine whether research studies are well designed and executed.

Appraising the Quality of Quantitative Research in Health Education

Practicing health educators often find themselves with little time to read published research in great detail. Some, with limited time to read scientific papers, become frustrated as they get bogged down in research terminology, methods, and approaches. The purpose of appraising a scientific publication is to assess whether the study’s research questions (hypotheses), methods and results (findings) are sufficiently valid to produce useful information (Fowkes and Fulton, 1991; Donnelly, 2004; Greenhalgh and Taylor, 1997; Johnson and Onwuegbuze, 2004; Greenhalgh, 1997; Yin, 2003; Hennekens and Buring, 1987). The ability to deconstruct and reconstruct scientific publications is a critical skill in a results-oriented environment marked by increasing demands for improved program outcomes and strong justifications for program focus and direction. Health educators must not rely solely on the opinions of researchers but, rather, should increase their confidence in their own ability to discern the quality of published scientific research. Health educators with little experience reading and appraising scientific publications may find this task less difficult if they: 1) become more familiar with the key components of a research publication, and 2) use the questions presented in this article to critically appraise the strengths and weaknesses of published research.

Key Components of a Scientific Research Publication

The key components of a research publication should provide important information that is needed to assess the strengths and weaknesses of the research. Key components typically include the publication title, abstract, introduction, research methods used to address the research question(s) or hypothesis, statistical analysis used, results, and the researcher’s interpretation and conclusion or recommended use of results to inform future research or practice. A brief description of these components follows:

Publication Title

A general heading or description should provide immediate insight into the intent of the research. Titles may include information regarding the focus of the research, population or target audience being studied, and study design.

Abstract

An abstract provides the reader with a brief description of the overall research, how it was done, the statistical techniques employed, key results, and relevant implications or recommendations.

Introduction

This section elaborates on the content mentioned in the abstract and provides a better idea of what to anticipate in the manuscript. The introduction provides a succinct presentation of previously published literature, thus offering a purpose (rationale) for the study.

Methods

This component of the publication provides critical information on the type of research methods used to conduct the study. Common examples of study designs used to conduct quantitative research include the cross-sectional study, cohort study, case-control study, and controlled trial. The methods section should contain information on the inclusion and exclusion criteria used to identify participants in the study.

Statistical analysis

Quantitative data contain information that is quantifiable, perhaps through surveys that are analyzed using statistical tests to determine if the results happened by chance. Two types of statistical analyses are used: descriptive and inferential (Johnson and Onwuegbuze, 2004). Descriptive statistics are used to describe the basic features of the study data and provide simple summaries about the sample and measures. With inferential statistics, researchers are trying to reach conclusions that extend beyond the immediate data alone. Thus, they use inferential statistics to make inferences from the data to more general conditions.
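As an illustration of the two types of analysis, the sketch below (hypothetical counts, using pandas and SciPy) summarises a 2x2 table descriptively and then applies a chi-square test of independence as an inferential check.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = study group, columns = whether the outcome was reached.
table = pd.DataFrame(
    {"outcome_yes": [30, 18], "outcome_no": [70, 82]},
    index=["intervention", "control"],
)

# Descriptive summary: proportion reaching the outcome in each group.
print((table["outcome_yes"] / table.sum(axis=1)).round(2))

# Inferential test: is group membership independent of the outcome?
chi2, p, dof, expected = chi2_contingency(table.values)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```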

Results

This section presents the reader with the researcher’s data and the results of the statistical analyses described in the methods section. Thus, this section must align closely with the methods section.

Discussion (Conclusion)

This section should explain what the data mean, thereby summarizing the main results and findings for the reader. Important limitations (such as the use of a non-random sample, the absence of a control group, and short duration of the intervention) should be discussed. Researchers should discuss how each limitation can impact the applicability and use of study results. This section also presents recommendations on ways the study can help advance future health education and practice.

Critically Appraising the Strengths and Weaknesses of Published Research

During careful reading of the analysis, results, and discussion (conclusion) sections, what key questions might you ask yourself in order to critically appraise the strengths and weaknesses of the research? Based on a careful review of the literature ( Greenhalgh and Taylor, 1997 ; Greenhalgh, 1997 ; and Hennekens and Buring, 1987 ) and our research experiences, we have identified seven key questions around which to guide your assessment of quantitative research.

1) Is a study design identified and appropriately applied?

Study designs refer to the methodology used to investigate a particular health phenomenon. Becoming familiar with the various study designs will help prepare you to critically assess whether the chosen design was applied appropriately to answer the research questions (or hypotheses). As mentioned previously, common examples of study designs frequently used to conduct quantitative research include the cross-sectional study, cohort study, case-control study, and controlled trial. A brief description of each can be found in Table 1.

Table 1. Definitions of Study Designs

2) Is the study sample representative of the group from which it is drawn?

The study sample must be representative of the group from which it is drawn. The study sample must therefore be typical of the wider target audience to whom the research might apply. Addressing whether the study sample is representative of the group from which it is drawn will require the researcher to take into consideration the sampling method and sample size.

Sampling Method

Many sampling methods are used individually or in combination. Keep in mind that sampling methods are divided into two categories: probability sampling and non-probability sampling (Last, 2001). Probability sampling (also called random sampling) is any sampling scheme in which the probability of choosing each individual is the same (or at least known, so it can be readjusted mathematically to be equal). Non-probability sampling is any sampling scheme in which the probability of an individual being chosen is unknown. Researchers should offer a rationale for using non-probability sampling and, when it is used, be aware of its limitations. For example, use of a convenience sample (choosing individuals in an unstructured manner) can be justified when collecting pilot data on which future studies employing more rigorous sampling methods will build.
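The contrast between the two categories can be illustrated with a short sketch (the sampling frame and sample size below are hypothetical assumptions for the example).

```python
import numpy as np

rng = np.random.default_rng(seed=7)
sampling_frame = np.arange(1, 1001)   # 1,000 hypothetical eligible individuals

# Probability sampling (simple random sample): every individual has the same,
# known probability of selection -- here 50/1000, or 5%.
random_sample = rng.choice(sampling_frame, size=50, replace=False)

# Non-probability (convenience) sampling: e.g., the first 50 people who happen
# to be available; the probability of selection is unknown and may be biased.
convenience_sample = sampling_frame[:50]

print(sorted(random_sample)[:10], "...")
print(convenience_sample[:10], "...")
```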

Sample Size

Established statistical theories and formulas are used to generate sample size calculations: the recommended number of individuals needed to have sufficient power to detect meaningful results at a given level of statistical significance. In the methods section, look for a statement or two confirming whether steps were taken to obtain the appropriate sample size.
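The sketch below shows one common form such a calculation can take (sample size for comparing two group means), using statsmodels. The effect size, significance level and power shown are illustrative assumptions, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed standardized difference between groups (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # probability of detecting the effect if it truly exists
)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 per group
```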

3) In research studies using a control group, is this group adequate for the purpose of the study?

Source of controls

In case-control and cohort studies, the source of controls should be such that the distribution of characteristics not under investigation is similar to that in the cases or study cohort.

In case-control studies both cases and controls are often matched on certain characteristics such as age, sex, income, and race. The criteria used for including and excluding study participants must be adequately described and examined carefully. Inclusion and exclusion criteria may include: ethnicity, age of diagnosis, length of time living with a health condition, geographic location, and presence or absence of complications. You should critically assess whether matching across these characteristics actually occurred.

4) What is the validity of measurements and outcomes identified in the study?

Validity is the extent to which a measurement captures what it claims to measure. This might take the form of questions contained in a survey, questionnaire or instrument. Researchers should address one or more of the following types of validity: face, content, criterion-related, and construct (Last, 2001; Trochim and Donnelly, 2008).

Face validity

Face validity assures that, on inspection, the variable of interest appears to measure what it intends to measure. If the researcher has chosen to study a variable that has not been studied before, he or she usually will need to start with face validity.

Content validity

Content validity involves comparing the content of the measurement technique to the known literature on the topic and validating that the tool (e.g., survey, questionnaire) represents the literature accurately.

Criterion-related validity

Criterion-related validity involves making sure that the measures within a survey, when tested, are effective in predicting a criterion or indicators of a construct.

Construct validity

Construct validity deals with the validation of the construct that underlies the research. Here, researchers test the theory that underlies the hypothesis or research question.

5) To what extent is a common source of bias, lack of blinding, taken into account?

During data collection, a common source of bias is that subjects and/or those collecting the data are not blind to the purpose of the research. Bias can arise, for example, when researchers go the extra mile to make sure those in the experimental group benefit from the intervention (Fowkes and Fulton, 1991). Inadequate blinding can be a problem in studies using any type of study design. While total blinding is not always possible, it is essential to appraise whether steps were taken to ensure as much blinding as possible.

6) To what extent is the study considered complete with regard to drop outs and missing data?

Regardless of the study design employed, one must assess not only the proportion of dropouts in each group but also why they dropped out, as this may point to possible bias, and what efforts were made to retain participants in the study.

Missing data

Although missing data are part of almost all research, they should still be appraised. There are several reasons why data may be missing; the nature and extent of the missingness should be explained.
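The sketch below (hypothetical data, using pandas) shows the kind of missing-data summary a reader should expect a study to report: how much is missing per variable, and whether missingness differs between groups.

```python
import numpy as np
import pandas as pd

# Hypothetical trial data with some missing measurements.
df = pd.DataFrame({
    "group": ["intervention", "control", "intervention", "control", "intervention", "control"],
    "age":   [54, 61, np.nan, 47, 58, np.nan],
    "hba1c": [7.1, np.nan, 6.8, 7.4, np.nan, 6.9],
})

# Proportion of missing values per variable...
print(df[["age", "hba1c"]].isna().mean().round(2))

# ...and per study group, which can reveal unbalanced missingness.
print(df[["age", "hba1c"]].isna().groupby(df["group"]).mean().round(2))
```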

7) To what extent are study results influenced by factors that negatively impact their credibility?

Contamination

In research studies comparing the effectiveness of a structured intervention, contamination occurs when the control group makes changes based on learning what those participating in the intervention are doing. Despite the fact that researchers typically do not report the extent to which contamination occurs, you should nevertheless try to assess whether contamination negatively impacted the credibility of study results.

Confounding factors

A confounding factor is a variable that is related to both the exposure (or intervention) and the outcome under study. A confounding factor may mask an actual association or falsely demonstrate an apparent association between the study variables where no real association exists. If confounding factors are not measured and considered, study results may be biased and compromised.
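A small simulation (entirely hypothetical data, using statsmodels) can make the idea concrete: here age drives both the chance of exposure and the outcome, so a crude analysis suggests an exposure effect that largely disappears once age is adjusted for.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=42)
n = 500
age = rng.normal(50, 10, n)

# Older participants are more likely to be "exposed"...
exposure = (age / 100 + rng.normal(0, 0.3, n) > 0.5).astype(int)
# ...while the outcome depends on age, not on the exposure itself.
outcome = 2.0 * age + rng.normal(0, 5, n)

df = pd.DataFrame({"age": age, "exposure": exposure, "outcome": outcome})

crude = smf.ols("outcome ~ exposure", data=df).fit()
adjusted = smf.ols("outcome ~ exposure + age", data=df).fit()

print("crude exposure effect:   ", round(crude.params["exposure"], 2))    # apparent association
print("adjusted exposure effect:", round(adjusted.params["exposure"], 2)) # close to zero
```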

The guidelines and questions presented in this article are by no means exhaustive. However, when applied, they can help health education practitioners obtain a deeper understanding of the quality of published research. While no study is 100% perfect, we do encourage health education practitioners to pause before taking researchers at their word that study results are both accurate and impressive. If you find yourself answering ‘no’ to a majority of the key questions provided, then it is probably safe to say that, from your perspective, the quality of the research is questionable.

Over time, as you repeatedly apply the guidelines presented in this article, you will become more confident and interested in reading research publications from beginning to end. While this article is geared to health educators, it can help anyone interested in learning how to appraise published research. Table 2 lists additional reading resources that can help improve one’s understanding and knowledge of quantitative research. This article and the reading resources identified in Table 2 can serve as useful tools to frame informative conversations with your peers regarding the strengths and weaknesses of published quantitative research in health education.

Table 2. Publications on How to Read, Write and Appraise Quantitative Research


References

  • Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. British Medical Journal. 1991;302:1136–1140.
  • Donnelly RA. The Complete Idiot's Guide to Statistics. New York, NY: Alpha Books; 2004. pp. 6–7.
  • Greenhalgh T, Taylor R. How to read a paper: Papers that go beyond numbers (qualitative research). British Medical Journal. 1997;315:740–743.
  • Greenhalgh T. How to read a paper: Assessing the methodological quality of published papers. British Medical Journal. 1997;315:305–308.
  • Johnson RB, Onwuegbuze AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33:14–26.
  • Hennekens CH, Buring JE. Epidemiology in Medicine. Boston, MA: Little, Brown and Company; 1987. pp. 106–108.
  • Last JM. A Dictionary of Epidemiology. 4th ed. New York, NY: Oxford University Press; 2001.
  • Trochim WM, Donnelly J. Research Methods Knowledge Base. 3rd ed. Mason, OH: Atomic Dog; 2008. pp. 6–8.


7.3 Critically Appraising the Literature

Now that you know the parts of a paper, we will discuss how to critically appraise a paper. Critical appraisal refers to the process of carefully and methodically reviewing research to determine its credibility, usefulness, and applicability in a certain context. 6 It is an essential element of evidence-based practice. As stated earlier, you want to ensure that what you read in the literature is trustworthy before considering applying the findings in practice. The key things to consider include the study’s results, if the results match the conclusion (validity) and if the findings will help you in practice (applicability). A stepwise approach to reading and analysing the paper is a good way to highlight important points in the paper. While there are numerous checklists for critical appraisal, we have provided a simple guide for critical appraisal of quantitative and qualitative studies. The guides were adapted from Epidemiology by Petra Buttner (2015) and How to Read a Paper [ the basics of evidence-based medicine and healthcare (2019);  papers that go beyond numbers- qualitative research (1997)] by Trisha Greenhalgh to aid your review of the papers. 5,7,8

A guide to reading scientific articles – Quantitative studies

What is the title of the study?

  • Does the title clearly describe the study focus?
  • Does it contain details about the population and the study design?

What was the purpose of the study (why was it performed)?

  • Identify the research question
  • Identify the exposure and outcome

What was the study design?

  • Was the design appropriate for the study?

Describe the study population (sample).

  • What was the sample size?
  • How were participants recruited?
  • Where did the research take place?
  • Who was included, and who was excluded?
  • Are there any potential sources of bias related to the choice of the sample?

What were data collection methods used?

  • How were the exposure and outcome variables measured?
  • How were data collected (with what instruments or equipment)? Were the tools appropriate?
  • Is there evidence of random selection as opposed to systematic or self-selection?
  • How was bias minimised or avoided?

For experimental studies

  •  How were subjects assigned to treatment or intervention: randomly or by some other method?
  •  What control groups were included (placebo, untreated controls, both or neither)?
  •  How were the treatments compared?
  •  Were there dropouts or loss to follow-up?
  •  Were the outcomes or effects measured objectively?

For observational studies

  • Was the data collection process adequate (including questionnaire design and pre-testing)?
  • What techniques were used to handle non-response and/or incomplete data?
  •  If a cohort study, was the follow-up rate sufficiently high?
  •  If a case-control study, are the controls appropriate and adequately matched?

How was the data analysed?

  • Is the statistical analysis appropriate, and is it presented in sufficient detail?

What are the findings?

  • What are the main findings of the study? Pay specific attention to the presented text and tables in relation to the study’s main findings.
  • Are the numbers consistent? Is the entire sample accounted for?

Experimental study

  •  Do the authors find a difference between the treatment and control groups?
  •  Are the results statistically significant? If there is a statistically significant difference, is it enough of a difference to be clinically significant? (See the short effect-size sketch after this list.)
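A minimal sketch of that last check (hypothetical blood pressure values): report an effect size such as Cohen's d alongside the p value, since a small p value on its own does not establish that a difference is large enough to matter clinically.

```python
import numpy as np
from scipy import stats

treatment = np.array([120, 118, 121, 119, 122, 120, 118, 121, 119, 120])
control   = np.array([121, 120, 122, 121, 123, 121, 119, 122, 121, 122])

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

result = stats.ttest_ind(treatment, control)
print(f"p = {result.pvalue:.3f}, Cohen's d = {cohens_d(treatment, control):.2f}")
# Interpret the size of the difference (and its clinical meaning), not just the p value.
```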

Observational study

  •  Did the authors find a difference between exposed and control groups or cases and controls?
  •  Is there a statistically significant difference between groups?
  •  Could the results be of public health significance, even though the difference is not statistically significant? (This may highlight the need for a larger study).
  • Are the results likely to be affected by confounding? Why or why not?
  • What (if any) variables are identified as potential confounders in the study?
  • How is confounding dealt with in this study?
  • Are there any potential confounders that the authors have not taken into account? What might the likely impact be on the results?

Summing it up

Activity 7.2a

Read the following article:

Chen X, Jiang X, Huang X, He H, Zheng J: Association between probiotic yogurt intake and gestational diabetes mellitus: a case-control study. Iran J Public Health. 2019, 48:1248-1256.

Let’s conduct a critical appraisal of this article.

A guide to reading scientific articles – Qualitative studies

What is the research question?

Was a qualitative approach appropriate?

  • Identify the study design and if it was appropriate for the research question.

How were the setting and the subjects selected?

  • What sampling strategy was used?
  • Where was the study conducted?

Was the sampling strategy appropriate for the approach?

  • Consider the qualitative approach used and decide if the sampling strategy or technique is appropriate

What was the researcher’s position, and has this been taken into account?

  • Consider the researcher’s background, gender, knowledge, personal experience and relationship with participants

What were the data collection methods?

  • How was data collected? What technique was used?

How were data analysed, and how were these checked?

  • How did the authors analyse the data? Was this stated?
  • Did two or more researchers conduct the analysis independently, and were the outcomes compared (double coding)?
  • Did the researchers come to a consensus, and how were disagreements handled?

Are the results credible?

  • Does the result answer the research question?
  • Are themes presented with quotes and do they relate to the research question or aim?

Are the conclusions justified by the results?

  • Have the findings been discussed in relation to existing theory and previous research?
  • How well does the interpretation of the findings fit with what is already known?

Are the findings transferable to other settings?

  • Can the findings be applied to other settings? Consider the sample.

Activity 7.2b

Wallisch A, Little L, Pope E, Dunn W. Parent Perspectives of an Occupational Therapy Telehealth Intervention. Int J Telerehabil. 2019 Jun 12;11(1):15-22. doi: 10.5195/ijt.2019.6274. PMID: 31341543; PMCID: PMC6597151.

Let’s conduct a critical appraisal of this article

Now that you know how to critically appraise both quantitative and qualitative papers, it is also important to note that numerous critical appraisal tools exist. Some have different sub-tools for different study designs, while others are designed to be used for multiple study designs. These tools aid the critical appraisal process as they contain different questions to prompt the reader while assessing the study’s quality. 9 Examples of tools commonly used in health professions are listed below in Table 7.2. Please note that this list is not exhaustive, as numerous appraisal tools exist. You can use any of these tools to appraise the quality of an article before choosing to use their findings to inform your own research or to change practice.

Table 7.2 Critical appraisal tools

An Introduction to Research Methods for Undergraduate Health Profession Students Copyright © 2023 by Faith Alele and Bunmi Malau-Aduli is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.

Critical Appraisal of Quantitative Research Article Essay Example


Critical appraisal example

Author(s), last name and first initial: Gladstone J.

Year of publication:  1994

The title of the article:  Drug administration errors: a study into the factors underlying the occurrence and reporting of drug errors in a district general hospital.

Journal name:  Journal of Advanced Nursing

Journal page number:  628-637

Appraise the article on medication administration by Gladstone and answer the following questions.

Discuss the following in 1-2 sentences. Do not respond with just yes or no.

1. How does the title of the research accurately describe what the study is about?

The study identifies the factors underlying the occurrence and reporting of errors in drug administration, and the title accurately reflects this focus.

2. Was the problem clearly stated with relevant background and justification for the study? (May be in the problem statement and/or the significance of the study.)

The problem is not well stated, but the background information provided justifies the need for the study; that is, previous research shows that errors occur in drug administration but does not explain the reasons for them.

3. Is the majority of the review of literature current? (References listed at the end of the article should be less than ten years old from the date of publication unless a classic study.)

Most of the references the researcher uses as sources are up to date, having been published less than ten years before the article's publication, and hence provide pertinent information.

4. What are the research questions and/or hypotheses?

The research questions are: what are the most common factors underlying drug administration errors among nurses? What are the outcomes of drug administration errors?

5. Are there operational definitions or defined terms that clarify the topic studied in the article?

There are no defined operational terms in this study. Operational definitions help preserve the unambiguous empirical testability of a problem.

6. What is the research design (descriptive, experimental, etc.) and how does it support the type of study?

The researcher used an experimental design, which supports the quantitative study by using statistical methods to answer the research questions.

7. Who was the sample, and how were data collected?

The sample comprised nurse managers and trained nurses. Data were collected using questionnaires and interviews.

8. Were inclusion or exclusion criteria described for this sample?

The inclusion and exclusion criteria were applied through random selection: every nurse between grades C and G had an equal chance of participation.

9. Describe if and how the subjects were protected from harm, and whether appropriate approval was granted and by whom.

The researcher ensured the protection of respondents by using handwritten notes to collect data, since some respondents felt that tape recording was too threatening.

10. What were the findings? Were the research questions answered?

Nurses who make drug errors feel guilty and lose clinical confidence, and nurse teachers feel they have failed because the education process did not prevent the mistake.

11. What statistical tests were used to analyze the data?

The data were analysed and presented descriptively, using pie charts, bar graphs, and tables.

12. How were tables and graphs actually used to describe the findings? Could you determine from the tables/graphs what the findings were?

The tables and graphs are self-explanatory and clear, with informative titles, and the data were presented in explicit, appropriate categories.

13. Does the discussion section state whether the findings from this study support or refute previous studies?

The results do not state any rebuttal of or support for previous studies; the author just gives recommendations.

14. Are generalizations made to other populations based on the findings from this study?

Yes; the discussion presents ideas from a general perspective based on the study.

15. Are there suggestions/recommendations made by the researcher(s) for nursing practice, education and/or research?

The author recommends nurses and managers be given a correct definition of drug errors. There should also be recognition and monitoring of the errors. Additionally, the author suggests promotion of accurate reporting, support, and education.

Critical Appraisal of Qualitative Research Article

Author(s), last name and first initial:  Baker H.M.

Year of publication:  1997

Title of article:  Rules outside the rules for administration of medication

Journal name:  Journal of Nursing Scholarship

Journal page numbers:  155-158

Appraise the online article on medication administration by Baker and answer the questions below.

Discuss the following questions in 1-2 sentences. Do not respond with just yes or no.

1. How does the title of the research accurately describe what the study is about?

It describes the autonomy that nurses take over drug administration: the rules that nurses draft outside the norms of drug administration.

2. Was the problem clearly stated with pertinent background and justification for the study? (May be in the problem statement and/or the significance of the study.)

The problem was clearly stated with relevant context: having worked as a nurse, Baker is aware that nurses may under-report errors or depart from institutional policies. This is what the study explores, which justifies the problem.

3. Is the majority of the review of literature current? (References listed at the end of the article should be less than ten years old from the date of publication unless a classic study.) Do the references cited support the study?

Most of the sources used for the study are outdated, as they were more than ten years old at the date of publication. Some of the references cited support the study, while others are irrelevant.

4. Are there operational definitions or terms defined and how do they clarify the study?

There are no operational definitions in this study. Operational definitions help preserve the unambiguous empirical testability of a problem.


5. What is the research design and how does it support a qualitative study?

A qualitative descriptive research design was used; the research questions were drawn from gaps left by previous studies.

6. Who was the sample, how was data collected and were there inclusion or exclusions   described?

Nurses in one large provincial hospital were the sample and the method of data collection was participant observation. There were no inclusion or exclusion criteria outlined in the study.

7. Describe if and how the subjects were protected from harm, and whether appropriate approval was granted and by whom.

The subjects' privacy was protected by keeping data confidential and by telling participants the truth about the scope of the research.

8. Did the methodology use one of the researchers as the instrument to interview or observe the participants?

Yes; the ethnomethodological approach used the researcher as the instrument, seeking to understand the problem at hand through observation and interviews with the respondents.

9. How large was the sample, how was its size determined, and was it adequate?

The sample comprised three wards in a large provincial acute-care hospital. This sample was inadequate, since a large hospital has many wards with many nurses, and three wards would not be representative of the whole.


10. How were data analyzed?

The researcher provided feedback for the data collected.

11. Were the descriptions or responses bracketed, categorized or divided in any way?

The responses were divided into three categories: the first concerned results logic, the second was criteria for redefining errors, and the third comprised serendipitous findings.

12. What were the findings? Were the research questions answered?

The findings were that nurses exercised autonomy over what counts as a real error. The research questions were answered by providing an understanding of how nurses define or redefine medication errors.

13. Does the discussion state whether the findings from this study support or refute previous studies?

No. The discussion just elaborates on the findings of the study.

14. Are generalizations made to other populations based on the research from this study?

Yes; the discussion presents ideas from a general perspective based on the research.

15. Are there suggestions/recommendations made by the researcher(s) for nursing practice, education and/or research?

Yes. The researcher recommends that nurses contribute more to official strategic rules for the administration of medication, so that time-critical and non-time-critical medications can be differentiated.


