Graduate Case Analysis Rubric

A grading rubric for case analysis by graduate students, part of the "Genomics, Ethics and Society" course.

Your case analysis will be evaluated based on the rubric below.


This material is based upon work supported by the National Science Foundation under Award No. 2055332. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


Case Studies

Case studies (also called "case histories") are descriptions of real situations that provide a context for engineers and others to explore decision-making in the face of socio-technical issues, such as environmental, political, and ethical issues. Case studies typically involve complex issues with no single correct answer; a student analyzing a case study may be asked to select the "best" answer given the situation 1. A case study is not a demonstration of a valid or "best" decision or solution; on the contrary, unsuccessful or incomplete attempts at a solution are often included in the written account 2.

The process of analyzing a case study encourages several learning tasks:

  • Exploring the nature of a problem and circumstances that affect a decision or solution
  • Learning about others' viewpoints and how they may be taken into account
  • Learning about one's own viewpoint
  • Defining one's own priorities
  • Making one's own decisions to solve a problem
  • Predicting outcomes and consequences 1

Student Learning Outcomes in Ethics

Most available engineering case studies pertain to engineering ethics. After a two-year study of education in ethics sponsored by the Hastings Center, an interdisciplinary group agreed on five main outcomes for student learning in ethics:

  • Sensitivity to ethical issues, sometimes called "developing a moral imagination," or the awareness of the needs of others and that there is an ethical point of view;
  • Recognition of ethical issues, or the ability to see the ethical implications of specific situations and choices;
  • Ability to analyze and critically evaluate ethical dilemmas, including an understanding of competing values, and the ability to scrutinize options for resolution;
  • Ethical responsibility, or the ability to make a decision and take action;
  • Tolerance for ambiguity, or the recognition that there may be no single ideal solution to ethically problematic situations 2.

These outcomes would make an excellent list of attributes for designing a rubric for a case analysis.
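As a quick illustration of that idea, the sketch below encodes the five outcomes as rubric criteria in a small data structure. The weights are invented for illustration and are not part of the Hastings Center report.

```python
# Hypothetical sketch: the five ethics learning outcomes as rubric criteria.
# The weights are illustrative only; adjust them to fit the assignment.
ETHICS_CRITERIA = [
    {"name": "Sensitivity to ethical issues", "weight": 0.20},
    {"name": "Recognition of ethical issues", "weight": 0.20},
    {"name": "Analysis and evaluation of ethical dilemmas", "weight": 0.25},
    {"name": "Ethical responsibility (decision and action)", "weight": 0.20},
    {"name": "Tolerance for ambiguity", "weight": 0.15},
]

# The weights should sum to 1 so that per-criterion scores can be combined cleanly.
assert abs(sum(c["weight"] for c in ETHICS_CRITERIA) - 1.0) < 1e-9
```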

Ideas for Case Study Assignments

To assign a case analysis, an instructor needs:

  • skill in analyzing a case (and the ability to model that process for students)
  • skill in managing classroom discussion of a case
  • a case study
  • a specific assignment that will guide students' case analyses, and
  • a rubric for scoring students' case analyses.

Below are ideas for each of these five aspects of teaching with case studies. It is also worth considering how not to teach a case study.

1. Skill in analyzing a case

For many engineering instructors, analyzing cases is an unfamiliar task. One way to build the skill is to review completed case analyses carefully, using the generic guidelines for case analysis assignments (#4 below). A few completed case analyses are available:

Five example analyses of an engineering case study

Case study part 1 [Unger, S. The BART case: ethics and the employed engineer. IEEE CSIT Newsletter, Issue 4, September 1973, p. 6.]

Case study part 2 [Friedlander, G. The case of the three engineers vs. BART. IEEE Spectrum, October 1974, pp. 69-76.]

Case study part 3 [Friedlander, G. Bigger Bugs in BART? IEEE Spectrum, March 1973, pp. 32, 35, 37.]

Case study with an example analysis

2. Skill in managing classroom discussion of a case

Managing classroom discussion of a case study requires planning.

Suggestions for using engineering cases in the classroom

Guidelines for leading classroom discussion of case studies

3. Case studies

Case studies should be complex and realistic enough to be challenging, yet manageable within the available time. Creating case studies is time-consuming, but a large number of engineering case studies are available online.

Online Case Libraries

Case Studies in Technology, Ethics, Environment, and Public Policy

Teaching Engineering Ethics: A Case Study Approach

The Online Ethics Center for Engineering and Science

Ethics Cases

The Engineering Case Library

Cases and Teaching Tips

4. A specific assignment that will guide students' case analyses

There are several types of case study assignment:

Nine approaches to using case studies for teaching

Written Case Analysis

Case Discussions

Case analyses typically include answering questions such as:

What kinds of problems are inherent in the situation?

Describe the socio-technical situation sufficiently to enable listeners (or readers) to understand the situation faced by the central character in the case.

Identify and characterize the issue or conflict central to the situation. Identify the parties involved in the situation. Describe the origins, structure, and trajectory of the conflict.

Evaluate the strengths and weaknesses of the arguments made by each party.

How would these problems affect the outcomes of the situation?

Describe the possible actions that could have been taken by the central character in the case.

Describe, for each possible action, what the potential outcomes might be for each party involved.

Describe what action was actually taken and the outcomes for each party involved.

How would you solve these problems? Why?

Describe the action you would take if you were the central character in the case. Explain why.

What should the central character in the situation do? Why?

Describe the action you think that the central character in the case should take. Explain why.

What can be learned from this case?

Delineate the lessons about ethical (or other) issues in engineering that are illuminated by this case.

This list is adapted from two online case analysis assignments by McGinn 3, 4.

5. A rubric for scoring students' case analyses

Case studies help students explore decision-making in the face of complex socio-technical issues. Thus, for an engineering ethics case study, the outcomes that can be assessed by scoring case analyses are a) sensitivity to ethical issues, b) recognition of ethical issues, c) the ability to analyze and critically evaluate ethical dilemmas, d) the ability to make an ethical decision and take action, and e) tolerance for ambiguity. Scoring rubrics for ethics case analyses should address these outcomes, not basic knowledge of the ethical standards of the profession. Professional standards are better assessed by a traditional graded exam in which students must demonstrate, for example, which practices are ethically acceptable and which violate ethical standards in a hypothetical scenario 5.

Making Scoring/Grading Useful for Assessment

General principles for making scoring/grading useful for assessment (rubrics)

Example rubrics

For a written analysis of a case study in engineering

For a written analysis of a case study in general #1

For a written analysis of a case study in general #2

For a written and oral analysis of a case study by a group

For an oral analysis of a case study by a group

For a written analysis of a case study on ethics

For a self-assessment of learning from a case study


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks equally important, or are some more important than others?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric presents all the criteria (such as clarity, organization, and mechanics) to be considered together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback
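To make the weighting mentioned above concrete, here is a minimal sketch of how an analytic rubric might be represented and scored in code; the criterion names, weights, maximum points, and example ratings are all invented for illustration.

```python
# Minimal sketch of an analytic rubric with weighted criteria.
# All criterion names, weights, and ratings are hypothetical.
ANALYTIC_RUBRIC = {
    "Problem identification":   {"weight": 0.25, "max_points": 4},
    "Stakeholder analysis":     {"weight": 0.25, "max_points": 4},
    "Evaluation of options":    {"weight": 0.30, "max_points": 4},
    "Writing and organization": {"weight": 0.20, "max_points": 4},
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings into a single percentage score."""
    total = 0.0
    for criterion, spec in ANALYTIC_RUBRIC.items():
        total += spec["weight"] * ratings[criterion] / spec["max_points"]
    return 100 * total

# Example: one student's ratings on a 4-point scale.
print(weighted_score({
    "Problem identification": 3,
    "Stakeholder analysis": 4,
    "Evaluation of options": 2,
    "Writing and organization": 3,
}))  # -> 73.75
```

Because each criterion is scored individually, the same structure also supports reporting per-criterion feedback alongside the overall grade.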

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Students may be more likely to read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove the focus from the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google, “Rubric for persuasive essay at the college level,” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point, but work through the remaining steps below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, and similar sources for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • For each draft criterion, ask: Can it be observed and measured? Is it important and essential? Is it distinct from other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them.

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels allow for more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized, so pick the number of columns that lets the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools such as ChatGPT can be useful for drafting a rubric. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.
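As a rough sketch of that workflow, the snippet below sends such a prompt to an AI assistant. It assumes the openai Python package (v1.x) and an API key are available, the model name is a placeholder, and the generated descriptions should always be reviewed and edited by the instructor.

```python
# Hypothetical sketch: ask an AI assistant to draft level descriptions.
# Assumes the `openai` package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft an analytic rubric for a written case study analysis. "
    "Criteria: problem identification, stakeholder analysis, evaluation of "
    "options, and writing quality. Use four performance levels (Exemplary, "
    "Proficient, Developing, Beginning) and describe observable behaviors "
    "at each level."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Treat the output as a starting point only; revise the wording by hand.
print(response.choices[0].message.content)
```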

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work). You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs absence, complete vs incomplete, many vs none, major vs minor, consistent vs inconsistent, always vs never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric, do this for each criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.
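One way to keep the levels parallel is to store the rubric as a criterion-by-level table and check that no cell is left empty. The sketch below does exactly that, with invented criteria, levels, and descriptor text.

```python
# Sketch: verify that every criterion has a descriptor at every performance level.
# Criteria, levels, and descriptor text are illustrative only.
LEVELS = ["Exemplary", "Proficient", "Developing", "Beginning"]

DESCRIPTORS = {
    "Problem identification": {
        "Exemplary":  "Identifies all central and secondary issues in the case.",
        "Proficient": "Identifies the central issue and most secondary issues.",
        "Developing": "Identifies the central issue only.",
        "Beginning":  "Does not identify the central issue.",
    },
    "Evaluation of options": {
        "Exemplary":  "Weighs several options with evidence for each party.",
        "Proficient": "Weighs more than one option with some evidence.",
        "Developing": "Considers a single option with little evidence.",
        # "Beginning" is intentionally missing so the check below fires.
    },
}

for criterion, cells in DESCRIPTORS.items():
    missing = [level for level in LEVELS if level not in cells]
    if missing:
        print(f"'{criterion}' is missing descriptors for: {', '.join(missing)}")
```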

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric.
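Because the rubric usually has to be re-entered wherever it is used, keeping a structured master copy can save retyping. The sketch below writes a criterion-by-level grid to a CSV file that can be opened in a spreadsheet or used as the source when typing the rubric into Moodle; the file name and cell text are placeholders.

```python
# Sketch: export a rubric grid to CSV so it can be shared or copied into an LMS.
# The rubric content and file name are placeholders.
import csv

LEVELS = ["Exemplary", "Proficient", "Developing", "Beginning"]
RUBRIC = {
    "Problem identification": [
        "All issues identified", "Most issues identified",
        "Central issue only", "No issues identified",
    ],
    "Writing and organization": [
        "Clear and well organized", "Mostly clear",
        "Often unclear", "Disorganized",
    ],
}

with open("case_study_rubric.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["Criterion"] + LEVELS)
    for criterion, descriptors in RUBRIC.items():
        writer.writerow([criterion] + descriptors)
```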

Step 8: Pilot-test your rubric

Prior to implementing your rubric in a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.
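If more than one rater scores the same pilot set, a quick agreement check can flag criteria whose descriptors are being read differently. The sketch below compares two raters' scores using exact agreement and Cohen's kappa; the scores are invented, and scikit-learn is assumed to be installed.

```python
# Sketch: compare two raters' pilot-test scores on the same student submissions.
# Scores are invented; assumes scikit-learn is installed for Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 3, 2, 4, 1, 3, 2]  # one overall score per submission
rater_b = [4, 3, 2, 2, 4, 2, 3, 2]

exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement

print(f"Exact agreement: {exact_agreement:.0%}")  # 75%
print(f"Cohen's kappa: {kappa:.2f}")
```

Low agreement on a particular criterion is usually a sign that its descriptors need sharper, more observable language.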

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Example of a single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In Sanger, C. & Gleason, N. (eds), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.


A meta-analysis on global change drivers and the risk of infectious disease

Michael B. Mahon, Alexandra Sack, O. Alejandro Aleuy, Carly Barbera, Ethan Brown, Heather Buelow, David J. Civitello, Jeremy M. Cohen, Luz A. de Wit, Meghan Forstchen, Fletcher W. Halliday, Patrick Heffernan, Sarah A. Knutie, Alexis Korotasz, Joanna G. Larson, Samantha L. Rumschlag, Emily Selland, Alexander Shepack, Nitin Vincent & Jason R. Rohr


Anthropogenic change is contributing to the rise in emerging infectious diseases, which are significantly correlated with socioeconomic, environmental and ecological factors 1 . Studies have shown that infectious disease risk is modified by changes to biodiversity 2 , 3 , 4 , 5 , 6 , climate change 7 , 8 , 9 , 10 , 11 , chemical pollution 12 , 13 , 14 , landscape transformations 15 , 16 , 17 , 18 , 19 , 20 and species introductions 21 . However, it remains unclear which global change drivers most increase disease and under what contexts. Here we amassed a dataset from the literature that contains 2,938 observations of infectious disease responses to global change drivers across 1,497 host–parasite combinations, including plant, animal and human hosts. We found that biodiversity loss, chemical pollution, climate change and introduced species are associated with increases in disease-related end points or harm, whereas urbanization is associated with decreases in disease end points. Natural biodiversity gradients, deforestation and forest fragmentation are comparatively unimportant or idiosyncratic as drivers of disease. Overall, these results are consistent across human and non-human diseases. Nevertheless, context-dependent effects of the global change drivers on disease were found to be common. The findings uncovered by this meta-analysis should help target disease management and surveillance efforts towards global change drivers that increase disease. Specifically, reducing greenhouse gas emissions, managing ecosystem health, and preventing biological invasions and biodiversity loss could help to reduce the burden of plant, animal and human diseases, especially when coupled with improvements to social and economic determinants of health.


Data availability.

All the data for this Article have been deposited at Zenodo ( https://doi.org/10.5281/zenodo.8169979 ) 52 and GitHub ( https://github.com/mahonmb/GCDofDisease ) 53 .

Code availability

All the code for this Article has been deposited at Zenodo ( https://doi.org/10.5281/zenodo.8169979 ) 52 and GitHub ( https://github.com/mahonmb/GCDofDisease ) 53 . R markdown is provided in Supplementary Data 1 .

Jones, K. E. et al. Global trends in emerging infectious diseases. Nature 451 , 990–994 (2008).


Civitello, D. J. et al. Biodiversity inhibits parasites: broad evidence for the dilution effect. Proc. Natl Acad. Sci USA 112 , 8667–8671 (2015).

Halliday, F. W., Rohr, J. R. & Laine, A.-L. Biodiversity loss underlies the dilution effect of biodiversity. Ecol. Lett. 23 , 1611–1622 (2020).


Rohr, J. R. et al. Towards common ground in the biodiversity–disease debate. Nat. Ecol. Evol. 4 , 24–33 (2020).


Johnson, P. T. J., Ostfeld, R. S. & Keesing, F. Frontiers in research on biodiversity and disease. Ecol. Lett. 18 , 1119–1133 (2015).

Keesing, F. et al. Impacts of biodiversity on the emergence and transmission of infectious diseases. Nature 468 , 647–652 (2010).

Cohen, J. M., Sauer, E. L., Santiago, O., Spencer, S. & Rohr, J. R. Divergent impacts of warming weather on wildlife disease risk across climates. Science 370 , eabb1702 (2020).


Rohr, J. R. et al. Frontiers in climate change-disease research. Trends Ecol. Evol. 26 , 270–277 (2011).

Altizer, S., Ostfeld, R. S., Johnson, P. T. J., Kutz, S. & Harvell, C. D. Climate change and infectious diseases: from evidence to a predictive framework. Science 341 , 514–519 (2013).


Rohr, J. R. & Cohen, J. M. Understanding how temperature shifts could impact infectious disease. PLoS Biol. 18 , e3000938 (2020).

Carlson, C. J. et al. Climate change increases cross-species viral transmission risk. Nature 607 , 555–562 (2022).

Halstead, N. T. et al. Agrochemicals increase risk of human schistosomiasis by supporting higher densities of intermediate hosts. Nat. Commun. 9 , 837 (2018).


Martin, L. B., Hopkins, W. A., Mydlarz, L. D. & Rohr, J. R. The effects of anthropogenic global changes on immune functions and disease resistance. Ann. N. Y. Acad. Sci. 1195 , 129–148 (2010).

Rumschlag, S. L. et al. Effects of pesticides on exposure and susceptibility to parasites can be generalised to pesticide class and type in aquatic communities. Ecol. Lett. 22 , 962–972 (2019).

Allan, B. F., Keesing, F. & Ostfeld, R. S. Effect of forest fragmentation on Lyme disease risk. Conserv. Biol. 17 , 267–272 (2003).


Brearley, G. et al. Wildlife disease prevalence in human‐modified landscapes. Biol. Rev. 88 , 427–442 (2013).

Rohr, J. R. et al. Emerging human infectious diseases and the links to global food production. Nat. Sustain. 2 , 445–456 (2019).

Bradley, C. A. & Altizer, S. Urbanization and the ecology of wildlife diseases. Trends Ecol. Evol. 22 , 95–102 (2007).

Allen, T. et al. Global hotspots and correlates of emerging zoonotic diseases. Nat. Commun. 8 , 1124 (2017).

Sokolow, S. H. et al. Ecological and socioeconomic factors associated with the human burden of environmentally mediated pathogens: a global analysis. Lancet Planet. Health 6 , e870–e879 (2022).

Young, H. S., Parker, I. M., Gilbert, G. S., Guerra, A. S. & Nunn, C. L. Introduced species, disease ecology, and biodiversity–disease relationships. Trends Ecol. Evol. 32 , 41–54 (2017).

Barouki, R. et al. The COVID-19 pandemic and global environmental change: emerging research needs. Environ. Int. 146 , 106272 (2021).


Nova, N., Athni, T. S., Childs, M. L., Mandle, L. & Mordecai, E. A. Global change and emerging infectious diseases. Ann. Rev. Resour. Econ. 14 , 333–354 (2021).

Zhang, L. et al. Biological invasions facilitate zoonotic disease emergences. Nat. Commun. 13 , 1762 (2022).

Olival, K. J. et al. Host and viral traits predict zoonotic spillover from mammals. Nature 546 , 646–650 (2017).

Guth, S. et al. Bats host the most virulent—but not the most dangerous—zoonotic viruses. Proc. Natl Acad. Sci. USA 119 , e2113628119 (2022).

Nelson, G. C. et al. in Ecosystems and Human Well-Being (Millennium Ecosystem Assessment) Vol. 2 (eds Rola, A. et al) Ch. 7, 172–222 (Island Press, 2005).

Read, A. F., Graham, A. L. & Raberg, L. Animal defenses against infectious agents: is damage control more important than pathogen control? PLoS Biol. 6 , 2638–2641 (2008).


Medzhitov, R., Schneider, D. S. & Soares, M. P. Disease tolerance as a defense strategy. Science 335 , 936–941 (2012).

Torchin, M. E. & Mitchell, C. E. Parasites, pathogens, and invasions by plants and animals. Front. Ecol. Environ. 2 , 183–190 (2004).

Bellay, S., de Oliveira, E. F., Almeida-Neto, M. & Takemoto, R. M. Ectoparasites are more vulnerable to host extinction than co-occurring endoparasites: evidence from metazoan parasites of freshwater and marine fishes. Hydrobiologia 847 , 2873–2882 (2020).

Scheffer, M. Critical Transitions in Nature and Society Vol. 16 (Princeton Univ. Press, 2020).

Rohr, J. R. et al. A planetary health innovation for disease, food and water challenges in Africa. Nature 619 , 782–787 (2023).

Reaser, J. K., Witt, A., Tabor, G. M., Hudson, P. J. & Plowright, R. K. Ecological countermeasures for preventing zoonotic disease outbreaks: when ecological restoration is a human health imperative. Restor. Ecol. 29 , e13357 (2021).

Hopkins, S. R. et al. Evidence gaps and diversity among potential win–win solutions for conservation and human infectious disease control. Lancet Planet. Health 6 , e694–e705 (2022).

Mitchell, C. E. & Power, A. G. Release of invasive plants from fungal and viral pathogens. Nature 421 , 625–627 (2003).

Chamberlain, S. A. & Szöcs, E. taxize: taxonomic search and retrieval in R. F1000Research 2 , 191 (2013).

Newman, M. Fundamentals of Ecotoxicology (CRC Press/Taylor & Francis Group, 2010).

Rohatgi, A. WebPlotDigitizer v.4.5 (2021); automeris.io/WebPlotDigitizer .

Lüdecke, D. esc: effect size computation for meta analysis (version 0.5.1). Zenodo https://doi.org/10.5281/zenodo.1249218 (2019).

Lipsey, M. W. & Wilson, D. B. Practical Meta-Analysis (SAGE, 2001).

R Core Team. R: A Language and Environment for Statistical Computing Vol. 2022 (R Foundation for Statistical Computing, 2020); www.R-project.org/ .

Viechtbauer, W. Conducting meta-analyses in R with the metafor package. J. Stat. Softw. 36 , 1–48 (2010).

Pustejovsky, J. E. & Tipton, E. Meta-analysis with robust variance estimation: Expanding the range of working models. Prev. Sci. 23 , 425–438 (2022).

Lenth, R. emmeans: estimated marginal means, aka least-squares means. R package v.1.5.1 (2020).

Bartoń, K. MuMIn: multi-model inference. Model selection and model averaging based on information criteria (AICc and alike) (2019).

Burnham, K. P. & Anderson, D. R. Multimodel inference: understanding AIC and BIC in model selection. Sociol. Methods Res. 33 , 261–304 (2004).


Marks‐Anglin, A. & Chen, Y. A historical review of publication bias. Res. Synth. Methods 11 , 725–742 (2020).

Nakagawa, S. et al. Methods for testing publication bias in ecological and evolutionary meta‐analyses. Methods Ecol. Evol. 13 , 4–21 (2022).

Gurevitch, J., Koricheva, J., Nakagawa, S. & Stewart, G. Meta-analysis and the science of research synthesis. Nature 555 , 175–182 (2018).

Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67 , 1–48 (2015).

Mahon, M. B. et al. Data and code for ‘A meta-analysis on global change drivers and the risk of infectious disease’. Zenodo https://doi.org/10.5281/zenodo.8169979 (2024).

Mahon, M. B. et al. Data and code for ‘A meta-analysis on global change drivers and the risk of infectious disease’. GitHub github.com/mahonmb/GCDofDisease (2024).


Acknowledgements

We thank C. Mitchell for contributing data on enemy release; L. Albert and B. Shayhorn for assisting with data collection; J. Gurevitch, M. Lajeunesse and G. Stewart for providing comments on an earlier version of this manuscript; and C. Carlson and two anonymous reviewers for improving this paper. This research was supported by grants from the National Science Foundation (DEB-2109293, DEB-2017785, DEB-1518681, IOS-1754868), National Institutes of Health (R01TW010286) and US Department of Agriculture (2021-38420-34065) to J.R.R.; a US Geological Survey Powell grant to J.R.R. and S.L.R.; University of Connecticut Start-up funds to S.A.K.; grants from the National Science Foundation (IOS-1755002) and National Institutes of Health (R01 AI150774) to D.J.C.; and an Ambizione grant (PZ00P3_202027) from the Swiss National Science Foundation to F.W.H. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

These authors contributed equally: Michael B. Mahon, Alexandra Sack, Jason R. Rohr

Authors and Affiliations

Department of Biological Sciences, University of Notre Dame, Notre Dame, IN, USA

Michael B. Mahon, Alexandra Sack, O. Alejandro Aleuy, Carly Barbera, Ethan Brown, Heather Buelow, Luz A. de Wit, Meghan Forstchen, Patrick Heffernan, Alexis Korotasz, Joanna G. Larson, Samantha L. Rumschlag, Emily Selland, Alexander Shepack, Nitin Vincent & Jason R. Rohr

Environmental Change Initiative, University of Notre Dame, Notre Dame, IN, USA

Michael B. Mahon, Samantha L. Rumschlag & Jason R. Rohr

Eck Institute of Global Health, University of Notre Dame, Notre Dame, IN, USA

Alexandra Sack, Meghan Forstchen, Emily Selland & Jason R. Rohr

Department of Biology, Emory University, Atlanta, GA, USA

David J. Civitello

Department of Ecology and Evolutionary Biology, Yale University, New Haven, CT, USA

Jeremy M. Cohen

Department of Botany and Plant Pathology, Oregon State University, Corvallis, OR, USA

Fletcher W. Halliday

Department of Ecology and Evolutionary Biology, Institute for Systems Genomics, University of Connecticut, Storrs, CT, USA

Sarah A. Knutie


Contributions

J.R.R. conceptualized the study. All of the authors contributed to the methodology. All of the authors contributed to investigation. Visualization was performed by M.B.M. The initial study list and related information were compiled by D.J.C., J.M.C., F.W.H., S.A.K., S.L.R. and J.R.R. Data extraction was performed by M.B.M., A.S., O.A.A., C.B., E.B., H.B., L.A.d.W., M.F., P.H., A.K., J.G.L., E.S., A.S. and N.V. Data were checked for accuracy by M.B.M. and A.S. Analyses were performed by M.B.M. and J.R.R. Funding was acquired by D.J.C., J.R.R., S.A.K. and S.L.R. Project administration was done by J.R.R. J.R.R. supervised the study. J.R.R. and M.B.M. wrote the original draft. All of the authors reviewed and edited the manuscript. J.R.R. and M.B.M. responded to reviewers.

Corresponding author

Correspondence to Jason R. Rohr .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Colin Carlson and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 PRISMA flowchart.

The PRISMA flow diagram of the search and selection of studies included in this meta-analysis. Note that 77 studies came from the Halliday et al. 3 database on biodiversity change.

Extended Data Fig. 2 Summary of the number of studies (A-F) and parasite taxa (G-L) in the infectious disease database across ecological contexts.

The contexts are global change driver ( A , G ), parasite taxa ( B , H ), host taxa ( C , I ), experimental venue ( D , J ), study habitat ( E , K ), and human parasite status ( F , L ).

Extended Data Fig. 3 Summary of the number of effect sizes (A-I), studies (J-R), and parasite taxa (S-a) in the infectious disease database for various parasite and host contexts.

Shown are parasite type ( A , J , S ), host thermy ( B , K , T ), vector status ( C , L , U ), vector-borne status ( D , M , V ), parasite transmission ( E , N , W ), free living stages ( F , O , X ), host (e.g. disease, host growth, host survival) or parasite (e.g. parasite abundance, prevalence, fecundity) endpoint ( G , P , Y ), micro- vs macroparasite ( H , Q , Z ), and zoonotic status ( I , R , a ).

Extended Data Fig. 4 The effects of global change drivers and subsequent subcategories on disease responses with Log Response Ratio instead of Hedge’s g.

Here, Log Response Ratio shows similar trends to that of Hedge’s g presented in the main text. The displayed points represent the mean predicted values (with 95% confidence intervals) from a meta-analytical model with separate random intercepts for study. Points that do not share letters are significantly different from one another (p < 0.05) based on a two-sided Tukey’s posthoc multiple comparison test with adjustment for multiple comparisons. See Table S 3 for pairwise comparison results. Effects of the five common global change drivers ( A ) have the same directionality, similar magnitude, and significance as those presented in Fig. 2 . Global change driver effects are significant when confidence intervals do not overlap with zero and explicitly tested with two-tailed t-test (indicated by asterisks; t 80.62  = 2.16, p = 0.034 for CP; t 71.42  = 2.10, p = 0.039 for CC; t 131.79  = −3.52, p < 0.001 for HLC; t 61.9  = 2.10, p = 0.040 for IS). The subcategories ( B ) also show similar patterns as those presented in Fig. 3 . Subcategories are significant when confidence intervals do not overlap with zero and were explicitly tested with two-tailed one sample t-test (t 30.52  = 2.17, p = 0.038 for CO 2 ; t 40.03  = 4.64, p < 0.001 for Enemy Release; t 47.45  = 2.18, p = 0.034 for Mean Temperature; t 110.81  = −4.05, p < 0.001 for Urbanization); all other subcategories have p > 0.20. Note that effect size and study numbers are lower here than in Figs. 3 and 4 , because log response ratios cannot be calculated for studies that provide coefficients (e.g., odds ratio) rather than raw data; as such, all observations within BC did not have associated RR values. Despite strong differences in sample size, patterns are consistent across effect sizes, and therefore, we can be confident that the results presented in the main text are not biased because of effect size selection.
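For readers unfamiliar with the two effect size metrics named in this legend, the sketch below computes both from group summary statistics using their standard textbook formulas. The numbers are invented, and this is not the authors' code; their analysis scripts are provided as R markdown in Supplementary Data 1.

```python
# Sketch: standard formulas for Hedge's g and the log response ratio,
# computed from group means, SDs, and sample sizes (all numbers invented).
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with the small-sample correction factor."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample (Hedges) correction
    return j * d

def log_response_ratio(treatment_mean, control_mean):
    """Natural log of the treatment/control ratio; means must be positive."""
    return math.log(treatment_mean / control_mean)

# Example: infection prevalence under a global change treatment vs. a control.
print(hedges_g(0.42, 0.10, 20, 0.30, 0.12, 20))  # positive g: more disease
print(log_response_ratio(0.42, 0.30))            # positive ratio: more disease
```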

Extended Data Fig. 5 Average standard errors of the effect sizes (A) and sample sizes per effect size (B) for each of the five global change drivers.

The displayed points represent the mean predicted values (with 95% confidence intervals) from the generalized linear mixed effects models with separate random intercepts for study (Gaussian distribution for standard error model, A ; Poisson distribution for sample size model, B ). Points that do not share letters are significantly different from one another (p < 0.05) based on a two-sided Tukey’s posthoc multiple comparison test with adjustment for multiple comparisons. Sample sizes (number of studies, n, and effect sizes, k) for each driver are as follows: n = 77, k = 392 for BC; n = 124, k = 364 for CP; n = 202, k = 380 for CC; n = 517, k = 1449 for HLC; n = 96, k = 355 for IS.

Extended Data Fig. 6 Forest plots of effect sizes, associated variances, and relative weights (A), Funnel plots (B), and Egger’s Test plots (C) for each of the five global change drivers and leave-one-out publication bias analyses (D).

In panel A , points are the individual effect sizes (Hedge’s G), error bars are standard errors of the effect size, and size of the points is the relative weight of the observation in the model, with larger points representing observations with higher weight in the model. Sample sizes are provided for each effect size in the meta-analytic database. Effect sizes were plotted in a random order. Egger’s tests indicated significant asymmetries (p < 0.05) in Biodiversity Change (worst asymmetry – likely not bias, just real effect of positive relationship between diversity and disease), Climate Change – (weak asymmetry, again likely not bias, climate change generally increases disease), and Introduced Species (relatively weak asymmetry – unclear whether this is a bias, may be driven by some outliers). No significant asymmetries (p > 0.05) were found in Chemical Pollution and Habitat Loss/Change, suggesting negligible publication bias in reported disease responses across these global change drivers ( B , C ). Egger’s test included publication year as moderator but found no significant relationship between Hedge’s g and publication year (p > 0.05) implying no temporal bias in effect size magnitude or direction. In panel D , the horizontal red lines denote the grand mean and SE of Hedge’s g and (g = 0.1009, SE = 0.0338). Grey points and error bars indicate the Hedge’s g and SEs, respectively, using the leave-one-out method (grand mean is recalculated after a given study is removed from dataset). While the removal of certain studies resulted in values that differed from the grand mean, all estimated Hedge’s g values fell well within the standard error of the grand mean. This sensitivity analysis indicates that our results were robust to the iterative exclusion of individual studies.

Extended Data Fig. 7 The effects of habitat loss/change on disease depend on parasite taxa and land use conversion contexts.

A) Enemy type influences the magnitude of the effect of urbanization on disease: helminths, protists, and arthropods were all negatively associated with urbanization, whereas viruses were non-significantly positively associated with urbanization. B) Reference (control) land use type influences the magnitude of the effect of urbanization on disease: disease was reduced in urban settings compared to rural and peri-urban settings, whereas there were no differences in disease along urbanization gradients or between urban and natural settings. C) The effect of forest fragmentation depends on whether a large/continuous habitat patch is compared to a small patch or whether disease is measured along an increasing fragmentation gradient (Z = −2.828, p = 0.005). Conversely, the effect of deforestation on disease does not depend on whether the habitat has been destroyed and allowed to regrow (e.g., clearcutting, second growth forests, etc.) or whether it has been replaced with agriculture (e.g., row crop, agroforestry, livestock grazing; Z = 1.809, p = 0.0705). The displayed points represent the mean predicted values (with 95% confidence intervals) from a metafor model where the response variable was a Hedge’s g (representing the effect on an infectious disease endpoint relative to control), study was treated as a random effect, and the independent variables included enemy type (A), reference land use type (B), or land use conversion type (C). Data for (A) and (B) were only those studies that were within the “urbanization” subcategory; data for (C) were only those studies that were within the “deforestation” and “forest fragmentation” subcategories. Sample sizes (number of studies, n, and effect sizes, k) in (A) for each enemy are n = 48, k = 98 for Virus; n = 193, k = 343 for Protist; n = 159, k = 490 for Helminth; n = 10, k = 24 for Fungi; n = 103, k = 223 for Bacteria; and n = 30, k = 73 for Arthropod. Sample sizes in (B) for each reference land use type are n = 391, k = 1073 for Rural; n = 29, k = 74 for Peri-urban; n = 33, k = 83 for Natural; and n = 24, k = 58 for Urban Gradient. Sample sizes in (C) for each land use conversion type are n = 7, k = 47 for Continuous Gradient; n = 16, k = 44 for High/Low Fragmentation; n = 11, k = 27 for Clearcut/Regrowth; and n = 21, k = 43 for Agriculture.

Extended Data Fig. 8 The effects of common global change drivers on mean infectious disease responses in the literature depends on whether the endpoint is the host or parasite; whether the parasite is a vector, is vector-borne, has a complex or direct life cycle, or is a macroparasite; whether the host is an ectotherm or endotherm; or the venue and habitat in which the study was conducted.

A ) Parasite endpoints. B ) Vector-borne status. C ) Parasite transmission route. D ) Parasite size. E ) Venue. F ) Habitat. G ) Host thermy. H ) Parasite type (ecto- or endoparasite). See Table S 2 for number of studies and effect sizes across ecological contexts and global change drivers. See Table S 3 for pairwise comparison results. The displayed points represent the mean predicted values (with 95% confidence intervals) from a metafor model where the response variable was a Hedge’s g (representing the effect on an infectious disease endpoint relative to control), study was treated as a random effect, and the independent variables included the main effects and an interaction between global change driver and the focal independent variable (whether the endpoint measured was a host or parasite, whether the parasite is vector-borne, has a complex or direct life cycle, is a macroparasite, whether the study was conducted in the field or lab, habitat, the host is ectothermic, or the parasite is an ectoparasite).

Extended Data Fig. 9 The effects of five common global change drivers on mean infectious disease responses in the literature only occasionally depend on location, host taxon, and parasite taxon.

A ) Continent in which the field study occurred. Lack of replication in chemical pollution precluded us from including South America, Australia, and Africa in this analysis. B ) Host taxa. C ) Enemy taxa. See Table S 2 for number of studies and effect sizes across ecological contexts and global change drivers. See Table S 3 for pairwise comparison results. The displayed points represent the mean predicted values (with 95% confidence intervals) from a metafor model where the response variable was a Hedge’s g (representing the effect on an infectious disease endpoint relative to control), study was treated as a random effect, and the independent variables included the main effects and an interaction between global change driver and continent, host taxon, and enemy taxon.

Extended Data Fig. 10 The effects of human vs. non-human endpoints for the zoonotic disease subset of database and wild vs. domesticated animal endpoints for the non-human animal subset of database are consistent across global change drivers.

(A) Zoonotic disease responses measured on human hosts responded less positively (closer to zero when positive, further from zero when negative) than those measured on non-human (animal) hosts (Z = 2.306, p = 0.021). Note, IS studies were removed because of missing cells. (B) Disease responses measured on domestic animal hosts responded less positively (closer to zero when positive, further from zero when negative) than those measured on wild animal hosts (Z = 2.636, p = 0.008). These results were consistent across global change drivers (i.e., no significant interaction between endpoint and global change driver). As many of the global change drivers increase zoonotic parasites in non-human animals and all parasites in wild animals, this may suggest that anthropogenic change might increase the occurrence of parasite spillover from animals to humans and thus also pandemic risk. The displayed points represent the mean predicted values (with 95% confidence intervals) from a metafor model where the response variable was a Hedge’s g (representing the effect on an infectious disease endpoint relative to control), study was treated as a random effect, and the independent variable of global change driver and human/non-human hosts. Data for (A) were only those diseases that are considered “zoonotic”; data for (B) were only those endpoints that were measured on non-human animals. Sample sizes in (A) for zoonotic disease measured on human endpoints across global change drivers are n = 3, k = 17 for BC; n = 2, k = 6 for CP; n = 25, k = 39 for CC; and n = 175, k = 331 for HLC. Sample sizes in (A) for zoonotic disease measured on non-human endpoints across global change drivers are n = 25, k = 52 for BC; n = 2, k = 3 for CP; n = 18, k = 29 for CC; n = 126, k = 289 for HLC. Sample sizes in (B) for wild animal endpoints across global change drivers are n = 28, k = 69 for BC; n = 21, k = 44 for CP; n = 50, k = 89 for CC; n = 121, k = 360 for HLC; and n = 29, k = 45 for IS. Sample sizes in (B) for domesticated animal endpoints across global change drivers are n = 2, k = 4 for BC; n = 4, k = 11 for CP; n = 7, k = 20 for CC; n = 78, k = 197 for HLC; and n = 1, k = 2 for IS.

Supplementary information

Supplementary information.

Supplementary Discussion, Supplementary References and Supplementary Tables 1–3.

Reporting Summary

Peer Review File

Supplementary Data 1

R markdown code and output associated with this paper.

Supplementary Table 4

EcoEvo PRISMA checklist.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article.

Mahon, M.B., Sack, A., Aleuy, O.A. et al. A meta-analysis on global change drivers and the risk of infectious disease. Nature (2024). https://doi.org/10.1038/s41586-024-07380-6

Download citation

Received : 02 August 2022

Accepted : 03 April 2024

Published : 08 May 2024

DOI : https://doi.org/10.1038/s41586-024-07380-6


