Center for Teaching and Learning

Step 4: Develop Assessment Criteria and Rubrics

Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments that cannot be graded through automation the way that multiple-choice tests can. Grading criteria articulate what is important in each assessment, what knowledge or skills students should be able to demonstrate, and how they can best communicate that to you. When you share grading criteria with students, you help them understand what to focus on and how to demonstrate their learning successfully. From good assessment criteria, you can develop a grading rubric.

Develop Your Assessment Criteria | Decide on a Rating Scale | Create the Rubric

Developing Your Assessment Criteria

Good assessment criteria are

  • Clear and easy to understand as a guide for students
  • Attainable rather than beyond students’ grasp at the current point in the course
  • Significant in terms of the learning students should demonstrate
  • Relevant in that they assess student learning toward the course objectives related to that assessment

To create your grading criteria, consider the following questions:

  • What is the most significant content or knowledge students should be able to demonstrate understanding of at this point in the course?
  • What specific skills, techniques, or applications should students be able to demonstrate at this point in the course?
  • What secondary skills or practices are important for students to demonstrate in this assessment? (for example, critical thinking, public speaking, or writing, as well as more abstract concepts such as completeness, creativity, precision, or problem-solving abilities)
  • Do the criteria align with the objectives for both the assessment and the course?

Once you have developed some ideas about the assessment’s grading criteria, double-check to make sure the criteria are observable, measurable, significant, and distinct from each other.

Assessment Criteria Example

Using the questions above, the performance criteria in the example below were designed for an assignment in which students had to create an explainer video about a scientific concept for a specified audience. Each element can be observed and measured based on both expert instructor and peer feedback, and each is significant because it relates to the course and assignment learning goals.


Additional Assessment Criteria Resources

  • Developing Grading Criteria (Vanderbilt University)
  • Creating Grading Criteria (Brown University)
  • Sample Criteria (Brown University)
  • Developing Grading Criteria (Temple University)

Decide on a Rating Scale

Deciding what scale you will use for an assessment depends on the type of learning you want students to demonstrate and the type of feedback you want to give students on this particular assignment or test. For example, for an introductory lab report early in the semester, you might not be as concerned with advanced levels of precision as much as correct displays of data and the tone of the report; therefore, grading heavily on copy editing or advanced analysis would not be appropriate. The criteria would likely be more rigorous by the end of the semester, as you build up to the advanced level you want students to reach in the course.

Rating scales turn the grading criteria you have defined into levels of performance expectations for the students that can then be interpreted as a letter, number, or level. Common rating scales include

  • A, B, C, etc. (with or without + and -)
  • 100-point scale with defined cut-offs for letter grades if desired (e.g., B = 80-89; or B+ = 87-89, B = 83-86, B- = 80-82)
  • Yes or no, present or not present (if the rubric is a checklist of items students must show)
  • Below expectations, meets expectations, exceeds expectations
  • Not demonstrated, poor, average, good, excellent

Once you have decided on a scale for the type of assignment and the learning you want students to demonstrate, you can use the scale to clearly articulate what each level of performance looks like, such as defining what A, B, C, etc. level work would look like for each grading criterion. What would distinguish a student who earns a B from one who earns a C? What would distinguish a student who excelled in demonstrating use of a tool from a student who clearly was not familiar with it? Write these distinctions out in descriptive notes or brief paragraphs.
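As a small illustration of how explicit cut-offs remove guesswork, the sketch below maps a numeric score onto a letter grade using boundaries like those in the example above; the boundaries, the extra A/C/D rows, and the helper name are illustrative assumptions, not a recommended scale.

```python
# Minimal sketch: translate a numeric score into a letter grade using
# explicit cut-offs. Boundaries mirror the B+/B/B- example above; the
# other rows are illustrative assumptions -- substitute your own scale.
GRADE_CUTOFFS = [
    (90, "A"),
    (87, "B+"),
    (83, "B"),
    (80, "B-"),
    (70, "C"),
    (60, "D"),
]

def letter_grade(score: float) -> str:
    """Return the letter grade whose cut-off the score meets or exceeds."""
    for cutoff, letter in GRADE_CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"

print(letter_grade(85))  # -> B
```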

Ethical Implications of Rating Scales

There are ethical implications in each of these types of rating scales. On a project worth 100 points, what is the objective difference between earning an 85 and an 87? On an exceeds/meets/does not meet scale, how can those levels be objectively applied? Different understandings of "fairness" can lead to several ways of grading that might disadvantage some students. Learn more about equitable grading practices here.

Create the Rubric

Rubrics Can Make Grading More Effective

  • Provide students with more complete and targeted feedback
  • Make grading more timely by enabling the provision of feedback soon after the assignment is submitted or presented.
  • Standardize assessment criteria among those assigning/assessing the same assignment.
  • Facilitate peer evaluation of early drafts of assignments.

Rubrics Can Help Student Learning

  • Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment
  • Level the playing field by clarifying academic expectations and assignments so that all students understand them regardless of their educational backgrounds (e.g., define what we expect analysis, critical thinking, or even introductions/conclusions to include)
  • Promote student independence and motivation by enabling self-assessment
  • Prepare students to use detailed feedback.

Rubrics Have Other Uses:

  • Track development of student skills over several assignments
  • Facilitate communication with others (e.g., TAs, communication center, tutors, other faculty, etc.)
  • Refine your own teaching skills (e.g., by responding to common areas of weakness or to feedback on how well teaching strategies are preparing students for their assignments).

In this video, CTL's Dr. Carol Subino Sullivan discusses the value of the different types of rubrics.

Many non-test-based assessments might seem daunting to grade, but a well-designed rubric can alleviate some of that work. A rubric is a table that usually has these parts:  

  • a clear description of the learning activity being assessed
  • criteria by which the activity will be evaluated
  • a rating scale identifying different levels of performance
  • descriptions of what a student must demonstrate to reach each level of performance.

When you define the criteria and what acceptable performance on each of those criteria looks like ahead of time, you can use the rubric to compare against student work and assign grades or points for each criterion accordingly. Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading.

Sample Rubrics

This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:


When using this rubric, which can be easily adapted to use a present/not present rating scale or a letter grade scale, you can use a combination of checking items off and adding written (or audio/video) comments in the different boxes to provide the student more detailed feedback. 

As a second example, this descriptive rubric was used to ask students to peer assess and self-assess their contributions to a collaborative project. The rating scale is 1 through 4, and each description of performance builds on the previous one. (See the full rubric with scales for both product and process here. This rubric was designed for students working in teams to assess their own contributions to the project as well as those of their peers.)


Building a Rubric in Canvas Assignments

You can create rubrics for assignments and discussion boards in Canvas. Review these Canvas guides for tips and tricks:

  • Rubrics Overview for Instructors
  • What are rubrics?
  • How do I align a rubric with a learning outcome?
  • How do I add a rubric to an assignment?
  • How do I add a rubric to a quiz?
  • How do I add a rubric to a graded discussion?
  • How do I use a rubric to grade submissions in SpeedGrader?
  • How do I manage rubrics in a course?

Additional Resources for Developing Rubrics

Designing Grading Rubrics (Brown University) Step-by-step process for creating an effective, fair, and efficient grading rubric.

Creating and Using Rubrics  (Carnegie Mellon University) Explores the basics of rubric design along with multiple examples for grading different types of assignments.

Using Rubrics  (Cornell University) Argument for the value of rubrics to support student learning.

Rubrics (University of California Berkeley) Shares "fun facts" about rubrics and links to rubric guidelines from many higher ed organizations such as the AAC&U.

Creating and Using Rubrics  (Yale University) Introduces different styles of rubrics and ways to decide what style to use given your course's learning goals.

Best Practices for Designing Effective Rubrics (Arizona State University) Comprehensive overview of rubric design principles.

The Teaching Knowledge Base

Assessment Criteria and Rubrics

An introduction.

This guide is an introduction to:

  • Writing an assessment brief with clear assessment criteria and rubrics
  • Grading tools available in Turnitin that enable the use of criteria and rubrics in marking.

Clear and explicit assessment criteria and rubrics are meant to increase the transparency of the assessment, develop students into ‘novice assessors’ (Gipps, 1994) and facilitate deep learning. Providing well-designed criteria and rubrics contributes to communicating assessment requirements in a way that is more inclusive for all (including markers), regardless of previous learning experiences or individual differences in language, cultural and educational background. It also facilitates the development of self-judgment skills (Boud & Falchikov, 2007).

  • Assessment brief
  • Assessment criteria
  • Assessment rubric
  • Guidance on how to create rubrics and grading forms
  • Guidance on how to create a rubric in Handin

Terminology Explored

The terms ‘assessment brief’, ‘assessment criteria’ and ‘assessment rubric’, however, are often used interchangeably, which may lead to misunderstandings and affect the design and interpretation of the assessment brief. Therefore, it is important to first clarify these terms:

Assessment Brief

An assessment (assignment) brief refers to the instructions provided to communicate the requirements and expectations of assessment tasks, including the assessment criteria and rubrics, to students. The brief should clearly outline which module learning outcomes will be assessed in the assignment.

NOTE: If you are new to writing learning outcomes, or need a refresher, have a look at Baume’s guide to “Writing and using good learning outcomes”, (2009).  See list of references.

When writing an assessment brief, it may be useful to consider the following questions with regards to your assessment brief:

  • Have you outlined clearly what type of assessment you require students to complete?  For example, instead of “written assessment”, outline clearly what type of written assessment you require from your students; is it a report, a reflective journal, a blog, presentation, etc.  It is also recommended to give a breakdown of the individual tasks that make up the full assessment within the brief, to ensure transparency.
  • Is the purpose of the assessment immediately clear to your students, i.e. why the student is being asked to do the task?  It might seem obvious to you as an academic, but for students new to academia and the subject discipline, it might not be clear.  For example, explain why they have to write a reflective report or a journal and indicate which module learning outcomes are to be assessed in this specific assessment task.
  • Is all the important task information clearly outlined, such as assessment deadlines, word count, criteria and further support and guidance?

Assessment Criteria

Assessment criteria communicate to students the knowledge, skills and understanding (in line with the expected module learning outcomes) that assessors expect students to evidence in any given assessment task. To write a good set of criteria, the focus should be on the characteristics of the learning outcomes that the assignment will evidence, not only the characteristics of the assignment (task) itself, e.g., presentation, written task, etc.

Thus, the criteria outline what we expect from our students (based on learning outcomes); however, they do not in themselves make assumptions about the actual quality or level of achievement (Sadler, 1987: 194), which needs to be refined in the assessment rubric.

When writing an assessment brief, it may be useful to consider the following questions with regards to the criteria that will be applied to assess the assignment:

  • Are your criteria related and aligned with the module and (or) the course learning outcomes?
  • How many criteria will you assess in any particular task?  Consider how realistic and achievable this may be.
  • Are the criteria clear and have you avoided using any terms not clear to students (academic jargon)?
  • Are the criteria and standards (your quality definitions) aligned with the level of the course?  For guidance, the Credit Level Descriptors (SEEC, 2016) and the QAA Subject Benchmarks and Framework for Higher Education Qualifications are useful starting points.

Assessment Rubric

The assessment rubric forms part of a set of criteria and refers specifically to the “levels of performance quality on the criteria” (Brookhart & Chen, 2015, p. 343).

Generally, rubrics fall into two categories: holistic and analytic. A holistic rubric assesses an assignment as a whole and is not broken down into individual assessment criteria. For the purpose of this guidance, the focus will be on the analytic rubric, which provides separate performance descriptions for each criterion.

An assessment rubric is therefore a tool used in the process of assessing student work that usually includes these essential features:

  • Scoring strategy – Can be numerical or qualitative, associated with the levels of mastery (quality definitions). (Shown as SCALE in Turnitin)
  • Quality definitions (levels of mastery) – Specify the levels of achievement / performance in each criterion.

 (Dawson, 2017).

The figure below is an example of the features of a complete rubric, including the assessment criteria.

When writing an assessment brief, it may be useful to consider the following questions with regard to the criteria and their associated rubrics.

  • Does your scoring strategy clearly define and cover the whole grading range?  For example, do you distinguish between the distinctions (70-79%) and 80% and above?
  • Do the words and terms used to indicate the level of mastery clearly enable students to distinguish between the different judgements?  For example, how do you differentiate between work that is outstanding, excellent and good?
  • Is the chosen wording in your rubric too explicit?  It should be explicit but not overly specific, to avoid students adopting a mechanistic approach to your assignment.  For example, instead of stating a minimum number of references, consider referring instead to the effectiveness or quality of the use of literature, or the awareness and critical analysis of supporting literature.

NOTE: For guidance across Coventry University Group on writing criteria and rubrics, follow the links to guidance.

POSTGRADUATE Assessment criteria and rubrics (mode R)

UNDERGRADUATE Assessment criteria and rubrics (mode E)

Developing Criteria and Rubrics within Turnitin

Within Turnitin, depending on the type of assessment, you have a choice of four grading tools:

  • Qualitative Rubric – A rubric that provides feedback but has no numeric scoring; more descriptive than measurable.  This rubric is selected by choosing the ‘0’ symbol at the base of the Rubric window.
  • Standard Rubric – Used for numeric scoring.  Enter scale values for each column (rubric score) and percentages for each criterion row, which combine to equal 100% (see the sketch after this list).  This rubric can calculate and input the overall grade.  This rubric is selected by choosing the % symbol at the base of the Rubric window.
  • Custom Rubric – Add criteria (rows) and descriptive scales; when marking, enter (type) any value directly into each rubric cell.  This rubric will calculate and input the overall grade.  This rubric is selected by choosing the ‘Pencil’ symbol at the base of the Rubric window.
  • Grading form – Can be used with or without numerical scoring.  If used without numerical scoring, it provides more descriptive feedback.  If used with numerical scoring, the scores can be added together to create an overall grade.  Note that grading forms can be used without a ‘paper assignment’ being submitted; for example, they can be used to assess work such as a video submission, work of art, computer program or musical performance.
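To make the Standard Rubric arithmetic concrete, here is a minimal sketch of how a weighted, criterion-by-criterion score can roll up into an overall percentage. The criterion names, weights and four-point scale are hypothetical, and the sketch illustrates the general weighted calculation rather than Turnitin's internal implementation.

```python
# Illustrative weighted-rubric calculation (hypothetical values; not
# Turnitin's internal code). Each criterion row carries a percentage
# weight; weights sum to 100. The marker picks a scale value per row.
SCALE_MAX = 4  # e.g. columns scored 0-4

# criterion -> (weight %, selected scale value)
selections = {
    "Argument":     (40, 3),
    "Evidence":     (35, 4),
    "Presentation": (25, 2),
}

assert sum(weight for weight, _ in selections.values()) == 100

overall = sum(weight * (value / SCALE_MAX)
              for weight, value in selections.values())
print(f"Overall grade: {overall:.1f}%")  # 30.0 + 35.0 + 12.5 = 77.5%
```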

Guidance on how to Create Rubric and Grading Forms

Guidance by Turnitin:

https://help.turnitin.com/feedback-studio/turnitin-website/instructor/rubric-scorecards-and-grading-forms/creating-a-rubric-or-grading-form-during-assignment-creation.htm

University of Kent – Creating and using rubrics and grading form (written guidance):

https://www.kent.ac.uk/elearning/files/turnitin/turnitin-rubrics.pdf

Some Examples to Explore

It is useful to explore some examples in higher education, and the resource developed by UCL on designing generic assessment criteria and rubrics from level 4 to 7 is a good starting point.

Guidance on how to Create Rubric in Handin

Within Handin, depending on the type of assessment, you have a choice of three grading tools (see the list below), as well as the option to use “free-form” grading, which allows you to enter anything in the grade field when grading submissions.

  • None = qualitative
  • Range = quantitative – can choose score from range
  • Fixed = quantitative – one score per level

Guide to Handin: Creating ungraded (“free-form”) assignments

https://aula.zendesk.com/hc/en-us/articles/360053926834

Guide to Handin: Creating rubrics https://aula.zendesk.com/hc/en-us/articles/360017154820-How-can-I-use-Rubrics-for-Assignments-in-Aula-

References and Further Reading

Baume, D. (2009) Writing and using good learning outcomes. Leeds Metropolitan University. ISBN 978-0-9560099-5-1. Leeds Beckett Repository record: http://eprints.leedsbeckett.ac.uk/id/eprint/2837/1/Learning_Outcomes.pdf

Boud, D. & Falchikov, N. (2007) Rethinking Assessment in Higher Education. London: Routledge.

Brookhart, S.M. & Chen, F. (2015) The quality and effectiveness of descriptive rubrics, Educational Review, 67:3, pp.343-368.  http://dx.doi.org/10.1080/00131911.2014.929565

Dawson, P. (2017) Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), pp.347-360. https://doi.org/10.1080/02602938.2015.1111294

Gipps, C.V. (1994) Beyond testing: Towards a theory of educational assessment. Psychology Press.

Sadler, D.R. (1987) Specifying and promulgating achievement standards. Oxford Review of Education, 13(2), pp.191-209.

SEEC (2016) Credit Level Descriptors. Available: http://www.seec.org.uk/wp-content/uploads/2016/07/SEEC-descriptors-2016.pdf

UK QAA Quality Code (2014) Part A – Setting and Maintaining Academic Standards. Available: https://www.qaa.ac.uk/docs/qaa/quality-code/qualifications-frameworks.pdf


How to use the rubric

  • Read through the assignment rubric alongside the assignment task instructions.
  • Make a note of anything that is not clear and ask your lecturers or tutors for clarification.
  • While you are doing your assignment, keep referring to the rubric to make sure you are on track.
  • Before you hand in your assignment, have another look at the rubric to make a judgement of your work and make changes if needed.

How to learn from feedback

When you get your assignment back, it is very tempting to just look at the mark or grade and ignore any written feedback.

Look at the marks on the rubric to understand the feedback given for your assignment. It can sometimes feel challenging to read comments that are critical of your work, especially when you believe that you have put a lot of effort into the assignment. Feedback can be very useful to you as it:

  • Enables you to build on what you have done correctly.
  • Helps you to identify where you went wrong.
  • Identifies where you need to make improvements so that you can do better next time.

If you need to clarify any feedback you have been given, be proactive and contact your lecturer. Most lecturers have office hours where you can see them to discuss any course-related issues. Discuss the feedback with them so that you understand what you might need to improve for your next assignment.

Can I get feedback before I submit my assignment?

Some courses provide an opportunity for peer review or lecturer feedback prior to submission of the assignment. This is a way of getting early feedback so that you can improve the assignment before you hand it in. In most cases you will be guided in this process by your lecturer through your Canvas course page.


Study Skills: Using Marking Criteria to your Advantage


Marking Criteria

What are Marking Criteria?

Marking criteria (also known as marking rubrics) are designed to help students know what is expected of them. They reflect the aspects of the assignment the teachers are specifically looking for when marking your work. You could hand in the most well-researched assignment ever...but if it doesn't meet the requirements set out in the marking criteria, you will not receive a great mark.

Marking criteria allow your teacher to mark all of the assignments consistently and give you clear feedback on where and how you can improve your work.

Marking criteria also allow you to see where your marks will be allocated - so you can spend more time and detail on the parts of your assignment that are worth more marks.

  • Example of marking criteria. Take note of the wording and the allocated marks for each aspect of the assignment:


Center for Innovative Teaching and Learning

Rubrics for Assessment

A rubric is an explicit set of criteria used for assessing a particular type of work or performance (TLT Group, n.d.) and provides more details than a single grade or mark. Rubrics, therefore, will help you grade more objectively.

Have your students ever asked, “Why did you grade me that way?” or stated, “You never told us that we would be graded on grammar!” As a grading tool, rubrics can address these and other issues related to assessment: they reduce grading time; they increase objectivity and reduce subjectivity; they convey timely feedback to students; and they improve students’ ability to include required elements of an assignment (Stevens & Levi, 2005). Grading rubrics can be used to assess a range of activities in any subject area.

Elements of a Rubric

Typically designed as a grid-type structure, a grading rubric includes criteria, levels of performance, scores, and descriptors which become unique assessment tools for any given assignment. The table below illustrates a simple grading rubric with each of the four elements for a history research paper. 

Criteria identify the trait, feature or dimension which is to be measured and include a definition and example to clarify the meaning of each trait being assessed. Each assignment or performance will determine the number of criteria to be scored. Criteria are derived from assignments, checklists, grading sheets or colleagues.

Examples of Criteria for a term paper rubric

  • Introduction
  • Arguments/analysis
  • Grammar and punctuation
  • Internal citations

Levels of performance

Levels of performance are often labeled as adjectives which describe the performance levels. Levels of performance determine the degree of performance which has been met and will provide for consistent and objective assessment and better feedback to students. These levels tell students what they are expected to do. Levels of performance can be used without descriptors but descriptors help in achieving objectivity. Words used for levels of performance could influence a student’s interpretation of performance level (such as superior, moderate, poor or above or below average).

Examples to describe levels of performance

  • Excellent, Good, Fair, Poor
  • Master, Apprentice, Beginner
  • Exemplary, Accomplished, Developing, Beginning, Undeveloped
  • Complete, Incomplete

Scores make up the system of numbers or values used to rate each criterion and often are combined with levels of performance. Begin by asking how many points are needed to adequately describe the range of performance you expect to see in students’ work. Consider the range of possible performance levels.

Example of scores for a rubric

1, 2, 3, 4, 5 or 2, 4, 6, 8

Descriptors

Descriptors are explicit descriptions of the performance that show how the score is derived and what is expected of the students. Descriptors spell out each level (gradation) of performance for each criterion and describe what performance at a particular level looks like. Descriptors describe how students’ work is distinguished from the work of their peers and will help you to distinguish between each student’s work. Descriptors should be detailed enough to differentiate between the different levels and increase the objectivity of the rater.

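To make the four elements concrete, the sketch below expresses a small analytic rubric as plain data, reusing the term-paper criteria, level labels and score values listed above; the descriptor wording is placeholder text for illustration, not a recommended rubric.

```python
# A simple grading rubric expressed as data: criteria, levels of
# performance, scores, and descriptors (placeholder wording only).
LEVELS = {"Excellent": 4, "Good": 3, "Fair": 2, "Poor": 1}  # scores

rubric = {
    "Introduction": {
        "Excellent": "Engaging opening that clearly states the thesis.",
        "Good": "Clear thesis; opening could be more engaging.",
        "Fair": "Thesis present but vague.",
        "Poor": "No identifiable thesis.",
    },
    "Arguments/analysis": {},        # descriptors omitted for brevity
    "Grammar and punctuation": {},
    "Internal citations": {},
}

def total_score(ratings: dict) -> int:
    """Sum the score for the level awarded on each criterion."""
    return sum(LEVELS[level] for level in ratings.values())

print(total_score({
    "Introduction": "Good",
    "Arguments/analysis": "Excellent",
    "Grammar and punctuation": "Fair",
    "Internal citations": "Good",
}))  # -> 12
```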

Developing a Grading Rubric

First, consider using any of a number of existing rubrics available online. Many rubrics can be used “as is.” Or, you could modify a rubric by adding or deleting elements or combining others for one that will suit your needs. Finally, you could create a completely customized rubric using specifically designed rubric software or just by creating a table with the rubric elements. The following steps will help you develop a rubric no matter which option you choose.

  • Select a performance/assignment to be assessed. Begin with a performance or assignment which may be difficult to grade and where you want to reduce subjectivity. Is the performance/assignment an authentic task related to learning goals and/or objectives? Are students replicating meaningful tasks found in the real world? Are you encouraging students to problem solve and apply knowledge? Answer these questions as you begin to develop the criteria for your rubric.
  • List criteria. Begin by brainstorming a list of all criteria, traits or dimensions associated with the task. Reduce the list by chunking similar criteria and eliminating others until you produce a range of appropriate criteria. A rubric designed for formative and diagnostic assessments might have more criteria than those rubrics rating summative performances (Dodge, 2001). Keep the list of criteria manageable and reasonable.
  • Write criteria descriptions. Keep criteria descriptions brief, understandable, and in a logical order for students to follow as they work on the task.
  • Determine level of performance adjectives.  Select words or phrases that will explain what performance looks like at each level, making sure they are discrete enough to show real differences. Levels of performance should match the related criterion.
  • Develop scores. The scores will determine the ranges of performance in numerical value. Make sure the values make sense in terms of the total points possible: What is the difference between getting 10 points versus 100 points versus 1,000 points? The best and worst performance scores are placed at the ends of the continuum and the other scores are placed appropriately in between. It is suggested to start with fewer levels and to distinguish clearly between work that does not meet the criteria. Also, it is difficult to make fine distinctions using qualitative levels such as never/sometimes/usually, limited acceptance/proficient/NA, or poor/fair/good/very good/excellent. How will you make the distinctions?
  • Write the descriptors. As a student is judged to move up the performance continuum, previous level descriptions are considered achieved in subsequent description levels. Therefore, it is not necessary to include “beginning level” descriptors in the same box where new skills are introduced.
  • Evaluate the rubric. As with any instructional tool, evaluate the rubric each time it is used to ensure it matches instructional goals and objectives. Be sure students understand each criterion and how they can use the rubric to their advantage. Consider providing more details about each of the rubric’s areas to further clarify these sections to students. Pilot test new rubrics if possible, review the rubric with a colleague, and solicit students’ feedback for further refinements.

Types of Rubrics

Determining which type of rubric to use depends on what and how you plan to evaluate. There are several types of rubrics including holistic, analytical, general, and task-specific. Each of these will be described below.

Holistic

All criteria are assessed as a single score. Holistic rubrics are good for evaluating overall performance on a task. Because only one score is given, holistic rubrics tend to be easier to score. However, holistic rubrics do not provide detailed information on student performance for each criterion; the levels of performance are treated as a whole.

  • “Use for simple tasks and performances such as reading fluency or response to an essay question . . .
  • Getting a quick snapshot of overall quality or achievement
  • Judging the impact of a product or performance” (Arter & McTighe, 2001, p 21)

Analytical

Each criterion is assessed separately, using different descriptive ratings. Each criterion receives a separate score. Analytical rubrics take more time to score but provide more detailed feedback.

  • “Judging complex performances . . . involving several significant [criteria] . . .
  • Providing more specific information or feedback to students . . .” (Arter & McTighe, 2001, p 22)

Generic

A generic rubric contains criteria that are general across tasks and can be used for similar tasks or performances. Criteria are assessed separately, as in an analytical rubric.

  • “[Use] when students will not all be doing exactly the same task; when students have a choice as to what evidence will be chosen to show competence on a particular skill or product.
  • [Use] when instructors are trying to judge consistently in different course sections” (Arter & McTighe, 2001, p 30)

Task-specific

Assesses a specific task. Unique criteria are assessed separately. However, it may not be possible to account for each and every criterion involved in a particular task, which could mean overlooking a student’s unique solution (Arter & McTighe, 2001).

  • “It’s easier and faster to get consistent scoring
  • [Use] in large-scale and “high-stakes” contexts, such as state-level accountability assessments
  • [Use when] you want to know whether students know particular facts, equations, methods, or procedures” (Arter & McTighe, 2001, p 28) 

Grading rubrics are effective and efficient tools which allow for objective and consistent assessment of a range of performances, assignments, and activities. Rubrics can help clarify your expectations and will show students how to meet them, making students accountable for their performance in an easy-to-follow format. The feedback that students receive through a grading rubric can help them improve their performance on revised or subsequent work. Rubrics can help to rationalize grades when students ask about your method of assessment. Rubrics also allow for consistency in grading for those who team teach the same course, for TAs assigned to the task of grading, and serve as good documentation for accreditation purposes. Several online sources exist which can be used in the creation of customized grading rubrics; a few of these are listed below.

Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, CA: Corwin Press, Inc.

Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.

The Teaching, Learning, and Technology Group (n.d.). Rubrics: Definition, tools, examples, references. http://www.tltgroup.org/resources/flashlight/rubrics.htm

Selected Resources

Dodge, B. (2001). Creating a rubric on a given task. http://webquest.sdsu.edu/rubrics/rubrics.html

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Rubric Builders and Generators

eMints.org (2011). Rubric/scoring guide. http://www.emints.org/webquest/rubric.shtml

General Rubric Generator. http://www.teach-nology.com/web_tools/rubrics/general/

RubiStar (2008). Create rubrics for your project-based learning activities. http://rubistar.4teachers.org/index.php

Creative Commons License

Suggested citation

Northern Illinois University Center for Innovative Teaching and Learning. (2012). Rubrics for assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide


Developing marking criteria

Marking criteria outline the knowledge, skills and application you expect the student to demonstrate at the completion of an assessment task. They should not simply restate the assessment tasks but articulate the learning required to achieve the subject learning outcomes. Developing clear criteria explicitly communicates to the students the elements the task is assessing and what you will prioritise when grading assessments.  Telling students what you value helps them to understand and produce what’s needed but also shares the responsibility for their learning and assessment.

What you need to do

1. Begin by analysing your assessment task and learning outcomes.

  • What action does the student have to take in the assessment task? Does it match the verb in the learning outcome? E.g. analyse, create, describe, identify. What’s the level of thinking you are expecting from the student?
  • What is the content area, concept, theory or knowledge being assessed in the learning task and learning outcome?
  • What is the context of the assessment? What should the content area relate to?

Once you have a clear idea of the expectations of your learning outcomes and assessment task, identify the 4-5 main observable elements you are looking for in students' responses. These should match the skills, knowledge and application in your learning outcomes and assessment task.

2. Write your marking criteria. Criteria should:

  • Start with a verb to indicate the standard you require. Ensure they are measurable by avoiding terms like appreciates or have knowledge of.
  • Be kept to a manageable number for markers and students.
  • Include what you want students to do (action), know (content) and in what context.
  • Not make any assumptions about actual quality (e.g. effective, satisfactory). These should be included in your performance standards.
  • Be explicit and easy to understand and include only one element related to the learning outcome. Split different ideas into separate criteria.
  • Align with the subject learning outcome. Using similar words to the subject learning outcome can support students to clearly see a link and avoid confusion. Aligning to the learning outcomes ensures that if you change the task, you do not have to change the rubric.

The example below shows the colour coded alignment of the component parts of the learning outcome and the criterion. The green text is what you want students to ‘do’, the red text is what you want students to ‘know’, and the blue text is the ‘context’ of that knowledge.

'Critical analysis' is what you want students to do, 'application' is the context of that knowledge, and 'political behaviour including the strategies and tactics employed' is what you want students to know.

3. Once you have written your criteria, take the time to review them and ensure:

  • The criteria describe what is important for students to demonstrate.
  • You have used clear language from the learning outcomes in your criteria.
  • They can be measured and assessed.
  • Your existing criteria align to the subject learning outcomes and task.
  • You're not assessing anything that is outside the scope of the learning outcomes or that is potentially irrelevant to a student's achievement in this subject.
  • You haven't used any value descriptions, such as 'adequate' or 'satisfactory' because these are subjective terms and do not clearly describe what is valued in the criteria.
  • They do not overlap with other criteria.

This PowerPoint on writing criteria and standards may help.

Sources to assist you in developing criteria could include:

  • Accreditation requirements
  • The level of the subject (AQF)
  • The research literature
  • Peak bodies (Professional Associations)
  • Your own professional experience
  • Your knowledge of the topic & students
  • Example rubrics for similar disciplines or assessment type.

Once you've written your criteria, the next step is to develop the performance standards.


Academic Quality and Policy Office


15. Marking Criteria and Scales

15.1   Marking criteria are designed to help students know what is expected of them. Marking criteria differ from model answers and more prescriptive marking schemes which assign a fixed proportion of the assessment mark to particular knowledge, understanding and/or skills. The glossary  provides definitions for: marking criteria, marking scheme and model answer.

15.2   Where there is more than one marker for a particular assessment task, schools should take steps to ensure consistency of marking. Programme specific assessment criteria must be precise enough to ensure consistency of marking across candidates and markers, compatible with a proper exercise of academic judgment on the part of individual markers . 

15.3   Markers are encouraged to use pro forma in order to show how they have arrived at their decision. Comments provided on pro forma should help candidates, internal markers and moderators and external examiners to understand why a particular mark has been awarded.  Schools should agree, in advance of the assessment, whether internal moderators have access to the pro forma / mark sheets completed by the first marker before or after they mark a candidate’s work.

15.4   Detailed marking criteria for assessed group work, the assessment of class presentations, and self/peer (student) assessment must be established and made available to students and examiners.

15.5   In respect of group work, it is often desirable to award both a group and an individual mark, to ensure individuals’ contributions to the task are acknowledged. The weighting of the group and individual marks and how the marks are combined should be set out in the unit specification.

University generic marking criteria

15.6   The common University generic marking criteria, set out in table 1, represent levels of attainment covering levels 4-7 of study. Establishing and applying criteria for assessment at level 8 should be managed by the school that owns the associated programme, in liaison with the faculty. A new level-specific University generic marking criteria (UoB only) has been agreed for introduction from 2024/25.

15.7   The common marking criteria are designed to be used for an individual piece of assessed student work. The descriptors give broad comparability of standards by level of study across all programmes as well as level of performance across the University. They reflect the QAA Framework for Higher Education Qualifications but need to be benchmarked against subject specific criteria at the programme level.

15.8   Faculties, with their constituent schools, must establish appropriately specific and detailed marking criteria which are congruent with the University-level criteria and, if appropriate, the level of study. All forms of programme-specific marking criteria must be approved by the Faculty .

Marking scales

15.9      Assessment must be marked and returned as an integer using one of the sanctioned marking scales, as follows:

  • 0-100 marking scale
  • 0-20 marking scale

or using a pass/fail marking scheme (see 10.33).

Any mark on the chosen marking scale can be used.

A five-point A-E marking scale is only available for programmes in the School of Education.

Standard setting in marking is permitted in programmes where it is a professional accreditation requirement.

15.10   Schools should utilise the marking scale that is best suited to the form of assessment. This and the marking criteria for the assessment should be established prior to its commencement.

15.11    Where the averaging of different component marks within an assessment or the outcome of two markers creates an assessment mark with a decimal point, markers should reconcile any significant difference in marks and make a deliberate academic decision as to the exact mark on the scale that should be awarded. Otherwise the mark will be rounded to the nearest integer and returned (if on the 0-20 marking scale, then this should take place before converting to a mark on the 0-100 scale).

Exceptions to the sanctioned marking scales

15.12   Highly structured assessments that are scored out of a total number less than 100 may be utilised where each mark can be justified in relation to those marks neighbouring it. In these cases, the mark must be translated onto the 0-100 point scale, mapped against the relevant marking criteria, and students informed of the use of this method in advance of the assessment in the appropriate medium (e.g. on Blackboard).

Reaching the ‘Unit Mark’ (see also Sections 29 and 37 )

15.13    Marks gauged on the 0-20 scale should be translated to a point on the 0-100 scale before entry into the VLE to calculate the overall unit mark for the purposes of progression and classification (see table 2 ).

15.14   The 0-20 point scale is a non-linear ordinal scale; for example, a mark on the 0-20 point scale IS NOT equivalent to a percentage arrived at by multiplying the mark by 5. Table 2 provides an equivalence relationship between the scales to enable the aggregation of marks from different assessment events to provide the overall unit mark which will be a percentage. This is illustrated below for a notional unit.

In this example, the MCQ uses all points on the 0-100 scale whereas all the other assessments use the 0-20 point scale .

To achieve the final unit mark, each component mark is converted to a percentage on the 0-100 scale (using table 2 where the 0-20 scale was used) and combined according to its weighting within the unit.

15.15      The overall unit mark must be expressed as a percentage as the University’s degree classification methodology is based on the percentage scale.
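As an illustration only of how component marks combine into a unit mark, the sketch below aggregates weighted components and rounds the result to an integer percentage. The component names, weightings and the 0-20 to 0-100 conversion values are hypothetical placeholders; the authoritative equivalence between the scales is defined by the University's table 2.

```python
# Illustration only: combining weighted component marks into a unit mark.
# CONVERT_0_20 is a hypothetical placeholder -- the real equivalence
# between the 0-20 and 0-100 scales is defined by table 2 (the 0-20
# scale is ordinal, so a mark is NOT simply multiplied by 5).
CONVERT_0_20 = {20: 95, 18: 82, 15: 68, 12: 58, 10: 52}

# Hypothetical unit: (component, weight %, mark, scale used)
components = [
    ("MCQ exam",  40, 63, "0-100"),
    ("Essay",     35, 15, "0-20"),
    ("Practical", 25, 12, "0-20"),
]

unit_mark = 0.0
for name, weight, mark, scale in components:
    percentage = mark if scale == "0-100" else CONVERT_0_20[mark]
    unit_mark += weight / 100 * percentage

# The overall unit mark is expressed as a percentage, rounded to an integer.
print(round(unit_mark))
```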

15.16       The final programme or taught component mark will be calculated by applying the agreed algorithm to the unit marks (see sections 32 and 39 ).

  TABLE 1:   Generic Marking Criteria mapped against the three marking scales

  TABLE 2: Relationship between the three marking scales


Marking criteria

Deciding on the model for setting marking criteria can depend upon the intended learning outcomes of the course and the type of assessment task. There are two main ways to provide marking criteria - marking guides and rubrics - each of which comes in a range of formats. The choice of using a marking guide or a rubric to present your marking criteria will depend on the type of assessment task designed, the intended learning outcomes being demonstrated and the learning technologies used. In its simplest form, a marking guide provides broad outlines for success and allocates a range of marks for each component, while a simple rubric provides specific outlines and examples of what is expected for success and allocates specific marks. There is no preference for either method (and sometimes you may use a combination of both) - they can both be done well, and poorly.

Regardless of which method you use, the purpose of marking criteria is to provide students with instructions on what it is that you are asking them to demonstrate. So teaching staff who are marking the assessment need to have a clear understanding of what the students have been asked to demonstrate in order to make a judgement of success. The language used within the criteria needs to be clear, concise and within the levels of learning expected.

Marking guides

A marking guide is a means of communicating broad expectations for an assignment, providing feedback on works in progress, and grading final products. This marking scheme articulates the expectations for an assignment by listing the criteria or elements and describing the various levels of quality from excellent to poor. Students receive a list of expectations required for each component of the task, within a range. A marking guide differs from a rubric in that each criterion is given a range, not a specific point value. For example: Excellent 8-10, Good 5-7, Poor 2-4, Unsatisfactory 0-1.

It is worth noting that depending upon the learning technology used for assessment submission and/or marking, the structure of the marking guide may differ. It is important that the technology tool chosen matches the purpose of the assessment task. Please visit the technologies to enhance assessment webpage to explore what Federation University supports.

Rubrics

A rubric is a means of communicating specific expectations for an assignment, providing focused feedback on works in progress, and grading final products. This marking scheme articulates the expectations for an assignment by listing the specific criteria or elements and describing the various levels of quality from excellent to poor. Students receive a comprehensive list of expectations required for each component of the task. A rubric differs from a marking guide in that each criterion is usually given a specific point value, not a range. For example: Excellent 5, Substantial 4, Moderate 3, Minimal 2, Poor 1, Unsatisfactory 0.
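A minimal sketch of the structural difference: a marking guide maps a band of marks to each quality label, while a rubric attaches one fixed point value to each level. The band boundaries and point values simply echo the examples in the two paragraphs above; the helper name is illustrative.

```python
# Marking guide: each quality label covers a RANGE of marks per criterion.
MARKING_GUIDE_BANDS = {
    "Excellent": range(8, 11),      # 8-10
    "Good": range(5, 8),            # 5-7
    "Poor": range(2, 5),            # 2-4
    "Unsatisfactory": range(0, 2),  # 0-1
}

# Rubric: each level carries ONE fixed point value per criterion.
RUBRIC_POINTS = {
    "Excellent": 5, "Substantial": 4, "Moderate": 3,
    "Minimal": 2, "Poor": 1, "Unsatisfactory": 0,
}

def guide_band(mark: int) -> str:
    """Return the marking-guide band a criterion mark falls into."""
    for label, band in MARKING_GUIDE_BANDS.items():
        if mark in band:
            return label
    raise ValueError(f"mark {mark} is outside the guide's range")

print(guide_band(6))              # -> Good
print(RUBRIC_POINTS["Moderate"])  # -> 3
```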

Rubrics are often used to grade student work but more importantly, they also have the role of teaching as well as evaluating. When used as part of a formative, student-centred approach to assessment, rubrics have the potential to help students develop understanding and skill, as well as make dependable judgments about the quality of their own work. Students should be able to use rubrics the same way that teachers use them—to clarify the standards for a quality performance, and to guide ongoing feedback about progress toward those standards.

Creating rubrics

Whilst the advantages of using rubrics are evident, they can be quite time-consuming to develop initially. Before you get started, view the RubiStar website developed by the University of Kansas to assist you in creating quality rubrics. It provides templates for many common assessment tasks, giving you a foundation on which to build the marking criteria for your specific assessment task.

It is worth noting that depending upon the learning technology used for assessment submission and/or marking, the structure of the rubric may differ. It is important that the technology tool chosen matches the purpose of the assessment task. Please visit technologies to enhance assessment web page to explore what Federation University supports.

Federation University Learning and Teaching website

  • Teaching Practice - Technologies to enhance assessment
  • University of Kansas – Rubistar website

Professional Learning Modules – Online | self-paced. Access the following strategies.

  • Introduction to assessment principles (30 min)
  • Importance of effective marking criteria (15 min)
  • Introduction to simple rubrics (15 min)
  • Introduction to simple marking guides (15 min)
  • Contact your Learning Designer via the CAD job portal to assist in matching the right type of marking criteria with your assessment task, and exploring the technology tools that may enhance the assessment.
  • Contact your Learning Skills Advisor to assist you with improving the clarity and expression of marking criteria.


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks equally important, or are some more important than others?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric considers all the criteria (such as clarity, organization, and mechanics) together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.
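As a rough sketch only, the table structure above can be reduced to per-criterion scores that are combined into a total; the optional weights illustrate the weighting advantage noted below. The criterion names, weights, and the 0-4 scale are assumptions for the example, not from any real rubric.

    # Sketch: an analytic rubric scores each criterion individually (0-4 here)
    # and can weight criteria to reflect their relative importance.
    WEIGHTS = {
        "Thesis and argument": 0.40,
        "Use of evidence": 0.35,
        "Mechanics": 0.25,
    }

    def weighted_percentage(scores, max_level=4):
        """Combine per-criterion scores into a weighted percentage."""
        weighted = sum(WEIGHTS[criterion] * score for criterion, score in scores.items())
        return 100 * weighted / max_level

    # A paper scored 4, 3 and 2 on the three criteria comes out at 78.75%.
    print(weighted_percentage({"Thesis and argument": 4, "Use of evidence": 3, "Mechanics": 2}))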

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.
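A single-point rubric can be pictured as one descriptor per criterion with open comment space on either side. The criteria and descriptors in this short sketch are invented purely to show the shape.

    # Sketch of a single-point rubric: only the "proficient" level is described;
    # the two comment fields are left blank for the grader to fill in.
    single_point_rubric = [
        {
            "criterion": "Organisation",
            "proficient": "Ideas follow a logical order with clear transitions.",
            "areas_for_improvement": "",
            "evidence_of_exceeding": "",
        },
        {
            "criterion": "Use of sources",
            "proficient": "Claims are supported by relevant, correctly cited sources.",
            "areas_for_improvement": "",
            "evidence_of_exceeding": "",
        },
    ]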

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove the focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google “rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point, but work through the remaining steps to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Test each draft criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions to write.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT can be useful for creating a first draft of a rubric. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of levels of performance you want. Use the results as a starting point, and adjust the descriptions as needed.
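As a rough illustration only (the assignment text, criteria, and wording below are invented, not a recommended or official template), a prompt might bundle the assignment description, the criteria you care about, and the number of performance levels:

    # Illustrative only: assembling a rubric-drafting prompt for an AI assistant.
    assignment = "Write a 1,500-word persuasive essay on a contemporary policy issue."
    criteria = ["thesis and argument", "use of evidence", "organisation", "mechanics"]
    levels = 4

    prompt = (
        f"Draft an analytic rubric for this assignment: {assignment} "
        f"Use these criteria: {', '.join(criteria)}. "
        f"Describe {levels} levels of performance for each criterion, "
        "using observable, parallel language."
    )
    print(prompt)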

Building a rubric from scratch

For a single-point rubric, write the description of what counts as “proficient” (i.e., B-level work). You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If you describe an indicator at one level, it will need to be described at each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric, do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric.

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness, and revise accordingly. As you revise, the following tips can help:

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language . Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students . Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Examples: analytic rubric for a final paper, holistic rubric for a final paper, single-point rubric, and more:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. Sanger & N. Gleason (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

University of Pittsburgh

University Center for Teaching and Learning

How to Create and Use Rubrics for Assessment in Canvas


What is a Rubric?

Rubrics are used as grading criteria for students and can be added to assignments, quizzes, and graded discussions. If you are importing rubrics from Blackboard, please note that the ratings will be flipped: Blackboard orders rubric ratings from lowest to highest points, left to right, whereas Canvas orders them from highest to lowest, left to right. Unfortunately, there is no quick way to reverse the order, so you may need to edit the rubric manually to reflect the assessment accurately.

Notes: Rubrics cannot be edited once they have been added to more than one assignment. When you delete a rubric it will remove the rubric from all associated assignments in the course and delete any existing associated assessments.

How to Create a Rubric

1) Click on Rubrics in your Course Navigation Menu.


2) To add a rubric, click Add Rubric . To edit an existing rubric, click on it as it appears under Course Rubrics.


3) Enter the rubric details:

  • Title – can be anything, but should usually be something associated with the assignment so you can easily find it later.
  • Criteria – the things on which your students’ grades will be determined. For example, if the rubric were for an art project, criteria could include creativity, use of art materials, or relevance to the prompt.


  • Points – Rubric ratings default to 5 points. To adjust the total point value, enter the number of points in the Points field. The first rating (full marks) updates to the new total point value and the rest of the ratings adjust appropriately.
  • +Criterion – Adds another criterion.
  • Find Outcome – Allows you to add an outcome you have created previously as a criterion. If you want to use the outcome for scoring, click the checkbox next to Use this criterion for scoring. If the checkbox is not selected, the point value will not be factored into the rubric and will not be displayed after the rubric is updated. Click the Import button and then click the OK button in the popup window to confirm.


4) Click Create Rubric .

How to Add a Rubric to Assignments, Quizzes, and Discussion Boards

This guide uses Assignments as the example. Please note that after Step 2, the process is the same for adding a rubric to assignments, quizzes, and discussions.

1) Click on Assignments, Quizzes, or Discussions in your Course menu.


2) Click on the name of the assignment, quiz, or discussion board to open it.


3) If adding a rubric to an assignment, click the Add Rubric button. If adding to a quiz, click the three-dotted Options button and select Show Rubric.

4) To choose an existing rubric, click on Find a Rubric .

In the first column, select the course or account. In the second column, locate and click the name of the rubric.


5) Click the Use This Rubric button.


6) To edit the rubric, click the pencil icon. To find a new rubric, click the magnifying glass icon. To remove the rubric from the current assignment, click the trash can icon.


Creating a New Rubric:

This is very similar to creating a rubric from scratch; however, when attaching a rubric to an assessment, more options are available:

  • I’ll write free-form comments – If this option is selected, no ratings are used to assess the student and criterion values are assigned manually.
  • Remove points from rubric – If this option is selected, no points are associated with the rubric, but students can still be rated using the rubric criterion.
  • Don’t post Outcomes results – students will be able to see rubric and outcome results in the Grades and submission details pages, but results will not be posted to the Learning Mastery Gradebook.
  • Use this rubric for assignment grading – if this option is selected, you can use the rubric for grading in SpeedGrader. This option appears only for assignments and discussion boards, not quizzes.
  • Hide score total for assessment results – students can still see the point values for each criterion, but the total score will not be shown at the bottom of the rubric. This option is only available if the rubric is not used for grading.


Note: You can only reach these options if you create the assessment first and then add the rubric afterwards.

Rubrics Help for Instructors

  • How do I align an outcome with a rubric in a course?
  • How do I add a rubric in a course?
  • How do I add a rubric to a quiz?
  • How do I manage rubrics in a course?
  • How do I add a rubric to a graded discussion?
  • How do I add a rubric to an assignment?

Rubrics Help for Students

  • How do I view the rubric for a quiz?
  • How do I view the rubric for my assignment?
  • How do I view the rubric for my external tool assignment?
  • How do I view rubric results for my assignment?
  • How do I view the rubric for my graded discussion?

University of Exeter

Chapter 5 - Marking

5.1 Principles for Marking Assessments
5.2 Pass Mark for Individual Modules
5.3 Anonymity
5.4 Viva Voce
5.5 Moderation and Sampling
5.6 Generic Mark Scheme
5.7 Marking Criteria
5.8 Scaling of Marks
5.9 Marking the Work of Students with ILPs or Diagnosed with Specific Learning Difficulties (where competence of language is not being assessed)
5.10 Marking Criteria for Group Work Assignments

  • All marking must be based on the quality of students’ work and be free from bias or prejudice ( see 5.3 ).
  • No module’s marking should rely solely on the judgement of one marker.
  • All summative assessment must be subject to moderation.
  • Where the anonymity of candidates cannot be assured independent double marking must  be applied to a sample.
  • All Faculties (or delegated Schools)  must publish marking criteria for all assessment.
  • The relevant marking criteria must be applied consistently.
  • It must be explicit that the responsibility for proofreading students' work lies with the student.
  • Staff must signpost students to appropriate proofreading support and tools, such as those provided by Study Zone. 
  • Staff must be willing to use the whole range of marks when marking assessment(s). Where a marking scheme is introduced which does not use the full scale of marks this must be clearly communicated to students.
  • The pass mark for individual modules at Levels 3-6 is 40%. Marks below 40% constitute failure.
  • The pass mark for individual modules at Level 7 is 50%. Marks below 50% constitute failure.
  • Where a student on an undergraduate programme is taking a module at Level 7 the module must be marked according to the normal postgraduate marking criteria for the module and the marking scheme for postgraduate modules.
  • Where a student on a postgraduate programme is taking a module at Level 6 or below, the module must be marked according to the normal undergraduate marking criteria for the module and the marking scheme for undergraduate modules. The mark obtained must be used in the calculation of the credit-weighted mean for the programme as a whole (i.e., there must be no ‘scaling' of marks).
  • The most effective means of demonstrating that marking is free from bias or prejudice is to ensure that students’ assessment is anonymous. All assessments should be anonymous. However, the University recognises that this is not always practically possible. Where assessment cannot be anonymous, Faculties (or delegated Schools)  must ensure, and be able to demonstrate, that marking is fair, reliable, consistent and transparent. Students must be fully informed of the marking criteria and processes.
  • The viva voce provides the marking team with a means of determining whether work submitted by a candidate is their work. This is achieved by assessing the thoroughness of the candidate’s understanding of the submission, and the candidate’s ability to explain and justify its contents.
  • Marking and moderation are conducted anonymously in line with the University’s guidelines and therefore a student would only be identified once it had been determined that a viva voce is required.
  • This process will allow a member of the marking team together with a senior academic (e.g. Head of Department, Chair of APAC or Director of Education) to interview a student to discuss the submitted work to establish the authenticity of the material.
  • The implementation of a viva process will allow concerns to be appropriately measured and evidenced before a decision is made as to whether or not these concerns should be pursued through the University’s academic conduct procedures. The Viva Voce process is outlined in Annex L.
  • Moderation is the process used to assure that assessment outcomes are fair and reliable, and that assessment criteria have been applied consistently. Any moderation method must be proportionate to ensure fairness, reliability and consistent application of the criteria.
  • Independent double marking: where a piece of work is marked by two markers independently, who agree a final mark for the assessment. Neither marker is aware of the other’s mark when formulating their own mark.
  • Double open marking: where a piece of work is marked by two markers, who agree a final mark for the assessment.
  • Calibration of marking within teams of multiple markers, in advance of team members marking their own batch of assessments. Calibration involves the scrutiny of a sample of submissions being graded by all markers collectively. The sample should be sufficient in number to ensure the grading approach being taken by all markers is consistent. Following calibration processes, the subsequent moderation processes may be limited to scrutinising (i) submissions that are borderline (e.g. within 1% of a class boundary), and (ii) other submissions considered to be in need of moderation by the module lead.
  • Check marking: where an assessment is read by a second marker to determine whether the mark awarded by the first marker is appropriate.
  • Where double marking or check marking is applied as the method of moderation the marking team should agree a final set of marks for the whole cohort and if they cannot agree a final mark, a third marker should be used to adjudicate an agreed mark.
  • These processes should also identify the marking patterns of individual markers to facilitate comparisons and identify inconsistencies.
  • Where model answers are agreed by staff marking assessments, it is allowable for these assessments not to be moderated. However, the model answer must be reviewed and agreed by at least two markers in advance.
  • The sample must  be representative and cover the full range of marks;
  • The sample must be sufficient to assure the APAC and External Examiner(s) that the requisite academic standards have been maintained, and that all marking is fair, reliable and valid (i.e. free from bias or prejudice, based on the quality of students’ work, and consistent with the relevant marking criteria);
  • APACs and External Examiners must be informed of the methodology (or methodologies) by which assessments are selected for internal moderation, so they can advise on its sufficiency and appropriateness.
  • The sample should not be the same sample as used in external moderation;
  • The selected sample should be proportionate to the risk to standards posed by each module/assessment, bearing in mind the credit-weighting of the assessment, the experience of the primary marker, and historic trends, such as whether the module or assessment are new or have recently changed in structure/format, or if marks have previously had to be adjusted as a result of moderation/scaling;
  • Where responsibility for assessing full submissions (as opposed to selected sections/questions) is distributed amongst a team of multiple markers, marking calibration processes should occur in advance of each marker marking their batch of assessments, in the following circumstances: a new team (or team member) is undertaking the marking, the form of assessment is new, and/or the module is new (or significantly revised);
  • Where possible, the sample should include at least one item marked according to the marking guidelines for specific learning difficulties.
  • Where a cohort includes a submission(s) made via an alternative form of assessment (as per the Inclusive Practice within Academic Study policy), the sample should include at least one alternative assessment item.
  • For modules, where there is only one primary marker, at least XX% or a minimum of XX (whichever is greater) of the submitted assessments, but to a maximum of XX submissions in total. (E.g. (a) at least 10% or a minimum of 10 (whichever is greater) of the submitted assessments should be moderated, but to a maximum of 25 submissions in total; or (b) at least 5% or a minimum of 5 (whichever is greater) of the submitted assessments, but to a maximum of 15 submissions in total .)
  • For modules, where multiple markers are used to mark a batch of assessments, sampling  should  be undertaken as above with regard to each marker rather than with regard to the whole batch of assessments. (This does not apply (i) where each member of the marking team takes responsibility for marking specific sections/questions: in that situation standard sampling should be undertaken as above, or (ii) where marking calibration processes are undertaken in advance of team members marking their own batch of assessments.)
  • The University has a generic mark scheme (that draws on QAA 1 and SEEC 2 guidelines) that characterises the level of complexity, demand and relative autonomy expected of students at each Level of the curriculum (as detailed in the Credit and Qualifications Framework ). The generic mark scheme can be found here .
  • All marking criteria must be consistent with the University's published percentage boundaries (see Chapter 9 ) for degree classification.
  • To ensure consistency all summative marking processes should be numerical, unless an alternative scheme has been approved by the Pro-Vice Chancellor and Executive Dean (PVC) and has been clearly communicated to students.
  • External Examiners must have an opportunity to comment on the assessment criteria and model answers for all summative assessments.
  • The purpose of scaling is to rectify anomalies in module and/or component mark distributions that arise from unanticipated circumstances and should be used in exceptional circumstances only. Hence, the assessment criteria and practices for any module that has its marks scaled should be reviewed, in consultation with the module/ programme External Examiner, in order to reduce the chance that scaling will be necessary in subsequent years. Guidance for scaling is set out in Annex G . The guidance should be read in the context of this Handbook, and the provisions of this Handbook remain in force.
  • APACs will be provided with descriptive module statistics (mean, median, and standard deviation) based on a comprehensive reference dataset of the student cohort performance in the current academic year and comparable historic mean module marks from the three previous academic years, where they are available. Historic mean module marks from academic years that have been designated as Exceptional Years will be excluded. APACs will undertake this comparison at module level, noting that scaling will normally be undertaken at module level rather than at individual component level.
  • APACs will then consider the application of appropriate adjustments to correct any statistically significant deviation. For example, should a module show a distribution of student attainment significantly below that of previous year groups, then the APAC will consider scaling the cohort results to make them comparable with the attainment in previous years. Where a module has been run for the first time in the current academic year, an appropriate composite historic mean based on appropriate cognate module(s) will be used for the comparison or reference made to programme level and/or year group metrics.
  • The raw marks, together with the rationale under which they were awarded, must always be made available to the Assessment, Progression and Awarding Committee.
  • Scaling must not unfairly benefit or disadvantage a subset of students (e.g. failures). This means that any scaling function applied to a set of marks must be monotonically increasing, i.e. it must not reverse the rank-order of any pair of students. The definition of any scaling function used (its domain) must encompass the full range of raw marks from 0 to 100%. For example, 'Add 3 marks to all students' or 'Multiply all marks by a factor of 0.96' are both valid scaling functions. 'Add 4 marks to all failures and leave the rest unchanged' is not acceptable because it would cause a student whose raw mark was 39 (a fail) to leapfrog a student who got 41 (a pass). (A brief sketch illustrating this rank-order check follows this list.)
  • External Examiners must always be consulted about the process.
  • All decisions  must be clearly recorded in the minutes of the Assessment, Progression and Awarding Committee (APAC), and must include details of the rationale for scaling, any noted objections (and any responses to these objections) and the impact on marks.
  • The system used to identify modules as potential candidates for scaling must be transparent.
  • For guidance on a range of accessibility issues, including marking guidelines, refer to the Services' Advice for Staff website.
  • Marking criteria for group work assignments must include whether the marks will be allocated individually or to the group, and how they will be allocated.
  • If peer assessment is used, the criteria for this should also be included, as well as how this will contribute to the overall mark. Please also see further guidance in Chapter 10 of the Learning Teaching Support Handbook: Peer and Self Assessment in Student Work: Principles and Criteria.
  • Further information on group work assignments and strategies for Learning and Teaching which provide an inclusive experience for all students is provided in the Education Toolkit: https://universityofexeteruk.sharepoint.com/sites/EducationToolkit/SitePages/Guidance-for-Assessed-Group-Work.aspx
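To illustrate the rank-order requirement for scaling described above, here is a brief sketch in Python using the example adjustments quoted in the text. It is illustrative only and is not part of any University procedure: a valid scaling function must never reverse the order of any pair of raw marks across the 0-100 range.

    # Sketch: checking that a proposed scaling function is monotonically
    # increasing (never reverses rank order) over the full 0-100 mark range.
    def add_three(mark):
        return mark + 3                         # "Add 3 marks to all students" - valid

    def multiply_by_factor(mark):
        return mark * 0.96                      # "Multiply all marks by a factor of 0.96" - valid

    def add_four_to_failures(mark):
        return mark + 4 if mark < 40 else mark  # the invalid example from the text

    def preserves_rank_order(scale):
        marks = [m / 2 for m in range(0, 201)]  # 0, 0.5, ..., 100
        scaled = [scale(m) for m in marks]
        return all(a <= b for a, b in zip(scaled, scaled[1:]))

    print(preserves_rank_order(add_three))             # True
    print(preserves_rank_order(multiply_by_factor))    # True
    print(preserves_rank_order(add_four_to_failures))  # False: 39 -> 43 leapfrogs 41 -> 41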

1 Quality Assurance Agency frameworks for higher education qualifications and credit

2 Southern England Consortium for Credit Accumulation and Transfer 

Last reviewed October 2023

Updated January 2024


The University of Edinburgh

Reflection Toolkit

Assessment criteria

Assessment criteria are critical when posing a reflective assignment. This page provides example criteria as well as questions to help you identify what you are looking for.

Criteria are your best ally for ensuring that both you and your students know exactly what you want from your reflective assignment or activity. Therefore, the first thing you should do is ask yourself the following:

  • Why am I asking students to reflect (what do I hope they gain from it)?
  • How does this assignment/activity relate to my learning outcomes?
  • Is there clear alignment between this assignment/activity and the course’s learning outcomes?
  • What does doing well on this assignment/activity look like?
  • What will not be sufficient to pass?
  • What questions/aspects must be addressed for this work to be acceptable?
  • What are the different dimensions that make up the assignment/activity? (For example clarity, critical thinking, evidence, etc.)

These questions will help you think about what your criteria should look like. While you can reuse criteria from one reflective task to the next, you should ensure that they are applicable and updated for the specific reflective activities/assignments.

Example criteria are provided below. Use what seems helpful, but do not apply them without considering the relevance to your specific reflective activity or assignment.

What does ‘good’ look like?

Developing an idea of what you want a ‘good reflection’ to look like will help you decide your criteria. You can begin defining your expectations by asking yourself:

  • What do I think good reflection looks like?
  • What will students need to demonstrate in the reflective task to make it helpful for achieving my learning outcomes?
  • To what extent do I want students’ reflections to be similar to my idea of good reflection? Is it enough they just reflect?

Consider setting the criteria in discussion with colleagues, tutors, or students

Given the sometimes challenging nature of ensuring a fair and consistent assessment of reflective work, it can be beneficial to include other people when setting assessment criteria. This can help  ensure that all the nuances of your assessment are made clear and explicit.

This can be done in preparation for the course/initiative with colleagues or tutors – especially if they are involved in the marking.

If you have time, it can also be done with the students/reflectors being marked. Doing this ensures that everyone understands what the reflective assessment is supposed to look like. This can be time consuming and may require that you finalise the criteria and assessment questions during the semester.

Example assessment criteria for reflection

These criteria are only examples; please make sure you adapt them to your needs. Once you choose a criterion, you should write your own explanation of what it means in relation to your specific assessment. Make the criteria and explanations available to your students prior to completion of the assignment or activity.

For each criterion, a few comments are made on where and why it is relevant. Some of them will be general with clear overlaps, while others are very specific.

Where to next

As highlighted in other places in the Reflection Toolkit, when posing reflective tasks to students it is important to know your assessment criteria. If you are marking the assessment, an assessment rubric becomes particularly important and should be provided to students to guide them in their reflective assignments. Therefore, the next place that might inform your criteria is rubrics.

Assessment rubrics  (within the Facilitators’ Toolkit)

Jones, S. (n.d.) Using reflection for assessment . Office of Service Learning, IUPUI. (link to PDF on external site)

Moon, J.A (2006) Learning Journals: A handbook for reflective practice and professional development. Routledge, London.

Thompson, S., Thompson N. (2008). The critically reflective practitioner. Palgrave Macmillan, New York.


Academic English UK


Academic Criteria

Marking criteria are the standards of judgement for assignments. They are often divided into sections with explicit definitions of the quality expected at different levels of judgement.


Marking criteria

  • It is important for students to see the marking criteria in advance so that they can see clearly how their work will be judged.
  • Students will be able to understand the marks they have been given.
  • Marking criteria provide a valuable tool for feedback.
  • Good marking criteria create a clear standard shared by markers.
  • Marking is reliable and measures what the assessment is intended to measure.
  • Marking criteria align with module and programme learning outcomes.

Academic English UK Criteria

Our criteria focus on clear marking sections, e.g. Writing (Task, Organisation, Language), Seminar (Language, Fluency & Pronunciation, Communication), Presentation (Language, Fluency & Pronunciation, Presentation & Engagement) and E-Portfolio Presentation (Task Fulfilment, Language, Fluency & Pronunciation, Presentation & Engagement). We have tried to keep the criteria simple yet effective in assessing students’ work. Our criteria are Word documents, so you can make changes where necessary.

Writing Criteria

Essay writing criteria x 2 (updated 2023).

There are two writing criteria in this download: a basic marking criteria sheet for students’ general writing, and a second sheet that includes the use of sources. Both are divided into three sections, Task (40%), Organisation (30%) and Language (30%), and have five grade levels, A-F. Each is a Word document, so you can add to or change any part of the rubric to suit your writing test. Example | Level: B1/B2/C1 | TEACHER MEMBERSHIP / INSTITUTIONAL MEMBERSHIP


Poster Writing Criteria

Poster writing criteria (new 2023).

This poster criteria sheet is similar to the essay writing criteria but includes the use of images. It is divided into three sections, Task (40%), Organisation (30%) and Language (30%), and has five grade levels, A-F. It is a Word document, so you can add to or change any part of the rubric to suit your writing test. Example | Level: B1/B2/C1 | TEACHER MEMBERSHIP / INSTITUTIONAL MEMBERSHIP

Seminar Criteria

Seminar speaking criteria x 2 (updated 2023).

This download includes one basic seminar speaking criteria sheet with four key criteria: Language Accuracy (20%), Language Range (20%), Pronunciation (20%) and Communicative Effectiveness (40%), and one seminar criteria sheet that also includes ‘reference to materials’. Example | Level: B1/B2/C1 | TEACHER MEMBERSHIP / INSTITUTIONAL MEMBERSHIP

Presentation Criteria

Presentation speaking criteria (updated 2023).

This is a basic criteria sheet to assess and grade presentation speaking skills. It has three key criteria: Language Accuracy & Language Range (25%), Fluency & Pronunciation (25%) and Presentation & Engagement (50%). Example | Level: B1/B2/C1 | TEACHER MEMBERSHIP / INSTITUTIONAL MEMBERSHIP

E-Portfolio Presentation Criteria

E-portfolio presentation speaking criteria (new 2023).

This is a marking criteria sheet to assess and grade e-portfolio presentation speaking skills. It has four key criteria: Task Fulfilment & Content (40%), Language Accuracy & Language Range (20%), Fluency & Pronunciation (20%) and Presentation & Engagement (20%). Example | Level: B1/B2/C1 | TEACHER MEMBERSHIP / INSTITUTIONAL MEMBERSHIP



Mark This For Me

Your AI assignment companion tool


Get personalised and instant feedback anytime.

Assignment feedback is limited; however, that shouldn't stop you from staying motivated and unlocking new opportunities for success.


A platform for students.

We take pride in being the go-to feedback solution for students from these schools.

Austin Community College District

Edge Hill University

University of Birmingham

The University of Manchester

Henley Highschool

University of Exeter

Otago Polytechnic

Westminster Kingsway College

PLC Armidale

Wiltshire University

Chelmsford College

Instant Feedback? Yes. Free? Sure 👀

An extra pair of virtual eyes, just in case.


Top questions from our FAQs

Student Question

I am scared of using AI because of plagiarism

Our tools do not give you answers. We focus only on giving guidance and feedback on what you have already written. The feedback you receive aims to give you enough, but not everything. For example: "Hey, you did a good job introducing X, but you didn't really critically evaluate X as required in your marking criteria."

Now imagine this example but across your entire assignment.

What happens with the assignments I provide?

The assignments and assignment criteria that you provide are processed through ChatGPT's APIs, owned by OpenAI. The data given through their APIs is not used to train their models. We securely store your data on our servers until you tell us you want your data deleted. You can learn more about how your data is stored and managed by looking at our terms of use and our privacy policy.

Have a question?

Reach out to us here by emailing [email protected] or tag us on our social media.

Check out our plans

  • Personal – free lifetime, 10,000 words. For students needing occasional one-off feedback on assignments. Includes a personal dashboard to view your AI-generated feedback and support for any issues that you raise.
  • £3.99 per month – 250,000 words. For students needing frequent feedback on assignments. Everything in Personal, plus downloading and printing your feedback for offline use, the Wordback feature, and the ability to cancel at any time.

Psst, Did you know?

Students on MTFM have rocked over

words from their marking criteria and assignments—getting instant personalised feedback.

Have you tried properly learning with AI yet?

Over 800+ assignments marked and feedback given without breaking a sweat.

We've helped students across the world to access personalised and instant feedback for their assignments.

Should you give job applicants an assignment during the interview process? Be thoughtful about the ask


Hiring is a time-consuming and expensive endeavor. Companies need candidates who offer the right skills and experience for a given role, and who align with their organization’s vision and mission.

To find the best fit, many companies still lean on a strategy that continues to generate debate: the assignment. Some candidates believe their experience and interviews should give prospective employers enough information to determine whether they will fit the role. Employers have to ask themselves whether they are willing to turn off a strong candidate by asking them to do additional work.

Is the assignment valuable enough to the evaluation process that they cannot move someone forward without it? Sometimes it is: an assignment can help an employer decide between two strong candidates. And if assignments are necessary, how can employers make them fair and equitable for the candidate or candidates?

When done right, assignments help assess practical skills and problem-solving abilities, giving a clearer picture of a candidate beyond what their resume or interview reveals. But employers should be thoughtful about the ask. While it may make sense for roles that require specific technical expertise or creative thinking, it isn’t appropriate for all roles—so assignments should always be given with a clear reason for why they are needed.

Plus, they don’t just benefit the employer. For job seekers, an assignment during the interview process might also help them stand out from the competition. It can also offer a window into what their day-to-day in the new role might entail. Remember that the candidate should be interviewing the company, too. Having a test run of the work they’d be asked to do is a great way to see whether they believe the role is a fit.

However, there is a rift in how people perceive the assignment as part of the interview process. Workers today span many generations, each with unique values and expectations. Whereas older workers often prioritize stability and loyalty, younger millennials and Gen Zers are more focused on flexibility and work well-being, Indeed data shows.

This mindset impacts the amount of time and energy a candidate is willing to devote to each application. After multiple rounds of interviews and prep, taking on an in-depth assignment may feel like a bridge too far—especially if the expectations for the assignment are not clearly communicated ahead of time.

Some candidates are wary of providing free labor to a company that may use their work and not hire them. Hiring managers should be clear about how the work will be used. They may also consider offering compensation if the assignment requires more than a couple hours of someone’s time, or if they plan to use the work without hiring the candidate.

The key for early career candidates in particular is to ensure their time and efforts are respected. This is a win-win for employers: By providing clarity and transparency, they not only elicit the additional information they want from candidates, but they demonstrate that the organization is transparent and fair.

Equity is also imperative: Which candidates are being asked to complete assignments? Is the hiring team consistent in giving out assignments across ages, experience levels, and roles? There should always be a process and clear evaluation criteria in place to ensure fairness.

As we adapt to the rapidly evolving world of work, we must continue to think critically about each step in the hiring process. Candidate assignments can be a valuable tool, but only with appropriate respect for job seekers’ time and contributions.

With the right strategy, we can bridge the gap between generations in the workplace and build a hiring culture that values efficiency, talent, and integrity.

Eoin Driver is the global vice president of talent at Indeed.


The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of  Fortune .


IMAGES

  1. Understanding marking rubrics

    assignment marking criteria

  2. Assignment ONE marking criteria Sem 1 2017

    assignment marking criteria

  3. Rubrics and criteria in Canvas

    assignment marking criteria

  4. PPT

    assignment marking criteria

  5. A Detailed Explanation On The Marking Criteria For Ielts Writing Task 1

    assignment marking criteria

  6. Assessment/Marking Criteria Creator

    assignment marking criteria

VIDEO

  1. Explanation of Marking Criteria Academic Writing Task 1 by British Council Qualified Instructors

  2. WRITING: Task 1 Academic Module 2 Question Types and Marking Criteria

  3. TPTG 620

  4. NIOS

  5. C++ Variables, Literals, an Assignment Statements [2]

  6. All about Aiou assignments//How to mark Aiou assignment online//Aiou assignment marking// Aiou news

COMMENTS

  1. Step 4: Develop Assessment Criteria and Rubrics

    Step 4: Develop Assessment Criteria and Rubrics. Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments than cannot be graded through automation the way that multiple-choice tests can.

  2. Assessment Criteria and Rubrics

    Assessment Rubric. The assessment rubric, forms part of a set of criteria and refers specifically to the "levels of performance quality on the criteria." (Brookhart & Chen, 2015, p. 343) Generally, rubrics are categorised into two categories, holistic and or analytic. A holistic rubric assesses an assignment as a whole and is not broken ...

  3. Understanding marking rubrics

    A rubric is the marking guideline for the assignment and you can use this to get an understanding of what the marker is looking for. An assessment rubric generally tells you about: The criteria - what you need to include in your assignment. The descriptors - a description of the criteria that outlines the levels of performance showing a ...

  4. Study Skills: Using Marking Criteria to your Advantage

    Marking criteria allows your teacher to mark all of the assignments consistently and give you clear feedback on where and how you can improve your work. Marking criteria also allows you to see where your marks will be allocated - so you can spend more time and detail on the parts of your assignment that are worth more marks. Example of a ...

  5. Rubrics for Assessment

    Criteria identify the trait, feature or dimension which is to be measured and include a definition and example to clarify the meaning of each trait being assessed. Each assignment or performance will determine the number of criteria to be scored. Criteria are derived from assignments, checklists, grading sheets or colleagues.

  6. PDF Marking Criteria and Rubrics

    Marking criteria gives the criteria against which the assessment will judged and the explicit standards of performance for each grade category. This means that there may be areas of the marking criteria highlighted to indicate where a student sits in terms of the different ... standards of judgement for the assignment and the rubrics are ...

  7. Marking criteria and rubrics

    When you develop your marking criteria, you need to include all of the elements for criterion-referenced, standards-based assessment. Rubrics are used to assess students work against criteria and standards. Clear rubrics inform students about the elements markers are looking for and the different levels of performance they can achieve.

  8. Developing marking criteria

    Write your marking criteria. Criteria should: Start with a verb to indicate the standard you require. Ensure they are measurable by avoiding terms like appreciates or have knowledge of. Are kept to a manageable number for markers and students. Include what you want students to do (action), know (content) and in what context.

  8. Marking and grading

    An overview of marking and grading: learn how to mark consistently through the appropriate application of criteria and moderation practices, with links to further supporting resources.

  9. Marking Criteria and Scales

    Marking criteria are designed to help students know what is expected of them. They differ from model answers and from more prescriptive marking schemes, which assign a fixed proportion of the assessment mark to particular knowledge, understanding, and/or skills.

  10. Guidelines for Writing Effective Assessment Criteria (PDF)

    Assessment criteria are a way to provide formative feedback throughout a course to support ongoing learning, as well as end-of-term summative assessment. They take the "guess-work" out of grading for instructors and students: well-defined assessment criteria allow instructors to evaluate learners' work more openly and consistently.

  11. Writing Marking Schemes (Rubrics) (PDF)

    Marking schemes (rubrics) are commonly used to improve marking efficiency and provide student feedback. An analytic marking rubric has three parts: criteria, standards, and performance descriptors. Criteria are the properties or characteristics used to judge the quality of the assessment task, and standards define the levels of achievement or performance.

  12. Marking criteria

    A marking guide is a means of communicating broad expectations for an assignment, providing feedback on work in progress, and grading final products. A marking scheme articulates the expectations for an assignment by listing the criteria or elements and describing the various levels of quality from excellent to poor.

  13. Grading Rubric for Writing Assignment (PDF)

    A sample writing rubric presented as level descriptors for each criterion. For example, the main-idea criterion runs from "clearly presents a main idea and supports it throughout the paper", through "there is a main idea supported throughout most of the paper", down to "vague sense of a main idea, weakly supported throughout the paper"; the top level of a structure criterion reads "well-planned and well-thought out; includes title, introduction, statement of main idea, transitions and conclusion".

  14. Rubric Best Practices, Examples, and Templates

    A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Rubrics can be used to assess project-based student work, including essays and group projects.

  15. How to Create and Use Rubrics for Assessment in Canvas

    1) Click on Assignments, Quizzes, or Discussions in your Course menu. 2) Click the name of the assignment, quiz, or discussion board to open it. 3) For an assignment, click the Add Rubric button; for a quiz, click the Options (three-dot) button and select "Show Rubric".

  16. UCL Assessment Criteria Guide (PDF)

    Involve students in developing and/or discussing assessment criteria, and have them use the criteria for self-assessment and/or peer assessment so that they can try them out in an authentic assessment activity. These activities help to develop subject knowledge and awareness of valuable scholarly practices.

  17. Assessment rubrics

    Rubrics allow for quicker and more consistent marking. This can be especially helpful when assessing reflection, which can feel as if it needs to be judged by instinct alone; a well-defined rubric makes the marking of reflection systematic and supports both you and the reflectors.

  18. Marking Criteria for Group Work Assignments

    Marking criteria for group work assignments must state whether the marks will be allocated individually or to the group, and how they will be allocated. If peer assessment is used, the criteria for it should also be included, along with how it will contribute to the overall mark (one common way of handling that arithmetic is sketched after this list).

  19. Marking formal assessment (NESA, NSW Education Standards Authority)

    Marking guidelines should reflect the standards for the course, including outcomes and performance descriptions, and help to provide meaning to the marks awarded for a task.

  20. Assessment criteria

    Assessment criteria are critical when setting a reflective assignment. This page provides example criteria as well as questions to help you identify what you are looking for; criteria are your best ally in ensuring that both you and your students know exactly what you want from a reflective assignment or activity.

  21. Criteria

    Essay writing criteria x 2 (updated 2023): this download contains two sets of writing criteria, one for marking students' general writing and one that also covers the use of sources. Both are divided into three weighted sections, Task (40%), Organisation (30%), and Language (30%), and have five grade levels (the weighting arithmetic is sketched below).
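
To make the arithmetic behind weighted criteria and grade levels concrete, here is a minimal sketch, in Python, of scoring one submission against a small analytic rubric of the kind described above (criteria, standards expressed as levels, and a weight per criterion). It is illustrative only: the criterion names and weights echo the Task/Organisation/Language split in the last entry, while the level labels, point values, and example scores are assumptions, not taken from any of the resources listed.

    # Minimal sketch: weighted scoring against a hypothetical analytic rubric.
    # Criterion names, weights, level points, and the example are illustrative
    # assumptions, not taken from any specific rubric above.

    LEVEL_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

    RUBRIC_WEIGHTS = {
        "Task": 0.40,          # how well the task or question is addressed
        "Organisation": 0.30,  # structure, coherence, paragraphing
        "Language": 0.30,      # accuracy and range of language
    }

    def weighted_mark(levels_awarded: dict) -> float:
        """Return a percentage mark from the level awarded for each criterion."""
        max_points = max(LEVEL_POINTS.values())
        total = 0.0
        for criterion, weight in RUBRIC_WEIGHTS.items():
            points = LEVEL_POINTS[levels_awarded[criterion]]
            total += weight * (points / max_points)
        return round(100 * total, 1)

    # Example: strong on the task, weaker on language.
    # 0.4 * 1.0 + 0.3 * 0.8 + 0.3 * 0.6 = 0.82, i.e. 82.0
    print(weighted_mark({"Task": "A", "Organisation": "B", "Language": "C"}))

The same structure extends to more criteria or a different level scale; the key design choice is that the weights sum to 1.0, so the weighted level scores read directly as a percentage.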


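The group-work entry above notes that marking criteria must state how peer assessment contributes to the overall mark, but none of these resources prescribes a particular method. One common approach, similar in spirit to WebPA-style peer moderation and shown here purely as a hypothetical sketch, scales the shared group mark by each member's share of the peer ratings:

    # Hypothetical sketch of moderating a shared group mark with peer ratings.
    # The multiplicative-factor method and the example numbers are assumptions,
    # not a procedure taken from any resource above.

    def moderated_marks(group_mark: float, peer_ratings: dict) -> dict:
        """Scale the group mark by each member's share of the peer ratings.

        peer_ratings maps each member to the average rating they received
        from teammates (any consistent scale works).
        """
        mean_rating = sum(peer_ratings.values()) / len(peer_ratings)
        individual = {}
        for member, rating in peer_ratings.items():
            factor = rating / mean_rating  # 1.0 means an average contribution
            individual[member] = min(100.0, round(group_mark * factor, 1))
        return individual

    # Group mark of 70; Bo was rated as contributing more than Ada and Cy.
    print(moderated_marks(70, {"Ada": 3.5, "Bo": 4.5, "Cy": 3.0}))
    # {'Ada': 66.8, 'Bo': 85.9, 'Cy': 57.3}

If an approach like this is used, the marking criteria should state the rating scale, any cap, and how far an individual mark can move away from the group mark.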