Examples of Rubric Creation

Creating a rubric takes time and requires thought and experimentation. Here you can see the steps used to create two kinds of rubric: one for problems in a physics exam for a small, upper-division physics course, and another for an essay assignment in a large, lower-division sociology course.

Physics Problems

In STEM disciplines (science, technology, engineering, and mathematics), assignments tend to be analytical and problem-based. Holistic rubrics can be an efficient, consistent, and fair way to grade a problem set. An analytic rubric, by contrast, often gives a clearer picture of where a student should direct future learning efforts. And because holistic rubrics assign a single label to overall understanding, they can invite more regrade requests than an analytic rubric with explicit criteria.

When starting to grade a problem, first think about the conceptual ingredients a solution requires. Then look at a sample of student work to get a feel for common mistakes. Decide what kind of rubric you will use (e.g., holistic or analytic, and how many points). To apply a holistic rubric, mark comments and sort the students’ papers into stacks (e.g., five stacks if using a five-point scale). Finally, check the stacks for consistency and record the scores. The following is a sample homework problem from a UC Berkeley Physics Department undergraduate course in mechanics.

Homework Problem

Learning objective.

Solve for position and speed along a projectile’s trajectory.

Desired Traits: Conceptual Elements Needed for the Solution

  • Decompose motion into vertical and horizontal axes.
  • Identify that the maximum height occurs when the vertical velocity is 0.
  • Apply the kinematics equations, with g as the acceleration, to solve for the time and height (see the sketch after this list).
  • Evaluate the numerical expression.
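
A worked sketch of the solution these traits describe, assuming for illustration a launch at speed $v_0$ and angle $\theta$ above the horizontal (these symbols are not given in the original problem): the maximum height occurs where the vertical velocity vanishes,

    v_y(t) = v_0 \sin\theta - g t = 0
    \quad \Longrightarrow \quad
    t_{\max} = \frac{v_0 \sin\theta}{g},
    \qquad
    h_{\max} = \frac{(v_0 \sin\theta)^2}{2g}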

A note on analytic rubrics: If you feel more comfortable grading with an analytic rubric, you can assign a point value to each concept. The drawback of this method is that it can unfairly penalize a student who has a good understanding of the problem but makes many minor errors. And because the analytic method has many more parts, it can take quite a bit more time to apply. In the end, your analytic rubric should give results that agree with a common-sense assessment of how well the student understood the problem, a sense that the holistic method captures well.

Holistic Rubric

A holistic rubric, closely based on a rubric by Bruce Birkett and Andrew Elby:

[a] This policy especially makes sense on exam problems, for which students are under time pressure and are more likely to make harmless algebraic mistakes. It would also be reasonable to have stricter standards for homework problems.

Analytic Rubric

The following analytic rubric takes the desired traits of the solution and assigns a point value to each component. Note that the relative point values should reflect each component’s importance in the overall problem: the problem-solving steps, for example, should be worth more than the final numerical value of the solution. This kind of rubric also shows students exactly where their current understanding of the problem falls short.

Try to avoid penalizing multiple times for the same mistake by choosing your evaluation criteria to be related to distinct learning outcomes. In designing your rubric, you can decide how finely to evaluate each component. Having more possible point values on your rubric can give more detailed feedback on a student’s performance, though it typically takes more time for the grader to assess.
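
As a minimal sketch of this idea, an analytic rubric can be treated as a mapping from criteria to maximum points; the criteria and point values below are hypothetical, loosely echoing the projectile problem above:

    # A sketch of an analytic rubric as a mapping from criteria to
    # maximum points. Criteria and values are hypothetical.
    RUBRIC = {
        "decompose motion into vertical and horizontal axes": 3,
        "identify that max height occurs when vertical velocity is 0": 3,
        "apply kinematics with g to solve for time and height": 3,
        "evaluate the numerical expression": 1,  # setup outweighs arithmetic
    }

    def score(awarded):
        """Sum awarded points, capping each criterion at its maximum."""
        return sum(min(points, RUBRIC[criterion])
                   for criterion, points in awarded.items())

    # Full conceptual credit with a minor slip in the final arithmetic:
    marks = dict.fromkeys(RUBRIC, 3)
    marks["evaluate the numerical expression"] = 0.5
    print(score(marks), "out of", sum(RUBRIC.values()))  # 9.5 out of 10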

Of course, problems can, and often do, exercise multiple learning outcomes in tandem. When a mistake could be assigned to more than one criterion, check that the overall grade is consistent with the student’s mastery of the problem. Not having to decide how a particular mistake maps onto the analytic rubric is one advantage of the holistic approach. When designing problems, it also benefits students to avoid several subparts that rely on prior answers, since these disproportionately skew the grades of students who miss an ingredient early on. When possible, consider writing independent problems to test different learning outcomes.

Sociology Research Paper

An introductory-level, large-lecture course is a difficult setting for managing a student research assignment. With the assistance of an instructional support team that included a GSI teaching consultant and a UC Berkeley librarian [b], sociology lecturer Mary Kelsey developed the following assignment:

This was a lengthy and complex assignment worth a substantial portion of the course grade. Since the class was very large, the instructor wanted to minimize the effort it would take her GSIs to grade the papers in a manner consistent with the assignment’s learning objectives. For these reasons Dr. Kelsey and the instructional team gave a lot of forethought to crafting a detailed grading rubric.

Desired Traits

  • Argument
  • Use and interpretation of data
  • Reflection on personal experiences
  • Application of course readings and materials
  • Organization, writing, and mechanics

For this assignment, the instructional team decided to grade each trait individually because there seemed to be too many independent variables to grade holistically. They could have used a five-point scale, a three-point scale, or a descriptive analytic scale. The choice depended on the complexity of the assignment and the kind of information they wanted to convey to students about their work.

Below are three of the analytic rubrics they considered for the Argument trait and a holistic rubric for all the traits together. Lastly, you will find the entire analytic rubric, for all five desired traits, that was finally used for the assignment. Which would you choose, and why?

Five-Point Scale

Three-Point Scale

Simplified Three-Point Scale (numbers replaced with descriptive terms)

For some assignments, you may choose to use a holistic rubric, that is, one scale for the whole assignment. This type of rubric is particularly useful when the variables you want to assess cannot be usefully separated. We chose not to use a holistic rubric for this assignment because we wanted to grade each trait separately, but we have included a holistic version here for comparison.

Final Analytic Rubric

This is the rubric the instructor finally decided to use. It rates five major traits, each on a five-point scale. This allowed for fine but clear distinctions in evaluating the students’ final papers.

[b] These materials were developed during UC Berkeley’s 2005–2006 Mellon Library/Faculty Fellowship for Undergraduate Research program. Members of the instructional team who worked with Lecturer Kelsey in developing the grading rubric included Susan Haskell-Khan, a GSI Center teaching consultant and doctoral candidate in history, and Sarah McDaniel, a teaching librarian with the Doe/Moffitt Libraries.

Whenever we give feedback, it inevitably reflects our priorities and expectations about the assignment. In other words, we're using a rubric to choose which elements (e.g., right/wrong answer, work shown, thesis analysis, style, etc.) receive more or less feedback and what counts as a "good thesis" or a "less good thesis." When we evaluate student work, that is, we always have a rubric. The question is how consciously we’re applying it, whether we’re transparent with students about what it is, whether it’s aligned with what students are learning in our course, and whether we’re applying it consistently. The more we’re doing all of the following, the more consistent and equitable our feedback and grading will be:

Being conscious of your rubric ideally means having one written out, with explicit criteria and concrete features that describe more/less successful versions of each criterion. If you don't have a rubric written out, you can use this assignment prompt decoder for TFs & TAs to determine which elements and criteria should be the focus of your rubric.

Being transparent with students about your rubric means sharing it with them ahead of time and making sure they understand it. This assignment prompt decoder for students is designed to facilitate this discussion between students and instructors.

Aligning your rubric with your course means articulating the relationship between “this” assignment and the ones that scaffold up and build from it, which ideally involves giving students the chance to practice different elements of the assignment and get formative feedback before they’re asked to submit material that will be graded. For more ideas and advice on how this looks, see the “Formative Assignments” page at Gen Ed Writes.

Applying your rubric consistently means using a stable vocabulary when making your comments and keeping your feedback focused on the criteria in your rubric.

How to Build a Rubric

Rubrics and assignment prompts are two sides of the same coin. If you’ve already created a prompt, you should have all of the information you need to make a rubric. Of course, it doesn’t always work out that way, and that itself turns out to be an advantage of making rubrics: it’s a great way to test whether your prompt is in fact communicating to students everything they need to know about the assignment they’ll be doing.

So what do students need to know? In general, assignment prompts boil down to a small number of common elements:

  • Purpose
  • Evidence and Analysis
  • Style and Conventions
  • Specific Guidelines
  • Advice on Process

If an assignment prompt is clearly addressing each of these elements, then students know what they’re doing, why they’re doing it, and when/how/for whom they’re doing it. From the standpoint of a rubric, we can see how these elements correspond to the criteria for feedback.

All of these criteria can be weighed and given feedback, and they’re all things that students can be taught and given opportunities to practice. That makes them good criteria for a rubric, and that in turn is why they belong in every assignment prompt.

Which leaves “purpose” and “advice on process.” These elements are, in a sense, the heart and engine of any assignment, but their role in a rubric will differ from assignment to assignment. Here are a couple of ways to think about each.

Purpose

On the one hand, “purpose” is the rationale for how the other elements are working in an assignment, and so feedback on those elements adds up to feedback on the skills students are learning vis-à-vis the overall purpose. In that sense, separately grading whether students have achieved an assignment’s “purpose” can be tricky.

On the other hand, metacognitive components such as journals or cover letters or artist statements are a great way for students to tie work on their assignment to the broader (often future-oriented) reasons why they’ve been doing the assignment. Making this kind of component a small part of the overall grade, e.g., 5% and/or part of “specific guidelines,” can make it a nudge toward meaningful self-reflection on what students have been learning and how it might build toward other assignments or experiences.

Advice on process

As with “purpose,” “advice on process” often amounts to helping students break down an assignment into the elements they’ll get feedback on. In that sense, feedback on those steps is often more informal or aimed at giving students practice with skills or components that will be parts of the bigger assignment.

For those reasons, though, the kind of feedback we give students on smaller steps has its own (even if ungraded) rubric. For example, if a prompt asks students to propose a research question as part of the bigger project, they might get feedback on whether it can be answered by evidence, or whether it has a feasible scope, or who the audience for its findings might be. All of those criteria, in turn, could, and ideally would, later be part of the rubric for the graded project itself. Or perhaps students are submitting earlier, smaller components of an assignment for separate grades; or are expected to submit separate components all together at the end as a portfolio, perhaps together with a cover letter or artist statement.

Using Rubrics Effectively

In the same way that rubrics can facilitate the design phase of an assignment, they can also facilitate the teaching and feedback phases, including, of course, grading. Here are a few ways this can work in a course:

Discuss the rubric ahead of time with your teaching team. Getting on the same page about what students will be doing and how different parts of the assignment fit together is, in effect, laying out what needs to happen in class and in section, both in terms of what students need to learn and practice, and how the coming days or weeks should be sequenced.

Share the rubric with your students ahead of time. For the same reason it’s ideal for course heads to discuss rubrics with their teaching team, it’s ideal for the teaching team to discuss the rubric with students. Not only does the rubric lay out the different skills students will learn during an assignment and which skills are more or less important for that assignment; it also means that the formative feedback they get along the way is more legible as practice on elements of the “bigger assignment.” To be sure, this can’t always happen. Rubrics aren’t always up and running at the beginning of an assignment, and sometimes they emerge more inductively during the feedback and grading process, as instructors take stock of what students have actually submitted. In both cases, later is better than never; there’s no need to make the perfect the enemy of the good. Circulating a rubric at the time you return student work can still help students see the relationship between the learning objectives and goals of the assignment and the feedback and grade they’ve received.

Discuss the rubric with your teaching team during the grading process. If your assignment has a rubric, it’s important to make sure that everyone who will be grading is able to use the rubric consistently. Most rubrics aren’t exhaustive—see the note above on rubrics that are “too specific”—and a great way to see how different graders are handling “real-life” scenarios for an assignment is to have the entire team grade a few samples (including examples that seem more representative of an “A” or a “B”) and compare everyone’s approaches. We suggest scheduling a grade-norming session for your teaching staff.


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e., what learning objectives does it measure)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the overall assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Check each draft criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them

Step 5: Design the rating scale

Most rating scales include between three and five levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven useful for drafting rubrics. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want. Use the results as a starting point, and adjust the descriptions as needed.
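
For example, a prompt along these lines could serve as a starting point (the assignment details here are invented for illustration):

    Create an analytic grading rubric for a 1,500-word persuasive essay in an
    introductory college writing course. Assess four criteria: thesis and
    argument, use of evidence, organization, and grammar and mechanics. Use
    four performance levels (Exemplary, Proficient, Developing, Beginning)
    and write a one-sentence descriptor for each criterion at each level.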

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient,” i.e., B-level work. You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If an indicator is described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met
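
For instance, descriptors for a hypothetical “organization” criterion might read:

    Exemplary:  Ideas follow a logical order throughout; every paragraph is linked by a transition.
    Proficient: Ideas follow a logical order in most of the paper; most paragraphs are linked by transitions.
    Developing: Ideas follow a logical order in some sections; few paragraphs are linked by transitions.
    Beginning:  Ideas do not yet follow a logical order; transitions are absent.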

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc. into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric in a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

Best Practices

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying “uses excellent sources,” describe what makes a source excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words; focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University. (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. Sanger & N. Gleason (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

How to Design Effective Rubrics

Rubrics can be effective assessment tools when constructed using methods that incorporate four main criteria: validity, reliability, fairness, and efficiency. For a rubric to be valid and reliable, it must only grade the work presented (reducing the influence of instructor biases) so that anyone using the rubric would obtain the same grade (Felder and Brent 2016). Fairness ensures that the grading is transparent by providing students with access to the rubric at the beginning of the assessment while efficiency is evident when students receive detailed, timely feedback from the rubric after grading has occurred (Felder and Brent 2016). Because the most informative rubrics for student learning are analytical rubrics (Brookhart 2013), the steps below explain how to construct an analytical rubric.

Five Steps to Design Effective Rubrics

The first step in designing a rubric is determining the content, skills, or tasks you want students to be able to accomplish (Wormeli 2006) by completing an assessment. Thus, two main questions need to be answered:

  • What do students need to know or do? and
  • How will the instructor know when the students know or can do it?

Another way to think about this is to decide which learning objectives for the course are being evaluated by the assessment (Allen and Tanner 2006, Wormeli 2006). (More information on learning objectives can be found at Teaching@UNL.) For most projects or similar assessments, more than one area of content or skill is involved, so most rubrics assess more than one learning objective. For example, a project may require students to research a topic (content knowledge learning objective) using digital literacy skills (research learning objective) and to present their findings (communication learning objective). Therefore, it is important to think through all the tasks or skills students will need to complete during an assessment to meet the learning objectives. Additionally, it is advisable to review examples of rubrics for a specific discipline or task to find grade-level-appropriate rubrics that can aid in preparing a list of tasks and activities essential to meeting the learning objectives (Allen and Tanner 2006).

Once the learning objectives and a list of essential tasks aligned to them are compiled, the next step is to determine the number of criteria for the rubric. Most rubrics have at least three criteria and fewer than a dozen. It is important to remember that as more criteria are added to a rubric, a student’s cognitive load increases, making it more difficult for students to remember all the assessment requirements (Allen and Tanner 2006, Wolf et al. 2008). Thus, 3-10 criteria are usually recommended: if an assessment has fewer than three criteria, a different format (e.g., a grade sheet) can convey the grading expectations, and if a rubric has more than ten, some criteria can be consolidated into a single larger category (Wolf et al. 2008). Once the number of criteria is established, the final step for this aspect of the rubric is creating descriptive titles for each criterion and determining whether some criteria will be weighted and thus more influential on the grade for the assessment. Once this is accomplished, the criteria column of the rubric can be designed (Table 1).

The third aspect of rubric design is the levels of performance and the labels for each level. It is recommended to have 3-6 levels of performance in a rubric (Allen and Tanner 2006, Wormeli 2006, Wolf et al. 2008). The key to determining the number of performance levels is how easily the levels can be distinguished (Allen and Tanner 2006). Can the difference in student performance between a “3” and a “4” be readily seen on a five-level rubric? If not, only four levels should be used for all criteria. If most of the criteria can easily be differentiated with five levels but one criterion is difficult to discern, then two levels could be left blank for that criterion (see the “Research Skills” criterion in Table 1). It is also important to note that having fewer levels makes constructing the rubric faster but may result in ambiguous expectations and difficulty providing feedback to students.

Once the number of performance levels is set for the rubric, assign each level a name or title that indicates the level of performance. When creating the naming system for the performance levels, it is important to use terms that are not subjective, overly negative, or judgmental (e.g., “Excellent”, “Good”, and “Bad”; Allen and Tanner 2006, Stevens and Levi 2013) and to ensure the terms use the same part of speech (all nouns, all verbs ending in “-ing”, all adjectives, etc.; Wormeli 2006). Examples of performance level naming systems include:

  • Exemplary, Competent, Not yet competent
  • Proficient, Intermediate, Novice
  • Strong, Satisfactory, Not yet satisfactory
  • Exceeds Expectations, Meets Expectations, Below Expectations
  • Proficient, Capable, Adequate, Limited
  • Exemplary, Proficient, Acceptable, Unacceptable
  • Mastery, Proficient, Apprentice, Novice, Absent

Additionally, the order of the levels needs to be determined: some rubrics are designed to increase in proficiency across the levels (lowest, middle, highest performance), while others start with the highest performance level and move toward the lowest (highest, middle, lowest performance).

It is essential to evaluate how well a rubric works for grading and providing feedback to students. If possible, use previous student work to test a rubric to determine how well it functions for grading the assessment prior to giving the rubric to students (Wormeli 2006). After using the rubric in a class, evaluate how well students met the criteria and how easy the rubric was to use in grading (Allen and Tanner 2006). If a specific criterion has low grades associated with it, determine whether the language was too subjective or confusing for students. This can be done by asking students to critique the rubric or by using a student survey for the overall assessment. Alternatively, the instructor can ask a colleague or instructional designer for feedback on the rubric. If more than one instructor is using the rubric, determine whether all instructors are seeing lower grades on certain criteria. Analyzing the grades can often show where students are failing to understand the content or the assessment format or requirements.

Next, look at how well the rubric reflects the work turned in by the students (Allen and Tanner 2006, Wormeli 2006). Does the grade based on the rubric match what the instructor would expect for the student’s assignment? Or does the rubric result in some students receiving a higher or lower grade? If the latter is occurring, determine which aspect of the rubric needs to be “fudged” to obtain the correct grade for the assessment and update the problematic criteria. Alternatively, the instructor may find that the rubric works for all criteria but that some aspects of the assessment are under- or over-valued in the rubric (Allen and Tanner 2006). For example, if the main learning objective is the content, but 40% of the assessment score comes from writing skills, the rubric may need to be re-weighted to give the content criteria a stronger influence on the grade than the writing criteria.

Finally, analyze how well the rubric worked for grading the assessment overall. If the instructor needed to modify the interpretation of the rubric while grading, then the levels of performance or the number of criteria may need to be edited to better align with the learning objectives and the evidence being shown in the assessment (Allen and Tanner 2006). For example, if only three performance levels exist in the rubric, but the instructor often had to give partial credit on a criterion, then this may indicate that the rubric needs to be expanded to have more levels of performance. If instead, a specific criterion is difficult to grade or distinguish between adjacent performance levels, this may indicate that too much is being assessed in the criterion (and thus should be divided into two or more different criteria) or that the criterion is not well written and needs to be explained with more details. Reflecting on the effectiveness of a rubric should be done each time the rubric is used to ensure it is well-designed and accurately represents student learning.

Rubric Examples & Resources

UNCW College of Arts & Science “Scoring Rubrics” contains links to discipline-specific rubrics designed by faculty from many institutions. Most of these rubrics are downloadable Word files that can be edited for use in courses.

Syracuse University “Examples of Rubrics” also organizes rubrics by discipline, with some as downloadable Word files that can be edited for use in courses.

University of Illinois – Springfield has PDF files of different types of rubrics on its “Rubric Examples” page. These rubrics cover many different types of tasks (presenting, participation, critical thinking, etc.) from a variety of institutions.

If you are building a rubric in Canvas, the rubric guide in Canvas 101 provides detailed information, including video instructions: Using Rubrics: Canvas 101 (unl.edu)

Allen, D. and K. Tanner (2006). Rubrics: Tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE – Life Sciences Education 5: 197-203.

Stevens, D. D., and A. J. Levi (2013). Introduction to Rubrics: an assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing, Sterling, VA, USA.

Wolf, K., M. Connelly, and A. Komara (2008). A tale of two rubrics: improving teaching and learning across the content areas through assessment. Journal of Effective Teaching 8: 21-32.

Wormeli, R. (2006). Fair isn’t always equal: assessing and grading in the differentiated classroom. Stenhouse Publishers, Portland, ME, USA.


Grading and Performance Rubrics

What Are Rubrics?

A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.

Advantages of Using Rubrics

Using a rubric provides several advantages to both instructors and students. Grading according to an explicit and descriptive set of criteria that is designed to reflect the weighted importance of the objectives of the assignment helps ensure that the instructor’s grading standards don’t change over time. Grading consistency is difficult to maintain over time because of fatigue, shifting standards based on prior experience, or intrusion of other criteria. Furthermore, rubrics can reduce the time spent grading by reducing uncertainty and by allowing instructors to refer to the rubric description associated with a score rather than having to write long comments. Finally, grading rubrics are invaluable in large courses that have multiple graders (other instructors, teaching assistants, etc.) because they can help ensure consistency across graders and reduce the systematic bias that can be introduced between graders.

Used more formatively, rubrics can help instructors get a clearer picture of the strengths and weaknesses of their class. By recording the component scores and tallying up the number of students scoring below an acceptable level on each component, instructors can identify those skills or concepts that need more instructional time and student effort.

Grading rubrics are also valuable to students. A rubric can help instructors communicate to students the specific requirements and acceptable performance standards of an assignment. When rubrics are given to students with the assignment description, they can help students monitor and assess their progress as they work toward clearly indicated goals. When assignments are scored and returned with the rubric, students can more easily recognize the strengths and weaknesses of their work and direct their efforts accordingly.

Examples of Rubrics

Here are links to a diverse set of rubrics designed by Carnegie Mellon faculty and faculty at other institutions. Although your particular field of study and type of assessment activity may not be represented currently, viewing a rubric that is designed for a similar activity may provide you with ideas on how to divide your task into components and how to describe the varying levels of mastery.

Paper Assignments

  • Example 1: Philosophy Paper This rubric was designed for student papers in a range of philosophy courses, CMU.
  • Example 2: Psychology Assignment Short, concept application homework assignment in cognitive psychology, CMU.
  • Example 3: Anthropology Writing Assignments This rubric was designed for a series of short writing assignments in anthropology, CMU.
  • Example 4: History Research Paper. This rubric was designed for essays and research papers in history, CMU.

Projects

  • Example 1: Capstone Project in Design This rubric describes the components and standard of performance from the research phase to the final presentation for a senior capstone project in the School of Design, CMU.
  • Example 2: Engineering Design Project This rubric describes performance standards on three aspects of a team project: Research and Design, Communication, and Team Work.

Oral Presentations

  • Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division history course, CMU.
  • Example 2: Oral Communication
  • Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in a history course, CMU.

Class Participation/Contributions

  • Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course, CMU.
  • Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar. 


Overview of Rubrics

Rubrics are a set of criteria to evaluate performance on an assignment or assessment. Rubrics can communicate expectations regarding the quality of work to students and provide a standardized framework for instructors to assess work. Rubrics can be used for both formative and summative assessment. They are also crucial in encouraging self-assessment of work and structuring peer-assessments.

Why use rubrics?

Rubrics are an important tool to assess learning in an equitable and just manner. This is because they enable:

  • A common set of standards and criteria to be uniformly applied, which can mitigate bias
  • Transparency regarding the standards and criteria on which students are evaluated
  • Efficient grading with timely and actionable feedback 
  • Identifying areas in which students need additional support and guidance 
  • The use of objective, criterion-referenced metrics for evaluation 

Some instructors may be reluctant to provide a rubric to grade assessments under the perception that it stifles student creativity (Haugnes & Russell, 2018). However, sharing the purpose of an assessment and the criteria for success in the form of a rubric, along with relevant examples, has been shown to particularly improve the success of BIPOC, multiracial, and first-generation students (Jonsson, 2014; Winkelmes, 2016). Improved success in assessments is generally associated with an increased sense of belonging which, in turn, leads to higher student retention and more equitable outcomes in the classroom (Calkins & Winkelmes, 2018; Weisz et al., 2023). By not providing a rubric, faculty risk having students guess the criteria on which they will be evaluated. When students have to guess what the expectations are, it may unfairly disadvantage students who are first-generation, BIPOC, international, or otherwise have not been exposed to the cultural norms that have dominated higher-ed institutions in the U.S. (Shapiro et al., 2023). Moreover, in such cases, criteria may be applied inconsistently, leading to biases in the grades awarded to students.

Steps for Creating a Rubric

Clearly state the purpose of the assessment, which topic(s) learners are being tested on, the type of assessment (e.g., a presentation, essay, group project), the skills they are being tested on (e.g., writing, comprehension, presentation, collaboration), and the goal of the assessment for instructors (e.g., gauging formative or summative understanding of the topic). 

Determine the specific criteria or dimensions to assess in the assessment. These criteria should align with the learning objectives or outcomes to be evaluated. These criteria typically form the rows in a rubric grid and describe the skills, knowledge, or behavior to be demonstrated. The set of criteria may include, for example, the idea/content, quality of arguments, organization, grammar, citations and/or creativity in writing. These criteria may form separate rows or be compiled in a single row depending on the type of rubric.

(See the row headers of Figure 1.)

Create a scale of performance levels that describes the degree of proficiency attained for each criterion. The scale typically has 4 to 5 levels (although there may be fewer levels depending on the type of rubric used). The levels should also have meaningful labels (e.g., not meeting expectations, approaching expectations, meeting expectations, exceeding expectations). When assigning levels of performance, use inclusive language that can inculcate a growth mindset among students, especially when work may otherwise be deemed not to meet the mark. Some examples include “Does not yet meet expectations,” “Considerable room for improvement,” “Progressing,” “Approaching,” “Emerging,” and “Needs more work,” instead of terms like “Unacceptable,” “Fails,” “Poor,” or “Below Average.”

(See the column headers of Figure 1.)

Develop a clear and concise descriptor for each combination of criterion and performance level. These descriptors should provide examples or explanations of what constitutes each level of performance for each criterion. Typically, instructors should start by describing the highest and lowest levels of performance for a criterion and then describe the intermediate levels. It is important to keep the language uniform across all columns, e.g., use syntax and words that are aligned in each column for a given criterion.

(See the cells of Figure 1.)

It is important to weight each criterion so that it reflects the importance of the learning objectives being tested. For example, if the primary goal of a research proposal is to test mastery of content and application of knowledge, these criteria should be weighted more heavily than others (e.g., grammar, style of presentation). This can be done by associating a different scoring system with each criterion (e.g., following a scale of 8-6-4-2 points per level of performance for higher-weight criteria and 4-3-2-1 points per level for lower-weight criteria), as sketched after Figure 1 below. Further, the number of points awarded across levels of performance should be evenly spaced (e.g., 10-8-6-4 instead of 10-6-3-1). Finally, if there is a letter grade associated with a particular assessment, consider how it relates to scores. For example, instead of having students receive an A only if they reach the highest level of performance on every criterion, consider assigning an A to a range of scores (28-30 total points) or a combination of levels of performance (e.g., exceeds expectations on higher-weight criteria and meets expectations on the others).

(See the numerical values in the column headers of Figure 1.)

Figure 1: Graphic describing the five basic elements of a rubric
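
As a minimal sketch of the weighted scoring described above (the criteria names and the grade band are hypothetical):

    # Points per performance level, from highest to lowest, following the
    # example scales above: 8-6-4-2 for higher-weight criteria and
    # 4-3-2-1 for lower-weight ones. Criteria are hypothetical.
    RUBRIC = {
        "content mastery":          {"exceeds": 8, "meets": 6, "approaching": 4, "beginning": 2},
        "application of knowledge": {"exceeds": 8, "meets": 6, "approaching": 4, "beginning": 2},
        "organization":             {"exceeds": 4, "meets": 3, "approaching": 2, "beginning": 1},
        "grammar and style":        {"exceeds": 4, "meets": 3, "approaching": 2, "beginning": 1},
    }

    def total(ratings):
        """Convert each criterion's performance level to points and sum."""
        return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

    score = total({
        "content mastery": "exceeds",
        "application of knowledge": "meets",
        "organization": "meets",
        "grammar and style": "approaching",
    })
    print(score, "out of 24")  # 19 out of 24; e.g., award an A for 22-24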

Note: Consider using a template rubric that can be adapted for similar activities in the classroom to avoid the fatigue of developing multiple rubrics. Tools such as Rubistar and iRubric provide suggested wording for each criterion depending on the type of assessment. Additionally, the above format can be incorporated into rubrics added directly in Canvas or in the grid view of rubrics in Gradescope, which are common grading tools. Alternatively, tables within a word processor or spreadsheet may be used to build a rubric. You may also adapt the example rubrics provided below to the specific learning goals for the assessment using the blank template rubrics provided with each type of rubric. Watch the linked video for a quick introduction to designing a rubric. Word document (docx) files linked below will automatically download to your device, whereas PDF files will open in a new tab.

Types of Rubrics

Analytic Rubrics

In these rubrics, one specifies at least two criteria and provides a separate score for each criterion. The steps outlined above for creating a rubric are typical for an analytic-style rubric. Analytic rubrics are used to provide detailed feedback to students and to help identify strengths as well as particular areas in need of improvement. They can be particularly useful when providing formative feedback to students, for student peer assessment and self-assessment, and for project-based summative assessments that evaluate student learning across multiple criteria. You may use a blank analytic rubric template (docx) or adapt an existing sample of an analytic rubric (pdf).

Fig 2: Graphic describing a sample analytic rubric (adopted from George Mason University, 2013)

Developmental Rubrics

These are a subset of analytic rubrics, typically used to assess student performance and engagement during a learning period rather than the end product. Such rubrics are typically used to assess soft skills and behaviors that are less tangible (e.g., intercultural maturity, empathy, collaboration skills). They are useful in assessing the extent to which students develop a particular skill, ability, or value in experiential learning programs, and they are grounded in developmental theory (King, 2005). Examples include an intercultural knowledge and competence rubric (docx) and a global learning rubric (docx).

Holistic Rubrics

These rubrics consider all criteria on one scale, providing a single score that gives an overall impression of a student’s performance on an assessment. These rubrics emphasize the overall quality of a student’s work rather than delineating its shortfalls. However, a limitation of holistic rubrics is that they are not useful for providing specific, nuanced feedback or for identifying areas of improvement. Thus, they are most useful when grading summative assessments for which students have previously received detailed feedback using analytic or single-point rubrics. They may also be used to provide quick formative feedback for smaller assignments where no more than 2-3 criteria are being tested at once. Try using our blank holistic rubric template (docx) or adapt an existing sample of a holistic rubric (pdf).

Fig 3: Graphic describing a sample holistic rubric (adopted from Teaching Commons, DePaul University)

Checklist Rubrics

These rubrics contain only two levels of performance (e.g., yes/no, present/absent) across a longer list of criteria (often more than five). Checklist rubrics have the advantage of providing a quick assessment, since each criterion is either met or not met. Consequently, they are preferable when introducing self- or peer-assessment of learning: evaluations become more objective, and each criterion can elicit only one of two responses, allowing uniform and quick grading. For similar reasons, such rubrics are useful for faculty in providing quick formative feedback, since they immediately highlight the specific criteria to improve on. Such rubrics are also used in grading summative assessments in courses utilizing alternative grading systems such as specifications grading, contract grading, or a credit/no-credit grading system wherein a minimum threshold of performance has to be met for the assessment (see the sketch after Fig. 4 below). That said, developing checklist rubrics from existing analytic rubrics may require considerable upfront investment, given that criteria have to be phrased so that they can elicit only binary responses. Here is a link to the checklist rubric template (docx).

Fig. 4: Graphic describing a sample checklist rubric
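
A minimal sketch of a checklist rubric with a pass threshold, in the spirit of the credit/no-credit systems mentioned above (the criteria and the threshold are hypothetical):

    # A checklist rubric as binary criteria with a minimum threshold,
    # as in specifications or credit/no-credit grading. The criteria
    # and threshold below are hypothetical.
    CRITERIA = [
        "states a clear research question",
        "cites at least five peer-reviewed sources",
        "includes a methods section",
        "stays within the page limit",
        "uses the required citation format",
    ]

    def passes(checks, threshold=4):
        """Credit/no credit: enough criteria must be met."""
        return sum(checks[c] for c in CRITERIA) >= threshold

    submission = dict.fromkeys(CRITERIA, True)
    submission["stays within the page limit"] = False
    print(passes(submission))  # True: 4 of 5 criteria met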

Single-Point Rubrics

A single-point rubric is a modified version of a checklist-style rubric in that it specifies a single column of criteria. However, rather than only indicating whether expectations are met or not, as happens in a checklist rubric, a single-point rubric allows instructors to specify ways in which work exceeds or does not meet expectations. The criteria to be tested are laid out in a central column describing the average expectation for the assignment. Instructors indicate areas of improvement on the left side of the criteria, whereas areas of strength in student performance are indicated on the right side. These types of rubrics provide flexibility in scoring and are typically used in courses with alternative grading systems such as ungrading or contract grading. However, they do require instructors to provide detailed feedback for each student, which can be unfeasible for assessments in large classes. Here is a link to the single point rubric template (docx).

Fig. 5: Graphic describing a single point rubric (adopted from Teaching Commons, DePaul University)

Best Practices for Designing and Implementing Rubrics

When designing the rubric format, descriptors and criteria should be presented in a way that is compatible with screen readers and reading-assistive technology. For example, avoid using only color, jargon, or complex terminology to convey information. If you do use color, pictures, or graphics, try providing alternative formats for the rubric, such as plain-text documents. Explore resources from the CU Digital Accessibility Office to learn more.

Co-creating rubrics can help students engage in higher-order thinking skills such as analysis and evaluation. Further, it allows students to take ownership of their own learning by determining the criteria their work aspires to. For graduate classes or upper-level students, one way of doing this may be to provide the learning outcomes of the project and let students develop the rubric on their own. Students in introductory classes, however, may need more scaffolding, such as being given a draft with room for modification (Stevens & Levi 2013). Watch the linked video for tips on co-creating rubrics with students. Additionally, involving teaching assistants in designing a rubric can help in getting feedback on expectations for an assessment prior to implementing and norming the rubric.

When first designing a rubric, it is important to compare grades awarded for the same assessment by multiple graders to make sure the criteria are applied uniformly and reliably for the same level of performance. Further, ensure that the levels of performance in student work can be adequately distinguished using the rubric. Such a norming protocol is particularly important at the start of any course in which multiple graders use the same rubric to grade an assessment (e.g., recitation sections, lab sections, teaching team). Here, instructors may select a subset of assignments that all graders evaluate using the same rubric, followed by a discussion to identify any discrepancies in the criteria applied and ways to address them. Such strategies can make rubrics more reliable, effective, and clear.

Sharing the rubric with students prior to an assessment can help familiarize students with an instructor’s expectations. This can help students master their learning outcomes by guiding their work in the appropriate direction and increase student motivation. Further, providing the rubric to students can help encourage metacognition and ability to self-assess learning.

Sample Rubrics

Below are links to rubric templates designed by a team of experts assembled by the Association of American Colleges and Universities (AAC&U) to assess 16 major learning goals. These goals are a part of the Valid Assessment of Learning in Undergraduate Education (VALUE) program. All of these examples are analytic rubrics and have detailed criteria to test specific skills. However, since any given assessment typically tests multiple skills, instructors are encouraged to develop their own rubric by utilizing criteria picked from a combination of the rubrics linked below.

  • Civic knowledge and engagement-local and global
  • Creative thinking
  • Critical thinking
  • Ethical reasoning
  • Foundations and skills for lifelong learning
  • Information literacy
  • Integrative and applied learning
  • Intercultural knowledge and competence
  • Inquiry and analysis
  • Oral communication
  • Problem solving
  • Quantitative literacy
  • Written communication

Note: Clicking the above links will automatically download the rubrics to your device in Microsoft Word format. The files are hosted by Kansas State University. Additional information about the VALUE rubrics is available on the AAC&U homepage.

Below are links to sample rubrics that have been developed for different types of assessments. These rubrics follow the analytic rubric template unless mentioned otherwise, but they can be modified into other types (e.g., checklist, holistic, or single point rubrics) to suit the grading system and the goal of the assessment (e.g., formative or summative). As mentioned previously, these rubrics can be modified using the blank template provided.

  • Oral presentations  
  • Painting Portfolio (single-point rubric)
  • Research Paper
  • Video Storyboard

Additional information:

Office of Assessment and Curriculum Support. (n.d.). Creating and using rubrics . University of Hawai’i, Mānoa

Calkins, C., & Winkelmes, M. A. (2018). A teaching method that boosts UNLV student retention . UNLV Best Teaching Practices Expo , 3.

Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76.

Haugnes, N., & Russell, J. L. (2016). Don't box me in: Rubrics for artists and designers. To Improve the Academy, 35(2), 249-283.

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840-852.

McCartin, L. (2022, February 1). Rubrics! an equity-minded practice . University of Northern Colorado

Shapiro, S., Farrelly, R., & Tomaš, Z. (2023). Chapter 4: Effective and equitable assignments and assessments. In Fostering international student success in higher education (2nd ed., pp. 61-87). TESOL Press.

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (second edition). Sterling, VA: Stylus.

Teaching Commons (n.d.). Types of Rubrics . DePaul University

Teaching Resources (n.d.). Rubric best practices, examples, and templates . NC State University 

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K.H. (2016). A teaching intervention that increases underserved college students’ success . Peer Review , 8(1/2), 31-36.

Weisz, C., Richard, D., Oleson, K., Winkelmes, M.A., Powley, C., Sadik, A., & Stone, B. (in progress, 2023). Transparency, confidence, belonging and skill development among 400 community college students in the state of Washington . 

Association of American Colleges and Universities. (2009). Valid Assessment of Learning in Undergraduate Education (VALUE) . 

Canvas Community. (2021, August 24). How do I add a rubric in a course? Canvas LMS Community.

 Center for Teaching & Learning. (2021, March 03). Overview of Rubrics . University of Colorado, Boulder

 Center for Teaching & Learning. (2021, March 18). Best practices to co-create rubrics with students . University of Colorado, Boulder.

Chase, D., Ferguson, J. L., & Hoey, J. J. (2014). Assessment in creative disciplines: Quantifying and qualifying the aesthetic . Common Ground Publishing.

Feldman, J. (2018). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms . Corwin Press, CA.

Gradescope (n.d.). Instructor: Assignment - Grade Submissions . Gradescope Help Center. 

Henning, G., Baker, G., Jankowski, N., Lundquist, A., & Montenegro, E. (Eds.). (2022). Reframing assessment to center equity . Stylus Publishing. 

 King, P. M. & Baxter Magolda, M. B. (2005). A developmental model of intercultural maturity . Journal of College Student Development . 46(2), 571-592.

Selke, M. J. G. (2013). Rubric assessment goes to college: Objective, comprehensive evaluation of student work. Lanham, MD: Rowman & Littlefield.

The Institute for Habits of Mind. (2023, January 9). Creativity Rubrics - The Institute for Habits of Mind . 

Center for Teaching Innovation

Sample group work rubric

Feel free to modify this sample rubric for assessing group work to meet your teaching needs. Source: Problem-Based Learning Clearinghouse of Activities, University of Delaware.

CTRL Faculty Resources

Using Rubrics to Assess Learning

The assessments students complete in your courses (e.g., projects, papers, presentations, performances, exams) provide a way for students to demonstrate how well they have achieved the student learning outcomes. Designing rubrics that clearly convey your assessment criteria to students will help them understand how you will be grading their work and provide them with meaningful feedback they can use to self-assess their strengths and areas for improvement. This resource offers guidance for understanding, creating, and using rubrics effectively.

What is a rubric?

Rubrics are guidelines, criteria, or expectations used to assess student work and provide feedback. Most people think of rubrics as a tool that lays out the criteria to evaluate (grade) written student work after it is submitted or presented. However, rubrics can also be used to provide feedback on other demonstrations of learning like class participation or contributions to a specific discussion.

Rubrics support student learning by providing a structured and consistent way to assess student work, promoting transparency and fairness in assessment.

How do I create a rubric?

Components of a Rubric

Rubrics typically contain several components, which may include:

  • Criteria: These are the specific aspects or dimensions of the task or performance that you’re assessing (e.g., content, organization, clarity, argumentation, use of sources). Criteria should be clear, specific, and relevant to the learning outcomes or purpose of the assignment.
  • Performance Levels & Scoring Scale: Each criterion is typically broken into different levels of performance, ranging from exceeds expectations to does not meet expectations . The scoring scale corresponds with the levels of performance and can be numerical (e.g., 1-4) or descriptive (e.g., excellent, good, fair, poor).
  • Descriptors: Each criterion is accompanied by a description or a set of descriptors that outline the different levels of performance.
  • Weighting: In some cases, criteria may be weighted differently to reflect their relative importance.
  • Feedback Space: Rubrics may include space for providing written feedback or comments on each criterion or overall performance. This feedback helps students understand their personal strengths and areas for improvement.
  • Overall Score: Rubrics typically provide a mechanism for calculating an overall score based on the scores assigned to each criterion; the sketch after this list shows one way to compute a weighted overall score. The score can be used to summarize the student's performance on the task. Students can also use the rubric to score themselves and self-assess their performance before submitting an assignment.
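
To make the Weighting and Overall Score components concrete, here is a minimal Python sketch. The criterion names, weights, and the 1-4 scale are hypothetical examples, not values taken from this resource.

```python
# Minimal sketch: combine per-criterion rubric scores into a weighted
# overall score. Criterion names, weights, and the 1-4 scale are
# hypothetical examples.
rubric_scores = {
    # criterion: (weight, score on a 1-4 scale)
    "content":        (0.40, 4),
    "organization":   (0.25, 3),
    "use_of_sources": (0.20, 3),
    "clarity":        (0.15, 2),
}

max_level = 4
total_weight = sum(w for w, _ in rubric_scores.values())   # 1.0 here
weighted = sum(w * s for w, s in rubric_scores.values())

print(f"Weighted score: {weighted:.2f} out of {max_level * total_weight:.0f}")
print(f"Overall: {100 * weighted / (max_level * total_weight):.1f}%")
```

Because the weights sum to 1.0 here, the weighted score (3.25) can be read directly against the four-point scale.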

Different Types of Rubrics

Checklist Rubric

The simplest type of rubric is a checklist rubric, which is a list of criteria and descriptors with only two performance levels possible (e.g., yes/no, present/not present, meets/does not meet). These types of rubrics can include written feedback for each criterion. One benefit is that they are quicker to use; however, they only assess for proficiency or completion in a binary way.

Checklist rubrics are great for smaller assignments or formative assessments, such as an online discussion board post, an in-class reflection, or drafts of an upcoming larger project.

Example: Online Discussion Checklist Rubric

Single-Point Rubric

The single-point rubric is similar to the checklist rubric in that there is a single column of criteria; however, it leaves space for written feedback on each criterion. Like the checklist, single-point rubrics are quick to create and easy for students to read, since the only descriptors are the target expectations. Another benefit is that they allow for higher-quality, more individualized feedback, because teachers specify areas of growth and strength for each particular student.

Single-point rubrics are very useful to provide formative feedback and support students’ growth. Thus, you could use the rubric without assigning any points, or you could assign points for completion but use the rubric to provide feedback that the student can apply to future work.

Example: Online Discussion Single-Point Rubric

Holistic Rubric

Another simple type of rubric is a holistic rubric, which consists of a single scoring scale on which all criteria are considered together. For each level of performance there is a detailed description that allows for an overall assessment, so the student's performance is matched to a single description on the scale. However, the feedback is more general than specific, and it can be difficult to determine which level of performance fits best.

Like checklists, holistic rubrics are useful for smaller assignments and formative assessments or behaviors like in-class participation. They can even be used for learning activities that are not formally graded by providing a summary of expectations.

Example: In-Class Participation Holistic Rubric

Analytic Rubric

The analytic rubric is what people typically imagine when thinking of a rubric. It resembles a grid or a table with the criteria listed in the leftmost column and the levels of performance listed across the top row. When scoring with an analytic rubric, each criterion is scored individually.

Analytic rubrics provide the most detailed and useful feedback on areas of strengths and weaknesses. They tend to provide more consistent grading. The one drawback is that they may take longer to create.

Example: Reflection Journal Analytic Rubric

Steps to Create a Rubric

  • Determine the purpose of the assessment. Identify what is most important for students to demonstrate with the assignment based on the course learning outcomes and how the assignment aligns with those outcomes.
  • Decide what type of rubric is most relevant. Based on the descriptions of checklist, single-point, holistic, and analytic rubrics above, determine which is the best fit for the assessment.
  • Identify criteria and performance levels. Determine the specific aspects that you are assessing and, if you are creating a holistic or analytic rubric, a scale of performance for those criteria.
  • Write clear descriptions for each level of performance. If you are creating an analytic rubric, be sure to use parallel language between the different levels of performance for each criterion so that the distinctions between them are clear. For instance, if full marks for “background information” in a research report is, “The brief overview of the concept provides accurate and thorough connections to course concepts and cites at least 3 peer-reviewed sources,” then the next level down might be, “The brief overview of the concept provides some connections to course concepts and cites at least 1 peer-reviewed source,” and the lowest level might be, “The overview of the concept does not include connections to course concepts or any peer-reviewed sources.”
  • Determine relative weight of each criterion. Consider if certain criteria are more important than others and should be worth a larger percentage of the grade. For instance, for a research project, evidence and analysis might be more important than organization and presentation of the information.
  • Consider the format and layout of the rubric. We recommend inputting the rubric into Canvas so that it is accessible for students to review in conjunction with the assignment and so that students will have access to the rubric-based feedback.
  • Test and revise the rubric. Before deploying it to assess student work, test your rubric on a hypothetical or actual example of student work to ensure it works in the way you intended; one lightweight way to do part of this mechanically is sketched after this list.
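
As suggested in the last step above, one lightweight way to test a rubric before releasing it is to store it as data and check it mechanically for completeness. Everything in this sketch (criterion names, levels, weights, and descriptors) is a hypothetical example, loosely echoing the "background information" illustration above.

```python
# Minimal sketch: represent an analytic rubric as data, then sanity-check
# that every criterion defines a descriptor for every performance level
# and that the weights sum to 1.0. All names and descriptors are
# hypothetical examples.
LEVELS = ["excellent", "good", "fair", "poor"]

rubric = {
    "background_information": {
        "weight": 0.6,
        "descriptors": {
            "excellent": "Accurate, thorough connections to course concepts; "
                         "cites at least 3 peer-reviewed sources.",
            "good": "Some connections to course concepts; cites at least "
                    "1 peer-reviewed source.",
            "fair": "Weak connections to course concepts; no peer-reviewed sources.",
            "poor": "No connections to course concepts and no sources.",
        },
    },
    "organization": {
        "weight": 0.4,
        "descriptors": {
            "excellent": "Clear structure throughout; transitions guide the reader.",
            "good": "Mostly clear structure; occasional rough transitions.",
            "fair": "Structure is hard to follow in places.",
            "poor": "No discernible structure.",
        },
    },
}

assert abs(sum(c["weight"] for c in rubric.values()) - 1.0) < 1e-9, \
    "criterion weights should sum to 1.0"
for name, criterion in rubric.items():
    missing = set(LEVELS) - set(criterion["descriptors"])
    assert not missing, f"criterion '{name}' is missing levels: {missing}"
print("Rubric is structurally complete.")
```

Storing the rubric as data also makes it easy to render as a table or to enter into Canvas later.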

Using Rubrics in an Equitable Way

Rubrics are most beneficial for learning when implemented in an equitable way. Here are some additional considerations to help ensure fairness and consistency when assessing the diverse range of student performance.

  • Use rubrics consistently and fairly. If working with teaching assistants or using the same rubric across instructors, practice applying the rubric together.
  • Keep rubrics clear, concise, and aligned with learning outcomes.
  • Review and revise rubrics periodically to ensure relevance and accuracy.
  • Communicate rubric expectations to students. Go through the rubric when introducing the assignment. You could even look at an example together and have students practice using the rubric to assess that example. Ask students if they have any clarifying questions about the rubric or potential changes.
  • Provide written comments as feedback in addition to ranking student work based on rubric criteria.
  • Encourage self-assessment and reflection using rubrics. You could also ask students to peer review each other’s work using the rubric.

Additional Examples of Rubrics

We highly recommend drawing on existing examples when creating a rubric. The Association of American Colleges and Universities (AAC&U) has developed Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics for 16 broad, cross-cutting learning outcomes, including ethical reasoning, global learning, intercultural knowledge and competence, and teamwork.

The EMRN rubric is a four-level rubric framework that uses a simple flowchart to categorize student work/learning into one of four categories: Excellent, Meets expectations, Revision needed, and Not assessable. You might use EMRN as your levels of performance for a holistic or analytical rubric.
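
As a rough illustration of the EMRN flowchart's logic, the four categories can be reached through a short chain of yes/no judgments. The judgment order and wording below are paraphrased for illustration, not quoted from the EMRN rubric itself.

```python
# Minimal sketch of EMRN-style flowchart logic. The judgment order and
# wording are a paraphrase, not the canonical EMRN flowchart.
def emrn(assessable: bool, meets_expectations: bool, excellent: bool) -> str:
    """Map yes/no judgments about a piece of work onto an EMRN mark."""
    if not assessable:
        return "N"  # Not assessable: too fragmentary to evaluate
    if not meets_expectations:
        return "R"  # Revision needed: assessable, but falls short
    return "E" if excellent else "M"  # Meets (or exceeds) expectations

print(emrn(assessable=True, meets_expectations=True, excellent=False))  # M
```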

Generative AI can also help you create a rubric. For instance, when prompted with "Create a rubric for an undergraduate research project" and "Create a rubric for an undergraduate research presentation," ChatGPT provided a list of criteria with detailed descriptions for the highest level of performance. You could present this list as a checklist rubric or use it as a starting point for building a full analytic rubric.
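
If you would rather generate such a draft programmatically than through the chat interface, the sketch below sends the same prompt via the OpenAI Python SDK. It assumes SDK version 1.x and an API key in the OPENAI_API_KEY environment variable, and the model name is an assumption; the resource above only describes prompting ChatGPT interactively.

```python
# Minimal sketch: request a draft rubric from a chat model. Assumes the
# OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment;
# the model name below is an assumption, not named in this resource.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Create a rubric for an undergraduate research project."},
    ],
)
print(response.choices[0].message.content)
```

As with any generated draft, review the criteria against your own learning outcomes before using the rubric to grade.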

Example: ChatGPT Rubric for Undergraduate Research Project

  • Clear articulation of research question(s) and objectives.
  • Relevance of the research question to the field of study.
  • Alignment between research question(s) and objectives.
  • Comprehensive review of relevant literature.
  • Critical analysis and synthesis of existing research.
  • Identification of gaps or areas for further investigation.
  • Clarity and appropriateness of research design.
  • Rigor in data collection methods.
  • Justification for chosen methodology and data analysis techniques.
  • Presentation of empirical evidence or findings.
  • Depth of analysis and interpretation of results.
  • Use of appropriate statistical or analytical tools.
  • Coherent discussion of findings in relation to research question(s).
  • Consideration of implications and significance of results.
  • Reflection on limitations and potential areas for future research.
  • Structure and organization of the research project.
  • Clarity and coherence of writing style.
  • Use of appropriate citations and references.
  • Originality and creativity in approach or perspective.
  • Innovative solutions to research problems.
  • Contribution to advancing knowledge in the field.
  • Adherence to ethical guidelines in research conduct.
  • Professionalism in communication and collaboration.
  • Proper acknowledgment of sources and contributions.
  • Overall impact of the research project.
  • Contribution to the field of study.
  • Potential for dissemination or further development.

Each criterion can be scored on a scale (e.g., 1-5) with corresponding descriptors to provide detailed feedback to students on their research projects.

Example: ChatGPT Rubric for Undergraduate Research Presentation

  • Clarity and coherence of presentation structure.
  • Clear articulation of research question(s), objectives, and significance.
  • Logical flow of ideas and transitions between sections.
  • Engaging introduction that captures the audience’s attention.
  • Clear statement of the research problem and objectives.
  • Background information provided to contextualize the research.
  • Description of research design and methodology.
  • Explanation of data collection procedures.
  • Justification for chosen methodology and techniques.
  • Clear presentation of research findings.
  • Use of appropriate visual aids (e.g., graphs, charts) to illustrate data.
  • Interpretation of results and connections to research objectives.
  • Interpretation of findings in relation to research question(s).
  • Consideration of limitations and potential sources of error.
  • Exploration of implications and significance of results.
  • Summary of key findings and their implications.
  • Restatement of research objectives and contributions.
  • Suggestions for future research directions.
  • Confidence and clarity in oral delivery.
  • Eye contact and engagement with the audience.
  • Use of appropriate vocal tone and gestures to enhance communication.
  • Clarity and effectiveness of visual aids.
  • Consistency in design and formatting.
  • Integration of visual aids to support key points.
  • Adherence to allocated presentation time.
  • Ability to cover key points within the time limit.
  • Effective use of time for each section of the presentation.
  • Ability to respond to audience questions or comments.
  • Engagement with audience feedback and discussion.
  • Demonstration of knowledge and expertise during Q&A session.

Each criterion can be scored on a scale (e.g., 1-5) with corresponding descriptors to provide comprehensive feedback to students on their research presentations.

Reflective assessment using analytics and artifacts for scaffolding knowledge building competencies among undergraduate students

Yuqin Yang, Carol K. K. Chan, Gaoxia Zhu, Yuyao Tong & Daner Sun

Knowledge building (KB) competencies are crucial for undergraduates’ creative knowledge work and academic success. While there is substantial research on KB discourse, there are limited efforts in examining how KB competencies in the conceptual, metacognitive, socio-emotional, and epistemic dimensions are demonstrated in KB discourse and how the competencies can be scaffolded. Previous studies suggest the effectiveness of reflective assessment on sustainable and productive KB discourse. This study developed a framework for analyzing KB competencies using KB discourse moves. It also examined whether a KB design augmented by reflective assessment enriched by analytic tools and artifacts could foster undergraduates’ KB competencies, and if so, how. This KB design involves principle-based pedagogy with the participants engaging in collaborative inquiry and discussion on Knowledge Forum, and reflective assessment using (a) super synthesis notes, (b) KB interaction rubrics, and (c) learning analytics visualization tools. Qualitative tracking and lag sequential pattern analysis of Knowledge Forum’s discourse revealed that implementing reflective assessment supported by analytics and artifacts could help the undergraduate students develop KB competencies manifested in discourse with evidence of conceptual advance, epistemic engagement, metacognition, and productive socio-emotional interactions with a collective focus. The thematic analysis illustrated the dynamics through which the design enriched by standards and visualizations helped the undergraduates develop KB competencies: leveraging synthesis super notes to promote conceptual and metacognitive advancement and epistemic engagement; employing KB interaction rubrics to cultivate metacognitive, socio-emotional, and epistemic competencies; and harnessing KBDeX visualizations to promote metacognitive and conceptual advancement and to facilitate epistemic engagement. The implications of scaffolding students’ epistemic agency, metacognition, productive collaborative inquiry, and developing KB competencies in a technology-supported metacognitive learning environment are discussed.

Fig. 10 The process of reflective assessment using KBDeX.

About this article

Yang, Y., Chan, C.K.K., Zhu, G. et al. Reflective assessment using analytics and artifacts for scaffolding knowledge building competencies among undergraduate students. Intern. J. Comput.-Support. Collab. Learn (2024). https://doi.org/10.1007/s11412-024-09421-8

Received: 14 August 2023 · Accepted: 26 March 2024 · Published: 13 May 2024

Keywords: Knowledge Building competencies · Knowledge Building discourse · Reflective assessment · Learning analytics · Undergraduate

Why Employees Who Work Across Silos Get Burned Out

  • Eric Quintane,
  • Jung Won Lee,
  • Camila Umaña Ruiz,
  • Martin Kilduff

And how companies can better support these important cross-functional workers.

When employees collaborate across silos, there are numerous benefits for organizations. But the employees who do this critical work — also known as boundary spanners or network brokers — may end up overwhelmed and burned out, and may even develop abusive behavior toward their fellow employees. Research shows why this can happen and suggests three key strategies companies can use to mitigate the negative effects: strategically integrating cross-silo collaboration into formal roles, providing adequate resources, and developing check-in mechanisms and opportunities to disengage.

In today’s fast-paced and complex business environment, fostering collaboration across organizational silos, whether between different teams, divisions, or regional offices, is no longer a luxury — it’s a necessity. It is key to improving performance, unlocking innovation, and speeding up coordination.

  • Eric Quintane is an associate professor of organizational behavior at ESMT Berlin. He holds a PhD in management from the University of Melbourne in Australia. His research focuses on understanding the dynamics of interpersonal networks and their consequences for individuals (such as innovative performance or burnout).
  • Sunny Lee is an associate professor of organizational behavior and the Deputy Director of Diversity and Inclusion at UCL School of Management. She has a PhD from London Business School. Her research focuses on identifying biases within human resources processes, such as recruitment and promotion, and the psychological implications of workplace behaviors.
  • Jung Won Lee is an assistant professor of organizational behavior at ESSEC Business School. She has a PhD from UCL School of Management. Her research focuses on psychological antecedents and consequences of interpersonal networks.
  • Camila Umaña Ruiz is a consultant and assistant professor in organizational behavior and HR at Pontificia Universidad Javeriana. She has a PhD from Universidad de los Andes. Her research focuses on interpersonal and organizational antecedents and consequences of job stress and burnout.
  • Martin Kilduff is Professor and Director of Research at UCL School of Management. He has a PhD from Cornell University. His research focuses on interpersonal social networks in organizations.

RESEARCH ASST I (Student/Work Study) - BIO

The research assistant will work with Dr. Rebecca Tonietto on a greenhouse study.

Responsibilities*

  • Assist with mesocosm establishment
  • Watering and rotating mesocosms in the greenhouse
  • Assist with the simulator experiments

Required Qualifications*

  • Must be a current University of Michigan-Flint undergraduate student in good academic standing or recently graduated from a UM-Flint undergraduate program and not currently enrolled in a graduate program
  • Must have an interest in outdoor field work and working with plants
  • Must have successfully completed Ecology (BIO 327)

Desired Qualifications*

Experience with plant identification preferred.

Work Schedule

The appointment will be for the summer 2024 semester.

Additional Information

University of Michigan-Flint - Plan for Diversity, Equity and Inclusion

The University of Michigan-Flint's DEI plan can be found at: https://www.umflint.edu/dei/?  

The University of Michigan-Flint exhibits its commitment to diversity, equity, and inclusion through enacting fair practices, policies, and procedures particularly in support of the equitable participation of the historically underserved. UM-Flint recognizes the value of diversity in our efforts to provide equitable access and opportunities to all regardless of individual identities in support of a climate where everyone feels a sense of belonging, community, and agency.

Diversity is a core value at University of Michigan-Flint. We are passionate about building and sustaining an inclusive and equitable working and learning environment for all students, staff, and faculty. The University of Michigan-Flint seeks to recruit and retain a diverse workforce as a reflection of our commitment to serve the diverse people of Michigan, to maintain the excellence of the University, and to offer our students richly varied disciplines, perspectives, and ways of knowing and learning for the purpose of becoming global citizens in a connected world.

Background Screening

The University of Michigan conducts background checks on all job candidates upon acceptance of a contingent offer and may use a third party administrator to conduct background checks.  Background checks are performed in compliance with the Fair Credit Reporting Act.

Application Deadline

Job openings are posted for a minimum of three calendar days.  The review and selection process may begin as early as the fourth day after posting. This opening may be removed from posting boards and filled anytime after the minimum posting period has ended.

U-M EEO/AA Statement

The University of Michigan is an equal opportunity/affirmative action employer.
