Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used to determine the level at which student work meets expectations. Markers of quality give students a clear idea of what must be done to demonstrate a certain level of mastery, understanding, or proficiency (e.g., "Exceeds Expectations" does x, y, and z; "Meets Expectations" does only x and y or y and z; "Developing" does only x, y, or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-review of student work.

Rubrics aren't just for summative evaluation; they can be used as a teaching tool as well. When used as part of a formative assessment, they can help students understand what learning is expected (whether described holistically or by specific criteria), the level of learning expected, and how their current work compares, informing revision and improvement (Reddy & Andrade, 2010).

Why use rubrics?

Rubrics help instructors:

Provide students with feedback that is clear, directed and focused on ways to improve learning.

Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."

Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

Focus their efforts on completing assignments in line with clearly set expectations.

Reflect on their own work and their peers' work, making informed changes to achieve the desired level of learning.

Developing a Rubric

During the process of developing a rubric, instructors might:

Select an assignment for your course - ideally one you identify as time-intensive to grade, or one that students report as having unclear expectations.

Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.

Identify the markers of quality on which you feel comfortable evaluating students’ level of learning - often along with a numerical scale (e.g., "Accomplished," "Emerging," "Beginning" for a developmental approach).

Give students the rubric ahead of time. Advise them to use it in guiding their completion of the assignment.

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 

Sample Rubrics

Examples of holistic and analytic rubrics: see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types: see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

“VALUE Rubrics”: see the Association of American Colleges and Universities’ set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy.

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13–18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice Hall.


7 Steps for How to Write an Evaluation Essay (Example & Template)

In this ultimate guide, I will explain to you exactly how to write an evaluation essay.

1. What is an Evaluation Essay?

An evaluation essay should provide a critical analysis of something.

You’re literally ‘evaluating’ the thing you’re looking at.

Here are a couple of quick definitions of what we mean by ‘evaluate’:

  • Merriam-Webster defines evaluation as: “to determine the significance, worth, or condition of usually by careful appraisal and study”
  • Collins Dictionary says: “If you evaluate something or someone, you consider them in order to make a judgment about them, for example about how good or bad they are.”

Common synonyms for ‘evaluate’ include assess, appraise, judge, critique, and weigh up.

So, we could say that an evaluation essay should carefully examine the ‘thing’ and provide an overall judgement of it.

Common things you may be asked to write an evaluation essay on include books, films, products, restaurants, services, and theories.

This is by no means an exhaustive list. Really, you can evaluate just about anything!


2. How to write an Evaluation Essay

There are two secrets to writing a strong evaluation essay. The first is to aim for objective analysis before forming an opinion. The second is to use evaluation criteria.

Aim to Appear Objective before giving an Evaluation Argument

Your evaluation will eventually need an argument.

The evaluation argument will show your reader what you have decided is the final value of the ‘thing’ you’re evaluating.

But in order to convince your reader that your evaluative argument is sound, you need to do some leg work.

The aim will be to show that you have provided a balanced and fair assessment before coming to your conclusion.

In order to appear balanced you should:

  • Discuss both the pros and cons of the thing
  • Discuss both the strengths and weaknesses of the thing
  • Look at the thing from multiple different perspectives
  • Be both positive and critical. Don’t make it look like you’re biased towards one perspective.

In other words, give every perspective a fair hearing.

You don’t want to sound like a propagandist. You want to be seen as a fair and balanced adjudicator.

Use Evaluation Criteria

One way to appear balanced is to use evaluation criteria.

Evaluation criteria help to show that you have assessed the ‘thing’ based on an objective measure.

Here are some examples of evaluation criteria for different subjects:

  • Strength under pressure
  • Longevity (ability to survive for a long time)
  • Ease of use
  • Ability to get the job done
  • Friendliness
  • Punctuality
  • Ability to predict my needs
  • Calmness under pressure
  • Attentiveness

A Bed and Breakfast

  • Breakfast options
  • Taste of food
  • Comfort of bed
  • Local attractions
  • Service from owner
  • Cleanliness

We can use evaluation criteria to frame our ability to conduct the analysis fairly.

This is especially true if you have to evaluate multiple different ‘things’. For example, if you’re evaluating three novels, you want to be able to show that you applied the same ‘test’ to all three books!

This will show that you gave each ‘thing’ a fair chance and looked at the same elements for each.

3. How to come up with an Evaluation Argument

After you have:

  • Looked at both good and bad elements of the ‘thing’, and
  • Used evaluation criteria

You’ll then need to develop an evaluative argument. This argument shows your own overall perspective on the ‘thing’.

Remember, you will need to show your final evaluative argument is backed by objective analysis. You need to do it in order!

Analyze first. Evaluate second.

Here’s an example.

Let’s say you’re evaluating the quality of a meal.

You might say:

  • A strength of the meal was its presentation. It was well presented and looked enticing to eat.
  • A weakness of the meal was that it was overcooked. This decreased its flavor.
  • The meal was given a low rating on ‘cost’ because it was more expensive than comparable meals on the menu.
  • The meal was given a high rating on ‘creativity’. It was a meal that involved a thoughtful and inventive mix of ingredients.

Now that you’ve looked at some pros and cons and measured the meal based on a few criteria points (like cost and creativity), you’ll be able to come up with a final argument:

  • Overall, the meal was good enough for a middle-tier restaurant but would not be considered a high-class meal. There is a lot of room for improvement if the chef wants to win any local cooking awards.

Evaluative terms that you might want to use for this final evaluation argument might include:

  • All things considered
  • With all key points in mind

4. Evaluation Essay Outline (with Examples)

Okay, so now you know what to do, let’s have a go at creating an outline for your evaluation essay!

Here’s what I recommend:

4.1 How to Write your Introduction

In the introduction, feel free to use my 5-Step INTRO method. It’ll be an introduction just like any other essay introduction.

And yes, feel free to explain what the final evaluation will be.

So, here it is laid out nice and simple.

Write one sentence for each point to make a 5-sentence introduction:

  • Interest: Make a statement about the ‘thing’ you’re evaluating that you think will be of interest to the reader. Make it a catchy, engaging point that draws the reader in!
  • Notify: Notify the reader of any background info on the thing you’re evaluating. This is your chance to show your depth of knowledge. What is a historical fact about the ‘thing’?
  • Translate: Re-state the essay question. For an evaluative essay, you can re-state it as something like: “This essay evaluates the book/product/article/etc. by looking at its strengths and weaknesses and compares it against a set of marking criteria”.
  • Report: Say what your final evaluation will be. For example you can say “While there are some weaknesses in this book, overall this evaluative essay will show that it helps progress knowledge about Dinosaurs.”
  • Outline: Simply give a clear overview of what will be discussed. For example, you can say: “Firstly, the essay will evaluate the product based on an objective criteria. This criteria will include its value for money, fit for purpose and ease of use. Next, the essay will show the main strengths and weaknesses of the product. Lastly, the essay will provide a final evaluative statement about the product’s overall value and worth.”

If you want more depth on how to use the INTRO method, you’ll need to go and check out our blog post on writing quality introductions.

4.2 Example Introduction

This example introduction is for the essay question: Write an Evaluation Essay on Facebook’s Impact on Society.

“Facebook is the third most visited website in the world. It was founded in 2004 by Mark Zuckerberg in his college dorm. This essay evaluates the impact of Facebook on society and makes an objective judgement on its value. The essay will argue that Facebook has changed the world both for the better and worse. Firstly, it will give an overview of what Facebook is and its history. Then, it will examine Facebook on the criteria of: impact on social interactions, impact on the media landscape, and impact on politics.”

You’ll notice that each sentence in this introduction follows my 5-Step INTRO formula to create a clear, coherent 5-Step introduction.

4.3 How to Write your Body Paragraphs

The first body paragraph should give an overview of the ‘thing’ being evaluated.

Then, you should evaluate the pros and cons of the ‘thing’ being evaluated based upon the criteria you have developed for evaluating it.

Let’s take a look below.

4.4 First Body Paragraph: Overview of your Subject

This first paragraph should provide an objective overview of your subject’s properties and history. You should not be doing any evaluating just yet.

The goal for this first paragraph is to ensure your reader knows what it is you’re evaluating. Secondarily, it should show your marker that you have developed some good knowledge about it.

If you need to use more than one paragraph to give an overview of the subject, that’s fine.

Similarly, if your essay word length needs to be quite long, feel free to spend several paragraphs exploring the subject’s background and objective details to show off your depth of knowledge for the marker.

4.5 First Body Paragraph Example

Sticking with the essay question: Write an Evaluation Essay on Facebook’s Impact on Society, this might be your paragraph:

“Facebook has been one of the most successful websites of all time. It is the website that dominated the ‘Web 2.0’ revolution, which was characterized by two-way user interaction with the web. Facebook allowed users to create their own personal profiles and invite their friends to follow along. Since 2004, Facebook has attracted more than one billion people to create profiles in order to share their opinions and keep in touch with their friends.”

Notice here that I haven’t yet made any evaluations of Facebook’s merits?

This first paragraph (or, if need be, several of them) should be all about showing the reader exactly what your subject is – no more, no less.

4.6 Evaluation Paragraphs: Second, Third, Fourth and Fifth Body Paragraphs

Once you’re confident your reader will know what the subject that you’re evaluating is, you’ll need to move on to the actual evaluation.

For this step, you’ll need to dig up the evaluation criteria we talked about in Point 2.

For example, let’s say you’re evaluating a President of the United States.

Your evaluation criteria might be:

  • Impact on world history
  • Ability to pass legislation
  • Popularity with voters
  • Morals and ethics
  • Ability to change lives for the better

Really, you could make up any evaluation criteria you want!

Once you’ve made up the evaluation criteria, you’ve got your evaluation paragraph ideas!

Simply turn each point in your evaluation criteria into a full paragraph.

How do you do this?

Well, start with a topic sentence.

For the criteria point ‘Impact on world history’ you can say something like: “Barack Obama’s impact on world history is mixed.”

This topic sentence will show that you’ll evaluate both pros and cons of Obama’s impact on world history in the paragraph.

Then, follow it up with explanations.

“While Obama campaigned to withdraw troops from Iraq and Afghanistan, he was unable to completely achieve this objective. This is an obvious negative for his impact on the world. However, as the first black man to lead the most powerful nation on earth, he will forever be remembered as a living milestone for civil rights and progress.”

Keep going, turning each evaluation criteria into a full paragraph.

4.7 Evaluation Paragraph Example

Let’s go back to our essay question: Write an Evaluation Essay on Facebook’s Impact on Society.

I’ve decided to use the evaluation criteria below:

  • impact on social interactions;
  • impact on the media landscape;
  • impact on politics

Naturally, I’m going to write one paragraph for each point.

If you’re expected to write a longer piece, you could write two paragraphs on each point (one for pros and one for cons).

Here’s what my first evaluation paragraph might look like:

“Facebook has had a profound impact on social interactions. It has helped people to stay in touch with one another from long distances and after they have left school and college. This is obviously a great positive. However, it can also be seen as having a negative impact. For example, people may be less likely to interact face-to-face because they are ‘hanging out’ online instead. This can have negative impact on genuine one-to-one relationships.”

You might notice that this paragraph has a topic sentence, explanations and examples. It follows my perfect paragraph formula which you’re more than welcome to check out!

4.8 How to write your Conclusion

To conclude, you’ll need to come up with one final evaluative argument.

This evaluation argument provides an overall assessment. You can start with “Overall, Facebook has been…” and continue by saying whether, all things considered, it has had a positive or negative impact on society!

Remember, you can only come up with an overall evaluation after you’ve looked at the subject’s pros and cons based upon your evaluation criteria.

In the example below, I’m going to use my 5 C’s conclusion paragraph method . This will make sure my conclusion covers all the things a good conclusion should cover!

Like the INTRO method, the 5 C’s conclusion method should have one sentence for each point to create a 5 sentence conclusion paragraph.

The 5 C’s conclusion method is:

  • Close the loop: Return to a statement you made in the introduction.
  • Conclude: Show what your final position is.
  • Clarify: Clarify how your final position is relevant to the Essay Question.
  • Concern: Explain who should be concerned by your findings.
  • Consequences: End by noting in one final, engaging sentence why this topic is of such importance. The ‘concern’ and ‘consequences’ sentences can be combined

4.9 Concluding Argument Example Paragraph

Here’s a possible concluding argument for our essay question: Write an Evaluation Essay on Facebook’s Impact on Society.

“The introduction of this essay highlighted that Facebook has had a profound impact on society. This evaluation essay has shown that this impact has been both positive and negative. Thus, it is too soon to say whether Facebook has been an overall positive or negative for society. However, people should pay close attention to this issue because it is possible that Facebook is contributing to the undermining of truth in media and positive interpersonal relationships.”

Note here that I’ve followed the 5 C’s conclusion method for my concluding evaluative argument paragraph.

5. Evaluation Essay Example Template

Below is a template you can use for your evaluation essay, based upon the advice I gave in Section 4:

6. 23+ Good Evaluation Essay Topics

Okay now that you know how to write an evaluation essay, let’s look at a few examples.

For each example I’m going to give you an evaluation essay title idea, plus a list of criteria you might want to use in your evaluation essay.

6.1 Evaluation of Impact

  • Evaluate the impact of global warming on the great barrier reef. Recommended evaluation criteria: Level of bleaching; Impact on tourism; Economic impact; Impact on lifestyles; Impact on sealife
  • Evaluate the impact of the Global Financial Crisis on poverty. Recommended evaluation criteria: Impact on jobs; Impact on childhood poverty; Impact on mental health rates; Impact on economic growth; Impact on the wealthy; Global impact
  • Evaluate the impact of having children on your lifestyle. Recommended evaluation criteria: Impact on spare time; Impact on finances; Impact on happiness; Impact on sense of wellbeing
  • Evaluate the impact of the internet on the world. Recommended evaluation criteria: Impact on connectedness; Impact on dating; Impact on business integration; Impact on globalization; Impact on media
  • Evaluate the impact of public transportation on cities. Recommended evaluation criteria: Impact on cost of living; Impact on congestion; Impact on quality of life; Impact on health; Impact on economy
  • Evaluate the impact of universal healthcare on quality of life. Recommended evaluation criteria: Impact on reducing disease rates; Impact on the poorest in society; Impact on life expectancy; Impact on happiness
  • Evaluate the impact of getting a college degree on a person’s life. Recommended evaluation criteria: Impact on debt levels; Impact on career prospects; Impact on life perspectives; Impact on relationships

6.2 Evaluation of a Scholarly Text or Theory

  • Evaluate a Textbook. Recommended evaluation criteria: clarity of explanations; relevance to a course; value for money; practical advice; depth and detail; breadth of information
  • Evaluate a Lecture Series, Podcast or Guest Lecture. Recommended evaluation criteria: clarity of speaker; engagement of attendees; appropriateness of content; value for money
  • Evaluate a journal article. Recommended evaluation criteria: length; clarity; quality of methodology; quality of literature review; relevance of findings for real life
  • Evaluate a Famous Scientist. Recommended evaluation criteria: contribution to scientific knowledge; impact on health and prosperity of humankind; controversies and disagreements with other scientists.
  • Evaluate a Theory. Recommended evaluation criteria: contribution to knowledge; reliability or accuracy; impact on the lives of ordinary people; controversies and contradictions with other theories.

6.3 Evaluation of Art and Literature

  • Evaluate a Novel. Recommended evaluation criteria: plot complexity; moral or social value of the message; character development; relevance to modern life
  • Evaluate a Play. Recommended evaluation criteria: plot complexity; quality of acting; moral or social value of the message; character development; relevance to modern life
  • Evaluate a Film. Recommended evaluation criteria: plot complexity; quality of acting; moral or social value of the message; character development; relevance to modern life
  • Evaluate an Artwork. Recommended evaluation criteria: impact on art theory; moral or social message; complexity or quality of composition

6.4 Evaluation of a Product or Service

  • Evaluate a Hotel or Bed and Breakfast. Recommended evaluation criteria: quality of service; flexibility of check-in and check-out times; cleanliness; location; value for money; wi-fi strength; noise levels at night; quality of meals
  • Evaluate a Restaurant. Recommended evaluation criteria: quality of service; menu choices; cleanliness; atmosphere; taste; value for money.
  • Evaluate a Car. Recommended evaluation criteria: fuel efficiency; value for money; build quality; likelihood to break down; comfort.
  • Evaluate a House. Recommended evaluation criteria: value for money; build quality; roominess; location; access to public transport; quality of neighbourhood
  • Evaluate a Doctor. Recommended evaluation criteria: Quality of service; knowledge; quality of equipment; reputation; value for money.
  • Evaluate a Course. Recommended evaluation criteria: value for money; practical advice; quality of teaching; quality of resources provided.

7. Concluding Advice


Evaluation essays are common in high school, college and university.

The trick for getting good marks in an evaluation essay is to show you have looked at both the pros and cons before making a final evaluation analysis statement.

You don’t want to look biased.

That’s why it’s a good idea to use objective evaluation criteria, and to be generous in looking at both positives and negatives of your subject.


I recommend you use the evaluation template provided in this post to write your evaluation essay. However, if your teacher has given you a template, of course use theirs instead! You always want to follow your teacher’s advice because they’re the person who will be marking your work.

Good luck with your evaluation essay!

Chris

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


2 thoughts on “7 Steps for How to Write an Evaluation Essay (Example & Template)”


What an amazing article. I am returning to studying after several years and was struggling with how to present an evaluative essay. This article has simplified the process and provided me with the confidence to tackle my subject (theoretical approaches to development and management of teams).

I just wanted to ask whether the evaluation criteria has to be supported by evidence or can it just be a list of criteria that you think of yourself to objectively measure?

Many many thanks for writing this!


Usually we would want to see evidence, but ask your teacher what they’re looking for; they may allow criteria without supporting evidence, depending on the situation.


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the assignment as a whole?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback
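To make the weighting idea from the advantages list above concrete, here is a minimal sketch, in Python, of how an analytic rubric's per-criterion ratings might be combined into a single weighted score. The criterion names, weights, level labels, and sample ratings are all invented for illustration; this is not a prescribed implementation.

```python
# Minimal sketch of analytic-rubric scoring with weighted criteria.
# The criterion names, weights, level labels, and sample ratings are invented.

WEIGHTS = {  # each criterion's relative importance; the weights sum to 1.0
    "Clarity of argument": 0.4,
    "Use of evidence": 0.4,
    "Mechanics and style": 0.2,
}

LABELS = {4: "Exceeds expectations", 3: "Meets expectations",
          2: "Developing", 1: "Beginning"}

def weighted_score(ratings):
    """Combine per-criterion ratings (1-4) into a single weighted score out of 4."""
    return sum(WEIGHTS[criterion] * level for criterion, level in ratings.items())

ratings = {"Clarity of argument": 4, "Use of evidence": 3, "Mechanics and style": 2}
for criterion, level in ratings.items():
    print(f"- {criterion}: {LABELS[level]}")
print(f"Weighted score: {weighted_score(ratings):.1f} / 4.0")
```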

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work from instructors when writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google “rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Evaluate your draft criteria by asking:
  • Can they be observed and measured?
  • Are they important and essential?
  • Are they distinct from other criteria?
  • Are they phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some are more important than others, and how you will weight them.

Step 5: Design the rating scale

Most ratings scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven to be useful for creating a rubric. You will want to engineer the prompt you provide the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work). You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs absence, complete vs incomplete, many vs none, major vs minor, consistent vs inconsistent, always vs never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

Best Practices

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Example of a single-point rubric

More examples:

  • Single Point Rubric Template ( variation )
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. Sanger & N. Gleason (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

Principles of Assessing Student Writing

Grading and giving feedback are deeply linked to student educational outcomes. In an online environment, it is especially important for you to offer thoughtful, substantive feedback to your students on their writing, to help them understand where they are communicating their ideas successfully and where they can continue to develop. In a remote course, responding to students in a clear, engaged, and specific manner provides an opportunity to connect with your students and to support their learning both within your online course and beyond.

Assessing writing can and should complement your pedagogy and curriculum. We suggest that you create a plan to assess student writing that promotes transparency, accessibility, and inclusive pedagogy. This requires some advance preparation and careful thought. Writing scholar John Bean writes that “Because we teachers have little opportunity to discuss grading practice with colleagues, we often develop criteria that seem universal to us but may appear idiosyncratic or even eccentric to others.” Criteria for student success should be fair, consistent, public, and clear.

Your feedback is where students can see and/or hear you engaging with their ideas and acknowledging their labor; it’s also where students feel most vulnerable and where you might feel pressed for time or frustrated at students’ missteps. It can help to remember the purpose of commenting on student assignments: to coach revision and growth (in the present piece and in future work for your course or program). Using audio feedback or screencast feedback can be a great way to articulate these priorities.

You should also be mindful that writing assignments are not a neutral component to students’ experiences in your class. Along with course syllabi and policies, assignments comprise a significant component of larger “ecologies” of assessment – that is, systems of judging student learning and performance (Inoue 2015). The ecology of your course shapes not only what but how students learn, and it does so in ways that can be either inclusionary or exclusionary.  Inclusive assessment asks instructors to think about assessment as a way to support student development in writing, rather than for the purposes of gatekeeping a discipline or profession from “poor performers.”

Other characteristics of inclusive assessment include transparency (the TILT framework foregrounds equity in education); flexibility; alignment among an assignment’s learning goals, its central task, and its evaluation criteria; and linguistic justice (which recognizes that student performance of “standard” English is not a measure of intelligence or effort).

The WAC program’s principles of inclusive assessment are:

To make your expectations clear, be sure to identify how you will be assessing student writing, and how that assessment fits with your course’s learning objectives.

Beyond a few basics, what makes for effective writing will vary depending on the learning goals for the assignment, the genre of the paper, the subject matter, the specific tasks, the discipline, and the level of the course. It is crucial to develop criteria that match the specific learning goals and the genre of your assignment. What’s valued in one discipline differs in others. 

In addition to sharing your evaluation criteria, spend time in your online class discussing the kinds of feedback you’re giving, and give students the opportunity to ask questions about your responses.

For an example of a writing assignment that ties evaluation to learning objectives, check out Professor Jennifer Gipson’s French 248 assignment.

Whether on a rough draft or a final draft, offer specific, actionable feedback to students with suggestions for improvement that emphasize “global concerns” such as ideas, argument, and organization over “local concerns” such as sentence-level error. In a draft where students will have the opportunity to revise their work, this feedback will likely be more substantial than in a final draft.

Research shows that students are often confused by what we want them to concentrate on in their writing and in their revisions. Our comments on their writing too often lead students to make only superficial revisions to words and sentences, overlooking larger structural revisions that would most improve a paper. So as we design writing assignments, develop evaluation criteria, and comment on and evaluate our students’ final papers, we need to find ways to communicate clearly with students about different levels of revision and about priorities for their writing and revising.

We can help signal priorities if we clearly differentiate between global and local writing concerns . In our assignments, comments, conferences, and evaluation criteria, we can help students by focusing first on conceptual- and structural-level planning and revisions before grammatical- and lexical-level revisions. By no means are we advocating that we ignore language problems in our students’ writing. But we want to offer students clear guidance about how to strengthen their ideas, their analyses, and their arguments, so that students have papers worth editing and polishing. Then we can turn our attention—and our students’—to improving sentences, words, and punctuation. When we respond to their ideas, we signal to students that we care about their development as writers.

To see sample online feedback on a student paper in Sociology, check out this resource.

For more support on global and local concerns, check out the WAC program’s resource, Global and Local Concerns in Student Writing .

A rubric is a criterion-based evaluation guide that communicates an instructor’s expectations for student performance in an assignment, specifically through the identification and description of evaluation criteria. Our resource on Principles of Rubric Design is an excellent guide to draw from.

For more information about the limits of broad, general evaluation rubrics, see Anson et al., “Big Rubrics and Weird Genres: The Futility of Using Generic Assessment Tools Across Diverse Instructional Contexts,” The Journal of Writing Assessment 5.1 (2012).

Acknowledge and support  (rather than penalize) the range of languages, dialects, and rhetorics used by students for whom white mainstream English (often called “standard” English) is not their accustomed language.

For ideas to support your students’ diverse languages or dialects, here are a few resources:

  • Conference on College Composition and Communication Statement on Anti-Black Racism and Black Linguistic Justice
  • Conference on College Composition and Communication’s Students’ Right to Their Own Language
  • The UW Writing Center, “Translingualism: An Alternative to Restrictive Monolingual Ideologies in Writing Instruction”

When you take time to provide feedback, it is worth taking the additional step of creating an activity or assignment that asks students to review and reflect on your feedback with the goal of identifying priorities for their attention and improvement on future assignments. For example, students can submit short learning journal entries or individual assignments reviewing their strengths, areas for improvement, and plans for their next assignment or draft.

Alternative Ways to Assess Student Writing

Recent conversations in the field of Writing Studies have identified how traditional writing assessment can lead to an overemphasis on letter grades and an underemphasis on feedback and student development. In our current difficult learning and teaching environment, it can be challenging to imagine and implement new assessment practices; at the same time, these practices can allow you to connect more deeply with students who are learning remotely.

We hope to provide more robust resources on alternative assessment practices in the near future. In the meantime, we offer links below to two types of alternative assessment.

Specifications Grading

A version of competency-based assessment that uses pass/fail grading paired with feedback and revision. Individual assignments must meet stated specifications in order to receive credit. There is no partial credit or stepped-down grades (A, AB, B, etc.), but students are provided feedback as well as options for revising or dropping assignments that did not meet specs. See Chapters 5, 6, and 8 of Linda B. Nilson’s Specifications Grading for more information, or access the full book online.

  • Linda Nilson in Inside Higher Ed , “Yes, Virginia, There’s a Better Way to Grade”
  • Humboldt State Center for Teaching and Learning, “Empowering Students Through Specs Grading”
  • Johns Hopkins University, “What is Specifications Grading and Why Should You Consider Using It?”

Grading Contracts

A determination of students’ course grade based on cumulative labor or effort. Individual assignments receive feedback but no grade, and negotiation between instructor and individual students is encouraged. Contracts have been documented to work best in democratic classrooms where they are built into the classroom culture.

  •  Asao Inoue, Labor-Based Grading Contracts: Building Equity and Inclusion in the Compassionate Writing Classroom   (free e-book)
  • Asao Inoue, Grading Contract for First-Year Writing course
  • Virginia Schwarz, Grading Contract resources

Creating rubrics for effective assessment management


How this will help:

Regardless of whether your course is online or face to face, you will need to provide feedback to your students on their strengths and areas for growth. Rubrics are one way to simplify the process of providing feedback and consistent grades to your students.

What are rubrics?

Rubrics are “scoring sheets” for learning tasks. There are multiple flavors of rubrics, but they all articulate two key variables for scoring how successful the learner has been in completing a specific task: the criteria for evaluation and the levels of performance. While you may have used rubrics in your face-to-face class, rubrics become essential when teaching online. Rubrics will not only save you time (a lot of time) when grading assignments, but they also help clarify expectations about how you are assessing students and why they received a particular grade. It also makes grading feel more objective to students (“I see what I did wrong here”), rather than subjective (“The teacher doesn’t like me and that’s why I got this grade.”). 

When designing a rubric, ideally, the criteria for evaluation should be aligned with the learning objectives of the task. For example, if an instructor asks their learners to create an annotated bibliography for a research assignment, we can imagine that the instructor wants to give the students practice with identifying valid sources on their research topic, citing sources correctly (using the appropriate format), and summarizing sources appropriately. The criteria for evaluation in a rubric for that task might be:

  • Quality of sources
  • Accuracy of citation format for each source type
  • Coherence of summaries
  • Accuracy of summaries

The levels of performance don’t necessarily have a scale they must align with. Some rubric types might use a typical letter grading scale for their levels – these rubrics often include language like “An A-level response will….” Other rubric types have very few levels of performance; sometimes they are as simple as a binary scale – complete or incomplete (a checklist is an example of this kind of rubric). How an instructor thinks about the levels of performance in a rubric is going to depend on a number of factors, including their own personal preferences and approaches to evaluating student work, and on how the task is being used in the learning experience. If a task is not going to contribute to the final grade for the course, it might not be necessary (or make sense) to provide many fine-grained levels of performance. On the other hand, an assignment that is designed to provide detailed information to the instructor as to how proficient each student is at a set of skills might need many, highly specific levels of performance. At the end of this module, we provide examples of different types of rubrics and structures for levels of performance.
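As a rough illustration of the two key variables named earlier (the criteria for evaluation and the levels of performance), the annotated bibliography rubric above could be stored as a mapping from each criterion to a short descriptor for each level, with written feedback assembled from the levels selected for a student's work. Only the criterion names come from the example above; the level labels and descriptor wording below are invented purely for illustration.

```python
# Sketch: criteria (from the annotated-bibliography example) x levels of performance.
# Level labels and descriptor text are invented for illustration.

rubric = {
    "Quality of sources": {
        "Proficient": "Sources are credible, current, and relevant to the research topic.",
        "Developing": "Some sources are of questionable credibility or relevance.",
    },
    "Accuracy of citation format": {
        "Proficient": "Citations follow the required format with few or no errors.",
        "Developing": "Citations contain recurring formatting errors.",
    },
    "Coherence of summaries": {
        "Proficient": "Each summary is clearly written and easy to follow.",
        "Developing": "Summaries are disorganized or hard to follow.",
    },
    "Accuracy of summaries": {
        "Proficient": "Summaries faithfully represent each source's main argument.",
        "Developing": "Summaries misstate or omit key points of the sources.",
    },
}

def feedback(levels_chosen):
    """Turn the level chosen for each criterion into written feedback."""
    return "\n".join(f"{criterion}: {level}. {rubric[criterion][level]}"
                     for criterion, level in levels_chosen.items())

print(feedback({"Quality of sources": "Proficient",
                "Accuracy of citation format": "Developing",
                "Coherence of summaries": "Proficient",
                "Accuracy of summaries": "Proficient"}))
```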

What teaching goals can rubrics help meet?

In an online course, clear communication from the instructor about their expectations is critical for student success and the success of the course. Effective feedback, where it is clear to the learner what they have already mastered and where there are gaps in their knowledge or skills, is necessary for deep learning. Rubrics help an instructor clearly explain their expectations to the class as a whole while also making it easier to give individual students specific feedback on their learning.

Although one of the practical advantages to using rubrics is to make grading of submitted assignments more efficient, they can be used for many, not mutually exclusive, purposes:

  • highlighting growth of a student’s skills or knowledge over time
  • articulating to learners the important features of a high-quality submission
  • assessing student participation in discussion forums
  • guiding student self-assessments 
  • guiding student peer-reviews
  • providing feedback on ungraded or practice assignments to help students identify where they need to focus their learning efforts.

Examples of different rubrics

Different styles of rubrics are better fits for different task types and for fulfilling the different teaching aims of a rubric. Here we focus on five different styles with varying levels of complexity: single-point rubrics, specific task rubrics, general rubrics, holistic rubrics, and analytic rubrics (Arter & Chappuis, 2007).

Single point rubric

Sometimes, simple is easiest. A single point rubric can tell students whether they met the expectations of the criteria or not. We’d generally recommend not using too many criteria with single point rubrics; they aren’t meant for complicated evaluation. They are great for short assignments like discussion posts.

Example task: Write a 250-word discussion post reflecting on the purpose of this week’s readings. (20 points)

Example rubric:

[Image: single point rubric example]
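In practice, a single-point rubric like this amounts to a checklist: one "meets expectations" descriptor per criterion, a met/not-met mark, and room for a comment. The sketch below, with invented criteria, descriptors, and an even point split, is only an illustration of how such a rubric might be recorded for the discussion-post task.

```python
# Sketch of a single-point rubric as a checklist for the discussion-post task.
# Criteria, descriptors, and the even point split are invented for illustration.

criteria = {
    "Addresses the prompt": "Post reflects on the purpose of this week's readings.",
    "Length": "Post is roughly 250 words.",
    "Engagement with readings": "Post refers to specific ideas from the readings.",
}

def grade_post(marks, total_points=20):
    """Report met/not-met per criterion and split points evenly across criteria."""
    per_criterion = total_points / len(criteria)
    earned = 0.0
    for criterion, (met, comment) in marks.items():
        status = "met" if met else "not yet met"
        print(f"- {criterion} ({status}): {comment or criteria[criterion]}")
        if met:
            earned += per_criterion
    print(f"Score: {earned:.1f} / {total_points}")

grade_post({
    "Addresses the prompt": (True, ""),
    "Length": (True, ""),
    "Engagement with readings": (False, "Name at least one specific idea from the readings."),
})
```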

Specific task rubric

This style of rubric is useful for articulating the knowledge and skill objectives (and their respective levels) of a specific assignment.

Example task:

Design and build a trebuchet that is adjustable to launch a 

  • 5g weight a distance of 0.5m
  • 7g weight a distance of 0.5m
  • 10g weight a distance of 0.75m

[Image: specific task rubric example]

Holistic rubric

This style of rubric enables a single, overall assessment/evaluation of a learner’s performance on a task

Write a historical research paper discussing ….

( Adapted from http://jfmueller.faculty.noctrl.edu/toolbox/rubrics.htm#versus )

[Image: holistic rubric example]

General rubric

This style of rubric can be used for multiple, similar assignments to show growth (what has been achieved and what opportunities remain) over time.

Write a blog post appropriate for a specific audience exploring the themes of the reading for this week.

(Adapted from http://www.chronicle.com/blogs/profhacker/a-rubric-for-evaluating-student-blogs/27196 )

[Image: general rubric example]

Analytic rubric

This style of rubric is well suited to breaking apart a complex task into component skills and allows for evaluation of those components. It can also help determine the grade for the whole assignment based on performance on the component skills. This style of rubric can look similar to a general rubric but includes detailed grading information.

( Adapted from http://www.chronicle.com/blogs/profhacker/a-rubric-for-evaluating-student-blogs/27196 )

[Image: analytic rubric example]

Designing your own rubric

You can approach designing a rubric from multiple angles. Here we outline just one possible procedure to get started. This approach assumes the learning task is graded, but it can be generalized for other structures for levels of performance. 

  • Start with the “I know it when I see it” principle. Most instructors have a sense of what makes a reasonable response to a task, even if they haven’t explicitly named those traits before. Write out as many traits of a “meets expectations” response as you can come up with – these will be your first draft of the criteria for learning.
  • For each type of criterion, describe what an “A” response looks like. This will be your top level of performance.
  • For complicated projects, consider moving systematically down each whole-grade level (B, C, D, F), describing, in terms parallel to how you described the best response, what student responses at that level often look like. Or, for simpler assignments, create very simple rubrics – either the criterion was achieved or not. Rubrics do not have to be complicated!
  • Share the rubric with a colleague to get feedback or “play test” the rubric using past student work if possible. 
  • After grading some student responses with it, you may be tempted to fine-tune some details. However, this is not recommended. For one, Canvas will not allow you to change a rubric once it has been used for grading. It is also not good practice to change the metrics of grading after students have already been using a rubric to work from. If you find that your rubric is grading students too harshly on a particular criterion, note it and adjust the rubric for the next iteration of the task or course. Also, make sure you track any other changes you want to make so you can update your future course rubrics.

Practical Tips

  • Creating learning objectives for each task, as you design the task, helps to ensure there is alignment between your learning activities and assessments and your course level learning objectives. It also gives a head start for the design of the rubric.
  • When creating a rubric, start with just a few levels of performance. It is easier to expand a rubric to include more specificity in the levels of performance than it is to shrink the number of levels. Smaller rubrics are much easier for the instructor to navigate to provide feedback.
  • Using a rubric will (likely) not eliminate the need for qualitative feedback to each student, but keeping a document of commonly used responses to students that you can copy and paste from can make the feedback process even more efficient.
  • Explicitly have students self-assess their task prior to submitting it. For example, when students submit a paper online, have them include a short (100 words or fewer) reflection on what they think they did well on the paper and what they struggled with. That step seems obvious to experts (i.e., instructors) but isn’t obvious to all learners. If students make a habit of this, they will often end up with higher grades because they catch their mistakes before they submit their response(s).
  • Canvas and other learning management systems (LMS) have tools that allow you to create point and click rubrics. You can choose to have the tools automatically enter grades into the LMS grade book.
  • Rubrics can be used for students to self-evaluate their own performance or to provide feedback to peers.

University of Michigan

CRLT – Sample lab rubrics

Cult of Pedagogy – The single point rubric

Other Resources

The Chronicle of Higher Ed – A rubric for evaluating student blogs

Canvas – Creating a rubric in Canvas

Jon Mueller – Authentic assessment toolkit

Arter, J. A., & Chappuis, J. (2007). Creating & recognizing quality rubrics. Upper Saddle River, NJ: Pearson Education.

Gilbert, P. K., & Dabbagh, N. (2004). How to structure online discussions for meaningful discourse: a case study. British Journal of Educational Technology , 36 (1), 5–18. doi: 10.1111/j.1467-8535.2005.00434.x

Wyss, V. L., Freedman, D., & Siebert, C. J. (2014). The Development of a Discussion Rubric for Online Courses: Standardizing Expectations of Graduate Students in Online Scholarly Discussions. TechTrends , 58 (2), 99–107. doi: 10.1007/s11528-014-0741-x


Academic Evaluations

In our daily lives, we are continually evaluating objects, people, and ideas in our immediate environments. We pass judgments in conversation, while reading, while shopping, while eating, and while watching television or movies, often being unaware that we are doing so. Evaluation is an equally fundamental writing process, and writing assignments frequently ask us to make and defend value judgments.

Evaluation is an important step in almost any writing process, since we are constantly making value judgments as we write. When we write an "academic evaluation," however, this type of value judgment is the focus of our writing.

A Definition of Evaluation

Kate Kiefer, English Professor: Like most specific assignments that teachers give, writing evaluations mirrors what happens so often in our day-to-day lives. Every day we decide whether the temperature is cold enough to need a light or heavy jacket; whether we're willing to spend money on a good book or a good movie; whether the prices at the grocery store tell us to keep shopping at the same place or somewhere else for a better value. Academic tasks rely on evaluation just as often. Is a source reliable? Does an argument convince? Is the article worth reading? So writing evaluation helps students make this often unconscious daily task more overt and prepares them to examine ideas, facts, arguments, and so on more critically.

To evaluate is to assess or appraise. Evaluation is the process of examining a subject and rating it based on its important features. We determine how much or how little we value something, arriving at our judgment on the basis of criteria that we can define.

We evaluate when we write primarily because it is almost impossible to avoid doing so. If right now you were asked to write for five minutes on any subject and were asked to keep your writing completely value-free, you would probably find such an assignment difficult. Readers come to evaluative writing in part because they seek the opinions of other people for one reason or another.

Uses for Evaluation

Consider a time recently when you decided to watch a movie. There were at least two kinds of evaluation available to you through the media: the rating system and critical reviews.

Newspapers and magazines, radio and TV programs all provide critical evaluations for their readers and viewers. Many movie-goers consult more than one media reviewer to adjust for bias. Most movie-goers also consider the rating system, especially if they are deciding to take children to a movie. In addition, most people will also ask for recommendations from friends who have already seen the movie.

Whether professional or personal, judgments like these are based on the process of evaluation. The terminology associated with the elements of this process--criteria, evidence, and judgment--might seem alien to you, but you have undoubtedly used these elements almost every time you have expressed an opinion on something.

Types of Written Evaluation

Quite a few of the assignments writers are given at the university and in the workplace involve the process of evaluation.

One type of written evaluation that most people are familiar with is the review. Reviewers will attend performances, events, or places (like restaurants, movies, or concerts), basing their evaluations on their observations. Reviewers typically use a particular set of criteria they establish for themselves, and their reviews most often appear in newspapers and magazines.

Critical Writing

Reviews are a type of critical writing, but there are other types of critical writing which focus on objects (like works of art or literature) rather than on events and performances. Literary criticism, for instance, is a way of establishing the worth or literary merit of a text on the basis of certain established criteria. When we write about literary texts, we do so using one of many critical "lenses," viewing the text as it addresses matters like form, culture, historical context, gender, and class (to name a few). Deciding whether a text is "good" or "bad" is a matter of establishing which "lens" you are viewing that text through, and using the appropriate set of criteria to do so. For example, we might say that a poem by an obscure Nineteenth Century African American poet is not "good" or "useful" in terms of formal characteristics like rhyme, meter, or diction, but we might judge that same text as "good" or "useful" in terms of the way it addresses cultural and political issues historically.

Response Essays

One very common type of academic writing is the response essay. In many different disciplines, we are asked to respond to something that we read or observe. Some types of response, like the interpretive response, simply ask us to explain a text. However, there are other types of response (like agree/disagree and analytical response) which demand that we make some sort of judgment based on careful consideration of the text, object, or event in question.

Problem Solving Essays

In writing assignments which focus on issues, policies, or phenomena, we are often asked to propose possible solutions for identifiable problems. This type of essay requires evaluation on two levels. First of all, it demands that we use evaluation in order to determine that there is a legitimate problem. And secondly, it demands that we take more than one policy or solution into consideration to determine which will be the most feasible, viable, or effective one, given that problem.

Arguing Essays

Written argument is a type of evaluative writing, particularly when it focuses on a claim of value (like "The death penalty is cruel and ineffective") or policy claim (like "Oakland's Ebonics program is an effective way of addressing standard English deficiencies among African American students in public schools"). In written argument, we advance a claim like one of the above, then support this claim with solid reasons and evidence.

Process Analysis

In scientific or investigative writing, in which experiments are conducted and processes or phenomena are observed or studied, evaluation plays a part in the writer's discussion of findings. Often, these findings need to be both interpreted and analyzed by way of criteria established by the writer.

Source Evaluation

Although not a form of written evaluation in and of itself, source evaluation is a process that is involved in many other types of academic writing, like argument, investigative and scientific writing, and research papers. When we conduct research, we quickly learn that not every source is a good source and that we need to be selective about the quality of the evidence we transplant into our own writing.

Relevance to the Topic

When you conduct research, you naturally look for sources that are relevant to your topic. However, writers also often fall prey to the tendency to accept sources that are just relevant enough. For example, if you were writing an essay on Internet censorship, you might find that your research yielded quite a few sources on music censorship, art censorship, or censorship in general. Though these sources could possibly be marginally useful in an essay on Internet censorship, you will probably want to find more directly relevant sources to serve a more central role in your essay.

Perspective on the Topic

Another point to consider is that even though you want sources relevant to your topic, you might not necessarily want an exclusive collection of sources which agree with your own perspective on that topic. For example, if you are writing an essay on Internet censorship from an anti-censorship perspective, you will want to include in your research sources which also address the pro-censorship side. In this way, your essay will be able to fully address perspectives other than (and sometimes in opposition to) your own.

Credibility

One of the questions you want to ask yourself when you consider using a source is "How credible will my audience consider this source to be?" You will want to ask this question not only of the source itself (the book, journal, magazine, newspaper, home page, etc.) but also of the author. To use an extreme example, for most academic writing assignments you would probably want to steer clear of using a source like the National Enquirer or like your eight-year-old brother, even though we could imagine certain writing situations in which such sources would be entirely appropriate. The key to determining the credibility of a source/author is to decide not only whether you think the source is reliable, but also whether your audience will find it so, given the purpose of your writing.

Currency of Publication

Unless you are doing research with an historical emphasis, you will generally want to choose sources which have been published recently. Sometimes research and statistics maintain their authority for a very long time, but the more common trend in most fields is that the more recent a study is, the more comprehensive and accurate it is.

Accessibility

When sorting through research, it is best to select sources that are readable and accessible both for you and for your intended audience. If a piece of writing is laden with incomprehensible jargon and incoherent structure or style, you will want to think twice about directing it toward an audience unfamiliar with that type of jargon, structure, or style. In short, it is a good rule of thumb to avoid using any source which you yourself do not understand and are not able to interpret for your audience.

Quality of Writing

When choosing sources, consider the quality of writing in the texts themselves. It is possible to paraphrase from sources that are sloppily written, but quoting from such a source would serve only to diminish your own credibility in the eyes of your audience.

Understanding of Biases

Few sources are truly objective or unbiased. Trying to eliminate bias from your sources will be nearly impossible, but all writers can try to understand and recognize the biases of their sources. For instance, if you were doing a comparative study of 1/2-ton pickup trucks on the market, you might consult the Ford home page. However, you would also need to be aware that this source would have some very definite biases. Likewise, it would not be unreasonable to use an article from Catholic World in an anti-abortion argument, but you would want to understand how your audience would be likely to view that source. Although there is no fail-proof way to determine the bias of a particular journal or newspaper, you can normally sleuth this out by looking at the language in the article itself or in the surrounding articles.

Use of Research

In evaluating a source, you will need to examine the sources that it in turn uses. Looking at the research used by the author of your source, what biases can you recognize? What are the quantity and quality of evidence and statistics included? How reliable and readable do the excerpts cited seem to be?

Considering Purpose and Audience

We typically think of "values" as being personal matters. But in our writing, as in other areas of our lives, values often become matters of public and political concern. Therefore, it is important when we evaluate to consider why we are making judgments on a subject (purpose) and who we hope to affect with our judgments (audience).

Purposes of Evaluation

Your purpose in written evaluation is not only to express your opinion or judgment about a subject, but also to convince, persuade, or otherwise influence an audience by way of that judgment. In this way, evaluation is a type of argument, in which you as a writer are attempting consciously to have an effect on your readers' ways of thinking or acting. If, for example, you are writing an evaluation in which you make a judgment that Mountain Bike A is a better buy than Mountain Bike B, you are doing more than expressing your approval of the merits of Bike A; you are attempting to convince your audience that Bike A is the better buy and, ultimately, to persuade them to buy Bike A rather than Bike B.

Effects of Audience

Kate Kiefer, English Professor: When we evaluate for ourselves, we don't usually take the time to articulate criteria and detail evidence. Our thought processes work fast enough that we often seem to make split-second decisions. Even when we spend time thinking over a decision--like which expensive toy (car, stereo, skis) to buy--we don't often lay out the criteria explicitly. We can't take that shortcut when we write to other folks, though. If we want readers to accept our judgment, then we need to be clear about the criteria we use and the evidence that helps us determine value for each criterion. After all, why should I agree with you to eat at the Outback Steak House if you care only about cost but I care about taste and safe food handling? To write an effective evaluation, you need to figure out what your readers care about and then match your criteria to their concerns. Similarly, you can overwhelm readers with too much detail when they don't have the background knowledge to care about that level of detail. Or you can ignore the expertise of your readers (at your peril) and not give enough detail. Then, as a writer, you come across as condescending, or worse. So targeting an audience is really key to successful evaluation.

In written evaluation, it is important to keep in mind not only your own system of value, but also that of your audience. Writers do not evaluate in a vacuum. Giving some thought to the audience you are attempting to influence will help you to determine what criteria are important to them and what evidence they will require in order to be convinced or persuaded by your evaluative argument. In order to evaluate effectively, it is important that you consider what motivates and concerns your audience.

Criteria and Audience Considerations

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant for an audience composed mainly of senior citizens from the Midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart-smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument. Your next step in suiting your criteria to your audience is to determine how you will explain and/or defend not only your judgments, but the criteria supporting them as well. For example, if you are arguing that a Mexican restaurant is excellent because, among other reasons, the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Evidence and Audience Considerations

The amount and type of evidence you use to support your judgments will depend largely on the demands of your audience. Common sense tells us that the more oppositional an audience is, the more evidence will be needed to convince them of the validity of a judgment. For instance, if you were writing a favorable review of La Cocina on the basis of their fiery green chile, you might not need to use a great deal of evidence for an audience of people who like spicy food but have not tried any of the Mexican restaurants in town. However, if you are addressing an audience that is deeply devoted to the green chile at Manuel's, you will need to provide a fair amount of solid evidence in order to persuade them to try another restaurant.

Parts of an Evaluation

When we evaluate, we make an overall value claim about a subject, using criteria to make judgments based on evidence. Often, we also make use of comparison and contrast as strategies for determining the relative worth of the subject we are considering. This section examines these parts of an evaluation and shows how each functions in a successful evaluation.

Overall Claim

An overall claim or judgment is an evaluator's final decision about worth. When we evaluate, we make a general statement about the worth of objects, goods, services, or solutions to problems.

An overall claim or judgment in an evaluation can be as simple as "See this movie!" or "Brand X is a better buy than the name brand." It can also be complex, particularly when the evaluator recognizes certain conditions that affect the judgment: If citizens of our community want to improve air and water quality and are willing to forego 300 additional jobs, then we should not approve the new plant Acme is hoping to build here.

Qualifications

An overall claim or judgment usually requires qualification so that it seems balanced. If judgments are weighted too much to one side, they will sometimes mar the credibility of your argument. If your overall judgment is wholly positive, your evaluation will wind up sounding like propaganda or advertisement. If it is wholly negative, you might present yourself as overly critical, unfair, or undiplomatic. An example of a qualified claim or judgment might be the following: Although La Cocina is not without its faults, it is the best Mexican restaurant in town. Qualifications are almost always positive additions to evaluative arguments, but writers must learn not to overuse them. If you make too many qualifications, your audience will be unable to determine your final position on your subject, and you will appear to be "waffling."

Example Text

Creating more parking lots is a possible solution to the horrendous traffic congestion in Taiwan's major cities. When a new building permit is issued, each building must include a certain number of spaces for parking. However, new construction takes time, and results will be seen only as new buildings are erected. This solution alone is inadequate for most of Taiwan's problem areas, which need a solution whose results will be noticed immediately.

Comment: Notice how this sentence at the end of the paragraph seems to be a formal "thesis" or "claim" which might drive the rest of the essay. Based on this claim, we would assume that the remainder of the essay will deal with the reasons why the proposed policy alone is "inadequate," and will address other possible solutions.

Supporting Judgments

In academic evaluations, the overall claim or judgment is backed up by smaller, more detailed judgments about aspects of a subject being evaluated. Supporting judgments function in the same way that "reasons" function in most arguments. They provide structure and justification for a more general claim. For example, if your overall claim or judgment in your evaluation is

"Although La Cocina is not without its faults, it is the best Mexican restaurant in town,"

one supporting judgment might be

"La Cocina's green chile is superb."

This judgment would be based on criteria you have established, and it would be supported by evidence.

Providing more parking spaces near buildings is not the only act necessary to solve Taiwan's parking problems. A combination of more parking spaces, increased fines, and lowered traffic volume may be necessary to eliminate the nightmare of driving in the cities. In fact, until laws are enforced and fines increased, no number of new parking spaces will impact the congestion seen in downtown areas.

Comment: There are arguably three supporting judgments being made here, as three possible solutions are being suggested to rectify this problem of parking in Taiwan. If we were reading these supporting judgments at the beginning of an essay, we would expect the essay to discuss them in depth, pointing out evidence that these proposed solutions would be effective.

When we write evaluations, we consciously adopt certain standards of measurement, or criteria.

Criteria can be concrete standards, like size or speed, or can be abstract, like practicality. When we write evaluations in an academic context, we typically avoid using criteria that are wholly personal, and rely instead on those that are less "subjective" and more likely to be shared by the majority of the audience we are addressing. Choosing appropriate criteria often involves careful consideration of audience demands, values, and concerns.

As an evaluator, you will sometimes discover that you will need to explain and/or defend not only your judgments, but also the criteria informing those judgments. For example, if you are arguing that a Mexican restaurant is excellent because (among other reasons) the texture of the food is appealing, you might need to explain to your audience why texture is a significant criterion in evaluating Mexican food.

Types of Criteria

If you are evaluating a concrete canoe for an engineering class, you will use concrete criteria such as float time, cost of materials, hydrodynamic design, and so on. If you are evaluating the suitability of a textbook for a history class, you will probably rely on more abstract criteria such as readability, length, and controversial vs. mainstream interpretation of history.

In evaluation, we often rely on concrete , measurable standards according to which subjects (usually objects) may be evaluated. For example, cars may be evaluated according to the criteria of size, speed, or cost.

Many academic evaluations, however, don't focus on objects that we can measure in terms of size, speed, or cost. Rather, they look at somewhat more abstract concepts (problems and solutions often), which we might measure in terms of "effectiveness," "feasibility," or other abstract criteria. When writing this kind of evaluation, it is vital to be as clear as possible when articulating, defining, and using your criteria, since not all readers are likely to understand and agree with these criteria as readily as they would understand and agree with concrete criteria.

Related Information: Abstract Criteria

Abstract criteria are not easily measurable, and they are usually less self-evident and more in need of definition than concrete criteria. Even though criteria may be abstract, they should not be imprecise. Always state your criteria as clearly and precisely as possible. "Feasibility" is one example of an abstract criterion that a writer might use to evaluate a solution to a problem. Feasibility is the degree of likelihood of success of something like a plan of action or a solution to a problem. "Capability of being implemented" is a way to look at feasibility in terms of solutions to problems. The relative ease with which a solution would be adopted is sometimes a way to look at feasibility. The following example mentions directly the criteria it is using (the words in italics):

Fire prevention should be the major consideration of a family building a home. By using concrete, the risk of fire is significantly decreased. But that is not all that concrete provides. It is affordable, suitable for all climates, and helps reduce deforestation. Since all of these factors are important, concrete should be demanded more than it is, and it should certainly be used more than wood for homebuilding.

Related Information: Concrete Criteria

Concrete criteria are measurable standards which most people are likely to understand and (usually) to agree with. For example, a person might make use of criteria like "size," "speed," and "cost" when buying a car.

If size is your main criterion, then something with a larger size will receive a more favorable evaluation.

Perhaps the only quality that you desire in a car is low initial cost. You don't need to take into account anything else. In this case, you can pass judgment on the three cars in the local used car lot:

Because the Nissan has the lowest initial price, it receives the most favorable judgment. The evidence is found on the price tag. Each car is compared by way of a single criterion: cost.

Using Clear and Well-defined Criteria

When we evaluate informally (passing judgments during the course of conversation, for instance), we typically assume that our criteria are self-evident and require no explanation. However, in written evaluation, it is often necessary that we clarify and define our criteria in order to make a persuasive evaluative argument.

Criteria That Are Too Vague or Personal

Although we frequently find ourselves needing to use abstract criteria like "feasibility" or "effectiveness," we also must avoid using criteria that are overly vague or personal and difficult to support with evidence. As evaluators, we must steer clear of criteria that are matters of taste, belief, or personal preference. For example, the "best" lamp might simply be the one that you think looks prettiest in your home. If you depend on a criterion like "pretty in my home," and neglect to use more common, shared criteria like "brightness," "cost," and "weight," you are probably relying on a criterion that is too specific to your own personal preferences. To make "pretty in my home" an effective criterion, you would need to explain what "pretty in my home" means and how it might relate to other people's value systems. (For example: "Lamp A is attractive because it is an inoffensive style and color that would be appropriate for many people's decorating tastes.")

Using Criteria Based on the Appropriate "Class" of Subjects

When you make judgments, it is important that you use criteria that are appropriate to the type of object, person, policy, etc. that you are examining. If you are evaluating Steven Spielberg's film Schindler's List, for instance, it is unfair to criticize it because it isn't a knee-slapper. Because Schindler's List is a drama and not a comedy, using the criterion of "humor" is inappropriate.

Weighing Criteria

Once you have established criteria for your evaluation of a subject, it is necessary to decide which of these criteria are most important. For example, if you are evaluating a Mexican restaurant and you have arrived at several criteria (variety of items on the menu, spiciness of the food, size of the portions, decor, and service), you need to decide which of these criteria are most critical to your evaluation. If the size of the portions is good, but the service is bad, can you give the restaurant a good rating? What about if the decor is attractive, but the food is bland? Once you have placed your criteria in a hierarchy of importance, it is much easier to make decisions like these.

When we evaluate, we must consider the audience we hope to influence with our judgments. This is particularly true when we decide which criteria are informing (and should inform) these judgments.

After establishing some criteria for your evaluation, it is important to ask yourself whether or not your audience is likely to accept those criteria. It is crucial that they do accept the criteria if, in turn, you expect them to accept the supporting judgments and overall claim or judgment built on them.

Related Information: Explaining and Defending Criteria

The first step in deciding which criteria will be effective in your evaluation is determining which criteria your audience considers important. For example, if you are writing a review of a Mexican restaurant for an audience composed mainly of senior citizens from the Midwest, it is unlikely that "large portions" and "fiery green chile" will be the criteria most important to them. They might be more concerned, rather, with "quality of service" or "availability of heart-smart menu items." Trying to anticipate and address your audience's values is an indispensable step in writing a persuasive evaluative argument.

Related Information: Understanding Audience Criteria

How Background Experience Influences Criteria

Laura Thomas, Composition Lecturer: Your background experience influences the criteria that you use in evaluation. If you know a lot about something, you will have a good idea of what criteria should govern your judgments. On the other hand, it's hard if you don't know enough about what you're judging. Sometimes you have to research first in order to come up with useful criteria. For example, I recently went shopping for a new pair of skis for the first time in fifteen years. When I began shopping, I realized that I didn't even know what questions to ask anymore. The last time I had bought skis, you judged them according to whether they had a foam core or a wood core. But I had no idea what the important considerations were anymore.

Evidence consists of the specifics you use to reach your conclusion or judgment. For example, if you judge that "La Cocina's green chile is superb" on the basis of the criterion, "Good green chile is so fiery that you can barely eat it," you might offer evidence like the following:

"I drank an entire pitcher of water on my own during the course of the meal."
"Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Related Information: Example Text

In the following paragraph, evidence appears in italics. Note that the reference to the New York Times backs up the evidence offered in the previous sentence:

Since killer whales have small lymphatic systems, they catch infections more easily when held captive (Obee 23). The orca from the movie "Free Willy," Keiko, developed a skin disorder because the water he was living in was not cold enough. This infection was a result of the combination of tank conditions and the animal's immune system, according to a New York Times article.

Types of Evidence

Evidence for academic evaluations is usually of two types: concrete detail and analytic detail. Analytic detail comes from critical thinking about abstract elements of the thing being evaluated. It will also include quotations from experts. Concrete detail comes from sense perceptions and measurements--facts about color, speed, size, texture, smell, taste, and so on. Concrete details are more likely to support concrete criteria (as opposed to abstract criteria) used in judging objects. Analytic detail will more often support abstract criteria (as opposed to concrete criteria), like the criterion "feasibility," discussed in the section on criteria. Analytic detail also appears most often in academic evaluations of solutions to problems, although such solutions can also sometimes be evaluated according to concrete criteria.

What Kinds of Evidence Work

Good evidence ranges from personal experience to interviews with experts to published sources. The kind of evidence that works best for you will depend on your audience and often on the writing assignment you have been given.

Evidence and the Writing Assignment

When you choose evidence to support the judgments you are making in an evaluation, it will be important to consider what type of evaluation you are being asked to do. If, for instance, you are being asked to review a play you have attended, your evidence will most likely consist primarily of your own observations. However, if your assignment asks you to compare and contrast two potential national health care policies (toward deciding which is the better one), your evidence will need to be more statistical, more dependent on reputable sources, and more directed toward possible effects or outcomes of your judgment.

Comparison and Contrast

Comparison and contrast is the process of positioning an item or concept being evaluated among other like items or concepts. We are all familiar with this technique as it's used in the marketing of products: soft drink "taste tests," comparisons of laundry detergent effectiveness, and the like. It is a way of determining the value of something in relation to comparable things. For example, if you have made the judgment that "La Cocina's green chile is superb" and you have offered evidence of the spiciness and the flavor of the chile, you might also use comparison by giving your audience a scale on which to base judgment: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park."

In this case, the writer compares limestone with wood to show that limestone is a better building material. Although this comparison could be developed much more, it still begins to point out the relative merits of limestone.

Concrete is a feasible substitute for wood as a building material. Concrete comes from a rock called limestone. Limestone is found all over the United States. By using limestone instead of wood, the dependence on dwindling forest reserves would decrease. There are more sedimentary rocks than there are forests left in this country, and they are more evenly distributed. For this reason, it is quite possible to switch from wood to concrete as the primary building material for residential construction.

Determining Relative Worth

Comparing and contrasting rarely means placing the item or concept being evaluated in relation to another item or concept that is obviously grossly inferior. For instance, if you are attempting to demonstrate the value of a Cannondale mountain bike, it would be foolish to compare it with a Huffy. However, it would be useful to compare it with a Klein, arguably a similar bicycle. In this type of maneuver, you are not comparing good with bad; rather, you are deciding which bike is better and which bike is worse. In order to determine relative worth in this way, you will need to be very careful in defining the criteria you are using to make the comparison.

Using Comparison and Contrast Effectively

In order to make comparison and contrast function well in evaluation, it is necessary to be attentive to: 1) maintaining focus on the item or concept under consideration and 2) using evidence to support comparative judgments. When using comparison and contrast, writers must remember that they are using comparable items or concepts only as a way of demonstrating the worth of the main item or concept under consideration. It is easy to lose focus when using this technique, because of the temptation to evaluate two (or more) items or concepts rather than just the one under consideration. It is also important to remember that judgments made on the basis of comparison and contrast need to be supported with evidence. It is not enough to assert that "La Cocina's chile is even more fiery and flavorful than Manuel's." It will be necessary to support this judgment with evidence, showing in what ways La Cocina's chile is more flavorful: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

The Process of Writing an Evaluation

A variety of writing assignments call for evaluation. Bearing in mind the various approaches that might be demanded by those particular assignments, this section offers some general strategies for formulating a written evaluation.

Choosing a Topic for Evaluation

Sometimes your topic for evaluation will be dictated by the writing assignment you have been given. Other times, though, you will be required to choose your own topic. Common sense tells you that it is best to choose something about which you already have a base knowledge. For instance, if you are a skier, you might want to evaluate a particular model of skis. In addition, it is best to choose something that is tangible, observable, and/or researchable. For example, if you chose a topic like "methods of sustainable management of forests," you would know that there would be research to support your evaluation. Likewise, if you chose to evaluate a film like Pulp Fiction, you could rent the video and watch it several times in order to get the evidence you needed. However, you would have fewer options if you were to choose an abstract concept like "loyalty" or "faith." When evaluating, it is usually best to steer clear of abstractions like these as much as possible.

Brainstorming Possible Judgments

Once you have chosen a topic, you might begin your evaluation by thinking about what you already know about the topic. In doing this, you will be coming up with possible judgments to include in your evaluation. Begin with a tentative overall judgment or claim. Then decide what supporting judgments you might make to back that claim. Keep in mind that your judgments will likely change as you collect evidence for your evaluation.

Determining a Tentative Overall Judgment

Start by making an overall judgment on the topic in question, based on what you already know. For instance, if you were writing an evaluation of sustainable management practices in forestry, your tentative overall judgment might be: "Sustainable management is a viable way of dealing with deforestation in old growth forests."

Brainstorming Possible Supporting Judgments

With a tentative overall judgment in mind, you can begin to brainstorm judgments (or reasons) that could support your overall judgment by asking the question, "Why?" For example, asking "Why?" of the tentative overall judgment "Sustainable management is a viable way of dealing with deforestation in old growth forests" might yield the following supporting judgments:

  • Sustainable management allows for continued support of the logging industry.
  • It eliminates much unnecessary waste.
  • It is much better for the environment than unrestricted, traditional forestry methods.
  • It is less expensive than these traditional methods.

Anticipating Changes to Your Judgments After Collecting Evidence

When brainstorming possible judgments this early in the writing process, it is necessary to keep an open mind as you enter into the stage in which you collect evidence. Once you have done observations, analysis, or research, you might find that you are unable to advance your tentative overall judgment. Or you might find that some of the supporting judgments you came up with are not true or are not supportable. Your findings might also point you toward other judgments you can make in addition to the ones you are already making.

Defining Criteria

To prepare to organize and write your evaluation, it is important to clearly define the criteria you are using to make your judgments. These criteria govern the direction of the evaluation and provide structure and justification for the judgments you make.

Looking at the Criteria Informing Your Judgments (Working Backwards)

We often work backwards from the judgments we make, discovering what criteria we are using on the basis of what our judgments look like. For instance, our tentative judgments about sustainable management practices are the four listed above.

If we were to analyze these judgments, asking ourselves why we made them, we would see that we used the following criteria: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost.

Thinking of Additional Criteria

Once you have identified the criteria informing your initial judgments, you will want to determine what other criteria should be included in your evaluation. For example, in addition to the criteria you've already come up with (wellbeing of the logging industry, conservation of resources, wellbeing of the environment, and cost), you might include the criterion of preservation of the old growth forests.

Comparing Your Criteria with Those of Your Audience

In deciding which criteria are most important to include in your evaluation, it is necessary to consider the criteria your audience is likely to find important. Let's say we are directing our evaluation of sustainable management methods toward an audience of loggers. If we look at our list of criteria--wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests--we might decide that wellbeing of the logging industry and cost are the criteria most important to loggers. At this point, we would also want to identify additional criteria the audience might expect us to address: perhaps feasibility, labor requirements, and efficiency.

Deciding Which Criteria Are Most Important

Once you have developed a long list of possible criteria for judging your subject (in this case, sustainable management methods), you will need to narrow the list, since it is impractical and ineffective to use all possible criteria in your essay. To decide which criteria to address, determine which are least dispensable, both to you and to your audience. Your own criteria were: wellbeing of the logging industry, conservation of resources, wellbeing of the environment, cost, and preservation of the old growth forests. Those you anticipated for your audience were: feasibility, labor requirements, and efficiency. In the written evaluation, you might choose to address those criteria most important to your audience, with a couple of your own included. For example, your list of indispensable criteria might look like this: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests.

Criteria and Assumptions

Stephen Reid, English Professor: Warrants (to use a term from argumentation) come on the scene when we ask why a given criterion should be used or should be acceptable in evaluating the particular text, product, or performance in question. When we ask WHY a particular criterion should be important (let's say, strong performance in an automobile engine, quickly moving plot in a murder mystery, outgoing personality in a teacher), we are getting at the assumptions (i.e., the warrant) behind why the data is relevant to the claim of value we are about to make. Strong performance in an automobile engine might be a positive criterion in an urban, industrialized environment, where traveling at highway speeds on American interstates is important. But we might disagree about whether strong performance (accompanied by lower mileage) might be important in a rural European environment where gas costs are several dollars a litre. Similarly, an outgoing personality for a teacher might be an important standard of judgment or criterion in a teacher-centered classroom, but we could imagine another kind of decentered class where interpersonal skills are more important than teacher personality. By QUESTIONING the validity and appropriateness of a given criterion in a particular situation, we are probing for the ASSUMPTIONS or WARRANTS we are making in using that criterion in that particular situation. Thus, criteria are important, but it is often equally important for writers to discuss the assumptions that they are making in choosing the major criteria in their evaluations.

Collecting Evidence

Once you have established the central criteria you will use in your evaluation, you will investigate your subject in terms of these criteria. In order to investigate the subject of sustainable management methods, you would more than likely have to research whether these methods stand up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, time efficiency, conservation of resources, and preservation of the old growth forests. However, library research is only one of the techniques evaluators use. Depending on the type of evaluation being made, the evaluator might use such methods as observation, field research, and analysis.

Thinking About What You Already Know

The best place to start looking for evidence is with the knowledge you already possess. To do this, you might try brainstorming, clustering, or freewriting ideas.

Library Research

When you are evaluating policies, issues, or products, you will usually need to conduct library research to find the evidence your evaluation requires. It is always a good idea to check journals, databases, and bibliographies relevant to your subject when you begin research. It is also helpful to speak with a reference librarian about how to get started.

Observation

When you are asked to evaluate a performance, event, place, object, or person, one of the best methods available is simple observation. What makes observation not so simple is the need to focus on criteria you have developed ahead of time. If, for instance, you are reviewing a student production of Hamlet , you will want to review your list of criteria (perhaps quality of acting, costumes, faithfulness to the text, set design, lighting, and length of time before intermission) before attending the play. During or after the play, you will want to take as many notes as possible, keeping these criteria in mind.

Field Research

To expand your evaluation beyond your personal perspective or the perspective of your sources, you might conduct your own field research . Typical field research techniques include interviewing, taking a survey, administering a questionnaire, and conducting an experiment. These methods can help you support your judgment and can sometimes help you determine whether or not your judgment is valid.

Analysis

When you are asked to evaluate a text, analysis is often the technique you will use in collecting evidence. If you are analyzing an argument, you might use the Toulmin Method. Other texts might not require such a structured analysis but might be better addressed by more general critical reading strategies.

Applying Criteria

After developing a list of indispensable criteria, you will need to "test" the subject according to these criteria. At this point, it will probably be necessary to collect evidence (through research, analysis, or observation) to determine, for example, whether sustainable management methods would hold up to the criteria you have established: wellbeing of the logging industry, cost, labor requirements, efficiency, conservation of resources, and preservation of the old growth forests. One way of recording the results of this "test" is by putting your notes in a three-column log.

Organizing the Evaluation

One of the best ways to organize your information in preparation for writing is to construct an informal outline of sorts. Outlines might be arranged according to criteria, comparison and contrast, chronological order, or causal analysis. They also might follow what Robert K. Miller and Suzanne S. Webb refer to in their book, Motives for Writing (2nd ed.) as "the pattern of classical oration for evaluations" (286). In addition to deciding on a general structure for your evaluation, it will be necessary to determine the most appropriate placement for your overall claim or judgment.

Placement of the Overall Claim or Judgment

Writers can state their final position at the beginning or the end of an essay. The same is true of the overall claim or judgment in a written evaluation.

When you place your overall claim or judgment at the end of your written evaluation, you are able to build up to it and to demonstrate how your evaluative argument (evidence, explanation of criteria, etc.) has led to that judgment.

Writers of academic evaluations normally don't need to keep readers in suspense about their judgments. By stating the overall claim or judgment early in the paper, writers help readers both to see the structure of the essay and to accept the evidence as convincing proof of the judgment. (Writers of evaluations should remember, of course, that there is no rule against stating the overall claim or judgment at both the beginning and the end of the essay.)

Organization by Criteria

The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how a writer might arrange an evaluation according to criteria:

Introductory paragraphs: information about the restaurant (location, hours, prices), general description of Chinese restaurants today, and overall claim: The Hunan Dynasty is reliable, a good value, and versatile.
Criterion #1/Judgment: Good restaurants should have an attractive setting and atmosphere / Hunan Dynasty is attractive.
Criterion #2/Judgment: Good restaurants should give strong priority to service / Hunan Dynasty has, despite an occasional glitch, expert service.
Criterion #3/Judgment: Restaurants that serve modestly priced food should have quality main dishes / Main dishes at Hunan Dynasty are generally good but not often memorable. (Note: The most important criterion--the quality of the main dishes--is saved for last.)
Concluding paragraphs: Hunan Dynasty is a top-flight neighborhood restaurant (Reid 338).

Organization by Comparison and Contrast

Sometimes comparison and contrast is not merely a strategy used in part of an evaluation, but is the strategy governing the organization of the entire essay. The following are examples from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing two ways that a writer might organize an evaluation according to comparison and contrast.

Introductory paragraph(s)

Thesis [or overall claim/judgment]: Although several friends recommended the Yakitori, we preferred the Unicorn for its more authentic atmosphere, courteous service, and well-prepared food. [Notice that the criteria are stated in this thesis.]

Authentic atmosphere: Yakitori vs. Unicorn

Courteous service: Yakitori vs. Unicorn

Well-prepared food: Yakitori vs. Unicorn

Concluding paragraph(s) (Reid 339)

Alternatively, the same evaluation might be organized by subject rather than by criterion:

The Yakitori: atmosphere, service, and food

The Unicorn: atmosphere, service, and food as compared to the Yakitori

Concluding paragraph(s) (Reid 339)

Organization by Chronological Order

Writers often follow chronological order when evaluating or reviewing events or performances. This method of organization allows the writer to evaluate portions of the event or performance in the order in which they happen.

Organization by Causal Analysis

When using analysis to evaluate places, objects, events, or policies, writers often focus on causes or effects. The following is an example from Stephen Reid's The Prentice Hall Guide for College Writers (4th ed.), showing how one writer organizes an evaluation of a Goya painting by discussing its effects on the viewer.

Criterion #1/Judgment: The iconography, or use of symbols, contributes to the powerful effect of this picture on the viewer.

Evidence: The church as a symbol of hopefulness contrasts with the cruelty of the execution. The spire on the church emphasizes for the viewer how powerless the Church is to save the victims.

Criterion #2/Judgment: The use of light contributes to the powerful effect of the picture on the viewer.

Evidence: The light casts an intense glow on the scene, and its glaring, lurid, and artificial qualities create the same effect on the viewer that modern art sometimes does.

Criterion #3/Judgment: The composition or use of formal devices contributes to the powerful effect of the picture on the viewer.

Evidence: The diagonal lines scissor the picture into spaces that give the viewer a claustrophobic feeling. The corpse is foreshortened, so that it looks as though the dead man is bidding the viewer welcome (Reid 340).

Pattern of Classical Oration for Evaluations

Robert K. Miller and Suzanne S. Webb, in their book, Motives for Writing (2nd ed.) discuss what they call "the pattern of classical oration for evaluations," which incorporates opposing evaluations as well as supporting reasons and judgments. This pattern is as follows:

Present your subject. (This discussion includes any background information, description, acknowledgement of weaknesses, and so forth.)

State your criteria. (If your criteria are controversial, be sure to justify them.)

Make your judgment. (State it as clearly and emphatically as possible.)

Give your reasons. (Be sure to present good evidence for each reason.)

Refute opposing evaluations. (Let your reader know you have given thoughtful consideration to opposing views, since such views exist.)

State your conclusion. (You may restate or summarize your judgment.) (Miller and Webb 286-7)

Example: Part of an Outline for an Evaluation

The following is a portion of an outline for an evaluation, organized by way of supporting judgments or reasons. Notice that this pattern would need to be repeated (using criteria other than the fieriness of the green chile) in order to constitute a complete evaluation proving that "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Evaluation of La Cocina, a Mexican Restaurant

Intro Paragraph Leading to Overall Judgment: "Although La Cocina is not without its faults, it is the best Mexican restaurant in town."

Supporting Judgment: "La Cocina's green chile is superb."

Criterion used to make this judgment: "Good green chile is so fiery that you can barely eat it."

Evidence in support of this judgment: "I drank an entire pitcher of water on my own during the course of the meal" or "Though my friend wouldn't admit that the chile was challenging for him, I saw beads of sweat form on his brow."

Supporting Judgment made by way of Comparison and Contrast: "La Cocina's chile is even more fiery and flavorful than Manuel's, which is by no means a walk in the park itself."

Evidence in support of this judgment: "Manuel's chile relies heavily on a tomato base, giving it an Italian flavor. La Cocina follows a more traditional recipe which uses little tomato, and instead flavors the chile with shredded pork, a dash of vinegar, and a bit of red chile to give it a piquant taste."

Writing the Draft

If you have an outline to follow, writing a draft of a written evaluation is simple. Stephen Reid, in his Prentice Hall Guide for College Writers , recommends that writers maintain focus on both the audience they are addressing and the central criteria they want to include. Such a focus will help writers remember what their audience expects and values and what is most important in constructing an effective and persuasive evaluation.

Guidelines for Revision

In his Prentice Hall Guide for College Writers , 4th ed., Stephen Reid offers some helpful tips for revising written evaluations. These guidelines are reproduced here and grouped as follows:

Examining Criteria

Criteria are standards of value. They contain categories and judgments, as in "good fuel economy," "good reliability," or "powerful use of light and shade in painting." Some categories, such as "price," have clearly implied judgments ("low price"), but make sure that your criteria refer implicitly or explicitly to a standard of value.

Examine your criteria from your audience's point of view. Which criteria are most important in evaluating your subject? Will your readers agree that the criteria you select are indeed the most important ones? Will changing the order in which you present your criteria make your evaluation more convincing? (Reid 342)

Balancing the Evaluation

Include both positive and negative evaluations of your subject. If all of your judgments are positive, your evaluation will sound like an advertisement. If all of your judgments are negative, your readers may think you are too critical (Reid 342).

Using Evidence

Be sure to include supporting evidence for each criterion. Without any data or support, your evaluation will be just an opinion that will not persuade your reader.

If you need additional evidence to persuade your readers, [go back to the "Collecting" stage of this process] (Reid 343).

Avoiding Overgeneralization

Avoid overgeneralizing your claims. If you are evaluating only three software programs, you cannot say that Lotus 1-2-3 is the best business program around. You can say only that it is the best among the group or the best in the particular class that you measured (Reid 343).

Making Appropriate Comparisons

Unless your goal is humor or irony, compare subjects that belong in the same class. Comparing a Yugo to a BMW is absurd because they are not similar cars in terms of cost, design, or purpose (Reid 343).

Checking for Accuracy

If you are citing other people's data or quoting sources, check to make sure your summaries and data are accurate (Reid 343).

Working on Transitions, Clarity, and Style

Signal the major divisions in your evaluation to your reader using clear transitions, key words, and paragraph hooks. At the beginning of new paragraphs or sections of your essay, let your reader know where you are going.

Revise sentences for directness and clarity.

Edit your evaluation for correct spelling, appropriate word choice, punctuation, usage, and grammar (Reid 343).

Nesbitt, Laurel, Kathy Northcut, & Kate Kiefer. (1997). Academic Evaluations. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=47


Designing Assignments for Learning

The rapid shift to remote teaching and learning meant that many instructors reimagined their assessment practices. Whether adapting existing assignments or creatively designing new opportunities for their students to learn, instructors focused on helping students make meaning and demonstrate their learning outside of the traditional, face-to-face classroom setting. This resource distills the elements of assignment design that are important to carry forward as we continue to seek better ways of assessing learning and build on our innovative assignment designs.


Cite this resource: Columbia Center for Teaching and Learning (2021). Designing Assignments for Learning. Columbia University. Retrieved [today’s date] from https://ctl.columbia.edu/resources-and-technology/teaching-with-technology/teaching-online/designing-assignments/

Traditional assessments tend to reveal whether students can recognize, recall, or replicate what was learned out of context, and tend to focus on students providing correct responses (Wiggins, 1990). In contrast, authentic assignments, which are course assessments, engage students in higher order thinking, as they grapple with real or simulated challenges that help them prepare for their professional lives, and draw on the course knowledge learned and the skills acquired to create justifiable answers, performances or products (Wiggins, 1990). An authentic assessment provides opportunities for students to practice, consult resources, learn from feedback, and refine their performances and products accordingly (Wiggins 1990, 1998, 2014). 

Authentic assignments ask students to “do” the subject with an audience in mind and apply their learning in a new situation. Examples of authentic assignments include asking students to: 

  • Write for a real audience (e.g., a memo, a policy brief, letter to the editor, a grant proposal, reports, building a website) and/or publication;
  • Solve problem sets that have real world application; 
  • Design projects that address a real world problem; 
  • Engage in a community-partnered research project;
  • Create an exhibit, performance, or conference presentation;
  • Compile and reflect on their work through a portfolio/e-portfolio.

Noteworthy elements of authentic designs are that instructors scaffold the assignment, and play an active role in preparing students for the tasks assigned, while students are intentionally asked to reflect on the process and product of their work thus building their metacognitive skills (Herrington and Oliver, 2000; Ashford-Rowe, Herrington and Brown, 2013; Frey, Schmitt, and Allen, 2012). 

It’s worth noting here that authentic assessments can initially be time consuming to design, implement, and grade. They are critiqued for being challenging to use across course contexts and for grading reliability issues (Maclellan, 2004). Despite these challenges, authentic assessments are recognized as beneficial to student learning (Svinicki, 2004) as they are learner-centered (Weimer, 2013), promote academic integrity (McLaughlin, L. and Ricevuto, 2021; Sotiriadou et al., 2019; Schroeder, 2021) and motivate students to learn (Ambrose et al., 2010). The Columbia Center for Teaching and Learning is always available to consult with faculty who are considering authentic assessment designs and to discuss challenges and affordances.   

Examples from the Columbia University Classroom 

Columbia instructors have experimented with alternative ways of assessing student learning from oral exams to technology-enhanced assignments. Below are a few examples of authentic assignments in various teaching contexts across Columbia University. 

  • E-portfolios: Statia Cook shares her experiences with an ePortfolio assignment in her co-taught Frontiers of Science course (a submission to the Voices of Hybrid and Online Teaching and Learning initiative); see also CUIMC use of ePortfolios;
  • Case studies: Columbia instructors have engaged their students in authentic ways through case studies drawing on the Case Consortium at Columbia University. Read and watch a faculty spotlight to learn how Professor Mary Ann Price uses the case method to place pre-med students in real-life scenarios;
  • Simulations: students at CUIMC engage in simulations to develop their professional skills in The Mary & Michael Jaharis Simulation Center in the Vagelos College of Physicians and Surgeons and the Helene Fuld Health Trust Simulation Center in the Columbia School of Nursing; 
  • Experiential learning: instructors have drawn on New York City as a learning laboratory such as Barnard’s NYC as Lab webpage which highlights courses that engage students in NYC;
  • Design projects that address real world problems: Yevgeniy Yesilevskiy discusses the engineering design projects completed using lab kits during remote learning. Watch Dr. Yesilevskiy talk about his teaching and read the Columbia News article;
  • Writing assignments: Lia Marshall and her teaching associate Aparna Balasundaram reflect on their "non-disposable or renewable assignments" to prepare social work students for their professional lives as they write for a real audience; and Hannah Weaver spoke about a sandbox assignment used in her Core Literature Humanities course at the 2021 Celebration of Teaching and Learning Symposium. Watch Dr. Weaver share her experiences.

Tips for Designing Assignments for Learning

While designing an effective authentic assignment may seem like a daunting task, the following tips can be used as a starting point. See the Resources section for frameworks and tools that may be useful in this effort.  

Align the assignment with your course learning objectives 

Identify the kind of thinking that is important in your course, the knowledge students will apply, and the skills they will practice using through the assignment. What kind of thinking will students be asked to do for the assignment? What will students learn by completing this assignment? How will the assignment help students achieve the desired course learning outcomes? For more information on course learning objectives, see the CTL’s Course Design Essentials self-paced course and watch the video on Articulating Learning Objectives.

Identify an authentic meaning-making task

For meaning-making to occur, students need to understand the relevance of the assignment to the course and beyond (Ambrose et al., 2010). To Bean (2011) a “meaning-making” or “meaning-constructing” task has two dimensions: 1) it presents students with an authentic disciplinary problem or asks students to formulate their own problems, both of which engage them in active critical thinking, and 2) the problem is placed in “a context that gives students a role or purpose, a targeted audience, and a genre.” (Bean, 2011: 97-98). 

An authentic task gives students a realistic challenge to grapple with, a role to take on that allows them to “rehearse for the complex ambiguities” of life, provides resources and supports to draw on, and requires students to justify their work and the process they used to inform their solution (Wiggins, 1990). If students find an assignment interesting or relevant, they are more likely to see value in completing it.

Consider the kinds of activities in the real world that use the knowledge and skills that are the focus of your course. How are this knowledge and these skills applied to answer real-world questions and solve real-world problems? (Herrington et al., 2010: 22). What do professionals or academics in your discipline do on a regular basis? What does it mean to think like a biologist, statistician, historian, or social scientist? How might your assignment ask students to draw on current events, issues, or problems that relate to the course and are of interest to them? How might your assignment tap into student motivation and engage them in the kinds of thinking they can apply to better understand the world around them? (Ambrose et al., 2010).

Determine the evaluation criteria and create a rubric

To ensure equitable and consistent grading of assignments across students, make transparent the criteria you will use to evaluate student work. The criteria should focus on the knowledge and skills that are central to the assignment. Building on the criteria identified, create a rubric that makes explicit the expectations for deliverables, and share this rubric with your students so they can use it as they work on the assignment. For more information on rubrics, see the CTL’s resource Incorporating Rubrics into Your Grading and Feedback Practices, and explore the Association of American Colleges & Universities VALUE Rubrics (Valid Assessment of Learning in Undergraduate Education).

Build in metacognition

Ask students to reflect on what and how they learned from the assignment. Help students uncover personal relevance of the assignment, find intrinsic value in their work, and deepen their motivation by asking them to reflect on their process and their assignment deliverable. Sample prompts might include: What did you learn from this assignment? How might you draw on the knowledge and skills you used on this assignment in the future? See Ambrose et al., 2010 for more strategies that support motivation, and the CTL’s resource on Metacognition.

Provide students with opportunities to practice

Design your assignment to be a learning experience and prepare students for success on the assignment. If students can reasonably expect to be successful on an assignment when they put in the required effort, with the support and guidance of the instructor, they are more likely to engage in the behaviors necessary for learning (Ambrose et al., 2010). Ensure student success by actively teaching the knowledge and skills of the course (e.g., how to problem solve, how to write for a particular audience), modeling the desired thinking, and creating learning activities that build up to a graded assignment. Provide opportunities for students to practice using the knowledge and skills they will need for the assignment, whether through low-stakes in-class activities or homework activities that include opportunities to receive and incorporate formative feedback. For more information on providing feedback, see the CTL resource Feedback for Learning.

Communicate about the assignment 

Share the purpose, task, audience, expectations, and criteria for the assignment. Students may have expectations about assessments and how they will be graded that are informed by their prior experiences completing high-stakes assessments, so be transparent. Tell your students why you are asking them to do this assignment, what skills they will be using, how it aligns with the course learning outcomes, and why it is relevant to their learning and their professional lives (i.e., how practitioners and professionals use the knowledge and skills in your course in real-world contexts and for what purposes). Finally, verify that students understand what they need to do to complete the assignment. This can be done by asking students to respond to poll questions about different parts of the assignment, by creating a “scavenger hunt” of the assignment instructions (giving students questions to answer about the assignment and having them work in small groups to answer them), or by having students share back what they think is expected of them.

Plan to iterate and to keep the focus on learning 

Draw on multiple sources of data to help make decisions about what changes are needed to the assignment, the assignment instructions, and/or the rubric to ensure that it contributes to student learning. Explore assignment performance data. As Deandra Little reminds us: “a really good assignment, which is a really good assessment, also teaches you something or tells the instructor something. As much as it tells you what students are learning, it’s also telling you what they aren’t learning.” (Teaching in Higher Ed podcast episode 337). Assignment bottlenecks, where students get stuck or struggle, can be good indicators that students need further support or opportunities to practice prior to completing an assignment. This awareness can inform teaching decisions.

Triangulate the performance data by collecting student feedback, and noting your own reflections about what worked well and what did not. Revise the assignment instructions, rubric, and teaching practices accordingly. Consider how you might better align your assignment with your course objectives and/or provide more opportunities for students to practice using the knowledge and skills that they will rely on for the assignment. Additionally, keep in mind societal, disciplinary, and technological changes as you tweak your assignments for future use. 

Now is a great time to reflect on your practices and experiences with assignment design and think critically about your approach. Take a closer look at an existing assignment. Questions to consider include: What is this assignment meant to do? What purpose does it serve? Why do you ask students to do this assignment? How are they prepared to complete the assignment? Does the assignment assess the kind of learning that you really want? What would help students learn from this assignment? 

Using the tips in the previous section: How can the assignment be tweaked to be more authentic and meaningful to students? 

As you plan for post-pandemic teaching, reflect on your practices, and reimagine your course design, you may find the following CTL resources helpful: Reflecting On Your Experiences with Remote Teaching, Transition to In-Person Teaching, and Course Design Support.

The Columbia Center for Teaching and Learning (CTL) is here to help!

For assistance with assignment design, rubric design, or any other teaching and learning need, please request a consultation by emailing [email protected]

Transparency in Learning and Teaching (TILT) framework for assignments. The TILT Examples and Resources page ( https://tilthighered.com/tiltexamplesandresources ) includes example assignments from across disciplines, as well as a transparent assignment template and a checklist for designing transparent assignments . Each emphasizes the importance of articulating to students the purpose of the assignment or activity, the what and how of the task, and specifying the criteria that will be used to assess students. 

Association of American Colleges & Universities (AAC&U) offers VALUE ADD (Assignment Design and Diagnostic) tools ( https://www.aacu.org/value-add-tools ) to help with the creation of clear and effective assignments that align with the desired learning outcomes and associated VALUE rubrics (Valid Assessment of Learning in Undergraduate Education). VALUE ADD encourages instructors to explicitly state assignment information such as the purpose of the assignment, what skills students will be using, how it aligns with course learning outcomes, the assignment type, the audience and context for the assignment, clear evaluation criteria, desired formatting, and expectations for completion whether individual or in a group.

Villarroel et al. (2017) propose a blueprint for building authentic assessments which includes four steps: 1) consider the workplace context; 2) design the authentic assessment; 3) learn and apply standards for judgement; and 4) give feedback.

References 

Ambrose, S. A., Bridges, M. W., & DiPietro, M. (2010). Chapter 3: What Factors Motivate Students to Learn? In How Learning Works: Seven Research-Based Principles for Smart Teaching . Jossey-Bass. 

Ashford-Rowe, K., Herrington, J., and Brown, C. (2013). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education. 39(2), 205-222, http://dx.doi.org/10.1080/02602938.2013.819566 .  

Bean, J.C. (2011). Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom . Second Edition. Jossey-Bass. 

Frey, B. B, Schmitt, V. L., and Allen, J. P. (2012). Defining Authentic Classroom Assessment. Practical Assessment, Research, and Evaluation. 17(2). DOI: https://doi.org/10.7275/sxbs-0829  

Herrington, J., Reeves, T. C., and Oliver, R. (2010). A Guide to Authentic e-Learning . Routledge. 

Herrington, J. and Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48(3), 23-48. 

Litchfield, B. C. and Dempsey, J. V. (2015). Authentic Assessment of Knowledge, Skills, and Attitudes. New Directions for Teaching and Learning. 142 (Summer 2015), 65-80. 

Maclellan, E. (2004). How convincing is alternative assessment for use in higher education? Assessment & Evaluation in Higher Education. 29(3), June 2004. DOI: 10.1080/0260293042000188267

McLaughlin, L. and Ricevuto, J. (2021). Assessments in a Virtual Environment: You Won’t Need that Lockdown Browser! Faculty Focus. June 2, 2021. 

Mueller, J. (2005). The Authentic Assessment Toolbox: Enhancing Student Learning through Online Faculty Development . MERLOT Journal of Online Learning and Teaching. 1(1). July 2005. Mueller’s Authentic Assessment Toolbox is available online. 

Schroeder, R. (2021). Vaccinate Against Cheating With Authentic Assessment . Inside Higher Ed. (February 26, 2021).  

Sotiriadou, P., Logan, D., Daly, A., and Guest, R. (2019). The role of authentic assessment to preserve academic integrity and promote skills development and employability. Studies in Higher Education. 45(11), 2132-2148. https://doi.org/10.1080/03075079.2019.1582015

Stachowiak, B. (Host). (November 25, 2020). Authentic Assignments with Deandra Little. (Episode 337). In Teaching in Higher Ed . https://teachinginhighered.com/podcast/authentic-assignments/  

Svinicki, M. D. (2004). Authentic Assessment: Testing in Reality. New Directions for Teaching and Learning. 100 (Winter 2004): 23-29. 

Villarroel, V., Bloxham, S, Bruna, D., Bruna, C., and Herrera-Seda, C. (2017). Authentic assessment: creating a blueprint for course design. Assessment & Evaluation in Higher Education. 43(5), 840-854. https://doi.org/10.1080/02602938.2017.1412396    

Weimer, M. (2013). Learner-Centered Teaching: Five Key Changes to Practice . Second Edition. San Francisco: Jossey-Bass. 

Wiggins, G. (2014). Authenticity in assessment, (re-)defined and explained. Retrieved from https://grantwiggins.wordpress.com/2014/01/26/authenticity-in-assessment-re-defined-and-explained/

Wiggins, G. (1998). Teaching to the (Authentic) Test. Educational Leadership . April 1989. 41-47. 

Wiggins, G. (1990). The Case for Authentic Assessment. Practical Assessment, Research & Evaluation, 2(2).

Wondering how AI tools might play a role in your course assignments?

See the CTL’s resource “Considerations for AI Tools in the Classroom.”


Center for Teaching and Learning

Step 4: Develop Assessment Criteria and Rubrics

Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments that cannot be graded through automation the way that multiple-choice tests can. Grading criteria articulate what is important in each assessment, what knowledge or skills students should be able to demonstrate, and how they can best communicate that to you. When you share grading criteria with students, you help them understand what to focus on and how to demonstrate their learning successfully. From good assessment criteria, you can develop a grading rubric.


Developing Your Assessment Criteria

Good assessment criteria are

  • Clear and easy to understand as a guide for students
  • Attainable rather than beyond students’ grasp in the current place in the course
  • Significant in terms of the learning students should demonstrate
  • Relevant in that they assess student learning toward course objectives related to that one assessment.

To create your grading criteria, consider the following questions:

  • What is the most significant content or knowledge students should be able to demonstrate understanding of at this point in the course?
  • What specific skills, techniques, or applications should students be able to demonstrate using at this point in the course?
  • What secondary skills or practices are important for students to demonstrate in this assessment? (for example, critical thinking, public speaking skills, or writing as well as more abstract concepts such as completeness, creativity, precision, or problem-solving abilities)
  • Do the criteria align with the objectives for both the assessment and the course?

Once you have developed some ideas about the assessment’s grading criteria, double-check to make sure the criteria are observable, measurable, significant, and distinct from each other.

Assessment Criteria Example

Using the questions above, the performance criteria in the example below were designed for an assignment in which students had to create an explainer video about a scientific concept for a specified audience. Each element can be observed and measured based on both expert instructor and peer feedback, and each is significant because it relates to the course and assignment learning goals.

[Image: sample performance criteria for the scientific concept explainer video assignment]

Additional Assessment Criteria Resources

  • Developing Grading Criteria (Vanderbilt University)
  • Creating Grading Criteria (Brown University)
  • Sample Criteria (Brown University)
  • Developing Grading Criteria (Temple University)

Decide on a Rating Scale

Deciding what scale you will use for an assessment depends on the type of learning you want students to demonstrate and the type of feedback you want to give students on this particular assignment or test. For example, for an introductory lab report early in the semester, you might not be as concerned with advanced levels of precision as with correct displays of data and the tone of the report; therefore, grading heavily on copy editing or advanced analysis would not be appropriate. The criteria would likely be more rigorous by the end of the semester, as you build up to the advanced level you want students to reach in the course.

Rating scales turn the grading criteria you have defined into levels of performance expectations for the students that can then be interpreted as a letter, number, or level. Common rating scales include

  • A, B, C, etc. (with or without + and -)
  • 100-point scale with defined cut-offs for letter grades if desired (e.g., B = 89-80; or B+ = 89-87, B = 86-83, B- = 82-80)
  • Yes or no, present or not present (if the rubric is a checklist of items students must show)
  • below expectations, meets expectations, exceeds expectations
  • not demonstrated, poor, average, good, excellent

Once you have decided on a scale for the type of assignment and the learning you want students to demonstrate, you can use the scale to clearly articulate what each level of performance looks like, such as defining what A, B, C, etc. level work would look like for each grading criterion. What would distinguish a student who earns a B from one who earns a C? What would distinguish a student who excelled in demonstrating use of a tool from a student who clearly was not familiar with it? Write these distinctions out in descriptive notes or brief paragraphs.
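For instructors who tally rubric points in a spreadsheet or a short script, cut-offs like those in the list above translate directly into a simple lookup. The Python sketch below is purely illustrative and not part of this resource; it reuses the hypothetical B+/B/B- cut-offs from the example, and the 90-point boundary for the A range is an added assumption.

# Illustrative only: map a 100-point rubric total to a letter grade using
# the example cut-offs above (B+ = 89-87, B = 86-83, B- = 82-80).
def letter_grade(score):
    bands = [
        (90, "A range"),   # assumed boundary, not defined in the example above
        (87, "B+"),
        (83, "B"),
        (80, "B-"),
    ]
    for minimum, grade in bands:
        if score >= minimum:
            return grade
    return "below B-"      # lower bands would continue the same pattern

print(letter_grade(85))    # a total of 85 points falls in the B band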

Ethical Implications of Rating Scales

There are ethical implications in each of these types of rating scales. On a project worth 100 points, what is the objective difference between earning an 85 and an 87? On an exceeds/meets/does not meet scale, how can those levels be objectively applied? Different understandings of "fairness" can lead to several ways of grading that might disadvantage some students. Learn more about equitable grading practices here.

Create the Rubric

Rubrics Can Make Grading More Effective

  • Provide students with more complete and targeted feedback
  • Make grading more timely by enabling feedback soon after the assignment is submitted or presented
  • Standardize assessment criteria among those assigning/assessing the same assignment
  • Facilitate peer evaluation of early drafts of assignments

Rubrics Can Help Student Learning

  • Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment
  • Level the playing field by clarifying academic expectations and assignments so that all students understand them regardless of their educational backgrounds (e.g., define what we expect analysis, critical thinking, or even introductions and conclusions to include)
  • Promote student independence and motivation by enabling self-assessment
  • Prepare students to use detailed feedback.

Rubrics Have Other Uses:

  • Track development of student skills over several assignments
  • Facilitate communication with others (e.g., TAs, communication center, tutors, other faculty)
  • Refine your own teaching skills (e.g., by responding to common areas of weakness or to feedback on how well your teaching strategies are preparing students for their assignments).

In this video, CTL's Dr. Carol Subino Sullivan discusses the value of the different types of rubrics.

Many non-test-based assessments might seem daunting to grade, but a well-designed rubric can alleviate some of that work. A rubric is a table that usually has these parts:  

  • a clear description of the learning activity being assessed
  • criteria by which the activity will be evaluated
  • a rating scale identifying different levels of performance
  • descriptions of the performance a student must demonstrate to earn each level.

When you define the criteria and pre-define what acceptable performance for each of those criteria looks like ahead of time, you can use the rubric to compare against student work and assign grades or points for each criterion accordingly. Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading.
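To make that structure concrete, the short Python sketch below stores a two-criterion rubric with a four-level rating scale, pre-written performance descriptions, and points per level, then totals the score for one submission. It is an illustration only; the criteria names, descriptions, and point values are invented and are not taken from the sample rubrics shown below.

# Hypothetical rubric: each criterion uses the same four-level scale, with
# points and a pre-written description of performance for every level.
rubric = {
    "Scientific accuracy": {
        "exceeds expectations": (4, "Concept explained correctly, with no misconceptions."),
        "meets expectations":   (3, "Mostly correct, with minor imprecision."),
        "developing":           (2, "Several inaccuracies affect understanding."),
        "beginning":            (1, "Concept misrepresented or missing."),
    },
    "Audience awareness": {
        "exceeds expectations": (4, "Language and examples fit the stated audience throughout."),
        "meets expectations":   (3, "Mostly appropriate for the audience."),
        "developing":           (2, "Jargon is often left unexplained."),
        "beginning":            (1, "The audience is not considered."),
    },
}

# Levels selected for one submission, e.g. copied from a grading spreadsheet.
ratings = {
    "Scientific accuracy": "meets expectations",
    "Audience awareness": "exceeds expectations",
}

total = 0
for criterion, level in ratings.items():
    points, description = rubric[criterion][level]
    total += points
    print(f"{criterion}: {level} ({points} pts) - {description}")
print(f"Total: {total} / {len(rubric) * 4}")

Writing the level descriptions once, up front, is what makes the later feedback faster to give and more consistent from student to student.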

Sample Rubrics

This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:

[Image: completed rubric for the scientific concept explainer video]

When using this rubric, which can be easily adapted to use a present/not present rating scale or a letter grade scale, you can use a combination of checking items off and adding written (or audio/video) comments in the different boxes to provide the student more detailed feedback. 

As a second example, this descriptive rubric was used to ask students to peer assess and self-assess their contributions to a collaborative project. The rating scale is 1 through 4, and each description of performance builds on the previous. (See the full rubric with scales for both product and process here. This rubric was designed for students working in teams to assess their own contributions to the project as well as those of their peers.)

[Image: descriptive rubric for peer and self-assessment of contributions to a collaborative project]

Building a Rubric in Canvas Assignments

You can create rubrics for assignments and discussion boards in Canvas. Review these Canvas guides for tips and tricks:

  • Rubrics Overview for Instructors
  • What are rubrics?
  • How do I align a rubric with a learning outcome?
  • How do I add a rubric to an assignment?
  • How do I add a rubric to a quiz?
  • How do I add a rubric to a graded discussion?
  • How do I use a rubric to grade submissions in SpeedGrader?
  • How do I manage rubrics in a course?

Additional Resources for Developing Rubrics

Designing Grading Rubrics (Brown University) Step-by-step process for creating an effective, fair, and efficient grading rubric.

Creating and Using Rubrics  (Carnegie Mellon University) Explores the basics of rubric design along with multiple examples for grading different types of assignments.

Using Rubrics  (Cornell University) Argument for the value of rubrics to support student learning.

Rubrics  (University of California Berkeley) Shares "fun facts" about rubrics, and links the rubric guidelines from many higher ed organizations such as the AAC&U.

Creating and Using Rubrics  (Yale University) Introduces different styles of rubrics and ways to decide what style to use given your course's learning goals.

Best Practices for Designing Effective Rubrics (Arizona State University) Comprehensive overview of rubric design principles.



A Framework for Program Evaluation: A Gateway to Tools

This section is adapted from the article "Recommended Framework for Program Evaluation in Public Health Practice," by Bobby Milstein, Scott Wetterhall, and the CDC Evaluation Working Group.

Around the world, there exist many programs and interventions developed to improve conditions in local communities. Communities come together to reduce the level of violence that exists, to work for safe, affordable housing for everyone, or to help more students do well in school, to give just a few examples.

But how do we know whether these programs are working? If they are not effective, and even if they are, how can we improve them to make them better for local communities? And finally, how can an organization make intelligent choices about which promising programs are likely to work best in their community?

In recent years, there has been a growing trend towards the better use of evaluation to understand and improve practice. The systematic use of evaluation has solved many problems and helped countless community-based organizations do what they do better.

Despite an increased understanding of the need for - and the use of - evaluation, however, a basic agreed-upon framework for program evaluation has been lacking. In 1997, scientists at the United States Centers for Disease Control and Prevention (CDC) recognized the need to develop such a framework. As a result of this, the CDC assembled an Evaluation Working Group comprised of experts in the fields of public health and evaluation. Members were asked to develop a framework that summarizes and organizes the basic elements of program evaluation. This Community Tool Box section describes the framework resulting from the Working Group's efforts.

Before we begin, however, we'd like to offer some definitions of terms that we will use throughout this section.

By evaluation, we mean the systematic investigation of the merit, worth, or significance of an object or effort. Evaluation practice has changed dramatically during the past three decades - new methods and approaches have been developed and it is now used for increasingly diverse projects and audiences.

Throughout this section, the term program is used to describe the object or effort that is being evaluated. It may apply to any action with the goal of improving outcomes for whole communities, for more specific sectors (e.g., schools, work places), or for sub-groups (e.g., youth, people experiencing violence or HIV/AIDS). This definition is meant to be very broad.

Examples of different types of programs include:

  • Direct service interventions (e.g., a program that offers free breakfast to improve nutrition for grade school children)
  • Community mobilization efforts (e.g., organizing a boycott of California grapes to improve the economic well-being of farm workers)
  • Research initiatives (e.g., an effort to find out whether inequities in health outcomes based on race can be reduced)
  • Surveillance systems (e.g., whether early detection of school readiness improves educational outcomes)
  • Advocacy work (e.g., a campaign to influence the state legislature to pass legislation regarding tobacco control)
  • Social marketing campaigns (e.g., a campaign in the Third World encouraging mothers to breast-feed their babies to reduce infant mortality)
  • Infrastructure building projects (e.g., a program to build the capacity of state agencies to support community development initiatives)
  • Training programs (e.g., a job training program to reduce unemployment in urban neighborhoods)
  • Administrative systems (e.g., an incentive program to improve efficiency of health services)

Program evaluation - the type of evaluation discussed in this section - is an essential organizational practice for all types of community health and development work. It is a way to evaluate the specific projects and activities community groups may take part in, rather than to evaluate an entire organization or comprehensive community initiative.

Stakeholders refer to those who care about the program or effort. These may include those presumed to benefit (e.g., children and their parents or guardians), those with particular influence (e.g., elected or appointed officials), and those who might support the effort (i.e., potential allies) or oppose it (i.e., potential opponents). Key questions in thinking about stakeholders are: Who cares? What do they care about?

This section presents a framework that promotes a common understanding of program evaluation. The overall goal is to make it easier for everyone involved in community health and development work to evaluate their efforts.

Why evaluate community health and development programs?

The type of evaluation we talk about in this section can be closely tied to everyday program operations. Our emphasis is on practical, ongoing evaluation that involves program staff, community members, and other stakeholders, not just evaluation experts. This type of evaluation offers many advantages for community health and development professionals.

For example, it complements program management by:

  • Helping to clarify program plans
  • Improving communication among partners
  • Gathering the feedback needed to improve and be accountable for program effectiveness

It's important to remember, too, that evaluation is not a new activity for those of us working to improve our communities. In fact, we assess the merit of our work all the time when we ask questions, consult partners, make assessments based on feedback, and then use those judgments to improve our work. When the stakes are low, this type of informal evaluation might be enough. However, when the stakes are raised - when a good deal of time or money is involved, or when many people may be affected - then it may make sense for your organization to use evaluation procedures that are more formal, visible, and justifiable.

How do you evaluate a specific program?

Before your organization starts with a program evaluation, your group should be very clear about the answers to the following questions:

  • What will be evaluated?
  • What criteria will be used to judge program performance?
  • What standards of performance on the criteria must be reached for the program to be considered successful?
  • What evidence will indicate performance on the criteria relative to the standards?
  • What conclusions about program performance are justified based on the available evidence?

To clarify the meaning of each, let's look at some of the answers for Drive Smart, a hypothetical program begun to stop drunk driving.

What will be evaluated?

  • Drive Smart, a program focused on reducing drunk driving through public education and intervention.

What criteria will be used to judge program performance?

  • The number of community residents who are familiar with the program and its goals
  • The number of people who use "Safe Rides" volunteer taxis to get home
  • The percentage of people who report drinking and driving
  • The reported number of single-car nighttime crashes (a common way to try to determine whether the number of people who drive drunk is changing)

What standards of performance on the criteria must be reached for the program to be considered successful?

  • 80% of community residents will know about the program and its goals after the first year of the program
  • The number of people who use the "Safe Rides" taxis will increase by 20% in the first year
  • The percentage of people who report drinking and driving will decrease by 20% in the first year
  • The reported number of single-car nighttime crashes will decrease by 10% in the program's first two years

What evidence will indicate performance on the criteria relative to the standards?

  • A random telephone survey will demonstrate community residents' knowledge of the program and changes in reported behavior
  • Logs from "Safe Rides" will tell how many people use their services
  • Information on single-car nighttime crashes will be gathered from police records

What conclusions about program performance are justified based on the available evidence?

  • Are the changes we have seen in the level of drunk driving due to our efforts, or to something else?
  • If there is no change, or insufficient change, in behavior or outcomes: should Drive Smart change what it is doing, or have we just not waited long enough to see results?
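To see how evidence is weighed against standards, the small Python sketch below checks a set of hypothetical first-year (and, for crashes, two-year) results against the Drive Smart standards listed above. It is not part of the Community Tool Box material, and every "measured" value is invented; in practice those numbers would come from the telephone survey, the "Safe Rides" logs, and police records.

# Illustrative only: compare invented Drive Smart results against the
# standards stated above. Decreases are expressed as positive magnitudes.
checks = [
    ("Residents who know the program and its goals (%)", 80, 72),
    ("Increase in Safe Rides use (%)",                    20, 26),
    ("Decrease in reported drinking and driving (%)",     20, 11),
    ("Decrease in single-car nighttime crashes (%)",      10, 12),
]

for indicator, standard, measured in checks:
    status = "standard met" if measured >= standard else "standard not met"
    print(f"{indicator}: standard {standard}, measured {measured} -> {status}")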

The following framework provides an organized approach to answer these questions.

A framework for program evaluation

Program evaluation offers a way to understand and improve community health and development practice using methods that are useful, feasible, proper, and accurate. The framework described below is a practical non-prescriptive tool that summarizes in a logical order the important elements of program evaluation.

The framework contains two related dimensions:

  • Steps in evaluation practice, and
  • Standards for "good" evaluation.

The six connected steps of the framework are actions that should be a part of any evaluation. Although in practice the steps may be encountered out of order, it will usually make sense to follow them in the recommended sequence. That's because earlier steps provide the foundation for subsequent progress. Thus, decisions about how to carry out a given step should not be finalized until prior steps have been thoroughly addressed.

However, these steps are meant to be adaptable, not rigid. Sensitivity to each program's unique context (for example, the program's history and organizational climate) is essential for sound evaluation. They are intended to serve as starting points around which community organizations can tailor an evaluation to best meet their needs.

  • Engage stakeholders
  • Describe the program
  • Focus the evaluation design
  • Gather credible evidence
  • Justify conclusions
  • Ensure use and share lessons learned

Understanding and adhering to these basic steps will improve most evaluation efforts.

The second part of the framework is a basic set of standards to assess the quality of evaluation activities. There are 30 specific standards, organized into the following four groups:

  • Utility
  • Feasibility
  • Propriety
  • Accuracy

These standards help answer the question, "Will this evaluation be a 'good' evaluation?" They are recommended as the initial criteria by which to judge the quality of the program evaluation efforts.

Engage Stakeholders

Stakeholders are people or organizations that have something to gain or lose from what will be learned from an evaluation, and also from what will be done with that knowledge. Evaluation cannot be done in isolation. Almost everything done in community health and development work involves partnerships - alliances among different organizations, board members, those affected by the problem, and others. Therefore, any serious effort to evaluate a program must consider the different values held by the partners. Stakeholders must be part of the evaluation to ensure that their unique perspectives are understood. When stakeholders are not appropriately involved, evaluation findings are likely to be ignored, criticized, or resisted.

However, if they are part of the process, people are likely to feel a good deal of ownership for the evaluation process and results. They will probably want to develop it, defend it, and make sure that the evaluation really works.

That's why this evaluation cycle begins by engaging stakeholders. Once involved, these people will help to carry out each of the steps that follow.

Three principal groups of stakeholders are important to involve:

  • People or organizations involved in program operations may include community members, sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff.
  • People or organizations served or affected by the program may include clients, family members, neighborhood organizations, academic institutions, elected and appointed officials, advocacy groups, and community residents. Individuals who are openly skeptical of or antagonistic toward the program may also be important to involve. Opening an evaluation to opposing perspectives and enlisting the help of potential program opponents can strengthen the evaluation's credibility.

Likewise, individuals or groups who could be adversely or inadvertently affected by changes arising from the evaluation have a right to be engaged. For example, it is important to include those who would be affected if program services were expanded, altered, limited, or ended as a result of the evaluation.

  • Primary intended users of the evaluation are the specific individuals who are in a position to decide and/or do something with the results. They shouldn't be confused with primary intended users of the program, although some of them should be involved in this group. In fact, primary intended users should be a subset of all of the stakeholders who have been identified. A successful evaluation will designate primary intended users, such as program staff and funders, early in its development and maintain frequent interaction with them to be sure that the evaluation specifically addresses their values and needs.

The amount and type of stakeholder involvement will be different for each program evaluation. For instance, stakeholders can be directly involved in designing and conducting the evaluation. They can be kept informed about progress of the evaluation through periodic meetings, reports, and other means of communication.

It may be helpful, when working with a group such as this, to develop an explicit process to share power and resolve conflicts. This may help avoid overemphasis of values held by any specific stakeholder.

Describe the Program

A program description is a summary of the intervention being evaluated. It should explain what the program is trying to accomplish and how it tries to bring about those changes. The description will also illustrate the program's core components and elements, its ability to make changes, its stage of development, and how the program fits into the larger organizational and community environment.

How a program is described sets the frame of reference for all future decisions about its evaluation. For example, if a program is described as, "attempting to strengthen enforcement of existing laws that discourage underage drinking," the evaluation might be very different than if it is described as, "a program to reduce drunk driving by teens." Also, the description allows members of the group to compare the program to other similar efforts, and it makes it easier to figure out what parts of the program brought about what effects.

Moreover, different stakeholders may have different ideas about what the program is supposed to achieve and why. For example, a program to reduce teen pregnancy may have some members who believe this means only increasing access to contraceptives, and other members who believe it means only focusing on abstinence.

Evaluations done without agreement on the program definition aren't likely to be very useful. In many cases, the process of working with stakeholders to develop a clear and logical program description will bring benefits long before data are available to measure program effectiveness.

There are several specific aspects that should be included when describing a program.

Statement of need

A statement of need describes the problem, goal, or opportunity that the program addresses; it also begins to imply what the program will do in response. Important features to note regarding a program's need are: the nature of the problem or goal, who is affected, how big it is, and whether (and how) it is changing.

Expectations

Expectations are the program's intended results. They describe what the program has to accomplish to be considered successful. For most programs, the accomplishments exist on a continuum (first, we want to accomplish X... then, we want to do Y...). Therefore, they should be organized by time, ranging from specific (and immediate) to broad (and longer-term) consequences. For example, a program's vision, mission, goals, and objectives all represent varying levels of specificity about a program's expectations.

Activities

Activities are everything the program does to bring about changes. Describing program components and elements permits specific strategies and actions to be listed in logical sequence. This also shows how different program activities, such as education and enforcement, relate to one another. Describing program activities also provides an opportunity to distinguish activities that are the direct responsibility of the program from those that are conducted by related programs or partner organizations. Things outside of the program that may affect its success, such as harsher laws punishing businesses that sell alcohol to minors, can also be noted.

Resources

Resources include the time, talent, equipment, information, money, and other assets available to conduct program activities. Reviewing the resources a program has tells a lot about the amount and intensity of its services. It may also point out situations where there is a mismatch between what the group wants to do and the resources available to carry out these activities. Understanding program costs is a necessity to assess the cost-benefit ratio as part of the evaluation.

Stage of development

A program's stage of development reflects its maturity. All community health and development programs mature and change over time. People who conduct evaluations, as well as those who use their findings, need to consider the dynamic nature of programs. For example, a new program that just received its first grant may differ in many respects from one that has been running for over a decade.

At least three phases of development are commonly recognized: planning , implementation , and effects or outcomes . In the planning stage, program activities are untested and the goal of evaluation is to refine plans as much as possible. In the implementation phase, program activities are being field tested and modified; the goal of evaluation is to see what happens in the "real world" and to improve operations. In the effects stage, enough time has passed for the program's effects to emerge; the goal of evaluation is to identify and understand the program's results, including those that were unintentional.

Context

A description of the program's context considers the important features of the environment in which the program operates. This includes understanding the area's history, geography, politics, and social and economic conditions, and also what other organizations have done. A realistic and responsive evaluation is sensitive to a broad range of potential influences on the program. An understanding of the context lets users interpret findings accurately and assess their generalizability. For example, a program to improve housing in an inner-city neighborhood might have been a tremendous success, but would likely not work in a small town on the other side of the country without significant adaptation.

Logic model

A logic model synthesizes the main program elements into a picture of how the program is supposed to work. It makes explicit the sequence of events that are presumed to bring about change. Often this logic is displayed in a flow-chart, map, or table to portray the sequence of steps leading to program results.

Creating a logic model allows stakeholders to improve and focus program direction. It reveals assumptions about conditions for program effectiveness and provides a frame of reference for one or more evaluations of the program. A detailed logic model can also be a basis for estimating the program's effect on endpoints that are not directly measured. For example, it may be possible to estimate the rate of reduction in disease from a known number of persons experiencing the intervention if there is prior knowledge about its effectiveness.
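As a purely illustrative example of the kind of estimate described above (not part of the Community Tool Box material), the arithmetic is simply the program's reach multiplied by an effectiveness rate taken from prior evaluations; both figures below are hypothetical.

# Hypothetical logic-model estimate: expected number of people changing a
# risk behavior = people reached by the program * effectiveness rate
# reported in prior evaluations of similar interventions.
people_reached = 2500        # invented program reach for this year
effectiveness_rate = 0.12    # invented rate drawn from earlier studies
estimated_change = people_reached * effectiveness_rate
print(f"Estimated people changing behavior: {estimated_change:.0f}")   # 300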

The breadth and depth of a program description will vary for each program evaluation. And so, many different activities may be part of developing that description. For instance, multiple sources of information could be pulled together to construct a well-rounded description. The accuracy of an existing program description could be confirmed through discussion with stakeholders. Descriptions of what's going on could be checked against direct observation of activities in the field. A narrow program description could be fleshed out by addressing contextual factors (such as staff turnover, inadequate resources, political pressures, or strong community participation) that may affect program performance.

Focus the Evaluation Design

By focusing the evaluation design, we mean doing advance planning about where the evaluation is headed, and what steps it will take to get there. It isn't possible or useful for an evaluation to try to answer all questions for all stakeholders; there must be a focus. A well-focused plan is a safeguard against using time and resources inefficiently.

Depending on what you want to learn, some types of evaluation will be better suited than others. However, once data collection begins, it may be difficult or impossible to change what you are doing, even if it becomes obvious that other methods would work better. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance to be useful, feasible, proper, and accurate.

Among the issues to consider when focusing an evaluation are:

Purpose refers to the general intent of the evaluation. A clear purpose serves as the basis for the design, methods, and use of the evaluation. Taking time to articulate an overall purpose will help your organization avoid making uninformed decisions about how the evaluation should be conducted and used.

There are at least four general purposes for which a community group might conduct an evaluation:

  • To gain insight. This happens, for example, when deciding whether to use a new approach (e.g., would a neighborhood watch program work for our community?). Knowledge from such an evaluation will provide information about its practicality. For a developing program, information from evaluations of similar programs can provide the insight needed to clarify how its activities should be designed.
  • To improve how things get done. This is appropriate in the implementation stage when an established program tries to describe what it has done. This information can be used to describe program processes, to improve how the program operates, and to fine-tune the overall strategy. Evaluations done for this purpose include efforts to improve the quality, effectiveness, or efficiency of program activities.
  • To determine what the effects of the program are. Evaluations done for this purpose examine the relationship between program activities and observed consequences. For example, are more students finishing high school as a result of the program? Programs most appropriate for this type of evaluation are mature programs that are able to state clearly what happened and who it happened to. Such evaluations should provide evidence about what the program's contribution was to reaching longer-term goals such as a decrease in child abuse or crime in the area. This type of evaluation helps establish the accountability, and thus the credibility, of a program to funders and to the community.
  • To affect participants. The process of evaluation itself can benefit those who take part in it. For example, an evaluation can:
  • Empower program participants (for example, being part of an evaluation can increase community members' sense of control over the program);
  • Supplement the program (for example, using a follow-up questionnaire can reinforce the main messages of the program);
  • Promote staff development (for example, by teaching staff how to collect, analyze, and interpret evidence); or
  • Contribute to organizational growth (for example, the evaluation may clarify how the program relates to the organization's mission).

Users are the specific individuals who will receive evaluation findings. They will directly experience the consequences of inevitable trade-offs in the evaluation process. For example, a trade-off might be having a relatively modest evaluation to fit the budget with the outcome that the evaluation results will be less certain than they would be for a full-scale evaluation. Because they will be affected by these tradeoffs, intended users have a right to participate in choosing a focus for the evaluation. An evaluation designed without adequate user involvement in selecting the focus can become a misguided and irrelevant exercise. By contrast, when users are encouraged to clarify intended uses, priority questions, and preferred methods, the evaluation is more likely to focus on things that will inform (and influence) future actions.

Uses describe what will be done with what is learned from the evaluation. There is a wide range of potential uses for program evaluation. Generally speaking, the uses fall in the same four categories as the purposes listed above: to gain insight, improve how things get done, determine what the effects of the program are, and affect participants. The following list gives examples of uses in each category.

Some specific examples of evaluation uses

To gain insight:

  • Assess needs and wants of community members
  • Identify barriers to use of the program
  • Learn how to best describe and measure program activities

To improve how things get done:

  • Refine plans for introducing a new practice
  • Determine the extent to which plans were implemented
  • Improve educational materials
  • Enhance cultural competence
  • Verify that participants' rights are protected
  • Set priorities for staff training
  • Make mid-course adjustments
  • Clarify communication
  • Determine if client satisfaction can be improved
  • Compare costs to benefits
  • Find out which participants benefit most from the program
  • Mobilize community support for the program

To determine what the effects of the program are:

  • Assess skills development by program participants
  • Compare changes in behavior over time
  • Decide where to allocate new resources
  • Document the level of success in accomplishing objectives
  • Demonstrate that accountability requirements are fulfilled
  • Use information from multiple evaluations to predict the likely effects of similar programs

To affect participants:

  • Reinforce messages of the program
  • Stimulate dialogue and raise awareness about community issues
  • Broaden consensus among partners about program goals
  • Teach evaluation skills to staff and other stakeholders
  • Gather success stories
  • Support organizational change and improvement

The evaluation needs to answer specific questions. Drafting questions encourages stakeholders to reveal what they believe the evaluation should answer - that is, which questions matter most to them. The process of developing evaluation questions further refines the focus of the evaluation.

The methods available for an evaluation are drawn from behavioral science and social research and development. Three types of methods are commonly recognized: experimental, quasi-experimental, and observational or case study designs. Experimental designs use random assignment to compare the effect of an intervention between otherwise equivalent groups (for example, comparing a randomly assigned group of students who took part in an after-school reading program with those who didn't). Quasi-experimental methods make comparisons between groups that aren't equivalent (e.g., program participants vs. those on a waiting list) or use comparisons within a group over time, such as an interrupted time series in which the intervention may be introduced sequentially across different individuals, groups, or contexts. Observational or case study methods use comparisons within a group to describe and explain what happens (e.g., comparative case studies with multiple communities).
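
As a concrete illustration of the experimental design mentioned above, the minimal sketch below compares made-up outcome scores for a hypothetical after-school reading program. It is not part of the framework itself; the data and the rough standard-error calculation are illustrative only.

```python
# Illustrative sketch only: an experimental (randomized two-group) comparison
# for a hypothetical after-school reading program. All scores are invented.
from statistics import mean, stdev
from math import sqrt

program_group = [72, 85, 78, 90, 69, 88, 81, 77]      # randomly assigned to the program
comparison_group = [70, 74, 68, 80, 65, 73, 71, 69]    # randomly assigned to no program

diff = mean(program_group) - mean(comparison_group)

# Rough (unpooled) standard error of the difference in means, useful for judging
# whether the observed difference is larger than ordinary chance variation.
se = sqrt(stdev(program_group) ** 2 / len(program_group)
          + stdev(comparison_group) ** 2 / len(comparison_group))

print(f"Observed difference in mean scores: {diff:.1f} (standard error ~ {se:.1f})")
```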

No design is necessarily better than another. Evaluation methods should be selected because they provide the appropriate information to answer stakeholders' questions, not because they are familiar, easy, or popular. The choice of methods has implications for what will count as evidence, how that evidence will be gathered, and what kind of claims can be made. Because each method option has its own biases and limitations, evaluations that mix methods are generally more robust.

Over the course of an evaluation, methods may need to be revised or modified. Circumstances that make a particular approach useful can change. For example, the intended use of the evaluation could shift from discovering how to improve the program to helping decide about whether the program should continue or not. Thus, methods may need to be adapted or redesigned to keep the evaluation on track.

Agreements summarize the evaluation procedures and clarify everyone's roles and responsibilities. An agreement describes how the evaluation activities will be implemented. Elements of an agreement include statements about the intended purpose, users, uses, and methods, as well as a summary of the deliverables, those responsible, a timeline, and budget.

The formality of the agreement depends upon the relationships that exist between those involved. For example, it may take the form of a legal contract, a detailed protocol, or a simple memorandum of understanding. Regardless of its formality, creating an explicit agreement provides an opportunity to verify the mutual understanding needed for a successful evaluation. It also provides a basis for modifying procedures if that turns out to be necessary.

As you can see, focusing the evaluation design may involve many activities. For instance, both supporters and skeptics of the program could be consulted to ensure that the proposed evaluation questions are politically viable. A menu of potential evaluation uses appropriate for the program's stage of development could be circulated among stakeholders to determine which is most compelling. Interviews could be held with specific intended users to better understand their information needs and timeline for action. Resource requirements could be reduced when users are willing to employ more timely but less precise evaluation methods.

Gather Credible Evidence

Credible evidence is the raw material of a good evaluation. The information learned should be seen by stakeholders as believable, trustworthy, and relevant to answering their questions. This requires thinking broadly about what counts as "evidence." Such decisions are always situational; they depend on the question being posed and the motives for asking it. For some questions, a stakeholder's standard for credibility could demand the results of a randomized experiment. For another question, a set of well-done, systematic observations, such as interactions between an outreach worker and community residents, will have high credibility. The difference depends on what kind of information the stakeholders want and the situation in which it is gathered.

Context matters! In some situations, it may be necessary to consult evaluation specialists. This may be especially true when concerns about data quality are high. In other circumstances, local people may offer the deepest insights. Regardless of their expertise, however, those involved in an evaluation should strive to collect information that will convey a credible, well-rounded picture of the program and its efforts.

Having credible evidence strengthens the evaluation results as well as the recommendations that follow from them. Although all types of data have limitations, it is possible to improve an evaluation's overall credibility. One way to do this is by using multiple procedures for gathering, analyzing, and interpreting data. Encouraging participation by stakeholders can also enhance perceived credibility. When stakeholders help define questions and gather data, they will be more likely to accept the evaluation's conclusions and to act on its recommendations.

The following features of evidence gathering typically affect how credible it is seen as being:

Indicators translate general concepts about the program and its expected effects into specific, measurable parts.

Examples of indicators include:

  • The program's capacity to deliver services
  • The participation rate
  • The level of client satisfaction
  • The amount of intervention exposure (how many people were exposed to the program, and for how long they were exposed)
  • Changes in participant behavior
  • Changes in community conditions or norms
  • Changes in the environment (e.g., new programs, policies, or practices)
  • Longer-term changes in population health status (e.g., estimated teen pregnancy rate in the county)

Indicators should address the criteria that will be used to judge the program. That is, they reflect the aspects of the program that are most meaningful to monitor. Several indicators are usually needed to track the implementation and effects of a complex program or intervention.

One way to develop multiple indicators is to create a "balanced scorecard," which contains indicators that are carefully selected to complement one another. According to this strategy, program processes and effects are viewed from multiple perspectives using small groups of related indicators. For instance, a balanced scorecard for a single program might include indicators of how the program is being delivered; what participants think of the program; what effects are observed; what goals were attained; and what changes are occurring in the environment around the program.

Another approach to using multiple indicators is based on a program logic model, such as we discussed earlier in the section. A logic model can be used as a template to define a full spectrum of indicators along the pathway that leads from program activities to expected effects. For each step in the model, qualitative and/or quantitative indicators could be developed.
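
To make the idea of using a logic model as a template more concrete, here is a minimal, purely hypothetical sketch of indicators mapped to logic-model steps; the step names and indicators are examples, not a required or recommended set.

```python
# Hypothetical example: indicators organized along the steps of a program logic model.
logic_model_indicators = {
    "inputs": ["funding secured", "number of trained staff"],
    "activities": ["number of workshops delivered", "outreach contacts made"],
    "outputs": ["participation rate", "level of client satisfaction"],
    "short-term effects": ["changes in participant knowledge and behavior"],
    "longer-term effects": ["changes in community conditions or population health status"],
}

for step, indicators in logic_model_indicators.items():
    print(f"{step}: {', '.join(indicators)}")
```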

Indicators can be broad-based and don't need to focus only on a program's long-term goals. They can also address intermediary factors that influence program effectiveness, including such intangible factors as service quality, community capacity, or inter-organizational relations. Indicators for these and similar concepts can be created by systematically identifying and then tracking markers of what is said or done when the concept is expressed.

In the course of an evaluation, indicators may need to be modified or new ones adopted. Also, measuring program performance by tracking indicators is only one part of evaluation, and shouldn't be mistaken for a sufficient basis for decision making on its own. There are definite perils to using performance indicators as a substitute for completing the evaluation process and reaching fully justified conclusions. For example, an indicator such as a rising rate of unemployment may be falsely assumed to reflect a failing program when it is actually due to changing environmental conditions beyond the program's control.

Sources of evidence in an evaluation may be people, documents, or observations. More than one source may be used to gather evidence for each indicator. In fact, selecting multiple sources provides an opportunity to include different perspectives about the program and enhances the evaluation's credibility. For instance, an inside perspective may be reflected by internal documents and comments from staff or program managers; whereas clients and those who do not support the program may provide different, but equally relevant perspectives. Mixing these and other perspectives provides a more comprehensive view of the program or intervention.

The criteria used to select sources should be clearly stated so that users and other stakeholders can interpret the evidence accurately and assess if it may be biased. In addition, some sources provide information in narrative form (for example, a person's experience when taking part in the program) and others are numerical (for example, how many people were involved in the program). The integration of qualitative and quantitative information can yield evidence that is more complete and more useful, thus meeting the needs and expectations of a wider range of stakeholders.

Quality refers to the appropriateness and integrity of information gathered in an evaluation. High quality data are reliable and informative. They are easier to collect if the indicators have been well defined. Other factors that affect quality may include instrument design, data collection procedures, training of those involved in data collection, source selection, coding, data management, and routine error checking. Obtaining quality data will entail trade-offs (e.g., breadth vs. depth); stakeholders should decide together what is most important to them. Because all data have limitations, the intent of a practical evaluation is to strive for a level of quality that meets the stakeholders' threshold for credibility.

Quantity refers to the amount of evidence gathered in an evaluation. It is necessary to estimate in advance the amount of information that will be required and to establish criteria to decide when to stop collecting data - to know when enough is enough. Quantity affects the level of confidence or precision users can have - how sure we are that what we've learned is true. It also partly determines whether the evaluation will be able to detect effects. All evidence collected should have a clear, anticipated use.
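
One familiar way to see how the quantity of evidence affects precision is the margin-of-error calculation for a survey proportion. The sketch below assumes a simple random sample and an approximate 95% confidence level; the observed rate and sample sizes are illustrative only.

```python
# Illustrative sketch: how sample size (quantity of evidence) affects precision.
from math import sqrt

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a sample proportion p based on n respondents."""
    return 1.96 * sqrt(p * (1 - p) / n)

observed_rate = 0.15  # e.g., 15% of respondents report witnessing a violent act
for n in (50, 200, 800):
    moe = margin_of_error(observed_rate, n)
    print(f"n = {n:3d}: 15% plus or minus {moe * 100:.1f} percentage points")
```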

By logistics , we mean the methods, timing, and physical infrastructure for gathering and handling evidence. People and organizations also have cultural preferences that dictate acceptable ways of asking questions and collecting information, including who would be perceived as an appropriate person to ask the questions. For example, some participants may be unwilling to discuss their behavior with a stranger, whereas others are more at ease with someone they don't know. Therefore, the techniques for gathering evidence in an evaluation must be in keeping with the cultural norms of the community. Data collection procedures should also ensure that confidentiality is protected.

Justify Conclusions

The process of justifying conclusions recognizes that evidence in an evaluation does not necessarily speak for itself. Evidence must be carefully considered from a number of different stakeholders' perspectives to reach conclusions that are well-substantiated and justified. Conclusions become justified when they are linked to the evidence gathered and judged against agreed-upon values set by the stakeholders. Stakeholders must agree that conclusions are justified in order to use the evaluation results with confidence.

The principal elements involved in justifying conclusions based on evidence are:

Standards reflect the values held by stakeholders about the program. They provide the basis to make program judgments. The use of explicit standards for judgment is fundamental to sound evaluation. In practice, when stakeholders articulate and negotiate their values, these become the standards to judge whether a given program's performance will, for instance, be considered "successful," "adequate," or "unsuccessful."

Analysis and synthesis

Analysis and synthesis are methods to discover and summarize an evaluation's findings. They are designed to detect patterns in evidence, either by isolating important findings (analysis) or by combining different sources of information to reach a larger understanding (synthesis). Mixed method evaluations require the separate analysis of each evidence element, as well as a synthesis of all sources to examine patterns that emerge. Deciphering facts from a given body of evidence involves deciding how to organize, classify, compare, and display information. These decisions are guided by the questions being asked, the types of data available, and especially by input from stakeholders and primary intended users.

Interpretation

Interpretation is the effort to figure out what the findings mean. Uncovering facts about a program's performance isn't enough to draw conclusions. The facts must be interpreted to understand their practical significance. For example, the statement "15% of the people in our area witnessed a violent act last year" may be interpreted differently depending on the situation. If 50% of community members surveyed five years ago had witnessed a violent act in the previous year, stakeholders might conclude that, while violence is still a problem, things are getting better in the community. However, if five years ago only 7% of those surveyed said the same thing, community organizations may see this as a sign that they need to change what they are doing. In short, interpretations draw on the information and perspectives that stakeholders bring to the evaluation. They can be strengthened through active participation or interaction with the data and preliminary explanations of what happened.

Judgments are statements about the merit, worth, or significance of the program. They are formed by comparing the findings and their interpretations against one or more selected standards. Because multiple standards can be applied to a given program, stakeholders may reach different or even conflicting judgments. For instance, a program that increases its outreach by 10% from the previous year may be judged positively by program managers, based on standards of improved performance over time. Community members, however, may feel that despite improvements, a minimum threshold of access to services has still not been reached. Their judgment, based on standards of social equity, would therefore be negative. Conflicting claims about a program's quality, value, or importance often indicate that stakeholders are using different standards or values in making judgments. This type of disagreement can be a catalyst to clarify values and to negotiate the appropriate basis (or bases) on which the program should be judged.

Recommendations

Recommendations are actions to consider as a result of the evaluation. Forming recommendations requires information beyond just what is necessary to form judgments. For example, knowing that a program is able to increase the services available to battered women doesn't necessarily translate into a recommendation to continue the effort, particularly when there are competing priorities or other effective alternatives. Thus, recommendations about what to do with a given intervention go beyond judgments about a specific program's effectiveness.

If recommendations aren't supported by enough evidence, or if they aren't in keeping with stakeholders' values, they can undermine an evaluation's credibility. By contrast, an evaluation can be strengthened by recommendations that anticipate and respond to what users will want to know.

Three things might increase the chances that recommendations will be relevant and well-received:

  • Sharing draft recommendations
  • Soliciting reactions from multiple stakeholders
  • Presenting options instead of directive advice

Justifying conclusions in an evaluation is a process that involves several possible steps. For instance, conclusions could be strengthened by searching for alternative explanations to the ones you have chosen, and then showing why those alternatives are unsupported by the evidence. When there are different but equally well supported conclusions, each could be presented with a summary of its strengths and weaknesses. Techniques to analyze, synthesize, and interpret findings might be agreed upon before data collection begins.

Ensure Use and Share Lessons Learned

It is naive to assume that lessons learned in an evaluation will necessarily be used in decision making and subsequent action. Deliberate effort on the part of evaluators is needed to ensure that the evaluation findings will be used appropriately. Preparing for their use involves strategic thinking and continued vigilance in looking for opportunities to communicate and influence. Both of these should begin in the earliest stages of the process and continue throughout the evaluation.

The key elements for ensuring that the recommendations from an evaluation are used are:

Design refers to how the evaluation's questions, methods, and overall processes are constructed. As discussed in the third step of this framework (focusing the evaluation design), the evaluation should be organized from the start to achieve specific agreed-upon uses. Having a clear purpose that is focused on the use of what is learned helps those who will carry out the evaluation to know who will do what with the findings. Furthermore, the process of creating a clear design will highlight ways that stakeholders, through their many contributions, can improve the evaluation and facilitate the use of the results.

Preparation

Preparation refers to the steps taken to get ready for the future uses of the evaluation findings. The ability to translate new knowledge into appropriate action is a skill that can be strengthened through practice. In fact, building this skill can itself be a useful benefit of the evaluation. It is possible to prepare stakeholders for future use of the results by discussing how potential findings might affect decision making.

For example, primary intended users and other stakeholders could be given a set of hypothetical results and asked what decisions or actions they would make on the basis of this new knowledge. If they indicate that the evidence presented is incomplete or irrelevant and that no action would be taken, then this is an early warning sign that the planned evaluation should be modified. Preparing for use also gives stakeholders more time to explore both positive and negative implications of potential results and to identify different options for program improvement.

Feedback is the communication that occurs among everyone involved in the evaluation. Giving and receiving feedback creates an atmosphere of trust among stakeholders; it keeps an evaluation on track by keeping everyone informed about how the evaluation is proceeding. Primary intended users and other stakeholders have a right to comment on evaluation decisions. From a standpoint of ensuring use, stakeholder feedback is a necessary part of every step in the evaluation. Obtaining valuable feedback can be encouraged by holding discussions during each step of the evaluation and routinely sharing interim findings, provisional interpretations, and draft reports.

Follow-up refers to the support that many users need during the evaluation and after they receive evaluation findings. Because of the amount of effort required, reaching justified conclusions in an evaluation can seem like an end in itself. It is not. Active follow-up may be necessary to remind users of the intended uses of what has been learned. Follow-up may also be required to stop lessons learned from becoming lost or ignored in the process of making complex or political decisions. To guard against such oversight, it may be helpful to have someone involved in the evaluation serve as an advocate for the evaluation's findings during the decision-making phase.

Facilitating the use of evaluation findings also carries with it the responsibility to prevent misuse. Evaluation results are always bounded by the context in which the evaluation was conducted. Some stakeholders, however, may be tempted to take results out of context or to use them for purposes other than those they were developed for. For instance, over-generalizing the results from a single case study to make decisions that affect all sites in a national program is a misuse of a case study evaluation.

Similarly, program opponents may misuse results by overemphasizing negative findings without giving proper credit for what has worked. Active follow-up can help to prevent these and other forms of misuse by ensuring that evidence is only applied to the questions that were the central focus of the evaluation.

Dissemination

Dissemination is the process of communicating the procedures or the lessons learned from an evaluation to relevant audiences in a timely, unbiased, and consistent fashion. Like other elements of the evaluation, the reporting strategy should be discussed in advance with intended users and other stakeholders. Planning effective communications also requires considering the timing, style, tone, message source, vehicle, and format of information products. Regardless of how communications are constructed, the goal for dissemination is to achieve full disclosure and impartial reporting.

Along with the uses for evaluation findings, there are also uses that flow from the very process of evaluating. These "process uses" should be encouraged. The people who take part in an evaluation can experience profound changes in beliefs and behavior. For instance, an evaluation can challenge staff members to look differently at what they are doing and to question the assumptions that connect program activities with intended effects.

Evaluation also prompts staff to clarify their understanding of the goals of the program. This greater clarity, in turn, helps staff members to better function as a team focused on a common end. In short, immersion in the logic, reasoning, and values of evaluation can have very positive effects, such as basing decisions on systematic judgments instead of on unfounded assumptions.

Additional process uses for evaluation include:

  • By defining indicators, what really matters to stakeholders becomes clear
  • It helps make outcomes matter by changing the reinforcements connected with achieving positive results. For example, a funder might offer "bonus grants" or "outcome dividends" to a program that has shown a significant amount of community change and improvement.

Standards for "good" evaluation

There are standards to assess whether all of the parts of an evaluation are well-designed and working to their greatest potential. The Joint Committee on Standards for Educational Evaluation developed "The Program Evaluation Standards" for this purpose. These standards, designed to assess evaluations of educational programs, are also relevant for programs and interventions related to community health and development.

The program evaluation standards make it practical to conduct sound and fair evaluations. They offer well-supported principles to follow when faced with having to make tradeoffs or compromises. Attending to the standards can guard against an imbalanced evaluation, such as one that is accurate and feasible, but isn't very useful or sensitive to the context. Another example of an imbalanced evaluation is one that would be genuinely useful, but is impossible to carry out.

The following standards can be applied while developing an evaluation design and throughout the course of its implementation. Remember, the standards are written as guiding principles, not as rigid rules to be followed in all situations.

The 30 more specific standards are grouped into four categories: utility, feasibility, propriety, and accuracy.

Utility Standards

The utility standards ensure that the evaluation will provide useful information to the people who need it.

The utility standards are:

  • Stakeholder Identification : People who are involved in (or will be affected by) the evaluation should be identified, so that their needs can be addressed.
  • Evaluator Credibility : The people conducting the evaluation should be both trustworthy and competent, so that the evaluation will be generally accepted as credible or believable.
  • Information Scope and Selection : Information collected should address pertinent questions about the program, and it should be responsive to the needs and interests of clients and other specified stakeholders.
  • Values Identification: The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for judgments about merit and value are clear.
  • Report Clarity: Evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation. This will help ensure that essential information is provided and easily understood.
  • Report Timeliness and Dissemination: Significant midcourse findings and evaluation reports should be shared with intended users so that they can be used in a timely fashion.
  • Evaluation Impact: Evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the evaluation will be used.

Feasibility Standards

The feasibility standards ensure that the evaluation makes sense - that the steps that are planned are both viable and pragmatic.

The feasibility standards are:

  • Practical Procedures: The evaluation procedures should be practical, to keep disruption of everyday activities to a minimum while needed information is obtained.
  • Political Viability : The evaluation should be planned and conducted with anticipation of the different positions or interests of various groups. This should help in obtaining their cooperation so that possible attempts by these groups to curtail evaluation operations or to misuse the results can be avoided or counteracted.
  • Cost Effectiveness: The evaluation should be efficient and produce enough valuable information that the resources used can be justified.

Propriety Standards

The propriety standards ensure that the evaluation is an ethical one, conducted with regard for the rights and interests of those involved. The eight propriety standards follow.

  • Service Orientation : Evaluations should be designed to help organizations effectively serve the needs of all of the targeted participants.
  • Formal Agreements : The responsibilities in an evaluation (what is to be done, how, by whom, when) should be agreed to in writing, so that those involved are obligated to follow all conditions of the agreement, or to formally renegotiate it.
  • Rights of Human Subjects : Evaluation should be designed and conducted to respect and protect the rights and welfare of human subjects, that is, all participants in the study.
  • Human Interactions : Evaluators should respect basic human dignity and worth when working with other people in an evaluation, so that participants don't feel threatened or harmed.
  • Complete and Fair Assessment : The evaluation should be complete and fair in its examination, recording both strengths and weaknesses of the program being evaluated. This allows strengths to be built upon and problem areas addressed.
  • Disclosure of Findings : The people working on the evaluation should ensure that all of the evaluation findings, along with the limitations of the evaluation, are accessible to everyone affected by the evaluation, and any others with expressed legal rights to receive the results.
  • Conflict of Interest: Conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results.
  • Fiscal Responsibility : The evaluator's use of resources should reflect sound accountability procedures and otherwise be prudent and ethically responsible, so that expenditures are accounted for and appropriate.

Accuracy Standards

The accuracy standards ensure that the evaluation findings are considered correct.

There are 12 accuracy standards:

  • Program Documentation: The program should be described and documented clearly and accurately, so that what is being evaluated is clearly identified.
  • Context Analysis: The context in which the program exists should be thoroughly examined so that likely influences on the program can be identified.
  • Described Purposes and Procedures: The purposes and procedures of the evaluation should be monitored and described in enough detail that they can be identified and assessed.
  • Defensible Information Sources: The sources of information used in a program evaluation should be described in enough detail that the adequacy of the information can be assessed.
  • Valid Information: The information gathering procedures should be chosen or developed and then implemented in such a way that they will assure that the interpretation arrived at is valid.
  • Reliable Information : The information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable.
  • Systematic Information: The information from an evaluation should be systematically reviewed and any errors found should be corrected.
  • Analysis of Quantitative Information: Quantitative information - data from observations or surveys - in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
  • Analysis of Qualitative Information: Qualitative information - descriptive information from interviews and other sources - in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
  • Justified Conclusions: The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can understand their worth.
  • Impartial Reporting: Reporting procedures should guard against the distortion caused by personal feelings and biases of people involved in the evaluation, so that evaluation reports fairly reflect the evaluation findings.
  • Metaevaluation: The evaluation itself should be evaluated against these and other pertinent standards, so that it is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.

Applying the framework: Conducting optimal evaluations

There is ever-increasing agreement on the worth of evaluation; in fact, it is often required by funders and other constituents. So, community health and development professionals can no longer question whether or not to evaluate their programs. Instead, the appropriate questions are:

  • What is the best way to evaluate?
  • What are we learning from the evaluation?
  • How will we use what we learn to become more effective?

The framework for program evaluation helps answer these questions by guiding users to select evaluation strategies that are useful, feasible, proper, and accurate.

To use this framework requires quite a bit of skill in program evaluation. In most cases there are multiple stakeholders to consider, the political context may be divisive, steps don't always follow a logical order, and limited resources may make it difficult to take a preferred course of action. An evaluator's challenge is to devise an optimal strategy, given the conditions she is working under. An optimal strategy is one that accomplishes each step in the framework in a way that takes into account the program context and is able to meet or exceed the relevant standards.

This framework also makes it possible to respond to common concerns about program evaluation. For instance, many evaluations are not undertaken because they are seen as being too expensive. The cost of an evaluation, however, is relative; it depends upon the question being asked and the level of certainty desired for the answer. A simple, low-cost evaluation can deliver information valuable for understanding and improvement.

Rather than discounting evaluations as a time-consuming sideline, the framework encourages evaluations that are timed strategically to provide necessary feedback. This makes it possible to link evaluation closely with everyday practice.

Another concern centers on the perceived technical demands of designing and conducting an evaluation. However, the practical approach endorsed by this framework focuses on questions that can improve the program.

Finally, the prospect of evaluation troubles many staff members because they perceive evaluation methods as punishing ("They just want to show what we're doing wrong."), exclusionary ("Why aren't we part of it? We're the ones who know what's going on."), and adversarial ("It's us against them."). The framework instead encourages an evaluation approach that is designed to be helpful and engages all interested stakeholders in a process that welcomes their participation.

Evaluation is a powerful strategy for distinguishing programs and interventions that make a difference from those that don't. It is a driving force for developing and adapting sound strategies, improving existing programs, and demonstrating the results of investments in time and other resources. It also helps determine if what is being done is worth the cost.

This recommended framework for program evaluation is both a synthesis of existing best practices and a set of standards for further improvement. It supports a practical approach to evaluation based on steps and standards that can be applied in almost any setting. Because the framework is purposefully general, it provides a stable guide for designing and conducting a wide range of evaluation efforts in a variety of specific program areas. The framework can be used as a template to create useful evaluation plans that contribute to understanding and improvement. For additional information on the requirements of good evaluation, and some straightforward steps to make a good evaluation of an intervention more feasible, read The Magenta Book - Guidance for Evaluation.

Online Resources

Are You Ready to Evaluate your Coalition? poses 15 questions to help your group decide whether your coalition is ready to evaluate itself and its work.

The  American Evaluation Association Guiding Principles for Evaluators  helps guide evaluators in their professional practice.

CDC Evaluation Resources  provides a list of resources for evaluation, as well as links to professional associations and journals.

Chapter 11: Community Interventions in the "Introduction to Community Psychology" explains professionally-led versus grassroots interventions, what it means for a community intervention to be effective, why a community needs to be ready for an intervention, and the steps to implementing community interventions.

The  Comprehensive Cancer Control Branch Program Evaluation Toolkit  is designed to help grantees plan and implement evaluations of their NCCCP-funded programs. The toolkit provides general guidance on evaluation principles and techniques, as well as practical templates and tools.

Developing an Effective Evaluation Plan  is a workbook provided by the CDC. In addition to information on designing an evaluation plan, this book also provides worksheets as a step-by-step guide.

EvaluACTION , from the CDC, is designed for people interested in learning about program evaluation and how to apply it to their work. Evaluation is a process, one dependent on what you’re currently doing and on the direction in which you’d like to go. In addition to providing helpful information, the site also features an interactive Evaluation Plan & Logic Model Builder, so you can create customized tools for your organization to use.

Evaluating Your Community-Based Program  is a handbook designed by the American Academy of Pediatrics covering a variety of topics related to evaluation.

GAO Designing Evaluations  is a handbook provided by the U.S. Government Accountability Office with copious information regarding program evaluations.

The CDC's  Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide  is a "how-to" guide for planning and implementing evaluation activities. The manual, based on CDC’s Framework for Program Evaluation in Public Health, is intended to assist with planning, designing, implementing and using comprehensive evaluations in a practical way.

McCormick Foundation Evaluation Guide  is a guide to planning an organization’s evaluation, with several chapters dedicated to gathering information and using it to improve the organization.

A Participatory Model for Evaluating Social Programs from the James Irvine Foundation.

Practical Evaluation for Public Managers  is a guide to evaluation written by the U.S. Department of Health and Human Services.

Penn State Program Evaluation  offers information on collecting different forms of data and how to measure different community markers.

Program Evaluation  information page from Implementation Matters.

The Program Manager's Guide to Evaluation  is a handbook provided by the Administration for Children and Families with detailed answers to nine big questions regarding program evaluation.

Program Planning and Evaluation  is a website created by the University of Arizona. It provides links to information on several topics including methods, funding, types of evaluation, and reporting impacts.

User-Friendly Handbook for Program Evaluation  is a guide to evaluations provided by the National Science Foundation.  This guide includes practical information on quantitative and qualitative methodologies in evaluations.

W.K. Kellogg Foundation Evaluation Handbook  provides a framework for thinking about evaluation as a relevant and useful program tool. It was originally written for program directors with direct responsibility for the ongoing evaluation of the W.K. Kellogg Foundation.

Print Resources

This Community Tool Box section is an edited version of:

CDC Evaluation Working Group. (1999). (Draft). Recommended framework for program evaluation in public health practice . Atlanta, GA: Author.

The article cites the following references:

Adler, M., & Ziglio, E. (1996). Gazing into the oracle: the Delphi method and its application to social policy and community health and development. London: Jessica Kingsley Publishers.

Barrett, F.   Program Evaluation: A Step-by-Step Guide.  Sunnycrest Press, 2013. This practical manual includes helpful tips to develop evaluations, tables illustrating evaluation approaches, evaluation planning and reporting templates, and resources if you want more information.

Basch, C., Silepcevich, E., Gold, R., Duncan, D., & Kolbe, L. (1985).   Avoiding type III errors in health education program evaluation: a case study . Health Education Quarterly. 12(4):315-31.

Bickman L, & Rog, D. (1998). Handbook of applied social research methods. Thousand Oaks, CA: Sage Publications.

Boruch, R.  (1998).  Randomized controlled experiments for evaluation and planning. In Handbook of applied social research methods, edited by Bickman L., & Rog. D. Thousand Oaks, CA: Sage Publications: 161-92.

Centers for Disease Control and Prevention DoHAP. Evaluating CDC HIV prevention programs: guidance and data system . Atlanta, GA: Centers for Disease Control and Prevention, Division of HIV/AIDS Prevention, 1999.

Centers for Disease Control and Prevention. Guidelines for evaluating surveillance systems. Morbidity and Mortality Weekly Report 1988;37(S-5):1-18.

Centers for Disease Control and Prevention. Handbook for evaluating HIV education . Atlanta, GA: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Adolescent and School Health, 1995.

Cook, T., & Campbell, D. (1979). Quasi-experimentation . Chicago, IL: Rand McNally.

Cook, T.,& Reichardt, C. (1979).  Qualitative and quantitative methods in evaluation research . Beverly Hills, CA: Sage Publications.

Cousins, J.,& Whitmore, E. (1998).   Framing participatory evaluation. In Understanding and practicing participatory evaluation , vol. 80, edited by E Whitmore. San Francisco, CA: Jossey-Bass: 5-24.

Chen, H. (1990).  Theory driven evaluations . Newbury Park, CA: Sage Publications.

de Vries, H., Weijts, W., Dijkstra, M., & Kok, G. (1992).  The utilization of qualitative and quantitative data for health education program planning, implementation, and evaluation: a spiral approach . Health Education Quarterly.1992; 19(1):101-15.

Dyal, W. (1995).  Ten organizational practices of community health and development: a historical perspective . American Journal of Preventive Medicine;11(6):6-8.

Eddy, D. (1998). Performance measurement: problems and solutions. Health Affairs; 17(4):7-25.

Harvard Family Research Project. (1998). Performance measurement. The Evaluation Exchange, vol. 4, pp. 1-15.

Eoyang, G., & Berkas, T. (1996). Evaluation in a complex adaptive system.

Taylor-Powell, E., Steele, S., & Douglah, M. (1999). Planning a program evaluation. Madison, Wisconsin: University of Wisconsin Cooperative Extension.

Fawcett, S.B., Paine-Andrews, A., Francisco, V.T., Schultz, J.A., Richter, K.P., Berkley-Patton, J., Fisher, J., Lewis, R.K., Lopez, C.M., Russos, S., Williams, E.L., Harris, K.J., & Evensen, P. (2001). Evaluating community initiatives for health and development. In I. Rootman, D. McQueen, et al. (Eds.), Evaluating health promotion approaches (pp. 241-277). Copenhagen, Denmark: World Health Organization - Europe.

Fawcett, S., Sterling, T., Paine-Andrews, A., Harris, K., Francisco, V., et al. (1996). Evaluating community efforts to prevent cardiovascular diseases. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion.

Fetterman, D., Kaftarian, S., & Wandersman, A. (1996). Empowerment evaluation: knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage Publications.

Frechtling, J.,& Sharp, L. (1997).  User-friendly handbook for mixed method evaluations . Washington, DC: National Science Foundation.

Goodman, R., Speers, M., McLeroy, K., Fawcett, S., Kegler M., et al. (1998).  Identifying and defining the dimensions of community capacity to provide a basis for measurement . Health Education and Behavior;25(3):258-78.

Greene, J.  (1994). Qualitative program evaluation: practice and promise . In Handbook of Qualitative Research, edited by NK Denzin and YS Lincoln. Thousand Oaks, CA: Sage Publications.

Haddix, A., Teutsch. S., Shaffer. P., & Dunet. D. (1996). Prevention effectiveness: a guide to decision analysis and economic evaluation . New York, NY: Oxford University Press.

Hennessy, M.  Evaluation. In Statistics in Community health and development , edited by Stroup. D.,& Teutsch. S. New York, NY: Oxford University Press, 1998: 193-219

Henry, G. (1998). Graphing data. In Handbook of applied social research methods , edited by Bickman. L., & Rog.  D.. Thousand Oaks, CA: Sage Publications: 527-56.

Henry, G. (1998).  Practical sampling. In Handbook of applied social research methods , edited by  Bickman. L., & Rog. D.. Thousand Oaks, CA: Sage Publications: 101-26.

Institute of Medicine. Improving health in the community: a role for performance monitoring . Washington, DC: National Academy Press, 1997.

Joint Committee on Educational Evaluation, James R. Sanders (Chair). The program evaluation standards: how to assess evaluations of educational programs . Thousand Oaks, CA: Sage Publications, 1994.

Kaplan, R., & Norton, D. (1992). The balanced scorecard: measures that drive performance. Harvard Business Review; Jan-Feb: 71-9.

Kar, S. (1989). Health promotion indicators and actions . New York, NY: Springer Publications.

Knauft, E. (1993).   What independent sector learned from an evaluation of its own hard-to -measure programs . In A vision of evaluation, edited by ST Gray. Washington, DC: Independent Sector.

Koplan, J. (1999)  CDC sets millennium priorities . US Medicine 4-7.

Lipsey, M. (1998). Design sensitivity: statistical power for applied experimental research. In Handbook of applied social research methods, edited by Bickman, L., & Rog, D. Thousand Oaks, CA: Sage Publications: 39-68.

Lipsey, M. (1993). Theory as method: small theories of treatments . New Directions for Program Evaluation;(57):5-38.

Lipsey, M. (1997).  What can you build with thousands of bricks? Musings on the cumulation of knowledge in program evaluation . New Directions for Evaluation; (76): 7-23.

Love, A.  (1991).  Internal evaluation: building organizations from within . Newbury Park, CA: Sage Publications.

Miles, M., & Huberman, A. (1994).  Qualitative data analysis: a sourcebook of methods . Thousand Oaks, CA: Sage Publications, Inc.

National Quality Program. (1999).  National Quality Program , vol. 1999. National Institute of Standards and Technology.

National Quality Program. (1999). Baldrige index outperforms S&P 500 for fifth year. National Quality Program, 1999.

National Quality Program. Health care criteria for performance excellence , vol. 1999. National Quality Program, 1998.

Newcomer, K.  Using statistics appropriately. In Handbook of Practical Program Evaluation, edited by Wholey,J.,  Hatry, H., & Newcomer. K. San Francisco, CA: Jossey-Bass, 1994: 389-416.

Patton, M. (1990).  Qualitative evaluation and research methods . Newbury Park, CA: Sage Publications.

Patton, M (1997).  Toward distinguishing empowerment evaluation and placing it in a larger context . Evaluation Practice;18(2):147-63.

Patton, M. (1997).  Utilization-focused evaluation . Thousand Oaks, CA: Sage Publications.

Perrin, B. Effective use and misuse of performance measurement . American Journal of Evaluation 1998;19(3):367-79.

Perrin, E, Koshel J. (1997).  Assessment of performance measures for community health and development, substance abuse, and mental health . Washington, DC: National Academy Press.

Phillips, J. (1997).  Handbook of training evaluation and measurement methods . Houston, TX: Gulf Publishing Company.

Porteous, N., Sheldrick, B., & Stewart, P. (1997). Program evaluation tool kit: a blueprint for community health and development management. Ottawa, Canada: Community health and development Research, Education, and Development Program, Ottawa-Carleton Health Department.

Posavac, E., & Carey R. (1980).  Program evaluation: methods and case studies . Prentice-Hall, Englewood Cliffs, NJ.

Preskill, H. & Torres R. (1998).  Evaluative inquiry for learning in organizations . Thousand Oaks, CA: Sage Publications.

Public Health Functions Project. (1996). The public health workforce: an agenda for the 21st century . Washington, DC: U.S. Department of Health and Human Services, Community health and development Service.

Public Health Training Network. (1998).  Practical evaluation of public health programs . CDC, Atlanta, GA.

Reichardt, C., & Mark M. (1998).  Quasi-experimentation . In Handbook of applied social research methods, edited by L Bickman and DJ Rog. Thousand Oaks, CA: Sage Publications, 193-228.

Rossi, P., & Freeman H.  (1993).  Evaluation: a systematic approach . Newbury Park, CA: Sage Publications.

Rush, B., & Ogborne, A. (1995). Program logic models: expanding their role and structure for program planning and evaluation. Canadian Journal of Program Evaluation; 6:95-106.

Sanders, J. (1993).  Uses of evaluation as a means toward organizational effectiveness. In A vision of evaluation , edited by ST Gray. Washington, DC: Independent Sector.

Schorr, L. (1997).   Common purpose: strengthening families and neighborhoods to rebuild America . New York, NY: Anchor Books, Doubleday.

Scriven, M. (1998) . A minimalist theory of evaluation: the least theory that practice requires . American Journal of Evaluation.

Shadish, W., Cook, T., Leviton, L. (1991).  Foundations of program evaluation . Newbury Park, CA: Sage Publications.

Shadish, W. (1998). Evaluation theory is who we are. American Journal of Evaluation; 19(1):1-19.

Shulha, L., & Cousins, J. (1997).  Evaluation use: theory, research, and practice since 1986 . Evaluation Practice.18(3):195-208

Sieber, J. (1998).   Planning ethically responsible research . In Handbook of applied social research methods, edited by L Bickman and DJ Rog. Thousand Oaks, CA: Sage Publications: 127-56.

Steckler, A., McLeroy, K., Goodman, R., Bird, S., McCormick, L. (1992).  Toward integrating qualitative and quantitative methods: an introduction . Health Education Quarterly;191-8.

Taylor-Powell, E., Rossing, B., Geran, J. (1998). Evaluating collaboratives: reaching the potential. Madison, Wisconsin: University of Wisconsin Cooperative Extension.

Teutsch, S. (1992). A framework for assessing the effectiveness of disease and injury prevention. Morbidity and Mortality Weekly Report: Recommendations and Reports Series; 41(RR-3):1-13.

Torres, R., Preskill, H., Piontek, M., (1996).   Evaluation strategies for communicating and reporting: enhancing learning in organizations . Thousand Oaks, CA: Sage Publications.

Trochim, W. (1999). Research methods knowledge base.

United Way of America. Measuring program outcomes: a practical approach . Alexandria, VA: United Way of America, 1996.

U.S. General Accounting Office. Case study evaluations . GAO/PEMD-91-10.1.9. Washington, DC: U.S. General Accounting Office, 1990.

U.S. General Accounting Office. Designing evaluations . GAO/PEMD-10.1.4. Washington, DC: U.S. General Accounting Office, 1991.

U.S. General Accounting Office. Managing for results: measuring program results that are under limited federal control . GAO/GGD-99-16. Washington, DC: 1998.

U.S. General Accounting Office. Prospective evaluation methods: the prospective evaluation synthesis. GAO/PEMD-10.1.10. Washington, DC: U.S. General Accounting Office, 1990.

U.S. General Accounting Office. The evaluation synthesis . Washington, DC: U.S. General Accounting Office, 1992.

U.S. General Accounting Office. Using statistical sampling . Washington, DC: U.S. General Accounting Office, 1992.

Wandersman, A., Morrissey, E., Davino, K., Seybolt, D., Crusto, C., et al. Comprehensive quality programming and accountability: eight essential strategies for implementing successful prevention programs . Journal of Primary Prevention 1998;19(1):3-30.

Weiss, C. (1995). Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for families and children. In New Approaches to Evaluating Community Initiatives, edited by Connell, J., Kubisch, A., Schorr, L., & Weiss, C. New York, NY: Aspen Institute.

Weiss, C. (1998).  Have we learned anything new about the use of evaluation? American Journal of Evaluation;19(1):21-33.

Weiss, C. (1997).  How can theory-based evaluation make greater headway? Evaluation Review 1997;21(4):501-24.

W.K. Kellogg Foundation. (1998). The W.K. Kellogg Foundation Evaluation Handbook. Battle Creek, MI: W.K. Kellogg Foundation.

Wong-Reiger, D.,& David, L. (1995).  Using program logic models to plan and evaluate education and prevention programs. In Evaluation Methods Sourcebook II, edited by Love. A.J. Ottawa, Ontario: Canadian Evaluation Society.

Wholey, J., Hatry, H., & Newcomer, K. (2010). Handbook of Practical Program Evaluation. Jossey-Bass. This book serves as a comprehensive guide to the evaluation process and its practical applications for sponsors, program managers, and evaluators.

Yarbrough, D.B., Shulha, L.M., Hopson, R.K., & Caruthers, F.A. (2011). The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users (3rd ed.). Sage Publications.

Yin, R. (1988).  Case study research: design and methods . Newbury Park, CA: Sage Publications.


Reframing How We Assess Student Writing

The work of assessing writing assignments can be shared with students, creating a critical learning opportunity for them.


Every teaching role has its unique burden. Science teachers invest long hours in preparing laboratory experiments with expensive and sometimes hazardous materials. Math teachers wrestle with innumeracy and negative stereotypes of math, especially at the higher levels. History teachers work hard to avoid turning their subject matter into the rote memorization of isolated facts.

English teachers like myself, and upper division ones especially, are witness to a tidal wave of written work: quick writes, compositions, literary responses, annotated bibliographies, timed and take-home essays, and more. I consider myself an effective and disciplined teacher, but I am regularly submerged under the work produced by my 180+ students.

The problem with the traditional method of handling the paper load is that it is still fundamentally teacher-centered, relying on our time management and efficiency rather than on innovations that broaden the number of stakeholders and focus on qualitative outcomes over more traditional grading. A better way, one that I first considered as I learned to teach students to think about their audience, involves turning students from mere producers of writing into scholars and theorists whose experiences carry authority, merit, and value.

Students should write a lot. The following strategies aim to help English teachers in particular, and teachers of writing more broadly, understand how to reframe the assigning and assessing of writing to improve students’ skills—while improving teachers’ mental health, too.

4 Ways to Increase Student Writers’ Authority

1. Complicate the audience: One of the biggest problems with in-school writing, whether in high school or college, is the one-dimensionality of the compositions. They are generally intended for an audience of one—the person doing the grading—and read like an extended advertisement, parroting the grader’s ideas back at them.

My time at the Reynolds High School Journalism Institute as a journalism teacher showed me how powerful writing for a complex, multifaceted audience can be. Rather than writing to impress, students should aim for communicating their ideas, regardless of topic, to a heterogeneous and educated but uninformed audience that is willing to be convinced—if the writer conveys information with careful, measured argument and prose. This requires that teachers teach and reinforce an understanding of audience as an expansive, open category rather than a closed one.

2. Develop the students’ ability to synthesize: The most important aspect of the AP English Language and Composition exam, and one that is increasingly relevant outside of the classroom, is the synthesis essay. For this task, students read a series of short texts on a topic and then create their own informed position. Instead of a straightforward argumentative essay—though there is that, too, on the exam—students must aim to be inclusive and nuanced, situating their thought among ideas they find interesting and those they actively disagree with. The goal here is not to win the argument but to demonstrate an understanding of the topic and take a stance, anticipating and responding to a reader’s counterarguments and questions.

In the classroom, teachers can easily create opportunities for synthesis writing. For example, teachers can have students draw on their peers’ ideas about a topic as the bank of information they can use in their own responses. The resulting papers will aim to integrate and respond to the many classroom perspectives and teach students that their classmates are sources of wisdom. An essay about America’s place in the world, the purpose of education, or our duties or obligations to the environment, for instance, would produce a wide range of opinions that students could then synthesize, expanding their own understanding and situating themselves among the stances of their peers.

3. Focus on individual growth over grades: Overemphasizing grades can stifle growth and creativity. Students of writing should understand how they are developing relative to their own past performance—in addition to where they stack up against other writers generally. Ideally, this means that teachers would work with students on developing a writing portfolio, a cross-section of work completed throughout the term or class that reflects their strengths and growth as writers.

Using those materials as a historical record, teachers can lead students through the revision process in a deeper, more comprehensive way—not just fixing the small, conventional things but also looking at persistent, high-level structural and organizational issues that, once addressed, can turn adequate writers into exceptional ones.

One of the main differences I have found between the typical and the excellent student writer is an awareness of one’s own prose and style. After taking a close, honest look at their body of work, students become more comfortable speaking to their own writing in a mature and introspective way. At a bare minimum, requiring revisions will ensure that students know how to improve their work, taking a look at it later with clear eyes.

4. Let students practice self and peer assessment: Every year, I tell my students that my objective is to make myself irrelevant—I’ll help during the course of the year, but they eventually need to go it alone. In my class, this has meant incorporating metacognition as a part of my classroom’s daily practice.

Students spend a great deal of time scoring essays—their own, those of their peers, and, when I can get them, sample essays from other teachers or the AP exam. While rubrics are critical to ensuring reflective conversations, it’s not enough to ask students to evaluate the writing against the rubric alone. Rather, I try to have students mimic the kind of conversations that teachers have with colleagues when they’re assessing writing: The goal is not just a grade, but a clear, persuasive explanation that identifies specific passages and choices the author made.

From there, teachers can have high-level conversations about why the student score is right or wrong, and move on to a brief, targeted writing conference on specific ways to tweak the piece. As students gain fluency, these writing conferences can be student-to-student, gradually removing the teacher from the equation (I told you I’d make myself irrelevant!).

The important takeaway here—across all of these strategies—is that teachers are not the best or only source of assessment, and they are not the only audience for writing. It is far better to teach students how to fish; far better for teachers to spread the hard work of assessment around so more writing practice can be incorporated; and far better for student writers to consider the craft from many perspectives, and with an awareness of the complexity of real audiences of readers.


The Evaluation Essay

The key features of a well-written evaluation essay about a film are described below.

A Concise Description of the Subject

You should include just enough information to help readers who may not be familiar with your subject understand it. Remember, the goal is to evaluate, not summarize.

For instance, if you are writing about a movie, describe the main plot points, providing only what readers need to understand the context of your evaluation. Since you are evaluating the movie, avoid retelling its entire story.

Another thing to keep in mind is that depending on your topic and medium, some of this descriptive information may be in visual or audio form.

Clearly Defined Criteria

Since you are evaluating the subject, you will need to determine clear criteria as the basis for your judgment. In reviews or other evaluations written for a broad audience, you can integrate the criteria into the discussion as reasons for your assessment. In more formal evaluations, you may need to announce your criteria explicitly.

For instance, you could evaluate a film based on the stars’ performances, the complexity of their characters, and the film’s coherence. There are lots of other criteria to choose from, depending on your film choice.

A few things to keep in mind when coming up with your criteria:

  • Don’t try to have too many things to evaluate. Three or four criteria are usually enough to support an overall evaluation of the subject.
  • Pick things relevant to evaluating your subject. For instance, if you are specifically reviewing a movie, you don’t want to include criteria evaluating the popcorn at the movie theater.
  • Remember, you’re going to have to define the criteria for your evaluation, so make sure you pick things you either know about or that you can learn about.

A Knowledgeable Discussion of the Subject

To evaluate something credibly, you need to show that you know it yourself and that you understand its context. Cite many examples showing your knowledge of the film. Some evaluations require that you research what other authoritative sources have said about your subject. You are welcome to refer to other film reviews to show you have researched other views, but your evaluation should be your own.

A Balanced and Fair Assessment

An evaluation is centered on a judgment, and that judgment should be balanced and fair; this is one reason to select your criteria before you begin. Seldom is something all good or all bad, and your audience knows this. If you present only the positive or only the negative, readers may doubt your credibility. Even if it feels awkward to include less-than-positive comments about something you enjoy, a fair evaluation acknowledges both strengths and weaknesses wherever they exist.

Well-Supported Reasons

You need to argue for your judgment, providing reasons and evidence that might include visual and audio as well as verbal material. Support your reasons with several specific examples from the film. This is also a good place to use your knowledge of other movies, movie terminology, and other references, not only to support your argument (that is, your evaluation) but also to establish your ethos as someone knowledgeable about the subject.

Step 1: Choosing a Topic

For this assignment, you will choose a film you have watched that was meaningful enough to evaluate. It can be one that was meaningful because it changed your perspective, for instance. You are also welcome to choose a critically acclaimed film that you have objections to. Choose something that strikes you as a film worth analyzing and discussing.

Things to consider while making this selection:

  • What is the purpose of your evaluation? Are you writing to affect your audience’s opinion of a film?
  • Who is your audience? To whom are you writing? What will your audience already know about the film? What will they expect to learn from your evaluation of it? Are they likely to agree with you or not?
  • What is your stance? What is your attitude toward the subject, and how will you show that you have evaluated it fairly and appropriately? Think about the tone you want to use: should it be reasonable? Passionate? Critical?

What film are you going to evaluate in this essay? Make sure it is accessible to you (accessible as in you own it, you have checked it out from the library, or it’s available through a subscription you have like Netflix, Amazon Prime, Disney Plus, etc.). You will need to watch it and take detailed notes so that you have specifics, dialogue, etc., to include. So, what film will you evaluate?

Step 2: Generating Ideas and Text

Now that you know the film you want to evaluate, it’s time to watch it. Take extensive notes as you watch; it should be clear that you have taken the time to watch and study your film and that you have thought through not only the criteria you want to discuss but also specific examples of those criteria.

Explore what you already know. Freewrite to answer the following questions:

  • What do you know about this subject?
  • What are your initial or gut feelings, and why do you feel as you do?
  • How does this film reflect or affect your basic values or beliefs?
  • How have others evaluated subjects like this?

Now, it’s time to identify criteria. Make a list of criteria you think should be used to evaluate your film. Consider which criteria will likely be important to your audience.

Once you have identified your criteria, work through the following steps:

  • Evaluate your subject. Study your film closely to determine to what extent it meets each of your criteria.
  • You may want to list your criteria and take notes related to each one as you watch the film.
  • You may develop a rating scale for each criterion to help stay focused on it.
  • Come up with a tentative judgment. Choose 3-4 criteria to discuss in your essay.
  • Compare your subject with others. Often, evaluating something involves comparing and contrasting it with similar things. We judge movies in comparison with other movies we’ve seen in a similar genre.
  • State your judgment as a tentative thesis statement. Your thesis statement should address both pros and cons. “Hawaii Five-O is fun to watch despite its stilted dialogue.” “Of the five sport utility vehicles tested, the Toyota 4 Runner emerged as the best in comfort, power, and durability, though not in styling or cargo capacity.” Both of these examples offer a judgment but qualify it according to the writer’s criteria. Experiment with thesis statements and highlight one you want to use.
  • Anticipate other opinions. I think Will Ferrell is a comic genius whose movies are first-rate. You think Will Ferrell is a terrible actor who makes awful movies. How can I write a review of his latest film that you will at least consider? One way is by acknowledging other opinions–and refuting those opinions as best I can. I may not persuade you to see Ferrell’s next film, but I can at least demonstrate that by certain criteria he should be appreciated. You may need to research how others have evaluated your subject.
  • Identify and support your reasons. Write out all the reasons you can think of that will convince your audience to accept your judgment. Review your list to identify the most convincing or important reasons. Then, review how well your subject meets your criteria and decide how best to support your reasons through examples, authoritative opinions, statistics, visual or audio evidence, or something else.

Step 3: Organization of the Evaluation Essay

One way to organize your evaluation is to start with your subject:

  • Describe what you are evaluating.
  • State your judgment.
  • Provide reasons and evidence, discussing criteria as you apply them.
  • Acknowledge objections or other opinions.
  • Restate your overall judgment.

Step 4: Drafting

Now that you’ve watched the thing, written the notes, and collected your thoughts, it’s time to draft. Use the organizational scheme you created in Step 3 to help you create your evaluation.

Step 5: Get Feedback

Step 6: Revising

Once you’ve received feedback, if possible, read through it and then walk away from the work for a little while. This will allow your brain time to process the feedback you received, making it much easier to sit back down and make adjustments. While revising, try to avoid messing with punctuation or fixing grammatical issues. Revision is when you focus on your ideas and make sure they are presented properly, so set aside plenty of time, or schedule multiple sessions, to go through your project.

Once you’re finished with revision—everything is well defined, claims justified, and conclusions given—it’s time to edit. This is when you correct punctuation and adjust grammatical issues. During this stage, try to focus on only one or two issues at a time. Work all the way through your project looking for those issues, and then start again with the next couple of issues you may need to smooth out.

Hopefully, you’ve finished all of these steps before the deadline. If you are running behind, make sure you reach out to your instructor to let them know; they may have some tips to help get you through the final push.

ATTRIBUTIONS AND LICENSE


“The Evaluation Essay” by Rachael Reynolds is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and adapts work from the source below:

Adapted from “Writing the Evaluation Essay” by Sara Layton, used according to CC BY-NC-SA 3.0.

UNM Core Writing OER Collection Copyright © 2023 by University of New Mexico is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.


Center for Teaching Innovation

How to Evaluate Group Work

Students working in small groups often learn more and demonstrate better retention than students taught in other instructional formats. When instructors incorporate group assignments and activities into their courses, they must make thoughtful decisions regarding how to organize the group, how to facilitate it, and how to evaluate the completed work.

Instructor Evaluations

  • Create a rubric to set evaluation standards and share with students to communicate expectations.
  • Assess the performance of the group and its individual members.
  • Give regular feedback so group members can gauge their progress both as a group and individually.
  • Decide what criteria to base final evaluations upon. For example, you might weigh the finished product, teamwork, and individual contributions differently (see the sketch after this list).
  • Consider adjusting grades based on peer evaluations.
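
To make the weighting idea above concrete, here is a minimal Python sketch of one way an instructor might combine rubric scores into a single group-work grade. The weight values, score names, and the peer-evaluation adjustment are illustrative assumptions, not a prescribed grading scheme.

    # Minimal sketch: combine instructor rubric scores for a group project.
    # The weights and the peer-evaluation adjustment below are assumptions
    # chosen for illustration, not a recommended formula.

    def group_member_grade(product, teamwork, individual, peer_scores,
                           weights=(0.5, 0.2, 0.3)):
        """Return a 0-100 grade for one group member.

        product, teamwork, individual: instructor rubric scores (0-100).
        peer_scores: 0-100 ratings this member received from peers.
        weights: relative weight of (product, teamwork, individual work).
        """
        w_product, w_teamwork, w_individual = weights
        base = w_product * product + w_teamwork * teamwork + w_individual * individual

        # Optional peer adjustment: scale the grade by the average peer rating,
        # capped at 1.0 so peers can lower but not raise the instructor's grade.
        if peer_scores:
            base *= min(1.0, (sum(peer_scores) / len(peer_scores)) / 100)

        return round(base, 1)

    # Example: strong product, average teamwork, good individual contribution,
    # and three peer ratings.
    print(group_member_grade(product=92, teamwork=75, individual=85,
                             peer_scores=[90, 80, 85]))

Whether peer ratings scale the grade, merely inform it, or are left out entirely is a design choice; whichever you pick, explain it to students before the project begins.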

Peer Evaluations

Consider providing a rubric to foster consistent peer evaluations of participation, quality, and quantity of work.

  • This may reveal participation issues that the instructor might not otherwise know about.
  • Students who know that their peers will evaluate them may contribute more to the group and have a greater stake in the project.
  • Completing evaluations early in the project allows groups to assess how they can improve.

General Strategies for Evaluation

  • Groups need to know who may be struggling to complete assignments, and members need to know they cannot sit back and let others do all the work. You can assess individual student progress by giving spot quizzes and evaluate group progress by setting up meetings with each group to review the project status.
  • Once or twice during the group task, ask group members to fill out a group and/or peer evaluation to assess team effectiveness. Consider asking “What action has each member taken that was helpful for the group? What action could each member take to make the group more effective?”
  • Help students reflect on what they have learned and how they have learned it. Consider asking students to complete a short survey that focuses on their individual contributions to the group, how the group interacted together, and what the individual student learned from the project in relation to the rest of the course.
  • Explain your grading system to students before they begin their work. The system should encourage teamwork, positive interdependence, and individual accountability. If you are going to consider the group’s evaluation of each member’s work, it is best to have students evaluate each other independently and confidentially.

Example Group Work Assessment Rubric

Here is an example of a group work assessment rubric. Filling out a rubric for each member of the group can help instructors assess individual contributions to the group and the individual’s role as a team player.

This rubric can also be used by group members as a tool to guide a mid-semester or mid-project discussion on how each individual is contributing to the group.

Total Points ______

Notes and Comments:


Evaluation Essay

Barbara P

Evaluation Essay - Definition, Examples, and Writing Tips


Are you unsure about what it takes to evaluate things from your perspective in an evaluation essay?

If you’re having a hard time understanding how to present a balanced assessment of the subject, worry not!  We are here to help you get through the evaluation essay writing process.

In this blog, you will learn all about evaluation essays. From the definition, writing process, topics, tips, and a lot more, you’ll learn how to write an evaluation essay effortlessly!  

Continue reading to get a better idea.


  1. What is an Evaluation Essay?
  2. Evaluation Essay Structure
  3. How to Start an Evaluation Essay?
  4. How to Write an Evaluation Essay?
  5. How to Format Your Evaluation Essay?
  6. Evaluation Essay Examples
  7. Evaluation Essay Topics For College Students
  8. Evaluation Essay vs. Review

What is an Evaluation Essay?

Let’s first understand what an evaluation essay means; here is the standard definition:

An evaluation essay offers a value judgment or an opinion of something. It presents an overall view of a particular subject’s quality. Moreover, it provides a critical analysis and a complete evaluation of something.

What is the Purpose of an Evaluation Essay?

The main purpose of an evaluation essay is to present an opinion and evaluate a topic critically. This type of writing determines the condition, worth, or significance by careful appraisal and study.  

This essay features the writer’s opinion, but when done correctly, it does not sound opinionated. Instead, it provides the facts and evidence to justify the opinions about the essay’s subject.

To write a good evaluation essay, you need to master critical evaluation and present the evaluation in an unbiased manner. You may also discuss both the pros and cons of the subject.


Evaluation Essay Structure

The four different ways to format and organize the evaluation essay are as follows.

1. Chronological Structure

It is a sequential organization that could be used for evaluating historical or current events. It tells how something works and assesses the effectiveness of a mechanism, procedure, or process.

2. Spatial Structure

The spatial organization structure is used for evaluating or describing art or architecture. Here, you will define one element of the artifact and spatially move to the next. 

3. Compare and Contrast Structure

The compare and contrast structure is used to evaluate or review subjects such as a culinary or music genre. Here, the writer evaluates a subject by comparing and contrasting it with a known subject.

4. Point-by-Point Structure

The point-by-point structure is also used for culinary and music reviews. But, in this structure, you describe one element and then evaluate it, describe the second element and evaluate it, and so on.

After setting the criteria and collecting evidence to strengthen your judgment, you’ll start your evaluation essay. Let’s see what steps are involved in starting an evaluation essay.

How to Start an Evaluation Essay?

When you start writing an evaluation essay, grabbing the reader’s attention is essential. To do this, hook the reader from the very beginning and make sure your essay’s opening sets an engaging tone.

Step 1. Choose an Interesting Topic

Deciding the topic and evaluation essay criteria is important. Make sure it's not just compelling and interesting, but also informative so that you can find enough material for a detailed evaluation. 

Step 2. Set the Evaluation Essay Criteria

For an evaluation essay, you have to set the criteria for evaluation first. Criteria are the standards or measures by which someone assesses the quality or value of the subject. 

Some key points to establish the criteria are:

  • Identifying relevant aspects that relate to the subject 
  • Defining the criteria clearly so that it is specific and understandable for readers
  • Your criteria should be directly relevant to the nature of the subject
  • Always consider the audience’s expectations and standards while setting the criteria
  • Your thesis statement should always align with your evaluation criteria

Step 3. Collect Evidence for Your Judgment

The author’s judgment of the subject states whether the subject is good or bad. It is an overall assessment or the opinion supported by the evidence. The judgment corresponds to the benchmarks set by the author in the essay criteria. 

The evidence is a combination of supporting data and facts. Using the evidence, the author demonstrates how well the subject meets the judgment. The evidence serves as the foundation of your evaluation. 

Without providing strong and accurate evidence, you will not be able to convince the readers of your judgment. 

Step 4. Decide the Essay Structure

After that, decide on the structure that you want to follow. It can be any of the four structures described above, such as chronological or point-by-point.

Step 5. Craft the Essay Outline

When you create an essay outline, evaluate what should be added and removed. If you skip this step before writing, you may lose track of what to include in your essay while you write.

So, writing an outline for your evaluation essay is a critical step that eases your writing journey. 


Step 6. Declare Your Thesis Statement

For an evaluation essay that keeps the reader hooked from the start, opt for a catchy thesis statement. The thesis should state the main point of the evaluation.

In the thesis statement, you should always express your stance on the subject clearly. In doing so, the readers will have a clear idea about the purpose and direction of your essay. 

Now, understand how to write an evaluation essay by following the detailed procedure mentioned below.


How to Write an Evaluation Essay?

Here is a step-by-step guide for you to write an evaluation essay.

Step 1. Write the Introduction

The introduction is the first impression your readers will have of you, so it's crucial to make a good one. It should capture attention and excite readers, drawing them into what you have to say about this topic. 

The following are the elements that you should consider while writing the introduction:

  • Start with an interesting hook statement so that you can get the reader’s attention.
  • Provide background information about the topic for the reader to understand the subject
  • Establish the evaluation essay thesis statement. It sets out the overall purpose of the evaluation, so make sure it is apparent and to the point


Step 2. Draft the Body Section

The body of the essay typically consists of three paragraphs. Each paragraph holds a distinct idea related to the others, and the paragraphs should flow smoothly from start to finish, like a well-told story.

Here are the important points that must be included in the body paragraphs.

  • Start with the topic sentence that presents your judgment about the topic
  • Present the supporting evidence to back up the topic sentence and your viewpoint.
  • Present a balanced evaluative argument to show impartiality
  • Compare and contrast the subject to another subject to show the strengths and weaknesses
  • Present the evaluation from multiple perspectives, while being both positive and critical
  • Always use transition words between your paragraphs to ensure a smooth and coherent flow for the reader. 

Step 3. Write the Conclusion

The conclusion is your final chance to convince the reader to agree with your point of view. Summarize the essay and present your final evaluation of the subject.

Keep in mind the following aspects while writing a closing paragraph of an evaluation essay. 

  • Summarize the points and evaluative arguments that you made in the body section.
  • Justify your thesis statement.
  • Provide a concrete and secure conclusion to your argument by ultimately leaving the reader convinced by your evaluation.

Step 4. Proofread, Revise, and Edit

The final step is proofreading and editing. Always spend enough time reading your essay carefully; doing so will help you catch unintentional mistakes and correct them. If needed, you can also revise your essay two or three times.

How to Format Your Evaluation Essay?

For formatting your evaluation essay, follow the standard academic writing guidelines. You can opt for different formatting styles like APA, MLA, or Chicago. 

In general, you should stick to the below formatting guidelines: 

Font and Size:

  • Use a legible font such as Times New Roman or Arial.
  • Choose a standard font size, often 12-point.

Margins and Spacing:

  • Set one-inch margins on all sides of the paper.
  • Double-space the entire essay, including the title, headings, and body paragraphs.

Title:

  • Create a title for your essay that reflects the subject and purpose of the evaluation.
  • Center the title on the page.
  • Use title case (capitalize the first letter of each major word).

Header:

  • Include a header with your last name and page number in the top right corner.
  • Follow the format “Last Name Page Number” (e.g., “Smith 1”).

Citations (if applicable):

  • Include citations for any sources used in your evaluation.
  • Follow the citation style specified by your instructor or the required style guide (APA, MLA, Chicago).

Counterargument (if included):

  • Clearly label and present any counterargument.
  • Provide a well-reasoned response to the counterargument.

References or Works Cited Page (if applicable):

  • Include a separate page for references or a works cited page if your essay includes citations.
  • List all sources in the appropriate citation style.

Well, the time has come to look at some great evaluation essay examples. Getting help from sample essays is always a great way to perfect your evaluation papers.

Evaluation Essay Examples

Evaluation can be written on any topic, i.e., book, movie, music, etc. Below, we have given some evaluation essay examples for students: 

  • Evaluation Essay Sample (PDF)
  • Movie Evaluation Essay Example
  • Critical Evaluation Essay Example (PDF)
  • Product Evaluation Essay (PDF)
  • Source Evaluation Essay Example (PDF)
  • Employee Self-Evaluation Essay Example
  • How to Start a Self-Evaluation Essay Example (PDF)

Evaluation Essay Topics For College Students

For writing an amazing evaluation essay, the first thing that you require is an essay topic.  Here are some incredible topic ideas for college students. You can use or mold them according to your preference. 

  • Artificial intelligence's impact on society: A double-edged sword?
  • Evaluate the online teaching and on-campus teaching styles
  • Analyze and evaluate the Real Madrid football team and their performance
  • Is media a threat to cultural cohesion or a source of enrichment?
  • Compare and evaluate recorded music and live performance
  • Evaluate how a university's football team impacts students' personalities
  • Critically evaluate a remake of an original movie you have watched recently
  • Analyze how the roles of females and males changed in recent romantic movies
  • Evaluate your favorite restaurant, its food, aroma, and everything
  • Critically evaluate gender disparities in college majors and career choices.

Evaluation Essay vs. Review

At first glance, an evaluation essay might look like a review, but there are some notable differences between the two pieces of writing.

To conclude, after reading this step-by-step guide and the examples, you should have learned the art of writing a good evaluation essay and be able to provide a balanced and effective evaluation of the topics you choose.


Frequently Asked Questions

1. What are the four components of an evaluation essay?


The four components of an evaluation essay are:

  • Introduction
  • Background information

2. What are the 4 types of evaluation?

The four types of evaluation are:



Eberly Center for Teaching Excellence & Educational Innovation, Carnegie Mellon University: “Grading Methods for Group Work,” covering instructor assessment of the group product and student assessment of the group product.


Consultant for Evaluation of Zanzibar Gender Policy 2016 and its Plan of Action (2016-2020)

  • Location: Home based, Tanzania
  • Type of Contract: Individual Contract
  • Starting Date: 01-May-2024
  • Application Deadline: 21-Apr-2024 (Midnight, New York, USA)
  • Post Level: National Consultant
  • Languages Required: English

UNDP is committed to achieving workforce diversity in terms of gender, nationality and culture. Individuals from minority groups, indigenous groups and persons with disabilities are equally encouraged to apply. All applications will be treated with the strictest confidence. UNDP does not tolerate sexual exploitation and abuse, any kind of harassment, including sexual harassment, and discrimination. All selected candidates will, therefore, undergo rigorous reference and background checks.

UN Women, grounded in the vision of equality enshrined in the Charter of the United Nations, works for the elimination of discrimination against women and girls; the empowerment of women; and the achievement of equality between women and men as partners and beneficiaries of development, human rights, humanitarian action and peace and security.

The Revolutionary Government of Zanzibar in collaboration with UN Women and with financial support from the European Union Delegation has partnered to evaluate the Gender Policy 2016 – 2020 and develop the next Gender Policy 2024 – 2029.

The Revolutionary Government of Zanzibar has dedicated itself to implementing, monitoring, and reporting on its implementation of various international and regional conventions, treaties, and commitments for promoting gender equality and women's empowerment. These include the Universal Declaration of Human Rights (1948); the UN Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW, 1979); the Convention on the Rights of the Child (CRC, 1989); the Convention on the Elimination of Discrimination in Employment and Occupation (1958); the Convention on Equal Remuneration for Work of Equal Value (1951); the Beijing Platform for Action (1995); the ICPD Plan of Action (1994); the Convention on Workers with Family Responsibilities (1981); the Protocol to the African Charter on Human and Peoples' Rights on the Rights of Women in Africa (Maputo Protocol, 2003); and the Convention on Maternity Protection (2000). The Millennium Declaration of 2000 emphasized the role of the UN on human rights by declaring gender equality and women's empowerment an objective for UN member states, and the Sustainable Development Goals (SDGs) include a specific goal (Goal 5) on gender equality and women's empowerment as well as mainstreaming gender across the remaining goals. At the regional level, the URT recognizes the regional commitments set out in the African Charter on Human and Peoples' Rights (1981), the Declaration on the HIV/AIDS Epidemic at the XI International Conference on AIDS and STDs in Africa (1999), the Women's Declaration and Agenda for a Culture of Peace in Africa adopted at the close of a Pan-African Conference in Zanzibar (1999), and the Protocol to the African Charter on Human and Peoples' Rights (ACPHR) on the Rights of Women in Africa. Another important declaration is the SADC Declaration on Gender and Development (1997), which binds member countries to implement affirmative actions to promote female participation in politics. At the country level, the Zanzibar Constitution of 1984 declares equal rights of men and women and equal access to social, economic, and development opportunities.

To ensure GEWE commitments are achieved, the Revolutionary Government of Zanzibar established the national machinery mandated for overall GEWE coordination which is currently the Ministry of Community Development Gender Elderly and Children (MCDGEC). Amongst its key obligations is to monitor and report on progress on the implementation of the RGoZ commitments under various GEWE resolutions and instruments. 

With technical and financial support from UN Women and the European Union Delegation, the MCDGEC plans to evaluate the Gender Policy 2016 – 2020 and develop the next Gender Policy 2024 – 2029 as an overarching gender mainstreaming framework. The Zanzibar Gender Policy guides gender mainstreaming in all spheres of life in Zanzibar.

The Policy underscores and emphasizes the need to mainstream gender issues in all national, sub-national and sectoral policies, programs, plans, and budgets as well as promoting gender equity, equality, and women empowerment as essential accelerators for promoting social justice, peace, economic growth, and sound management of all sectors.

This partnership is part of the EU Framework in Tanzania, which embraces Gender Equality and Women's Empowerment as one of its main priorities, signing a unique initiative called “Gender Transformative Action: Breaking the Glass Ceiling” targeting both Tanzania Mainland and Zanzibar.

Furthermore, the evaluation of the Gender Policy is critical so that the next policy is better aligned with national priorities as articulated in Vision 2050 and with emerging global and regional commitments and agreements, such as the CSW agreed conclusions. At the national level, some changes have occurred in national development frameworks, including the formulation of the new Vision 2050, the medium-term development plan, namely the Zanzibar Development Plan (ZADEP, 2021/22-2026/27), and the National Gender Development Plan guiding the implementation of the United Republic of Tanzania Generation Equality Action Coalition commitments, especially on Women's Economic Justice and Rights and other Action Coalitions. Also, since 2016, some important emerging issues, including COVID-19 and climate change, have had disproportionate impacts and implications for gender equality and for the Gender Policy.

The Evaluation will provide information on the extent to which progress has been achieved, as well as identify challenges and emerging issues that need to be taken into consideration in the Review of the Zanzibar Gender Policy. 

The effective implementation of this assignment requires a competent consultant with an adequate understanding of the gender situation and context in Zanzibar. He/she must have high expertise and experience in carrying out policy, program, and project reviews, assessments, and evaluations. This assignment will be supervised by the Permanent Secretary of the MCDGEC, with technical support from the Directorate of Community Development, Gender, Elders, and Children and from UN Women. A designated Oversight Technical Committee will be established, composed of selected key stakeholders and gender experts from other MDAs, UN Agencies, DPs, and CSOs, to provide technical guidance and oversight to the consultant(s) throughout the process of conducting the assignment. The Oversight Technical Committee will also strengthen ownership of the process. The consultant(s) shall be ultimately accountable to the Principal Secretary (PS) of the MCDGEC for this assignment.

Duties and Responsibilities

The main objective of this consultancy is to conduct an evaluation of the Zanzibar Gender Policy 2016 and its Plan of Action (2016-2020), including through a Literature review, Key Informants interviews, and multi-stakeholder consultations including MDAs, Academia, Media, Private Sector, DPs, CSOs, OPDs and FBOs to provide clear findings and recommendations for consideration in the Review of the Zanzibar Gender Policy to strengthen ongoing efforts towards achieving GEWE objectives in Zanzibar.

More specifically, the objectives of the review are:

  • Assess the relevance, effectiveness, efficiency, coherence, and sustainability of the Gender Policy and its performance against the targets set out.
  • Identify new global, regional, and national evidence and emerging issues that have influenced the implementation of the policy-related programs.
  • Assess efficiency in the utilization of resources deployed in the implementation of policy.
  • Identify achievements, best practices, challenges, and lessons learned from the implementation of the Zanzibar Gender Policy and its Implementation Plan.
  • Draw recommendations and conclusions that will inform the development/ Review of the next Zanzibar Gender Policy.

4.  SPECIFIC TASKS

4.1 To achieve the above objectives of the assignment, the tasks and deliverables of the consultant will include the following:

4.1.1 Inception Phase

  • Conduct desk review/literature review of the available policies, programs, procedures/regulations, and reports, including global, regional, and national evidence and emerging issues that have influenced the implementation of the Policy related programs such as the Zanzibar Constitution, Parliamentary Reports, national budgetary allocations, policy guidelines, and legal frameworks pertinent to gender equality institutionalization, mainstreaming and implementation in the country.
  • Develop an inception report that demonstrates an understanding of the assignment, methodology/tools/checklist, stakeholders, and work plan and present it to the Oversight Technical Committee and officials responsible for coordinating the review of the Gender Policy, including the Gender Mainstreaming Technical Working Group (GMTWG).
  • Incorporate comments and present the Final Inception Report, including the revised methodology and review tools/checklists for the evaluation of the Gender Policy, for approval by the Permanent Secretary of the Ministry of Community Development, Gender, Elderly and Children.

4.1.2 Data Collection, Analysis and Reporting

  • Lead national government ministry, regional, and LGA consultations and key-informant interviews in both Unguja and Pemba, using the agreed methodology, checklists, and tools, including to assess stakeholders’ institutional and technical capacity, coordination, partnership building, and resource mobilization.
  • Conduct selected stakeholders’ consultations including UN agencies, development partners, autonomous agencies, non-governmental organizations/ civil society organizations, law enforcement agencies, the Judiciary, gender focal persons, private sector representatives, and media representatives.
  • Identify gender issues, challenges, and emerging issues that have been addressed by Zanzibar Gender Policy (2016) and its Plan of Action (2016-2020) in alignment with regional and global normative frameworks and national development priorities articulated in Vision 2050 and ZADEP.
  • Identify good practices, lessons learned/achievements, and gaps/challenges from the Zanzibar Gender Policy and its Plan of Action (2016-2020).
  • Develop a draft evaluation report that incorporates all issues raised during the evaluation process, organized by Gender Policy theme, detailing the main findings from the evaluation of the Zanzibar Gender Policy and its Plan of Action (2016-2020) and the recommendations to be considered in the reviewed/next Gender Policy.
  • Share the draft evaluation report and preliminary findings and recommendations with the Oversight Technical Committee and the Gender Mainstreaming Technical Working Group (GMTWG) for feedback and inputs.
  • Organize and facilitate national multi-stakeholder consultative meetings, including the Gender Mainstreaming Technical Working Group (GMTWG), to present and validate the Draft Evaluation Report of the Zanzibar Gender Policy and its Plan of Action (2016-2020).
  • Incorporate inputs from national multistakeholder consultative meetings into the Draft Evaluation Report of the Zanzibar Gender Policy (2016) and its Plan of Action (2016-2020)
  • Present a Revised draft Evaluation report to the Oversight Technical Committee and the selected stakeholders for feedback and input.
  •  Submit the final Evaluation Report of the Zanzibar Gender Policy (2016) and its Plan of Action (2016-2020) to the Principal Secretary of the Ministry of Community Development, Gender, Elders, and Children in both soft and hard copies to inform the drafting of the Reviewed Gender Policy.
  • Prepare minutes and activity reports from all consultative meetings and workshops at National, Regional, and LGA levels.
  • Develop and submit a Zanzibar Gender Policy evaluation process report that describes the entire evaluation process, a list of stakeholders consulted, a matrix of all stakeholders’ comments obtained throughout the evaluation; challenges encountered, and lessons learned.

5.0 Deliverables

The deliverables of this assignment will include:

  • Draft and Final Inception Report with detailed work plan, methodology, tools, roles, responsibilities, and time frame.
  • Literature Review Report detailing the main findings of relevant international, regional, and national documents, reports, and instruments, including the Zanzibar Constitution, Parliamentary Reports, national budgetary allocations, policy guidelines, and legal frameworks pertinent to gender equality institutionalization, mainstreaming, and implementation in the country.
  • Multi-Stakeholders’ Consultations Report to provide the main findings of in-depth consultations with key stakeholders (including, but not exclusive to, all relevant Government ministries and local government bodies, Members of the House of Representatives, the Gender Mainstreaming Technical Working Group, UN agencies, development partners, autonomous agencies, professional organizations, non-governmental organizations/civil society organizations, law enforcement agencies, the Judiciary, gender focal persons, and private sector representatives).
  • Draft Evaluation Report of the Zanzibar Gender Policy and its Plan of Action (2016-2020) on preliminary findings and recommendations.
  • Final Evaluation report of the Zanzibar Gender Policy and its Plan of Action (2016-2020).
  • Zanzibar Gender Policy evaluation process report that describes the entire evaluation process, a list of stakeholders consulted, a matrix of all stakeholders’ comments obtained throughout the evaluation; challenges encountered, and lessons learned.

The consultant will carry out the assignment over an agreed number of workdays within an overall period of three months, i.e., from 01 May 2024 to August 2024.

The consultant will sign the contract which will provide the consultancy details including the responsibilities, consultancy remuneration, modality of payments, and logistical arrangements during the assignment.

Payments will be made upon submission and acceptance of specified deliverables and submission of an invoice as follows:

Consultant’s Workplace and Official Travel

This is a home-based consultancy.

As part of this assignment, several trips to Pemba Island will be required.


Competencies

Core Values: 

  • Respect for Diversity 
  • Integrity 
  • Professionalism 

Core Competencies: 

  • Awareness and Sensitivity Regarding Gender Issues 
  • Accountability 
  • Creative Problem Solving 
  • Effective Communication 
  • Inclusive Collaboration 
  • Stakeholder Engagement 
  • Leading by Example 

Please visit this link for more information on UN Women’s Core Values and Competencies:  

https://www.unwomen.org/en/about-us/employment/application-process#_Values  

FUNCTIONAL COMPETENCIES: 

  • Strong technical knowledge and expertise in Gender Equality and Women Empowerment at all levels.
  • Outstanding technical knowledge and experience in the development of government capacity-building plans.
  • Outstanding writing skills, with proven ability to meet tight deadlines.
  • Communicates sensitively, effectively and creatively.

Required Skills and Experience

  • Advanced degree in gender and development, social science, development studies, planning, monitoring and evaluation, or related fields.
  • A project/program management certification would be an added advantage.

Experience:

  • At least 5 years of progressively responsible work experience in the development of gender-related policies.
  • Proven experience in review, assessment, and evaluation work in a development context and proven success in leading reviews of policies and programs, preferably related to gender.
  • Experience in the development of the Results Framework of the plan is required.
  • At least 10 years’ experience in assessing gender equality, women’s empowerment, human rights, and related policies and programs will be an added advantage.
  • Knowledge of the governance, policy development on gender equality, and women empowerment context within Zanzibar is an asset.
  • Ability and experience in interacting with people from diverse backgrounds.
  • Excellent analytical skills with a strong drive for results and capacity to work independently.

Fluency in English and Swahili is required.

Submission of application:

  • Personal CV and  Form P11 (P11 can be downloaded from: https://www.unwomen.org/sites/default/files/Headquarters/Attachments/Sections/About%20Us/Employment/UN-Women-P11-Personal-History-Form.doc )
  • A cover letter (maximum length: 1 page)
  • Managers may ask (ad hoc) for any other materials relevant to pre-assessing the relevance of their experience, such as reports, presentations, publications, campaigns, or other materials.

Kindly note that applications without a completed and signed UN Women P-11 form will be treated as incomplete and will not be considered for assessment.


COMMENTS

  1. Assessment Rubrics

    Assessment Rubrics. A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria, and for each criteria, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations.

  2. PDF Writing Assessment and Evaluation Rubrics

    Holistic scoring is a quick method of evaluating a composition based on the reader's general impression of the overall quality of the writing—you can generally read a student's composition and assign a score to it in two or three minutes. Holistic scoring is usually based on a scale of 0-4, 0-5, or 0-6.

  3. 7 Steps for How to Write an Evaluation Essay (Example & Template)

    How to write an Evaluation Essay. There are two secrets to writing a strong evaluation essay. The first is to aim for objective analysis before forming an opinion. The second is to use an evaluation criteria. Aim to Appear Objective before giving an Evaluation Argument. Your evaluation will eventually need an argument.

  4. Rubric Best Practices, Examples, and Templates

    Rubric Best Practices, Examples, and Templates. A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects ...

  5. Principles of Assessing Student Writing

    A criterion-based evaluation guide that communicates an instructor's expectations for student performance in an assignment, specifically through the identification and description of evaluation criteria. Our resource on Principles of Rubric Design is an excellent resource to draw from.

  6. Creating rubrics for effective assessment management

    This style of rubric is well suited to breaking apart a complex task into component skills and allows for evaluation of those components. It can also help determine the grade for the whole assignment based on performance on the component skills. This style of rubric can look similar to a general rubric but includes detailed grading information.

  7. Using rubrics

    A rubric can be a fillable pdf that can easily be emailed to students. Rubrics are most often used to grade written assignments, but they have many other uses: They can be used for oral presentations. They are a great tool to evaluate teamwork and individual contribution to group tasks. Rubrics facilitate peer-review by setting evaluation ...

  8. Guide: Academic Evaluations

    Types of Written Evaluation. Quite a few of the assignments writers are given at the university and in the workplace involve the process of evaluation. Reviews. One type of written evaluation that most people are familiar with is the review. Reviewers will attend performances, events, or places (like restaurants, movies, or concerts), basing ...

  9. Designing Assignments for Learning

    An authentic assessment provides opportunities for students to practice, consult resources, learn from feedback, and refine their performances and products accordingly (Wiggins 1990, 1998, 2014). Authentic assignments ask students to "do" the subject with an audience in mind and apply their learning in a new situation.

  10. PDF Writing Assessment and Evaluation Rubrics

    writing assignment in Writer 's Choice. • All assignments can be evaluated by using either the General Rubric for Holistic Evaluation or the General Rubric for Analytic Evaluation. • Most assignments can be evaluated by using one of the general rubrics or by using an analytic rubric specific to a particular writing mode.

  11. Creating and Using Rubrics

    Example 1: Philosophy Paper This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon). Example 2: Psychology Assignment Short, concept application homework assignment in cognitive psychology (Carnegie Mellon). Example 3: Anthropology Writing Assignments This rubric was designed for a series of short ...

  12. Evaluation of Writing Assignments

    The evaluation should communicate those strengths and weaknesses to the students. Moreover, the evaluation should levy a grade that is consistently applied to all assignments within that set. Such a hurdle poses problems for an instructor with more than 100 students and a handful of teaching assistants, some of whom may not possess English as ...

  13. Step 4: Develop Assessment Criteria and Rubrics

    Standardize assessment criteria among those assigning/assessing the same assignment. Facilitate peer evaluation of early drafts of assignment. Rubrics Can Help Student Learning. Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment;

  14. PDF Writing High-Quality Evaluations of Student Performance: Best Practices

    feedback you have discussed, to use whe n writing your summary evaluation. • Complete your written evaluations promptly, within a week of working with the student. • Describe specific behaviors and concrete examples in your evaluation. • Discuss midpoint feedback using competency-based language.

  15. Chapter 36. Introduction to Evaluation

    Program evaluation - the type of evaluation discussed in this section - is an essential organizational practice for all types of community health and development work. It is a way to evaluate the specific projects and activities community groups may take part in, rather than to evaluate an entire organization or comprehensive community initiative.

  16. 10.1 The Analysis and Evaluation Assignment

    Evaluative writing is a specific genre that analyzes a subject in order to make and support a "judgment call," a judgment based on specific, clear criteria. That judgment - which is your reasoned opinion - becomes the heart of the essay's thesis, clearly stating whether the subject is successful or not based on how it meets ...

  17. Self and Peer Assessment on Student Writing

    After taking a close, honest look at their body of work, students become more comfortable speaking to their own writing in a mature and introspective way. At a bare minimum, requiring revisions will ensure that students know how to improve their work, taking a look at it later with clear eyes. 4. Let students practice self and peer assessment ...

  18. The Evaluation Essay

    Step 1: Choosing a Topic. For this assignment, you will choose a film you have watched that was meaningful enough to evaluate. It can be one that was meaningful because it changed your perspective, for instance. You are also welcome to choose a film that was critically acclaimed but that you have objections to.

  19. How to Evaluate Group Work

    When instructors incorporate group assignments and activities into their courses, they must make thoughtful decisions about how to organize the group, how to facilitate it, and how to evaluate the completed work. Instructor Evaluations. Create a rubric to set evaluation standards and share it with students to communicate expectations.

  20. Evaluation Essay

    When you start writing an evaluation essay, grabbing the reader's attention is essential. Hook the reader from the beginning and keep the essay's tone engaging throughout. Step 1. Choose an Interesting Topic. Deciding on the topic and the evaluation criteria is important.

  21. Grading Methods for Group Work

    Performance Rubrics for 95820 Production Management Assignment; Performance Rubrics for 95821 Production Management Assignment; Weighted Peer Evaluation for Group Project, College of Humanities & Social Sciences; Worksheet to guide students' observation and analysis of Polar World.

  22. Self & Peer Evaluations of Group Work (PDF)

    Sample #1: Research Group Project. Self & Peer Evaluation for a Research Paper Project. Students are required to evaluate the personal productivity of each group member, including themselves: rate yourself and your group members in each of six categories, then total the score for yourself and for each group member.
