Rubric Design

Articulating your assessment values.

Reading, commenting on, and then assigning a grade to a piece of student writing requires intense attention and difficult judgment calls. Some faculty dread “the stack.” Students may share the faculty’s dim view of writing assessment, perceiving it as highly subjective. They wonder why one faculty member values evidence and correctness before all else, while another seeks a vaguely defined originality.

Writing rubrics can help address the concerns of both faculty and students by making writing assessment more efficient, consistent, and public. Whether it is called a grading rubric, a grading sheet, or a scoring guide, a writing assignment rubric lists criteria by which the writing is graded.

Why create a writing rubric?

  • It makes your tacit rhetorical knowledge explicit
  • It articulates community- and discipline-specific standards of excellence
  • It links the grade you give the assignment to the criteria
  • It can make your grading more efficient, consistent, and fair as you can read and comment with your criteria in mind
  • It can help you reverse engineer your course: once you have the rubrics created, you can align your readings, activities, and lectures with the rubrics to set your students up for success
  • It can help your students produce writing that you look forward to reading

How to create a writing rubric

Create a rubric at the same time you create the assignment. It will help you explain to the students what your goals are for the assignment.

  • Consider your purpose: do you need a rubric that addresses the standards for all the writing in the course? Or do you need to address the writing requirements and standards for just one assignment?  Task-specific rubrics are written to help teachers assess individual assignments or genres, whereas generic rubrics are written to help teachers assess multiple assignments.
  • Begin by listing the important qualities of the writing that will be produced in response to a particular assignment. It may be helpful to have several examples of excellent versions of the assignment in front of you: what writing elements do they all have in common? Among other things, these may include features of the argument, such as a main claim or thesis; use and presentation of sources, including visuals; and formatting guidelines such as the requirement of a works cited.
  • Then consider how the criteria will be weighted in grading. Perhaps all criteria are equally important, or perhaps there are two or three that all students must achieve to earn a passing grade. Decide what best fits the class and requirements of the assignment.

Consider involving students in the second and third steps above (listing the important qualities and deciding how they will be weighted). A class session devoted to developing a rubric can provoke many important discussions about the ways the features of the language serve the purpose of the writing. And when students themselves work to describe the writing they are expected to produce, they are more likely to achieve it.

At this point, you will need to decide if you want to create a holistic or an analytic rubric. There is much debate about these two approaches to assessment.

Comparing Holistic and Analytic Rubrics

Holistic Scoring

Holistic scoring aims to rate overall proficiency in a given student writing sample. It is often used in large-scale writing program assessment and impromptu classroom writing for diagnostic purposes.

General tenets of holistic scoring:

  • Responding to drafts is part of evaluation
  • Responses do not focus on grammar and mechanics during drafting and there is little correction
  • Marginal comments are kept to two or three per page, with summative comments at the end
  • End commentary attends to students’ overall performance across learning objectives as articulated in the assignment
  • Response language aims to foster students’ self-assessment

Holistic rubrics emphasize what students do well and generally increase efficiency; they may also be more valid because scoring includes the authentic, personal reaction of the reader. But holistic scores won’t tell a student how they’ve progressed relative to previous assignments and may be rater-dependent, reducing reliability. (For a summary of advantages and disadvantages of holistic scoring, see Becker, 2011, p. 116.)

Here is an example of a partial holistic rubric:

Summary meets all the criteria. The writer understands the article thoroughly. The main points in the article appear in the summary with all main points proportionately developed. The summary should be as comprehensive as possible and should read smoothly, with appropriate transitions between ideas. Sentences should be clear, without vagueness or ambiguity and without grammatical or mechanical errors.

A complete holistic rubric for a research paper (authored by Jonah Willihnganz) can be  downloaded here.

Analytic Scoring

Analytic scoring makes explicit the contribution to the final grade of each element of writing. For example, an instructor may choose to give 30 points for an essay whose ideas are sufficiently complex, that marshals good reasons in support of a thesis, and whose argument is logical; and 20 points for well-constructed sentences and careful copy editing.
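
To make the arithmetic concrete, here is a minimal sketch in Python of how an analytic score can be assembled from separately weighted criteria. The criteria, point values, and scores are invented for illustration and are not drawn from any particular rubric.

```python
# Minimal sketch: totaling an analytic rubric score from weighted criteria.
# The criteria, point values, and scores below are hypothetical examples.

max_points = {            # points available for each criterion
    "argument": 30,       # complexity of ideas, reasons, logical structure
    "evidence": 25,       # use and presentation of sources
    "organization": 25,
    "style_and_editing": 20,  # sentence construction, careful copy editing
}

earned = {                # one student's scores on the same criteria
    "argument": 24,
    "evidence": 20,
    "organization": 22,
    "style_and_editing": 16,
}

total = sum(earned.values())
possible = sum(max_points.values())
print(f"{total}/{possible} = {100 * total / possible:.0f}%")  # 82/100 = 82%
```

Because each criterion carries its own point value, the rubric itself documents how much each element of the writing contributes to the final grade.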

General tenets of analytic scoring:

  • Reflect emphases in your teaching and communicate the learning goals for the course
  • Emphasize student performance across criteria, which are established as central to the assignment in advance, usually on an assignment sheet
  • Typically take a quantitative approach, providing a scaled set of points for each criterion
  • Make the analytic framework available to students before they write  

Advantages of an analytic rubric include ease of training raters and improved reliability. Meanwhile, writers often can more easily diagnose the strengths and weaknesses of their work. But analytic rubrics can be time-consuming to produce, and raters may judge the writing holistically anyway. Moreover, many readers believe that writing traits cannot be separated. (For a summary of the advantages and disadvantages of analytic scoring, see Becker, 2011, p. 115.)

For example, a partial analytic rubric for a single trait, “addresses a significant issue”:

  • Excellent: Elegantly establishes the current problem, why it matters, to whom
  • Above Average: Identifies the problem; explains why it matters and to whom
  • Competent: Describes topic but relevance unclear or cursory
  • Developing: Unclear issue and relevance

A  complete analytic rubric for a research paper can be downloaded here.  In WIM courses, this language should be revised to name specific disciplinary conventions.

Whichever type of rubric you write, your goal is to avoid pushing students into prescriptive formulas and limiting thinking (e.g., “each paragraph has five sentences”). By carefully describing the writing you want to read, you give students a clear target, and, as Ed White puts it, “describe the ongoing work of the class” (75).

Writing rubrics contribute meaningfully to the teaching of writing. Think of them as a coaching aide. In class and in conferences, you can use the language of the rubric to help you move past generic statements about what makes good writing good to statements about what constitutes success on the assignment and in the genre or discourse community. The rubric articulates what you are asking students to produce on the page; once that work is accomplished, you can turn your attention to explaining how students can achieve it.

Works Cited

Becker, Anthony. “Examining Rubrics Used to Measure Writing Performance in U.S. Intensive English Programs.” The CATESOL Journal 22.1 (2010/2011): 113-30. Web.

White, Edward M. Teaching and Assessing Writing. ProQuest Info and Learning, 1985. Print.

Further Resources

CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” November 2006 (Revised March 2009). Conference on College Composition and Communication. Web.

Gallagher, Chris W. “Assess Locally, Validate Globally: Heuristics for Validating Local Writing Assessments.” Writing Program Administration 34.1 (2010): 10-32. Web.

Huot, Brian.  (Re)Articulating Writing Assessment for Teaching and Learning.  Logan: Utah State UP, 2002. Print.

Kelly-Riley, Diane, and Peggy O’Neill, eds. Journal of Writing Assessment. Web.

McKee, Heidi A., and Dànielle Nicole DeVoss, eds. Digital Writing Assessment & Evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web.

O’Neill, Peggy, Cindy Moore, and Brian Huot.  A Guide to College Writing Assessment . Logan: Utah State UP, 2009. Print.

Sommers, Nancy.  Responding to Student Writers . Macmillan Higher Education, 2013.

Straub, Richard. “Responding, Really Responding to Other Students’ Writing.” The Subject is Writing: Essays by Teachers and Students. Ed. Wendy Bishop. Boynton/Cook, 1999. Web.

White, Edward M., and Cassie A. Wright.  Assigning, Responding, Evaluating: A Writing Teacher’s Guide . 5th ed. Bedford/St. Martin’s, 2015. Print.

Know Your Terms: Holistic, Analytic, and Single-Point Rubrics

May 1, 2014


Whether you’re new to rubrics, or you just don’t know their formal names, it may be time for a primer on rubric terminology.

So let’s talk about rubrics for a few minutes. What we’re going to do here is describe two frequently used kinds of rubrics, holistic and analytic, plus a less common one called the single-point rubric (my favorite, for the record). For each one, we’ll look at an example and explore its pros and cons.

Holistic Rubrics

A holistic rubric is the most general kind. It lists three to five levels of performance, along with a broad description of the characteristics that define each level. The levels can be labeled with numbers (such as 1 through 4), letters (such as A through F), or words (such as Beginning through Exemplary). What each level is called isn’t what makes the rubric holistic — it’s the way the characteristics are all lumped together.

Suppose you’re an unusually demanding person. You want your loved ones to know what you expect if they should ever make you breakfast in bed. So you give them this holistic rubric:

When your breakfast is done, you simply gather your loved ones and say, “I’m sorry my darlings, but that breakfast was just a 2. Try harder next time.”

The main advantage of a holistic rubric is that it’s easy on the teacher — in the short run, anyway. Creating a holistic rubric takes less time than the others, and grading with one is faster, too. You just look over an assignment and give one holistic score to the whole thing.

The main disadvantage of a holistic rubric is that it doesn’t provide targeted feedback to students , which means they’re unlikely to learn much from the assignment. Although many holistic rubrics list specific characteristics for each level, the teacher gives only one score, without breaking it down into separate qualities. This often leads the student to approach the teacher and ask, “Why did you give me a 2?” If the teacher is the explaining kind, he will spend a few minutes breaking down the score. If not, he’ll say something like, “Read the rubric.” Then the student has to guess which factors had the biggest influence on her score. For a student who really tries hard, it can be heartbreaking to have no idea what she’s doing wrong.

Holistic rubrics are most useful in cases when there’s no time (or need, though that’s hard to imagine) for specific feedback. You see them in standardized testing — the essay portion of the SAT is scored with a 0-6 holistic rubric. When hundreds of thousands of essays have to be graded quickly, and by total strangers who have no time to provide feedback, a holistic rubric comes in handy.

Analytic Rubrics

An analytic rubric breaks down the characteristics of an assignment into parts, allowing the scorer to itemize and define exactly which aspects are strong and which ones need improvement.

So for the breakfast in bed example, an analytic rubric would look like this:

In this case, you’d give your loved ones a separate score for each category. They might get a 3 on Presentation, but a 2 on Food and just a 1 on Comfort. To make feedback even more targeted, you could also highlight specific phrases in the rubric, like, “the recipient is crowded during the meal” to indicate exactly what went wrong.

This is where we see the main advantage of the analytic rubric: It gives students a clearer picture of why they got the score they got. It is also good for the teacher, because it gives her the ability to justify a score on paper, without having to explain everything in a later conversation.

Analytic rubrics have two significant disadvantages, however: (1) Creating them takes a lot of time. Writing up descriptors of satisfactory work — completing the “3” column in this rubric, for example — is enough of a challenge on its own. But to have to define all the ways the work could go wrong, and all the ways it could exceed expectations, is a big, big task. And once all that work is done, (2) students won’t necessarily read the whole thing. Facing a 36-cell table crammed with 8-point font is enough to send most students straight into a nap. And that means they won’t clearly understand what’s expected of them.

Still, analytic rubrics are useful when you want to cover all your bases, and you’re willing to put in the time to really get clear on exactly what every level of performance looks like.

Single-Point Rubrics

A single-point rubric is a lot like an analytic rubric, because it breaks down the components of an assignment into different criteria. What makes it different is that it only describes the criteria for proficiency; it does not attempt to list all the ways a student could fall short, nor does it specify how a student could exceed expectations.

A single-point rubric for breakfast in bed would look like this:

Notice that the language in the “Criteria” column is exactly the same as the “3” column in the analytic rubric. When your loved ones receive this rubric, it will include your written comments on one or both sides of each category, telling them exactly how they fell short (“runny eggs,” for example) and how they excelled (“vase of flowers”). Just like with the analytic rubric, if a target was simply met,  you can just highlight the appropriate phrase in the center column.

If you’ve never used a single-point rubric, it’s worth a try. In 2010, Jarene Fluckiger studied a collection of teacher action research studies on the use of single-point rubrics. She found that student achievement increased with the use of these rubrics, especially when students helped create them and used them to self-assess their work.

The single-point rubric has several advantages: (1) It contains far less language than the analytic rubric, which means students are more likely to read it and it will take less time to create, while still providing rich detail about what’s expected. (2) Areas of concern and excellence are open-ended. When using full analytic rubrics, I often find that students do things that are not described on the rubric, but still depart from expectations. Because I can’t find the right language to highlight, I find myself hand-writing justifications for a score in whatever space I can find. This is frustrating, time-consuming and messy. With a single-point rubric, there’s no attempt to predict all the ways a student might go wrong. Similarly, the undefined “Advanced” column places no limits on how students might stretch themselves. “If the highest level is already prescribed then creativity may be limited to that pre-determined level,” says Fluckiger. “Students may surprise us if we leave quality open-ended.”

The main disadvantage  of single-point rubrics is that using them requires more writing on the teacher’s part. If a student has fallen short in many areas, completing that left-hand column will take more time than simply highlighting a pre-written analytic rubric.

Need Ready-Made Rubrics?

My Rubric Pack gives you four different designs in Microsoft Word and Google Docs formats. It also comes with video tutorials to show you how to customize them for any need, plus a Teacher’s Manual to help you understand the pros and cons of each style.

Fluckiger, J. (2010). Single point rubric: A tool for responsible student self-assessment. Teacher Education Faculty Publications.  Paper 5. Retrieved April 25, 2014 from http://digitalcommons.unomaha.edu/tedfacpub/5 .

Mertler, C. A. (2001). Designing scoring rubrics for your classroom.  Practical Assessment, Research & Evaluation , 7(25). Retrieved April 30, 2014 from http://PAREonline.net/getvn.asp?v=7&n=25 .

Know Your Terms  is my effort to build a user-friendly knowledge base of terms every educator should know. New items will be added on an ongoing basis. If you heard some term at a PD and didn’t want to admit you didn’t know what it meant, send it to me via the  contact  form and I’ll research it for you. 


Comments


Jen, This is an awesome, thoughtful post and idea. I’m using this in my class with a final project the kids are turning in this morning. I’m excited about the clarity with which I can evaluate their projects.


I’m so glad to hear it. If you’re willing to share what you made and tell me how it all went later on, I would be thrilled to hear it.


So appreciated! These practical, detailed applications are helpful! Mahalo from Kauai, Hi.


Rubrics are great tools for making expectations explicit. Thanks for this post which gives me some vocabulary to discuss rubrics. Though, I could use some resources on rubric scoring, b/c I see a lot of teachers simply adding up the number of squares and having that be the total point value of an assignment, which leads to incorrect grades on assignments. I’ve found some converters, but haven’t found a resource that has the math broken out.

Thanks for the feedback, Jeremey! You are not the first person to request a clearer breakdown on the math for this rubric (or others), and you’re right, teachers definitely have different approaches to this. I have some good ideas on this, so I will plan a post on it for the near future.


Did you do a post regarding grading a single rubric?


Yup! Here’s Meet the Single Point Rubric . You might also be interested in How To Turn Rubric Scores into Grades . Hope this helps!


Really rubric is a very useful tool when assessing students in class


There is no such thing as an appropriate converter. Levels are levels and points and percentages are points and percentages and never the twain should meet.


(I’m very late to the discussion.)

Years ago, Ken O’Connor was the person who turned my grading around. For that reason, I would be against using the “0-80%” or “0-80 points” piece. O’Connor is very clear about how grades below 50 ruin a grade average.

I would love to be able to grade with standards only, but what I do instead, to fit into our district grading software, is to grade by standards (using letters, where “proficient” is a “B”), and the traditional letters are equal to 95/85/75/65/55. That gives kids a chance if they ever somehow earn only a F. It doesn’t kill the rest of their grade.

(I forgot to say that I absolutely love the one-column rubric. It is going to be a huge help to me this year.)


This post was so helpful! I am struggling right now with assigning Habits of Work grades to my Spanish students in middle and high school. I was using an analytic rubric for both my assessment and the students’ self-assessment, but it’s possible the quantity of words was exacerbating the problem of students scoring themselves in the best column out of reflex or habit. I’m going to try a single-point rubric to see if that can lead us to some more reflective thought.


This website was very helpful. Thank you.


LOVELY post. So didactic and useful. After reading some quite dense posts on rubrics, I’ve enjoyed this a lot. You have now convinced me to use rubrics! THANK YOU Jenny and CONGRATS!!!



I have not seen or heard of single point rubrics. I’m really excited to try that out. Less wordy and easier for students to see what is expected of them and get meaningful feedback.


Oooh! I never thought I’d like a post on rubrics, but this was awesome! Thanks for your great explanations. I’m currently working my way through your Teacher’s Guide to Tech/Jumpstart program and I wanted to take a minute and tell you how much I appreciate your site and podcasts too. Everything is so concise, interesting and helpful!

Sariah, thank you!! I haven’t gotten a ton of feedback on the JumpStart program, so it’s really nice to hear that! Let me know if you have any questions!


I am a Master of Mathematics Education student and I am busy compiling my assignments on rubrics. Your notes are well explained and straight to the point. However, my professor has instructed us to look up primary trait rubrics and multi-trait rubrics, which I can’t seem to get. Do you care to differentiate them for me? Thank you.

Hi Martha. I was not familiar with those two terms, so I did a bit of reading in this post: http://carla.umn.edu/assessment/vac/improvement/p_5.html It seems to me that a primary trait rubric focuses on a single, somewhat broad description of how well the student achieved a certain goal. Multi-trait rubrics allow teachers to assess a task on a variety of descriptors. To me, the primary trait seems very much like the holistic rubric, and the multi-trait rubric seems a lot like an analytic rubric. If anyone else reading this knows the finer points of the differences among these four, I would love to hear them!


How to better calculate a grade with a rubric. Please see: http://tinypic.com/r/2dl6d5c/9 .


Thank you so much for this humorous and informative approach to rubrics. It seems to me that the single-point rubric, which I agree makes the most sense for assignment specific rubrics, is really just a clear set of assignment instructions / expectations with the addition of over/under columns to make it rubric-ish.


I love to have students help create rubrics. By the end of the year, we often create the entire rubric together as a class, but often I allow them to start by assigning one “open” section that they think I should grade on for which I help them write “exceeds, meets, doesn’t meet” standards. Then we move on to them assigning points for each standard that I’ve written (this is fascinating for me to see what they weight more heavily), and finally on to writing their own categories for which I write the standards, and then we reverse so that I write the categories and they write the standards. I give a lot of writing and speaking assignments and they really like being involved in how and what and how much we grade. (I never find they are too easy on themselves, either.) I love the single-point rubric especially for assignments I come up with off the cuff and don’t have time to write an elaborate rubric for!


If you’re moving away from traditional grades, the single-point rubric is a perfect instrument for delivering specific feedback.


This is a great site and I really liked the one example used with the multiple rubric styles so we could really understand the difference in them. I am confused about the difference between a Single Point rubric and a Primary Trait rubric. You didn’t mention the Primary Trait rubric so I am wondering if they are the same. Thank you, Karen


Thanks for writing in and for your kind words! I work for Cult of Pedagogy, and in answering your question, I started scrolling myself. Jenn responded to another reader, and I think you might find her response helpful as well as the link:

http://carla.umn.edu/assessment/vac/improvement/p_5.html

“It seems to me that a primary trait rubric focuses on a single, somewhat broad description of how well the student achieved a certain goal. Multi-trait rubrics allow teachers to assess a task on a variety of descriptors. To me, the primary trait seems very much like the holistic rubric, and the multi-trait rubric seems a lot like an analytic rubric.”

Hope this helps!


I do a sort of analytic + single point. I don’t include lots of writing on an analytic rubric. I give them the thick descriptions printed out earlier and I go over them (so each category actually does have detailed descriptions), but the rubric I mark is made up of lots of space and numbers 1-10. I keep it to ten categories. I leave lots of space for comments and comment on every category (even if it’s just one word). I conference with each student briefly when I hand back the rubrics. Each student is given two attempts – first for feedback, second for growth and a final score. (I taught high school theatre, so this method worked the best for me.)


Jen, Thank you for succinctly explaining the types of rubrics and THANK YOU for the free downloadable templates. I will share them with my education senior students!!! AWEsome work you have done.

You are very welcome, Alberta!


Dear Jennifer,

Thank you for the detailed information. I have been using single point rubrics from last year and I love them, but do you think we should give students a checklist as well? If so, what should it look like? I don’t want to kill their creativity, though.

I think the rubric can contain a checklist if you want students to include specific things in their end product, or you could do a separate checklist, then add something like “all items from checklist are included” in your rubric language. There is definitely a gray area here: Defining requirements too narrowly could stifle creativity, but it’s also important to be clear about expectations.

I have been working on a variation of the single-point rubric that I think might be even more useful for communicating expectations and feedback to students. Check it out here: https://docs.google.com/document/d/12JBIcpjeDYuTbQhEgJg2LKC5YPMDwTcIYCtl6jSGTeE/edit?usp=sharing


Really appreciate this post! Thank you. I have used the analytic approach, but I can really see the benefits of a single-point system. Thanks for your clear explanation.


Saying that ‘analytical’ rubrics are difficult and time consuming to write is true, but is also a cop-out. Taking the time to clearly define and articulate student behaviours at each level promotes student independence and self-assessment, and results in better outcomes. The fact that students have departed from what’s written on your rubric suggests that either the assessment wasn’t explained well enough or the rubric itself is of poor quality.

The analytical rubrics provided here fall well short of quality rubric standards. I would suggest reading Patrick Griffin’s Assessment for Teaching, and visiting the ReliableRubrics website for good examples.

Thanks for the book and website suggestions, Martin. I do think it’s possible to construct a clear 4-column analytical rubric, but I have rarely seen one that manages to cover all the bases. The ones that DO cover every possible outcome are often insanely long. I’m thinking of some I got in grad school that were–I kid you not–several pages long and written in 9-point font. Despite the fact that I am a diligent student, even I got to the point where I threw in the towel and stopped reading the whole thing. Instead, I just gave my attention to the “3” and “4” columns. I’m guessing that other students do the same thing. If our goal is to have students understand what’s being asked of them and to pay attention to the details, why spend so much time on defining what NOT to do?


Thank you Jennifer, I have shared this with fellow colleagues in Costa Rica. I know this will be of great use!!


Thank you for this work. Your site has been very helpful to me.


Thank you so much Jennifer! You seem to be an expert in making rubrics! I really appreciate the simplicity of the delivery of your thoughts about rubrics. I just want to ask if there is such a rubric for a cooperative activity? I am Geraldine, by the way, and me and my classmates are planning to conduct cooperative listening activities among Grade 8 students. We are having a hard time looking for a rubric that will assess their outputs as a group. Can you suggest one? Your response will be of great help. Thank you so much. May God bless you more and always!

Hi Geraldine, I work with Cult of Pedagogy and although we can’t think of anything specific to what you’re looking for, I’m thinking you might want to check out our Assessment & Feedback Pinterest board — there are a ton or resources that might help you create a rubric that would be specific to your needs. The most important thing is to identify what you want students to be able to do in the end. For example: listen to others with eye contact. (Be sure to check out Understanding by Design .) Then you can choose a rubric structure that will best fit your needs and provide effective feedback. Other than that, you might be able to find some great ideas through a Google search.

Well, thank you so much! May God bless you!


Great information. Can you tell me how you come to a total/final score on an analytic rubric if the student receives a variety of scores in the different categories? Thanks.

This is a great question! I’d check out Jenn’s post, Speed Up Grading with Rubric Codes. Even if you don’t use the codes, you’ll see in the video how an overall score can be given to a paper, even when scores in individual categories vary. Basically the overall score reflects where most criteria have been met, along with supportive feedback. Hope this helps!


I loved all of the rubrics you created for “Breakfast In Bed”. Your topic was an awesome analogy for teacher created tasks. I personally prefer the analytic rubric because I believe it gives the most accurate feedback to the student. If you feel more information is needed, you could expand the categories in the rubric, for example in this case, you could add a column called “sensory enhancements”, such as music or table setting. If you want to add a more personal comment you can always add it in the margin.


This issue has always frustrated me. I have recently been a HUGE proponent of holistic rubrics, but I do see the disadvantage of the feedback issue. For my first time teaching college composition, I used analytic rubrics–and hated them. It wasn’t the making rubrics that was time consuming, but determining how to break up the points and how to assign earned points for a paper. I would score a paper, add up all the points, and realized the paper got a B when, in reality, I knew it was a C-level paper. So I would erase and recalculate until I got the points I thought were more accurate. It took FOREVER!!! After some research, I decided to move to a holistic rubric, and it made grading way faster, but more importantly, I thought the numerical grade was much more accurate and consistent. (Score 6 would get a 95, 5 would be 85, etc., and I would give + or – for 3 more or less points). For feedback, I would annotate and underline/circle the parts of the criteria that they struggled in or did well in and left an end comment. And while I had them turn in a draft that I would give feedback on, I didn’t use the rubric for the draft feedback. Just comments on the paper.

I’m willing to try to single-point, but to get to that final numerical grade (since a no-grade classroom isn’t allowed, unfortunately) you’d still have to break down the points arbitrarily like an analytic rubric. Who’s to say that “structure” should be 30 points while “grammar” should be 10? What’s the actual difference between a 40/50 in “analysis” and a 42/50? My grading PTSD is resurfacing just thinking about grading essays that way. But at the same time, I also don’t like the limited feedback of the holistic rubric.

This is a link to a site where you can download a PDF that talks about a lot of composition issues, but pages 74-76 is about rubrics. Curious to know everyone’s thoughts. https://community.macmillan.com/docs/DOC-1593


Thank you so much 🙂 I learned a lot from this kind of Rubrics 🙂 (y)


Thank you very much for the breakdown of the types of rubrics. This was very informational!


Thank you for the fantastic article. I came here from the Single Point Rubric post, and I feel so much better equipped to grade my next assignment. Thank you again!


Radhika, Yay! We are glad you found what you needed for your next assignment!


Jennifer, the clear, concise explanations of three types of rubrics are very refreshing. I teach a course called “Assessment and Measurement” to pre-service teachers and I introduce the analytic and holistic rubrics for them to use in performance assessments. The pre-service teachers spend a lot of time with just the language they want to use and, although I think rubrics are the path to more accuracy in grading, I find the idea is overwhelming to novice teachers. May I share this with my students? Of course giving you due credit. This is excellent.

Hi Hazel! Thanks for the positive feedback. You are welcome to share this post with your students!


Thank you! Your picture at the very beginning (and your examples) made the difference between holistic and analytical instantly click for me! Also, I have never heard of single point rubrics before, so I am excited to try them out this fall with an assignment or two that I think they would go perfectly with! Lastly, thanks for the templates!


I’ve been using rubrics for a long time. I started with the most complex, comprehensive things you cannot even imagine. It drove the kids crazy, and me too. Now I teach English to adults (as a 2nd or nth language) and I write much simpler rubrics. But they still have too much information. You are brilliant here with the single point rubric. What do you need to do to get it right? Write in the ways they didn’t match it, which is what you need to do anyways. I’m changing immediately to single point rubrics. I’ll also read your other posting about single point rubrics to see if you have any other ideas. I just met your blog this week (Online Global Academy) and will return, I’m sure. Many thanks. Lee

This is great to hear, Lee! Thanks for sharing.


I’ve always used rubrics but especially appreciate the single point rubric.


Hi My name is Andrena Weir, I work at the American School of Marrakech. Thank you so much for your information. As a Physical Ed teacher these rubrics are great. I like the one column rubric. I feel I spent too much time grading in ways that consume too much time. This is so much appreciated. I need someone like you to be in-contact with if I’m struggling to retrieve new Ideas. Thank you so very much, have the best day.


How about this type of rubric. All the benefits of analytic but without the verbiage.

The food is raw/burned under/over cooked perfectly cooked

The tray is missing missing some items complete and utensils are dirty clean well presented

You get the idea

e.g. For Maths projects

https://docs.google.com/spreadsheets/d/19MXAjBdiEHXwuxg0w7NkRIKuc4X7VNj1qOE74o_vNss/edit?usp=sharing

The blog destroyed my formatting.

The food is || raw/burned || under/over cooked || perfectly cooked

The tray is || missing || missing some items || complete and utensils are || dirty || clean || well presented

or check the linked example


I use rubrics with most of my practical assignments and yes they are very time consuming. After reading this post I’m very excited to try the single-point rubric. Most of the time my students just want to know what is needed. This way they can identify what I want them to be able to do. Thanks so much for this information about rubrics.


So glad this was helpful, Amy! I’ll be sure to let Jenn know.


Want to use analytical rubric


Thanks so much for all of the information. This is great to have as a resource!


I’ve never heard of a single-point rubric before but I love the idea! Your article totally spoke my language and touched on all of my concerns. Thanks for the tips!


Hi, Jennifer, I always come away with actionable tips. I am a faculty developer and Instructional Coach. Rubrics pose challenges for teachers, novice and seasoned alike, so thank you for these discussions to shine a light on rubrics, good and bad.

Meg, I am glad this post was helpful for you in your role! I will be sure to pass on your comments to Jenn.


Being in a rubricade, a crusade of rubrics, against the powers that might be from my school… I’m glad to read what you’ve made.

Neither the academic coordinator nor the headmaster seems to know anything about having more than four levels of achievement. Nothing about having single point rubrics or the ones needed for my laboratory reports which go up to 7 with numbers not correlative.

I’m a high (and middle) school natural sciences teacher, my specialty field is physics.

The rubric in question (rejected by my superiors) has been developed since my first days in the classroom, around 2011. I’ve been modifying it from time to time according to the new breakthroughs experienced in practice.

Maybe your really nice webpage will help me out in going past this nonsense.

Thanks a lot!

Glad you found this helpful!


A holistic rubric is only easier if the faculty are just slapping grades on assignments, which they shouldn’t be doing with any rubric, including a very detailed analytic one. There should be summary comments that explain how the student’s specific response to the assignment meets the descriptor for each score level and then suggestions for what they could do to improve (even if they got an A).


Thanks for your comment- as Jenn mentions in the post, holistic rubrics are limited in their space for feedback. Many teachers prefer the Single Point Rubric for personalized feedback. If the point of rubrics is to set students up with their next steps, this is one you might want to try!


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the overall assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric considers all the criteria (such as clarity, organization, mechanics, etc.) together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback
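
As a rough illustration of the structural differences among the three types, here is a short Python sketch. The criteria and wording are invented for illustration: a holistic rubric maps each level to one overall description, an analytic rubric describes every criterion at every level, and a single-point rubric describes only proficiency and leaves the columns on either side open for comments.

```python
# Hypothetical sketches of the three rubric structures described above.

holistic = {  # one description per level, applied to the work as a whole
    4: "Clear thesis, well organized, well supported, nearly error-free.",
    3: "Clear thesis, mostly organized, adequately supported, minor errors.",
    2: "Thesis present but underdeveloped; organization or support is weak.",
    1: "No clear thesis; little organization or support; frequent errors.",
}

analytic = {  # criterion-by-level grid; each cell gets its own descriptor
    "thesis":   {4: "Insightful, focused claim", 3: "Clear claim",
                 2: "Vague claim",               1: "No identifiable claim"},
    "evidence": {4: "Well chosen and integrated", 3: "Relevant",
                 2: "Thin or loosely connected",  1: "Missing"},
}

single_point = {  # only the proficient level is described; feedback is open-ended
    "thesis":   {"proficient": "Makes a clear, focused claim",
                 "concerns": "", "evidence_of_exceeding": ""},
    "evidence": {"proficient": "Supports the claim with relevant sources",
                 "concerns": "", "evidence_of_exceeding": ""},
}
```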

Step 3 (Optional): Look for templates and examples.

You might Google “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but work through the remaining steps to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc., for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Check that each criterion can be observed and measured
  • Check that each criterion is important and essential
  • Check that each criterion is distinct from the others
  • Check that each criterion is phrased in precise, unambiguous language
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them.

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (For example: 5, 4, 3, 2, 1 and/or Exceeds Expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the number of columns that lets the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven to be useful for creating rubrics. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.
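
For instance, a prompt along these lines supplies the three ingredients mentioned above. The sketch is in Python purely to show how the pieces fit together; the assignment details and criteria are placeholders, and the assembled text could just as easily be pasted directly into a chat interface.

```python
# Hypothetical prompt for drafting rubric descriptors with an AI assistant.
assignment = "A 1,500-word persuasive essay on a local policy issue."  # placeholder
criteria = ["thesis and argument", "use of evidence", "organization", "style and mechanics"]
levels = 4

prompt = (
    f"Draft an analytic rubric for this assignment: {assignment}\n"
    f"Use these criteria: {', '.join(criteria)}.\n"
    f"Describe {levels} levels of performance for each criterion, "
    "using parallel, observable, student-friendly language."
)
print(prompt)  # review and revise the generated descriptors before using them
```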

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient,” i.e., B-level work, and provide that description. You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs absence, complete vs incomplete, many vs none, major vs minor, consistent vs inconsistent, always vs never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teacher assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Example of a single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Educational Leadership, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.


Analytic Rubrics

The who, what, why, where, when, and how of analytic rubrics.

WHO: Analytic rubrics are for you and your students.

WHAT: An analytic rubric is a scoring tool that helps you identify the criteria that are relevant to the assessment and learning objectives. It is divided into the components of the assignment, contains detailed descriptions that clearly state the performance levels (from unacceptable to acceptable), and allows you to assign points/grades/levels based on the students’ performance.

WHY: Rubrics help guide students when completing their assignments by giving them guidelines to follow. Students also know what you are looking for in an assignment, and this leads to fewer questions and more time engaged in the assessment and knowledge attainment. Rubrics help you or your assistant grade assignments objectively from the first submission to the last. Rubrics returned to students with the assignment give the students basic feedback by indicating which criteria they met.

WHERE: Create a paper rubric or use the Canvas interactive grading rubric. Learn more about using Canvas rubrics at the following link: https://guides.instructure.com/m/4152/l/724129-how-do-i-add-a-rubric-to-an-assignment

WHEN: Share the analytic rubric before the assessment so students know the criteria they must meet and have guidance while completing the assignment. After the assignment has been completed, return the marked rubric with the assignment as a form of feedback.

HOW: Watch the video on analytic rubrics that accompanies this chapter.

Optional Handouts: Blank rubric for the session (1)

Rubric Design Activity

Teaching Online: Course Design, Delivery, and Teaching Presence Copyright © by Analisa McMillan. All Rights Reserved.


  • Open access
  • Published: 26 September 2020

Examining consistency among different rubrics for assessing writing

  • Enayat A. Shabani, ORCID: orcid.org/0000-0002-7341-1519
  • Jaleh Panahi

Language Testing in Asia, volume 10, Article number: 12 (2020)


The literature on using scoring rubrics in writing assessment notes the significance of rubrics as practical and useful means to assess the quality of writing tasks. This study investigates the agreement among rubrics endorsed and used for assessing the essay writing tasks of internationally recognized tests of English language proficiency. To carry out this study, two hundred essays (Task 2) from the academic IELTS test, taken between 2015 and 2016, were randomly selected from about 800 essays from an official IELTS center, a representative of IDP Australia. The test takers were 19 to 42 years of age; 120 of them were female and 80 were male. Three raters were provided with four sets of rubrics used for scoring the essay writing tasks of tests developed by Educational Testing Service (ETS) and Cambridge English Language Assessment (i.e., independent TOEFL iBT, GRE, CPE, and CAE) to score the essays, which had previously been scored officially by a certified IELTS examiner. The data analysis, through correlation and factor analysis, showed general agreement among raters and scores; however, some deviant scorings by two of the raters were spotted. Follow-up interviews and a questionnaire survey revealed that the source of the score deviations could be related to the raters’ interests and (un)familiarity with certain exams and their corresponding rubrics. Specifically, the results indicated that despite the significance that can be attached to rubrics in writing assessment, raters themselves can exceed rubrics in terms of impact on scores.

Introduction

Writing effectively is a very crucial part of advancement in academic contexts (Rosenfeld et al. 2004 ; Rosenfeld et al. 2001 ), and generally, it is a leading contributor to anyone’s progress in the professional environment (Tardy and Matsuda 2009 ). It is an essential skill enabling individuals to have a remarkable role in today’s communities (Cumming 2001 ; Dunsmuir and Clifford 2003 ). Capable and competent L2 writers demonstrate their idea in the written form, present and discuss their contentions, and defend their stances in different circumstances (Archibald 2004 ; Bridgeman and Carlson 1983 ; Brown and Abeywickrama 2010 ; Cumming 2001 ; Hinkel 2009 ; Hyland 2004 ). Writing correctly and impressively is vital as it ensures that ideas and beliefs are expressed and transferred effectively. Being capable of writing well in the academic environment leads to better scores (Faigley et al. 1981 ; Graham et al. 2005 ; Harman 2013 ). It also helps those who require admission to different organizations of higher education (Lanteigne 2017 ) and provides them with better opportunities to get better job positions. Business communications, proceedings, legal agreements, and military agreements all have to be well written to transmit information in the most influential way (Canseco and Byrd 1989 ; Grabe and Kaplan 1996 ; Hyland 2004 ; Kroll and Kruchten 2003 ; Matsuda 2002 ). What should be taken into consideration is that even well until the mid-1980s, L2 writing in general, and academic L2 writing in particular, was hardly regarded as a major part of standard language tests desirable of being tested on its own right. Later, principally owing to the announced requirements of some universities, it meandered through its path to first being recognized as an option in these tests and then recently turning into an indispensable and integral part of them.

L2 writing is not the mere adequate use of grammar and vocabulary in composing a text, rather it is more about the content, organization and accurate use of language, and proper use of linguistic and textual parts of the language (Chenoweth and Hayes 2001 ; Cumming 2001 ; Holmes 2006 ; Hughes 2003 ; Sasaki 2000 ; Weissberg 2000 ; Wiseman 2012 ). Essay, as one of the official practices of writing, has become a major part of formal education in different countries. It is used by different universities and institutes in selecting qualified applicants, and the applicants’ mastery and comprehension of L2 writing are evaluated by their performance in essay writing.

Essay, as one of the most formal types of writing, constitutes a setting in which clear explanations and arguments on a given topic are anticipated (Kane 2000 ; Muncie 2002 ; Richards and Schmidt 2002 ; Spurr 2005 ). The first steps in writing an essay are to gain a good grasp of the topic, apprehend the raised question and produce the response in an organized way, select the proper lexicon, and use the best structures (Brown and Abeywickrama 2010 ; Wyldeck 2008 ). To many, writing an essay is hampering, yet is a key to success. It makes students think critically about a topic, gather information, organize and develop an idea, and finally produce a fulfilling written text (Levin 2009 ; Mackenzie 2007 ; McLaren 2006 ; Wyldeck 2008 ).

L2 writing has had a great impact on the field of teaching and learning and is now viewed not only as an independent skill in the classroom but also as an integral aspect of instruction, learning, and, most recently, assessment (Archibald 2001; Grabe and Kaplan 1996; MacDonald 1994; Nystrand et al. 1993; Raimes 1991). It is now difficult to think of a dependable test of English language proficiency without a section on essay writing, especially when academic and educational purposes are of concern. Educational Testing Service (ETS) and Cambridge English Language Assessment both include a dedicated essay writing section in their tests of English language proficiency. The independent TOEFL iBT writing task, which aims to gauge learners' ability to express their opinions logically and precisely in their L2, requires them to write well at the sentence, paragraph, and essay levels. It is written on a computer using a word processing program with only rudimentary features and no spelling or grammar checker. Generally, the essay should have an introduction, a body, and a conclusion; a standard essay usually has four paragraphs, five is possibly better, and six is too many (Biber et al. 2004; Cumming et al. 2000). The TOEFL iBT writing score is based on the candidates' performance on the two tasks in the writing section, and candidates should complete at least one of them. Scoring can be done either by human raters or automatically (by the e-rater). Using human judgment to assess content and meaning along with automated scoring to evaluate linguistic features ensures the consistency and reliability of scores (Jamieson and Poonpon 2013; Kong et al. 2009; Weigle 2013).

The Graduate Record Examination (GRE) analytical writing section consists of two different essay tasks, an "issue task" and an "argument task", the latter being the focus of the present study. Akin to the TOEFL iBT, the GRE essay is written on a computer using very basic word processing features. Each essay has an introduction providing contextual and background information about what is to be analyzed, and a body in which complex ideas should be articulated clearly and effectively, with enough examples and relevant reasons to support the thesis statement. Finally, the claims and opinions have to be summed up coherently in the concluding part (Broer et al. 2005). Each GRE essay is scored twice on a holistic scale; the average score is reported if the two scores are within one point of each other; otherwise, a third reader steps in and examines the essay (Staff 2017; Zahler 2011).

IELTS essay writing (in both the Academic and General Training Modules) involves developing a formal five-paragraph essay in 40 minutes. Similar to essays in other exams, it should include an introductory paragraph, two to three body paragraphs, and a concluding paragraph (Aish and Tomlinson 2012; Dixon 2015; Jakeman 2006; Loughead 2010; Stewart 2009). To score IELTS essay writing, the scores received for the four components of the rubric are averaged (Fleming et al. 2011).
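As a simple illustration of this averaging step, the minimal Python sketch below computes a Task 2 score from four hypothetical criterion scores. The criterion names follow the rubric components described later in this article; the scores are invented, and the final rounding convention is deliberately left out because it is not specified here.

```python
# Minimal sketch of the averaging step for IELTS writing Task 2.
# Criterion names mirror the rubric components discussed in this article;
# the scores are hypothetical, and no official rounding rule is implied.
criteria = {
    "task_achievement": 7,
    "coherence_and_cohesion": 7,
    "lexical_resource": 6,
    "grammatical_range_and_accuracy": 6,
}

band = sum(criteria.values()) / len(criteria)
print(band)  # 6.5 for this hypothetical set of criterion scores
```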

The writing sections of the Cambridge Certificate in Advanced English (CAE) and the Cambridge English: Proficiency (CPE) exams have two parts. The first part is compulsory: candidates are asked to write in response to an input text, which may include articles, leaflets, notices, and formal and/or informal letters. In the second part, candidates select one of several writing tasks, which might be a letter, a proposal, a report, or a review (Brookhart and Haines 2009; Corry 1999; Duckworth et al. 2012; Evans 2005; Moore 2009). The essays should include an introduction, a body, and a conclusion (Spratt and Taylor 2000). Similar to IELTS essay writing, these exams are scored analytically; the scores are added up and then converted to a scale of 1 to 20 (Brookhart 1999; Harrison 2010).

Assessing L2 writing proficiency is a flourishing area, and the precise assessment of writing is a critical matter. In practice, learners are generally expected to produce a piece of text so that raters can evaluate the overall quality of their performance using a variety of scoring systems, of which holistic and analytic scoring are the most common and widely accepted for essays (Anderson 2005; Brossell 1986; Brown and Abeywickrama 2010; Hamp-Lyons 1990, 1991; Kroll 1990). Today, the significance of L2 writing assessment is increasing not only in language-related fields of study but arguably in all disciplines, and it is a pressing concern in various educational and vocational settings.

L2 writing assessment is the focal point of any effective process of teaching this complicated skill (Jones 2001), and diligent assessment of writing complements the way it is taught (White 1985). The challenging and thorny nature of both assessment and writing impedes the reliable assessment of essays (Muenz et al. 1999), and a plethora of research studies have therefore been conducted to examine the validity and reliability of writing assessment. Huot (1990) argues that writing assessment encounters difficulty because there are usually more than two or three raters assessing the essays, which may introduce uncertainty into the assessment.

L2 writing assessment is generally prone to subjectivity and bias, and "the assessment of writing has always been threatened due to raters' biasedness" (Fahim and Bijani 2011, p. 1). Ample studies document that raters' assessments and judgments are biased (Kondo-Brown 2002; Schaefer 2008), and they suggest that, in order to reduce bias and subjectivity in assessing L2 writing, standard and well-described rating scales, that is, rubrics, should be established (Brown and Jaquith 2007; Diederich et al. 1961; Hamp-Lyons 2007; Jonsson and Svingby 2007; Aryadoust and Riazi 2016). Other studies likewise point to the tendency of many raters toward subjectivity in writing assessment (Eckes 2005; Lumley 2005; O'Neil and Lunz 1996; Saeidi et al. 2013; Schaefer 2008). In light of these considerations, it becomes important to improve the consistency among raters' evaluations of writing proficiency and to increase the reliability and validity of their judgments so as to avoid bias and subjectivity and to produce greater agreement between raters and ratings. The most notable move toward attaining this objective is using rubrics (Cumming 2001; Hamp-Lyons 1990; Hyland 2004; Raimes 1991; Weigle 2002). Simply put, rubrics help ensure that all raters evaluate a writing task by the same standards (Biggs and Tang 2007; Dunsmuir and Clifford 2003; Spurr 2005). To curtail the probable subjectivity and personal bias in assessing a piece of writing, there should be determined, standard criteria for assessing different types of writing tasks (Condon 2013; Coombe et al. 2012; Shermis 2014; Weigle 2013).

Assessment rubrics (alternatively called instruments) should be reliable, valid, practical, fair, and conducive to learning and teaching (Anderson et al. 2011). Moskal and Leydens (2000) considered validity and reliability the two most significant factors when rubrics are used for assessing an individual's work. Although researchers may define validity and reliability in various ways (for instance, Archibald 2001; Brookhart 1999; Bachman and Palmer 1996; Coombe et al. 2012; Cumming 2001; Messick 1994; Moskal and Leydens 2000; Moss 1994; Rezaei and Lovorn 2010; Weigle 2002; White 1994; Wiggan 1994), they generally agree that validity, in this area of investigation, is the degree to which the criteria support the interpretations of what is to be measured, and that reliability is the consistency of assessment scores regardless of time and place. Rubrics, and rating scales in general, should be developed so as to uphold these two important qualities and to equip raters and scorers with an authoritative tool for assessing writing tasks fairly. Arguably, "the purpose of the essay task, whether for diagnosis, development, or promotion, is significant in deciding which scale is chosen" (Brossell 1986, p. 2). Because rubrics should be conceived and designed in line with the purpose of assessing any given type of written task (Crusan 2015; Fulcher 2010; Knoch 2009; Malone and Montee 2014; Weigle 2002), the development and validation of rating scales are very challenging issues.

Writing rubrics can also help teachers gauge their own teaching (Coombe et al. 2012). Rubrics are generally perceived as highly significant resources available to teachers, enabling them to provide insightful feedback on L2 writing performance and to assess learners' writing ability (Brown and Abeywickrama 2010; Knoch 2011; Shaw and Weir 2007; Weigle 2002). Similarly, but from another perspective, rubrics help learners follow a clear route of progress and contribute to their own learning (Brown and Abeywickrama 2010; Eckes 2012). Well-defined rubrics provide constructive criteria that help learners understand what the desired performance is (Bachman and Palmer 1996; Fulcher and Davidson 2007; Weigle 2002). Employing rubrics in writing assessment helps learners understand raters' and teachers' expectations better, judge and revise their own work more successfully, engage in self-assessment of their learning, and improve the quality of their writing. Rubrics can thus be an effective tool enabling learners to focus their efforts, produce work of higher quality, get better grades, find better jobs, and feel more engaged and confident in doing their assignments (Bachman and Palmer 2010; Cumming 2013; Kane 2006).

Rubrics are designed to help scorers evaluate writers' performances by providing very clear descriptions of organization and coherence, structure and vocabulary, fluency of expression, and ideas and opinions, among other things. They are also practical for describing a writer's competence in sequencing ideas logically within a paragraph and in using sufficient and appropriate grammar and vocabulary related to the topic (Kim 2011; Pollitt and Hutchinson 1987; Weigle 2002). Employing rubrics reduces the time required to assess a writing performance and, most importantly, well-defined rubrics spell out the criteria in specific terms, enabling scorers and raters to judge a piece of work against standard, unified yardsticks (Gustilo and Magno 2015; Kellogg et al. 2016; Klein and Boscolo 2016).

Selecting and designing an effective rating scale hinges on the purpose of the test (Alderson et al. 1995; Attali et al. 2012; Becker 2011; East 2009). Although rubrics are crucial in essay evaluation, choosing an appropriate rating scale and forming criteria based on the purpose of assessment are just as important (Bacha 2001; Coombe et al. 2012). A considerable proportion of scale developers appear to prefer adapting their scoring scales from a well-established existing one (Cumming 2001; Huot et al. 2009; Wiseman 2012), and the relevant literature supports adapting rating scales used in large-scale tests for academic purposes (Bacha 2001; Leki et al. 2008). Yet East (2009) warned against adapting rating scales from similar tests, especially when they are to be used across languages.

Holistic and analytic scoring systems are now widely used to identify learners' writing proficiency levels for different purposes (Brown and Abeywickrama 2010; Charney 1984; Cohen 1994; Coombe et al. 2012; Cumming 2001; Hamp-Lyons 1990; Reid 1993; Weir 1990). Unlike the analytic scoring system, the holistic one takes the written text as a whole into consideration and generally emphasizes what is done well and what is deficient (Brown and Hudson 2002; White 1985). The analytic scoring system (multi-trait rubrics), by contrast, scores discrete components (Bacha 2001; Becker 2011; Brown and Abeywickrama 2010; Coombe et al. 2012; Hamp-Lyons 2007; Knoch 2009; Kuo 2007; Shaw and Weir 2007). For Weigle (2002), accuracy, cohesion, content, organization, register, and appropriacy of language conventions are the key components, or traits, of an analytic scoring system. One of the early analytic scoring rubrics for writing was the ESL Composition Profile (Jacobs et al. 1981), which included five components, namely language development, organization, vocabulary, language use, and mechanics.

Each scoring system has its own merits and limitations. One advantage of analytic scoring is its distinctive reliability (Brown et al. 2004; Zhang et al. 2008). Some researchers (e.g., Johnson et al. 2000; McMillan 2001; Ward and McCotter 2004) contend that analytic scoring provides the greatest opportunity for reliability between raters and ratings, since raters can apply one set of scoring criteria to different writing tasks at a time. Yet Myford and Wolfe (2003) considered the halo effect one of the major disadvantages of analytic rubrics. The most commonly recognized merit of holistic scoring is its feasibility, as it requires less time. However, it does not cover separate criteria, which affects its validity in comparison with analytic scoring, since it relies more heavily on raters' personal impressions (Elder et al. 2007; Elder et al. 2005; Noonan and Sulsky 2001; Roch and O'Sullivan 2003). Cohen (1994) stated that the major demerit of the holistic scoring system is its relative weakness in providing enough diagnostic information about learners' writing.

Many research studies have examined the effect of analytic and holistic scoring systems on writing assessment. More than half a century ago, Diederich et al. (1961) carried out a study of holistic scoring in a large-scale testing context: 300 essays were rated by 53 raters, and the results showed variation in ratings based on three criteria, namely ideas, organization, and language. Nearly two decades later, Borman (1979) conducted a similar study on 800 written tasks and found that the variations could be attributed to ideas, organization, and supporting details. Charney (1984) compared analytic and holistic rubrics for assessing writing performance in terms of validity and found the holistic scoring system to be more valid. Bauer (1981) compared the cost-effectiveness of analytic and holistic rubrics in assessing essay tasks and found that the time needed to train raters to use analytic rubrics was about twice that required for holistic rubrics; moreover, the time needed to grade the essays with analytic rubrics was four times that needed with holistic rubrics. Some studies report findings corroborating that holistic scoring can be the preferred system in large-scale testing contexts (Bell et al. 2009). Chi (2001) compared analytic and holistic rubrics in terms of their appropriacy, the agreement of learners' scores, and rater consistency, and found that raters using the holistic scoring system outperformed those employing analytic scoring in terms of inter-rater and intra-rater reliability. On the other hand, there is also research suggesting the superiority of analytic rubrics for assessing writing performance in terms of reliability and accuracy of scoring (Birky 2012; Brown and Hudson 2002; Diab and Balaa 2011; Kondo-Brown 2002). Generally speaking, it is difficult to decide which system is best, and the research findings so far can best be described as inconclusive.

The rubrics used by internationally recognized tests to assess essays share many components, including organization and coherence, task achievement, range of vocabulary, grammatical accuracy, and types of errors. The wording, however, usually differs across rubrics; for instance, the "task achievement" criterion used in the IELTS rubrics appears as "realization of tasks" in the CPE and CAE, "content coverage" in the GRE, and "task accomplishment" in the TOEFL iBT. Similarly, the point of focus of the rubrics may not be the same across tests. Punctuation, spelling, and target readers' satisfaction, for example, are explicitly emphasized in the CAE and CPE, while none of these is mentioned in the GRE or TOEFL iBT; instead, idiomaticity and exemplification are listed in the TOEFL iBT rubrics, and using enough supporting ideas to address the topic and task is the focus of the GRE rating scales (Brindley 1998; Hamp-Lyons and Kroll 1997; White 1984).

Broadly speaking, the rubrics employed in assessing L2 writing include the above-mentioned components, although, as noted, they are commonly expressed in different wordings. For example, the criteria in the IELTS Task 2 rating scale are task achievement, coherence and cohesion, lexical resource, and grammatical range and accuracy. These are the criteria on which candidates' work is assessed and scored, and each has its own descriptors, which specify the performance expected to secure a certain score on that criterion. The summative outcome, together with the standards, determines whether the candidate has attained the required qualification as established by the criteria; for IELTS Task 2, the summative outcome lies between 0 and 9. Similar components are used in other standard exams such as the CAE and CPE, whose summative outcomes range from 1 to 5. Their criteria are used to assess content (relevance and completeness), language (vocabulary, grammar, punctuation, and spelling), organization (logic, coherence, variety of expressions and sentences, and proper use of linking words and phrases), and communicative achievement (register, tone, clarity, and interest). The CAE and CPE have their own descriptors, which describe the expected standard of performance for each criterion (Betsis et al. 2012; Capel and Sharp 2013; Dass 2014; Obee 2005). The GRE scoring scale likewise covers the same main components in different wording; its standards and summative outcomes are reported on a 0–6 scale, the levels from 1 to 6 denoting fundamentally deficient, seriously flawed, limited, adequate, strong, and outstanding, respectively. The TOEFL iBT, in turn, is scored from 0 to 5, and, like the GRE, the Independent Writing Rubrics for the TOEFL iBT delineate the descriptors clearly and precisely (Erdosy 2004; Gass et al. 2011).

Abundant research has shown that ideas and content, organization, cohesion and coherence, vocabulary and grammar, and language mechanics are the main components of essay rubrics (Jacobs et al. 1981; Schoonen 2005). What has been treated as a missing element in work on analytic rating scales is raters' knowledge of, and familiarity with, rubrics and their components as one of the key yardsticks in measuring L2 writing ability (Arter et al. 1994; Sasaki and Hirose 1999; Weir 1990). Raters play a crucial role in assessing writing, and there is research pointing to the impact of raters' judgments on L2 writing assessment (Connor-Linton 1995; Sasaki 2000; Schoonen 2005; Shi 2001).

The past few decades have witnessed increasing research on different scoring systems and on raters' critical role in assessment. Several recent studies discuss the importance of rubrics in L2 writing assessment (e.g., Deygers et al. 2018; Fleckenstein et al. 2018; Rupp et al. 2019; Trace et al. 2016; Wesolowski et al. 2017; Wind et al. 2018). They commonly consider rubrics significant tools for measuring L2 learners' performances and suggest that rubrics enhance the reliability and validity of writing assessment. More importantly, they argue that employing rubrics can increase consistency among raters.

Shi (2001) compared native with non-native and experienced with novice raters and found that raters apply their own criteria when assessing an essay, virtually regardless of whether they are native or non-native, experienced or novice. Lumley (2002) and Schoonen (2005) compared two groups of raters: trained expert raters who were given no standard rubrics and untrained novice raters who were given standard rubrics. The trained raters without rubrics outperformed the other group in terms of accuracy in assessing the essays, implying the importance of the raters themselves. Rezaei and Lovorn (2010) compared the use of rubrics in summative and formative assessment. They argued that the use of rubrics in summative assessment is predominant and overshadows their formative potential, and their results showed that rubrics can be more beneficial when used for formative assessment purposes.

Izadpanah et al. (2014) conducted a study drawing on Jacobs et al. (1981) to see whether the rubric of one exam can predict scores on another; practically, they examined whether the same score would be obtained if an IELTS rubric were used to assess the CPE or another standard test. Their findings revealed that the rubrics were comparable with each other in terms of the components by which different standard essays are assessed. Bachman (2000) compared the TOEFL PBT and CPE and found a very meaningful relationship between the scores gained from the essay writing tests. He also concluded that scoring the CPE was usually more difficult than scoring the PBT and that, under similar conditions, exams from UCLES/Cambridge Assessment (like the CPE) received lower scores than those from ETS (like the PBT). In Fleckenstein et al. (2019), experts from different countries linked upper secondary students' writing profiles, elicited in a constructed-response test (integrated and independent essays from the TOEFL iBT), to CEFR levels. The Delphi technique was used to examine intra- and inter-panelist consistency in scoring the students' writing profiles, and the findings showed that panelists were able to provide ratings consistent with the empirical item difficulties and supported the validity of the estimated cut scores.

Schoonen (2005) and Attali and Burstein (2005) examined the generalizability of writing scores across different essays using a single set of rubrics. They analyzed three components of the writing rubric, namely content, language use, and organization, and found that the scores obtained from different essays were similar. Wind (2020) conducted a study to illustrate and explore methods for evaluating the degree to which raters apply a common rating scale consistently in analytic writing assessments; the results indicated a lack of invariance in rating scale category functioning across domains for several raters. Becker (2011) also examined different rubrics used to measure writing performance. He investigated three types of rubrics, namely holistic, analytic, and primary-trait scoring systems, to find out which is more appropriate for assessing L2 writing. Having weighed the merits and demerits of the three, he concluded that none is superior to the others, each being legitimate for assessing a piece of writing depending on the purpose of the writing, the time allocated for assessment, and the raters' expertise.

In a recent study, Ghaffar et al. (2020) examined the impact of rubrics and co-constructed rubrics on middle school students' writing performance. Their findings indicated that co-constructed rubrics, as assessment tools, help students outperform in their writing owing to their familiarity with these rubrics. In addition, some researchers contend that the evidence on rubric use is inconclusive and can be controversial, especially when rubrics are used only for summative assessment, and that rubrics are more advantageous when used for both summative and formative assessment (Andrade 2000; Broad 2003; Ene and Kosobucki 2016; Inoue 2004; Panadero and Jonsson 2013; Schirmer and Bailey 2000; Wilson 2006, 2017).

What all of these studies indicate is that employing well-developed rubrics increases equality and fairness in writing assessment. They also suggest that various factors can affect writing assessment, especially raters' expertise and the time allocated to rating (Bacha 2001; Ghalib and Hattami 2015; Knoch 2009, 2011; Lu and Zhang 2013; Melendy 2008; Nunn 2000; Nunn and Adamson 2007). The purpose of the present study is twofold: first, to investigate the consistency among different standard rubrics in writing assessment; second, to examine whether any of these rubrics could be used as a predictor of the others and whether they all tap the same underlying construct.

To meet the objectives of the study, 200 samples of Academic IELTS Task 2 (i.e., essay writing) were used. The samples were randomly selected from more than 800 essays written as part of academic IELTS tests taken between 2015 and 2016 at an official IELTS test center, a representative of IDP Australia. The essays were written in response to different prompts. The instructions for IELTS writing Task 2 require test takers to write at least 250 words, a condition that 21 of the samples did not meet. Test takers were 19 to 42 years of age; 120 were female and 80 male.

One of the raters in this study was an (anonymous) official IELTS examiner who had scored the essays officially; the other raters were four experienced IELTS instructors from the English department of a nationally prominent language institute, three male and one female, between 26 and 39 years of age, with 5 to 12 years of English language teaching experience. These four raters were selected based on their qualifications, teaching credentials and certifications, and years of teaching experience, particularly in IELTS classes. All four were M.A. holders in TEFL, had taught various writing courses at universities and language institutes, and were familiar with different scoring systems and their relevant components. Each rater was invited to an individual briefing session with one of the researchers to ensure familiarity with the rubrics of interest and to discuss practical considerations pertaining to the study. They were asked to read and score each essay four times, each time based on one of the four rubrics (TOEFL iBT, GRE, CPE, and CAE). The raters completed the scoring over 12 weeks, during which time they were instructed not to discuss the task with one another (they were modestly compensated for the scoring).

Instrumentation

Four sets of rubrics for the different writing tests (i.e., independent TOEFL iBT, GRE, CPE, and CAE) were taken from ETS and Cambridge English Language Assessment, and the official IELTS scores of the 200 essays were collected from the IELTS center. The rubrics employed for assessing the writing tasks of these five standard exams are analytic rubrics with different scales, namely a nine-point scale for IELTS Task 2, five-point scales for the GRE and TOEFL iBT, and six-point scales for the CAE and CPE writing tasks. They assess the main components of the essay writing construct, including the range of vocabulary and grammar used in addressing the task, cohesion and organization, and the range of cohesive devices used, albeit expressed in different wordings.

The other instrument was a questionnaire designed by the researchers, which included both open-ended and closed-ended questions (see Appendix). Its aim was to determine the raters' attitudes toward their rating experience and their familiarity with each exam and its corresponding rubrics. The themes of the questionnaire items were derived from a review of the literature on the factors affecting raters' performances and attitudes (Brown and Abeywickrama 2010; Coombe et al. 2012; Fulcher and Davidson 2007; Weigle 2002). In addition, an interview was carried out with the four raters to find out about their interest in rating and to investigate their familiarity with the exams and their corresponding rating scales.

To carry out the study, the 200 essay samples were first scored by a certified IELTS examiner, whose scores and comments were recorded next to each essay. Afterward, all essays were rated by the four other raters, who were kept uninformed of the official IELTS scores. They were provided with the rubrics of the four essay writing tests and instructed to assess each essay with each of the four rubrics. In this way, in addition to the official IELTS score, each essay received four scores from each rater, that is, 16 scores plus the official IELTS score, for a total of 17 scores per essay. The researcher-made questionnaire was then administered, and an interview was conducted in which the four raters were asked about their interest in rating as well as their awareness of, and concerns about, each exam and its rubrics.

The data were analyzed with SPSS, version 22. Initially, descriptive statistics were computed, and intercorrelations among the 17 scores were calculated to see whether any statistically significant associations could be found among the rubrics. To obtain a better picture of the associations among the scoring rubrics of the different exams, principal component analysis (PCA), a variant of factor analysis, was run to examine the extent to which the rubrics tap the same underlying construct.
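To make the data layout concrete, the following minimal Python sketch (pandas) shows a wide table with the 17 scores per essay and the first analysis step. It is not the authors' SPSS syntax; the column names, the random placeholder scores, and the helper structure are assumptions for illustration only.

```python
# Illustrative sketch only: a wide table with 17 scores per essay
# (official IELTS score + 4 raters x 4 rubrics), then descriptives and
# the intercorrelation matrix. Column names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_essays = 200

columns = ["IELTS"] + [f"{test}{rater}"
                       for rater in range(1, 5)
                       for test in ("CAE", "CPE", "iBT", "GRE")]

# Placeholder scores; real data would come from the raters' score sheets.
scores = pd.DataFrame(
    rng.integers(4, 10, size=(n_essays, len(columns))).astype(float),
    columns=columns,
)

print(scores.describe())                        # descriptive statistics
print(scores.corr(method="pearson").round(2))   # 17 x 17 intercorrelations
```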

To address the first research question, intercorrelations were computed among the IELTS, CAE, CPE, TOEFL iBT, and GRE scores. To answer the second research question, factor analysis was run to examine the extent to which the standard essay writing tasks in these five tests of English language proficiency tap the same underlying construct. The results of the intercorrelation and factor analysis computations are reported in detail below.

Intercorrelations among ratings

To estimate the intercorrelations among test ratings and raters, Cronbach's alpha was first calculated for each rater separately over the five sets of scores (i.e., the IELTS scores and that rater's CAE, CPE, TOEFL iBT, and GRE ratings) to check the internal consistency of each rater's ratings. Alpha was then computed for all the raters together to gauge inter-rater reliability. Finally, intercorrelations were computed between each exam score and the IELTS scores to see which scores were (more) correlated with the IELTS.
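As a rough illustration of the per-rater alpha computation, the sketch below implements the standard Cronbach's alpha formula over a set of score columns. The helper name and the use of the hypothetical `scores` table from the previous sketch are assumptions, not part of the study's materials.

```python
# Minimal sketch of Cronbach's alpha over a set of score columns,
# e.g. the IELTS scores plus one rater's four test ratings.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage with the `scores` table sketched earlier:
# rater1_cols = ["IELTS", "CAE1", "CPE1", "iBT1", "GRE1"]
# print(round(cronbach_alpha(scores[rater1_cols]), 2))
```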

Table 1 presents the alphas, which summarize the intercorrelations among the five sets of scores, namely the IELTS scores and the four sets of ratings given by each rater. Rater 1 has an alpha of about .67, which is lower than the other alphas. Because only five sets of scores entered each alpha, this value could still be considered acceptable; nevertheless, the lower value is meaningful in that this rater showed less internal consistency among his ratings.

To see which test ratings given by the four raters agreed least with the IELTS scores, the intercorrelations of each test rating with the IELTS scores were computed, as shown in Table 2. These intercorrelations indicate that Rater 1's CPE rating and Rater 4's TOEFL iBT rating show the lowest correlations with the IELTS ratings. Afterward, an alpha was computed for the aggregate of all the raters' ratings together with the IELTS scores.

Table 3 shows an alpha of around .86, which can be considered acceptable given the small number of ratings.

To see which ratings had a negative effect on the total alpha, the item-total correlation for each test rating was computed. The item-total correlation shows the extent to which each test rating agrees with the total of the other test ratings, including the IELTS scores. As shown in Table 4, CPE1 and iBT4 had the lowest correlations with the total ratings, and the table also indicates that removing these scores would have increased the total alpha considerably.
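A corrected item-total correlation together with an "alpha if rating deleted" check can be sketched as follows. The `scores` table is the hypothetical one introduced earlier, and the alpha helper repeats the standard formula shown above; none of this is the authors' SPSS output.

```python
# Sketch of corrected item-total correlations and "alpha if rating deleted";
# `scores` is assumed to be a wide DataFrame of ratings like the one above.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Same standard formula as in the previous sketch.
    k = items.shape[1]
    return (k / (k - 1)) * (
        1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )

def item_total_diagnostics(items: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in items.columns:
        rest = items.drop(columns=col)
        rows.append({
            "rating": col,
            # correlation of this rating with the sum of all the other ratings
            "item_total_r": items[col].corr(rest.sum(axis=1)),
            # alpha recomputed with this rating removed
            "alpha_if_deleted": cronbach_alpha(rest),
        })
    return pd.DataFrame(rows).set_index("rating")

# Hypothetical usage: item_total_diagnostics(scores).round(2)
```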

These results, as expected, confirmed the findings from the per-rater alphas and the inter-test correlations reported above.

Factor analysis

This study was carried out on the hypothesis that the construct of essay writing is similar across the different standardized tests (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE) and that a given essay is therefore expected to be scored similarly under the rubrics and scales of these exams. To test this, the ratings of these exams were examined. The correlation analyses reported above showed acceptable agreement among all test ratings except two: Rater 1's CPE ratings and Rater 4's TOEFL iBT ratings showed the lowest correlations with the other test ratings (.15 and .13, respectively). To obtain a better picture of this issue, a PCA was run to examine the extent to which these exams tap the same underlying construct. Factor analysis provides factor loadings for each item (i.e., each test rating); if two or more items load on the same factor, this indicates that those test ratings tap the same construct (i.e., the essay writing construct).

Table 5 presents the results of the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity, which assess the adequacy of the sample for the analysis. The KMO value is .83, above the commonly accepted threshold (KMO > .5) suggested by Field (2009). Bartlett's test of sphericity [χ2(136) = 1377.12, p < .001] was also significant, indicating correlations among the items large enough for PCA; the sample could therefore be considered adequate for running the PCA.
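These two adequacy checks can be reproduced outside SPSS. The sketch below uses the `factor_analyzer` package, which is an assumption rather than the authors' tooling, applied to the hypothetical `scores` table from the earlier sketches.

```python
# Sketch of the sampling-adequacy checks before PCA, using the
# factor_analyzer package; `scores` is the hypothetical ratings table.
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def sampling_adequacy(items):
    chi_square, p_value = calculate_bartlett_sphericity(items)
    kmo_per_item, kmo_overall = calculate_kmo(items)
    return {
        "bartlett_chi2": chi_square,
        "bartlett_p": p_value,
        "kmo_overall": kmo_overall,  # > .5 is commonly treated as acceptable
    }

# Hypothetical usage: sampling_adequacy(scores)
```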

The next step was to decide how many factors to retain in the PCA. To do so, the scree plot was inspected (Fig. 1). The first point to identify in a scree plot is the point of inflexion, that is, where the slope of the line changes dramatically; only the factors falling to the left of this point should be retained. Based on Fig. 1, the point of inflexion appears to be at the fourth factor; therefore, four factors were retained.

Fig. 1 Scree plot
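A scree plot like Fig. 1 can be drawn from the eigenvalues of the correlation matrix of the ratings. The matplotlib sketch below is purely illustrative and again assumes the hypothetical `scores` table; it does not reproduce the study's actual figure.

```python
# Sketch: eigenvalues of the correlation matrix of the 17 ratings,
# plotted as a scree plot to locate the point of inflexion.
import numpy as np
import matplotlib.pyplot as plt

def scree_plot(items):
    corr = np.corrcoef(items.to_numpy(), rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
    plt.xlabel("Component number")
    plt.ylabel("Eigenvalue")
    plt.title("Scree plot")
    plt.show()
    return eigenvalues

# Hypothetical usage: scree_plot(scores)
```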

According to Table 6, the first four retained factors explain around 60 percent of the total variance, which is considerable.

Table 7 presents the factor loadings after varimax rotation. The different test ratings loaded on four factors; in other words, test ratings that clustered on the same factor appear to tap the same underlying factor or latent variable.

Following this analysis, the factor loadings were examined further. The factor structure above was obtained by considering only loadings above .4, as suggested by Stevens (2002), each of which explains around 16 percent of the variance in a variable. This criterion proved strict, however, resulting in a limited number of salient loadings. Therefore, employing Kaiser's criterion, a second factor analysis was run with a more lenient absolute cut-off of .3 for each loading, as suggested by Field (2009). In this way, more factor loadings emerged and more information was obtained. The loadings above .3 are presented in Table 8, which reveals almost the same factor structure as the previous analysis with the .4 cut-off; one important difference, however, was that the IELTS ratings now showed loadings on all the factors on which the other tests also loaded. It can therefore be construed that the other tests have significant potential to tap the same construct.
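The rotation and the two loading cut-offs can be sketched with the `factor_analyzer` package, again as an assumption rather than the authors' SPSS workflow. Loadings below the chosen threshold are simply blanked out so the output resembles Tables 7 and 8.

```python
# Sketch: extraction of 4 factors with varimax rotation, then displaying
# only loadings above a chosen cut-off (.4 or .3).
import pandas as pd
from factor_analyzer import FactorAnalyzer

def rotated_loadings(items: pd.DataFrame, n_factors: int = 4,
                     cutoff: float = 0.4) -> pd.DataFrame:
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                        method="principal")
    fa.fit(items)
    loadings = pd.DataFrame(
        fa.loadings_,
        index=items.columns,
        columns=[f"Factor{i + 1}" for i in range(n_factors)],
    )
    # Blank out small loadings so the table mirrors Tables 7 and 8.
    return loadings.where(loadings.abs() >= cutoff).round(2)

# Hypothetical usage:
# rotated_loadings(scores, cutoff=0.4)   # stricter view (cf. Table 7)
# rotated_loadings(scores, cutoff=0.3)   # more lenient view (cf. Table 8)
```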

After estimating reliability using Cronbach's alpha and running the factor analyses reported above, it was decided to omit Rater 1, owing to the unfamiliarity with the exam and its corresponding rubrics that he reported in the questionnaire.

Table 9 and Fig. 2 (the scree plot) show the factor structure after removing Rater 1. The scree plot indicates that four factors should be retained, and Table 9 shows that the first four retained factors explain about 70 percent of the total variance, which is quite satisfactory.

Fig. 2 Scree plot (Rater 1 removed)

Finally, Table 10 shows that, after removing Rater 1's data, all the ratings of Raters 3 and 4 loaded on the same factors as the IELTS. As in the previous factor analysis, the IELTS ratings again loaded on all the factors on which the other tests loaded, except iBT4. All in all, the factor analysis results confirm the earlier alpha computations showing that the iBT4 ratings had the lowest correlations with the total ratings.

Discussion and conclusions

The purpose of the present study was to examine the consistency of the rubrics endorsed for assessing the writing tasks of internationally recognized tests of English language proficiency. Standard rubrics can be considered constructive tools that help raters assess different types of essays (Busching 1998), and using rubrics enhances the reliability of essay assessment provided that they are well described and tap the same construct (Jonsson and Svingby 2007). The current study therefore examined the agreement among different essay writing rubrics with regard to their major components, namely organization, coherence and cohesion, the range and complexity of the lexis and grammar used, and accuracy.

The results of this study show that, all in all, there is a high correlation among the raters (i.e., the IELTS examiner and the four other raters) and the rating scores (i.e., the official IELTS scores and the 16 test ratings received from the four raters). The intercorrelations among test ratings and raters, as well as the inter-item correlations between each test rating and the IELTS scores, revealed that CPE1 and iBT4 had the least agreement with the official IELTS ratings. These low correlations were therefore investigated in a follow-up phase by giving the four raters a questionnaire containing both open-ended and closed-ended questions. The raters' responses indicated the extent to which they were familiar with each exam and its corresponding rubrics.

The responses of two of the raters, that is, Rater 1 for the CPE and Rater 4 for the TOEFL iBT, proved illuminating in explaining their performance. Rater 1's responses to the questionnaire showed that he had no experience teaching CPE classes, although his responses to other questions indicated familiarity with the exam and its essay scoring rubrics. Rater 4's responses revealed that she had neither teaching experience with the TOEFL iBT nor familiarity with the exam and its rating scales. The outcome of the interview with Rater 4 suggests that using well-trained raters leads to fewer problems in rating. What Rater 4 stated in her responses to the questionnaire and interview is in line with the findings of Sasaki and Hirose (1999), who concluded that familiarity with different tests and their rubrics leads to better scoring. Additionally, the results of the present study are consistent with the findings of Schoonen (2005), Attali and Burstein (2005), Wind et al. (2018), Deygers et al. (2018), Wesolowski et al. (2017), Trace et al. (2016), Fleckenstein et al. (2018), and Rupp et al. (2019), namely that employing rubrics enhances the reliability of writing assessment as well as the agreement among raters.

Up to this point, the results provide an affirmative answer to the first research question, indicating very high agreement among the test ratings and the raters. To examine whether the construct of essay writing is similar across the different standardized tests, such that identical essays are scored similarly under the internationally recognized rubrics of these exams, inter-item correlation analysis was also computed; it indicated that CPE1 and iBT4 had the lowest correlations with the total ratings. This could be due either to these raters' inconsistencies or to essay writing being conceptualized differently in the scoring rubrics of these exams. The follow-up survey likewise suggested that the disagreement between Raters 1 and 4 and the other raters stemmed from either rater discrepancies or differences in how the writing task is conceptualized in the rubrics of each exam. This is supported by Weigle (2002), who concluded that raters should have a good grasp of scoring and its essential details as well as a sharp conceptualization of the construct of essay writing.

The results from the rotated component matrix revealed that all the ratings of Raters 3 and 4 loaded on the same factor, meaning that they tap the same construct. Examination of the other factor loadings revealed that CAE1, iBT1, CAE2, and iBT2 also loaded on the same factor as the IELTS, suggesting that these raters' conceptualizations of the essay writing construct in the CAE and TOEFL iBT were closer to that of the IELTS raters than to those underlying the CPE and GRE scores. What remained questionable, however, was why CPE1 and GRE1 did not load on the same factor as CAE1 and iBT1, and why CPE1 and GRE1 instead loaded on the same factor as CPE2 and GRE2. Why CPE1 also loaded on the same factor as GRE2 and CPE2 likewise remained open to discussion.

The results above came from the PCA that considered only factor loadings above .4, following Stevens (2002). As this cut-off was strict and the number of salient loadings limited, Kaiser's criterion was applied with a less rigorous absolute value of .3, following Field's (2009) suggestion. The findings showed almost the same factor loadings as the previous analysis: Raters 3 and 4 again loaded on the same factor, but this time the IELTS scores loaded on the same factor as CAE2 and iBT2, while CAE1, GRE1, and iBT1 loaded together on another factor. What remained debatable was why CPE1 loaded with GRE2 and CPE2.

Up to this point, all the results obtained from the alpha computations and the factor analysis pointed to something different about Rater 1, and it was therefore decided to omit Rater 1 from the PCA. Interestingly, after interviewing the four raters and scrutinizing the questionnaire responses, it was found that Rater 1 had indicated that he had no experience teaching CPE classes and yet claimed familiarity with the exam and its rating scales, contrary to the other raters' responses.

After Rater 1 was omitted from the PCA, the findings showed that Rater 3's and Rater 4's test ratings loaded on the same factor, and this time the IELTS loaded on the factors on which all the other tests loaded except iBT4, meaning that Rater 4 showed no agreement with the IELTS raters when rating with the TOEFL iBT rubric. The questionnaire responses of this rater indicated that she had no experience teaching this particular exam and no familiarity with it or its rubrics. She also believed that exams such as the TOEFL iBT and others developed by ETS were more difficult to score and generally received lower scores than the Cambridge English Language Assessment exams. Rater 4's view is not in line with the findings of Bachman (2000), who compared the TOEFL PBT and CPE essay tasks and concluded that scoring the CPE is more difficult than scoring the TOEFL PBT and, contrary to the findings of the present study, that exams like the CPE received lower scores.

The results of the alpha computations and factor analysis highlight the notable role of raters in assessing writing. They are in line with the findings of Lumley (2002) and Schoonen (2005), who argue that raters need to be considered one of the most significant concerns in the assessment process. Shi (2001) likewise argued for the significant role of raters, who assess essays using their own criteria in addition to the standard, predetermined rating scales. Similarly, the factor analysis in this study revealed the remarkable role of raters: the items (i.e., test ratings) tended to load on the same factor, especially when all the essays were rated by the same rater.

This study examined the consistency and reliability among different standard rubrics and rating scales used for assessing writing in internationally recognized tests of English language proficiency. The results of the alpha estimation provide evidence of a strong association among the raters and test ratings, and the PCA indicates that these test ratings tap the same underlying construct. The study therefore encourages practical rater training and rater training courses that give raters authentic opportunities to become familiar with different rubrics. Further investigation is needed into how raters themselves affect ratings and how employing trained and certified raters can affect the rating process. Test administrators and developers are another group who can benefit from these findings: if all the test ratings tap the same underlying construct and different essay writing rating scales can serve as predictors of one another, it becomes practical to set standard essay writing rubrics for rating and assessing writing. As the findings also suggest, the developers of these tests' writing rubrics may take stock of the implication that certain constructs within writing weigh more heavily when assessed across standardized measures. Teachers and learners are other groups who may benefit: they might spend less time unpacking all these rubrics, with their descriptions stated in different words, and more time practicing writing and essay writing tasks.

The study examined the reliability of the analytic rubrics used in assessing the essay component of the following standardized examinations: IELTS, TOEFL iBT, CAE, CPE, and GRE. While the first four are English language proficiency examinations designed to assess the language skills of learners of English as a second language (ESL), the last (the GRE) is intended for those seeking admission to graduate programs in the USA, regardless of first language background. GRE candidates are, at minimum, bachelor's degree holders, most of whom are native speakers of English educated in the English language, while a minority are international applicants to U.S. master's and Ph.D. programs from various language backgrounds. The GRE writing task, in other words, is not intended for L2 English learners. Juxtaposing the GRE requirements for the writing task, which center on argumentation and critical thinking, with the English language proficiency standards measured by the other four tests may therefore dilute the generalizability of the results with reference to this particular exam, owing to its divergent assessment purpose and intended candidate profile. Future researchers are encouraged to take heed of this limitation of the present study.

Availability of data and materials

The authors were provided with the data for research purposes. Sharing the data with a third party requires obtaining consent from the organization which provided the data. The materials are available in the article.

Aish, F., & Tomlinson, J. (2012). Get ready for IELTS writing . London: HarperCollins.


Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation . Cambridge: Cambridge University Press.

Anderson, B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., Galbraith, R., & Roberts, T. (2011). Criteria for assessment: consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher , 33 (3), 206–214.

Anderson, C. (2005). Assessing writers . Portsmouth: Heinemann.

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership , 57 (5), 13–18.

Archibald, A. (2001). Targeting L2 writing proficiencies: Instruction and areas of change in students’ writing over time. International Journal of English Studies , 1 (2), 153–174.

Archibald, A. (2004). Writing in a second language. In The higher education academy subject centre for languages, linguistics and area studies Retrieved from http://www.llas.ac.uk/resources/gpg/2175 .

Arter, J. A., Spandel, V., Culham, R., & Pollard, J. (1994). The impact of training students to be self-assessors of writing . New Orleans: Paper presented at the Annual Meeting of the American Educational Research Association.

Aryadoust, V., & Riazi, A. M. (2016). Role of assessment in second language writing research and pedagogy. Educational Psychology , 37 (1), 1–7.

Attali, Y., & Burstein, J. (2005). Automated essay scoring with e-rater v.2.0 (RR-04-45) . Princeton: ETS.

Attali, Y., Lewis, W., & Steier, M. (2012). Scoring with the computer: alternative procedures for improving the reliability of holistic essay scoring. Language Testing , 30 (1), 125–141.

Bacha, N. (2001). Writing evaluation: what can analytic versus holistic essay scoring tell? System , 29 (3), 371–383.

Bachman, L., & Palmer, A. S. (2010). Language assessment in practice: developing language assessments and justifying their use in the real world . Oxford: Oxford University Press.

Bachman, L. F. (2000). Modern language testing at turn of the century: assuring that what we count counts. Language Testing , 17 (1), 1–42.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: designing and developing useful language tests . Oxford: Oxford University Press.

Bauer, B. A. (1981). A study of the reliabilities and the cost-efficiencies of three methods of assessment for writing ability . Champaign: University of Illinois.

Becker, A. (2011). Examining rubrics used to measure writing performance in U.S. intensive English programs. The CATESOL Journal , 22 (1), 113–117.

Bell, R. M., Comfort, K., Klein, S. P., McCaffrey, D., Ormseth, T., Othman, A. R., & Stecher, B. M. (2009). Analytic versus holistic scoring of science performance tasks. Applied Measurement in Education , 11 (2), 121–137.

Betsis, A., Haughton, L., & Mamas, L. (2012). Succeed in the new Cambridge proficiency (CPE)- student’s book with 8 practice tests . Brighton: GlobalELT.

Biber, D., Byrd, M., Clark, V., Conrad, S. M., Cortes, E., Helt, V., & Urzua, A. (2004). Representing language use in the university: analysis of the TOEFL 2000 spoken and written academic language corpus. In ETS research report series (RM-04-3, TOEFL Report MS-25) . Princeton: ETS.

Biggs, J., & Tang, C. (2007). Teaching for quality learning at university . Maidenhead: McGraw Hill.

Birky, B. (2012). A good solution for assessment strategies. A Journal for Physical and Sport Educators , 25 (7), 19–21.

Borman, W. C. (1979). Format and training effects on rating accuracy and rater errors. Journal of Applied Psychology , 64 (4), 410–421.

Bridgeman, B., & Carlson, S. (1983). Survey of academic writing tasks required of graduate and undergraduate foreign students. In ETS Research Report Series (RR-83-18, TOEFL-RR-15) . Princeton: ETS.

Brindley, G. (1998). Describing language development? Rating scales and SLA. In L. F. Bachman, & A. D. Cohen (Eds.), Interfaces between second language acquisition and language testing research , (pp. 112–140). Cambridge: Cambridge University Press.

Broad, B. (2003). What we really value: beyond rubrics in teaching and assessing writing . Logan: Utah State UP.

Broer, M., Lee, Y. W., Powers, D. E., & Rizavi, S. (2005). Ensuring the fairness of GRE writing prompts: Assessing differential difficulty. In ETS research report series (GREB Report No. 02-07R, RR-05-11) .

Brookhart, G., & Haines, S. (2009). Complete CAE student’s book with answers . Cambridge: Cambridge University Press.

Brookhart, S. M. (1999). The art and science of classroom assessment: the missing part of pedagogy. ASHE-ERIC Higher Education Report , 27 (1), 1–128.

Brossell, G. (1986). Current research and unanswered questions in writing assessment. In K. Greenberg, H. Wiener, & R. Donovan (Eds.), Writing assessment: issues and strategies , (pp. 168–182). New York: Longman.

Brown, A., & Jaquith, P. (2007). Online rater training: perceptions and performance . Dubai: Paper presented at Current Trends in English Language Testing Conference (CTELT).

Brown, G. T. L., Glasswell, K., & Harland, D. (2004). Accuracy in the scoring of writing: studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing , 9 (2), 105–121.


Brown, H. D., & Abeywickrama, P. (2010). Language assessment: Principles and classroom practice . Lewiston: Pearson Longman.

Brown, J. (2002). Training needs assessment: a must for developing an effective training program. Sage Journal , 31 (4), 569–578 https://doi.org/10.1177/009102600203100412 .

Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge applied linguistics series . Cambridge: Cambridge University Press.

Busching, B. (1998). Grading inquiry projects. New Directions for Teaching and Learning , (74), 89–96.

Canseco, G., & Byrd, P. (1989). Writing required in graduate courses in business administration. TESOL Quarterly , 23 (2), 305–316.

Capel, A., & Sharp, W. (2013). Cambridge English objective proficiency (2nd ed.). Cambridge: Cambridge University Press.

Charney, D. (1984). The validity of using holistic scoring to evaluate writing. Research in the Teaching of English , 18 (1), 65–81.

Chenoweth, N. A., & Hayes, J. R. (2001). Fluency in writing: Generating text in L1 and L2. Written Communication , 18 (1), 80–98 https://doi.org/10.1177/0741088301018001004 .

Chi, E. (2001). Comparing holistic and analytic scoring for performance assessment with many facet models. Journal of Applied Measurement , 2 (4), 379–388.

Cohen, A. D. (1994). Assessing language ability in the classroom . Boston: Heinle & Heinle.

Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing , 18 , 100–108.

Connor-Linton, J. (1995). Cross-cultural comparison of writing standards: American ESL and Japanese EFL. World Englishes , 14 (1), 99–115.

Coombe, C., Davidson, P., O’Sullivan, B., & Stoynoff, S. (2012). The Cambridge guide to second language assessment . New York: Cambridge University Press.

Corry, H. (1999). Advanced writing with English in use: CAE . Oxford: Oxford University Press.

Crusan, D. (2015). And then a miracle occurs: the use of computers to assess student writing. International Journal of TESOL and Learning , 4 (1), 20–33.

Cumming, A. (2001). Learning to write in a second language: two decades of research. International Journal of English Studies , 1 (2), 1–23.

Cumming, A. (2013). Assessing integrated writing tasks for academic purposes: promises and perils. Language Assessment Quarterly , 10 (1), 1–8.

Cumming, A. H., Kantor, R., Powers, D., Santos, T., & Taylor, C. (2000). TOEFL 2000 writing framework: A working paper , ETS Research Report Series (RM-00-5; TOEFL-MS-18) . Princeton: ETS.

Dass, B. (2014). Adult & continuing professional education practices: CPE among professional providers . Singapore: Partridge Singapore.

Deygers, B., Zeidler, B., Vilcu, D., & Carlsen, C. H. (2018). One framework to unite them all? Use of CEFR in European university entrance policies. Language Assessment Quarterly , 15 (1), 3–15 https://doi.org/10.1080/15434303.2016.1261350 .

Diab, R., & Balaa, L. (2011). Developing detailed rubrics for assessing critique writing: impact on EFL university students’ performance and attitudes. TESOL Journal , 2 (1), 52–72.

Diederich, P. B., French, J. W., & Carlton, S. T. (1961). Factors in judgments of writing ability (Research Bulletin No. RB-61-15) . Princeton: Educational Testing Service https://doi.org/10.1002/j.2333-8504.1961.tb00286.x .

Dixon, N. (2015). Band 9-IELTS writing task 2-real tests . Oxford: Oxford University Press.

Duckworth, M., Gude, K., & Rogers, L. (2012). Cambridge english: proficiency (CPE) masterclass: student’s book . Oxford: Oxford University Press.

Dunsmuir, S., & Clifford, V. (2003). Children’s writing and the use of ICT. Educational Psychology in Practice , 19 (3), 171–187.

East, M. (2009). Evaluating the reliability of a detailed analytic scoring rubric for foreign language writing. Assessing Writing , 14 (2), 88–115.

Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessments: a many-facet Rasch analysis. Language Assessment Quarterly , 2 (3), 197–221.

Eckes, T. (2012). Operational rater types in writing assessment: linking rater cognition to rater behavior. Language Assessment Quarterly , 9 ( 3 ), 270–292.

Elder, C., Barkhuizen, G., Knoch, U., & von Randow, J. (2007). Evaluating rater responses to an online training program for L2 writing assessment. Language Testing , 24 (1), 37–64.

Elder, C., Knoch, U., Barkhuizen, G., & von Randow, J. (2005). Individual feedback to enhance rater training: does it work? Language Assessment Quarterly , 2 (3), 175–196.

Ene, E., & Kosobucki, V. (2016). Rubrics and corrective feedback in ESL writing: a longitudinal case study of an L2 writer. Assessing Writing , 30 , 3–20 https://doi.org/10.1016/j.asw.2016.06.003 .

Erdosy, M. U. (2004). Exploring variability in judging writing ability in a second language: a study of four experienced raters of ESL composition. In ETS research report series (RR-03-17) . Ontario: ETS.

Evans, V. (2005). Entry tests CPE 2 for the revised Cambridge proficiency examination: Student’s book . New York City: Pearson Education.

Fahim, M., & Bijani, H. (2011). The effect of rater training on raters’ severity and bias in second language writing assessment. Iranian Journal of Language Testing , 1 (1), 1–16.

Faigley, L., Daly, J. A., & Witte, S. P. (1981). The role of writing apprehension in writing performance and competence. Journal of Educational Research , 75 (1), 16–21.

Field, A. P. (2009). Discovering statistics using SPSS (and sex and drugs and rock’ n’ roll) , (3rd ed., ). London: Sage Publication.

Fleckenstein, J., Keller, S., Kruger, M., Tannenbaum, R. J., & Köller, O. (2019). Linking TOEFL iBT writing rubrics to CEFR levels: Cut scores and validity evidence from a standard setting study. Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100420 .

Fleckenstein, J., Leucht, M., & Köller, O. (2018). Teachers’ judgement accuracy concerning CEFR levels of prospective university students. Language Assessment Quarterly , 15 (1), 90–101 https://doi.org/10.1080/15434303.2017.1421956 .

Fleming, S., Golder, K., & Reeder, K. (2011). Determination of appropriate IELTS writing and speaking band scores for admission into two programs at a Canadian post-secondary polytechnic institution. The Canadian Journal of Applied Linguistics , 14 (1), 222 – 250 .

Fulcher, G. (2010). Practical language testing . London: Hodder Education.

Fulcher, G., & Davidson, F. (2007). Language testing and assessment: an advanced resource book . New York: Routledge.

Gass, S., Myford, C., & Winke, P. (2011). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing , 30 (2), 231–252.

Ghaffar, M. A., Khairallah, M., & Salloum, S. (2020). Co-constructed rubrics and assessment forlearning: The impact on middle school students’ attitudes and writing skills. Assessing Writing , 45 https://doi.org/10.1016/j.asw.2020.100468 .

Ghalib, T. K., & Hattami, A. A. (2015). Holistic versus analytic evaluation of EFL writing: a case study. English Language Teaching , 8 (7), 225–236.

Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing: an applied linguistic perspective . London: Longman.

Graham, S., Harris, K. R., & Mason, L. (2005). Improving the writing performance, knowledge, and self-efficacy of struggling young writers: the effects of self-regulated strategy development. Contemporary Educational Psychology , 30 (2), 207–241 https://doi.org/10.1016/j.cedpsych.2004.08.001 .

Gustilo, L., & Magno, C. (2015). Explaining L2 Writing performance through a chain of predictors: A SEM approach. 3 L: The Southeast Asian Journal of English Language Studies , 21 (2), 115–130.

Hamp-Lyons, L. (1990). Second language writing assessment. In B. Kroll (Ed.), Second language writing: research insights for the classroom , (pp. 69–87). California: Cambridge University Press.

Hamp-Lyons, L. (1991). Holistic writing assessment of LEP students . Washington, DC: Paper presented at Symposium on limited English proficient student.

Hamp-Lyons, L. (2007). Editorial: worrying about rating. Assessing Writing , 12 , 1–9.

Hamp-Lyons, L., & Kroll, B. (1997). TOEFL 2000 – writing: composition, community and assessment (toefl monograph series no. 5) . Princeton: Educational Testing Service.

Harman, R. (2013). Literary intertextuality in genre-based pedagogies: building lexicon cohesion in fifth-grade L2 writing. Journal of Second Language Writing , 22 (2), 125–140.

Harrison, J. (2010). Certificate of proficiency in English (CPE) test preparation course . Oxford: Oxford University Press.

Hinkel, E. (2009). The effects of essay topics on modal verb uses in L1 and L2 academic writing. Journal of Pragmatics , 41 (4), 667–683.

Holmes, P. (2006). Problematizing intercultural communication competence in the pluricultural classroom: Chinese students in New Zealand University. Journal of Language and Intercultural Communication , 6 (1), 18–34.

Hughes, A. (2003). Testing for language teachers . Cambridge: Cambridge University Press.

Huot, B. (1990). The literature of direct writing assessment: major concerns and prevailing trends. Review of Educational Research , 60 (2), 237–239.

Huot, B., Moore, C., & O’Neill, P. (2009). Creating a culture of assessment in writing programs and beyond. College Composition and Communication , 61 ( 1 ), 107–132.

Hyland, K. (2004). Disciplinary discourses: social interactions in academic writing . Michigan: University of Michigan Press.

Inoue, A. (2004). Community-based assessment pedagogy. Assessing Writing , 9 (3), 208–238 https://doi.org/10.1016/j.asw.2004.12.001 .

Izadpanah, M. A., Rakhshandehroo, F., & Mahmoudikia, M. (2014). On the consensus between holistic rating system and analytical rating system: a comparison between TOEFL iBT and Jacobs’ et al. composition. International Journal of Language Learning and Applied Linguistics World , 6 (1), 170–187.

Jacobs, H. L., Zingraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: a practical approach . Rowley: Newbury House.

Jakeman, V. (2006). Cambridge action plan for IELTS: academic module . Cambridge: Cambridge University Press.

Jamieson, J., & Poonpon, K. (2013). Developing analytic rating guides for TOEFL iBT integrated speaking tasks. In ETS research series (RR-13-13, TOEFLiBT-20) . Princeton: ETS.

Johnson, R. L., Penny, J., & Gordon, B. (2000). The relation between score resolution methods and interrater reliability: An empirical study of an analytic scoring rubric. Applied Measurement in Education , 13 , 121–138 https://doi.org/10.1207/S15324818AME1302_1 .

Jones, C. (2001). The relationship between writing centers and improvement in writing ability: An assessment of the literature. Journal of Education , 122 (1), 3–20.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review , 2 , 130–144.

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement , (4th ed., pp. 17–64). Westport: American Council on Education and Praeger Publishers.

Kane, T. S. (2000). Oxford essential guide to writing . New York: Berkey Publishing Group.

Kellogg, R. T., Turner, C. E., Whiteford, A. P., & Mertens, A. (2016). The role of working memory in planning and generating written sentences. Journal of Writing Research , 7 (3), 397–416.

Kim, Y. H. (2011). Diagnosing EAP writing ability using the reduced reparametrized unified model. Language Testing , 28 (4), 509–541.

Klein, P. D., & Boscolo, P. (2016). Trends in research on writing as a learning activity. Journal of Writing Research , 7 (3), 311–350 https://doi.org/10.17239/jowr-2016.07.3.01 .

Knoch, U. (2009). The assessment of academic style in EAP writing: the case of the rating scale. Melbourne Papers in Language Testing , 13 (1), 35.

Knoch, U. (2011). Rating scales for diagnostic assessment of writing: what should they look like and where should the criteria come from? Assessing Writing , 16 (2), 81–96.

Kondo-Brown, K. (2002). A facet analysis of rater bias in Japanese second language writing performance. Language Testing , 19 (1), 3–31.

Kong, N., Liu, O. L., Malloy, J., & Schedl, M. A. (2009). Does content knowledge affect TOEFL iBT reading performance? A confirmatory approach to differential item functioning. In ETS research report series (RR-09-29, TOEFLiBT-09) . Princeton: ETS.

Kroll, B. (1990). Second language writing (Cambridge Applied Linguistics): research insights for the classroom . Cambridge: Cambridge University Press.

Kroll, B., & Kruchten, P. (2003). The rational unified process made essay: a practitioner’s guide to the RUP . Boston: Pearson Education.

Kuo, S. (2007). Which rubric is more suitable for NSS liberal studies? Analytic or holistic? Educational Research Journal , 22 (2), 179–199.

Lanteigne, B. (2017). Unscrambling jumbled sentences: an authentic task for English language assessment? Studies in Second Language Learning and Teaching , 7 (2), 251–273 https://doi.org/10.14746/ssllt.2017.7.2.5 .

Leki, L., Cumming, A., & Silva, T. (2008). A synthesis of research on second language writing in English . New York: Routledge.

Levin, P. (2009). Write great essays . London: McGraw-Hill Education.

Loughead, L. (2010). IELTS practice exam: with audio CDs . Hauppauge: Barron’s Education Series.

Lu, J., & Zhang, Z. (2013). Assessing and supporting argumentation with online rubrics. International Education Studies , 6 (7), 66–77.

Lumley, T. (2002). Assessment criteria in a large-scale writing test: what do they really mean to the raters? Language Testing , 19 (3), 246–276.

Lumley, T. (2005). Assessing second language writing: the rater’s perspective . Frankfurt: Lang.

MacDonald, S. (1994). Professional academic writing in the humanities and social sciences . Carbondale: Southern Illinois University Press.

Mackenzie, J. (2007). Essay writing: teaching the basics from the group up . Markham: Pembroke Publishers.

Malone, M. E., & Montee, M. (2014). Stakeholders’ beliefs about the TOEFL iBT test as a measure of academic language ability (TOEFL iBT Report No. 22, ETS Research Report No. RR-14-42) . Princeton: Educational Testing Service https://doi.org/10.1002/ets2.12039 .

Matsuda, P. K. (2002). Basic writing and second language writers: Toward an inclusive definition. Journal of Basic Writing , 22 (2), 67–89.

McLaren, S. (2006). Essay writing made easy . Sydney: Pascal Press.

McMillan, J. H. (2001). Classroom assessment: principles and practice for effective instruction , (2nd ed., ). Boston: Allyn & Bacon.

Melendy, G. A. (2008). Motivating writers: the power of choice. Asian EFL Journal , 20 (3), 187–198.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessment. Educational Researcher , 23 (2), 13–23.

Moore, J. (2009). Common mistakes at proficiency and how to avoid them . Cambridge: Cambridge University Press.

Moskal, B. M., & Leydens, J. (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research & Evaluation , 7 , 10.

Moss, P. A. (1994). Can there be validity without reliability? Educational Researcher , 23 (2), 5–12.

Muenz, T. A., Ouchi, B. Y., & Cole, J. C. (1999). Item analysis of written expression scoring systems from the PIAT-R and WIAT. Psychology and Schools , 36 (1), 31–40.

Muncie, J. (2002). Using written teacher feedback in EFL composition classes. ELT Journal , 54 (1), 47–53 https://doi.org/10.1093/elt/54.1.47 .

Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet rasch measurement: Part I. Journal of Applied Measurement , 4 (4), 386–422.

Noonan, L. E., & Sulsky, L. M. (2001). Impact of frame-of-reference and behavioral observation training on alternative training effectiveness criteria in a Canadian military sample. Human Performance , 14 (1), 3–26.

Nunn, R. C. (2000). Designing rating scales for small-group interaction. ELT Journal , 54 (2), 169–178.

Nunn, R. C., & Adamson, J. (2007). Toward the development of interactional criteria for journal paper evaluation. Asian EFL Journal , 9 (4), 205–228.

Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from? An intellectual history. Written Communication , 10 (3), 267–333.

O’Neil, T. R., & Lunz, M. E. (1996). Examining the invariance of rater and project calibrations using a multi-facet rasch model . New York: Paper presented at the Annual Meeting of the American Educational Research Associations.

Obee, B. (2005). Practice tests for the revised CPE . Berkshire: Express Publishing.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purpose revisited. Educational Research Review , 9 , 129–144.

Pollitt, A., & Hutchinson, C. (1987). Calibrating graded assessments: rasch partial credit analysis of performance in writing. Language Testing , 4 (1), 72–92.

Raimes, A. (1991). Out of the woods: Emerging traditions in the teaching of writing. TESOL Quarterly , 25 (3), 407–430.

Reid, J. (1993). Teaching ESL writing . Englewood Cliffs: Regents Prentice Hall.

Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing , 15 (1), 18–39.

Richards, J. C., & Schmidt, R. (2002). Longman dictionary of language teaching and applied linguistics . New York: Pearson Education.

Roch, S. G., & O’Sullivan, B. J. (2003). Frame of reference rater training issues: recall, time and behavior observation training. International Journal of Training and Development , 7 (2), 93–107.

Rosenfeld, M., Courtney, R., & Fowles, M. (2004). Identifying the writing tasks important for academic success at the undergraduate and graduate levels. Research report 42 . Princeton: Educational Testing Service.

Rosenfeld, M., Leung, S., & Oltman, P. K. (2001). Identifying the reading, writing, speaking, and listening tasks important for academic success at the undergraduate and graduate levels (TOEFL Monograph Series MS-21) . Princeton: Educational Testing Service.

Rupp, A. A., Casabianca, J. M., Krüger, M., Keller, S., & Köller, O. (2019). Automated essay scoring at scale: a case study in Switzerland and Germany (RR-86. ETS RR-19-12) . ETS Research Report Series , 2019 https://doi.org/10.1002/ets2.12249 .

Saeidi, M., Yousefi, M., & Baghayi, P. (2013). Rater bias in assessing Iranian EFL learners’ writing performance. Iranian Journal of Applied Linguistics , 16 (1), 145–175.

Sasaki, M. (2000). Toward an empirical model of EFL writing processes: an explanatory study. Journal of Second Language Writing , 9 (3), 259–291.

Sasaki, M., & Hirose, K. (1999). Development of an analytic rating scale for Japanese L1 writing. Language Testing , 16 (4), 457–478.

Schaefer, E. (2008). Rater bias patterns in an EFL writing assessment. Language Testing , 25 (4), 465–493.

Schirmer, B. R., & Bailey, J. (2000). Writing assessment rubric: an instructional approach for struggling writers. Teaching Exceptional Children , 33 (1), 52–58.

Schoonen, R. (2005). Generalizability of writing scores: an application of structural equation modeling. Language Testing , 22 (1), 1–5.

Shaw, S. D., & Weir, C. J. (2007). Examining writing: research and practice in assessing second language writing . Cambridge: Cambridge University Press.

Shermis, M. (2014). State-of-the-art automated essay scoring: competition, results, and future directions from a United States demonstration. Assessing Writing , 20 , 53–76 https://doi.org/10.1016/j.asw.2013.04.001 .

Shi, L. (2001). Native- and nonnative- speaking EFL teachers’ evaluation of Chinese students’ English writing. Language Testing , 18 (3), 303–325.

Spratt, M., & Taylor, L. B. (2000). The Cambridge CAE course: self-study student’s book . Cambridge: Cambridge University Press.

Spurr, B. (2005). Successful essay writing for senior high school . NSW: New Frontier Publishing.

Staff, M. P. (2017). GRE guide to the use of scores. In Graduate record examination . Princeton: ETS.

Stevens, J. P. (2002). Applied multivariate statistics for the social sciences , (4th ed., ). Hillsdale: Erlbaum.

Stewart, A. (2009). IELTS preparation & practice: reading and writing—academic module . New York: Pearson Education.

Tardy, M. C., & Matsuda, P. K. (2009). The construction of author voice by editorial board members. Written Communication , 26 (1), 32–52.

Trace, J., Meier, V., & Janseen, G. (2016). “I can see that”: developing shared rubric category interpretations through score negotiation. Assessing Writing , 30 , 32–43 https://doi.org/10.1016/j.asw.2016.08.001 .

Ward, J. R., & McCotter, S. S. (2004). Reflection as a visible outcome for preservice teachers. Teaching and Teacher Education , 20 (3), 243–257.

Weigle, S. C. (2002). Assessing writing . Cambridge: Cambridge University Press.

Book   Google Scholar  

Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing , 18 , 85–99.

Weir, C. J. (1990). Communicative language testing . New Jersey: Prentice Hall, Inc.

Weissberg, B. (2000). Developmental relationship in the acquisition of English syntax: Writing vs. speech. Journal of Learning and Instruction , 10 (1), 37–53 https://doi.org/10.1016/S0959-4752(99)00017-1 .

Wesolowski, B. W., Wind, S. A., & Engelhard, G. (2017). Evaluating differential rater functioning over time in the context of solo music performance assessment. Bulletin of the Council for Research in Music Education , ( 212 ), 75–98 https://doi.org/10.5406/bulcouresmusedu.212.0075 .

White, E. M. (1984). Teaching and assessing writing , (2nd ed., ). San Francisco: Jossey-Bass.

White, E. M. (1985). Teaching and assessing writing . San Francisco: Jossey-Bass.

White, E. M. (1994). Teaching and assessing writing , (2nd ed. ). San Francisco: Jossey-Bass.

Wiggan, G. (1994). The constant danger of sacrificing validity to reliability: making writing assessment serves writer. Assessing Writing , 1 , 129–139 https://doi.org/10.1016/1075-2935(94)90008-6 .

Wilson, M. (2006). Rethinking rubrics in writing assessment . Postmouth: Heinemann.

Wilson, M. (2017). Reimaging writing assessment: from scales to stories . Postmouth: Heinemann.

Wind, S. A. (2020). Do raters use rating scale categories consistently across analytic rubric domains in writing assessment? Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100416 .

Wind, S. A., Tsai, C. L., Grajeda, S. B., & Bergin, C. (2018). Principals’ use of rating scale categories in classroom observation for teacher evaluation. School Effectiveness and School Improvement , 29 (3), 485–510 https://doi.org/10.1080/09243453.2018.1470989 .

Wiseman, C. S. (2012). A comparison of the performance of analytic vs. holistic scoring rubrics to assess L2 writing. Iranian Journal of Language Testing , 2 (1), 59–61.

Wyldeck, K. (2008). Everyday spelling and grammar . Sydney: Pascal Press.

Zahler, K. A. (2011). McGraw-Hill’s conquering the NEW GRE verbal and writing . New York: McGraw-Hill Education.

Zhang, B., Johnson, L., & Kilic, G. B. (2008). Assessing the reliability of self-and-peer rating in student group work. Assessment & Evaluation in Higher Education , 33 (3), 329–340 https://doi.org/10.1080/02602930701293181 .

Download references

Acknowledgements

The authors would like to thank the reviewers for their fruitful comments. We would also like to thank the raters who kindly accepted to contribute to this study.

Author information

Authors and Affiliations

Department of Foreign Languages, TUMS International College, Tehran University of Medical Sciences (TUMS), Keshavarz Blvd., Tehran, 1415913311, Iran

Enayat A. Shabani & Jaleh Panahi


Contributions

The authors made almost equal contributions to this manuscript, and both read and approved the final manuscript.

Authors’ information

Enayat A. Shabani ([email protected]) holds a Ph.D. in TEFL and is currently the Chair of the Department of Foreign Languages at Tehran University of Medical Sciences (TUMS). His areas of research interest include language testing and assessment, and the internationalization of higher education.

Jaleh Panahi ([email protected]) holds an M.A. in TEFL. She has been teaching English for 12 years, with a main focus on IELTS instruction. She is currently a part-time instructor at the Department of Foreign Languages, Tehran University of Medical Sciences. Her fields of research interest are language assessment, and language and cognition.

Corresponding author

Correspondence to Enayat A. Shabani .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Shabani, E.A., Panahi, J. Examining consistency among different rubrics for assessing writing. Lang Test Asia 10 , 12 (2020). https://doi.org/10.1186/s40468-020-00111-4


Received : 16 June 2020

Accepted : 03 September 2020

Published : 26 September 2020

DOI : https://doi.org/10.1186/s40468-020-00111-4


Keywords

  • Scoring rubrics
  • Essay writing
  • Tests of English language proficiency
  • Writing assessment


The GRE ® General Test


Analytical Writing Measure Scoring

Score level descriptions for the Analytical Writing measure.

The reported Analytical Writing score ranges from 0 to 6, in half-point increments.

The statements below describe, for each score level, the overall quality of analytical writing demonstrated. The test assesses your critical thinking and analytical writing skills (the ability to reason, assemble evidence to develop a position and communicate complex ideas) along with your control of grammar and the mechanics of writing.

Scores 6 and 5.5

Sustains insightful, in-depth analysis of complex ideas; develops and supports main points with logically compelling reasons and/or highly persuasive examples; is well focused and well organized; skillfully uses sentence variety and precise vocabulary to convey meaning effectively; demonstrates superior facility with sentence structure and usage, but may have minor errors that do not interfere with meaning.

Scores 5 and 4.5

Provides generally thoughtful analysis of complex ideas; develops and supports main points with logically sound reasons and/or well-chosen examples; is generally focused and well organized; uses sentence variety and vocabulary to convey meaning clearly; demonstrates good control of sentence structure and usage, but may have minor errors that do not interfere with meaning.

Scores 4 and 3.5

Provides competent analysis of ideas in addressing specific task directions; develops and supports main points with relevant reasons and/or examples; is adequately organized; conveys meaning with acceptable clarity; demonstrates satisfactory control of sentence structure and usage, but may have some errors that affect clarity.

Scores 3 and 2.5

Displays some competence in analytical writing and addressing specific task directions, although the writing is flawed in at least one of the following ways: limited analysis or development; weak organization; weak control of sentence structure or usage, with errors that often result in vagueness or a lack of clarity.

Scores 2 and 1.5

Displays serious weaknesses in analytical writing. The writing is seriously flawed in at least one of the following ways: serious lack of analysis or development; unclear in addressing specific task directions; lack of organization; frequent problems in sentence structure or usage, with errors that obscure meaning.

Scores 1 and 0.5

Displays fundamental deficiencies in analytical writing. The writing is fundamentally flawed in at least one of the following ways: content that is extremely confusing or mostly irrelevant to the assigned tasks; little or no development; severe and pervasive errors that result in incoherence.

Score 0

Your analytical writing skills cannot be evaluated because the response does not address any part of the assigned task(s), merely attempts to copy the assignments, is in a foreign language or displays only indecipherable text.

Score NS

You produced no text whatsoever.
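To make the score bands above easier to work with (for instance, when logging practice-essay scores), here is a minimal sketch in Python. It is purely illustrative: the `describe_gre_awa` helper and the shortened band labels are my own condensation of the level descriptions above, not an ETS tool or API.

```python
# Illustrative only: map a reported GRE Analytical Writing score (0-6, in
# half-point increments) to a band summarizing the level descriptions above.
# The helper name and the shortened labels are my own condensation.

GRE_AWA_BANDS = [
    (5.5, "Scores 6 and 5.5: insightful, in-depth analysis of complex ideas"),
    (4.5, "Scores 5 and 4.5: generally thoughtful analysis of complex ideas"),
    (3.5, "Scores 4 and 3.5: competent analysis of ideas"),
    (2.5, "Scores 3 and 2.5: some competence, but flawed writing"),
    (1.5, "Scores 2 and 1.5: serious weaknesses in analytical writing"),
    (0.5, "Scores 1 and 0.5: fundamental deficiencies in analytical writing"),
    (0.0, "Score 0: response does not address the assigned task"),
]


def describe_gre_awa(score: float) -> str:
    """Return the band description for a reported 0-6, half-point score."""
    if not 0 <= score <= 6 or (score * 2) % 1 != 0:
        raise ValueError("GRE AWA scores run from 0 to 6 in half-point increments")
    for floor, label in GRE_AWA_BANDS:
        if score >= floor:
            return label


print(describe_gre_awa(4.5))  # Scores 5 and 4.5: generally thoughtful analysis ...
```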

“Analyze an Issue” task scoring guide

Score 6 Outstanding

In addressing the specific task directions, a 6 response presents a cogent, well-articulated analysis of the issue and conveys meaning skillfully.

A typical response in this category:

  • articulates a clear and insightful position on the issue in accordance with the assigned task
  • develops the position fully with compelling reasons and/or persuasive examples
  • sustains a well-focused, well-organized analysis, connecting ideas logically
  • conveys ideas fluently and precisely, using effective vocabulary and sentence variety
  • demonstrates superior facility with the conventions of standard written English (i.e., grammar, usage and mechanics), but may have minor errors

Score 5 Strong

In addressing the specific task directions, a 5 response presents a generally thoughtful, well-developed analysis of the issue and conveys meaning clearly.

  • presents a clear and well-considered position on the issue in accordance with the assigned task
  • develops the position with logically sound reasons and/or well-chosen examples
  • is focused and generally well organized, connecting ideas appropriately
  • conveys ideas clearly and well, using appropriate vocabulary and sentence variety
  • demonstrates facility with the conventions of standard written English, but may have minor errors

Score 4 Adequate

In addressing the specific task directions, a 4 response presents a competent analysis of the issue and conveys meaning with acceptable clarity.

  • presents a clear position on the issue in accordance with the assigned task
  • develops the position with relevant reasons and/or examples
  • is adequately focused and organized
  • demonstrates sufficient control of language to express ideas with acceptable clarity
  • generally demonstrates control of the conventions of standard written English, but may have some errors

Score 3 Limited

A 3 response demonstrates some competence in addressing the specific task directions, in analyzing the issue and in conveying meaning, but is obviously flawed.

A typical response in this category exhibits  one or more of the following characteristics:

  • is vague or limited in addressing the specific task directions and in presenting or developing a position on the issue or both
  • is weak in the use of relevant reasons or examples or relies largely on unsupported claims
  • is limited in focus and/or organization
  • has problems in language and sentence structure that result in a lack of clarity
  • contains occasional major errors or frequent minor errors in grammar, usage or mechanics that can interfere with meaning

Score 2 Seriously Flawed

A 2 response largely disregards the specific task directions and/or demonstrates serious weaknesses in analytical writing.

  • is unclear or seriously limited in addressing the specific task directions and in presenting or developing a position on the issue or both
  • provides few, if any, relevant reasons or examples in support of its claims
  • is poorly focused and/or poorly organized
  • has serious problems in language and sentence structure that frequently interfere with meaning
  • contains serious errors in grammar, usage or mechanics that frequently obscure meaning

Score 1 Fundamentally Deficient

A 1 response demonstrates fundamental deficiencies in analytical writing.

  • provides little or no evidence of understanding the issue
  • provides little or no evidence of the ability to develop an organized response (e.g., is disorganized and/or extremely brief)
  • has severe problems in language and sentence structure that persistently interfere with meaning
  • contains pervasive errors in grammar, usage or mechanics that result in incoherence

Score 0

Off topic (i.e., provides no evidence of an attempt to address the assigned topic), is in a foreign language, merely copies the topic, consists of only keystroke characters or is illegible or nonverbal.

Score NS

The essay response is blank.

“Analyze an Argument” task scoring guide (for General Tests administered before September 22, 2023)

Score 6 Outstanding

In addressing the specific task directions, a 6 response presents a cogent, well-articulated examination of the argument and conveys meaning skillfully.

  • clearly identifies aspects of the argument relevant to the assigned task and examines them insightfully
  • develops ideas cogently, organizes them logically and connects them with clear transitions
  • provides compelling and thorough support for its main points

Score 5 Strong

In addressing the specific task directions, a 5 response presents a generally thoughtful, well-developed examination of the argument and conveys meaning clearly.

  • clearly identifies aspects of the argument relevant to the assigned task and examines them in a generally perceptive way
  • develops ideas clearly, organizes them logically and connects them with appropriate transitions
  • offers generally thoughtful and thorough support for its main points

Score 4 Adequate

In addressing the specific task directions, a 4 response presents a competent examination of the argument and conveys meaning with acceptable clarity.

  • identifies and examines aspects of the argument relevant to the assigned task, but may also discuss some extraneous points
  • develops and organizes ideas satisfactorily, but may not connect them with transitions
  • supports its main points adequately, but may be uneven in its support
  • demonstrates sufficient control of language to convey ideas with reasonable clarity

Score 3 Limited

A 3 response demonstrates some competence in addressing the specific task directions, in examining the argument and in conveying meaning, but is obviously flawed.

A typical response in this category exhibits one or more of the following characteristics:

  • does not identify or examine most of the aspects of the argument relevant to the assigned task, although some relevant examination of the argument is present
  • mainly discusses tangential or irrelevant matters, or reasons poorly
  • is limited in the logical development and organization of ideas
  • offers support of little relevance and value for its main points

Score 2 Seriously Flawed

A 2 response largely disregards the specific task directions and/or demonstrates serious weaknesses in analytical writing. A typical response in this category exhibits one or more of the following characteristics:

  • does not present an examination based on logical analysis, but may instead present the writer's own views on the subject
  • does not follow the directions for the assigned task
  • does not develop ideas, or is poorly organized and illogical
  • provides little, if any, relevant or reasonable support for its main points

Score 1 Fundamentally Deficient

A 1 response demonstrates fundamental deficiencies in analytical writing. A typical response in this category exhibits one or more of the following characteristics:

  • provides little or no evidence of understanding the argument
  • provides little evidence of the ability to develop an organized response (e.g., is disorganized and/or extremely brief)

Score 0

Off topic (i.e., provides no evidence of an attempt to respond to the assigned topic), is in a foreign language, merely copies the topic, consists of only keystroke characters, or is illegible or nonverbal.

ACT Writing Rubric: Full Analysis and Essay Strategies


What time is it? It's essay time! In this article, I'm going to get into the details of the newly transformed ACT Writing by discussing the ACT essay rubric and how the essay is graded based on that. You'll learn what each item on the rubric means for your essay writing and what you need to do to meet those requirements.

feature image credit: A study in human nature, being an interpretation with character analysis chart of Hoffman's master painting "Christ in the temple"; (1920) by CircaSassy, used under CC BY 2.0. Resized from original.

ACT Essay Grading: The Basics

If you've chosen to take the ACT Plus Writing, you'll have 40 minutes to write an essay (after completing the English, Math, Reading, and Science sections of the ACT, of course). Your essay will be evaluated by two graders, who score your essay from 1-6 on each of 4 domains, leading to scores out of 12 for each domain. Your Writing score is calculated by averaging your four domain scores, leading to a total ACT Writing score from 2-12.

NOTE: From September 2015 to June 2016, ACT Writing scores were calculated by adding together your domain scores and scaling to a score of 1-36; the change to an averaged 2-12 ACT Writing score was announced June 28, 2016 and put into action in September 2016.
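To make that roll-up concrete, here is a minimal sketch in Python. It is my own illustration, not an official ACT tool: the function name and the sample ratings are invented, and rounding the averaged domain scores to the nearest whole number is an assumption based on the fact that reported ACT Writing scores are whole numbers from 2-12.

```python
# Illustrative sketch of the roll-up described above: two graders each rate
# four domains from 1-6, the two ratings are summed per domain (giving 2-12
# per domain), and the four domain scores are averaged for the overall
# Writing score. Rounding to a whole number is an assumption, since reported
# ACT Writing scores are whole numbers.

DOMAINS = (
    "Ideas and Analysis",
    "Development and Support",
    "Organization",
    "Language Use and Conventions",
)


def act_writing_score(grader1: dict, grader2: dict) -> tuple:
    """Return (per-domain scores out of 12, overall 2-12 Writing score)."""
    domain_scores = {d: grader1[d] + grader2[d] for d in DOMAINS}
    overall = round(sum(domain_scores.values()) / len(DOMAINS))
    return domain_scores, overall


# Hypothetical ratings: both graders give 4s except for Language Use.
g1 = {"Ideas and Analysis": 4, "Development and Support": 4,
      "Organization": 4, "Language Use and Conventions": 3}
g2 = {"Ideas and Analysis": 4, "Development and Support": 4,
      "Organization": 4, "Language Use and Conventions": 4}

print(act_writing_score(g1, g2))  # domain scores 8, 8, 8, 7 -> overall 8
```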

The Complete ACT Grading Rubric

Based on ACT, Inc.'s stated grading criteria, I've gathered all the relevant essay-grading criteria into a chart. The information itself is available on the ACT's website, and there's more general information about each of the domains here. The columns in this rubric are titled as per the ACT's own domain areas, with the addition of another category that I named ("Mastery Level").

ACT Writing Rubric: Item-by-Item Breakdown

Whew. That rubric might be a little overwhelming—there's so much information to process! Below, I've broken down the essay rubric by domain, with examples of what a 3- and a 6-scoring essay might look like.

Ideas and Analysis

The Ideas and Analysis domain is the rubric area most intimately linked with the basic ACT essay task itself. Here's what the ACT website has to say about this domain:

Scores in this domain reflect the ability to generate productive ideas and engage critically with multiple perspectives on the given issue. Competent writers understand the issue they are invited to address, the purpose for writing, and the audience. They generate ideas that are relevant to the situation.

Based on this description, I've extracted the four key things you need to do in your essay to score well in the Ideas and Analysis domain.

#1: Choose a perspective on this issue and state it clearly.

#2: Compare at least one other perspective to the perspective you have chosen.

#3: Demonstrate understanding of the ways the perspectives relate to one another.

#4: Analyze the implications of each perspective you choose to discuss.

There's no cool acronym, sorry. I guess a case could be made for "ACCE," but I wanted to list the points in the order of importance, so "CEAC" it is.

Fortunately, the ACT Writing Test provides you with the three perspectives to analyze and choose from, which will save you some of the time of "generating productive ideas." In addition, "analyzing each perspective" does not mean that you need to argue from each of the points of view. Instead, you need to choose one perspective to argue as your own and explain how your point of view relates to at least one other perspective by evaluating how correct the perspectives you discuss are and analyzing the implications of each perspective.

Note: While it is technically allowable for you to come up with a fourth perspective as your own and to then discuss that point of view in relation to another perspective, we do not recommend it. 40 minutes is already a pretty short time to discuss and compare multiple points of view in a thorough and coherent manner; coming up with new, clearly articulated perspectives takes time that could be better spent devising a thorough analysis of the relationship between multiple perspectives.

To dig deeper into what falls under the Ideas and Analysis domain, I'll use a sample ACT Writing prompt and the three perspectives provided:

Many of the goods and services we depend on daily are now supplied by intelligent, automated machines rather than human beings. Robots build cars and other goods on assembly lines, where once there were human workers. Many of our phone conversations are now conducted not with people but with sophisticated technologies. We can now buy goods at a variety of stores without the help of a human cashier. Automation is generally seen as a sign of progress, but what is lost when we replace humans with machines? Given the accelerating variety and prevalence of intelligent machines, it is worth examining the implications and meaning of their presence in our lives.

Perspective One : What we lose with the replacement of people by machines is some part of our own humanity. Even our mundane daily encounters no longer require from us basic courtesy, respect, and tolerance for other people.

Perspective Two : Machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone.

Perspective Three : Intelligent machines challenge our long-standing ideas about what humans are or can be. This is good because it pushes both humans and machines toward new, unimagined possibilities.

First, in order to "clearly state your own perspective on the issue," you need to figure out what your point of view, or perspective, on this issue is going to be. For the sake of argument, let's say that you agree the most with the second perspective. An essay that scores a 3 in this domain might simply restate this perspective:

I agree that machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone.

In contrast, an essay scoring a 6 in this domain would likely have a more complex point of view (with what the rubric calls "nuance and precision in thought and purpose"):

Machines will never be able to replace humans entirely, as creativity is not something that can be mechanized. Because machines can perform delicate and repetitive tasks with precision, however, they are able to take over for humans with regards to low-skill, repetitive jobs and high-skill, extremely precise jobs. This then frees up humans to do what we do best—think, create, and move the world forward.

Next, you must compare at least one other perspective to your perspective throughout your essay, including in your initial argument. Here's what a 3-scoring essay's argument would look like:

I agree that machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone. Machines do not cause us to lose our humanity or challenge our long-standing ideas about what humans are or can be.

And here, in contrast, is what a 6-scoring essay's argument (that includes multiple perspectives) would look like:

Machines will never be able to replace humans entirely, as creativity is not something that can be mechanized, which means that our humanity is safe. Because machines can perform delicate and repetitive tasks with precision, however, they are able to take over for humans with regards to low-skill, repetitive jobs and high-skill, extremely precise jobs. Rather than forcing us to challenge our ideas about what humans are or could be, machines simply allow us to BE, without distractions. This then frees up humans to do what we do best—think, create, and move the world forward.

You also need to demonstrate a nuanced understanding of the way in which the two perspectives relate to each other. A 3-scoring essay in this domain would likely be absolute, stating that Perspective Two is completely correct, while the other two perspectives are absolutely incorrect. By contrast, a 6-scoring essay in this domain would provide a more insightful context within which to consider the issue:

In the future, machines might lead us to lose our humanity; alternatively, machines might lead us to unimaginable pinnacles of achievement. I would argue, however, that projecting possible futures does not make them true, and that the evidence we have at present supports the perspective that machines are, above all else, efficient and effective at completing repetitive and precise tasks.

Finally, to analyze the perspectives, you need to consider each aspect of each perspective. In the case of Perspective Two, this means you must discuss that machines are good at two types of jobs, that they're better than humans at both types of jobs, and that their efficiency creates a better world. The analysis in a 3-scoring essay is usually "simplistic or somewhat unclear." By contrast, the analysis of a 6-scoring essay "examines implications, complexities and tensions, and/or underlying values and assumptions."

To recap, here's what you need to do to score well in the Ideas and Analysis domain:

  • Choose a perspective that you can support.
  • Compare at least one other perspective to the perspective you have chosen.
  • Demonstrate understanding of the ways the perspectives relate to one another.
  • Analyze the implications of each perspective you choose to discuss.

To score well on the ACT essay overall, however, it's not enough to just state your opinions about each part of the perspective; you need to actually back up your claims with evidence to develop your own point of view. This leads straight into the next domain: Development and Support.

Development and Support

Another important component of your essay is that you explain your thinking. While it's obviously important to clearly state what your ideas are in the first place, the ACT essay requires you to demonstrate evidence-based reasoning. As per the description on ACT.org [bolding mine]:

Scores in this domain reflect the ability to discuss ideas, offer rationale, and bolster an argument. Competent writers explain and explore their ideas, discuss implications, and illustrate through examples. They help the reader understand their thinking about the issue.

"Machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone."

In your essay, you might start out by copying the perspective directly into your essay as your point of view, which is fine for the Ideas and Analysis domain. To score well in the Development and Support domain and develop your point of view with logical reasoning and detailed examples, however, you're going to have to come up with reasons for why you agree with this perspective and examples that support your thinking.

Here's an example from an essay that would score a 3 in this domain:

Machines are good at low-skill, repetitive jobs and at high-speed, extremely precise jobs. In both cases, they work better than humans. For example, machines are better at printing things quickly and clearly than people are. Prior to the invention of the printing press by Gutenberg people had to write everything by hand. The printing press made it faster and easier to get things printed because things didn't have to be written by hand all the time. In the world today we have even better machines like laser printers that print things quickly.

Essays scoring a 3 in this domain tend to have relatively simple development and tend to be overly general, with imprecise or repetitive reasoning or illustration. Contrast this with an example from an essay that would score a 6:

Machines are good at low-skill, repetitive jobs and at high-speed, extremely precise jobs. In both cases, they work better than humans. Take, for instance, the example of printing. As a composer, I need to be able to create many copies of my sheet music to give to my musicians. If I were to copy out each part by hand, it would take days, and would most likely contain inaccuracies. On the other hand, my printer (a machine) is able to print out multiple copies of parts with extreme precision. If it turns out I made an error when I was entering in the sheet music onto the computer (another machine), I can easily correct this error and print out more copies quickly.

The above example of the importance of machines to composers uses "an integrated line of skillful reasoning and illustration" to support my claim ("Machines are good at low-skill, repetitive jobs and at high-speed, extremely precise jobs. In both cases, they work better than humans"). To develop this example further (and incorporate the "This efficiency leads to a more prosperous and progressive world for everyone" facet of the perspective), I would need to expand my example to explain why it's so important that multiple copies of precisely replicated documents be available, and how this affects the world.


World Map - Abstract Acrylic by Nicolas Raymond, used under CC BY 2.0. Resized from original.

Organization

Essay organization has always been integral to doing well on the ACT essay, so it makes sense that the ACT Writing rubric has an entire domain devoted to this. The organization of your essay refers not just to the order in which you present your ideas in the essay, but also to the order in which you present your ideas in each paragraph. Here's the formal description from the ACT website:

Scores in this domain reflect the ability to organize ideas with clarity and purpose. Organizational choices are integral to effective writing. Competent writers arrange their essay in a way that clearly shows the relationship between ideas, and they guide the reader through their discussion.

Making sure your essay is logically organized relates back to the "development" part of the previous domain. As the above description states, you can't just throw examples and information into your essay willy-nilly, without any regard for the order; part of constructing and developing a convincing argument is making sure it flows logically. A lot of this organization should happen while you are in the planning phase, before you even begin to write your essay.

Let's go back to the machine intelligence essay example again. I've decided to argue for Perspective Two, which is:

"Machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone."

An essay that scores a 3 in this domain would show a "basic organizational structure," which is to say that each perspective analyzed would be discussed in its own paragraph, "with most ideas logically grouped" (for example, an introduction, one body paragraph per perspective, and a conclusion).

An essay that scores a 6 in this domain, on the other hand, has a lot more to accomplish. The "controlling idea or purpose" behind the essay should be clearly expressed in every paragraph, and ideas should be ordered in a logical fashion so that there is a clear progression from the beginning to the end: for example, an introduction that frames the issue and states a nuanced thesis, body paragraphs that each relate one perspective back to that thesis, and a conclusion that ties the discussion together.

In the 6-scoring organization, the unifying idea is that machines are helpful (it comes up in each paragraph), and the progression of ideas makes more sense than in the 3-scoring version. This is certainly not the only way to organize an essay on this particular topic, or even using this particular perspective. Your essay does, however, have to be organized, rather than consist of a bunch of ideas thrown together.

Here are my Top 5 ACT Writing Organization Rules to follow:

#1: Be sure to include an introduction (with your thesis stating your point of view), paragraphs in which you make your case, and a conclusion that sums up your argument

#2: When planning your essay, make sure to present your ideas in an order that makes sense (and follows a logical progression that will be easy for the grader to follow).

#3: Make sure that you unify your essay with one main idea . Do not switch arguments partway through your essay.

#4: Don't write everything in one huge paragraph. If you're worried you're going to run out of space to write and can't make your handwriting any smaller and still legible, you can try using a paragraph symbol, ¶, at the beginning of each paragraph as a last resort to show the organization of your essay.

#5: Use transitions between paragraphs (usually the last line of the previous paragraph and the first line of the next paragraph) to "strengthen the relationships among ideas" (source). This means going above and beyond "First of all...Second...Lastly" at the beginning of each paragraph. Instead, use the transitions between paragraphs as an opportunity to describe how that paragraph relates to your main argument.

Language Use

The final domain on the ACT Writing rubric is Language Use and Conventions. This is the item that includes grammar, punctuation, and general sentence structure issues. Here's what the ACT website has to say about Language Use:

Scores in this domain reflect the ability to use written language to convey arguments with clarity. Competent writers make use of the conventions of grammar, syntax, word usage, and mechanics. They are also aware of their audience and adjust the style and tone of their writing to communicate effectively.

I tend to think of this as the "be a good writer" category, since many of the standards covered in the above description are ones that good writers will automatically meet in their writing. On the other hand, this is probably the area where non-native English speakers will struggle the most, as you must have a fairly solid grasp of English to score above a 2 in this domain. The good news is that by reading this article, you're already one step closer to improving your "Language Use" on ACT Writing.

There are three main parts of this domain:

#1: Grammar, Usage, and Mechanics

#2: Sentence Structure

#3: Vocabulary and Word Choice

I've listed them (and will cover them) from lowest to highest level. If you're struggling with multiple areas, I highly recommend starting out with the lowest-level issue, as the components tend to build on each other. For instance, if you're struggling with grammar and usage, you need to focus on fixing that before you start to think about precision of vocabulary/word choice.

Grammar, Usage, and Mechanics

At the most basic level, you need to be able to "effectively communicate your ideas in standard written English" (ACT.org). First and foremost, this means that your grammar and punctuation need to be correct. On ACT Writing, it's all right to make a few minor errors if the meaning is clear, even on essays that score a 6 in the Language Use domain; however, the more errors you make, the more your score will drop.

Here's an example from an essay that scored a 3 in Language Use:

Machines are good at doing there jobs quickly and precisely. Also because machines aren't human or self-aware they don't get bored so they can do the same thing over & over again without getting worse.

While the meaning of the sentences is clear, there are several errors: the first sentence uses "there" instead of "their," the second sentence is a run-on sentence, and the second sentence also uses the abbreviation "&" in place of "and." Now take a look at an example from a 6-scoring essay:

Machines excel at performing their jobs both quickly and precisely. In addition, since machines are not self-aware they are unable to get "bored." This means that they can perform the same task over and over without a decrease in quality.

This example solves the abbreviation and "there/their" issues. The second sentence is still missing a comma (after "self-aware"), but the more serious run-on problem is gone.

Our Complete Guide to ACT Grammar might be helpful if you just need a general refresh on grammar rules. In addition, we have several articles that focus in on specific grammar rules, as they are tested on ACT English; while the specific ways in which ACT English tests you on these rules isn't something you'll need to know for the essay, the explanations of the grammar rules themselves are quite helpful.

Sentence Structure

Once you've gotten down basic grammar, usage, and mechanics, you can turn your attention to sentence structure. Here's an example of what a 3-scoring essay in Language Use (based on sentence structure alone) might look like:

Machines are more efficient than humans at many tasks. Machines are not causing us to lose our humanity. Instead, machines help us to be human by making things more efficient so that we can, for example, feed the needy with technological advances.

The sentence structures in the above example are not particularly varied (two sentences in a row start with "Machines are"), and the last sentence has a very complicated/convoluted structure, which makes it hard to understand. For comparison, here's a 6-scoring essay:

Machines are more efficient than humans at many tasks, but that does not mean that machines are causing us to lose our humanity. In fact, machines may even assist us in maintaining our humanity by providing more effective and efficient ways to feed the needy.

For whatever reason, I find that when I'm under time pressure, my sentences maintain variety in their structures but end up getting really awkward and strange. A real-life example: once I described a method of counteracting dementia as "supporting persons of the elderly persuasion" during a hastily written psychology paper. I've found the best ways to counteract this are as follows:

#1: Look over what you've written and change any weird wordings that you notice.

#2: If you're just writing a practice essay, get a friend/teacher/relative who is good at writing (in English) to look over what you've written and point out issues (this is how my own awkward wording was caught before I handed in the paper). This obviously does not apply when you're actually taking the ACT, but it is very helpful to have someone else look over any practice essays you write and point out issues you may not notice yourself.

Vocabulary and Word Choice

The icing on the "Language Use" domain cake is skilled use of vocabulary and correct word choice. Part of this means using more complicated vocabulary in your essay. Once more, look at this example from a 3-scoring essay (spelling corrected):

Machines are good at doing their jobs quickly and precisely.

Compare that to this sentence from a 6-scoring essay:

Machines excel at performing their jobs both quickly and precisely.

The 6-scoring essay uses "excel" and "performing" in place of "are good at" and "doing." This is an example of using language that is both more skillful ("excel" is more advanced than "are good at") and more precise ("performing" is a more precise word than "doing"). It's important to make sure that, when you do use more advanced words, you use them correctly. Consider the below sentence:

"Machines are often instrumental in ramifying safety features."

The sentence uses a couple of advanced vocabulary words, but since "ramifying" is used incorrectly, the language use in this sentence is neither skillful nor precise. Above all, your word choice and vocabulary should make your ideas clearer, not make them harder to understand.


untitled is also an adjective by Procsilas Moscas, used under CC BY 2.0. Resized and cropped from original.

How Do I Use the ACT Writing Grading Rubric?

Okay, we've taken a look at the ACTual ACT Writing grading rubric and gone over each domain in detail. To finish up, I'll go over a couple of ways the scoring rubric can be useful to you in your ACT essay prep.

Use the ACT Writing Rubric To...Shape Your Essays

Now that you know what the ACT is looking for in an essay, you can use that knowledge to guide what you write about in your essays...and how you develop and organize what you say!

Because I'm an Old™ (not actually trademarked), and because I'm from the East Coast, I didn't really know much about the ACT prior to starting my job at PrepScholar. People didn't really take it in my high school, so when I looked at the grading rubric for the first time, I was shocked to see how different the ACT essay was (as compared to the more familiar SAT essay).

Basically, by reading this article, you're already doing better than high school me.

An artist's impression of L. Staffaroni, age 16 (look, junior year was/is hard for everyone).

Use the ACT Writing Rubric To...Grade Your Practice Essays

The ACT can't really give you an answer key to the essay the way it can for the other sections (Reading, Math, etc.). There are some examples of essays at each score point on the ACT website, but those examples assume that students will be at an equal level in each of the domains, which won't necessarily be true for you. Even if a sample essay is provided as part of a practice test answer key, it will probably use a different context, follow a different logical progression, or even argue a different viewpoint than your own essay.

The ACT Writing rubric is the next best thing to an essay answer key. Use it as a filter through which to view your essay. You won't have the time to become an expert at applying the rubric's criteria to make sure your essay lines up with the ACT's grading principles and standards, and that isn't your job; your job is to write the best essay that you can. If you're not confident in your ability to spot grammar, usage, and mechanics issues, I highly recommend asking a friend, teacher, or family member who is really good at (English) writing to look over your practice essays and point out the mistakes.
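If you want to turn rubric-based self-grading into an actual practice score, here is one rough way to do it: rate the essay in each of the four domains, then roll the ratings up the way the ACT does. The short Python sketch below is an unofficial illustration only. It assumes the standard setup in which two readers each rate the four domains from 1 to 6, the two ratings are summed into 2-12 domain scores, and the subject-level Writing score is the average of the four domain scores rounded to the nearest whole number; the reader names and ratings are made up, and you should double-check the current ACT scoring rules before relying on the exact arithmetic.

# Rough sketch of ACT-style Writing scoring, under the assumptions stated above.
DOMAINS = (
    "Ideas and Analysis",
    "Development and Support",
    "Organization",
    "Language Use and Conventions",
)

def writing_score(reader_one, reader_two):
    """Combine two readers' 1-6 domain ratings into 2-12 domain scores and a subject score."""
    domain_scores = {}
    for domain in DOMAINS:
        r1, r2 = reader_one[domain], reader_two[domain]
        if not (1 <= r1 <= 6 and 1 <= r2 <= 6):
            raise ValueError(f"Ratings for {domain!r} must be between 1 and 6")
        domain_scores[domain] = r1 + r2  # each domain lands on a 2-12 scale
    # Assumed rounding: nearest whole number (check how the ACT handles exact halves).
    subject = round(sum(domain_scores.values()) / len(DOMAINS))
    return domain_scores, subject

# Made-up example: you and a friend each "grade" the same practice essay.
me = {"Ideas and Analysis": 4, "Development and Support": 4,
      "Organization": 5, "Language Use and Conventions": 3}
friend = {"Ideas and Analysis": 5, "Development and Support": 4,
          "Organization": 5, "Language Use and Conventions": 5}

domain_scores, subject = writing_score(me, friend)
print(domain_scores)  # e.g. {'Ideas and Analysis': 9, 'Development and Support': 8, ...}
print(subject)        # 9 -- subject-level Writing score on the 2-12 scale

Even if you never run it, the takeaway is the same one the averaging makes obvious: one weak domain drags the whole score down, so it pays to figure out which of the four domains is your weakest and focus your practice there.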

If you really want custom feedback on your practice essays from experienced essay graders, may I also suggest the PrepScholar test prep platform? As I manage all essay grading, I happen to know a bit about the essay part of this platform, which provides you with both an essay grade and custom feedback. Learn more about PrepScholar ACT Prep and our essay grading here!

What's Next?

Desirous of some more sweet sweet ACT essay articles? Why not start with our comprehensive guide to the ACT Writing test and how to write an ACT essay, step-by-step? (Trick question: obviously you should do this.)

Round out your dive into the details of the ACT Writing test with tips and strategies to raise your essay score, information about the best ACT Writing template, and advice on how to get a perfect score on the ACT essay.

Want actual feedback on your essay? Then consider signing up for our PrepScholar test prep platform. Included in the platform are practice tests and practice essays graded by experts here at PrepScholar.


Laura graduated magna cum laude from Wellesley College with a BA in Music and Psychology, and earned a Master's degree in Composition from the Longy School of Music of Bard College. She scored in the 99th percentile on the SAT and GRE and loves advising students on how to excel in high school.




