Designing and Using Rubrics

Grading rubrics (structured scoring guides) can make writing criteria more explicit, improving student performance and making valid and consistent grading easier for course instructors. This page provides an overview of rubric types and offers guidelines for their development and use.

Why use a rubric?

While grading criteria can come in many forms—a checklist of requirements, a description of grade-level expectations, articulated standards, or a contract between instructor and students, to name but a few options—they often take the form of a rubric, a structured scoring guide.

Because of their flexibility, rubrics can provide several benefits for students and instructors:

What types of rubrics are there?

Rubrics come in many forms. Here are some of the key types, using terms introduced by John Bean (2011), along with the advantages and disadvantages of each type as detailed by the Center for Advanced Research on Language Acquisition (CARLA).

Holistic Rubrics stress an overall evaluation of the work by creating single-score categories (letter or numeric). Holistic rubrics are often used in standardized assessments, such as Advanced Placement exams. Here is a sample of a holistic rubric.

Some potential benefits of holistic rubrics:

Some potential challenges of holistic rubrics:

Analytic Rubrics stress the weight of different criteria or traits, such as content, organization, use of conventions, etc. Most analytic rubrics are formatted as grids. Here is a sample of an analytic rubric.

Some potential benefits of analytic rubrics:

Some potential challenges of analytic rubrics:

Generic Rubrics can take holistic or analytic forms. In generic rubrics, the grading criteria are generalized in such a way that the rubric can be used for multiple assignments and/or across multiple sections of courses. Here is a sample of a generic rubric.

Some potential benefits of generic rubrics:

Some potential challenges of generic rubrics:

Task-Specific Rubrics closely align the grading criteria with the language and specifications in the assignment prompt. Here is a sample of a task-specific rubric.

Some potential benefits of task-specific rubrics:

Some potential challenges of task-specific rubrics:

Guidelines for Creating a Writing Rubric

Step 1: Identify your grading criteria.

What are the intended outcomes for the assignment? What do you want students to do or demonstrate? What are the primary dimensions (note: these are often referred to as “traits” or as “criteria”) that count in the evaluation? Try writing each one as a noun or noun phrase—for example, “Insights and ideas that are central to the assignment”; “Address of audience”; “Logic of organization”; “Integration of source materials.”

Suggestion: Try not to exceed ten total criteria. Too many criteria can be hard to distinguish from one another, and you may find yourself repeatedly clarifying the distinctions for students (or for yourself!).

Step 2: Describe the levels of success for each criterion.

For each trait or criterion, consider a 2–4-point scale (e.g. strong, satisfactory, weak). For each point on the scale, describe the performance.

Suggestions: Either begin with the optimum performance and then describe lower levels as falling short of it (adequately, insufficiently, etc.), OR fully describe a baseline performance and then add values. To write an effective performance level for a criterion, describe in precise language what the text is doing successfully.

Effective grading criteria are…

  1. Explicit and well detailed, leaving little room for unstated assumptions.

Ineffective: Includes figures and graphs.

Effective: Includes figures that are legible and labeled accurately, and that illustrate data in a manner free from distortion.

  2. Focused on qualities, not components, segments, or sections.

Ineffective: Uses the IMRAD structure.

Effective: Includes a materials and methods section that identifies all components, technical standards, equipment, and methodological descriptions such that a professional might reproduce the research.

  3. Focused on discrete features, and do not try to do too much.

Ineffective: Contains at least five sources.

Effective: Uses research from carefully vetted sources, presented with an in-text and terminal citation, to support assertions.

  4. Focused on observable characteristics of the writing, not impressions of the writer’s intent.

Ineffective: Does not use slang or jargon.

Effective: Uses language appropriate, in context, for communication with fellow professionals and with patients.

Step 3: Weight the criteria.

Once the criteria have been identified and the performance levels described, decide how much weight each criterion should carry relative to the others.

Suggestion: If you use a point-based grading system, consider using a range of points within each performance level, and make sure the points for each criterion reflect its relative value. Rubrics without carefully determined relative weights can produce a final score that does not align with the instructor’s expectations. Here is a sample of a rubric with a range of points within each performance level.
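
To see why relative weights matter, here is a minimal sketch in Python, using hypothetical criteria, point values, and scores rather than figures from any actual course, of how the same paper can receive noticeably different totals depending on whether the point values reflect each criterion’s importance.

    # A minimal sketch with hypothetical criteria and numbers: the paper below is
    # weak on ideas but flawless on conventions.
    performance = {                  # fraction of each criterion the paper earns
        "Insights and ideas": 0.60,
        "Logic of organization": 0.68,
        "Integration of sources": 0.64,
        "Use of conventions": 1.00,
    }

    weighted_points = {              # point values that reflect relative importance
        "Insights and ideas": 40,
        "Logic of organization": 25,
        "Integration of sources": 25,
        "Use of conventions": 10,
    }

    unweighted_points = {c: 25 for c in performance}  # every criterion worth the same

    def total(points):
        return sum(points[c] * performance[c] for c in performance)

    print(f"Weighted total:   {total(weighted_points):.0f}/100")    # 67: the ideas criterion dominates
    print(f"Unweighted total: {total(unweighted_points):.0f}/100")  # 73: conventions inflate the grade

In this made-up case the gap is six points, potentially a full letter grade, which is the kind of mismatch that careful weighting is meant to prevent.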

Step 4: Create a format for the rubric.

When the specific criteria and levels of success have been named and ranked, they can be sorted into a variety of formats and distributed with the assignment. The right format will depend on how and when you are using the rubric. Consider these three examples of an Anthropology rubric and how each format might be useful (or not), depending on the course context. [Rubric 1, Rubric 2, Rubric 3]

Suggestion: Consider allowing space on the rubric to insert comments on each item and again at the end. Regardless of how well your rubric identifies, describes, and weighs the grading criteria, students will still appreciate and benefit from brief comments that personalize your assessment.

Step 5: Test (and refine) the rubric.

Ideally, a rubric will be tested in advance of full implementation. A practical way to test the rubric is to apply it to a subset of student assignments. Even after you have tested and used the rubric, you will likely discover, as with the assignment prompt itself, that there are parts that need tweaking and refinement.

Suggestion: A peer review of the rubric before it is used on an assignment will allow you to take stock of the questions, confusions, or issues students may have about your rubric, so you can make timely and effective adjustments.

Additional Ways to Use Rubrics

Beyond their value as formative and summative assessment tools, rubrics can be used to support teaching and learning in the classroom.

Here are three suggestions for additional uses:

  1. For in-class norming sessions with students—effective for discussing, clarifying, and reinforcing writing criteria;
  2. For constructing rubric criteria and values with students—most effective when students are quite familiar with the specific writing genre (e.g. capstone-level writing);
  3. For guiding a peer-review session.

Any Downsides to Rubrics?

While many faculty members use rubrics, some resist them because they worry that rubrics cannot convey an authentic and nuanced assessment. As Bob Broad (2003) argues, rubrics can leave out many of the rhetorical qualities and contexts that influence how well a work is received. Rubrics, Broad maintains, convey a temporary sense of standardization that does not capture the varied ways real readers respond to a given work. John Bean (2011) has also described this as the “myth of the universal reader” and the “problem of implied precision” (279). Of course, the alternative to using a rubric, such as providing a holistic grade with comments that justify it (still a common practice among instructors), is often labor-intensive and poses its own challenges for consistent assessment across all students enrolled in a course. Ultimately, a rubric’s impact depends on the criteria on which it is built and the ways it is used.

Further Resources

Moskal, Barbara. “Scoring Rubrics: What, When and How?” Practical Assessment, Research & Evaluation 7:3, 2000.