In the current era of testing and accountability, instrumental music educators must document evidence of individual student growth. Much of this data is derived from individual performance-based assessments.
For example, students might complete a playing test on a scale or excerpt to determine their proficiency on a given task. While these assessments are common practice in instrumental music classrooms, there are considerable opportunities to improve their utility and effectiveness. In this article, I will explore several considerations to ensure performance-based assessments yield more than a data point for accountability, encouraging both students and teachers to use the assessment process more consciously for long-term learning, skill development, and growth.
Fine-Tuning Assessment Tools
Checklists, rating scales, and rubrics are effective tools to objectively assess instrumental music students’ performance. However, when crafting these assessment tools, it is important to address a few key items to ensure that each provides the student with appropriate, useful, and productive information.
First, it is critical to create assessment tools that use developmentally appropriate language. When developing a rubric for beginning band students, for instance, you would not use language such as, “The beat is erratic,” or, “Rhythms are seldom accurate.” This language is not suitable for young students. Instead, consider language that better suits students’ developmental needs, such as, “The beat is not yet steady,” or, “Rhythms are not yet accurate most of the time.” Amending your assessment tool language can ensure students easily understand learning and performance objectives. Moreover, when students receive a completed rubric following a performance-based assessment, clear language gives them more useful information moving forward.
Another common issue with assessment tools is language that fails to scale to students’ developing skillsets. For example, to evaluate tone quality, a rating scale assessment might read, “Tone quality is characteristic of the instrument,” and the evaluator would subsequently rate the extent to which a student’s performance represented that sound. However, it is again important to consider the student’s level. What constitutes a characteristic tone for a given instrument?
Certainly, tone operates on a sliding scale, depending on the student’s experience. An eighth-grade violinist is not expected to sound like a collegiate violinist, but rating an eighth-grade student highly in this category might create the misconception that he or she has “arrived” in terms of tone. As such, amending this language to read, “Tone quality is characteristic of the instrument, given the student’s level,” can resolve this issue.
Assessment tools might also be unclear when evaluative categories cover multiple skillsets. On a playing test, perhaps one criterion reads, “Student plays all notes and rhythms accurately.” This is certainly a worthy objective. However, this criterion includes two distinct skillsets: performing correct pitches and performing accurate rhythms. Combining these two skillsets into a single evaluative category can blur which dimension of musicianship needs the most attention. For example, if a student plays all of the rhythms accurately but misses several notes, it can be unclear how to apply this assessment tool language. Instead, distinguish assessment criteria from one another by using distinct categories whenever possible.
Refining Feedback to Encourage Follow-Through
Assessment tools do not typically provide corrective directives that students can apply to improve their performance. When an assessment is returned, students are often given little information about how to move forward. If assessment language indicates a student clapped rhythms accurately 75% of the time, does the student know (a) which rhythms were incorrect and (b) how to fix those issues? There are several ways to craft assessment tools to ensure students have this information.
Younger students can benefit from visual assessment tools. If a criterion centers on proper instrument carriage or bow hold, for example, it might be more useful for students to see a series of images representing a range of instrument carriage and posture. Teachers can circle the image that best represents the student’s playing position, while also providing an image of the desired instrument carriage.
Seeing this goal in a clear image can yield more useful information than language simply indicating that posture and playing position were lacking.

Teachers can also craft assessment tools to include a checklist of suggestions for improvement on a given criterion. If an evaluation indicates a clarinetist’s tone is unsupported, the assessment tool could include a complementary checklist of exercises targeted toward tone quality issues. For example, the checklist could recommend specific long tone or breathing exercises that the student might explore to address an unsupported sound. A list of pre-established suggestions that teachers can quickly check off aids assessment efficiency, as there is not always adequate time to provide extensive written commentary and feedback during a performance-based assessment.

It can also be handy to have “recipe cards” for common issues that students might experience. These cards explain the nature of a student’s performance challenge while also giving ideas on how to improve. Having this tool at the ready makes it easy to provide students with a practical approach moving forward.
If a student has a pervasive issue with a dimension of musicianship, teachers can also encourage follow-through using practice “bingo cards.” Bingo cards help students craft a comprehensive practice session each day: in choose-your-own-adventure fashion, students select exercises from the card that spell “Bingo.” However, if performance assessments indicate a student is struggling specifically with articulation, a teacher might encourage the student to focus just on the “O” column one to two times a week. This approach provides a range of tools to help the student address a given area of concern.
Finally, to encourage follow-through on any performance-based assessment, teachers should include a “check-in” date, giving students a specific time frame to address areas of concern using the feedback and suggestions provided. This practice reinforces the philosophy that performance-based assessments are not a one-and-done experience; rather, each evaluation is a pit stop on a longer path toward continuous musical improvement.