So Much Assessment
So Little Time

Ideas for Making Up Your Own Mind About the Value of
Assessments and Suggestions for Making Assessment
Manageable and Effective

by Steve Peha
________________________________________________

Seductive Reasoning

I remember distinctly the first time I heard about assessment. My mother, an elementary and middle school teacher with 20+ years of experience, was doing her Master's degree. It was the mid-1980s, the dawning of the Age of Assessment. I was home from college and we were talking over dinner with friends about her studies. Of course, the only kind of "assessment" I knew about then was grading, and so it was exciting for me to hear that teachers were exploring so many other, and seemingly more promising, ways of figuring out what kids could do and how to help them do it better.

Ten years later I was learning about assessment firsthand as I started working with kids in classrooms. And I was even more excited about it. I loved doing things with kids that allowed me to gather assessment data, and it was fun to pore over the results looking for patterns and developing insights. I used as many different kinds of assessment as I could find, and I worked hard to create good tools and procedures for taking assessments and tracking the results.

But no matter how much or how well I assessed, student performance remained about the same.

Fascinated as I was with the idea of assessment, I had a tendency — a desire, I realized later — to overlook the fact that the kids weren't really improving much. They must be making some progress, I thought. Maybe I just need more assessments to figure out what it is. In the end, however, this proved not to be the case.

Assessment made me feel knowledgeable, technical, responsible, and professional. But it didn't make me competent. In fact, the more I did it, the less able I became to judge accurately the effect of my teaching on student performance. I allowed myself to be seduced by the siren song we hear all too frequently these days: better data equals better teaching.

The Assessment-Instruction Paradox

The conventional wisdom about assessment that I was introduced to was this: assess first, instruct second. On the face of things, this makes a lot of sense. After all, how can you know what to teach if you don't know what students need to learn? And how can you know what they need to learn if you don't assess first?

Consider two identical schools, "A" and "T". Both would like to make significant improvements in a particular subject area. Both have reasonable budgets for training and materials. Both have staffs that are committed to working together toward a common goal of raising achievement school-wide by a certain reasonable amount as measured by the results of the same state test.

Following the conventional wisdom, School "A" invests its time, energy, and effort in a program of rigorous training in assessment. Methods are surveyed. Experts are brought in. Training is conducted. Samples of student work are reviewed. Rubrics are created. Recording systems are developed. Data is gathered. Everyone works hard and does a good job of applying what they learn.

School "T" takes a different approach. Instead of focusing on assessment, they choose to focus on teaching. Methods are surveyed. Experts brought in. Training is conducted. New practices are perfected in the classroom. Strategies are developed and executed. Effective techniques are shared and developed. Everyone works hard and does a good job of applying what they learn.

When the test results come back the following year, which school is likely to have made better gains?

If the chickeny-eggy quality of this argument is starting to bother you, you're not alone. It bugged me for about a year and a half. And then it hit me right between the eyes: I could continue to perfect my skills in assessment, but I'd be doing more for students and for myself if I perfected my teaching skills instead. Of course, as my teaching improved, my results improved. And the difference was so dramatic that — paradoxically — I needed less and less assessment to see it.

Assessment is only valuable if we can use the data we gather to improve our instruction. If we don't know how to teach effectively in the first place, it isn't likely that conducting assessments will teach us how. In the end, we can assess all we want, and all we'll ever get out of it is more data. Why not just cut to the chase and spend what little time we have studying and implementing effective teaching?