Monday, October 29, 2018

Assessing to Develop Skill, Not Identify It

Thomas Guskey said that a teacher’s job should be to “develop talent, not select talent.” This statement is easy to agree with on the surface--of course we are developing talent, we’re teachers! Students come to us, we teach them, and they leave knowing, understanding, or being able to do more, right? But the true test of whether we are actually developing or selecting is to examine our assessment systems. Does the way we assess ensure growth, or does it merely capture growth when it happens? Is the system we set up designed to intentionally improve all students’ skills, or to identify those who can and those who cannot?

Here’s an example:

Yesterday in class, we asked our students to use what they had learned about brain-friendly and brain-hostile practices (from Thomas Armstrong’s The Power of the Adolescent Brain) to evaluate a half-dozen models of education. They had spent a previous class learning about the models (e.g., Montessori, KIPP, place-based, language immersion schools, etc.) and taking notes on each, and they had read and talked about the neuroscience; ultimately, we wanted them to use their evaluation of the models to determine which model (or combination of models) would be most effective in our community.
Simple, right?

Out of 20 students, here’s what we got:

  • Three students nailed it. They applied their knowledge of neuroscience to the provided models and then used that application to evaluate the potential effectiveness of the models in our own community. 
  • Eight students were close, but they jumped right to the evaluation, so their findings, while occasionally referencing the neuroscience, lacked the weight of the first set. 
  • Six students were close in a different way. They had very thorough application of the neuroscience, color-coding and using symbols to critically read and apply a variety of elements of the brain research, but they forgot about the overall goal, which was to evaluate effectiveness of a model in our community. 
  • Three students gave very detailed explanations of their own opinions about the models, using the lens of their experience to highlight pros and cons. 

In the (not-too-distant) past, we would have scored these (using our general critical thinking scale, which includes evaluation), written comments to 17 of them about what they were missing, recorded a few 4s, lots of 3s, and a few 2s or 1s in the grade book, and then moved on to the next set of content. In other words, we would have “selected talent,” identifying those who could do what we asked and those who could not.

Even though we thought we had been clear in our expectations, we fell short in our instruction of the central skill we wanted--evaluation. We assumed that because we had taught the content--the neuroscience--students would be able to successfully apply it to a skill we had merely explained. The results of our assessment showed otherwise.

Instead of recording scores and moving on, we discovered that we had to do the hard work of determining and articulating what exactly it means to evaluate an idea or a model. It’s not enough for us to know what we want; we also have to be able to communicate the increasing levels of skill complexity that will lead to what we want--and then we have to design incremental instruction and practice to ensure that all of our students improve on the skill. In other words, we have to intentionally develop the talent. (And after our next assessment that uses this skill, we will likely need to differentiate in order to continue that development.)

After yesterday’s class, we determined we needed a separate learning scale for Evaluating a Claim, Model, or Idea, as it’s a skill we will continue to instruct and apply throughout the year. Our general critical thinking scale would have allowed us to assess, but not to instruct what we really intended to instruct. The student work we collected yesterday has helped us figure out what this might look like, and we will continue to test and revise this scale until it becomes an effective tool for development of the skill, not just for assessment of the skill.

Evaluating a Claim, Model, or Idea (Working Draft!)
There used to be an element of assessment for us that involved closing our eyes, crossing our fingers, and hoping students nailed it. And honestly (and with a bit of embarrassment), there was often an element that included rationalizing poor performance by blaming the learners (they didn’t try, they didn’t listen, they didn’t focus, etc.). We used rubrics to assess--and maybe to explain requirements--but we didn’t see their value as instructional tools. In other words, we used them to select, not to develop.

When we accept that our job as teachers must be to develop learning, not merely to identify it when it happens, then everything changes. Our assessments become diagnostic, and the results are as much (or more) about us and for us as they are about or for our students. We become compelled to use the results of those assessments to shift (or completely change) direction in order to improve student success. And when our success as teachers becomes inextricably tied to our students’ success, we become better teachers.