There is a long article in this morning’s LA Times about the issue that caused the San Diego Unified superintendent, Terry Grier, to leave after about a year on the job, which was resistance to the teacher evaluation system he wanted to install, called “value added.”
Value added plots three years of a student’s test scores and runs the trend line forward as the student’s expected progress. Deviation from the trend line is “the teacher effect”: if the actual next year’s score is above or below the expected level, the teacher has been effective or ineffective to that extent.
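(To make the mechanics concrete, here is a toy sketch of that calculation in Python. The scores are invented, and this is just the generic fit-a-line-and-extrapolate idea, not any district’s actual formula.)

```python
# Toy illustration of the "value added" idea: fit a straight line to
# three years of a student's test scores, extrapolate one year forward,
# and call the gap between actual and predicted "the teacher effect."
# All numbers below are made up for illustration.

def expected_next_score(scores):
    """Least-squares line through yearly scores, extrapolated one year."""
    n = len(scores)
    years = range(n)
    mean_y = sum(years) / n
    mean_s = sum(scores) / n
    slope = (sum((y - mean_y) * (s - mean_s) for y, s in zip(years, scores))
             / sum((y - mean_y) ** 2 for y in years))
    intercept = mean_s - slope * mean_y
    return intercept + slope * n  # predicted score for the next year

past_scores = [610, 625, 640]                # three prior years, up 15/yr
predicted = expected_next_score(past_scores)  # 655.0
actual = 648
teacher_effect = actual - predicted           # -7.0: "ineffective" teacher?
```

The point of the example is how little it takes to swing the result: one bad year in the input, for any reason at all, and the trend line (and therefore the “teacher effect”) moves with it.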
(UPDATE: I didn't realize when I wrote this that the trend line they ran forward simply assumed the student's percentile would remain constant, which is a much better idea than what I thought they were doing. I've since decided that, while what I describe below would work, it is much harder and more expensive than the value-added system of percentiles.
Except that it still has this problem:) I think this is virtually as simplistic as looking at a single test score. It assumes that the student's life is steady, that the only difference between this year and the last three is the teacher. If a kid's score was down last year because his brother was shot, then another brother will be shot this year, and the next. If his dad wasn't in jail two years ago, he won't be this year. Sometimes that's true.
Given all that, why would I be in favor of merit pay? I'm not so much in favor of it as I am of the opinion that it is inevitable. There are too many forces pushing in that direction to stop it. So what we should do is join the debate of how to do it—not whether—and make sure it’s as good a system as we can get.
The obvious reason to be against merit pay is that so many things affect how well a kid progresses in school: all the messy home-life things like jail, injury, poverty, being cold all winter, and having no prospects in life. Next to those, the variation in teacher ability is a drop in the bucket, and test scores are so misleading as to be useless.
This is true, and it is a conclusive reason to be against either simplistic merit pay or “value added” evaluations, but it is not a reason to be against merit pay itself, because evaluation does not have to be that simplistic. We have very smart people with computers who can tease out very complicated relationships. We should get them to fire up their neural networks and regression algorithms and help us figure out how to really evaluate teachers.
Here’s what I think we should do. First, convene a committee of teachers, child development experts, demographers, statisticians, and busybodies to guess what factors might affect student progress and what data we could gather about them. We have to accept at the outset that in-depth teacher observations won't be part of it. It has to be something that can be put into a spreadsheet cell.
Some that come immediately to mind are:
- Family income (either individual or median income of the zip code)
- Education of the mother
- Educational milieu of the child (the educational level of people the kid is around and kids they play with; API would be a proxy)
- Number of parents in the household
- Birthweight, gestational age at birth
- Childhood illnesses, disabilities
- Deaths or other traumatic events in the family (and at what age they happened, à la Vygotsky)
- Recent moves, divorces, marriages
- Number of days of school missed or late to school
- Quality of previous year’s teacher (and years before)
- IQ score
- Which type of intelligence a person has*
- Outside temperature that winter
- Experience of teacher (my guess is better each year up to about 5 and then constant)
- Education of teacher (more is better to a threshold, then not so much)
- Price of food, clothing, heating oil
You’d also want to know how you were going to evaluate the kids, such as:
- Test scores
- Evaluation of written work of different types
Once the committee decides what information might be useful to track, we run some pilots in different kinds of schools for 5 years. Then we hand all the data over to the math geeks to train their algorithms.
Then, when we run the algorithms on current evaluation data, they should be able to tease out what part of a kid’s progress (or not) is due to the teacher and what part to life. They should also be able to figure out how this teacher does with kids like that compared with how other teachers do with the same kinds of kids. And that would become the basis of merit pay. Q.E.D.
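(For the curious, here is a toy sketch in Python of that last idea: scoring a teacher by how their students do compared with what other teachers get with the same kinds of kids. The teachers, buckets, and numbers are all invented, and a real system would use far richer covariates and actual regression models, but the shape of the computation is the same.)

```python
# Toy "compare with similar kids" calculation: bucket students by a
# covariate profile (here just income band and attendance), then score
# each teacher by their students' average gain relative to the average
# gain ALL teachers got with students in the same bucket.
# Every name and number below is invented for illustration.

from collections import defaultdict

# (teacher, income_band, attendance_band, score_gain)
records = [
    ("Ms. A", "low",  "poor", 4), ("Ms. A", "low", "good", 9),
    ("Mr. B", "low",  "poor", 2), ("Mr. B", "low", "good", 7),
    ("Ms. C", "high", "good", 12), ("Ms. C", "low", "poor", 6),
]

def teacher_effects(records):
    # Average gain per (income, attendance) bucket across all teachers.
    bucket_totals = defaultdict(lambda: [0.0, 0])
    for _, inc, att, gain in records:
        t = bucket_totals[(inc, att)]
        t[0] += gain
        t[1] += 1
    bucket_avg = {k: s / n for k, (s, n) in bucket_totals.items()}

    # Each teacher's mean deviation from their buckets' averages.
    dev = defaultdict(lambda: [0.0, 0])
    for teacher, inc, att, gain in records:
        d = dev[teacher]
        d[0] += gain - bucket_avg[(inc, att)]
        d[1] += 1
    return {t: s / n for t, (s, n) in dev.items()}

effects = teacher_effects(records)  # Ms. A: +0.5, Mr. B: -1.5, Ms. C: +1.0
```

Notice that Mr. B’s raw average gain is lower than Ms. C’s partly because he only got kids from the harder buckets; comparing within buckets is exactly what keeps that from being held against him, which is the whole point over a naive trend line.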
*This analysis does not take into account the issue of different intelligences. What I am talking about may well work only with cognitive intelligence, and we would have to find a different way to evaluate art or music teachers or coaches (and teachers of kids who can draw but not do math). We would certainly like to see not only whether a kid is progressing cognitively but whether he is progressing along any other dimension of talent as well. There are no doubt ways to evaluate progress in fine art, and maybe we could compare that with all the same life variables? I don’t know. Maybe you’d have to have quarterly art projects evaluated by some outside evaluator. And an abusive mom might hurt test scores but be a muse for vivid art. This is much harder than simply measuring cognitive growth, so I’m going to ignore it until somebody who knows something about it writes something I can read. (Mmmmmm, reading.)