Showing posts with label merit pay. Show all posts

Monday, August 23, 2010

More on measuring teachers in LA

I've been reading the LA Times articles on measuring teachers, and I wrote about it briefly. I've also written about another way to measure quality in teachers.

The problem is the thing you're measuring. If short-sighted superintendents or boards of education base salaries or advancement on test scores -- or, put another way, if teachers know that how much money they make depends on teaching kids to do well on a test -- more teachers than we would like will teach nothing that will not be on the test, and a few will cheat.

So how should you evaluate progress in kids? And progress at what? As I said in the post linked above, this system is limited to regular cognitive progress, thinking and learning stuff. Art, music, athletics, leadership, socialization, writing poetry or fiction -- none of these is captured in the test the State of California uses to measure elementary school kids. Measuring any of them is possible, but probably not in a system that can be reduced to numbers. "Your brush strokes are 10% better than last year, but your imagery is 5% more derivative."

Another problem is that the value added system, as I wrote before, "assumes that the student’s life is steady, that the only difference between this year and the last three is the teacher. If a kid’s score was down last year because his brother was shot, then another brother will be shot this year, and the next. If his dad wasn’t in jail two years ago, he won’t be this year. Sometimes that’s true."

I guess the answer here is that if you look at the scores of enough kids for every teacher, the bad luck averages out -- every teacher will have about the same number of kids with brothers who got shot.

Prediction: In sum, we have a good tool to measure one aspect of kids' progress. It is cheap and easy to use and to understand. Every other part of measuring progress is difficult and would be very expensive. Therefore, school districts will inevitably slide toward using it as the sole measure, to the detriment of kids and teachers.

Some school districts will use the opportunity for bad teachers to learn from good ones. Others will use it as a way to prune the payroll when they have to make budget cuts.

And I think our only hope is to have the test reflect at least all the cognitive things we want kids to learn.

Tuesday, August 17, 2010

Measuring teachers

Sunday's LA Times had another big article telling which 5th grade teachers in the LA Unified School District are good and bad, with names and pictures of a couple of the best and worst.

The Times got LAUSD to hand over 7 years of math and English test scores and gave them to a moonlighting researcher from the Rand Corporation, who applied a statistical approach called value-added analysis.

They looked at test scores as percentiles, to control for home-life issues. If a kid scored in the 30th percentile because his mom is single, poor, and uneducated, so he got to kindergarten knowing half as many words as the other kids, he's probably going to score in the 30th percentile next year, too, unless he has a particularly good or particularly bad teacher.

The Times found that in some teachers' classrooms in the same school (which controls for neighborhood and local events), kids consistently raised their percentiles, and in others, kids consistently lowered them. Kids got better or worse compared with the other kids. The difference is attributed to teacher quality.
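The percentile idea can be sketched in a few lines. This is my own illustrative reconstruction of the approach as described, not the Rand researcher's actual method, and the classroom numbers are made up.

```python
# A minimal sketch of percentile-based value-added analysis. Under the
# model, a student's percentile rank is expected to stay constant from
# year to year; the class's average shift is attributed to the teacher.

def teacher_effect(percentiles_before, percentiles_after):
    """Average change in percentile rank for one teacher's class."""
    changes = [after - before
               for before, after in zip(percentiles_before, percentiles_after)]
    return sum(changes) / len(changes)

# One hypothetical classroom: most kids moved up a few percentile points.
before = [30, 45, 52, 61, 38]
after = [34, 50, 55, 63, 41]
print(round(teacher_effect(before, after), 1))  # prints 3.4
```

A positive number means kids gained ground relative to their peers; a negative one means they lost it. The real analysis presumably does much more to handle missing years and small classes.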

This has good and bad aspects to it. I think the methodology is nicely done. It's similar to an idea I had, but much simpler and cheaper. I've written before why and how I thought we should measure teachers. And we already knew from a twin study and other research that some teachers are better than others at teaching reading.

This analysis I think accurately tells which teachers are good at having their kids score well on California's standardized 5th grade tests. To the extent that scoring well on those tests measures the quality of a kid's education, the analysis is valid to measure teachers. The Times' experts said they thought it should account for a fraction of the measurement. (I forget what fraction they said, and the newspaper is all the way in the living room, and I can't find it in the online version. If I find it, I'll insert it, delete this sentence, and pretend I knew it all along.)

But you know that the cheap, easy, incomplete measure will become the sole standard in practice. And teachers will have even more incentive to teach to the test.

One thing I wonder about: Perry Preschool's educational advantage was all in girls. Boys showed improvement in not receiving social services as adults, not being arrested, and owning their own homes at age 27, but income and educational advantages were pretty much all for girls. So I wonder how the LA Times analysis would look if you separated girls from boys. And if you looked at boys' arrest records after 10 years.

And so to work, with images of failing teachers and lost kids in my head.

Sunday, October 18, 2009

Why we should work toward K-12 merit pay

The short version is that merit pay is inevitable. It should be possible to design a system to judge teachers’ abilities, but it would be hard, and there is no good reason to believe the people who are actually in charge will do it right, so we should join in to keep them from screwing it up too badly.

There is a long article in this morning’s LA Times about the issue that caused the San Diego Unified superintendent, Terry Grier, to leave after about a year on the job, which was resistance to the teacher evaluation system he wanted to install, called “value added.”

Value added plots three years of a student’s test scores and runs the trend line forward as a student’s expected progress. Deviation from the trend line is "the teacher effect." If the actual next year’s score is above or below the expected level, the teacher has been effective or ineffective to that extent.
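As I understand the trend-line version, it amounts to fitting a line to the prior years' scores and extrapolating one year forward. Here is a hypothetical sketch of that; the fit method (ordinary least squares) and the scores are my own assumptions, not anything from the article.

```python
# A minimal sketch of trend-line value-added analysis: fit a straight
# line to a student's prior test scores, extrapolate one year ahead,
# and call the gap between actual and expected "the teacher effect."

def expected_next_score(scores):
    """Least-squares line through prior years, extrapolated one year."""
    n = len(scores)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(scores) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return slope * n + intercept  # predicted score for year n

def teacher_effect(prior_scores, actual_score):
    """Deviation from the trend line, attributed to the teacher."""
    return actual_score - expected_next_score(prior_scores)

# Three prior years trending upward; the actual score fell short of trend.
print(teacher_effect([300, 310, 320], 315))  # prints -15.0
```

The sketch makes the assumption I complain about below visible in the code: everything not on the trend line gets charged to the teacher.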

(UPDATE: I didn't realize when I wrote this that the trend line they ran forward simply expected the percentile to remain constant, which is a much better idea than what I thought they were doing. I've since decided that, while what I describe below will work, it is much harder and more expensive than the value-added system of percentiles. It shares one problem with it, though.)

I think this is virtually as simplistic as looking at a single test score. It assumes that the student’s life is steady, that the only difference between this year and the last three is the teacher. If a kid’s score was down last year because his brother was shot, then another brother will be shot this year, and the next. If his dad wasn’t in jail two years ago, he won’t be this year. Sometimes that’s true.