Education in America: L.A. Times Teacher Ratings

The Los Angeles Times has once again published “Value Added” ratings for Los Angeles Unified’s 3rd through 8th grade teachers, and the response is everything you might expect. Teachers are angry for an assortment of reasons, teacher haters in comments sections are sharpening the points on their pitchforks, and once again, we’re trying to put out a forest fire by sending one guy out to piss on it.

Grading the Teachers: Value Added Analysis

“Value-added assessment” is the term school reformers have adopted from the corporate world to address concerns that standardized testing under No Child Left Behind doesn’t effectively measure a student’s individual growth, but only creates a snapshot of where a student is performing compared to his or her chronological peers. Teachers have long rallied for an assessment measure based on a growth model, and value-added measures are not without benefit, as long as they’re used to guide and improve instruction. The L.A. Times, by posting individual value-added “scores” for teachers, is using data to intimidate instead of to help. While researching this article, I came across a slew of links to Mathematical Intimidation: Driven by the Data by mathematician John Ewing. It’s a fascinating piece, only about seven pages long, and worth a read for anyone battling data myths. Here’s a highlight:

Making policy decisions on the basis of value added models has the potential to do even more harm than browbeating teachers. If we decide whether alternative certification is better than regular certification, whether nationally board certified teachers are better than randomly selected ones, whether small schools are better than large, or whether a new curriculum is better than an old by using a flawed measure of success, we almost surely will end up making bad decisions that affect education for decades to come.

This is insidious because, while people debate the use of value-added scores to judge teachers, almost no one questions the use of test scores and value-added models to judge policy. Even people who point out the limitations of VAM appear to be willing to use “student achievement” in the form of value-added scores to make such judgments. People recognize that tests are an imperfect measure of educational success, but when sophisticated mathematics is applied, they believe the imperfections go away by some mathematical magic. But this is not magic. What really happens is that the mathematics is used to disguise the problems and intimidate people into ignoring them–a modern, mathematical version of the Emperor’s New Clothes.
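If you’ve never seen one, a value-added model is, at its core, just a regression: predict this year’s test score from last year’s, then credit each teacher with his or her students’ average leftover “growth.” Here’s a toy sketch in Python. Every number is invented, and real models pile on more controls, but the basic mechanics look like this:

```python
# A toy value-added model; every number here is invented.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 20, 25
true_effect = rng.normal(0, 3, n_teachers)       # hypothetical "real" teacher effects

teacher = np.repeat(np.arange(n_teachers), class_size)
prior = rng.normal(50, 10, teacher.size)         # last year's test score
current = 5 + 0.9 * prior + true_effect[teacher] + rng.normal(0, 8, teacher.size)

# Growth model: regress this year's score on last year's (ordinary least squares)
X = np.column_stack([np.ones_like(prior), prior])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ coef

# A teacher's "value added" is the average leftover growth of his or her students
vam = np.array([residual[teacher == t].mean() for t in range(n_teachers)])
print(np.corrcoef(vam, true_effect)[0, 1])       # correlated with truth, but noisily
```

Even in this best-case simulation, where nothing confounds the estimate, the recovered scores track the true effects only imperfectly; the noise never mathematically “goes away.”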

The National Education Policy Center conducted its own research, first testing the model behind the Times’ ratings for logically impossible results, and then running the same data through a different, more statistically rigorous value-added model.

First, they investigated whether, when using the L.A. Times model, a student’s teacher in the future would appear to have an effect on a student’s test performance in the past–something that is logically impossible and a sign that the model is flawed. This is analogous to using a value-added model to isolate the effect of an NBA coach on the performance of his players. At first glance we might not be surprised when the model indicates that Phil Jackson is an effective coach. But if the same model could also be used to indicate that Phil Jackson improved Kobe Bryant’s performance when he was in high school, we might wonder whether the model was truly able to separate Jackson’s ability as a coach from his good fortune at being surrounded by extremely talented players.

Briggs and Domingue found strong evidence of these illogical results when using the L.A. Times model, especially for reading outcomes: “Because our sensitivity test did show this sort of backwards prediction, we can conclude that estimates of teacher effectiveness in LAUSD are a biased proxy for teacher quality.”
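To make that test concrete, here’s a hedged sketch of the same falsification idea: simulate students who are sorted into next year’s classrooms by ability, then check whether the not-yet-met teacher “predicts” last year’s scores. The data are made up; only the logic mirrors Briggs and Domingue’s check.

```python
# Falsification sketch: can a teacher a student hasn't met yet "predict" the
# student's past score? Under random assignment, no; under ability tracking, yes.
import numpy as np

rng = np.random.default_rng(1)
n_teachers, class_size = 20, 25
n = n_teachers * class_size

ability = rng.normal(0, 10, n)                   # unobserved student ability
past_score = 50 + ability + rng.normal(0, 5, n)  # earned before next year's class

tracked = np.argsort(np.argsort(ability)) // class_size  # classes sorted by ability
random_ = rng.permutation(n) // class_size               # classes shuffled at random

def apparent_effect_spread(assignment):
    """Range of mean past scores across classrooms (should be near 0 if unbiased)."""
    means = [past_score[assignment == t].mean() for t in range(n_teachers)]
    return np.ptp(means)

print(apparent_effect_spread(tracked))   # large: future teacher "predicts" the past
print(apparent_effect_spread(random_))   # small: no backwards prediction
```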

Next, they developed an alternative, arguably stronger value-added model and compared the results to the L.A. Times model. In addition to the variables used in the Times’ approach, they controlled for (1) a longer history of a student’s test performance, (2) peer influence, and (3) school-level factors. If the L.A. Times model were perfectly accurate, there would be no difference in results between the two models. But this was not the case.

For reading outcomes, their findings included the following:

• More than half (53.6%) of the teachers had a different effectiveness rating under the alternative model.
• Among those who changed effectiveness ratings, some moved only moderately, but 8.1% of those teachers identified as “more” or “most” effective under the alternative model are identified as “less” or “least” effective in the L.A. Times model, and 12.6% of those identified as relatively ineffective under the alternative model are identified as effective by the L.A. Times model.

Source: Research Study Shows Times Teacher Ratings are Neither Reliable Nor Valid by Derek Briggs, University of Colorado at Boulder, and William Mathis, NEPC
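For the curious, here’s a rough sketch of how that kind of comparison works: rate the same teachers under two specifications, band them into quintiles, and count how many land in a different band. Both “models” below are simulated stand-ins, not the actual LAUSD data; only the method echoes the study.

```python
# Comparing two value-added specifications head to head; data are simulated,
# only the comparison method mirrors the NEPC study.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
true_effect = rng.normal(0, 1, n)
model_a = true_effect + rng.normal(0, 0.8, n)   # stand-in for a noisier model
model_b = true_effect + rng.normal(0, 0.5, n)   # stand-in for a richer model

def quintile(scores):
    """Band teachers 1 ("least effective") through 5 ("most effective")."""
    cuts = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
    return np.searchsorted(cuts, scores) + 1

qa, qb = quintile(model_a), quintile(model_b)
print((qa != qb).mean())               # share whose rating changes between models
print(((qa >= 4) & (qb <= 2)).mean())  # "effective" in one, "ineffective" in the other
```

Even with both simulated models estimating the exact same underlying effect, a large share of teachers change bands, and some flip from one end of the scale to the other, which is the same flavor of instability the NEPC found in the real data.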

In addition to the outright problems with the model the Times used to measure teacher value, and the problems inherent in treating data like magic, can you blame a teacher for not wanting a commenter like the fella below to know his or her name?

[Screenshot of a reader comment]

By Selena MacIntosh

Selena MacIntosh is the owner and editor of Persephone Magazine. She also fixes it when it breaks. She is fueled by Diet Coke, coffee with a lot of cream in it, and cat hair.

3 replies on “Education in America: L.A. Times Teacher Ratings”

This is bullshit. How about this? If we’re going to put our public educators through this sort of public performance analysis, let’s do it with ALL public service workers. Cops. Firefighters. That lady at the DMV. City council. Judges. Everyone. Include their names and a graph of their effectiveness based on a flawed model.

My school has embarked on something similar, if on a smaller scale: our so-called effectiveness scores and passing rates are only published to the rest of the faculty right now. My department meeting yesterday turned into a shouting match over it, with people finally saying out loud that this is nothing short of bullying and intimidation on the part of admin, that it’s not helpful, and that we know this kind of thing doesn’t work with students, so why would it work with us?

There was something beautiful about 25 righteously pissed off special educators standing up for themselves in one place.
