Monday, August 23, 2010

NEA, CTA, and UTLA jointly write a letter to the LA Times

August 20, 2010

Mr. Russ Stanton, Editor
Mr. Davan Maharaj, Managing Editor
Los Angeles Times
202 W. 1st St.
Los Angeles, CA 90012

Dear Mr. Stanton and Mr. Maharaj,

In a reckless and destructive move, which ignores the prevailing consensus that value-added measures are too unreliable and unstable to support valid conclusions about a teacher’s ability to teach to a standardized test, much less to teach students, the LA Times has decided to publish a database naming 6,000 teachers and purporting to rate their effectiveness. Reasonable people understand that a single test score does not define student learning and can never, by itself, measure the effectiveness of a teacher. We would think a reasonable and respectable institution such as the LA Times would understand this as well. So we are left to assume that the purpose of the publication was to sell newspapers. Otherwise, we would have to believe that you felt it was ethical to publicly label teachers as “effective” or “ineffective” based on data and a methodology that even your own paper admits are “controversial” and knows to be an incomplete and inaccurate measure of the quality of a teacher.

There is widespread consensus that the type of value-added methodology the LA Times is using generates highly unstable measures of a teacher’s effectiveness at teaching standardized test subjects. A report released just this past month by the U.S. Department of Education’s Institute of Education Sciences (“Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains,” July 2010) concluded that there is a 20 percent likelihood that a teacher’s rating under such a system will shift radically from year to year. Other researchers have concluded that the instability of such measures may be significantly higher, with up to 35 percent of teachers moving from the most effective group in one year to the least effective group in the next.[1] In other words, many of the teachers the LA Times has seen fit to publicly shame as “ineffective” under its value-added measure would be labeled “effective” if the measure were rerun for a different set of years. Or, to put the point bluntly, the LA Times’ rating of these teachers as “ineffective” is false.

The radical instability in value-added measures of a teacher’s ability to teach standardized test subjects reflects what we all know to be true: a student’s performance on a standardized test reflects multiple factors, many of which are entirely independent of the teacher who administered the test. A student’s performance reflects what they learned in prior years under other teachers, what they learned or retained over summer break, and their personal circumstances (did they come to school hungry the day of the test? do they come to school hungry every day? were they distracted that day by personal hardships, such as a divorce, a lost job, a lost home, or a lost family member?).

A teacher’s performance on a value-added measure also reflects the fact that student assignment is not random. Teachers do not all teach equally gifted or equally challenged students, and the teaching required to boost one student’s score on a standardized test by five points is not equivalent to the teaching required to boost another student’s score by the same increment. Student attendance throughout the year also plays a significant role. The analysis the LA Times has done accounts for only a few of these variables, and only in part. Indeed, the consensus in the literature is that such external factors are exceedingly difficult, if not impossible, to capture effectively in value-added measures of an individual teacher’s effectiveness. In addition, the California Standards Test (CST), on which the LA Times’ value-added measure is based, is not designed for that purpose. The CST is designed to measure mastery of standards at each grade level, not continuous student growth from year to year. Moreover, the CST is not even vertically aligned (its scores in successive grades are not placed on a common scale), which makes it nonsense for the LA Times to assume, as it has, that a student’s performance on the CST should be stable from one year to the next.

Equally to the point, even if the value-added measure used by the LA Times were accurate, the information it would yield is at best partial, measuring only a fraction of an elementary school teacher’s work. Our children go to elementary school to learn how to read and write and share and work together and express themselves and understand cause and effect and learn about our society and world.

To be sure, the LA Times could say that it recognizes all of the above and decided to publish its data anyway because it gives the public some glimpse, based on a single, notoriously inaccurate measure, into teachers’ ability to teach to standardized tests. That would still be irresponsible. But that is not what the LA Times has done. In the article the paper saw fit to run this past Sunday, teachers were labeled “ineffective” without any qualification and blamed for years of supposed failures without any sound factual basis for such public shaming. The LA Times’ decision to run the article and name those teachers damaged the reputations of both those teachers and their students.

The LA Times’ proposal to expand its public shaming to the 6,000 teachers in its “database” will vastly compound the damage. If the collective goal is to have highly qualified teachers in every classroom, how does exposing teachers to this public scrutiny entice anyone into the profession? The proposed publication of ratings of individual teachers is not supported by the underlying analysis, which is limited, according to the sparse technical information the LA Times has made available to date, to “identifying what factors improve student outcomes over time” and to providing insights into how a district may “align” its resources; it was not designed as a measure of the effectiveness of individual teachers. Even the researcher who did this work for the LA Times says that he made “no attempt to link the scrambled identifier [he used for the analysis] with teacher names.”

The LA Times is one of the largest newspapers in the country. Its readers expect and deserve better than the lapse in journalistic ethics and integrity that led to this simplistic approach to measuring teacher effectiveness and to the decision to publish the names of local teachers as if they were public officials.


As the elected leaders of NEA, CTA, and UTLA, we call on the LA Times to act responsibly and cease the publication of data that is materially false and misleading about the dedicated teachers who serve in our schools. Publishing such data will distract from, rather than advance, efforts to improve our public schools and the evaluation systems used in them. Instead, we invite the LA Times to engage in an honest discussion about what is really needed to provide a quality education to all students and to create a fair and comprehensive teacher evaluation system: one that uses multiple measures of teacher performance and student outcomes, gives teachers reliable and actionable feedback about their strengths and weaknesses, and offers quality professional development and intervention to help them improve their practice. Only through a meaningful and comprehensive system such as this can a teacher’s quality and effectiveness be accurately measured, teacher practice be improved, and all of our students be ensured great public schools.

Sincerely,

Dennis Van Roekel
President, National Education Association

David A. Sanchez
President, California Teachers Association

A.J. Duffy
President, United Teachers Los Angeles

[1] The current state of the research on value-added measures is captured by the National Academy of Sciences’ report, “Getting Value Out of Value-Added,” released earlier this year. The report, based on papers commissioned from sixteen leading scholars in the field and a review of over fifty studies, concluded that “the year-to-year stability of estimated teacher effects can be characterized as being quite low from one year to the next” (p. 46). For that reason, there is widespread consensus that value-added measures should not be used for high-stakes decisions regarding teachers and must be used with care and precision so as not to generate results that are far too unstable to be considered fair or reliable.