Politicians and policymakers who promote the use of standardized test score data to evaluate teachers know they are in trouble. Parents have begun to realize the cost of our testing obsession, one that goes far beyond the dollars and cents of exam preparation, administration and scoring. Today’s students are paying the price as enrichment activities give way to test prep and classroom instruction is replaced by testing periods.
But supporters of test-based accountability tell parents that it’s all for the best. They often point to a 2012 study, “Measuring the Impacts of Teachers,” which advanced the idea that students who have the “best” teachers have greater economic success later in life.
The report’s authors, economists Raj Chetty and John Friedman of Harvard and Jonah Rockoff of Columbia, concluded that students taught by teachers with higher value-added scores would enjoy greater earnings over their lifetimes.
Value-added proponents jumped at the opportunity to tie test scores to earning potential. Surely students and parents won’t mind all those high-stakes tests if it means dollars in their pockets down the road. It’s all worthwhile if it brings a higher standard of living in adulthood. Right?
Moshe Adler of the Department of Urban Planning at Columbia University is the most recent critic of the Chetty study. Adler found that flaws in the study’s methodology and rationale invalidate its main claims about the value of value-added.
“The only valid conclusion from this study is the opposite of what’s been reported and trumpeted: that teacher value-added scores have not been shown to have a long-term impact on income,” Adler wrote.
Still, New Jersey and other states forge onward, determined to implement value-added despite the growing evidence that it will not benefit students.
Perhaps this march toward bad policy is why the American Statistical Association (ASA) issued a “Statement on Using Value-Added Models (VAM) for Educational Assessment” last month. The seven-page document, which is surprisingly easy to digest, should be required reading for all education policymakers. The ASA notes that VAM does not necessarily predict long-range learning outcomes, nor does it provide information on how to improve teaching. The association also warns against the unintended consequences of using VAM for teacher evaluation.
But wait, there’s much more.
“Most estimates in the literature attribute between 1 percent and 14 percent of the total variability to teachers,” the ASA wrote in its position statement. “This is not saying that teachers have little effect on students, but that variation among teachers accounts for a small part of the variation in scores. The majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences.”
The ASA is dispassionate in its assessment of value-added measures of teacher effectiveness. All the more reason for policymakers to abandon an approach that is harming our public schools.
See for yourself
“Measuring the Impacts of Teachers,” Parts I and II, by Raj Chetty, John Friedman and Jonah Rockoff, can be found at the National Bureau of Economic Research website, www.nber.org/papers.
Read Dr. Bruce Baker’s critique of the Chetty study on his School Finance 101 blog found at http://schoolfinance101.wordpress.com.
Moshe Adler’s review of “Measuring the Impacts of Teachers” is available at http://nepc.colorado.edu/thinktank/review-measuring-impact-of-teachers.
You can review the American Statistical Association’s “Statement on Using Value-Added Models for Educational Assessment” at www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf.