Advocates of using student growth data to evaluate teachers all face the same obstacle: how to measure student achievement when standardized test results aren’t available. In the N.J. Department of Education’s (NJDOE) proposed evaluation system, more than 80 percent of teachers fall into this category. That’s why the department has called for the use of Student Growth Objectives. These objectives, called SGOs for short, may sound harmless, but teachers and administrators will once again find that measuring student growth accurately is anything but easy.
The NJDOE defines SGOs as “academic goals for groups of students that each teacher sets with his or her principal or supervisor at the start of the year.” These SGOs should be “ambitious but achievable” and can be based on “national standardized tests; statewide assessments; or locally-developed measures such as tests, portfolios, etc.”
The department’s first two examples are more of the same: tests. Given the shortage of time and the abundance of paperwork related to teacher evaluation, the temptation to use a ready-made assessment will be great. Our students will suffer the consequences as they endure more standardized testing than ever before. Meanwhile, local boards of education will spend more money than ever to buy those tests.
Even if educators choose to create their own tests for SGO purposes, other problems await. Sure, teachers make up tests all the time, but as noted researcher Dr. Howard Wainer explains, those tests usually have two purposes: to push the students into studying and to see if the course of future instruction needs to be adjusted.
“But when you add a further purpose – the formal evaluation of the teacher and the principal – the test score must carry a much heavier load,” says Wainer, author of Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies (Princeton University Press, 2011). “Even professionally developed tests cannot support this load without extensive pre-testing and revision,” something that takes a lot of time and a lot of money.
That leaves portfolios, another idea that Wainer believes “only sounds good if you say it fast.”
“When portfolios were used as part of a statewide testing program in Vermont about 15 years ago, it was a colossal failure,” he recalls. “It was unreliable, unpredictable and fantastically expensive,” and soon, state officials abandoned the program.
What is the lesson to be learned? “Some measurement methods that work acceptably well at a classroom level do not scale,” explains Wainer. “A folder of a student’s work produced for a parent-teacher conference illustrates what is going on and directs the discussion, but when the folder is reified as a ‘Portfolio Assessment,’ we have asked more from it than it can provide. Research shows that portfolios are well suited for one purpose but not the other. What would make New Jersey’s use different?”
Indeed, nothing will make New Jersey different. There is a mountain of evidence out there that tells us that student growth metrics aren’t ready for prime time when it comes to teacher evaluation. No state has done it well, and given the NJDOE’s current proposal, New Jersey won’t either.