Educators in the Lead

When Acting Commissioner of Education Christopher Cerf announced the creation of a teacher evaluation pilot program, education stakeholders welcomed the chance to review teacher evaluation systems in New Jersey but warned that an unrealistic timeline would undermine the project. Thanks to feedback from pilot districts, the N.J. Department of Education (NJDOE) has finally acknowledged that more time is needed if districts are going to effectively implement a new method of evaluating teachers.

“We’ve learned that the selection of a research-based teacher practice framework and the trainings associated with that framework require significant time,” Cerf wrote in a Feb. 8 letter to school superintendents. He added that the NJDOE wants to make use of the “lessons learned from this first year and provide all districts with the guidance they need….”

Those “lessons learned” have resulted in a significant change to the implementation timeline. The pilot program, which was supposed to be completed by this September, has been extended through the end of the 2012-13 school year. Every district in the state was supposed to select its evaluation framework and be ready to put it into practice in September; instead, districts now have two options for 2012-13.

Option 1: Up to 30 districts can participate in a new pilot of the evaluation system. The 11 districts that were part of the pilot program this year must reapply if they wish to participate again in 2012-13. Cerf expects the new pilot will “help us continue to refine our plans for a strong statewide system.” The NJDOE expects to release a notice of grant opportunity to interested districts sometime this month.

Option 2: Districts not participating in the pilot will be required to take a number of steps to prepare for full implementation in 2013-14. These districts will have the option to pilot the new evaluation system in some or all of their schools in 2012-13, but they must meet certain benchmarks along the way. These benchmarks include:

  • Formation of a District Advisory Committee by November 2012.
  • Adoption of an evaluation framework by January 2013.
  • Testing and refinement of observation frameworks and rubrics from January through August 2013.
  • Thorough training of teachers on the framework’s “Measure of Teacher Practice” by June 2013.
  • Thorough training of observers by August 2013.
  • Completion of progress reports in January and July 2013.

The NJDOE has recommended that districts form their District Advisory Committees well before selecting a framework, as engaging stakeholders and examining options in detail should take a minimum of eight to 12 weeks.

Selecting a teacher evaluation framework

Regardless of which option your district picks, every district must choose an evaluation framework by January (sooner for pilot districts). Districts may select among four state-identified evaluation framework providers, or choose another provider that offers a “research and standards-based” framework consistent with elements put forth by the NJDOE and that meets the district’s needs.

The four identified models are:

  • Charlotte Danielson’s Framework for Teaching.
  • Dr. Robert Marzano’s Causal Teacher Evaluation Model.
  • Mid-Continent Research for Education and Learning’s McREL Teacher Evaluation System.
  • James Stronge’s Teacher Evaluation System.

A description of Danielson’s Framework for Teaching was included in the October 2011 NJEA Review (“It’s your evaluation—collaborating to improve teacher practice,” Pages 24-27). The November 2011 issue provided information on the other three frameworks in “Comparing teacher evaluation models” on Pages 22-26. Like all the articles in our “Educators in the Lead” series, these items are available on njea.org.

NJEA believes that members should insist on being part of their district’s framework selection process since they will ultimately be evaluated under the new system. The Association has compiled the following list of questions for members and leaders to ask to ensure that the model chosen best meets the needs of their particular district.

  1. Which model has been employed successfully for the longest period?
  2. What has been the experience of pilot districts that chose each model?
  3. Which model most emphasizes providing increased support for teachers as part of a global goal of improving teaching and learning?
  4. Does the model use evaluation as part of a continuous conversation between the individual teacher and supervisor about teaching and learning, as well as influence collaborative approaches?
  5. Is comprehensive training provided for all teachers and supervisors? Who conducts it?
  6. How is professional development linked to teacher evaluation?
  7. Does the model emphasize narrative observations and teacher-evaluator interaction or rely primarily on an observation checklist?
  8. Does the model emphasize the goals for individual PD plans or prescribe the means by which the individual will acquire knowledge and skills?
  9. Does the model reflect collaborative professional development and assistance or rely exclusively on training via videos or books?
  10.  Does the model acknowledge a continuum of learning based on overall experience, as well as experience working in a specific assignment?
  11.  How does the model recommend ratings be determined and used?
  12.  How will technology (such as iPads/e-tablets, video) be used in the evaluation process? Is technology driving the process and procedures of the evaluation?
  13.  What does the model say about use of standardized test scores to evaluate teachers?
  14.  Will personnel records dealing with observations and evaluations be confidential?  
  15.  Does the model call for a pre- and post-conference with the evaluator?
  16.  What does the model recommend regarding informal observations/walkthroughs?
  17.  How does evaluation of supervisors connect to the teacher evaluation framework?
  18.  How does the model fit in with evaluation procedures as outlined in the local collective bargaining agreement?
  19.  Do teachers receive advance notice of an observation under the model?
  20.  What do the contracts with any vendors (whether framework or data) require?

Proposed regulations expected to be discussed soon

The purpose of the pilot program, dubbed Excellent Educators for New Jersey or EE4NJ, was to inform new regulations affecting teacher evaluations in New Jersey. Even though the pilot has not yet been completed, those regulations are expected to be introduced to the State Board of Education in either March or April.

Feedback from the 18 pilot districts (10 districts applied for the pilot; schools in seven other districts that received School Improvement Grants were required to participate, including Newark, which received a separate grant to implement the pilot districtwide) has been gathered by the Evaluation Pilot Advisory Committee (EPAC), which has met monthly since the fall. Initially, EPAC had 21 members who were appointed by Cerf, but the committee has grown to ensure representation of stakeholder groups as well as pilot districts. One teacher and one administrator from each pilot district are invited to attend EPAC meetings. In addition, NJEA Secretary-Treasurer Marie Blistan and five other NJEA members serve on EPAC.

Those regulations will also be proposed before an independent evaluation of the pilot occurs. The NJDOE recently announced that it has partnered with Rutgers University Graduate School of Education to provide this external evaluation. An interim report is expected from the evaluator by the end of this school year; a final report is due in December 2012. Data will be collected from the pilot districts via interviews, focus groups and online surveys.

A midyear progress report on EE4NJ was given to the State Board of Education at its February meeting. Data for this report was submitted by the pilot districts and compiled from field visits by NJDOE personnel. Peter Shulman, chief talent officer for the NJDOE, presented the update. He reported that all evaluators in the 11 pilot districts have been trained (a minimum of three days) on the new evaluation framework. Only five of those districts, however, had completed training teachers in the new model by mid-January.

The State Board also learned that informal observations using the new evaluation system have taken place in all of the pilot districts, but only six districts report that formal observations have begun. Principals in pilot districts have acknowledged that the distinction between formal and informal observations is not clear. In addition, they have had difficulty finding time to complete all of the evaluations, in part because the new system requires that tenured teachers be evaluated twice a year, as opposed to once a year under the current regulations. The NJDOE has promised to examine this situation, and may consider the use of outside evaluators.

Creating assessments and collecting data prove problematic

Another area of concern is the development of assessments for students in grades and subjects where standardized tests do not exist. (Experts estimate that this applies to approximately 70 percent of teachers.) The NJDOE reports that teachers and administrators are not properly trained to design rigorous and high-quality assessments. The department plans to explore ways to support districts in the development of these assessments. This is critical to the success of the new system, since up to 45 percent of a teacher’s evaluation will be based on “measures of student achievement.”
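To see what that weighting means in arithmetic terms, the sketch below combines a teacher-practice rating with a student-achievement measure into a single weighted score. It is an illustration only: the component names, the 1-4 scale, and even the 45 percent figure are placeholders, since the actual components and weights will be set by the forthcoming regulations.

```python
# Illustration only: the arithmetic of a weighted evaluation composite in which
# measures of student achievement count for up to 45 percent of the total.
# Component names, the 1-4 scale, and the weights are hypothetical.

def composite_rating(practice_score, achievement_score, achievement_weight=0.45):
    """Combine a teacher-practice rating and a student-achievement measure
    (both assumed to be on a 1-4 scale) into a single weighted rating."""
    practice_weight = 1.0 - achievement_weight
    return practice_weight * practice_score + achievement_weight * achievement_score

# A teacher rated 3.0 on practice and 4.0 on student achievement measures:
print(round(composite_rating(3.0, 4.0), 2))  # approximately 3.45
```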

Finally, the NJDOE has identified the needs of many districts with regard to data collection for student growth percentiles and NJ SMART. Pilot districts were required to start with two years of test scores (2009-10 and 2010-11) and use an approach known as the Colorado Growth Model (CGM) to monitor students’ progress on state tests. The CGM, which is still undergoing testing in Colorado, charts the progress of individual students and groups of students from year to year toward state standards, comparing each student’s progress with the progress of students in the state with a similar score history on the state’s standardized test in that subject area. The April NJEA Review will feature an article describing the Colorado Growth Model.
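For readers who want a concrete picture, the sketch below illustrates the basic idea behind a growth percentile: ranking a student's progress against peers who started from a similar score. It is only an illustration; the Colorado Growth Model itself relies on more sophisticated statistical methods, and all scores and field names below are invented.

```python
# Illustration only: rank a student's current score against peers who had a
# similar score history, the basic idea behind a student growth percentile.
# The Colorado Growth Model uses more sophisticated statistical methods;
# all scores and field names here are invented.

def growth_percentile(student, peers, history_tolerance=5):
    """Percentile rank of the student's current score among peers whose
    prior-year scores were within history_tolerance points of the student's."""
    similar = [p for p in peers
               if abs(p["prior_score"] - student["prior_score"]) <= history_tolerance]
    if not similar:
        return None  # no comparison group available
    outgrown = sum(1 for p in similar if p["current_score"] < student["current_score"])
    return round(100 * outgrown / len(similar))

# A student who scored 210 last year and 225 this year, compared with peers
# who also scored near 210 last year:
peers = [
    {"prior_score": 208, "current_score": 215},
    {"prior_score": 212, "current_score": 230},
    {"prior_score": 209, "current_score": 220},
    {"prior_score": 211, "current_score": 240},
]
student = {"prior_score": 210, "current_score": 225}
print(growth_percentile(student, peers))  # 50: grew more than half of similar peers
```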

The state’s NJ SMART electronic data storage, retrieval, and analysis system already includes student state standardized test results and other student information, but that data has not been tied to individual teachers or analyzed using the growth model. The NJDOE must still assign every teacher a unique identification number, and then must link student records to specific teachers. The state plans to have that link in place statewide by September 2012. Meanwhile, it will have to accelerate that process and make that information available to all districts.
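The linkage itself is conceptually straightforward, as the sketch below suggests: once every teacher has a unique identifier, each student test record can carry the identifier of that student's teacher of record, and results can then be grouped by teacher. The record layout, identifiers, and the use of a median are invented for illustration and do not reflect the actual NJ SMART data model.

```python
# Illustration only: linking student test records to teachers through a unique
# teacher identifier. Field names and IDs are invented; the actual NJ SMART
# data model is more complex.

from collections import defaultdict
from statistics import median

teachers = {"T-1001": "Teacher A", "T-1002": "Teacher B"}  # unique teacher IDs

# Each student test record carries the ID of the teacher of record.
student_records = [
    {"student_id": "S-01", "teacher_id": "T-1001", "growth_percentile": 62},
    {"student_id": "S-02", "teacher_id": "T-1001", "growth_percentile": 48},
    {"student_id": "S-03", "teacher_id": "T-1002", "growth_percentile": 71},
]

# Group student growth results by teacher.
by_teacher = defaultdict(list)
for record in student_records:
    by_teacher[record["teacher_id"]].append(record["growth_percentile"])

for teacher_id, percentiles in by_teacher.items():
    print(teachers[teacher_id], "median growth percentile:", median(percentiles))
```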

Principal evaluation pilot to be announced

The teacher evaluation pilot was born out of recommendations of the N.J. Educator Effectiveness Task Force Report, which was released one year ago. That task force also recommended a new system of evaluation for school administrators. The NJDOE is expected to soon announce that a principal evaluation pilot program will be implemented during the 2012-13 school year. A separate evaluation advisory committee will likely be created for this pilot. Meanwhile, the state expects that the teacher evaluation advisory committee (EPAC) will continue to meet throughout the extended pilot in 2012-13.

Full implementation still slated for September 2013

Although the pilot has been extended for the 2012-13 school year, full implementation of the new teacher evaluation system is still set for September 2013. It is unclear whether this rollout will immediately feature the use of teacher evaluations in high-stakes personnel decision making.

NJEA members are urged to stay abreast of developments in teacher evaluation through Association publications and njea.org.

More information on EE4NJ can be found on the NJDOE website at www.state.nj.us/education.