Testimony before the NJ State Board of Education

Educator Effectiveness Regulations

September 5, 2012

Good afternoon. I am Francine Pfeffer, associate director of NJEA Government Relations, testifying on behalf of the 200,000-member New Jersey Education Association. Thank you for the opportunity to provide comments on the Educator Effectiveness Evaluation System regulations that are currently being considered.

Just last month, the governor signed historic tenure reform legislation. NJEA was a key collaborator in crafting the final law. We look forward to working with the Department of Education and the State Board of Education in a similar collaborative manner as the process of developing code concerning evaluation unfolds.

Now entering the second year of a pilot program, New Jersey is testing new concepts in teacher evaluation. Last year, NJEA embraced the pilot. We encouraged our local associations to apply, worked with members in the pilot districts, and began an ongoing conversation with the Department of Education about issues as they arose. NJEA’s involvement has contributed to the success of the pilot program. We will continue our involvement this year. Some of our concerns with the proposed regulations stem directly from our experiences with the pilot; others reflect our general policies calling for fair, effective evaluation systems.

NJEA understands that this proposal is the first step in a new evaluation system. We believe that the new system should apply to all public schools. The regulations do not appear to apply equally to charter schools, even though they are public schools. All public school teachers, whether in a traditional or a charter school, should be subject to the same regulations when it comes to evaluation.

We are glad to see that the regulations guarantee the confidentiality of employee evaluation records, and that they respect the collective bargaining process. And NJEA believes that the District Evaluation Advisory Committee (DEAC) has helped with the implementation of the pilot and will help districts as they move to implement the new system. However, we believe the teaching staff members of the DEAC should be selected through the majority representative, to ensure that the teachers on the DEAC accurately represent the concerns of the staff and to ensure that district staff members are kept apprised of what is happening. In addition, PL 2012, c.26 states that the teacher representative on the school improvement panel needs to be chosen in conjunction with the majority representative. The code should include this as well.

The training requirements in the proposed regulations are not strong enough. Research we conducted with pilot districts found that districts which provided comprehensive training for both teachers and administrators, and which did not rush implementation of the new system, had more positive results. Teachers felt better supported, and their teaching was strengthened. In places where a new system was implemented haphazardly and without sufficient training, the new evaluation system failed to create the increased support for teaching and learning necessary to enhance skills.

The proposed regulations require that observers receive “comprehensive, rigorous training.” Those being observed should have the same level of training, and all districts should be held to the same standards in preparing for and applying the evaluation instrument. Without comprehensive training for all, some districts might rely on video training and train-the-trainer models with no quality control. This could result in a poorly implemented system.

The regulations address observation calibration and mention disqualification of observers. What would happen to the teacher observations conducted by a disqualified observer? How would it be fair to base employment decisions and tenure charges on those observations?

Definitions are an important part of these regulations. However, the definitions that use the term “rubric” are unclear and confusing, and they misuse the word. Districts should have an evaluation plan or program; a rubric is only one part of that program, which also includes a teacher practice evaluation tool, a component based on multiple objective measures of student learning, ratings for teachers, and so on.

Many of the proposed definitions make the evaluation system more complex than it needs to be, especially when it comes to ensuring the validity and inter-rater reliability of observations. Research-based observation models, if implemented correctly, should provide sufficient guidance regarding rubrics and how they are interpreted to accurately assess a teacher’s performance in the course of his or her duties. Many of the definitions are not necessary if all stakeholders receive comprehensive training, and regular reviews of observer consistency and objectivity in applying the program provisions are completed. A complete list of suggested changes to definitions is attached.

In addition, the code needs to be clear about the difference between the terms “observation” and “evaluation.”

NJEA appreciates that the Department includes the concepts of principal leadership and support for professional development and instruction as part of a principal’s evaluation, for these are critical to enhancing teacher practice. However, the regulations do not reflect support for both individual and teacher-led collaborative professional development, nor does the emphasis on “scoring” and converting scores to a rating category encourage school leaders to approach evaluation from a positive perspective.

While some have raised concerns about the ability of administrators to carry out the required number of evaluations, we again encourage the board and department to allow districts to bargain an alternate approach to evaluation with the local association for tenured teachers who are rated “effective” or “highly effective” in two consecutive years. Under this process, which we shared with you in May, teachers could choose an alternative collaborative experience for the following two years, such as collegial coaching, action research, or engaging in the National Board for Professional Teaching Standards certification process. At the end of two years, teachers would return to the standard evaluation process for another two years. This would relieve principals of conducting some observations of successful teachers, yet help to continue strengthening teaching and learning.

Finally, NJEA remains concerned about the extent to which student test scores will be used in teacher evaluation. Although the current proposal does not include this element, it has been introduced in the pilot districts. The formula the Department is using in pilot districts emphasizes multiple standardized assessments, and this year the pilots will apply different rules to different subjects and grade levels. This does not promote collaboration among educators. Also, many factors affect student learning. Research shows that placing undue emphasis on standardized assessment results is unfair to both teachers and students, particularly when the tests are not designed for that purpose and teachers do not receive useful, timely data that could inform their work. This emphasis causes great stress for both students and staff.

NJEA looks forward to collaborating with the Department to create an evaluation system that identifies and shares good teaching and that helps struggling teachers improve their performance while allowing for the fair dismissal of those who cannot.

Thank you for your time.