Educators in the Lead

As the 2011-12 school year comes to a close, NJEA is turning to teachers in districts participating in the first year of the state’s teacher evaluation pilot for their reactions to the program. NJEA conducted focus groups and interviews with teachers in the 10 pilot districts and in eight districts with school improvement grants (SIG) that were required to participate. Focus groups were conducted in March and April.

Overall, teachers in the pilot districts were cautiously optimistic about the pilot teacher evaluation program. NJEA members were asked for their reactions to training, observations, and the professional development that emanated from their evaluation. Participants were also given the opportunity to comment on any aspect of the pilot program, including their concerns and expectations about the implementation of a new evaluation system.

Training

Members viewed training as a critical component of the pilot – an absolute necessity for success. Teachers who reported that training went well identified several essential factors:

  • Local association representatives were involved in the local evaluation committee, participated in training for principals and teachers, and monitored closely the impact of the training on working conditions and compliance with the negotiated agreement.
  • Training was most successful when the vendor responded quickly to concerns about training and when the same person delivered training for teachers and administrators.
  • Administrators carefully built time for training into the school year, rather than finding time now and then to complete the training in a haphazard way.
  • Someone in the district (or on the local committee) had responsibility for making sure that training was completed as required, and that no one was observed before completing it.
  • Turnkey trainers were given appropriate support, materials and time to prepare for their roles.
  • The district had already moved to a new evaluation system within the last few years (most often Danielson). In these instances, training built on prior knowledge of and experience with the chosen model.

“We insisted that everyone be trained before anyone was evaluated,” said one local leader. “This is a high risk activity and should be done right.”

Teachers in other districts reported problems with training. Some of the problems experienced were:

  • Many teachers felt that the timeline for training was completely unrealistic. Districts struggled to find time to complete the training mandated for teachers and administrators. While the N.J. Department of Education responded to these concerns by extending the timelines, the lack of timely training generated concerns about the implementation of a new evaluation system.
  • Several teachers (particularly in districts with SIG schools) reported that they never received the amount of training mandated by the pilot (three full days for administrators, two full days for teachers). In some instances, teachers received only minimal training, just a few hours.
  • Some teachers expressed concerns about the quality of the training delivered by the model provider. In some cases, members said that the turnkey trainers were better than the model providers.
  • Teachers had mixed feelings about the use of videos for training. In some instances, videos were embedded in workshops conducted by experts in the model or turnkey trainers. Teachers were provided the opportunity to talk about the video, and to ask questions about the frameworks. In other instances, the videos were used to replace traditional face-to-face training. Those teachers felt that they didn’t have a solid grasp of the frameworks.

Finally, many teachers made a distinction between training for a clear understanding of the framework and professional development in strategies that would help them earn a rating of effective or highly effective under the framework. Most of the training provided in Year One dealt with understanding the framework, not with professional development designed to improve teaching strategies.

Observations

The pilot called for all teachers to be evaluated both informally and formally; for the purposes of this pilot, only the formal observations would count toward a teacher’s summative evaluation. Most teachers had experienced only informal observations prior to the focus groups, as districts were just beginning formal observations in the spring.

In some cases, informal observations were “walk-throughs” or brief observations by administrators. In other instances, the informal observations were a full class period.  For the most part, teachers believed that administrators were still learning how to use the framework.

Teachers who were satisfied with the informal observations in their district reported that they developed a good relationship with their evaluator and valued their input. For the most part, these teachers had a preconference with their administrator to discuss the class profile. They also were prepared for the evaluator’s visit, and were given the opportunity to discuss the lesson face to face with their evaluator soon after the observation. Both the evaluator and the teacher had a clear understanding of the frameworks, and believed that the evaluator was willing to listen to the teacher’s concerns about their observation and ratings. These teachers felt that the opportunity for constructive dialogue before and after the observation created an atmosphere of trust and was the most beneficial aspect of the evaluation. They felt challenged to develop new skills and were confident that the experience would make them better educators. Generally speaking, these teachers were the most optimistic about the pilot program.

“I received strong feedback,” one member reported. “My administrator took the time to sit with me and discuss my observation. I think this will make me a better teacher.”

Teachers who had concerns about the observation felt that the administrator was not well versed in the frameworks, that he or she had not received adequate training, or that the frameworks didn’t reflect the reality of their assignment (for example, special education teachers). Others felt that the use of checklists meant less communication with their evaluators. These teachers were less optimistic about the new framework.

“The checklist doesn’t encourage discussion,” lamented one teacher. “I expected help with instruction, but the checklist didn’t provide the help I was looking for.”

Many teachers raised concerns about the inter-rater reliability of evaluators. In other words, would two evaluators observing the same lesson assign the same ratings to the teacher? Many saw the inconsistency in rating as a byproduct of inadequate training, while others believed that any evaluation system would have some degree of subjectivity, regardless of the training.

As would be expected, teachers who had concerns about the informal observations experienced little direct contact with their evaluators. The evaluator appeared with a checklist, conducted the observation, and emailed the results and recommendations to the teacher, with no opportunity to discuss the observation, the lesson, or the rating.

As the pilot required all districts to develop a database for observation and evaluation ratings, many teachers raised concerns about the confidentiality of employee records. Informal observations were included in the electronic database, even though they were not to be included in the summative evaluation. This left many to wonder whether informal observations would nonetheless be considered during the summative evaluation.

Professional development

Most teachers reported that they are still waiting to see how professional development will change in their district based on the results of their personal evaluations. Some teachers were told to watch a video of best practices provided by the vendor, to read materials from the vendor, or to visit the classroom of a colleague. For the most part, however, teachers in the pilot districts are unsure how professional development will change with the adoption of a new teacher evaluation system. They did fear, though, that PD might be treated as just another checklist.

Concerns and suggestions

Teachers in the focus groups were also given the opportunity to indicate if they had any other concerns about the pilot or suggestions to improve the implementation of a new evaluation system.

The majority of teachers expressed concerns about the use of standardized test scores and other measures of pupil progress to assess teacher effectiveness. While this aspect of the new evaluation system had not been implemented when the focus groups were conducted, it looms large in the minds of teachers. Many believe that standardized test scores are only a snapshot of a student’s abilities, even when measured as growth over time.  The vast majority felt that test scores do not take into account the unique needs of students with special needs, class size, poverty, lack of resources in the classroom, and other issues over which teachers have no control.

Teachers in grades 5-8 were especially concerned about the use of the state’s test to measure their effectiveness. Teachers in non-tested areas are concerned about the possible use of inappropriate or inaccurate measures of pupil progress – or about the lack of any reliable or valid tests to measure pupil progress.

While teachers on local evaluation committees had begun work on selecting or designing other measures of pupil progress (in addition to state tests), this work is clearly in the beginning stages.

“The entire testing piece has me worried,” explained one educator. “How will teachers of non-tested subjects be judged? How about special education teachers? I am not confident that this will be done fairly or accurately.”

A participant in another district added: “Students have become pieces of data. We need to be concerned about all aspects of a student’s growth and development, not just test scores.”

Other concerns about the new evaluation system are:

  • Several teachers believe that the new evaluation system is driven by a political agenda rather than by a genuine desire for school improvement. The political overtones in the state are having a negative impact on the implementation of the program and on educators’ support for changing the system.
  • Some teachers believe that the initiative is designed so that a district could fire teachers at will (a “gotcha”) without due process.
  • Many questioned whether a new system would encourage teachers to compete with one another to receive a higher overall score, rather than collaborate for the betterment of the students.  
  • Others worried that they would be evaluated by a process or framework that did not reflect their subject area or grade level. This feeling was particularly strong among special education teachers and teachers with special assignments, such as instructional coaches.
  • What happens if the district switches administrators? Will the district change frameworks and have to start over with training? Will there be funds and support to do this?
  • The evaluation plan calls for the option of using student surveys to assess teacher effectiveness. Teachers in districts considering this option are concerned about the validity of student surveys and how they will be used.
  • There are too many items in the frameworks; an evaluator can’t possibly see all of them in a single observation.

Time—or the lack of it—was a big concern. Members commented on the additional paperwork their frameworks required. Others wondered how administrators could possibly complete all of the observations required in the pilot. When would a principal find the time to actually be an instructional leader?

And then there’s the issue of the length of the pilot itself. “We have been working with the Danielson model for several years,” said one member. “It takes at least three years to fully implement a new evaluation model. Why does the state think this can be done in one or two years?”

What is it all about?

Teachers want an evaluation system that is fair, valid and reliable – and one that provides constructive feedback and professional development to improve teaching skills. They hope the new evaluation system will meet these objectives through a collaborative effort that provides high quality training and observations.  Teachers’ biggest concern is the use of standardized test scores and other measures of pupil progress to determine teacher effectiveness. Teachers know all too well that many factors beyond what they do in the classroom have an impact on student learning and pupil progress.

“Are they going to hold students and parents accountable too?” one teacher asked. “You can’t just change one thing and expect to have better outcomes.”

Districts that had already implemented one of the frameworks prior to the pilot teacher evaluation program used the funding to expand training, enhance their process and criteria, and build on the adoption of their chosen framework. These districts had a greater likelihood of completing the requirements of the pilot. Similarly, when local associations actively participated in the pilot, teachers felt more comfortable with the implementation, knowing that their working conditions and contractual rights would be protected, and that their association would stand up for them if the evaluation was unfair. Teachers in districts that communicated early and often about the pilot were clearly more satisfied with the experience than teachers in districts that did not.

More than 30 percent of districts in New Jersey were already using the Danielson model prior to the pilot. Districts that were starting from scratch clearly experienced more challenges than those already using an approved framework, and they have far to go before a new evaluation system will be viewed as fair, valid and reliable.

The focus groups provided valuable insights for stakeholders as all districts move toward selecting and implementing a new teacher evaluation framework during the 2012-13 school year.

Dr. Rosemary Knab is an associate director of research at NJEA.