Educators in the Lead

A voluntary pilot program will bring a new system of teacher evaluation to nine districts this year. Another 19 struggling schools – and the districts that contain them – are being required to change their teacher evaluation systems in order to receive federal funds. And other districts are simply embracing new approaches in preparation for a statewide overhaul.

When will all this affect YOU? The state hopes to roll out the new program next September and use it in making personnel decisions during the 2013-2014 school year.

If you are a teaching staff member in New Jersey, how you are evaluated and the criteria used in that review could change radically depending on the results of teacher evaluation programs being tested in several public school districts this year.

After more than three decades of state-mandated teacher evaluations, New Jersey is taking big steps toward reformulating these programs, including moving toward more prescriptive formulas that it believes will better identify effective teachers.

This year, armed with state grant money, nine districts are piloting an evaluation approach whipped up by the governor’s task force on teacher evaluation. The N.J. Department of Education (NJDOE) chose the nine from among 31 applicants statewide for the pilot – dubbed Excellent Educators for New Jersey, or EE4NJ. (The nine districts were not announced by press time; visit njea.org for up-to-date information.)

But those nine aren’t the only ones testing the new recipe. Simultaneously, the state is requiring 19 struggling New Jersey schools that receive federal School Improvement Grants (SIG) to implement the task force’s evaluation proposals as a condition of receiving the federal dollars – and their districts must submit plans to do so districtwide as well. So a program billed as a “pilot” in some districts is taking on bigger connotations in others.

Throw into that mix a number of other school districts that NJDOE staffers say want to test the approaches, even if they aren’t part of the official pilot.

How the programs will be monitored still isn’t clear. Nor is it certain whether the state will be equipped to provide teacher-specific student achievement data for districts beyond the nine pilots, or whether those districts will be able to supply the two previous years of teacher-student rosters. But we’re getting ahead of ourselves here.

The NJDOE will hire an external evaluator to assess the pilot program and districts’ experiences in implementing the teacher evaluation system.

The external evaluation, the pilot guidance indicates, will be used to “help improve the system framework, develop assessments, develop the appropriate supports for principals and teachers, and inform a statewide implementation of the evaluation system.”

New Jersey isn’t alone in its quest to reconfigure teacher evaluation. Across the country, states are taking another look at how they evaluate teachers. Many of these efforts were spurred by attempts to garner federal Race to the Top (RTTT) funds, as well as by other federal funding programs calling for beefed-up evaluation systems tied to student achievement.

When N.J. failed to win RTTT funds in the second round last year, Gov. Chris Christie vowed to plow ahead with his proposals, including those that affect teacher and principal evaluation.

Student growth + teacher practice = criteria

The base evaluation formula proposed by the N.J. Educator Effectiveness Task Force in March originates from the governor’s Executive Order establishing the task force last fall. The order directed that at least 50 percent of the evaluation framework be based on “identified measures of student achievement,” with the remainder based on “demonstrated practices of effective teachers and leaders.”

Therefore, every pilot and SIG-funded school will institute a student-learning-based evaluation formula that relies:

  • 50 percent on student achievement factors, primarily on improvements in student standardized test scores, and
  • 50 percent on effective teacher practice.

Each of those components is broken down even further.

Measures of student achievement (labeled “outputs”) will be based:

  • 35–45 percent on the teacher’s individual students’ growth as shown through state-approved assessments or performance-based evaluations
  • 5 percent on a state-approved schoolwide performance measure
  • 0–10 percent, at the district’s option, on additional state-approved student performance measures.

This is the most controversial part of the new program, since initially it creates a tiered evaluation system – with teachers of grades and content areas subject to state tests experiencing the full force of the formula. These include math and language arts in grades 4-8, where both pre- and post-test scores on the state assessments are available.

State-tested grades/subjects

Districts will start with two years of test scores – 2009-10 and 2010-11 – and use an approach known as the Colorado Growth Model (CGM) to chart students’ progress on state tests. The CGM – still undergoing testing in Colorado – charts the progress of individual students and groups of students from year to year toward state standards, comparing each student’s progress with that of students statewide who have a similar score history on the state’s standardized test in that subject area.
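
To illustrate the idea – not the state’s actual computation, which rests on a more sophisticated quantile-regression approach over multiple years – here is a minimal sketch in Python. It assumes each record holds a prior-year and current-year score (the data and layout are hypothetical), forms a peer group of students with similar prior scores, and reports each student’s current score as a percentile within that group.

  # Hypothetical records: (student_id, prior-year score, current-year score).
  records = [
      ("s1", 210, 225), ("s2", 212, 218), ("s3", 209, 240),
      ("s4", 250, 255), ("s5", 248, 262), ("s6", 251, 249),
  ]

  def growth_percentiles(records, band=5):
      """Rough stand-in for a student growth percentile: rank each
      student's current score among peers whose prior-year score fell
      within +/- `band` points of his or her own."""
      results = {}
      for sid, prior, current in records:
          peers = [c for _, p, c in records if abs(p - prior) <= band]
          below = sum(1 for c in peers if c < current)
          results[sid] = round(100 * below / len(peers))
      return results

  print(growth_percentiles(records))
  # s3 ranks high among peers who started near the same prior score.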

Other subjects/grades

For those grades and subjects not tested, the pilot districts will be expected to work with the NJDOE to identify existing assessments or develop new ones that could be used to generate growth scores for as many teachers as possible. Those assessments may include:
  • Performance tasks for subjects such as art, music, theater, gym, vocational-technical
  • Standards-based commercial or curriculum-based assessments
  • Nationally normed tests, such as AP, IB, SAT
  • Student Learning Objectives, in which teachers set specific student goals, then pre-test and post-test to see whether students have met them successfully
  • “Progress monitoring” evaluations for special education teachers.

Upgrading data systems

The state’s “NJ Smart” electronic data storage, retrieval, and analysis system already includes student state standardized test results and other student information, but that information is neither tied to individual teachers nor processed through the Growth Model. The NJDOE must still assign every teacher a unique identification number, then link student records to specific teachers. The state plans to have that link in place statewide by September 2012; meanwhile, it will have to accelerate the process for the pilot districts.
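
In database terms, the missing piece is a join: once every teacher carries a unique ID and every course roster records that ID, a student’s test record can be traced back to a teacher of record. A toy Python illustration of that linkage follows – every name, ID, and field here is hypothetical, and nothing about NJ Smart’s actual schema is implied.

  # Hypothetical teacher IDs and roster rows (teacher ID, student ID, year).
  teachers = {"T-001": "Rivera", "T-002": "Chen"}
  roster = [
      ("T-001", "s1", "2010-11"), ("T-001", "s2", "2010-11"),
      ("T-002", "s3", "2010-11"),
  ]
  scores = {"s1": 225, "s2": 218, "s3": 240}  # hypothetical test results

  # Walk the roster, attaching each student's score to the teacher of record.
  by_teacher = {}
  for teacher_id, student_id, year in roster:
      by_teacher.setdefault(teacher_id, []).append(scores[student_id])

  for teacher_id, student_scores in by_teacher.items():
      print(teachers[teacher_id], student_scores)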

Measures of teacher practice (termed “inputs”) will be based:

  • 25–47.5 percent on a state-approved, research-based, standards-driven evaluation framework that assesses teacher on-the-job performance
  • 2.5–25 percent on one additional NJDOE-approved tool to assess teacher practice, such as a documentation log/portfolio review, student survey, or assessment of teachers’ pedagogical knowledge.
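
Put together, the “outputs” and “inputs” form a weighted sum whose weights must total 100 percent. Here is a minimal sketch of that arithmetic in Python, assuming one district’s hypothetical weight choices and component ratings on a common 0-100 scale – the actual rating scales and scoring rubrics will come from the state and the framework vendors:

  # Hypothetical weight choices within the state's allowed ranges (sum = 100).
  weights = {
      "individual_student_growth": 45.0,  # 35-45 percent allowed
      "schoolwide_performance":     5.0,  # fixed at 5 percent
      "district_optional_measure":  0.0,  # 0-10 percent, district's option
      "practice_framework":        47.5,  # 25-47.5 percent
      "additional_practice_tool":   2.5,  # 2.5-25 percent
  }
  assert sum(weights.values()) == 100.0

  # Hypothetical component ratings, each normalized to a 0-100 scale.
  ratings = {
      "individual_student_growth": 62,
      "schoolwide_performance":    70,
      "district_optional_measure":  0,
      "practice_framework":        85,
      "additional_practice_tool":  90,
  }

  # Weighted composite, also on a 0-100 scale.
  composite = sum(weights[k] / 100 * ratings[k] for k in weights)
  print(round(composite, 1))  # 74.0 with the numbers above

How such a composite would map onto the four rating labels described below remains to be spelled out.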

How that will work

The pilot and SIG districts will choose an evaluation framework model, train teachers and the supervisors who will evaluate them in the model, and use the model to implement the state criteria. Districts will have limited flexibility to make adjustments within the broad-based criteria, subject to NJDOE approval.

Standards-based models

The pilots may choose among four state-identified evaluation framework providers, or choose another provider that offers a “research and standards-based” framework consistent with NJDOE identified elements and that meets the district’s needs. The pilots will contract with the evaluation framework vendors to provide materials, training, and resources.

The four state-identified, nationally known models are:

  • Charlotte Danielson’s Framework for Teaching
  • Dr. Robert Marzano’s Causal Teacher Evaluation Model
  • Mid-Continent Research for Education and Learning’s McREL Teacher Evaluation System
  • TAP System for Teacher and Student Advancement (formerly Teacher Advancement Program), created by Lowell Milken of the Milken Family Foundation.

The evaluation framework must be based on the recently revised national core teaching standards developed by the Interstate Teacher Assessment and Support Consortium (InTASC). Additional information about these models and the InTASC standards will be included in an upcoming issue of the Review.

Reviews and ratings

Formal observations

Nontenured teachers will continue to be formally observed and evaluated a minimum of three times a year. Tenured teachers will be required to be formally observed and evaluated a minimum of two times a year, instead of once a year as the state regulations now provide.

All formal observations must last one instructional period or a minimum of 40 minutes. (Current rules require one instructional period in secondary schools and one complete subject lesson in elementary grades.) Pre- and post-conference input and feedback will be required.

Informal observations

In addition, a minimum of two informal observations – without a pre- or post-conference – will be required for all teachers. These informal observations could include short classroom visits for a specific purpose, “power walk-throughs,” or a review of “teacher artifacts.” They can last a full instructional period or less, and can be agreed upon prior to the visit or be unannounced. The results will be documented in a written observation report with feedback.

Summative evaluation

As currently required, one summative evaluation must produce a mutually developed teacher professional development plan.

Self-reflection

In a newly prescribed element, each teacher will have to conduct a self-assessment of his/her own practice at least once a year, compare it with the evaluator’s assessments, and then, if the two don’t match, realign his/her personal vision of effective practice.

Supportive environment and professional development

Districts and evaluators will be required to promote a supportive, positive environment and culture, including supportive and accurate feedback on teacher practice; professional learning experiences to improve teacher practice; and follow-up support for teachers to improve professional practice and student achievement.

Districts will be responsible for providing access for teachers and evaluators to any resources and materials needed to support the teacher practice evaluation framework. It is expected that “district leadership will support stable school and district learning environments focused on student achievement.”

Ratings

Based on the evaluation components, teachers will be rated “highly effective,” “effective,” “partially effective,” or “ineffective.”

Other pilot requirements

The pilot districts also must:

Establish a District Stakeholder Advisory Committee – to oversee the implementation of the teacher effectiveness evaluation system. Membership must include representation from:

  • Teachers from each school level (e.g., elementary, middle, high school)
  • Central office administrators overseeing the teacher evaluation process
  • Administrators conducting evaluations
  • The local school board
  • A data coordinator who will be responsible for managing the evaluation system’s student data components.

The superintendent may extend the committee to include representatives of other groups, such as counselors, child study team members, instructional coaches, and new teacher mentors.

One committee member will be designated as the liaison with the NJDOE. That person will meet at least four times with an NJDOE representative to discuss implementation, successes, obstacles, and resources, and to solve problems.

Conduct training – A minimum of three days for administrators/evaluators (and potentially for coaches and mentors); two days for teachers.

Use technology – The pilot districts must collect teacher practice data using an electronic or Internet-based “performance management system” so the data can be electronically stored, analyzed, and reported.

Collaborate with non-public schools – Non-public school teachers and administrators may participate in professional development offered through the pilot, learn about the teacher evaluation process, and adopt a system as required under the pilot.

Be prepared to fund – The grant is based on a sliding scale – from $49,000 to $206,000 – depending on how many teachers work in the schools, including teachers from participating non-public schools. School districts with more than 600 teachers will select which schools will conduct the pilot. Any costs exceeding the grant will be paid by the school district.

Where does NJEA stand?

NJEA has long supported high standards for students and teachers. The initial teacher evaluation regulations were adopted more than 35 years ago – and need to be revised to incorporate new research on teacher professional development, contemporary thinking on teaching standards, and improved and continuous training for teachers and evaluators, and to recognize the important roles that teacher collaboration and a more supportive teaching and learning environment play in school improvement.

But just because something needs improvement doesn’t mean that every new idea is a good one.

Educators and their local associations are important stakeholders.

Local associations have long collaborated with school districts to refine their evaluation systems and have made remarkable progress in embracing new models of evaluation. Even the NJDOE recognizes that more than 30 percent of districts have already adopted some form of the Danielson model. Any new evaluation system must be designed with the collaboration of educators and representatives of the local association.

All state-level stakeholders must be represented on the state-level advisory committee.

The representatives selected by the statewide education groups that speak for the different segments of the educational community should be accepted, and teachers should make up the majority of the advisory committee dealing with teacher evaluation.

NJEA agrees that a pilot program is the appropriate way to evaluate the effectiveness of a new evaluation system.

However, given the complexity of the models, the lack of clearly defined multiple measures in all grades and subjects to determine student outputs, and the impact of any evaluation system on personnel decisions, one year (see timeline below) is simply not enough time to field test new models to assess professional practice. The pilot should be extended to ensure the program implemented statewide is credible, workable, and reliable.

The Task Force recommendations place too heavy an emphasis on standardized tests to assess teacher competency.

A blue ribbon panel of national testing experts and researchers met at ETS in January 2011 to discuss the serious limitations of using standardized tests or value-added models to evaluate teachers (see the March 2011 NJEA Review – “Research vs. rhetoric”).

NJEA members believe that any teacher evaluation system must be “fair”:

  • Providing each teacher the opportunity to grow as a professional and to improve his/her teaching and students’ learning
  • Relying on multiple measures and various indicators of students’ progress, not solely or primarily standardized tests
  • Taking into account the profile of the students in the teacher’s classroom
  • Recognizing that new curriculums and programs need additional time to assure teacher and student success
  • Being conducted by properly trained administrators who have the appropriate experience to evaluate the teacher’s performance.

It is appropriate to assess teacher practice on a continuum, with consideration both for overall experience and for experience in the content area/grade level/position.

Many programs that use “checklists” tend to minimize teacher/student and teacher/principal interaction and to discount a teacher’s experience level. These models must not over-rely on superficial checklists and forms that translate into easy “point” systems and lose the big picture of teaching.

Requirements of the pilot must be consistent with provisions in contracts.

Evaluation criteria are adopted by local school boards within broad parameters established by state regulation. But procedures regarding pre- and post-conferences, notification of observations, forms employed, and other procedural issues not specified or required by statute have been subject to collective bargaining.

Some local associations and school districts already have negotiated more observations. And additional evaluations typically have been allowed in specific instances when needed to determine whether documented deficiencies have been addressed.

NJEA believes that even within the pilots, bargained contracts must still be honored. Collaboration, respect, and trust will remain key elements to the success of any evaluation program.  

Districts participating in the pilot should not use the results of evaluations to make high-stakes personnel decisions.

Since the purpose of a pilot is to field test a new evaluation system for further refinement, using the pilot system to make personnel decisions is not only unfair but risky. Districts should wait until the evaluation system has been fully implemented statewide, the data systems are reliable, and all educators are trained in the standards and evaluation systems before using the new evaluation system to make high-stakes decisions.

The pilot evaluation system should place additional emphasis on meeting the professional development needs of individual teachers and on supporting schoolwide goals as well.

NJEA believes that teacher evaluation’s primary purpose is to improve teaching and learning. The success of the pilot models should be measured by their ability to identify the needs of teachers and schools and to drive high-quality professional development experiences. Again, this process should be collaborative, with teachers continuing to take a leadership role.

NJEA believes that teachers are the most valuable resource for helping struggling colleagues, and every district should establish a Professional Support Team.

Many models of collegial support already exist in districts – including mentoring, coaching, collaborative professional learning communities, and classroom demonstrations of best practices. A professional support team should be established in each district to provide a “safe haven” for struggling teachers. The program should be voluntary for all teachers, both novice and experienced, who wish to enhance their knowledge and skills.  But supervisors also could refer struggling staff to the program for assistance from colleagues. Districts and local associations should work together to create a program during the pilot year to assure appropriate selection procedures, release time, and/or stipends for members of the professional support team.

Certain provisions in the current regulations should continue, including:

  • Review of the teaching staff member’s performance based on the job description
  • Review of the teaching staff member’s progress toward meeting the objectives of the individual professional development plan developed at the previous annual conference
  • Review of available indicators of student progress and growth toward the program objectives, and
  • Review of the annual written performance report.

The written report now must reflect:

  • Performance areas of strength
  • Performance areas needing improvement based upon the job description
  • An individual professional development plan developed by the supervisor and the teaching staff member, and
  • A summary of indicators of student progress and growth and how these indicators relate to the effectiveness of the overall program and the performance of the individual teaching staff member.

The evaluation must reflect the support that the district will provide in helping the teacher meet the individual professional development plan.

In addition, tenured staff have 10 days to enter into the record performance data that have not been included by the supervisor.

These components and others reflecting school employee rights should not be neglected, either in the pilots or in any future transition to a new system.

NJEA to monitor pilots, aid locals

NJEA will work closely with local associations in the pilot districts and other districts implementing the task force’s proposed evaluation system.

The Association will provide support, encourage collaboration and communication, help locals assess whether the pilots are implemented correctly, and work with locals to assure that members’ experiences in the pilot are reported accurately and fairly and that members’ rights are not violated. An NJEA task force focusing on this issue will monitor the impact of the program, along with the NJEA Certification, Evaluation, and Tenure Committee.

Whether you work in one of the pilot districts, a School Improvement Grant-funded district, or any other school district in New Jersey:

  • Alert your local association to any changes being implemented in evaluation, evaluation tools, or evaluation forms in your school.
  • Keep good records regarding your students, your work with students, examples of student work and achievement, how your students meet the curricular standards, how you meet standards for teachers, and your professional development and collaboration.
  • Pay attention to what is happening in evaluation statewide.
  • Check the NJEA website and publications regularly for updates and more information on evaluation throughout the year.

Any education transformation should enhance teaching and learning in our public schools, not undermine it. So stay tuned, stay involved, and stay alert.

Rosemary Knab, NJEA Research and Economic Services associate director, also serves on the N.J. Center for Teaching and Learning Board of Directors.
Martha O. DeBlieu, NJEA Research and Economic Services associate director for education research and issues analysis, serves as staff contact to the NJEA Certification, Evaluation, and Tenure Committee.