No wonder you’re confused! In the last 18 months, there have been a major task force report, a pilot program, a new law, and proposed regulations--all relating to teacher evaluation. To complicate matters further, this school year brings a second pilot program featuring substantial differences from the first experiment.
Wondering how all of this affects you? At this point, it all depends on where you teach and what you teach.
One thing is clear, however. The New Jersey Department of Education (NJDOE) is planning for statewide implementation of a new teacher evaluation system next September. That means every school in every district.
This is not going away. You need to get the facts. So, if you are a certificated staff member at a New Jersey public school, read on.
Not sure if you’re in a pilot district?
State-funded Pilot I districts
- Monroe Township
- Ocean City
- Red Bank
- West Deptford Township
NOTE: Every district that participated in Pilot I will join Pilot II.
State-funded Pilot II districts
- Bordentown Regional
- Collingswood Borough (Lead district for a consortium that includes Audubon and Merchantville)
- Haddonfield Borough
- Lenape Valley Regional (Lead district for a consortium that includes Stanhope)
- Middlesex County Vocational
- Piscataway Township
- Rockaway Township
- Woodbury City
NOTE: It is expected that up to 10 Title I districts will be added to the pilot. In addition, up to 21 districts that applied for the program but were not selected for state funding have been invited to join the 2012-13 pilot, without funding. Stay tuned to njea.org for an announcement about additional district participants.
If you are in a pilot district, read this:
If your district is in Pilot II, find out what teacher practice instrument (sometimes called a framework, model or provider) your district selected to measure teacher practice. For example, your district may be using Charlotte Danielson’s Framework for Teaching or the Marzano Causal Teacher Evaluation System. The first step is to form a District Evaluation Advisory Committee that will oversee the training of teachers and administrators in the use of your particular instrument. Although Pilot I required three full days of training for administrators and two full days for teachers, the state called for training in Pilot II that is “rigorous and comprehensive” but did not specify a minimum number of days.
Both NJEA and the state believe that training is critically important to the success of any new evaluation system. Your local association should insist on high quality training and a representative of your local should attend administrator trainings. For more advice to local associations, turn to “NJEA’s concerns, recommendations” below.
The state has recommended that training in the evaluation instrument be completed and observations begin by Oct. 1. The way those observations will be conducted differs substantially from Pilot I. For example:
- Unannounced observations will be required for all teachers.
- All teachers will be observed once by an evaluator not from the teacher’s school.
- Some observations will be done by two observers to ensure inter-rater agreement.
Of particular concern to NJEA is the categorization of teachers into two groups: core and non-core. Core teachers will receive more observations, more of them unannounced, and will be observed for longer periods of time. For core teachers, student achievement will account for 50 percent of the evaluation; for non-core teachers, it will account for between 15 and 50 percent. Because NJEA believes that evaluation procedures must be consistent for all teachers, the Association has voiced its objection to NJDOE officials.
The role of student achievement
Since observations featuring the new instrument should be underway in Pilot I districts, the state is hoping that these districts will make progress on the use of student achievement measures in teacher evaluation.
Various NJDOE officials have acknowledged the difficulty in designing this portion of the new system and have admitted it is a work in progress. Because nearly 70 percent of teachers work in a grade level or subject area that does not have state tests, Pilot I districts have found that identifying and/or creating appropriate assessments is difficult and time consuming. And for the 30 percent of teachers who do work in a tested grade or subject area, the fact that districts don’t receive test scores until after teachers’ summative evaluations are completed has proved problematic. In fact, Pilot I teachers in this category received an interim summative rating last spring and should receive a final rating this fall. Those teachers are advised to stay in close contact with local association leadership regarding this matter.
Here’s what we know so far. For teachers in tested grades (4-8) and subjects (math and language arts), student growth percentiles (SGP) on state assessments (where two consecutive years of data are available) will be calculated. These SGPs will account for 35-45 percent of the teacher’s summative rating.
Districts will determine the remaining 5-15 percent of the student learning component, which may include a schoolwide measure of student achievement and other optional measures.
For teachers in non-tested grades and subjects, student achievement measures may be used for 10-45 percent of the summative evaluation, with 5-10 percent derived from schoolwide measures.
Under this formula, teacher practice measures may account for a higher percentage of these teachers’ summative ratings--up to 85 percent (with 15 percent based on student achievement measures).
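To make the percentages above concrete, here is a hedged sketch of the weighted-sum arithmetic they describe. The NJDOE has not published an actual scoring formula; the function name, the 0-4 rating scale, and the sample scores below are assumptions made purely for illustration.

```python
# Hypothetical illustration only: the NJDOE has not published a scoring
# formula. The 0-4 scale and all names below are assumptions.

def summative_score(practice, sgp, district, sgp_weight=0.40):
    """Weighted sum of component ratings, each on an assumed 0-4 scale.

    Teacher practice is fixed at 50 percent. The SGP weight may fall
    anywhere in the 35-45 percent range; whatever remains of the
    student learning component (5-15 percent) goes to district-chosen
    measures, so the three weights always total 100 percent.
    """
    district_weight = 0.50 - sgp_weight  # 0.05 to 0.15 by construction
    return 0.50 * practice + sgp_weight * sgp + district_weight * district

# A teacher rated 3.0 on practice, 2.5 on SGPs, and 3.5 on district
# measures, with SGPs weighted at 40 percent:
print(round(summative_score(3.0, 2.5, 3.5), 2))  # -> 2.85
```

For teachers in non-tested grades and subjects, the same arithmetic would apply with the practice weight rising as high as 85 percent.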
If you are not in a pilot district, read this:
Happy that you didn’t see your district’s name on the list for Pilot I or Pilot II? Not so fast! It is expected that up to 10 Title I districts will be added to the pilot. In addition, up to 21 districts that applied for the program but were not selected for state funding have been invited to join the 2012-13 pilot, without funding. These districts can participate in the state-level Evaluation Pilot Advisory Committee (EPAC), receive support and guidance from NJDOE’s evaluation implementation team, and contribute to a forthcoming third-party evaluation of the pilot.
If it turns out that your district is not a pilot district, you should still peruse “If you are in a pilot district, read this.” The fact is many of the features of the new evaluation system are being tested in the pilot program. For example, Pilot II introduces different requirements for “core” and “non-core” teachers and includes the use of observers from another school in the district. Although it’s impossible to say at this time, those features may become part of new regulations related to teacher evaluation, so it’s best to stay apprised of what’s happening in the pilot.
An aggressive implementation timeline
All schools will be required to meet certain milestones this school year so they can be ready for the new system in September 2013. All non-pilot districts must:
- No later than November 2012, form a District Evaluation Advisory Committee (DEAC) to ensure stakeholder engagement. NJEA strongly advises these committees be formed as soon as possible, as they will help to drive decision-making for all other parts of the process.
- Have representation from the following groups on the DEAC: teachers from each school level in the district, central office administrators overseeing the teacher evaluation process, and administrators conducting evaluations. Members must also include the superintendent, a special education administrator, a parent, and a member of the district board of education. At the discretion of the superintendent, membership on the DEAC may be extended to the district business administrator and to representatives of other groups.
- By January 2013, adopt an evidence-supported teaching practice observation instrument. (See “State likely to approve more evaluation instruments.”)
- By July 2013, thoroughly train teachers in the teaching practice observation instrument.
- By September 2013, thoroughly train observers to ensure fair and consistent application of the instrument.
- In January and July 2013, complete progress reports on these milestones.
As it did last year, NJEA staff will provide information and support to all pilot districts.
Everybody, read this:
State likely to approve more evaluation instruments
But wait, there’s more. More instruments, that is.
Under the state’s new evaluation system, 50 percent of your evaluation will be based on teacher practice (as determined by classroom observations, portfolios, etc.). If it hasn’t already, your district must select an evaluation instrument from a list of providers approved by the NJDOE.
Pilot I featured four providers: Charlotte Danielson’s Framework for Teaching, the McREL Teacher Evaluation System, the Stronge Teacher Evaluation System, and the Marzano Causal Teacher Evaluation System. However, this summer the NJDOE created a process that allows other vendors to get approval of their teacher evaluation instruments for use by New Jersey school districts. A final list of state-approved teacher evaluation instruments is expected by January.
In addition, if a district chooses to use an instrument that is not on the approved list, it must submit additional documentation to the Department of Education for approval.
What about the student growth piece?
Part of your evaluation will be based on measures of student achievement. It is no surprise that this portion of the evaluation generates the most anxiety among educators and for good reason. As noted under “If you are in a pilot district, read this,” the student measures portion of the teacher evaluation system is proving to be a conundrum for the NJDOE. Still, the department is moving forward. That’s why all members should watch a video titled “Using student growth percentiles” found on the NJDOE website.
NJEA has serious concerns about the use of these state test scores in teacher evaluation. Because test scores only offer a snapshot of a student’s abilities, multiple measures of student achievement must be used to accurately determine student growth. For more on NJEA’s position on this topic, read “Research vs. rhetoric,” an article from the March 2011 issue of the Review.
How does the new tenure law affect teacher evaluation?
The tenure law signed by Gov. Chris Christie in August makes important changes to the teacher evaluation system in New Jersey.
Several components of the law will be familiar to NJEA members, including requirements for formal observations as part of the evaluation process, post-conferences, and professional development plans (PDPs). Currently, districts also have the option of conducting informal observations--often called walkthroughs or power walks--in any classroom. This option will still be available to districts. However, informal observations are not specifically included in the law as part of the evaluation process.
There are several components of the evaluation process that will be new to most educators.
- All evaluations shall have four ratings: “Highly Effective,” “Effective,” “Partially Effective,” and “Ineffective.” This may be a departure from the current ratings in your district of “Satisfactory,” “Needs Improvement,” or “Unsatisfactory.”
- The law also requires a “Corrective Action Plan” for teachers rated as “Partially Effective” or “Ineffective.” The plan shall be developed by the teacher and the teacher’s supervisor, and must include timelines and list the responsibilities of the district.
- Each school will have a School Improvement Panel consisting of the principal or designee, an assistant or vice principal, and a teacher (selected in consultation with the local association). The panel will oversee mentoring for new teachers and identify professional development opportunities. Supervisors on the panel will also conduct evaluations, including mid-year evaluations for teachers rated “Partially Effective” or “Ineffective.” The teacher on the panel may evaluate teachers, but only if the local association agrees.
- All evaluations must include multiple measures of teacher practice and multiple measures of student progress.
- Districts must use an evaluation instrument that is based on the professional standards for the individual’s job description. Performance measures in the rubric must be linked to student achievement. Districts must submit their evaluation instrument to the commissioner of education annually for approval.
- Measures of pupil progress cannot rely exclusively on a single standardized test score. Standardized assessments shall not be the predominant factor in the overall evaluation of the teacher.
- Aspects of evaluation shall continue to be mandatory subjects of negotiations if they are not superseded by statute or regulation.
State Board to adopt new teacher evaluation regs
The new tenure law, however, leaves many aspects of the evaluation process to be determined by regulation. Those regulations will be proposed by the commissioner of education and must be adopted by the State Board of Education.
Some of these regulations will address items specifically referenced in the law, such as the requirement for a four-part rating scale dealing with effectiveness, the use of multiple measures of pupil progress, and a requirement that the evaluation rubric be based on professional standards of conduct. Other regulations will deal with items not detailed in the law. This may include guidelines for training in the evaluation system, the process for approval of an evaluation instrument, and a process for ensuring that the evaluation process helps to inform instruction.
The State Board may address other items not specifically mentioned in the law such as the number and length of observations, evaluation timelines, options for measuring student growth in grades or subjects without standardized tests, and standards on what will constitute multiple measures of pupil progress.
At its Aug. 1 meeting--prior to Gov. Chris Christie’s signing of the tenure law--the State Board was presented with the first set of regulations pertaining to the new evaluation system. As expected, parts of the proposal simply codify certain features of the state’s proposed evaluation system and what is already in the tenure law described above. NJEA is currently analyzing these regulations and will monitor the approval process closely.
The proposed regulations will be before the board at the adoption level this fall. Additional regulations pertaining to teacher evaluation will be introduced in 2013-14--after the new system is to be fully implemented across the state. Those regulations will likely address the student achievement portion of the evaluation system.
It is worth noting that these regulations have been introduced before an independent evaluation of the pilot is complete, although the NJDOE has stated that information gained from the pilot through the Evaluation Pilot Advisory Committee did inform the proposal. The NJDOE partnered with Rutgers University Graduate School of Education to provide this external evaluation, with a final report due in December 2012. Data has been and will be collected from the pilot districts via interviews, focus groups and online surveys.
NJEA’s concerns, recommendations
NJEA continues to be concerned about the statewide implementation of a new teacher evaluation system by September 2013. Significant questions persist about the efficacy of these teacher practice instruments for special education teachers, ESL teachers, and teachers in non-tested subjects. It is also important to note that although the new tenure provisions apply to those with educational services certificates, the new evaluation system does not. In addition, districts are required to identify multiple measures of pupil progress for each educator’s evaluation--without the time and resources necessary to determine the validity and reliability of those assessments. At this point, there are far more questions than answers when it comes to the student growth portion of teacher evaluation.
Even a good evaluation system can be implemented poorly. And remember, your collective bargaining agreement must be honored throughout the process. Local associations in districts that are struggling to meet the mandate in any way should contact their UniServ representatives.
Resources you can find on njea.org
Is your head swimming? Fortunately, njea.org has resources related to teacher evaluation. You'll find NJEA’s position on teacher evaluation, past NJEA Review articles on Pilots I and II and downloadable resources for members and leaders. These include “The NJEA Resource Guide on Evaluation for Teachers,” a brochure on using student test scores to evaluate teachers for use with parents (available in English and Spanish), and tips on ways to ensure the new system is properly implemented and is collaborative and fair. Information on four teacher practice instruments, including videos, is also available.
You may also want to visit the N.J. Department of Education website at www.state.nj.us/education. Click on the “Educator Evaluation EE4NJ” link to access the March 2011 task force report upon which much of the new system is based. The NJDOE has also prepared a page with answers to “Frequently Asked Questions” that it updates regularly.