Briefly describe your model.
The McREL teacher evaluation instrument is a web-based, formative, rubric-driven evaluation system created collaboratively by teachers, principals, and researchers. It reflects the latest research on effective instruction and aligns with the Model Core Teaching Standards (InTASC), adopted by the Council of Chief State School Officers. It has been used for three-plus years and has gained support from teachers and teacher associations nationwide, including the North Carolina Association of Educators.
Where has your model been used and what data do you have regarding its efficacy?
More than 130,000 teachers in Arizona, California, Colorado, Indiana, Michigan, North Carolina, Montana, Utah, Washington, Wyoming, and the Commonwealth of the Northern Mariana Islands are now being evaluated using McREL’s evaluation system. The rubrics in the instrument are based on 40-plus years of research on effective teaching and professional practice (described in more detail below). We subjected our rubrics to extensive pilot testing and study to ensure their validity, confirming that they measure what they purport to measure.
As a research organization, we are compelled to report that we do not have rigorous data on its efficacy. Because our instrument, like many, is still relatively new and has not been subjected to true scientific study (e.g., a two- to three-year study of student performance in a district using our instrument versus a comparison district not using it), we would be remiss to draw causal inferences about its efficacy or claim its proven ability to raise student achievement. However, we can claim with integrity that surveys of teacher working conditions in North Carolina found higher levels of job satisfaction among teachers in those districts using the instrument than in those that were not—the result, according to teachers, of getting honest feedback on their performance in the spirit of continuous improvement.
What is your definition of good teaching?
In the McREL publication, Simply Better: Doing What Matters Most to Change the Odds for Student Success (Goodwin, 2011), we identify three key characteristics of highly effective teachers. All three are reflected in and measured by our instrument.
Highly effective teachers challenge their students. Good teachers not only have high expectations for all students but also challenge them, providing instruction that develops higher-order thinking skills. According to Carol Dweck, author of Mindset, great teachers encourage a “growth mindset” in their classrooms, helping students see intelligence not as a fixed trait, but as something that grows with concerted effort. Effective teachers take personal responsibility for student progress, yet also encourage students to be responsible for their own learning and hold high expectations for them.
Highly effective teachers create positive classroom environments. One of the strongest correlates of effective teaching is the strength of relationships teachers develop with their students. In addition to challenging students, effective teachers understand their students’ learning needs and how their various backgrounds can influence, and be an asset to, their learning. They also create respectful and inclusive classroom environments that encourage all students, regardless of background or ability, to contribute to classroom discourse.
Highly effective teachers are intentional about their teaching. Good teachers are clear about what they’re trying to teach and master a broad repertoire of instructional strategies to help students accomplish their learning goals. They not only have deep knowledge of their content areas but also know how to teach their content (pedagogical content knowledge). Great teachers continually monitor student progress and use classroom assessment data to adapt instruction to student needs.
How does your model promote a collaborative environment among educators?
In addition to providing teachers with opportunities to assess their own performance and demonstrate proficiency on multiple measures with teacher-provided artifacts (e.g., lesson plans and student work), our instrument was explicitly developed to support coaching conversations between principals and teachers. Our rubrics are intentionally designed to give both teachers and principals a clear idea of what teachers must do to progress to higher levels of performance.
How does your model differ from other models that are part of the New Jersey pilot?
A key premise of our instrument is that good teachers are made, not born. With research-based strategies, guided practice, and ongoing professional development, teachers can improve their practices and raise student achievement. Thus, our instrument does not categorize teacher behaviors as “unsatisfactory” or “deficient,” but rather, begins with the term “developing,” and then identifies specific, observable behaviors that teachers can learn and master to improve their performance to become “proficient,” “accomplished,” and the top rating, “distinguished.” Moreover, although McREL’s instructional framework guided the creation of our system, we do not require schools and districts to adopt our model of instruction as a condition of using the tool. Indeed, school systems that use other instructional models have found that our instrument nicely complements and reinforces these other models for creating more uniform approaches to instruction.
How does your model ensure quality training of administrators and teachers?
At a minimum, we require two days of training for teachers and three for principals to help everyone develop a shared understanding of the rubrics, the evaluation process, and how to use the web-based application. We show principals how to evaluate teacher performance fairly and objectively and how to coach teachers to higher levels of performance during pre- and post-evaluation conferences. We also offer train-the-trainer sessions to develop district capacity to sustain and support the instrument over the long term with no additional support from McREL. In New Jersey, we have partnered with EIRC (the Educational Information and Resource Center, located in Mullica Hill, NJ), a local service agency, to deliver training in our instrument. As a result, we’ve been able to lower trainer costs, bring a local understanding of New Jersey issues and context to the sessions, and establish a local resource to support use of the instrument.
How does your model incorporate the use of standardized student test scores?
While we believe strongly in using multiple measures and formative feedback to evaluate and guide improvements to teacher performance, we recognize that states and districts nationwide are moving toward using student achievement data to evaluate teachers. We can support these efforts, at the simplest level, by incorporating student test scores as artifacts into teacher evaluations. Alternatively, our research team can, in partnership with districts, create algorithms that use student achievement data to represent a suitable portion of a teacher’s overall evaluation rating.
For more information on the McREL model, visit www.mcrel.org/evalsystems.