Those misleading new School Performance Reports

Some of the state’s best schools mislabeled due to flawed methodology

Published on Friday, April 26, 2013

Confusing. Inaccurate. Mixed bag. Proceed with caution.

These are just some of the terms being used to describe the state’s new School Performance Reports, which were released earlier this month.

Department of Education (DOE) officials had hoped these reports would “provide a more complete picture of school performance” and “help schools and stakeholders engage in local goal setting and improvement.” Instead, they seem to be baffling parents and frustrating district officials who are trying to explain their reports to anxious community members. The new performance reports are so complicated that the department also released a 13-page Interpretive Guide and a 16-page white paper to explain the state’s peer group methodology.

The School Performance Reports, formerly known as School Report Cards, were redesigned in large part to meet the requirements of the state’s federally granted waiver from certain provisions of the Elementary and Secondary Education Act (ESEA). Unfortunately, in making these changes, the department sacrificed simplicity and readability and has failed to meet the needs of parents and children.

Two aspects of the new reports are particularly controversial: the introduction of a new peer group methodology and the use of percentile ranks within those peer groups.

In the past, districts, not schools, were placed into eight groups based on the socioeconomic conditions of the communities they served. Now, the DOE has employed "Propensity Score Matching," which creates a list of "peers" for each school in New Jersey, grouping them together based on shared demographic characteristics, namely student poverty, limited English proficiency, and special education classification. This could mean, however, that schools from opposite ends of the state—or even different grade levels—fall into the same peer group, giving parents little basis for comparison.
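To see roughly how grouping schools by demographic similarity works, consider the sketch below. It matches each school to its nearest neighbors using a simple distance on the demographic features, rather than a fitted propensity score; the school names, percentages, and group size are all invented for illustration, and the DOE’s actual model is the one detailed in its white paper.

```python
# A rough sketch of peer-group construction by demographic similarity.
# All values and names are hypothetical; this is not the DOE's method,
# which uses Propensity Score Matching as described in its white paper.
import numpy as np

# Each row: [% economically disadvantaged, % limited English proficient,
#            % special education] for one school (made-up values).
schools = {
    "School A": [0.62, 0.18, 0.15],
    "School B": [0.60, 0.20, 0.14],
    "School C": [0.05, 0.01, 0.12],
    "School D": [0.58, 0.22, 0.16],
}

names = list(schools)
X = np.array([schools[n] for n in names])

def peer_group(target, k=2):
    """Return the k schools most demographically similar to `target`."""
    i = names.index(target)
    dists = np.linalg.norm(X - X[i], axis=1)  # distance on the three features
    order = np.argsort(dists)
    return [names[j] for j in order if j != i][:k]

print(peer_group("School A"))  # -> ['School B', 'School D']
```

Note that nothing in this kind of matching considers where a school is located or what grades it serves, which is exactly why two schools at opposite ends of the state can end up as “peers.”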

Even more problematic is the use of percentile ranks. Each report identifies a school's position relative to other schools using a scale from zero to 99, representing the percentage of "peer schools" that the school is outperforming. But percentile ranks are a zero-sum game and can be deceptive. A high-performing school can be labeled as “lagging” or even “significantly lagging” simply because it is being compared to other high-performing schools. Similarly, a struggling school may look like it’s doing just fine—because it is being compared to other struggling schools.
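A toy calculation makes the problem concrete. The proficiency rates below are invented, but the arithmetic follows the reports' description of the 0–99 scale as the share of peer schools a school outperforms.

```python
# Illustration of why peer-group percentile ranks can mislead.
# All proficiency rates are made up for the example.
def peer_percentile(school_rate, peer_rates):
    """Percentage of peer schools this school outperforms."""
    outperformed = sum(rate < school_rate for rate in peer_rates)
    return round(100 * outperformed / len(peer_rates))

# A strong school (92% proficient) grouped with even stronger peers
# lands at the bottom of its group -- "significantly lagging."
print(peer_percentile(0.92, [0.94, 0.95, 0.96, 0.97]))   # 0

# A weak school (48% proficient) grouped with still weaker peers
# looks like a standout, despite fewer than half its students proficient.
print(peer_percentile(0.48, [0.30, 0.35, 0.40, 0.45, 0.55]))  # 80
```

The same absolute performance can land at the top of one peer group and the bottom of another; the rank says more about the company a school keeps than about the school itself.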

There are a number of other problems with the reports, such as the absence of information that is required by law. But the more troubling question is why the Department of Education would choose to analyze the data in a way that renders it, at best, not very useful and, at worst, misleading.
