District report cards released Thursday by the Ohio Department of Education, which grade districts on an A-F scale, show far fewer districts earned top scores than in past years. Only two “A” grades were awarded on the performance index, down from 37 in 2013-14. About half of districts received “C” grades and a third received “D” grades, the Cleveland Plain Dealer reports.
District grades are down in part because a lower percentage of students scored at or above proficiency on statewide tests.
The results have some educators up in arms. “I would give the state an ‘F’ on this state report card,” Cincinnati superintendent Mary Ronan told the Cincinnati Enquirer. “It’s pretty confusing. It’s gotten terribly complex.”
Little Miami superintendent Greg Power was more blunt, calling the report cards a “train wreck” and a “monstrosity.” “We’re getting to a point of data for data’s sake and not data to support our teachers and kids,” he explained.
As many educators acknowledge, the drop in scores is largely due to the fact that state officials have set high targets for students. That is a good thing. By setting expectations high and measuring against them, Ohio will provide accurate information about how well schools are preparing students for success at high levels of learning and, ultimately, to graduate from high school ready for college or a career.
The real culprit behind the chaos and confusion is Ohio’s decision to go it alone by implementing an independent test. This year Ohio replaced its PARCC assessments with an exam developed by the American Institutes for Research (AIR). Already, state officials have indicated they may change the proficiency benchmarks for those tests next year.
Jim Cowen explained earlier this year that states like Ohio that have “gone it alone” with student assessments have incurred disruptions, costs and uncertainty. “Beyond the costs, time constraints and technical challenges that accompany the development and implementation of new assessments, states that have struck out on their own have also jeopardized their ability to compare their progress to other states—and may very well come out with an inferior assessment in the process.”
A Chalkbeat article reiterates that conclusion: “The process of leaving consortia that was meant to pacify local protests against Common Core-aligned tests has actually led to chaos and confusion in the classroom, not to mention extra costs to those same states to develop replacement exams.”
Certainly, that seems to be how Ohio educators feel. “The folks up there in Columbus keep moving the finish line,” Power said. Dan Good, superintendent of Columbus City Schools, agreed that the confusion arises from frequent changes to tests and cutoff scores. “Enough. Stabilize the system. Set one score,” he said.
Ohio is doing right by setting high academic standards and a high bar for proficiency. But by changing its test and continuing to move proficiency targets, state officials fail to provide consistency to students and teachers or to establish a stable baseline for measuring progress. Cowen’s advice from earlier this year proved particularly prescient:
“Leaders should resist temptations to go it alone, otherwise they risk undoing the years of work to get to this point. High-quality student assessments are one of the strongest tools teachers and parents have to ensure students receive the support they need. That shouldn’t be surrendered to the political winds of the moment.”