States adhering to the PARCC test's standards are setting higher expectations for students than those using SBAC or the ACT Aspire, a new federal study finds.
The study, released Thursday by the statistical wing of the U.S. Department of Education, maps states' cut scores (the point at which a student is deemed proficient) onto the testing scale used by the National Assessment of Educational Progress. This allows for comparisons of the tests' technical difficulty, despite variations in each test's emphasis and format.
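The mapping idea can be sketched in a few lines. This is an illustrative toy, not the NCES implementation: it assumes an equipercentile-style approach in which, if some share of a state's students clear the state's proficiency cut, the mapped cut is the NAEP score that the same share of that state's NAEP-tested students reach. The function name and the simulated score distribution below are hypothetical.

```python
import random

def map_cut_to_naep(share_proficient, naep_scores):
    """Map a state's proficiency cut onto the NAEP scale (toy sketch).

    share_proficient: fraction of the state's students at or above the
        state's own cut score (e.g., 0.35 means 35% were proficient).
    naep_scores: NAEP scale scores for the same state's students.

    Under the equipercentile assumption, the state's cut corresponds to
    the (1 - share_proficient) quantile of the NAEP distribution: the
    top share_proficient of NAEP scores stands in for the students who
    cleared the state's bar.
    """
    scores = sorted(naep_scores)
    idx = round((1.0 - share_proficient) * (len(scores) - 1))
    return scores[idx]

# Hypothetical data: simulated NAEP scores for one state's 8th graders.
rng = random.Random(0)
naep = [rng.gauss(282, 34) for _ in range(10_000)]

# If 35% of students were proficient on the state test, the mapped cut
# falls at the 65th percentile of the state's NAEP distribution.
mapped_cut = map_cut_to_naep(0.35, naep)
```

Note the direction of the relationship: the higher a state's reported proficiency rate, the lower its cut score lands on the NAEP scale, which is what lets the study rank test stringency across states.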
It is the sixth report the Education Department has released using this method, but the first to look at how the shared tests developed in the wake of the Common Core State Standards stack up.
In general, the report shows that most states demand a significantly higher level of student performance than they did a decade ago. Some of that growth appears to be due to decisions made by the two federally funded testing consortia, SBAC and PARCC. (They are formally known as the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers.)
Both of those testing groups explicitly aimed to set a more stringent, "college and career ready" bar when they set their benchmarks in 2014. In fact, the move to SBAC and PARCC assessments created lots of headaches for states when worried parents saw their kids' results on the newer, harder tests.
When it comes to head-on comparisons, though, PARCC clearly emerges as the hardest of the shared tests in each subject and grade studied: 4th and 8th grade in reading and math. In fact, PARCC's definition of "proficient" performance actually surpasses NAEP's in both 4th and 8th grade mathematics.
Here, for example, is a graphic showing the range of 8th grade reading expectations.
"Overwhelmingly, the PARCC standards are higher than the other two, ACT [Aspire] or Smarter Balanced," said Peggy Carr, the associate commissioner of the National Center for Education Statistics, the statistical agency that produced the report, in a conference call with reporters. "Unfortunately, this doesn't mean that [student] performance is also high, and there is little to no relationship between the performance of students in these states and how high the bar has been set."
Comparing Testing Standards
Here's a look at some of the other highlights in the study.
- In all grade and subject combinations, the range of where states set the proficiency bar has narrowed, mainly because states at the lower end raised their standards.
- In 8th grade reading and math, no state set the bar below NAEP's "basic" benchmark. Just four states did in 4th grade reading, and one in 4th grade math.
- Expectations on the ACT Aspire, a more recent competitor in the state-testing sweepstakes that has signed up two states, were below SBAC's in math at both grade levels, but equal to or higher than SBAC's in reading.
- Kansas set some of the highest proficiency bars overall, even though it doesn't use any of the shared tests.
The findings also echo those of two other recent reports, both of which conclude, using a less rigorous methodology than the NCES's, that the gap between NAEP expectations and those on the state tests has narrowed.
Here's one really tricky thing to remember: States in PARCC and Smarter Balanced agreed to report scores using a common definition of proficiency. But they were permitted to make their own decisions about which level would be used for their own school ratings or consequences, like graduation.
So while Ohio and Louisiana administered PARCC in 2015, they chose to use "approaching proficiency," a lower standard, for accountability. NCES treated those states separately from the other PARCC states. And its analysts also dropped from the study states that had serious testing disruptions or that didn't follow all of the consortia's test-administration rules.
Finally, keep in mind that this is a snapshot of the state of things in 2015.
Federal officials, meanwhile, underscored that a higher bar is not necessarily a better one.
"It's important to evaluate states on the relative stringency of their standards in comparison to other states, but states need to look within their population of students and their own goals. I think it's not an absolute answer," Carr said.
"What we're seeing, though, is that states are raising the bar and becoming more alike in terms of what they identify as proficient performance."