The results from the main 2022 National Assessment of Educational Progress, released yesterday, portray the devastating effects of COVID-related disruptions on student achievement across the United States. Policymakers are at a dangerous juncture with these new results. If they act on misinterpretations, they could send kids down the wrong path and delay students’ timetable for catching up.
This NAEP is the most important in the program’s history because it provides the first national comparison of student achievement before and toward the end of the pandemic. The results carry a clear message for state policymakers: They need to step up in a big way before we lose a generation of students. But first, state and district leaders need to accurately interpret the NAEP results and also consider results from other tests.
With this new NAEP data, and the long-term-trend NAEP results released Sept. 1, we’re awash in rich, but potentially confusing, data. If we don’t interrogate the multiple sources of assessment information honestly, states and districts could miss opportunities to support students.
Many people will be tempted to look at the overall NAEP scores for their states to see how they compare with other states’. But in order to understand where needs are greatest, leaders must examine the extensive set of disaggregated results across all four tests: math and reading, 4th and 8th grades.
Here’s an example that shows why this is important. As you can see in the graph below, 4th grade math scores for the lowest-performing students (at the 10th percentile and below) dropped 7 points, while scores for the highest-performing students (at the 90th percentile and above) declined only 2 points. (In the chart, some numbers are rounded.)
Based on these results, one could reasonably conclude that the pandemic hit the lowest-performing students the hardest. But take a look at the 8th grade math scores to see why we must look across all four tests before drawing general conclusions.
In 8th grade math—the grade/content area that was probably hit hardest by COVID—the drops in performance were quite consistent across the full achievement continuum. (In the chart, some numbers are rounded.)
Why are the differing patterns between 4th and 8th grade math worth noticing? Because information like this facilitates a nuanced analysis of performance that—along with state summative and interim-test results—can inform a thoughtful response from state and district leaders. Such analyses will allow educators and policymakers to provide support where it’s needed most.
Eighth grade reading provides yet another example of why we must avoid overgeneralization. While score drops across the four tests were generally worse for students of color, only white students experienced significant drops in 8th grade reading.
State leaders and their assessment experts now must reconcile these new NAEP data with their own summative state results. Most state assessment results tell a story similar to NAEP’s, but some states are reporting that students are essentially back to pre-pandemic levels. Unless those results are confirmed by NAEP, I’d urge leaders not to believe such a rosy picture.
NAEP can provide a crucial reality check on rosy results. Here’s why.
It’s the only test in the country that can provide valid comparisons across states and large urban districts. State summative tests can’t do that. The National Center for Education Statistics, which runs NAEP, can establish the validity of NAEP results in ways that state and interim assessments cannot, due largely to rigorous sampling protocols that ensure that the pools of students participating in the 2019 and 2022 tests are as comparable as possible. NCES also has the time, resources, and expertise to ensure that the inferences that can be drawn from the 2022 test scores are equivalent to those drawn from the 2019 scores. They do this through a set of processes known as test equating.
All these strengths position NAEP as a powerful basis for understanding patterns of student learning. That said, no test—including NAEP—portrays “the truth.” And I say that even as a proud member of NAEP’s governing board. That’s just the fundamental truth of measurement.
It would be unusual for any state’s results to be a perfect reflection of NAEP, and there are many reasons for the differences.
First, NAEP and state tests don’t measure exactly the same math and reading content. The content differences aren’t usually substantial, but even small ones can lead to different results.
Second, the NAEP achievement levels—the cutoff points for each level of performance—are different from those for states’ own end-of-year assessments, making it impossible to compare the percentage of students scoring at “basic,” “proficient,” and other levels across state tests and NAEP.
State leaders should evaluate whether the change in their NAEP-scale scores from 2019 to 2022, across all four tests, aligns with the score changes they see on the state assessment. If the two diverge, leaders should avoid the temptation to cherry-pick the results that tell the most positive story. Instead, they should see the divergence as a sign that they need to examine the quality of their state assessment, particularly how well the scores were made comparable across years.
I urge leaders to resist making political points based on the NAEP results. Contrary to what some might try to argue, the results do not support arguments about how school reopening policies affected scores. The patterns in the test results are too nuanced and varied to draw the conclusion, for instance, that states where schools were in person, as opposed to remote, for more days produced better results. Politicians and pundits who make that argument likely will be wrong.
No state or district escaped the effects of the pandemic disruptions. We need to look carefully at the rich set of data offered by this NAEP release, along with other assessment and opportunity-to-learn data, to understand student and school needs.
And understanding must lead to action. State departments of education must provide clear guidance and leadership on evidence-based practices to support accelerated learning. Importantly, state policy leaders must continue the funding necessary to enable students to make up lost ground. The federal Elementary and Secondary School Emergency Relief Fund provided much-needed—but short-term—help. Remember: These funds must be spent by September 2024. And yet the need will continue well beyond this time.