One of the many decisions that fall to school district leaders is whether to use an interim assessment—one that’s given every couple of months—to measure student progress.
These tests can serve several different purposes, including predicting performance on state exams and identifying subsets of skills for which students might need support.
Picking the right tool is a high-stakes decision. Teachers may use the results of these tests to adjust their instruction or determine which students will receive interventions. But it can also be hard to identify exactly which test will best suit a district’s specific purposes.
Last year, the nonprofit curriculum reviewer EdReports announced that it would start releasing reviews of interim assessments as well, judging their technical quality and usability. These kinds of outside evaluations are hard to come by right now, as the assessments are often proprietary, created by private companies.
But earlier this year, the organization put the plan on hold indefinitely, because not enough assessment companies agreed to participate.
It was disappointing news for Christine Droba, the assistant superintendent of teaching and learning in North Palos School District 117 in Palos Hills, Ill.
“An external review would be huge,” she said, as it would remove some of the burden of assessing the validity and reliability of these tests from teachers’ and other educators’ shoulders.
Droba and North Palos superintendent Jeannie Stachowiak spoke with Education Week about how their district chooses interim assessments, and what they did after discovering that the test they were using wasn’t aligned with the year-end state test in Illinois, which is used for federal accountability purposes.
They also shared their advice for other school leaders wondering about the alignment of interim assessments to their teaching.
This interview has been edited for length and clarity.
How does your district use interim assessments?
Droba: We use our interim assessments as a tool to predict how students are going to do on the state assessment. It is something that we use to identify which students need enrichment and which students need additional support in terms of: Are they going to be ready for the end-of-year benchmark?
We look at: Where’s grade-level proficiency? How close is it to our target? How many students are below that level? What do they need to do to get to the end-of-year benchmark? Which students are going to start intervention? That’s our fall assessment. By the winter, we track progress from fall to winter, and then revise any plans that we have—if we need to do more support in this area, or maybe less support in this area.
And then the spring testing session is really done to track growth from fall to spring, and we also use the spring assessment to continuously make sure that the interim assessment is aligned to the state assessment. We’re always looking at: Are these numbers showing us the same thing?
How did you figure out that your interim assessments weren’t aligned?
Stachowiak: [The assessment we were using] is not directly aligned to Illinois State Standards.
We had teachers, understandably, taking a look at some of the things on the [interim] assessment and beefing up their instruction in those areas. However, those were not target areas on the state assessment. So they were working very hard to make sure students met standards on an interim assessment that was really not aligned.
We started to look for other potential assessments that would be better aligned, which was when we made the switch [to a new interim test]. We have a data coordinator in the district who meets with our leadership team constantly. And we are looking to do a data dive to make sure that [the new interim assessment] is a better predictor for our students.
Droba: We worked with our data coordinator to take the assessment from the spring and then the [state] assessment data for the same group of kids. And he ran a correlational study to figure out what was the correlation between the two data sets. I believe that number was around 0.7 or 0.8, which is very high. He was basically saying that these numbers are correlated.
That was similar to the research that [the interim assessment provider] already presented to us. Their correlations were a little bit higher than what he found with our data set. But it was still high enough that we were like, “Yeah, let’s move forward, this is still good. It’s in alignment.”
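For readers curious what such a check involves, here is a minimal sketch of a correlation calculation, assuming spring interim scores and state assessment scores are available as two matched lists for the same students. The numbers and variable names are hypothetical, for illustration only, and this is not the district’s actual analysis.

# Hypothetical sketch of a Pearson correlation check between matched
# interim and state assessment scores; the scores below are illustrative.
import numpy as np

# Each position i holds the two scores for the same student.
interim_spring = np.array([712, 745, 690, 760, 725, 700, 738, 755])
state_test = np.array([718, 752, 688, 771, 730, 705, 741, 760])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry
# is the Pearson correlation between the two score sets. Values around
# 0.7 to 0.8 would indicate the strong relationship the district looked for.
r = np.corrcoef(interim_spring, state_test)[0, 1]
print(f"Pearson correlation: {r:.2f}")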
What advice would you give to other districts that may be having similar questions?
Droba: I would say that you need to have clarity on your goals and priorities. First and foremost, we made it very, very clear that the state test is what’s measuring the state standards. It’s what our whole curriculum and system is built on.
[The interim assessment] is a tool to predict how kids are going to do on the state assessment. So we have very clear priorities. If you don’t have that, it can be very confusing. Which assessment? What am I looking for? What’s the purpose of the assessment? You really need to have clarity, first on the purpose of the assessment and how you want to use it.
Stachowiak: We value the state assessment, because we believe it measures what our teachers teach and what our students should learn. And then based on that value, we create goals for the school district. We share those goals with the Board of Education, obviously. We share those goals with the teachers, so that at all of our professional learning community meetings, and in everything that we’re doing with our staff, that’s the goal in mind—to make sure that the students are going to achieve the goals that we expect.
If you don’t have that goal in mind, and that alignment, it’s really difficult to make sure that everyone is sharing and doing the same thing and valuing not only the state assessments, but then whatever interim assessments you’re using to measure.
Is there information that you would want publicly available about interim assessments that you don’t have access to?
Droba: An external review would be huge.
A lot of what we get is from the company itself. They’re going to give us this report that says, “Yes, it’s aligned to IAR. Yes, we do that.” They do their own research. Having an external reviewer would help just make sure that their methods were valid, that everything meets the high-quality standards that you would expect.
We read through the reports that they provided to us, and then we piloted the program to make sure that in practice, it was what we wanted it to be. But having the external review would just provide a set of eyes outside the company.
We work with teachers on our review committee, and they know the usability of it. But having a research company explore the validity and reliability of an assessment would allow [teachers] to review those findings instead of having to do the review themselves.