The federal Investing in Innovation program helped build evidence of the effectiveness of new interventions, but also highlighted how much local education groups need support from regional and national experts to build successful ones.
That is the takeaway from an evaluation of the program, known as i3, that was released last week by the Social Innovation Research Center, a nonprofit think tank supported by the national venture-philanthropy fund New Profit.
The findings raise concerns about states’ and districts’ ability to develop and study their own school improvement and other interventions under the Every Student Succeeds Act. The new federal law gives districts much more flexibility to implement school improvement and other interventions but requires them to meet evidence standards modeled after those of i3.
“With all these things happening under ESSA with new evidence definitions, if you are going to just throw that out there and hope the locals will do it with no assistance, you are dreaming,” said Patrick Lester, the director of the Social Innovation Research Center. “These [i3 grantees] are in the top 3 percent of [i3] applicants, they are supposed to be the cream of the crop, the elite of school districts, ... and we see what the results look like: For the most part, school districts were out of their depth.”
The Obama administration launched the $1.4 billion i3 program in 2009 as part of the economic-stimulus education spending that also brought the much bigger Race to the Top and School Improvement Grant programs. But i3 is the only one of those massive federal competitive grants to be codified in the Every Student Succeeds Act, as the revamped Education Innovation and Research program. Both iterations of the grants are intended to support the developing, testing, and scaling up of effective interventions for school improvement, early-childhood education, dropout prevention, and other education areas.
Readiness Lacking
Thomas Brock, the acting director of the U.S. Department of Education’s research agency, the Institute of Education Sciences, said he agreed with the study’s findings, adding that district and state research capacity “has been front on my mind since ESSA was passed, because it’s clear it is trying to push this [evidence-based] work. My worry is ... there may just not be the readiness yet for states and localities to undertake this work, and even when there is readiness, it’s still just very hard work.”
About a third of all 44 interventions designed and evaluated under the i3 grants to date showed significant benefits, and others showed some positive results, according to the study.
That’s more than double the average success rate for research and development in education; a 2013 study by the Coalition for Evidence-Based Policy showed that only 12 percent of well-conducted experimental evaluations of education interventions found positive effects. Final evaluations have only been released for projects from the first three years of the program, 2010 to 2012. But if the current success rate holds, Lester estimates 52 of the grant’s 172 total projects will show evidence of success.
The i3 interventions helped fill gaps in the field on school turnaround strategies, science assessments, and the use of data to improve student achievement, the study found. But while 12 projects focused on teacher and principal professional development, only three of those found benefits from their interventions.
“It does support the argument for research-practice partnerships pretty strongly, to help districts on the evidence side, the analysis side, maybe even the data-collection side,” said John Q. Easton, the vice president for programs at the Spencer Foundation and a former director of IES.
Elements of Success
The success of interventions skewed heavily toward experienced, well-funded organizations centered on their own interventions, rather than grassroots interventions launched by school districts, Lester found. Of the 16 districts that received i3 grants directly, only three showed positive effects. By contrast, three in four university-led grants showed evidence of success for their interventions.
Part of that gap is because all but one of the district-led interventions received development grants, which required the lowest initial evidence base and which were also the least likely to show benefits. Similarly, interventions evaluated using a randomized controlled trial were as likely to see benefits as interventions studied under other statistical evaluation methods, but groups with more resources and expertise were more likely to undertake the so-called “gold standard” experimental studies in the first place.
Caitlin Scott, a manager in research and evaluation at the research group Education Northwest, agreed with the evaluation’s recommendation that districts engaging in education research and development should receive better access to national experts and research partners who can take on the technical side of the projects. "[Districts] are in the business of educating students; this is not their daily mission,” she said.