The U.S. Department of Education’s $650 million experiment to find and scale up innovative education ideas was a mixed success, a new report finds: for the first time, money was awarded to programs that showed evidence of past success, but those rigorous standards also produced a list of winners full of the “usual suspects.”
The report released today by Bellwether Education Partners, a Washington consulting firm, hammered away at a crucial question: Was the Obama administration’s program successful in finding truly innovative ideas that will improve K-12 education?
“Is it immediately obvious that they found breakthrough innovation? No, but that wasn’t necessarily their purpose,” said Kim Smith, a co-founder and CEO of Bellwether, which is working with support from the Rockefeller Foundation on research about innovation. The report is the culmination of interviews with dozens of i3 applicants, winners, and philanthropists, plus a review of public documents about the program.
“I think the department accomplished some really important things. It motivated a lot of action in the field. [The Department] is really juicing up the innovation ecosystem, and it’s going to take a little while to start to make progress.”
As the Aug. 2 deadline nears for a second, smaller round of Investing in Innovation, or i3, grants, the report acknowledges that in many ways, the competition itself was innovative, especially for a federal education department that is more accustomed to handing out grants via formula than through a competitive process.
Last year, nearly 1,700 applicants vied for $650 million in grant money, which was funded by the 2009 American Recovery and Reinvestment Act, the economic-stimulus package passed by Congress. Forty-nine winners were chosen, with awards split into three tiers ranging from nearly $5 million to $50 million. The biggest awards went to the proposals with the strongest research base.
Although this year’s i3 round will award only $150 million, interest does not appear to have waned. Nearly 1,400 would-be applicants told the Education Department they plan to apply.
In today’s i3 report, the researchers give the department credit for encouraging partnerships between the philanthropic sector and K-12 public education, both by requiring winners to secure matching dollars and by establishing an online registry where foundations and education entrepreneurs could find each other.
And, researchers said, the department took a bold and significant step in requiring varying levels of evidence for each type of innovation grant, acknowledging that some ideas and innovations might be worthy of government investment but have far less research to back them up. This evidence framework was “a giant leap forward” and “by far the most significant innovation that i3 brought to the table,” the researchers said.
But this rigorous evidence framework came at a cost, since it favored ideas that had been around long enough, and had enough financial backing, to make evaluations possible. The result, the researchers said, was a “pool of applicants and grantees made up of existing organizations that had already addressed K-12 schooling in some way.”
The winners included such well-known entities as Teach For America, the Knowledge Is Power Program, and the Reading Recovery program through Ohio State University.
The report quotes one unnamed i3 applicant who said: “Neither the iPhone or iPad teams at Apple would have been able to meet this standard to get the funds to initiate these projects.”
Frederick M. Hess, the director of education policy studies at the American Enterprise Institute, agrees, but he doesn’t necessarily fault the department.
“It did not find innovative programs because it was not set up to find them,” Mr. Hess said. “They chose to write rules which required established evidence of effectiveness. That’s perfectly reasonable. You’re giving away $650 million in tax dollars.”
Branding Issue
For the department, part of the problem became the competition’s name.
In the beginning, the department called it the Invest in What Works and Innovation Fund, but that name was later simplified to Investing in Innovation, and given the “i3” nickname.
James H. Shelton, the department’s assistant deputy secretary for innovation and improvement, acknowledged that the department may have done itself a “bit of a disservice” by taking “what works” out of the name, thus setting up unrealistic expectations about the kind of innovation the department would fund.
However, he pointed out that although the list of winners included well-known organizations, it also contained applicants with no national profile. (For example, the 27,000-student St. Vrain Valley School District, in Colorado, was the highest-scoring winner, netting $3.6 million for its plan to use targeted reading and math interventions with English-language learners.)
“We had two important criteria: that a proposal be significantly better than the status quo, and two, that it goes to scale,” Mr. Shelton said.
The report notes that more involvement from the for-profit sector could have led to more innovative proposals, especially in the area of technology. In this case, however, the department’s hands were largely tied. The legislation Congress passed creating i3 made districts, groups of schools, and nonprofit partners the only eligible grant recipients. That left no opening for applicants from the for-profit sector, which is far more likely to embrace risk than the government.
In the end, the list of first-round winners disappointed many foundation officials, the report said. This is an important point, because foundations and other private-sector organizations were called upon to provide 20 percent matching funds to the winners. (The matching requirements have been lowered to between 5 percent and 15 percent, depending on the tier, for the second round.)
Some foundation leaders referenced in the report indicated there were few winning proposals that they wanted to fund. And they were further disappointed by the winners in the smallest “development” category, where the chance of finding creative, unique ideas seemed more likely, given that a less rigorous research base was acceptable.
“Out of the development grants, I would be amazed if these grantees really develop into game-changers,” one funder is quoted in the report as saying.
In their narrative, the researchers question whether such criticism is based on a thorough review of the winning proposals, or merely a quick glance at the list of winners.
Other education policy observers, however, argue that some critics have an unrealistic definition of what innovation means.
Simply scaling up an education program, or implementing an idea in a part of the country that has not seen such a thing before, can be innovative, said James W. Kohlmoos, the president and chief executive officer of the Washington-based Knowledge Alliance, which represents research groups.
“If innovation is doing something different to create improvement, ... that kind of a meaning can have broad applications,” he said. “Innovation can be something very small.”
Selection Process Reviewed
The researchers also took a hard look at the selection process for winners, which relied almost exclusively on a cadre of outside peer reviewers who scored each application. The report questions whether the strict rules the department used to weed out peer reviewers with potential conflicts of interest may have eliminated the most-qualified reviewers from the pool, leaving “district data officers and retired professors” as judges who favored “more incremental innovations.” It also asks whether the department could have used the peer reviewers differently, rather than relying on them entirely to pick the winners.
While acknowledging the quick timeline on the project, with awards made just three months after the application deadline, the researchers questioned whether the reviewers had enough training. And while the department tried to reconcile differences in scores among reviewers who judged the same applications, that “norming” process may actually have watered down the reviews, the report said. As part of the process, the department gave the judges a chance to come together, review, and even revise their scores. One reviewer quoted in the report said reviewers often regressed to the mean and deferred to the most conservative scorer.
As the Aug. 2 deadline for second-round applications nears, the department has made some changes to the process. When it comes to scoring the applications, peer reviewers will no longer judge the evidence applicants present; that responsibility will fall to experts at the Institute of Education Sciences, the research arm of the department. The peer reviewers will focus instead on other scoring categories, such as how much need there is for a project and how much experience the applicant has.
In addition, Mr. Shelton said, the department is going to give peer reviewers better training based on lessons learned—and more of it.