Only 10 of the 18 most widely used school improvement programs for middle and high schools have “moderate” or “limited” evidence to show they work, and none deserves a top rating, a review by a Washington think tank concludes.
The review, released last week at a national conference in Houston, is the third in a series of consumer-style reports produced by the Comprehensive School Reform Quality Center at the American Institutes for Research. The federally financed reviews evaluate the research base for popular, prepackaged schoolwide-improvement designs and assign ratings based on their effectiveness and other characteristics.
This time around, none of the models had a research track record robust enough to earn one of the nonprofit group’s two highest ratings—“very strong” or “strong.” Nonetheless, Steve Fleischman, the AIR vice president who oversees the project, characterized the results for secondary schools as encouraging.
“Part of this, I think, is that it’s really hard to do research at the high school or middle school level,” he said, “and some of the models are newer, so they haven’t had time to establish a research base.”
“But if we had done this a year and a half ago, there would have been even less evidence,” he added, describing 2005 as “a really good year for research.”
The four programs rated moderate for their effectiveness were: America’s Choice School Design, based in Washington; the School Development Program, the model developed by the Yale University psychologist James P. Comer; and Success for All—Middle Grades and Talent Development High School, both designed by researchers at Johns Hopkins University in Baltimore.
At the other end of the scale, no programs were found to lack any research evidence at all or to have a negative impact on student achievement. But the center’s analysts gave eight programs a rating of “zero,” meaning the studies found for those programs were not rigorous enough to meet the center’s standards.
The center bases its standards on the federal government’s definition of “scientifically based research,” which tends to favor studies that use comparison and control groups.
Higher Standards
Among the zero-rated programs were well-established improvement models, such as High Schools That Work, of Atlanta; Accelerated Schools PLUS, of Storrs, Conn.; the Coalition of Essential Schools, based in Oakland, Calif.; and Modern Red Schoolhouse, of Nashville, Tenn.
Four of 18 school improvement models the Comprehensive School Reform Quality Center evaluated were given a “moderate” rating for their effectiveness in raising student achievement:
• America’s Choice School Design, Washington
• School Development Program, New Haven, Conn.
• Success for All—Middle Grades, Baltimore
• Talent Development High Schools, Baltimore
The center gave “limited” effectiveness ratings to six other programs:
• Expeditionary Learning, Garrison, N.Y.
• First Things First, Toms River, N.J.
• Knowledge Is Power Program (KIPP), San Francisco
• Middle Start, New York City
• More Effective Schools, Kinderhook, N.Y.
• Project GRAD, Houston
Source: American Institutes for Research
“We knew we did not have the kind of research where you have control groups. We’ve never had the resources to do that, but we’d welcome it,” said Gene Bottoms, the senior vice president of the Southern Regional Education Board, which founded High Schools That Work 20 years ago.
Used in 1,100 schools nationwide, that model got a stronger rating from the think tank seven years ago, when it undertook a similar review of improvement programs. But the group’s research-quality standards for the current project are tougher: of the 1,500 effectiveness studies the center reviewed for the secondary school report, only 41 made the cut.
The reviewers, who concentrated on programs used in at least 40 sites and three or more states, also rated programs on other characteristics besides effectiveness. Analysts looked, for example, to see whether the programs are particularly effective with diverse student populations, whether they lead more families to become involved in schooling, and whether they provide needed services and support for schools.
The ratings for secondary school programs were not as high as they were for elementary schools, which were the focus of the center’s first report in 2005. (“Report Critiques Evidence on School Improvement Models,” Dec. 7, 2005.) That review, which has since been downloaded from the center’s Web site more than 50,000 times, gave “moderately strong” effectiveness ratings to two programs and “moderate” ratings to five others.
Nevertheless, Mr. Fleischman said the secondary school report and an updated version of the elementary school report due out later this fall may be the center’s last. Its three-year, $4 million grant to vet the research on comprehensive school reform models runs out next month, and the U.S. Department of Education’s office of elementary and secondary education has no plans to extend it.
Mr. Fleischman said the AIR would continue to make the center’s reports available on its Web site.