Does Title I of the Elementary and Secondary Education Act raise test scores?
Fifty years after the law's passage, this seemingly simple question remains unanswered, as evaluation after evaluation has failed to identify definitive or long-lasting impacts of the federal funding stream aimed at improving the achievement of disadvantaged children.
And yet the program ticks along, to the tune of $14.4 billion in federal grants to school districts this fiscal year, and some say its very existence has improved the lot of disadvantaged children.
Title I is not the only funding source that is allocated on the basis of priorities other than research evidence, according to Iris C. Rotberg, research professor of education policy at the George Washington University, who directed multiple national studies of Title I.
“I don’t think studies are likely to lead to increased funding,” she said. “Nor do they determine how the funds are distributed. The level and distribution of funds are political decisions.”
The lack of definitive research evidence is in large part a result of those decisions, which have shaped a program that resists easy categorization as “effective” or “ineffective.”
Although Title I aims to target students from low-income families, more than 90 percent of school districts in the nation get at least some of the funds. And they use them for purposes as disparate as class-size reduction, extended learning time, professional development, and instructional salaries. So researchers who try to address the age-old policy question of whether the program “works” find themselves with the more fundamental problem of how to clearly define the object of their analysis.
Adding to the complexity, federal revenue accounts for only about 10 percent of U.S. school funding, making it difficult to tease out the effects of Title I from those of much larger state and local funding streams.
In addition, the funds are not assigned randomly to certain districts or states and withheld from others, so randomized controlled trials, the standard research method in many scientific fields, are not feasible. With so many students, and so many types of students, benefiting from the funds, it can be difficult to find an appropriate comparison group or to use statistics to account for key differences between those who do and do not receive services.
“The question of whether Title I makes a difference in test scores cannot be answered,” Ms. Rotberg said. “One reason is that Title I is not an educational program; it’s a funding stream. Title I programs vary enormously and, in addition, there are too many confounding variables to make a generic comparison of these programs.”
Influential Studies
That is not to say that research has had no influence on Title I. For instance, a 1969 study by the Washington Research Project and the NAACP Legal Defense and Educational Fund revealed that Title I funds were being used to replace existing state and local revenue and to make purchases tangential to classrooms, such as portable swimming pools. Subsequent reauthorizations aimed to curtail such practices.
One of the most influential government evaluations also occurred in the early years of the program, according to Christopher T. Cross, who served as assistant secretary for educational research and improvement at the U.S. Department of Education from 1989 to 1991 and was deputy assistant secretary in the old Department of Health, Education, and Welfare from 1969 to 1972.
Launched in 1974 and completed three years later, the National Institute of Education Compensatory Education Study consisted of 35 different research projects focused on fund allocation, compensatory services, student development, and administration. The director of that study was Paul T. Hill, who went on to found the Center on Reinventing Public Education at the University of Washington Bothell; Ms. Rotberg was the deputy director.
Mr. Hill suggested that the compensatory education research caught policymakers’ attention because members of Congress commissioned an evaluation that responded to their own questions.
“Our study was a turning point,” he said. “It took a program that had been extremely controversial because it didn’t consistently lead to higher reading scores and explained to Congress it had set up a program that wasn’t a machine to do just one thing. It was [intended] to change the priority of educating poor kids from being something secondary to the primary concern of local districts, and Title I really had done that. ...
“Almost everyone in Congress could find something in the program that they liked,” he said.
Weighing Outcomes
Federal studies both before and after the NIE evaluation did address the thorny question of how achievement outcomes compare for Title I and non-Title I students.
A 1996 meta-analysis in the peer-refereed journal Educational Evaluation and Policy Analysis found that, between 1966 and 1993, there had been no fewer than 17 different federal evaluations that examined the connection between Title I and student achievement. Across the studies included in the analysis, Title I was shown to have provided a slight advantage, equivalent to moving the average student from the 50th to the 54th percentile on a standardized exam. The researchers also found evidence that results improved as the program matured.
“Unfortunately, there has been no major national evaluation of Title I since the late 1990s, and the former Title I Evaluation and Reporting System, which provided national compilations of Title I students’ achievement outcomes, was disbanded in the 1990s,” meta-analysis lead author and University of Wisconsin-Madison education professor Geoffrey D. Borman wrote in an email to Education Week. “Therefore, there has been little systematic national data on Title I and student achievement available since our meta-analysis from 1996.”
Now an associate professor of education at Howard University in Washington, Zollie Stevenson Jr. worked in the U.S. Department of Education for a decade, retiring in 2010 as director of student achievement and school accountability programs.
“Research played a limited role, in particular for Title I,” he said of his time in the department.
Overall, the lack of definitive research on Title I may have had its own kind of influence, according to Chester E. Finn Jr., the former president of the right-leaning Fordham Institute, who served as assistant secretary for research and improvement and counselor to the secretary at the U.S. Department of Education between 1985 and 1988.
“The absence of proven impact may also be part of the explanation for the ever-tighter regulatory harness and additional strings attached to Title I,” said Mr. Finn. “Because if it had been shown to be ‘effective,’ folks would have said, ‘It’s working as it is, don’t mess with it.’ But because of the lack of demonstrated efficacy, there’s been endless tinkering, endless efforts to add just a few more rules and accountability provisions, in the hope that maybe someday it will actually accomplish its stated purposes if only we keep fiddling with it.”