A number of commercial reading programs have satisfied the federal reading law's requirement that they embody a strong research base, yet there appears to be little outside evidence that any of them produces a conclusive and consistent effect on overall reading achievement.
The publishers of each of the top-selling reading programs make convincing cases for their effectiveness, offering research summaries, case studies, and commissioned empirical investigations to support their claims.
Scott Foresman Reading, published by the New York City-based Pearson Education, comes with a bound, 1,000-page synopsis of journal articles and scientific studies on which it is based. The effectiveness of McGraw-Hill’s Open Court is outlined in a commissioned study of California schools that used the program and saw some improvements in test results.
And the Boston-based Houghton Mifflin, which has cornered the profitable California market and proved popular in districts across the country, has a link on its Web site explaining the program’s alignment with the findings of the National Reading Panel and the work of prominent scholars who have influenced policymakers.
Despite such impressive documentation, it appears that none of the top commercial series—including Open Court, Houghton Mifflin Reading, Harcourt, Scott Foresman Reading, and Macmillan/McGraw-Hill of New York City, all of which have been accepted for use under the stringent federal Reading First rules—meets what is called the gold standard for research.
They don’t have randomized studies pitting their products against other methods or materials; the studies they have commissioned have not been published in scholarly journals; and the companies have not documented improvements in student achievement across the range of schools and students. The programs have thrived, however, on their reputations among educators as having met the specified—and perceived—research standards in the Reading First legislation, which is part of the No Child Left Behind Act.
Those perceptions were bolstered by federal officials after some of the materials were held up as examples at reading academies sponsored by the U.S. Department of Education to give state officials an overview of the Reading First requirements.
Researchers and publishers criticized the officials’ action as a subtle endorsement of specific curricula, which is prohibited by the No Child Left Behind Act and by previous federal legislation.
“Assertions made by government officials in advising Reading First applicants, and claims made by publishers in advertising their products, that certain comprehensive programs are ‘research based’ (with the implication that others and teacher-constructed programs are not) are not supported by anything in the National Reading Panel report,” panel member Joanne Yatvin wrote in a Commentary piece for Education Week last year.
Confusion Over Research
Other programs, meanwhile, such as Success for All, which has a number of studies supporting its effectiveness, have been shunted aside. That whole-school-reform program—developed at Johns Hopkins University—has seen a general decrease in the number of schools using it. Its founder, Robert Slavin, complains that the success of a product depends more on marketing than on science.
“There is bright potential in really using evidence and research to make a difference in reading instruction,” said Mr. Slavin, a professor of education at Johns Hopkins. “But there’s a huge difference between marketing research and genuine research.”
Exacerbating the issue, some experts say, is the widespread confusion over what constitutes credible research and how to interpret it for practical purposes.
“No Child Left Behind made them aware of the research, but whether they are becoming critical consumers of the research, I wouldn’t go that far,” said Susan B. Neuman, a researcher at the University of Michigan in Ann Arbor who helped roll out the federal law as the U.S. Department of Education’s assistant secretary for elementary and secondary education early in the Bush administration.
Federal officials themselves may have added to the confusion.
G. Reid Lyon, the chief of the reading-research branch of the National Institute of Child Health and Human Development, for example, has suggested that commercial programs should have published studies demonstrating their effectiveness with the kinds of students for whom districts intend to use them.
When asked about New York City’s choice of reading textbook last year, he said it lacked published experimental or quasi-experimental studies on its effectiveness in a variety of classrooms. Mr. Lyon hinted in a New York Times article that the text would not pass the rigorous review under Reading First, potentially threatening the city’s $111 million share. In the hope of appeasing federal reviewers for the grant program, city officials eventually agreed on another reading program for schools applying for Reading First money—one that was accepted by state reviewers even though it, too, lacked that very kind of scientific proof.
But Ms. Neuman said at the time that she did not believe that Reading First required that higher standard of evidence.
Publishers’ Proof
Publishers are working hard to make sure their products accurately reflect research principles, and to gather convincing evidence that they work, said Marci Baughmann, the director of academic research for the Pearson Education school group.
Like many other publishers, Pearson, the parent company of Scott Foresman, has hired a small army of researchers to consult throughout every stage of development and production of reading texts. The publisher has been gathering data from clients to map changes in student achievement, and has commissioned independent studies or reviews from prominent researchers.
Studies on the Scott Foresman program, for instance, have been submitted to the What Works Clearinghouse of the federal Education Department’s Institute of Education Sciences, Ms. Baughmann said, in the hope of getting a positive peer review to back up the product’s claims.
Many publishers have also asked the Florida Center for Reading Research to evaluate the research justifications behind their products. The center is directed by Joseph Torgesen of Florida State University, one of a select group of researchers called on to help launch Reading First, and has produced eight reports on core reading programs and more than three dozen evaluations of interventions for struggling readers.
Mr. Torgesen, who has won research funding from the NICHD and runs one of three technical-assistance centers for Reading First as part of a $37 million Education Department grant, has conducted numerous efficacy studies on reading interventions. The center’s reports summarize the principles behind the programs, evaluate the content, and identify the elements of the programs that do and do not align with research conclusions. The reports do not make claims about programs’ effectiveness.
Beyond those appraisals, however, publishers are unlikely to see their studies in peer-reviewed journals—the stamp of approval to many in the field. The publishing process is often lengthy and could distract researchers from their other projects, according to Ms. Baughmann.
The new focus on research evidence “places a certain amount of responsibility on the publishers to hire good academic researchers willing to work to the highest standards,” she said. “But we don’t anticipate being able to get [their work] published in a journal.”