The What Works Clearinghouse, the Department of Education's newly launched enterprise to give Consumer Reports-style ratings of research on educational programs, is getting mixed reviews so far from one key group: the researchers whose work it features.
While a few among that small group of scholars are happy to see their work reaching a wider audience, some also express concerns about the way it is being presented. They say reviewers are misinterpreting and pigeonholing their studies and sometimes inadvertently casting aspersions on potentially useful research.
“When I looked at it, I was just kind of appalled,” said G. Michael Pressley, the director of doctoral programs in teacher education at Michigan State University in East Lansing. A study he co-wrote on a practice known as reciprocal teaching is among the handful of studies that got a thumbs-up from clearinghouse reviewers when the site was unveiled last month. (“‘What Works’ Research Site Unveiled,” July 14, 2004.)
For their part, clearinghouse researchers say they are already working to address some of the scholars’ concerns. They are drafting text for the Web site, for instance, to explain better why some studies are listed as failing to pass muster, and are working as quickly as possible to add more studies to their online archives.
For now, though, the reviewers plan to continue using the strict methodological criterion that critics say weeds out useful research. Emulating standards set by medical research, the clearinghouse puts a premium on randomized field trials, in which subjects are randomly assigned to either control or experimental groups. As a result, of the 18,000 studies reviewed, only 12 so far have fully met clearinghouse standards.
“Random assignment is a strong design,” said Rebecca S. Herman, the project director. “It’s always been a strong design, and I think it continues to be.”
‘Does Not Pass Screen’
Some of the harshest criticism has come from Mr. Pressley, who in letters to the clearinghouse and to Education Week has charged that the reviewers mistakenly categorized his study on reciprocal teaching as a trial of peer tutoring. He says peer tutoring is just one part of that approach.
“There is no way to draw a conclusion about peer tutoring or cooperative learning per se from this study,” he writes.
In a written reply to Mr. Pressley, Ms. Herman and Robert F. Boruch, the project’s principal investigator, said they included the study because they wanted to offer educators as much information as possible on peer-assisted learning.
But Mr. Pressley, like other researchers, also raised questions about the 173 peer-tutoring studies that did not make the grade. They are listed on the Web site under the heading “does not pass screen,” which means that they were either deemed irrelevant or did not meet the clearinghouse’s methodological standards. The problem, the researchers said, is that the site does not explain why those studies were rejected.
“Many studies that ‘do not pass the screen’ may be viewed as not valuable when in fact they may be extremely helpful in understanding or adapting an intervention to a new context,” said Anthony J. Gabriele, a researcher at the University of Northern Iowa in Cedar Falls whose own study landed in that category. The label did not surprise him, he said, because the study was never designed to answer whether peer tutoring works. But he became concerned when a friend sent an e-mail expressing condolences on the study’s categorization.
“To the extent that this classification discourages stakeholders from looking at these studies, I think we may be providing too narrow a focus given our relative understanding regarding what works,” Mr. Gabriele said.
James J. Baker, the developer of a middle school mathematics program known as Expert Mathematician, is also dismayed at the way his research on the program is reported. His study—the only one that fully met the criteria for his topic—used a random-assignment strategy to test whether students could learn as much with his student-driven, computer-based math program as they could from a traditional, teacher-directed curriculum known as Transition Mathematics. The problem, he argues, is that the Web site says his program had no effect without explaining that students made learning gains in both groups.
Ms. Herman said the clearinghouse could not provide that context because it had no research to show that Transition Mathematics works better than other curricula. “Without that, we couldn’t report out that it was effective,” she said. Ms. Herman said other analyses in the study—showing, for instance, that students’ attitudes toward math improved more with Expert Mathematician—did not meet clearinghouse criteria.
In all, she said, the clearinghouse has received about 100 comments on the new site, which took two years to develop. Some suggested that the screening criteria had not gone far enough in narrowing the field to the “best of the best” research. Other feedback, Ms. Herman said, came from the practitioners the clearinghouse was designed to serve.
“They said that we were providing the kind of critical look that helps them figure out whether a study is useful or not,” she added.