The Department of Education is circulating the draft guidelines it hopes to use for evaluating the studies that go into its What Works Clearinghouse.
When it's up and running next year, the clearinghouse is intended to provide an online "one-stop shop" where policymakers and educators can go for scientific evidence for what really works in education.
The draft standards circulating this month are aimed at helping reviewers decide which studies should be included in the new research syntheses and how much weight to give them. The guidelines use hierarchies of questions that reviewers have to answer as they pore over each study.
Those questions span a spectrum. At one end are the basic questions that policymakers want answered; at the other are the methodological questions of greater interest to researchers.
"One of our biggest and most exciting challenges was developing a system that could satisfy a diverse set of users," said Harris M. Cooper, the University of Missouri-Columbia professor of psychological sciences who drafted the standards with colleague Jeffrey Valentine, a research assistant professor at the university.
The proposed criteria were posted on Nov. 11 on the clearinghouse's Web site, www.w-w-c.org. The department is soliciting comments on them through Dec. 3. Clearinghouse developers also planned to hold a public forum here last Friday to discuss them.
'Gold Standard'
While it's too soon to know what educators and other researchers think of the new standards, a few experts who have seen them said they rely too heavily on particular research methodologies, such as experiments in which children are randomly assigned to either an intervention group or a comparison group.
"My feeling is that providing experimental and quasi-experimental designs as sole-source evidence is saying 100 years of research methods developed in anthropology, sociology, history, and education have no importance in defining effectiveness," said H. Jerome Freiberg, a professor of curriculum and instruction at the University of Houston.
But Mr. Cooper said that experimental methods had to play a central role because they are recognized as the "gold standard" for determining whether an intervention actually works.
"However, these standards also maintain a place for well-designed quasi-experiments, and they also clearly indicate that random assignment is only one part of an equation that makes a study trustworthy or not," Mr. Cooper added.
The clearinghouse plans to issue its final standards for evaluating studies by January.