The U.S. Department of Education's recent report to Congress on the effectiveness of reading and mathematics software products sounded a wake-up call to anyone pondering technology in education. ("Major Study on Software Stirs Debate," April 11, 2007.) Its authors conclude that math and reading software produce no better test results than conventional teaching methods.
How can a technology that is transforming the way we acquire information throughout the economy, revolutionizing businesses from games to banking, fail to benefit education? How can technology that is revolutionizing training in the U.S. Department of Defense fizzle in elementary schools?
The Education Department report is not evidence that technology cannot be a powerful learning tool. It proves only that results on standardized tests are not significantly improved by systems found in a sample set of schools. Moreover, the study focused on whether the technology was better than traditional teaching methods, failing to consider new technology as a productivity tool. Yet its authors admit that the study produced results no worse than for traditional teaching, and that 86 percent to 92 percent of the teachers in the program found the software sufficiently useful to keep using it. Clearly, these teachers perceive a value not registering on the tests.
In its denigration of the technology already in use, the report underscores the nation鈥檚 woeful underinvestment in new technology for learning. Despite public investment to bring hardware to schools, the United States has expected private investors to undertake the expensive research, development, and testing needed to use that hardware effectively.
Ten years ago, an inside joke for economists was that information technology was showing up everywhere in the economy except in the productivity statistics. No one is laughing now, because productivity is heavily dependent on IT advances. Results indicating such gains didn鈥檛 appear in the original statistics, since they combined data on failed efforts to use information technology with data on innovations that went on to revolutionize the economy. It took years to figure out how best to use new technology and software to reinvent business practices.
Given the difficulty of marketing innovations to school systems already overwhelmed by increasing demands and fixed resources, education hasn't been up to such an effort. It's painful, then, to see the federal government investing $10 million to review educational software, when it has spent so little on the research and testing needed to design tools that make effective use of the new technology.
The results of the study were, according to this report, "based on schools and teachers who were not using the products in the previous school year": in other words, teachers using the systems for the first time. Fifty percent of the teachers later indicated that, once they began to use the software, they recognized the need for more support and training. They weren't fully fluent in using the material. And in many of the tests, an average of one computer for every three students complicated instructors' tasks, limiting their sessions to only 10 percent to 15 percent of class time.
Vital features of technology-based instruction could not be tested in most of the older systems, which do not employ state-of-the-art software. For example, well-designed software allows each student to spend more time on task and to proceed at his or her own pace. It can also provide challenges that students find so exciting they are willing to spend hours seeking solutions. It's difficult to exploit such features in standard classrooms. And the best software also integrates continuous testing, showing students whether they are advancing toward a goal.
More fundamentally, however, most of the software under study was not designed to produce a result on a specific test. It's not surprising that an instructor focusing narrowly on specific tests can produce good results. But instructional software may be best at teaching skills poorly measured by standardized exams. In a separate study, for example, the Carnegie Cognitive Tutor Algebra software showed a small impact on SAT scores, but more than tripled performance on "problem-solving" tests.
The Federation of American Scientists is exploring the uses of advanced video-gaming technologies to create vivid educational experiences for learners from 1st graders to fire chiefs. We're very encouraged by the results. But we're also painfully aware that an enormous gap still separates the potential of educational technology from the products now on the market. The entrepreneurs who achieved revolutionary productivity gains in other parts of the economy haven't made much of a dent in education.
69传媒 are a tough market for entrepreneurs. Outside of the military, few teaching institutions are familiar with managing technology-driven innovation. Developing effective and engaging educational software takes enormous amounts of time and money and involves huge risks. Federal research is essential to design effective instructional software and to test innovations to see what works and what doesn't.
There's no doubt that American students expect to learn from technology; they revel in it outside of school. Surely the nation can use for its schools the serious technology already effectively training its doctors, its pilots, and its emergency responders.