When Education Week began tracking educational technology across the states for Technology Counts 1997, the newspaper’s editors likened the process to mapping a quickly shifting “landscape replete with unexplored and uncharted territory.”
At the time, a paucity of state-by-state data on school technology made providing reliable information and indicators a formidable undertaking.
Nonetheless, that inaugural Technology Counts settled on indicators in a number of key areas that have remained at the core of the report in the succeeding years, including the availability of computers for students, access to the Internet for schools, and training requirements for teachers.
Technology Counts has chronicled a 10-year period of momentous growth and change in educational technology. Over time, to stay in step with those changes, the reports have added data on new areas to supplement the original indicators.
In Technology Counts’ second year, the report introduced a framework with three main categories of data: access, capacity, and use. Those headings group indicators relating to students’ and teachers’ access to technology; educators’ training and capacity to instruct students using technology; and, perhaps most important, students’ and teachers’ actual use of technology in schools.
Indicators Evolve
Subsequent editions of the report have used the same three-category framework. Yet educational technology that is state of the art today can quickly verge on obsolescence. So the specific data examined for Technology Counts have changed as new technologies emerged and others faded out. It would no longer make much sense to include data on the percent of schools with videodisc players, for instance, as the first report did.
In 1997, the report contained only one indicator related to the Internet—the percent of schools with Internet access in each state. But from 1999 to 2004, Technology Counts contained 10 or more columns of data each year on Internet connections in schools and classrooms.
As initiatives to wire schools with dial-up Internet service were largely supplanted by efforts to provide high-speed connections, the indicators were adjusted accordingly. In 1999, the report began tracking the percent of schools wired to the Internet through various types of high-speed connections, such as T1 lines or cable modems. Now that Internet access in schools has become nearly universal, fewer columns of data are being devoted to the topic.
The data presented in Technology Counts have always mixed statistical measures that rely on numerical analyses with data on state technology policies reported in a simple yes-or-no format. Over time, the report has shifted toward including more state policy data.
Policy Thrust
Since 2004, Technology Counts has contained more yes-or-no policy indicators than purely numerical measures. Before that, the report offered far more data on such topics as the percent of schools with a full-time technology coordinator or the percent of teachers participating in professional development on using computers than on such policy issues as whether the state had established technology standards for students, teachers, and administrators.
The report’s policy indicators now touch on subjects that were virtually nonexistent a decade ago. For example, they track whether states have established virtual schools and other forms of online education, options that were in their infancy when the report began.
Each year, Technology Counts has had a special theme, such as the “digital divide” in 2001, online education in 2002, and educational technology funding in 2005. In addition to articles focused on those topics, the Editorial Projects in Education Research Center has collected and reported data related to the themes.
Throughout its history, Technology Counts has aimed to provide information on and for the educational technology “clients” identified in the first report: students, teachers, administrators, the public, and policymakers.
As those players work to make use of myriad new and emerging digital tools—from individual student identifiers to iPods—more uncharted territory remains.