Imagine an afternoon when a teacher can sit down at a computer desktop and quickly sort through reams of data she’ll use to plan lessons for the next day.
She’ll look over attendance records and test scores ranging from the students’ first years in school right up to that very day. She’ll see the courses her students have taken and every grade they’ve received. She’ll compare every student’s achievement against state standards to decide which students need review and which ones are ready to move on.
And all of the information will be available by clicking a mouse and keying in a few words here and there. After her planning period, the teacher will have prepared lessons matching the needs of the students she’ll see in class the next day.
That technological capability can be found only in the rare classroom today, but some experts say that such a data-rich approach to instruction will eventually be commonplace. They note that the technology exists to collect, store, and distribute educational data in ways that can help state policymakers, district leaders, principals, and teachers all do their jobs more effectively.
So far, the infrastructure needed to support such applications doesn’t exist in many places, leaving the data’s potential to inform educational decisionmaking largely unfulfilled. That situation may be starting to change, though.
“We’re poised to do much better,” says Dale Mann, a former professor at Teachers College, Columbia University, and the president of Interactive Inc., a Huntington, N.Y.-based consulting firm advising states on how to build educational data networks. “The data-based-decisionmaking rhetoric has started to get real.”
At the local level, the No Child Left Behind Act is inspiring a small fraction of the country’s nearly 15,000 school districts to seek technological solutions to meet the law’s ambitious goals for student learning. Some are tracking student test scores year to year—or even month to month—to plan interventions for children who are lagging in efforts to meet state standards in reading and mathematics.
And as states start to collect more data to meet the requirements of the federal law, they have increasingly become significant players in efforts to use data to drive instructional decisions.
In the four years since President Bush signed that legislation, states have expanded their data capacity, mostly to comply with the law’s accountability requirements. Now they are starting to put features in place that could produce data that teachers can use to change instruction to improve student achievement.
“I think NCLB has really accelerated that movement,” says Mitchell D. Chester, the associate superintendent for policy and accountability in the Ohio Department of Education, “because year-to-year testing, and having test records from 3rd to 8th grade and into high school, creates possibilities that weren’t present before.”
But before the potential of data-driven decisionmaking can be realized, some experts say, state and local officials will need to build the data infrastructure, develop the tools to analyze the numbers, and train teachers for a new way of teaching and preparing to teach.
Says Dane Linn, the education policy director for the National Governors Association: “We will have missed the boat if we don’t answer the question of how these data systems can inform instruction.”
State Starting Points
Even before the No Child Left Behind law, states had begun creating “student identifiers,” one of the building blocks that experts see as most important to effective longitudinal-data systems.
The student identifier is a code that stays with each student, the same way that Social Security numbers serve to record workers’ income annually and over time. With it, policymakers can track student-level data over the course of students’ K-12 careers—and into higher education if the databases from each system are linked.
Systems with identifiers can show at what grade levels individual students’ achievement dipped, for example, and even highlight the specific skills a student has failed to master. Such systems can track whether a student completed high school in the school he or she attended as a 9th grader, or in another one somewhere else in the state. Teachers and administrators can also use the data to evaluate the success of specific interventions attempted to improve student performance.
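For readers who want to picture how such a system works, here is a minimal sketch in Python of the idea behind a student identifier; the identifiers, field names, and scores are invented for illustration and are not drawn from any actual state system.

```python
from collections import defaultdict

# Hypothetical test records: each one carries the same student identifier
# year after year, the way a Social Security number ties together a
# worker's annual earnings.
test_records = [
    {"student_id": "GA-0001", "grade": 3, "year": 2003, "math_score": 412},
    {"student_id": "GA-0001", "grade": 4, "year": 2004, "math_score": 405},
    {"student_id": "GA-0001", "grade": 5, "year": 2005, "math_score": 398},
]

# Group scores by student to see each child's trajectory across grades.
history = defaultdict(list)
for rec in test_records:
    history[rec["student_id"]].append((rec["grade"], rec["math_score"]))

# Flag the grade levels at which a student's score slipped from the prior year.
for student_id, scores in history.items():
    scores.sort()  # order by grade level
    for (prev_grade, prev_score), (grade, score) in zip(scores, scores[1:]):
        if score < prev_score:
            print(f"{student_id}: math score dipped in grade {grade} "
                  f"({prev_score} -> {score})")
```

Because every record carries the same code, the script can line up one child’s scores across grades and flag where achievement slipped, which is the kind of question a stable identifier makes answerable.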
In 1999, eight states had an identification number for every student. By 2003, 21 states had student identifiers, according to a survey by the National Center for Educational Accountability, an Austin, Texas-based nonprofit organization that advocates the increased use of data to measure schools’ performance.
Now, 43 states and the District of Columbia have student identifiers, according to a survey of states conducted by the Editorial Projects in Education Research Center for Technology Counts 2006.
But of the states with identifiers, three don’t match them with performance on statewide assessments, according to the survey. Further, six states plus the District of Columbia don’t use their identifiers to track whether students eventually complete high school, and 27 states and the District of Columbia don’t link their identifiers to high school transcripts, the survey found.
Without links to other data, the usefulness of student identifiers is diminished, experts say. For example, at the start of the school year, teachers won’t have a snapshot of their students’ performance on previous years’ tests that would show them which students are struggling and what material they need to learn. Accurate counts of students who graduate on time will also be elusive.
Data on Teachers Uneven
States are also making progress in setting up similar codes that identify individual teachers. In 2005, 13 states had such identification numbers, according to a survey by the Data Quality Campaign, a coalition of policy groups—including the NGA and the National Center for Educational Accountability—that is pressing for improved and expanded state data systems.
The EPE Research Center’s survey found that 43 states have such an identification system.
The survey also found that, as with student identifiers, the teacher identifiers might not be linked to other data in ways that could make them most useful. For example, just nine states had the ability to match teacher identifiers with student identifiers.
Without such matches, policymakers won’t be able to determine which teachers increased student performance over time. That makes it harder, researchers say, to identify good teachers and to figure out the educational and other characteristics that lead to their success in the classroom.
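A rough sketch, again with invented identifiers and numbers, shows why the student-teacher match matters: without a roster table recording which students sat in which classroom, year-over-year gains cannot be attributed to any teacher.

```python
# Hypothetical roster records linking a teacher identifier to the
# student identifiers in that teacher's class.
rosters = [
    {"teacher_id": "T-100", "student_id": "S-1"},
    {"teacher_id": "T-100", "student_id": "S-2"},
    {"teacher_id": "T-200", "student_id": "S-3"},
]

# Hypothetical year-over-year gains on the state test, keyed by student identifier.
gains = {"S-1": 12, "S-2": 5, "S-3": -3}

# Average the gains of each teacher's students; this join is only possible
# when teacher and student identifiers can be matched.
by_teacher = {}
for row in rosters:
    by_teacher.setdefault(row["teacher_id"], []).append(gains[row["student_id"]])

for teacher_id, g in by_teacher.items():
    print(f"{teacher_id}: average gain {sum(g) / len(g):+.1f} points")
```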
When the University of Texas hired the National Center for Educational Accountability to evaluate its teacher education programs, for example, researchers struggled because the state data system didn’t indicate which students were in each teacher’s class. The NCEA’s research team had to collect that information from districts.
“We figured out how to do it, but it’s a pain in the rear,” says Chrys Dougherty, the NCEA’s research director.
A few states—most prominently, Tennessee and Florida—use their data to help struggling teachers and to reward successful ones. In Tennessee, for example, every teacher receives an annual report on the achievement gains produced in his or her classroom. The data are not publicly available, but the information is used in principals’ evaluations of their teachers, says Mary S. Reel, the senior executive director of assessment, evaluation, and research for the Tennessee Department of Education.
In Florida, a new statewide pay-for-performance system will use a teacher identifier linked to classroom achievement to give bonuses to teachers whose students show the biggest achievement gains. The state couldn’t do that without data matching students’ test scores to the teachers who taught them.
The No Child Left Behind law has spurred efforts to improve the collection of student-performance data, but state leaders recognize that the federal law and their own initiatives will require them to go further, both in what data they gather and in their ability to analyze the information.
In 2005, all 50 governors signed a compact to standardize the way states calculate graduation rates. Instead of relying on measures that researchers say are common but misleading—such as publishing the percentage of seniors who earn a diploma—the governors agreed to publish the percentage of students who earned a diploma based on the number of students who entered 9th grade four years earlier.
Calculating such a figure seems simple on the surface, but it won’t be possible if states don’t improve their data systems. Accuracy requires knowing which 9th graders stay in their high schools, which ones enroll elsewhere and finish there, and which ones truly drop out of the system.
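A back-of-the-envelope example, using invented enrollment and diploma counts, illustrates the arithmetic the governors agreed to and why it depends on tracking transfers:

```python
# Hypothetical counts for one high school cohort.
entering_9th_graders = 1000  # fall 9th grade enrollment four years before graduation
transferred_in = 60          # students who joined the cohort later
transferred_out = 80         # verified transfers to other schools or states
diplomas_awarded = 810       # on-time graduates from this cohort

# Cohort graduation rate: diplomas divided by the adjusted 9th grade class.
cohort = entering_9th_graders + transferred_in - transferred_out
graduation_rate = diplomas_awarded / cohort
print(f"Cohort graduation rate: {graduation_rate:.1%}")  # 82.7%
```

The adjustment for students who transfer in or out is the hard part: without records that follow each student, a school cannot tell a transfer from a dropout.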
Reliable data collection is important for other reasons, says Linn of the NGA. With properly designed data systems, he and other experts say, states could identify the coursetaking patterns that lead to success on state graduation tests, for example, or the professional development teachers need to prepare students for state tests. Developing potential strategies for closing the achievement gaps between racial and ethnic groups is another area they cite.
Still, other experts warn against overstating the value of statewide data, particularly test scores.
James W. Pellegrino, a professor of psychology and education at the University of Illinois at Chicago, says results on statewide tests aren’t as useful as many policymakers think. Because large-scale assessments are designed to measure achievement of large groups of students, he says, they don’t produce results that pinpoint subject matter each student needs to learn.
“While it’s useful to have that kind of longitudinal data [for policy purposes],” he says, “one should not impute conclusions about the true depth of a child’s intellectual development in terms of math, science, or whatever.”
Tools and Training
Whatever the data’s limitations, most states are still far away from putting their electronic information into a form that teachers can easily use. Even states with relatively advanced data systems, such as Florida, are just now unveiling data tools designed for teachers and principals.
Meanwhile, states are starting to play a role in helping educators turn digital data into action. Twenty-six states and the District of Columbia are providing professional development to educators on how to use data for instructional decisions, the EPE Research Center’s survey found.
Some states also are creating formative assessments, tests that educators can use to track students’ progress during the school year. Twenty-two states and the District of Columbia reported to the EPE Research Center that they provide formative assessments linked to state standards, and 31 states plus the District of Columbia said they provide test items so educators can develop their own tests.
On their own, districts have taken the initiative to create computerized data tools, typically in response to NCLB accountability requirements or similar state measures, says Irene Spero, the vice president of the Consortium on School Networking, or CoSN, a Washington-based membership group of K-12 technology directors. Although the numbers are small, interest is growing, she says.
Many local officials manage to figure out on their own how to use data to drive their decisions, but it’s not easy, say experts who advise them.
“It’s something people are just trying to get their hands around,” says Kathryn Parker Boudett, an adjunct lecturer at Harvard University’s graduate school of education and a co-author of Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning, a 2005 book aimed at helping schools make decisions based on data.
In Data Wise, Boudett and co-authors Elizabeth A. City and Richard J. Murnane say their experience working with principals in Boston taught them that professional development and careful planning are required before data can be used successfully to inform instructional decisions.
“People in schools are so busy,” Boudett notes. “They are not going to suddenly take up something new unless they have to.”
Giving Students a Road Map
In the Gainesville, Ga., school system, educators have written a series of formative assessments in subjects across the curriculum to identify how well students are mastering the state standards. Because the test questions come from a bank of items that once appeared on state tests, teachers can see what material students know before they begin a series of lessons. They then can concentrate their instruction on the standards the students don’t know.
Students take the tests at the start of every quarter in every subject, and the results are available the same day. The data are broken down by the racial, ethnic, economic, and other subgroups used in NCLB accountability reports.
The tests “are giving a road map for the next 45 days,” says David Shumake, the assistant superintendent for instruction for the 5,500-student district, located about 50 miles northeast of Atlanta. Students take tests on the same content at the end of the quarter so teachers know whether they will need to review material to help ensure students succeed on state exams later in the year.
So far, though, data-rich districts such as Gainesville are the exception. “The real power of data in using it for targeted interventions isn’t happening on as wide of a scale as we would like to see,” says Spero of CoSN.
Progress toward computerized data systems with the potential to inform educators’ decisions has been dramatic since President Bush signed the No Child Left Behind Act into law in 2002. State and local officials are starting to see the value in analyzing data to figure out how to meet the ambitious achievement goals of the federal law and state accountability measures.
But states still have a lot of work to do, data experts say.
Statewide identifiers for students and teachers have recently become the norm. But to have what experts consider complete data systems, states need to add information on student transcripts and scores from college-entrance exams and Advanced Placement tests. They also have to create links to higher education and audit their data to ensure accuracy. All of those are the “essential elements” of longitudinal-data systems, according to the Data Quality Campaign.
Even though some states already collect much of the needed information, to make it useful on a broad scale they’ll need to build data warehouses to store and analyze the data—something that can cost $10 million or more per state. “The real question,” says Linn of the governors’ association, “is do we have the political will to make sure these data systems are realized?”
On the local level, superintendents, principals, and teachers will need to do their own work to turn state data into instructional guideposts, by building analysis tools that make the data understandable and by ensuring that teachers have the skills to use those tools.
Without such local efforts, says Boudett of the Harvard graduate school, “you’re not going to realize the Holy Grail of knowing what’s going on in your school in order to make changes.”