When an ed-tech product doesn’t get much traction in the school district that bought it, someone inevitably asks: What went wrong?
Last year, the EdTech Genome Project set out to tackle that problem and help educators and district administrators make better-informed decisions about education technology.
The “genome” project—a collaborative effort involving more than 100 education research and advocacy organizations—is mapping the many factors that influence the outcome of an ed-tech product’s usage.
Ultimately, the group identified about 80 variables, and recently it winnowed that list to the 10 deemed most worthy of further study this year. Those implementation factors are:
- Adoption plans
- Competing priorities
- Foundational resources, such as technology, financial resources, and operational tech support
- Implementation plans
- Professional development/learning and support
- School or staff culture
- Support from school and district administration
- Teacher agency or autonomy
- Teachers’ beliefs about technology/self-efficacy and technological pedagogical-content knowledge
- Vision for teaching and learning with technology
Each variable is being evaluated further by a working group that will build consensus about how to define and measure those factors, said Bart Epstein, the president and CEO of the Jefferson Education Exchange, which is leading the project. JEX is a nonprofit affiliated with the University of Virginia Curry School of Education and Human Development.
The variables that can make or break an implementation echo comments Johns Hopkins University has heard in its longitudinal studies of school districts’ ed-tech initiatives, said Steven M. Ross, a senior research scientist and professor at the Center for Research and Reform in Education there.
Although Johns Hopkins has not studied the same 10 factors, many of them have been mentioned as important by teachers and administrators in the center’s ongoing work with such districts as 111,000-student Baltimore County in Maryland and 188,000-student Fairfax County in Virginia, he said.
“All of those factors are valuable to look at,” said Ross, “but, ultimately, the question is, ‘What raises achievement and what doesn’t?’”
What the Research Reflects
Having good implementation doesn’t necessarily mean math scores will go up, he said. “That may depend on the teacher.”
Epstein said the genome project identified the variables by reviewing academic research that showed they could be responsible for the success or failure of various interventions. Some studies found that it was impossible to get statistically significant data because a product or program wasn’t used enough, he said.
Among the working-group members trying to get more clarity by studying specific variables are representatives from school districts, research organizations, nonprofits, and companies across the country.
The collaborating organizations plan to publish an implementation framework for educators, companies, and other interested parties late this year.
“Education is overdue to do what every other industry has done, which is to develop shared instruments of measurement and language to describe how the tools of their trade are applied,” said Epstein.
The lack of a common language and means of monitoring progress has created a challenge for teachers and administrators alike, according to Epstein.
“One thing that keeps coming up over and over is how much frustration there is in education when we have to rely on anecdotes,” he said. “When you go into a school and ask, ‘How much agency and authority do teachers have about the decision to bring in technology?’ you’re almost sure to get anecdotes.”
The point of conducting this research is to “help schools understand how ready they are—or are not—for different products, programs, and policies,” said Epstein. “Districts across the U.S. are spending billions of dollars on well-intentioned efforts that fail because they don’t have the data to understand that their environment is not conducive to the success of what they’re buying.”
Did Specific Tools Work?
Ross commended the project’s intention to help districts conduct self-evaluations in areas they want to improve, and, he said, the variables may be a useful checklist with “good discussion points” for educators and administrators alike. But he said they should have “reasonable expectations” about outcomes since “the jury is out as to what degree these tools and resources are actually used to make changes.”
Meanwhile, plans are underway for the next phase of study on implementation.
Next year, JEX expects to develop a platform that will allow districts to document the technologies in their classrooms, how they were chosen, and what the implementation is like. “In the interim, we’re collecting research manually and paying cash stipends to large numbers of educators in return for their taking the time to provide this information,” Epstein said.
The teachers document their experiences and perspectives to help ed-tech decisionmakers and policymakers understand and explain why any given ed-tech tool can work in some environments but not in others.
“The goal,” Epstein said, “is to avoid educators’ buying things that won’t work.”