At Pittsburgh’s Avonworth School District, educators are experimenting with a new way to test the digital tools they might buy for their classrooms.
In the past, the approach to such an ed-tech pilot project might have involved an administrator or teacher hearing buzz about an app or piece of software, trying it out in a class for a while, then recommending it based on whether students or teachers said they liked it. But in Avonworth this year, that process is more formal, with upfront planning, a relationship with the product vendor, and conclusions based on hard data.
The old way “was more of an impulse buy,” said Scott Miller, the principal of the Avonworth Primary Center, a K-2 school. “That’s not really effective. We want to make an educated, informed decision to see if a product is a fit for us.”
School districts routinely do some kind of testing to sample ed-tech products for their students and often invest in much of that technology. In 2014, pre-K-12 schools spent billions of dollars on such technology, according to the Software & Information Industry Association.
But the evaluations often look very different from district to district, or even within the same district. They can be short-lived or stretch over several academic years, and the trials can be amorphous exercises with no defined way to determine which products are best.
“I see a lot of misunderstandings during this process,” said Katrina Stevens, the deputy director of the office of educational technology at the U.S. Department of Education. “It’s ripe for improvement.”
As districts are inundated with ed-tech products that aim to solve their “pain points” and claim to provide everything from personalized instruction to gamified content, finding ways to help districts run more effective pilot projects—and ultimately make better spending decisions—has become a high priority.
More structured pilot projects are now being encouraged through a number of initiatives. For example, the Learning Assembly project, funded by the Bill & Melinda Gates Foundation, brings together seven organizations working to improve school and district ed-tech projects. (The Gates Foundation also provides support for Education Week’s coverage of college- and career-ready standards and personalized learning.)
One of those organizations, the Washington-based nonprofit Digital Promise, which promotes the use of ed-tech in schools, is working with the 1,650-student Avonworth district, and Miller said that’s made a big difference.
Under the project, Avonworth was paired with researchers from Pittsburgh’s Carnegie Mellon University who helped the district set up two elementary-grade pilots this academic year. The personalized-learning program eSpark is being used in 1st grade classes, while the digital toy Puzzlets is being sampled in grades K-2. Students used the tools from October through April.
District officials worked with researchers upfront to determine if the products were aligned with district needs, Miller said. And the district collaborated closely with both vendors to be sure teachers were using the products as intended. Teachers also provided feedback about how eSpark and Puzzlets worked or didn’t, Miller said.
For eSpark, the district will primarily use student-growth and -achievement data to determine effectiveness. For Puzzlets, the district is strictly looking at engagement and student interest, Miller said. Out of this process, the district hopes to craft a system or checklist for pilot projects that can be replicated when its grant through the Gates Foundation runs out, Miller said.
“This is going to allow us to make an educated and informed decision on whether these products are a fit for us,” he said. “If they are, great, but do they need any tweaks? If not, we’ll walk away, no harm, no foul.”
Financial concerns play a major role in the growing interest in creating more formal ed-tech pilot projects in schools.
Ed-tech products “can be an enormous investment for a district,” said Julia Freeland Fisher, the director of education research for the Clayton Christensen Institute, which studies blended learning. “They want to make sure they’re spending their scarce dollars wisely.”
The Education Department’s Stevens said her office is currently trying to improve rapid-cycle evaluations of ed-tech products by working to create an online pilot “wizard,” akin to a TurboTax for school product-testing projects.
The digital toolkit would guide districts on the front end: how to do a needs assessment, how to handle the technicalities of rolling out a pilot, what questions to ask the product developer, and how to collect and analyze data to determine whether a product should be used more widely.
Currently, the department is creating a prototype of the pilot tool and will test it out in districts in the fall, she said.
“We want to walk a school or district leader through setting up a pilot and evaluating the tools being used in their system,” Stevens said.
Other efforts to streamline the pilot-testing process are more regional. LEAP Innovations, a Chicago-based nonprofit that is also part of the Learning Assembly project, is working with schools to bridge the “gap between innovation and education,” said CEO Phyllis Lockett.
LEAP partners with Chicago-area K-8 schools to match them with ed-tech companies seeking to pilot their products. “Many of our schools get calls from vendors constantly, and they don’t know where to start,” she said.
LEAP has a panel of experts, including learning scientists, educators, and ed-tech investment specialists, that evaluates and vets ed-tech products. Those that are approved are matched with individual Chicago-area schools for one-year pilots. Before a project launches, the participating educators receive a semester of professional development to hone their role in the process, Lockett said.
LEAP then works with researchers to crunch the data. “We can tell schools if the solution moved the dial on achievement,” Lockett said.
Digital Promise is working on pilots with several other districts in addition to Avonworth. It also plans to use the information it has gleaned to create a product-testing template, which can then be tailored to each district’s unique characteristics, said Aubrey Francisco, the organization’s research director. Digital Promise is also hoping to share the results of district pilot projects to provide information to other educators.
“A district might look at a study and say, ‘I feel comfortable using this product,’ based on the research done elsewhere,” she said.
Along those lines, Jefferson Education, a commercial entity advised by the University of Virginia’s Curry School of Education, hopes to build a system to share robust pilot-project information with valid data on a wider scale, so that every district doesn’t have to do its own test of a product, said CEO Bart Epstein. The project is in the beginning stages, he said.
“Very few schools have the bandwidth to be able to do pilots properly,” he said. “Right now there are probably 1,000 school districts all reviewing the same 15 math products.”
In a 2015 Digital Promise study, researchers found that districts’ prevailing processes for testing technology products are largely informal and often lack a clear approach and consistency. The study also found a disconnect between the aims of companies looking to test their products and those of the schools testing them.
“There’s a real need to have a more structured process to talk about what is needed, how to bring teachers in early so they buy in, how to work with the developer, implement properly, and measure success,” Francisco said.
Some of these new pilot efforts may also help districts that have already purchased ed-tech software and digital tools in an ad hoc way. That’s the situation in the 1,200-student West Ada district in Meridian, Idaho, where 50 different math programs are being used across schools, said Eian Harm, the district’s research and data coordinator.
West Ada is, in effect, trying to run pilot projects in reverse on the five most popular math programs to determine which are most effective, Harm said.
To that end, a tool like the new EduStar platform might help, said Benjamin F. Jones, a professor of entrepreneurship and strategy at Northwestern University’s Kellogg School of Management and a co-creator of the platform.
EduStar, developed in collaboration with the nonprofit digital-learning provider PowerMyLearning, aims to provide rigorous and rapid trials of digital-learning tools and of more granular content, like a lesson, a video, or a game. The trials, to test an app, for example, can take just a few minutes and run through an automated system, he said. Currently, the system is being tested in 40 schools that already use the PowerMyLearning platform, but Jones said he hopes to add many more that want to try out digital content.
The goal is to provide feedback to the developer about how a product works in a real classroom and to communicate deeper research about why or how certain games or techniques work or don’t, Jones said.
“In the long run,” he said, “we hope the system can scale so it could test large numbers of digital-learning activities and provide a Consumer Reports function in the marketplace.”