In 2013, a national panel of education experts called for U.S. states and districts to move away from a focus on testing primarily for accountability, and toward building tests that would help teachers provide more individualized instruction and support for students.
More than a decade after the Gordon Commission on the Future of Assessment in Education, experts at the American Educational Research Association conference in Philadelphia pointed out that the technology now exists for more nuanced measurement, but states and districts have not yet developed the training and infrastructure to help teachers use the new tools effectively.
“We don’t really set our educators and our students up for success right now,” said LaVerne Evans Srinivasan, of the Carnegie Corp. of New York, a philanthropic group that supports education programs (distinct from the Carnegie Foundation for the Advancement of Teaching), during an AERA symposium. “Our educators are at a disadvantage in terms of having the professional learning support that they need to have digital literacy and competency to work with new [artificial intelligence] technology to feel confident that they can use these tools and technologies ... to both reduce the pain of their workload, but also optimize their ability to differentiate learning and personalize learning for young people.”
Testing tools built on artificial intelligence systems have expanded rapidly in K-12.
James Moore III, head of the National Science Foundation’s education directorate, said NSF’s STEM education grants alone have invested $75 million in the past year in such tools, including an open-access, online assessment offered in several languages and a program to support students in science.
But E. Wyatt Gordon, vice president and head of evaluation systems at the computer-adaptive testing company Pearson VUE (for Virtual University Enterprises), said, so far, most AI testing tools “essentially amount to learners asking fact-based questions, and getting fact-based answers.”
“We know that’s not a good teaching environment. So the challenge lies in transforming those interactions into effective learning experiences,” Gordon said.
That means, for example, using programs that collect data about the strategies students use to solve problems—not just checking correct answers—and then relaying information to teachers about whether a student holds particular misconceptions about a concept or is relying on less efficient learning strategies.
‘The education sector is very slow to evolve and change’
Some high-profile assessments are already trying to leverage AI to provide more nuanced information. The 2025 Program for International Student Assessment, for example, will include performance tasks in which students may work with an AI-driven chatbot, both to ensure students have basic background knowledge on a subject and to track how they make decisions as they complete the tasks.
“The challenge is, the education sector is very slow to evolve and change,” Srinivasan said. “Already, we have young people educated at an enormous disadvantage by the limited progress that we’ve made in having measurement and assessment keep up with the progress that we’re making in innovations of how learning happens in classrooms.”
For example, she noted that the Carnegie Corp. has supported efforts to redesign high schools for the last two decades. “Those new high school designs were based on assessing mastery and competency,” Srinivasan said, “but when we implemented those designs, we didn’t have the tools to make it easy to adjust how we measure and assess progress toward mastery of rigorous material.”
In a newly released report, the National Academy of Education recommends that states and districts create both formal training systems to help teachers understand how to use different kinds of assessments, and informal networks of school leaders and mentors to share best practices.