Educators and Artificial Intelligence: “Err on the Side of Caution,” Says RAND Researcher

By Benjamin Herold — January 23, 2019 6 min read

Artificial intelligence will likely have a relatively modest impact in K-12 classrooms, focused primarily on helping students develop “narrow procedural knowledge and skills,” according to a new report from the RAND Corporation.

“The work of teachers and act of teaching, unlike repetitive tasks on the manufacturing floor, cannot be completely automated,” wrote Robert F. Murphy, a senior policy researcher with the group.

The RAND report, titled “Artificial Intelligence Applications to Support K-12 Teachers and Teaching,” does not include original research into AI in education. Rather, it’s a “perspective,” in which Murphy, a veteran evaluator of educational technology, provides “expert insight on a timely policy issue.”

Without question, artificial intelligence is a topic of mounting interest. In the K-12 world, the discussion has numerous components, including how schools can best prepare students for a labor market already being disrupted by AI-powered automation, as well as the ways in which AI-powered technologies are making their way into classrooms.

The RAND report focuses on the latter. Murphy looks specifically at instructional tools, such as adaptive software and automated essay-scoring systems, already in wide use in the classroom. He also considers administrative tools such as “early warning systems,” which thousands of school districts now use to help identify students at risk of dropping out.

The potential benefits of such technologies are real, but likely limited in scope, Murphy said in an interview. AI is still light years away from being able to replicate the kind of creativity, empathy, and improvisation that is core to good teaching. There is also currently little evidence to show that such tools are effective at improving educational outcomes, and the technologies must overcome significant hurdles around privacy, potential bias, and public trust.

Educators would be wise to consider those realities before embracing AI whole-hog, Murphy said.

“I would err on the side of caution,” he said. “If publishers and developers aren’t willing to provide information about how decisions are made within their systems, I think that raises red flags.”

Augmenting, Not Replacing, Teachers

The RAND paper broadly defines artificial intelligence as “applications of software algorithms and techniques that allow computers and machines to simulate human perception and decision-making processes to successfully complete tasks.”

Within the K-12 world, the most visible uses of such technologies are adaptive instructional systems and intelligent tutoring systems that aim to provide customized content based on each student’s strengths and weaknesses.

Many such tools are what Murphy described as “rule-based” applications that operate primarily on if-then logic statements programmed in advance. Perceived advantages of such technologies include the ability to let students work at their own pace, advance only when they master requisite skills and concepts, and receive continuous feedback on their performance.
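
To make the “rule-based” idea concrete, here is a minimal sketch of what such pre-programmed if-then logic might look like. The skill names, mastery threshold, and branching rules are illustrative assumptions, not drawn from the RAND report or any actual product.

```python
# Minimal sketch of a rule-based adaptive-learning step. The mastery
# threshold, skill names, and branching rules are illustrative
# assumptions, not drawn from any real system.

MASTERY_THRESHOLD = 0.8  # assumed cutoff: 80 percent correct on recent items

def next_activity(skill: str, recent_scores: list[float]) -> str:
    """Apply fixed, pre-programmed if-then rules to choose what a
    student sees next, based on recent performance on one skill."""
    if not recent_scores:
        return f"pretest: {skill}"              # no data yet, so assess first
    average = sum(recent_scores) / len(recent_scores)
    if average >= MASTERY_THRESHOLD:
        return f"advance: skill after {skill}"  # mastered, so move on
    if average >= 0.5:
        return f"practice: {skill}"             # partial mastery, keep practicing
    return f"remediate: {skill}"                # struggling, reteach prerequisites

# A student averaging 60 percent keeps practicing rather than advancing.
print(next_activity("fraction addition", [0.5, 0.6, 0.7]))
```

Notice that every branch had to be anticipated by a programmer in advance; the system cannot respond to anything its rules do not cover.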

There’s a fair bit of research evidence to show that such systems can be effective in the classroom—but only when it comes to topics and skills that revolve around facts, methods, operations, and “procedural skills,” the RAND report says.

“The systems are less able to support the learning of complex, difficult-to-assess, higher-order skills,” such as critical thinking, effective communication, argumentation, and collaboration, according to the report.

Other AI tools, which now use machine-learning techniques to discover patterns and identify relationships that are not part of their original programming, face similar limitations, the report contends.

Automated essay scoring systems, for example, now provide valuable feedback to students, the RAND report says. But such feedback is mostly focused on things like grammatical errors and the use of passive voice, rather than the depth of the ideas being communicated.
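
A toy example illustrates why such feedback stays at the surface: mechanical features like passive voice or overlong sentences can be flagged with simple pattern matching, while the depth of an idea cannot. The patterns and messages below are invented for illustration and do not reflect how any real scoring engine works.

```python
import re

# Toy illustration of surface-level essay feedback. The regex patterns
# and messages are invented; no real scoring engine works this simply.
PASSIVE_HINT = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+\w+ed\b", re.IGNORECASE
)

def surface_feedback(essay: str) -> list[str]:
    feedback = []
    for match in PASSIVE_HINT.finditer(essay):
        feedback.append(f"Possible passive voice: '{match.group()}'")
    for sentence in re.split(r"[.!?]+\s*", essay):
        if len(sentence.split()) > 40:
            feedback.append("Very long sentence; consider splitting it.")
    # Note what is missing: nothing here can judge whether the essay's
    # argument is original, well supported, or even true.
    return feedback

print(surface_feedback("The experiment was conducted by the students."))
```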

The lesson for K-12 educators and administrators?

Such tools can’t replace teachers, Murphy said. But they can help teachers do more—by making it easier to backfill missing foundational knowledge and skills for students who are behind, for example, or by making it feasible to assign student writing with a realistic expectation that all students will receive at least some immediate feedback on their work.

“In the broader field of AI, there’s a lot of focus on augmenting workers’ capacity,” Murphy said. “It’s a way to help teachers solve some important challenges.”

Concerns Over Privacy, Bias, Transparency

When it comes to artificially intelligent administrative tools, meanwhile, there are both bigger opportunities and bigger questions.

AI-powered tools are already helping schools identify students who are at risk for dropping out, hire teachers and other staff, improve logistics, and offer college and career counseling.

There does not appear to be much evidence behind such tools one way or another, Murphy said.

The best-case scenario, he said, is that AI can help improve organizational decision-making to the point where enough money and staff time are freed up so that more resources can be directed into the classroom.

But a series of “huge” hurdles must be overcome before that’s possible, Murphy maintained.

For one, there’s reason to believe that most developers of AI-powered tools don’t have access to enough “high-quality” data to really drive effective decision-making. (AI tools “learn” by being trained on large data sets. In K-12, though, such training sets are typically limited to information on things like test results, demographics, and discipline—not the kinds of granular data on what’s happening between students and teachers that truly shapes learning.)
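
As a hypothetical sketch of what that limitation looks like in practice, an early-warning model trained only on the coarse features a district typically has might be built like this. The feature names, data, and outcome labels are all invented for illustration.

```python
# Hypothetical sketch: a dropout-risk "early warning" model trained only
# on the blunt signals most districts actually have. All data, labels,
# and feature names are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [test_score_percentile, attendance_rate, discipline_incidents]
X = [
    [45, 0.72, 5],
    [88, 0.97, 0],
    [60, 0.85, 2],
    [35, 0.65, 7],
    [92, 0.99, 0],
    [55, 0.80, 3],
]
y = [1, 0, 0, 1, 0, 1]  # 1 = student later dropped out (invented outcomes)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The model sees only three blunt signals; nothing here captures the
# daily student-teacher interactions that actually shape learning, which
# is the data-quality gap the report describes.
print(model.predict_proba([[50, 0.75, 4]])[0][1])  # estimated dropout risk
```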

At the same time, though, the push to collect more educational data is raising significant concerns. Some parents and activists worry about privacy, arguing that efforts to gather, say, every click recorded by educational software are pushing far beyond the capacity of current laws to protect students’ personal information. The RAND report also flags issues of bias, noting that the statistical models that power AI often reinforce racial, gender, and other biases that are already embedded in the data upon which such tools are trained.
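
One standard way to check for the kind of bias the report flags is to compare a model’s error rates across demographic groups. The sketch below audits an imaginary at-risk classifier for unequal false-positive rates; the predictions, outcomes, and group labels are all invented.

```python
from collections import defaultdict

# Sketch of a simple fairness audit: compare the false-positive rate of
# an at-risk classifier across demographic groups. All records invented.
records = [
    # (group, predicted_at_risk, actually_dropped_out)
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)  # students who did not drop out, per group

for group, predicted, actual in records:
    if not actual:
        negatives[group] += 1
        if predicted:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"Group {group}: false-positive rate {rate:.0%}")
# A large gap between groups would mean one group is being flagged as
# "at risk" far more often than its actual outcomes justify.
```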

And heightening both sets of concerns is a lack of transparency, with many software developers proving unable or unwilling to communicate publicly in a clear, understandable way about how their systems make decisions.

That erodes public trust and creates a big problem for K-12 educators and policymakers, Murphy said, especially as the stakes attached to AI-powered decisions continue to outpace most school districts’ capacity to fully consider the pros, cons, and potential unintended consequences of such technologies.

“I think we really need to be cautious about the blind acceptance of AI-powered decision tools when there are serious consequences for the individuals who are the object of their decision-making,” Murphy said.

A version of this news article first appeared in the Digital Education blog.