C-3PO from “Star Wars.” HAL from “2001: A Space Odyssey.” “The Terminator.” And now Apple’s Siri and Amazon’s Alexa. Artificial intelligence has always been part of our collective imagination. But it’s now becoming part of our everyday lives.
There is, of course, a ton of hype. Experts think this new wave of “machine learning” could help people do all sorts of things over the next couple of decades: power self-driving cars, cure cancer, cope with global warming, and, yes, transform K-12 education and the jobs students are preparing for.
It’s too early to say how much of that promise will pan out. But it’s a good idea for educators to get familiar with AI, whether they are the chief technology officer of a large urban district or a 1st grade teacher in a rural community.
So what exactly is AI? The simplest explanation is that AI trains a machine to do tasks that simulate some of what the human brain can do. That means it can learn to do things like recognize images, faces, and voices (helpful for radiology, security, and more), understand natural language, and even make recommendations (think the algorithm Netflix uses to suggest your next binge-worthy TV show). And, as the technology advances, much more could be possible.
How It Works
So how does that actually work? That’s a complicated question, in part because experts aren’t always on the same page about what AI is and what it isn’t.
Right now, all sorts of technology, including educational software, is “adaptive.” That means it’s pre-programmed to take certain steps, based on what a user—say, a student—does. In simple terms, if a kid taking an adaptive test gets an answer right, the system knows to give that kid a tougher question next. (Think of this as a much more sophisticated, computerized version of those choose-your-own-adventure books you might have read as a kid.)
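To make that distinction concrete, here is a minimal sketch, in Python, of what a rule-based adaptive test looks like under the hood. The function name, difficulty scale, and thresholds are hypothetical, invented for illustration rather than drawn from any real product:

```python
# A rule-based adaptive test: every step is pre-programmed by a human.
# All names and thresholds here are hypothetical, for illustration only.

def next_difficulty(current: int, answered_correctly: bool) -> int:
    """Pick the next question's difficulty from a fixed, hand-written rule."""
    if answered_correctly:
        return min(current + 1, 10)  # harder question, capped at level 10
    return max(current - 1, 1)       # easier question, floored at level 1

difficulty = 5
for correct in [True, True, False, True]:  # one student's answer history
    difficulty = next_difficulty(difficulty, correct)
    print(f"Next question difficulty: {difficulty}")
```

Every branch in that logic was written out ahead of time; no matter how many students the system sees, it never gets any better at choosing questions.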
Plenty of experts would call those systems “AI,” and plenty of vendors market their educational software that way. But these so-called “rule-based” systems aren’t the “fancy, sexy AI” that’s grabbing headlines, said Robert Murphy, a senior policy researcher at the RAND Corporation. That’s because all the information is already pre-programmed. The machine can’t get any better at a particular task.
Cutting-edge AI relies on systems that can actually learn, usually by analyzing vast quantities of data and searching out new patterns and relationships. Instead of following one already predetermined pathway, these systems can actually improve over time, becoming more and more complex and accurate as they take in more and more information.
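By contrast, here is a minimal sketch of a system that learns, again with invented data: a tiny perceptron that adjusts its own parameters from examples of hours studied and whether a student passed, instead of following a hand-written rule:

```python
# A system that learns from data, in contrast to the rule-based example
# above: a tiny perceptron adjusts its own weight and bias from examples;
# nothing about the decision rule is hand-written. The dataset is invented
# for illustration: hours studied -> passed (1) or failed (0).

data = [(1.0, 0), (2.0, 0), (6.0, 1), (8.0, 1)]

weight, bias, learning_rate = 0.0, 0.0, 0.1

for epoch in range(10):                 # repeated passes over the data
    mistakes = 0
    for hours, label in data:
        prediction = 1 if weight * hours + bias > 0 else 0
        error = label - prediction      # 0 when correct, +1/-1 when wrong
        if error:
            mistakes += 1
            # Nudge the parameters toward the right answer.
            weight += learning_rate * error * hours
            bias += learning_rate * error
    print(f"Epoch {epoch + 1}: {mistakes} mistakes")

print(f"Learned rule: predict 'pass' when hours > {-bias / weight:.1f}")
```

The cutoff of roughly five hours of studying isn’t programmed anywhere; the system discovers it from the examples, and different data would produce a different rule.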
Current Use of AI in 69ý
How is AI being used in K-12 schools?
Classrooms are already using AI-powered tools, including smart speakers like Amazon’s Alexa or Google Assistant. And school districts are beginning to use the technology to do things like plan bus routes, screen applications for teaching positions, and even predict when a piece of HVAC equipment is likely to go bad.
But widespread use of much more sophisticated tools in the classroom is down the road, said Michael Chui, a partner at McKinsey & Company who has a deep background in computer science. Eventually, AI has the potential to individualize lessons for students “the way a really, really awesome teacher does,” Chui said. But he cautioned, “it’s very, very early.”
Already, though, at least some form of AI is used in so-called smart tutors, which help schools differentiate instruction for different kinds of learners. In some cases, they can process natural language to interact with students. AI is also used in applications like automated essay scoring and early-warning systems, which can help identify which students are at risk of dropping out.
To be sure, education technology companies have even loftier ambitions for how AI might reshape K-12 education. Case in point: One game-based classroom management tool is teaming up with researchers at the University of Montreal to see if AI can find patterns in student engagement and use them to make suggestions to teachers. Right now, the program allows teachers to give students “points” for positive behavior such as critical thinking, collaboration, and even empathy. The company and the researchers are hoping to use that data to help teachers figure out when their students are less likely to be engaged and combat that problem.
Potential Trouble Spot: Bias
The use of artificial intelligence in education is expected to explode to a worldwide market value of $6 billion over the next six years. And about 20 percent of that growth will come from applications for K-12 teaching and learning in the United States, according to a report by Global Market Insights. What’s more, the McKinsey Global Institute predicts that 20 to 40 percent of the hours teachers currently work—mostly noninstructional job responsibilities, like tracking student progress and communicating with parents—could be automated by 2030 with the help of AI.
But don’t expect an army of AI-powered robots to be filling out teacher job applications at a district office near you. Andreas Oranje, a general manager in the ETS Research Division, said during a session at the International Society for Technology in Education’s annual conference this year that he expects AI will ultimately help educators perform rote tasks, not replace them.
“My hope for AI is we actually will expand teaching,” Oranje said. “No teacher ever lost her job because every kid had an iPad. We need more teachers, not fewer. The nature of teaching will change. But it doesn’t mean that 40 percent of teachers will lose their jobs.”
What are some of the problems with using AI in classroom technology? AI may be fancy and sexy, but it’s far from perfect. One big problem: Human biases can be written right into the algorithms that power AI and then amplified by the technology. What’s more, the data that these systems use also can be biased. That can lead the machines to inaccurate, discriminatory, and even racist conclusions.
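A toy example, with an entirely invented dataset, shows how that can happen even when no one writes discrimination into the code:

```python
# A toy illustration of how biased training data yields biased conclusions.
# The records are invented: past decisions that favored group "A" over
# group "B" even when test scores were identical.

history = [
    (80, "A", 1), (70, "A", 1), (60, "A", 1),   # group A: always admitted
    (80, "B", 0), (70, "B", 1), (60, "B", 0),   # group B: mostly rejected
]

# A naive "model" that simply learns each group's historical admit rate.
rates = {}
for score, group, admitted in history:
    yes, total = rates.get(group, (0, 0))
    rates[group] = (yes + admitted, total + 1)

for group, (yes, total) in sorted(rates.items()):
    print(f"Group {group}: learned admit rate {yes / total:.0%}")
# Prints 100% for group A and 33% for group B: the discrimination is
# learned from the data, not written into the algorithm.
```

Nothing in that code mentions bias, yet the system reproduces the skew in its training data, which is the same basic mechanism behind the real-world failures below.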
How this plays out in the real world: facial recognition software, which is currently used for airport security and may even be deployed for school safety, is notoriously bad at identifying women and people of color. More troubling: Studies have shown that risk-assessment algorithms used to figure out criminal sentences tend to make harsher predictions about black defendants than white defendants. And Tay, a chatbot developed by Microsoft, was supposed to figure out how to emulate natural conversation by interacting with Twitter users. Instead, it began communicating in vulgar and racist hate speech.
Not the Ultimate Decisionmaker
Bias issues may not be such a big deal if an AI-powered system is trying to, say, predict what pair of pants a retail customer will buy next. But it is problematic if the system is trying to figure out whether a student should apply to a particular college, or to suggest a specific lesson for an individual student, Oranje said.
That’s why AI systems—especially those designed for teaching and learning—shouldn’t be the ultimate decider of what students learn or what their educational pathway should be, Murphy said. But AI can still be an important supplemental tool, he added.
“Maybe 10 percent, 20 percent, 40 percent of the time [the system] will get it wrong,” Murphy said. “It will vary by system, but 70 percent of the time they’ll get it right.” The systems could still help districts individualize instruction, but educators need to remain the most important part of the equation.
And of course, there’s another big, obvious concern: data privacy, especially for K-12 students. That’s something advocates on both sides of the privacy debate, as well as educators, are keeping an eye on.
“I’m super against this idea of ‘let’s put an Alexa in the classroom,’ because you’re giving Amazon access to kids’ voices without parents’ consent,” said Mary Beth Hertz, the technology coordinator for the Science Leadership Academy at Beeber in Philadelphia. “I personally am trying to learn more about what ways should we use AI in the classroom, keeping in mind privacy concerns around the data. AI doesn’t work unless you are feeding it data.”
Tech Skills for the Future
Should students be learning skills sophisticated enough for them to get involved in creating AI? A lot of experts see that as the next frontier. Many are particularly interested in making sure that students from groups that have historically been underrepresented in STEM fields—including girls and racial and ethnic minorities—are involved in creating AI, to help counteract the potential for bias.
A jarring but true fact: Vladimir Putin told millions of Russian schoolchildren that the nation that leads in AI “will be the ruler of the world.” And China is striving to be the world leader in AI by 2030.
But in the United States, many schools aren’t even offering computer science courses, much less AI learning opportunities. Big barriers include a lack of curriculum and instructional materials. Some experts and educators are trying to change that, including a national working group that is developing guidelines to help schools figure out what to teach about AI. But for now, fewer than 100 schools in the country offer some form of K-12 instruction in this area, experts estimate.
Another challenge even for relatively affluent, tech-savvy districts like Leyden High School District 212 outside Chicago: Finding educators who can teach those high-tech skills, particularly at a time when the teacher ranks in general are thinning in some parts of the country.
“What we really need are teachers with a level of humility who are willing to learn alongside the students at this point,” said Nick Polyak, the superintendent of the district. “The traditional method of learning a topic deeply in college and then going on to teach isn’t relevant anymore because the knowledge is changing too quickly.”
But he sees meeting that challenge as an imperative.
“I don’t want our students to be the people who just buy autonomous cars,” he said. “I want them to be the people who are designing and improving them. It’s imperative on us to provide an education that makes them ready to step into the evolving job market.”