Today’s kindergartners will graduate in 2036, and when they do, they will need a command of artificially intelligent technologies that would have been difficult for most people to fathom just a couple of years ago.
While there is broad consensus among education and technology experts that students will need to be AI literate by the time they enter the workforce, when and how, exactly, students should be introduced to this tech is far less settled.
There’s a minefield of potential issues related to AI for educators to navigate: Will students develop their critical-thinking skills when emerging technology can give them the answers—in an 8,000-word expository essay, no less—so easily? How might biases baked into AI affect students’ emerging identity and understanding of the world? And what harm can an adolescent do with the ability to create and share highly realistic fake images of classmates before the parts of their brains that control decisionmaking, planning, and impulses are fully developed?
That leaves today’s teachers with the difficult task of determining when it is age-appropriate to introduce students to artificially intelligent technologies, and they’re largely doing it without a road map.
Education Week consulted four teachers and two child-development experts on when K-12 students should start using AI-powered tech and for what purposes.
They all agree on this central fact: There is no avoiding AI. Whether they are aware of it or not, students are already interacting with AI in their daily lives when they scroll on TikTok, ask a smart speaker a question, or use an adaptive-testing program in class. And they were using this tech before ChatGPT burst onto the scene at the end of 2022 and raised the specter for educators that students may never have to do their own homework again.
All this makes it essential that students learn about AI in school, experts say. But when, and how, exactly? We’ve got answers.
K-2 youngsters: Understand that AI is not a real person
Kindergartners through 2nd graders are at a point in their brain development where they are more likely to attribute human qualities to artificially intelligent technologies like smart speakers and chatbots. They may even trust what an AI-powered device or tool is saying over the adults in their lives, like their teachers.
One study of 3- to 6-year-olds found that some young children believe that the smart speakers in their house have their own thoughts and feelings.
“When they drew pictures of the smart speaker, some kids drew a picture of a face inside the smart speaker, some attributed emotions to the smart speaker,” said Tiffany Munzer, a developmental behavioral pediatrician at the University of Michigan. “They felt like it had a memory, like when it remembered the age of the child.”
This is more of an issue for younger elementary students, said Munzer. Although all children develop at their own pace, kids have usually grown out of this way of thinking by 3rd grade.
It’s also harder for younger children to distinguish advertisements from the content they’re embedded in, making them especially susceptible to ads.
Just like teaching young children that the characters they see in their favorite TV shows are not real, adults need to reinforce that understanding with AI-powered technologies. They also need to be cognizant of kids’ exposure to ads within these platforms.
That doesn’t mean that young children should be shielded from AI but rather that educators should give them a peek under the hood so they can start to unpack how these technologies work.
Monica Rodriguez uses a program from Google called Quick, Draw! to introduce her kindergartners to how systems that use AI technology work. The game uses a neural network, trained on millions of doodles made by users, to try to guess what a person is drawing in 20 seconds or less.
Rodriguez’s students, who attend Burleson Elementary in Odessa, Texas, use the program on her Promethean board so the entire class can watch it work.
Quick, Draw! also allows users to browse the drawings the neural network has learned from—say, the 139,000-plus pictures of apples people have drawn. The game is a simple example of how the AI behind everything from ChatGPT to streaming services’ recommendation engines learns: by training on enormous sets of examples.
“One student was like, ‘Ms. Rodriguez, is this like when I practice writing my name?’ And I said, ‘Yes, even though it’s artificial, it learns by what we put into it,’” she said, likening it to a student writing their name over and over again and getting feedback from her. “One of my girls was like, ‘That’s why I can text on my mom’s phone easy because it’s remembering what I put!’ Yes, it’s remembering what you wrote before and it remembers those phrases.”
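For readers who want to see that idea in miniature, here is a minimal, hypothetical sketch—written in Python with the scikit-learn library, which is our own choice for illustration, not a tool any of the teachers quoted here described using. It trains a simple classifier on labeled images of handwritten digits, loosely analogous to Quick, Draw! learning from millions of user doodles.

```python
# A minimal sketch of "learning from examples," assuming Python and
# scikit-learn are installed. It trains a small classifier on the
# library's bundled handwritten-digit images -- loosely analogous to
# Quick, Draw! learning from millions of user doodles.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 8x8 grayscale images of digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# The model "learns by what we put into it": labeled examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# More labeled examples generally mean better guesses on new images.
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```

The point of the toy example is the same one Rodriguez’s students stumbled onto: the system has no knowledge beyond the examples it was given, and its guesses improve as those examples accumulate.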
Rodriguez feels her kindergartners are not only capable of learning about AI but that it’s also important they do. Some of her students already have their own TikTok and YouTube accounts they post to using their parents’ devices, she said.
“And I’m like, ‘What in the world? You know how to do this, but you can’t write the letter A?’ Technology is such a part of these kids’ lives,” she said. “Just because they’re little, they have been exposed to AI since they were born. Just now, we’re putting a name to it.”
As a supplement to learning, AI technologies can be powerful tools, said Tovah Klein, a child-development psychologist and an associate professor at Barnard College. But technology should never totally supplant hands-on learning. Elementary students in particular learn through their senses, and hands-on learning is critical to their ability to fully grasp concepts.
Upper-elementary students: Focus on developing problem-solving skills
Trying and failing is another crucial part of learning, said Klein. AI-powered technology can be good at answering questions, but an overreliance on that kind of tech can short-circuit students’ development of problem-solving skills.
“So much of learning is parsing through a lot of information,” said Klein. “Whether [age] 2, or 10, or a teenager, they are dealing with a lot of information, questions, and newness, and I do think about: What does that mean if you can ask any question [and get the answer]?”
ChatGPT has changed his teaching, and for the better, said Aaron Grossman, a 5th grade teacher at Ted Hunsberger Elementary in Reno, Nev. He’s constantly finding new ways to leverage the technology to develop richer learning materials and lessons for his students—such as customized short theater scripts for students to practice reading aloud that are aligned to standards and reinforce topics students are studying in other subjects.
Although his students are working with a lot of AI-generated materials, Grossman is far more cautious when it comes to having his students directly use AI-powered technology in class. Absent detailed policies or guidelines from his local school board or the state on how students can use AI in the classroom, or what Grossman should do if students either come across or generate inappropriate content, he is hesitant to allow his students to use it very much.
But he has found uses. He has a smart speaker in his classroom and encourages students to ask it simple and specific questions he’s confident the technology will answer accurately—such as the definition or spelling of a word. This also frees up Grossman to focus on students with more difficult questions.
Grossman will also use ChatGPT or Bard with his students, but he is always in the driver’s seat.
“They get to see this demoed,” he said. “We will syllabicate a word, and the kids will say, ‘What does that morpheme mean?’ Turns out their teacher doesn’t always know the answer to a question, and we’ll put it into Bard.” (Grossman said he’s found Bard is better at grammar questions than ChatGPT.)
That kind of modeling—being thoughtful about which platform or tool to use for which need, refining prompts to get richer and more nuanced answers, recognizing where AI bias is seeping into outputs, and testing the accuracy of the information—is an important strategy for teaching students to use the tech responsibly, experts say.
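As one illustration of what “refining prompts” can look like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, the prompts, and the whale example are our own assumptions for illustration—not a classroom protocol any of these teachers described.

```python
# A minimal sketch of prompt refinement, assuming the OpenAI Python SDK
# (openai>=1.0) is installed and an OPENAI_API_KEY environment variable
# is set. The model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt, then a refined one that asks for structure and for the
# model to flag its own uncertainty -- so the answer can be checked
# against other sources, as the experts above suggest.
print(ask("Tell me about whales."))
print(ask(
    "In three bullet points, explain how blue whales feed, and flag "
    "anything you are unsure about so a student can verify it."
))
```

Comparing the two replies side by side is itself a teachable moment: the refined prompt tends to produce an answer that is easier to fact-check, which reinforces the habit of testing outputs rather than accepting them.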
Middle schoolers: Don’t loosen restrictions around access to AI too much
While adults might want to start loosening restrictions around access to AI-powered tech as kids move into the middle school grades, they should resist that temptation, said Klein.
Age 13 is when OpenAI, the maker of ChatGPT, says kids can create an account with a parent’s permission (although not much is stopping kids from doing it earlier and without their parents’ consent).
It’s also at this stage of development that tweens’ curiosity about more adult topics—sexuality, violence, tragedy—starts to ramp up, even as they still lack a lot of impulse control, said Klein.
“Sexual content is very interesting, and rumors are very interesting,” she said. “On the other hand, if you’re passionate about whales or how railroads are built, there’s a ton of information out there [that] you don’t want to stop children from having access to. It’s a matter of really considering: What are the guardrails and limits on this stuff?”
Educators should capitalize on the stage that older elementary and middle school students are at developmentally, said Munzer. This is when students’ critical and abstract thinking really start to come into bloom, she said.
Exercises in which students ask a generative AI chatbot to answer a question or write an essay and then critique it—looking for factual errors and the like—would be developmentally appropriate for this age group, Munzer said.
“It should be used as a tool to complement and challenge the critical-thinking skills that come online at this age,” she said.
High school students: Learn about the limitations of artificial intelligence
High school students are fast becoming sophisticated users of programs like ChatGPT.
Teachers may feel their main duty at this stage is to police students and make sure they’re not using ChatGPT, Photomath, and other similar technologies to do all their assignments. But experts say that educators have a more important role to play: primarily, to teach students the limitations of the technology. The text and images created by generative AI programs, for example, can be plagued with biases, stereotypes, and inaccuracies.
Teens are naturally very skeptical, said Klein, and teachers should leverage that.
“I think with ChatGPT, we have to remind them of that,” she said. “‘Remember how you don’t trust anybody? Do that with machines.’”
Eamon Marchant, a high school computer science teacher at Whitney High School in Cerritos, Calif., has found that he has to prompt his students to exercise their natural suspicion when it comes to AI-powered technology.
“They don’t doubt the machine as much as I would like them to,” he said. “When they see ChatGPT, they are taking those answers at face value unless prompted to be skeptical about it.”
Marchant has his students experiment with ChatGPT, asking it questions, comparing its answers to other sources, seeing what it’s good at and what it’s not.
“It was really good at very specific biology calculations,” he said of what he and his students found in their tests. “But then, physics, it was atrocious at.”
While high school students are on the verge of adulthood, at least legally if not developmentally, this age group still needs boundaries and guidance.
Teachers should be judicious about how many shiny new AI tools they bring to the classroom, especially as more are developed every day, said Brenda Colón, an Advanced Placement biology teacher at Poinciana High School in Kissimmee, Fla.
“Sometimes, we get the students overstimulated with all of the resources,” she said.
AI literacy is also an important component of their education, said Colón.
“When we’re thinking about high schoolers, they have all of these amazing new, bright, logical reasoning skills,” said Munzer. “They are also more likely to take risks because the prefrontal cortex is not quite developed yet.”
And that can be a scary combination with the awesome powers of generative AI. In particular, Munzer worries about how AI might turbocharge cyberbullying when teens can make distorted images or deepfakes so easily. A recent case in New Jersey in which male students made and shared sexually explicit deepfakes of more than 30 female students in their school is a prime example of this concern.
Many educators have been early adopters of AI, but there are still others who may feel that their students’ prowess with this tech already far outstrips their own and that they have little to offer high school students in the way of guidance.
But that should not be the case, said Munzer. Many of the digital literacy and social-emotional skills that high school teachers are already teaching apply to AI. A strong grounding in those skills will help students learn to use AI-powered tools responsibly, in ways that allow them to reap the benefits of this powerful technology while avoiding the harms.
“We’re preparing kids for a world that we have no idea what it will look like in 10 or 20 years,” said Munzer. “The most important things we want our children to take from us right now are kindness, equity, and the critical-thinking skills to challenge the information that they are seeing. It’s about imparting those key skills.”