Teenagers tend to struggle to identify fake content online, especially when it is generated by artificial intelligence. Yet most of them report that they change their behavior when they realize they’ve been misled by fake or AI-generated content or bots online.
That’s according to a new report from Common Sense Media on teens and what they see online in the age of AI. The findings can help inform schools’ and educators’ efforts to develop student literacy in both digital media and artificial intelligence.
Robbie Torney, the senior director for AI programs at Common Sense Media, a group that researches and advocates for healthy tech and media use among youth, makes the case for why schools should play a bigger role in helping students understand AI’s strengths and weaknesses and how to use it responsibly.
The Common Sense Media report “underscores the need for more AI literacy,” said Torney, a former teacher and principal. “If only about 4 in 10 teens can determine that they have been exposed to inaccurate content, that number feels a lot lower than it should be in terms of issues we know about with [generative] AI.”
In an interview with Education Week, Torney talked about the lessons learned from the Common Sense Media findings and what they mean for how schools should help students navigate a world that is increasingly driven by AI technologies.
This interview has been edited for length and clarity.
What did you find most striking about the findings in your report?

The number of kids who reported seeing images that were misleading actually seemed pretty low to us.
As somebody who has used a gen AI chatbot for work or for other purposes almost every day for a while now, every single time I use a chatbot, there’s something in there that’s slightly inaccurate or that’s wrong. You have to have the ability to discern and detect that.
This is a double-edged sword. It’s not that gen AI is bad—there are aspects that are harmful and aspects that are helpful.
What can schools do given that AI technologies are likely to continue advancing?
One of the very promising findings from our AI report was that talking to kids about generative AI helps teens have more nuanced views about the usefulness and challenges of [the technology].
It’s necessary for teens to learn about gen AI for their future careers, [but the technology] could [also] be used to cheat or it could have inaccurate content in it, right? We’ve seen time and time again that school-based conversations about technology and appropriate technology use are really important.
How would you advise teachers to address the benefits and drawbacks of AI?
The first thing that I would underscore and emphasize as a core message for educators is the acknowledgment that this is moving really fast. This may feel outside of your comfort zone. It may feel difficult to keep up with, but things like deepfakes, AI companions, and the use of gen AI for schoolwork and other personal reasons are very real aspects of teens’ lives these days.
An educator doesn’t need to be an expert on technology to facilitate a conversation about responsible usage and responsible choices. Educators have a lot more experience than many teens based on being older and having made mistakes. And I wouldn’t want educators to decenter the value of that type of experience—that critical thinking lens about how to navigate risk that’s an important part of being a human—just because they don’t have the specific knowledge of a particular platform or tool.