Corrected: The original version of this story misspelled Daniel Vargas Campos.
ChatGPT’s release in November prompted widespread worry that students would use it to cheat on all kinds of assignments.
But that concern, while valid, has overshadowed other important questions educators should be asking about artificial intelligence, such as how it will affect their jobs and their students, said Daniel Vargas Campos, a curriculum program manager with Common Sense Media, a nonprofit research organization that develops curricula and reviews digital media.
One big question: How will artificial intelligence change the teaching of media literacy skills that help students determine the intent and accuracy of the media they consume?
Education Week spoke with Vargas Campos about how media literacy education is at a critical moment as educators grapple with the implications of AI-driven technologies. This interview was edited for length and clarity.
In what ways could you see AI changing media literacy education?
There are layers to it. We are concerned that with the rise of artificial intelligence, misinformation is going to proliferate a lot more in online spaces. That’s one layer. Another layer, and something that’s a little less talked about, is how even the hype around artificial intelligence is already challenging how we think about media literacy, before we even see explicit examples of AI being used for misinformation.
There was a term the World Health Organization came up with about two years ago, in the middle of the pandemic: the “infodemic.” There’s too much information out there, and that makes it difficult to sort what’s real from what’s fake. That is what’s happening right now with artificial intelligence. The real challenge is that even just talking about the potential negative impacts artificial intelligence can have on misinformation creates an environment where it’s harder for people to trust what they see online.
To give you an example: A [few] weeks ago, a video of a drag show went viral, and there were babies in the video. It was trying to stoke emotions, like, “Oh, that shouldn’t be allowed.” But what was interesting is that people’s immediate response was, “Oh, this is a deep fake.” Turns out, the video was real; it was just an example of the most common type of misinformation, which is real information taken out of context.
Now, the challenge is that when we automatically label something like that a deep fake, we don’t go through the extra step of putting our media literacy skills into practice. You’re bypassing the critical thinking you need to do to actually consider: What are the impacts? What is this information trying to do?
How do educators need to change their approach?
It does require a shift. And this is a shift that’s not necessarily just because of AI; it’s because the information-seeking patterns of young people are different. In terms of how we teach media literacy, we need to update our approach to meet students’ actual experiences before we even dive into AI. We have to understand that most kids get their news from social media, and a lot of the information-seeking behaviors and habits they form are developed as part of an online community.
Now, when it comes down to artificial intelligence, a big part of this conversation is just talking to young people about the issue, but really from the perspective of what they’re worried about. Because AI is already having lots of negative impacts on kids’ lives.
So, this is a question about how we update media literacy for the next five to 10 years. And part of it is integrating these conversations around AI literacy into how we talk about media literacy.
Do you see a disconnect between adults and kids regarding their biggest worries about AI?
Especially in education, we went straight to: “Kids are going to use this to write essays, and it’s going to be plagiarism.” And we kind of just jumped way ahead to this one very specific use case. I do think there’s a disconnect, because kids are engaging with this sort of AI in all sorts of different realms of their digital lives.
[For example, the chat platform] Discord has a summarizing AI. So, if you’re in an online forum, keeping up with the conversation can be super hard, especially if you have a thousand people commenting on something. Now there’s AI being used to summarize the conversation.
These are deeper questions that are less about plagiarism and more about your social life, your community: How can you identify bias? How can you identify whether the text being used to share information with you is giving you an accurate representation of what’s happening?
A big component of this, just general advice for teachers, is: How do we create more meaningful connections between media literacy and social-emotional learning? That’s a space that’s underdeveloped. Social-emotional learning is about self-awareness and social awareness.
We want kids to also consider not just how [media] is making you feel or how it’s making you react, but what you can notice about the general impact that this type of information or conversation is having on people’s behavior.