Last fall, students in New Jersey and Washington state used artificial intelligence tools to create fake, pornographic images of their female classmates.
If the bill is enacted, these kinds of activities would violate federal law, and students who undertake them could be on the hook for thousands of dollars in damages.
The legislation—nicknamed the “No AI Fraud Act”—gives “all Americans the tools to protect their digital personas,” said Rep. Madeleine Dean, D-Pa., who introduced the bill with Rep. María Elvira Salazar, R-Fla. “By shielding individuals’ images and voices from manipulation, the [bill] prevents artificial intelligence from being used for harassment, bullying, or abuse.”
The bill specifically references the New Jersey incident, citing it as a reason the legislation is needed. “From October 16 to 20, 2023, AI technology was used to create false, non-consensual intimate images of high school girls in Westfield, N.J.,” it says. It also highlights other incidents in which AI-created images of celebrities and others were used without permission, such as an ad that used the actor Tom Hanks’ face to promote a dental plan.
Specifically, the bill would make clear that every individual’s likeness and identity are protected, and that everyone has the right to control the use of their own image and voice.
It would allow people to sue for thousands of dollars in damages if they have been harmed—including emotionally—when others create or spread AI frauds using their identifying characteristics without their permission.
At least five states—Indiana, New Hampshire, New Jersey, Utah, and Washington—have introduced bills for upcoming legislative sessions to deal with deepfakes, said Amelia Vance, the president of the Public Interest Privacy Center, a nonprofit that works on child and student data privacy issues.
Educators have been warily eyeing the events in New Jersey and Washington, she said.
“It’s obviously causing massive concern across the country. You have a lot of districts who are saying they don’t know what to do about it,” Vance said. “Despite open First Amendment questions, it seems like there is solid legal ground for legislators and others to pass laws restricting these fabricated, intimate or sexually explicit images and depictions.”
But Vance isn’t sure the proposed laws—federal or state—are necessary when it’s “kids generating these images of other kids,” she said.
“Kids are like, ‘Oh, I wonder if I could do this!’” she said, equating it to when past generations might take a yearbook photo of their teacher’s face and place it on the body of, say, a dragon or monster and pass it around class.
Vance emphasized that districts need clear policies so students understand that distributing AI-generated images of classmates is inappropriate, violates the rules, and carries consequences.
When those policies are violated, schools can already discipline students in age-appropriate ways, Vance said. State cyberbullying laws can already be used to bring in local law enforcement if schools need to stop the distribution of AI-created intimate images at school, she added.
“It is great these bills are being put forward; it will clarify the landscape more generally, which will hopefully keep more 12-year-olds from experimenting,” Vance said. “But it isn’t absolutely necessary to address the problem in schools specifically.”