AI-fueled cheating, and how to stop students from doing it, has become a major concern for educators.
But how prevalent is it? Newly released data from a popular plagiarism-detection company is shedding some light on the problem.
And it may not be as bad as educators think it is.
Of the more than 200 million assignments run through Turnitin's AI-detection tool over the past year, some AI use was detected in about 1 out of 10 assignments, while only 3 out of every 100 assignments were generated mostly by AI.
These numbers have not changed much from August of 2023, when Turnitin released data about the first three months of the use of its detection tool, said the company's chief product officer, Annie Chechitelli.
"We hit a steady state, and it hasn't changed dramatically since then," she said. "There are students who are leaning on AI too much. But it's not pervasive. It wasn't this, 'the sky is falling.'"
The fact that the number of students using AI to complete their schoolwork hasn't skyrocketed in the past year dovetails with findings from Stanford University researchers that were released in December. The researchers polled students in 40 different high schools and found that the percentage of students who admitted to cheating has remained flat since the advent of ChatGPT and other readily available generative AI tools. For years before the release of ChatGPT, between 60 and 70 percent of students admitted to cheating, and that figure remained the same in the 2023 surveys, the researchers said.
Turnitin's latest data release shows that in 11 percent of assignments run through its AI detection tool, at least 20 percent of the writing showed evidence of AI use. In 3 percent of the assignments, 80 percent or more of the writing was AI-generated.
Experts warn against fixating on cheating and plagiarism
However, a separate survey of educators has found that AI detection tools are becoming more popular with teachers, a trend that worries some experts.
The survey of middle and high school teachers by the Center for Democracy and Technology, a nonprofit focused on technology policy and consumer rights, found that 68 percent have used an AI detection tool, up substantially from the previous year. Teachers also reported in the same survey that students are increasingly getting in trouble for using AI to complete assignments: in the 2023-24 school year, 63 percent of teachers said students had gotten in trouble for being accused of using generative AI in their schoolwork, up from 48 percent the school year before.
Despite scant evidence that AI is fueling a wave of cheating, half of teachers reported in the Center for Democracy and Technology survey that generative AI has made them more distrustful that their students are turning in original work.
Some experts warn that fixating on plagiarism and cheating is the wrong focus.
This creates an environment where students are afraid to talk with their teachers about AI tools because they might get in trouble, said Tara Nattrass, the managing director of innovation and strategy at ISTE+ASCD, a nonprofit that offers content and professional development on educational technology and curriculum.
"We need to reframe the conversation and engage with students around the ways in which AI can support them in their learning and the ways in which it may be detrimental to their learning," she said in an email to Education Week. "We want students to know that activities like using AI to write essays and pass them off as their own is harmful to their learning while using AI to break down difficult topics to strengthen understanding can help them in their learning."
Shift the focus to teaching AI literacy, crafting better policies
69传媒 said in the Stanford survey that this is generally how they think AI should be used: as an aid to understanding concepts rather than a fancy plagiarism tool.
Nattrass said schools should be teaching AI literacy while including students in drafting clear AI guidelines.
Nattrass also recommends against schools using AI detection tools. They are too unreliable to authenticate students' work, she said, and false positives can be devastating to individual students and breed a larger environment of mistrust. Some research has found that AI detection tools are especially weak at distinguishing the original writing of English learners from AI-generated prose.
"69传媒 are using AI and will continue to do so with or without educator guidance," Nattrass said. "Teaching students about safe and ethical AI use is a part of our responsibility to help them become contributing digital citizens."
AI detection software actually uses AI to function: these tools are trained on large amounts of machine- and human-created writing so that the software can ideally recognize differences between the two.
Turnitin claims that its AI detector is 99 percent accurate at determining whether a document was written with AI, specifically ChatGPT, as long as the document contains at least 20 percent AI writing, according to the company's website.
Chechitelli pointed out that no detector or test, whether it's a fire alarm or a medical test, is 100 percent accurate.
While she said teachers should not rely solely on AI detectors to determine if a student is using AI to cheat, she makes the case that detection tools can provide teachers with valuable data.
"It is not definitive proof," she said. "It's a signal that, taken with other signals, can be used to start a conversation with a student."
As educators become more comfortable with generative AI, Chechitelli said, she predicts the focus will shift from detection to transparency: How should students cite or communicate the ways they've used AI? When should educators encourage students to use AI in assignments? And do schools have clear policies around AI use and what, exactly, constitutes plagiarism or cheating?
"The feedback we're hearing now from students is: 'I'm gonna use it. I would love a little bit more guidance on how and when so I don't get in trouble,' but still use it to learn," Chechitelli said.