Most experts recommend that schools and districts steer clear of banning artificial intelligence, and instead help students learn about AI by permitting them to use AI-powered tools like ChatGPT to complete some assignments.
That opens up a host of questions for teachers and school leaders:
- When is it OK to use AI and when is it not?
- How much can a student rely on AI?
- How should students cite information they learned or sourced from AI tools?
- And what’s the best way for teachers to communicate their expectations to students about how to use AI appropriately?
Enter North Carolina, which is one of at least five states that have released AI guidance for schools. The Tar Heel State sought to give teachers a roadmap for setting parameters around AI’s use.
North Carolina knew early on that it did not want its schools to ban ChatGPT and other AI tools, said Catherine Truitt, the state’s superintendent of public instruction. Rather, she said, the state wanted to teach students how to understand and use those tools appropriately, in part to prepare them for a future job market in which AI skills and knowledge are likely to be valued.
Prohibiting the use of these tools is like “sticking your head in the sand,” Truitt said. “This is not something that we can pretend doesn’t exist and think that we can ban it and then it won’t be an issue.”
In crafting the guidance, the state sought to “create a step-by-step playbook that makes it so easy for a school district or even a school by itself to embrace AI and feel that they’re doing it in the right way,” said Vanessa Wrenn, the chief information officer for the North Carolina education agency. “Our students are either going to use it on the down low, or they can use it when we give them guidance on how to use it right, how to be safe, and how to use it well.”
In an effort to make its guidance as user-friendly and practical as possible, North Carolina included a chart that outlines different possibilities for using AI on assignments without encouraging cheating or plagiarism.
The graphic lays out five levels, color-coded red, yellow, or green. The first level—noted in red and called level “0”—communicates the expectation that students complete an assignment the old-fashioned way, without any help from AI.
The second level—noted in yellow and called level “1”—means students are allowed to brainstorm ideas or figure out how to structure their writing with assistance from AI. 69ý must disclose that they used AI and submit a link to their interactions with chatbots. The third level—also noted in yellow and called level “2”—allows students to use AI to edit their work, but not to create new content. Again, students must disclose its use and share links to their chats.
If an assignment corresponds with the fourth level—noted in green and called level “3”—students are permitted to use AI to complete certain elements of a task, as specified by the teacher. And on the fifth level—also noted in green and called level “4”—students are allowed to use AI tools in any way that helps them complete the assignment, as long as the student is responsible for evaluating and providing oversight of the technology’s work. In those instances, AI is supposed to serve as a partner or “co-pilot,” not as a solo content creator. 69ý are also required to submit citations explaining how they used AI at both the fourth and fifth levels.
“There has to be a scale for when some AI is acceptable versus when all AI is, and it’s always going to depend on the assignment itself,” Truitt said.
If that graphic of the different levels of AI use looks like it was created for a teacher to print out and post on their wall, that’s because it was, Wrenn said.
“I wanted something that if a teacher wanted to use this graphic in the classroom, it’ll be very easy for teachers, [and] for students, to understand when they could or could not use AI on an assignment,” Wrenn said.