
AI cheating is something educators everywhere are trying to navigate. How do we get it right? Here are 20 things educators should consider about AI and cheating.
"Don't use AI to cheat."
Educators have put vague threats like this on their syllabi and on their assignments since generative AI tools like ChatGPT have become more mainstream.
There's a reason, too.
Part of it is fear -- they're afraid that students will use these tools and that they won't have any control over it.
Part of it is the unknown -- they don't understand AI and they don't know how to create better conditions so that students aren't misusing AI.
What teachers call "AI cheating" is still at the forefront of many discussions around AI in education. Teachers want to maintain academic integrity. They want to make sure that students are thinking ... that they're developing ... that they're acquiring new skills.
But in an academic environment as murky as we have now, it's hard to make sure we're focusing on the right things -- and fighting the right battles.
I know this because I've seen it with my own eyes. I've taught for more than 11 years in public schools -- and have taught a full load of high school Spanish classes since ChatGPT emerged. I've worked to turn a culture of "just use an online translator" into a culture of "I know how to do this myself -- but I know when it's right to use an AI tool to help me."
Let's think more deeply about this topic of AI and cheating.
Here are 20 things I'd like educators to consider as generative AI tools like ChatGPT become more and more prevalent in schools -- and on students' cell phones -- and in students' work.
1. AI detectors are not the answer.
AI detectors are wildly inaccurate. They don't really detect whether someone has used AI -- they just look for patterns common to AI-generated text, and they often get it wrong. Studies have found that AI detectors are inconsistent ... that they're biased toward classifying output as human-written ... that they're biased against non-native English speakers ... all of which raises serious doubt about using them to enforce policies on AI use.
2. An AI detector assumes that AI is only for copy/pasting the whole thing.
When an educator feeds a student's work into an AI detector, it implies one thing: the student did the whole thing with AI, and the teacher wants to catch them. Some students will do that. But that implication also leaves out the student who uses AI to support their work rather than having AI do the work for them. Treating all AI use as cheating casts suspicion on students who are actually doing the thinking work you want them to do.
3. You can't always just "eyeball it" and know if a student used AI to do their work.
OK, sometimes you CAN. When a student struggles to put together a coherent paragraph and then turns in a literary masterpiece, something's fishy. (By the way, you DON'T need an AI detector here. You just need to have a conversation.) But students are using text humanizers and adversarial prompting (a follow-up prompt asking the AI to rewrite its output so a detector is less likely to flag it) -- so the obvious tells won't always be there.
4. "You used AI" is too broad an accusation.
It's kind of like saying "you used the internet" or "you used a laptop to do your work." In this post, I show that AI use in student work is a spectrum -- with more or less AI involvement depending on how it's being used. Instead of accusing a student of "using AI," let's look at how the AI is being used -- and whether that use supports student learning and growth.
5. Playing the AI cheating game is a "cat and mouse" game we will never win.
When we try to play the "AI cheating game" against students -- and beat them at their own game -- no one wins. It just takes the focus off the learning and the whole reason we're doing assignments anyway. Think of it as a cat-and-mouse cycle: every effort the teacher makes is countered by the student, each side trying to win the game we're playing -- the "AI cheating game" -- instead of focusing on the learning.

6. Be very, very careful with "let's just go back to paper and pencil."
It's easy to say, "I'll just have students use paper and pencil. That way we eliminate AI from the equation entirely." But eliminating technology from student learning opens up its own set of problems:
- It doesn't help students resolve the issue of appropriate, responsible use of AI (and technology in general).
- It removes accessibility features that students might depend on (or that at least benefit them).
- It eliminates the possibility of tech being used to support student thinking and skill development.
7. When we tell students "don't ever do this," it makes them want to do it.
If you've ever taught in classrooms before, you understand this. When you tell students that they shouldn't do something, it makes them think a few things. First: if they weren't aware of it, now they're keenly aware of it. Second: They want to try it out. Third: They're trying to figure out why the teacher doesn't want them to do it -- and whether the benefits (in their eyes) might outweigh what the teacher wants. It's all in how you present the situation.
8. Not all students are misusing AI.
If you suspect that some of them are, then, yes ... some of your students are probably using AI in ways that you don't want them to. But not all of them are. In fact, the ones who aren't likely have a stronger moral compass; they want to actually learn -- and better themselves -- and do things right. Broad, sweeping action against "AI cheating" can have negative effects on students who are trying to do right. It labels them as "cheaters" -- either explicitly, or implicitly by imposing unjust policies or actions on them. It hurts to be called a cheater -- and it hurts even more when you aren't cheating.
9. Way before AI, students were finding ways to avoid work they found irrelevant.
Let's go back in time -- before AI technology started to take off. When 1:1 computing started to be used in schools, the computers got lots of the blame for students avoiding work. But even before laptops and Chromebooks, students were trying to avoid work that they saw as irrelevant or unimportant. This isn't an AI issue. It's a relevance and motivation issue. (More on that below.)
10. You will always have students trying to avoid work as long as compulsory education exists.
When we, as a society, mandate that students go to school -- as long as school is required -- we'll have students who don't want to do it. It's human nature -- especially for teenagers -- to push back against something we're being forced to do. (Let's be honest ... this is an issue for adults, too. Have you ever watched teachers at a staff meeting?!?) Sure, we can look at the technology that students use to avoid work (AI, internet, etc.). But we also need to look at the reason they want to use that technology in the first place.
11. This is highlighting problems in education that go way beyond AI.
AI is just another lens we're using to look at problems we've had in education for ages. We saw it with remote teaching during the COVID-19 pandemic, too. We've had issues with student motivation ... with equity ... with the reason that we send students to school in the first place. We're just shining a light -- and using a magnifying glass -- to look at cracks in the foundation that have been there for a while -- and are getting worse.
12. When AI cheapens the answer, focus on questions.
So, what can we do? First, we can address a reality that AI creates. AI assistants like ChatGPT cheapen answers. It's easier to come up with answers than it ever has been. (And providing answers? That's at the heart of traditional education. And when students can provide what traditional education requires -- and do it more quickly and painlessly with AI? They're going to do it.) When AI cheapens the answer, let's instead focus on questions. Questions are becoming more and more valuable with AI systems that can provide answers and explain things to us.
13. When AI cheapens the product, focus on the process.
Here's something else that AI cheapens: the final product. The artifact of learning that students turn in to the teacher so we can see how they think ... how much they've learned ... how they're developing. If the student hasn't created that product -- if it's not the result of their thinking and skill -- then it does us no good. (We don't need to know how ChatGPT would do a task. That doesn't help us assess the student!) When AI cheapens the product, let's focus on the process. Build in opportunities for students to talk about -- and reflect on -- the process of getting to the final product. Ask questions like:
- What did you find most interesting in this?
- Was there a part in the writing process where you struggled?
- If you could do this all over again from scratch, what would you do differently?
14. We need to be crystal clear about what our expectations are.
It's a little too easy -- and highly unhelpful -- when teachers tell students: "Do this activity ... and don't use AI to cheat." It's so vague ... and that puts the student in the position to make all of the judgment calls about what is and isn't appropriate use of AI. That's a tough (and really unfair) position to put students in.
- First: If students have to choose whether to do something that'll shorten an assignment they don't want to do -- will they really choose the most academically honest option ... the one that's best for their learning?
- Second: It also means the teacher doesn't have to take the time to understand the AI systems and create clear guidelines on how they should (and shouldn't) be used. That work is put on the student -- who probably doesn't understand the AI systems either!
- Third: It puts the teacher in an unfair position of power to punish students for using AI in ways the teacher doesn't like -- even though the teacher never spelled out how it should (and shouldn't) be used.
Instead, as teachers, let's do the responsible thing and try our best to give students concrete guidance on appropriate AI usage.
15. To be crystal clear, we need to understand how AI systems work.
If we want to get crystal clear about what our expectations are (above), it needs to start at the fundamental level. As educators, we need to understand how AI systems work. We don't need a computer science degree. We don't have to use all of the features of all of the AI tools. But we have to have some core AI literacy -- so that our conversations and requirements for students around AI are rooted in fact. If teachers don't have any sort of AI literacy training, then they have to make decisions about AI and classwork and academic integrity on (for lack of a better term) vibes alone. On what they've seen on the news. Or heard from their friends. Or (yikes!) read on Facebook. If AI is already impacting the classroom -- and is going to impact our students' lives -- then teachers need to better understand it.
16. Students need their teachers' guidance.
We need to be willing to turn our attention away from being the cheating police -- and toward becoming a trusted advisor on how to use this technology well. As educators, it's up to us to at least try out this technology that is impacting student work -- and to understand it, at least on a basic level. Even if we don't understand AI deeply, we can share our insight on what we like (and don't like) about its application to learning and work in certain circumstances. If we don't take this opportunity, students will be fumbling in the dark when it comes to AI (or worse ... will learn about it from TikTok!). Instead of avoiding the conversation (because we're afraid of being wrong or making a mistake), let's enter into it and do our best to help students make smart decisions.
17. Instead of shunning AI, design it into lessons.
Teachers don't need to design it into ALL lessons. We don't need it every time. But if we believe that AI can be used to support student learning, then let's make it a purposeful part of learning tasks -- where it makes sense.
- Have students discuss with an AI chatbot that is trained to have an opposite perspective. (A minimal sketch of how that "training" works follows this list.)
- Encourage students to brainstorm big-picture ideas for inspiration before beginning to write.
- Provide an opportunity for students to use AI to fill in holes they might not have considered in their work.
- Get students to interact with an AI chatbot that helps them reflect on their work afterward.
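To make that first idea concrete: "training" a chatbot to take an opposing view is mostly a matter of giving it standing instructions. Here's a minimal sketch in Python using the OpenAI SDK -- the model name and the wording of the instructions are placeholders I've made up, and classroom AI platforms let you do the same thing without writing any code.

```python
# A minimal sketch of an "opposite perspective" debate partner.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and instruction wording below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

DEBATE_PARTNER = (
    "You are a debate partner for a high school student. Whatever position "
    "the student takes, politely argue the opposite side. Keep replies under "
    "100 words and end each one with a probing question."
)

# The conversation starts with the standing instructions as a system message.
messages = [{"role": "system", "content": DEBATE_PARTNER}]

while True:
    student = input("Student: ").strip()
    if not student:  # a blank line ends the session
        break
    messages.append({"role": "user", "content": student})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your tool provides
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Chatbot:", reply)
```

Notice that all of the "training" lives in that one system message -- which is why classroom AI tools can expose it as a simple text box instead of code.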
18. One of our best options might be to show students how to use it responsibly.
We can try to fight AI cheating. We can try to beat students at their own game. But if that seems like an exercise in futility (it is), then we can try something else -- putting our emphasis on the positive instead of the negative.
Instead of battling "AI cheating," let's show students how AI can be used responsibly as part of their work. (And let's also talk about the reasons why we wouldn't use it in certain ways.) When we show students how it can be used responsibly, it's like the tide that comes into the harbor. All ships rise. When we create a culture of using AI responsibly -- and emphasizing and nurturing the importance of our humanity -- we can help students see a vision of how it should be used -- instead of trying to just police the ways it shouldn't be used.
19. Remember that AI use does not equal ChatGPT.
A quick tech reminder: when we talk about students' use of AI, it doesn't just mean "let them use ChatGPT." I think lots of folks -- especially those who aren't in education -- miss that point.
We have lots of AI tools built for students -- tools with additional guardrails for safety, with data privacy protections, and with additional tuning to support classroom learning. Working within these tools mitigates many of the risks of using a commercial, public-facing tool like free ChatGPT (which students might be using in the absence of a tool developed for student use).
And teachers can train AI chatbots to interact with students in certain ways -- to emphasize certain content and avoid certain behaviors. (By the way, "training AI chatbots" just means writing instructions in plain English. Any teacher can do it and it doesn't require advanced training.)
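Here's a hypothetical example of what those plain-English instructions might look like, written for a high school Spanish class. In most classroom AI tools you'd paste text like this into a setup box; in a script like the sketch above, it would become the system message.

```python
# Hypothetical "training" for a classroom chatbot -- it's just written
# instructions. Paste text like this into your tool's setup box, or use
# it as the system message in the earlier sketch.
TUTOR_INSTRUCTIONS = """
You are a Spanish practice partner for second-year high school students.
- Reply mostly in simple Spanish a second-year student can understand.
- Never translate a whole sentence for the student. Give a hint or ask
  a guiding question so the student does the thinking.
- Stay on the current unit (travel vocabulary). If the student brings
  up anything unrelated, gently redirect them to the task.
- Never write the student's assignment for them.
"""
```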
Again, nuance. Understand the tools and the technology to make better decisions for students.
20. We need to focus on relevance and student motivation.
I don't want to paint teachers as irrelevant or disconnected from what motivates students. I've taught for more than a decade in public school classrooms, and I know the struggle of keeping students motivated and making learning relevant. It feels impossible sometimes.
But if we want to focus our effort and attention on something that's going to work -- that's going to create results? I think putting our efforts toward relevance and student motivation is crucial.
When students feel that learning is relevant -- that it's connected to their interests and their goals -- that means a LOT.
And when students are motivated -- when they know why they're doing something, how it's going to improve them, and they are in sync with the teacher about its importance -- that is a catalyst to great learning.
It's hard work. And we can't get it right 100 percent of the time. But if we want to spend our time improving the situation, then thinking about relevance and motivation is much more productive than fighting against cheating.