Setting school policy about AI: A cautionary tale

Artificial intelligence | Tuesday, March 14, 2023

How can we implement AI in schools responsibly? How can we set policies fairly? This story can give us some guidance.


In February 2023, a Florida high school found that students were using artificial intelligence assistants like ChatGPT to do their schoolwork.

The response, according to news reports: students could face "more severe consequences" if they didn't admit to using AI in their work, and they could be barred from graduating.

Lots of schools are confronting this issue right now: How much should students use artificial intelligence (AI) for learning, and what does responsible use look like?

We're in uncharted territory right now with AI. We've seen disruptive innovations before, but nothing that quite compares in scope and magnitude.

It's pretty clear that we need a couple of things right now more than ever:

Transparency. And lots of conversations.

In this post, we'll look at this school's reaction to students' use of AI -- and some steps we all can take to move forward in a positive direction.

Full disclosure: Most of my details come from local and national news reports. I'll do my best to provide context and nuance, but really, this just provides an example of what schools everywhere are experiencing.

Details about the Florida high school's reaction

From what I've read, the school in question, Cape Coral High School, is part of the International Baccalaureate (IB) program. Students can opt to take IB courses to earn a special diploma designation.

"All Student Scholars are held to the highest standard of academic excellence and personal behavior," the school's student handbook states. "The role of academic work is to teach skills, provide content knowledge and allow for intellectual growth needed to be career and college ready."

As such, when teachers saw that student work looked suspiciously different from the style of previous work, they started discussing it. 

The school's IB coordinator sent an email to families of high school seniors addressing the concerns. Some excerpts from the email (the full email is available in this local news report):

  • Recently the use of AI generators has become a major concern. The use of AI generators is a violation of our academic integrity policy.
  • There have been some IB papers submitted that are questionable in a few ways, including being very different styles of writing from previously submitted papers.
  • We are using AI detectors and our own investigation of Chromebooks to verify the authenticity of suspicious student work.
  • I have asked students to please speak to me privately if they have used AI generators for their papers so we can correct the issues now. If students do not come forward on their own there will be more severe consequences if misconduct is found.
  • Our teachers must authenticate all student work prior to submission to IB. If they are unable to authenticate a student's work then the student will not have successfully completed the IB program. If the work gets to IB and IB finds that the work is not authentic then the student has not successfully completed the program, which means the student has not earned a high school diploma.
  • Please have a conversation at home about academic integrity and the use of AI generators with your student(s).

The school district released this statement to the local news station:

We do not tolerate cheating. Students who violate the Code of Conduct and Academic Integrity Policy will be disciplined. As part of our ongoing cybersecurity efforts, our Information Services team continues to strengthen Chromebook security features to block the use of AI from aiding any student work.

Here's how the president of the county teachers association, Kevin Daly, weighed in on the situation (according to the local news station):

I’m not sure students’ learning needs are best served by using this device. I’ve never heard about it in the educational setting. It seems like it has the ability to be misused.
There’s no way to compare it against anything in TurnItIn because in and of itself, it’s an original source. I think the system is going to be at a disadvantage for a considerable amount of time until we are able to get some reasonable kind of firewalls and guardrails to kind of see that this is not misused and students are doing original work.

After this story broke, International Baccalaureate issued a statement about the use of ChatGPT and artificial intelligence. In part, it suggested that we need to adapt to these tools, which will be part of our everyday lives going forward. However, the IB doesn't recognize work created by AI tools as a student's own work, and any such work must be cited in the bibliography. Read more about this in the IB statement and in this article by IB's head of assessment principles and practice.

Observations that can guide us going forward

The purpose of this post isn't to kick dirt on this school and its handling of this situation. I'm sure it's full of teachers and staff who care about their students and want to see them succeed. Plus, when a situation like this arises, some of the earliest to take action seem to come under the most scrutiny. They were certainly in a very difficult situation, and I don't know all of the people and influences at play in the decisions that were made.

We can learn a lot by analyzing this situation.

Really, it can serve as a cautionary tale as we try to find an appropriate and responsible place for AI in education.

1. Appropriate use of AI was unclear under school policy.

The school's academic honesty policy in the student handbook leaves lots of gray area open to interpretation. It's worded much like these policies in student handbooks everywhere. Here's an excerpt:

It is the responsibility of every student to complete their own work on assignments, tests and quizzes and not copy another student’s work and submit it as their own, which is called plagiarizing. The consequence for plagiarizing an assignment, test or quiz will result in a grade of zero (0).

The email from the IB coordinator stated:

Recently the use of AI generators has become a major concern. The use of AI generators is a violation of our academic integrity policy.

Definitions matter. This section of the handbook defines plagiarism as "copy(ing) another student's work and submit(ting) it as their own." We're talking about intellectual property here, and according to current definitions (as of publication of this post), intellectual property is the "product of the human intellect."

ChatGPT (as of publication of this post) will tell you that its responses are "generally considered to be in the public domain." That is, unless its responses are generated under the terms or conditions of a paid service or contract.

Definition of public domain: "the state of belonging or being available to the public as a whole, and therefore not subject to copyright" (from Oxford Languages via Google)

By the definitions above, the students' use of AI-created responses is not plagiarism.

Is this what we want for students -- mindlessly plugging classwork prompts into an AI assistant without thinking or learning? No. I think we can agree it is not.

But it's also hard to play by the rules when they're very unclear. 

Now, granted, I don't know what was said or written to students outside of the student handbook and the email published in local news. The school might have addressed this elsewhere. Plus, it's next to impossible to revise a school's handbook in the middle of the school year to address a brand new form of technology that is impacting education in unprecedented ways.

But it's a tough pill to swallow when you're being accused of cheating under a policy that doesn't make the rules clear.

What can you do? Have lots of conversations about what responsible use looks like. Talk to students, teachers, administrators, parents, community members ... really, anyone with a stake in the students' learning. Talk to business owners who may employ students. Then, after lots of conversation, revise school policy to reflect what you've learned.

2. The standard of "doing the work yourself" is murky anyway.

A common rebuttal to all of this is: "Students should have known better. They need to be doing the work themselves."

That's a really unclear standard to hold students to as well. 

For example, we already use search engines to look up basic facts -- and that's a common practice in the classroom now.

Plus ... What are research papers? They're a compilation of existing research. We borrow (ahem ... "cite") from other research papers to create a new, unique work. (Well, sort of new and sort of unique.)

"Authenticity" and "authentication" were repeated several times in the email from the high school. 

  • We are using AI detectors and our own investigation of Chromebooks to verify the authenticity of suspicious student work.
  • If they are unable to authenticate a student's work then the student will not have successfully completed the IB program (...) which means the student has not earned a high school diploma.

We need to be clear about what we consider to be authentic work and what expectations are for students -- especially in the changing face of future work. We also need to be sure that the work we're asking them to do is really preparing them for the future they'll face.

The IB class where the students used AI in their work, according to the email sent to parents, was the Theory of Knowledge class. I never took or taught this class, so I'm still learning about it ... but it sounds like a perfect place to have these types of conversations with students.

What can you do? Discuss with students (and with teaching staff) the balancing act between using/remixing/editing existing resources and creating from scratch. Identify times when it's responsible and when it isn't. Identify how it is and isn't being done in the real world. Then, use the results of those discussions to guide policy and practice in school.

3. Policies and probing come before punishments.

It's clear that AI assistants and other generative AI tools created a scenario that this school's policies did not envision. When school leaders saw something that didn't sit well with them, they could have taken several steps.

Those leaders could have identified what they didn't like about it and put mechanisms in place to help students know how to proceed in the future. In this case, in a vacuum of information, students chose their course of action.

Or those leaders could punish students for what they didn't like, even though it wasn't specified that students shouldn't take those actions.

When things are unclear, punishing what you don't like won't solve the problem. 

  • It ruins relationships when students are punished for a violation they didn't know they committed.
  • It's especially ruinous for the future when students know that punishments are being handed out and they don't know why. They walk on eggshells and carry anxiety worrying that they'll be accused of something -- even if they want to act appropriately.
  • It's a rash action to take when you can't put your finger on what you don't like.

What can you do? Gather information, perspectives, opinions, etc. before taking action -- especially punitive action.

4. If you are going to take action, be sure you have accurate information.

The email from the Florida high school stated: "We are using AI detectors and our own investigation of Chromebooks to verify the authenticity of suspicious student work."

The school district's statement said: "As part of our ongoing cybersecurity efforts, our Information Services team continues to strengthen Chromebook security features to block the use of AI from aiding any student work."

It's been well documented that the existing AI detectors are fraught with inaccuracy. They routinely produce false positives, saying human-created text was created by AI, and false negatives, saying AI-created text was created by a human.

Using AI detectors to punish students for cheating can be a losing proposition. You would need ironclad proof, much like evidence in a court of law. If your proof isn't valid, it's hard to academically convict a student of something you're not sure they did. AI detectors don't provide ironclad proof.
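To see how shaky that proof is, here's a minimal back-of-the-envelope sketch in Python using Bayes' theorem. Every rate in it is a hypothetical assumption for illustration -- not a measured figure for any real detector -- but it shows how quickly false positives stack up when most students are honest:

```python
# Back-of-the-envelope: how much should we trust a single "AI detected" flag?
# All numbers below are hypothetical assumptions, not measured rates for any real tool.

prevalence = 0.05          # assume 5% of submissions are actually AI-written
true_positive_rate = 0.90  # assume the detector flags 90% of AI text
false_positive_rate = 0.10 # assume it wrongly flags 10% of human text

# Bayes' theorem: of all flagged papers, what share is really AI-written?
p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))
p_ai_given_flag = (true_positive_rate * prevalence) / p_flagged

print(f"Chance a flagged paper is actually AI-written: {p_ai_given_flag:.0%}")
# With these assumptions: about 32% -- roughly two of every three flagged
# papers would belong to honest students.
```

Under those made-up numbers, even a detector that sounds accurate would generate far more false accusations than true ones. That's exactly why a flag alone isn't ironclad proof.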

Using an investigation of student Chromebooks likely won't help, either. Proving that a student has visited an AI assistant only shows correlation and not causation. It might show that the student has been on the site, but it doesn't show that the student used the site to copy and paste work for class.

Blocking public sites we don't want students using is futile. Let's be honest. If students are going to use an AI assistant to help with classwork -- especially if they know they're being surveilled by the school -- they won't use their school-issued device to do it.

What can you do? If the school or district decides to block AI tools like ChatGPT, it should have a good reason to do so. But that's only one piece of the puzzle. Blocking must also be part of a bigger decision about the role AI plays in learning and how it should be used responsibly in school.

5. Keep the future in mind.

The high school's vision, according to the student handbook: "Every student Future Ready."

If that vision is a slogan meant to guide the school's overall strategy and day-to-day operations, I don't think this decision prepares students for the future of work.

The teachers union president said: "I think the system is going to be at a disadvantage for a considerable amount of time until we are able to get some reasonable kind of firewalls and guardrails to kind of see that this is not misused and students are doing original work."

I discuss the idea of seeing school through "tomorrow glasses" in my book, AI for Educators. It's easy to look at school through "today glasses" because it's based on lived experience. Data we've collected. The past. We can make decisions based on what has happened before when we use "today glasses."

What our students need are educators who look through "tomorrow glasses."

"Tomorrow glasses" attempt to see the world as it will be. Notice I said "attempt to see." That's because using "tomorrow glasses" is imprecise. It's based on predictions, analyzing what information we have and anticipating what's going to happen.

Above, the teachers union president wanted to ensure that students are doing "original work." What's the definition of "original work" when we look through "tomorrow glasses"? Adults are using AI assistants in their daily work right now. They aren't generating brand new original work by hand every time. They're finding responsible ways to use AI to help them do their work -- in an authentic way -- so they can get more done. And they're still thinking and engaging in the process while doing so. That might be a better definition of "original work" when looking through "tomorrow glasses."

It's messy. It's hard. We'll get it wrong sometimes when we use "tomorrow glasses." But it's our only hope to prepare students as best as we can for this world they'll live in. 

What can you do? Ask yourself, "How does this decision prepare students for the future they'll live in? How does it just maintain the status quo from the present -- or even the past?"

6. Don't put students at a disadvantage.

The academic integrity statement by the IB, published in October 2019, states: "Results cannot be fair if some students have an unreasonable advantage over others."

This is only one side of the issue. And, of course, the IB and the schools that use its program have a lot at stake in protecting it. They want to preserve the acclaim of the IB program so that schools and, in turn, students can earn the rights and privileges of having graduated from an IB program.

Here's the other side of the issue, though ...

Students are being shielded from a technology that will be commonplace in their future workplaces. Those students are at an unreasonable DISadvantage compared to others.

Sometimes, in schools, we think we can create an artificial bubble where things we don't like just don't exist. And sometimes we can. We can shield students from certain things while they're on school grounds. 

Artificial intelligence isn't one of those things. It's available everywhere. It's even available inside the artificial bubble we're trying to create, when students access it on their phones -- or use their phone's hotspot for wifi that isn't subject to the school's internet filter.

If we stand our ground and try to maintain our bubble, it starts to create an equity gap -- between the students who have cell phones and cellular data (and know how to access and use AI) and those who don't.

The equity gap gets even worse if we stand our ground on our artificial "no AI" bubble. 

Some schools will become models for appropriate AI use. Teachers will show students how it can be used ethically and responsibly. Those students will understand how AI works and what it can do. They'll get practice with it.

Students at other schools with an artificial "no AI" bubble? They won't get those opportunities. 

Fast forward five years. Or ten. Which students are at an advantage or disadvantage? The disadvantage argument was used earlier to say that AI tools shouldn't be used in the classroom. But now, looking at the long term, this feels like a much bigger disadvantage.

What can you do? Teachers can brainstorm and try new instructional practices that make appropriate use of artificial intelligence. Then, they can reflect on and evaluate the effectiveness of those practices to refine them. School leaders can empower teachers to try those new teaching practices -- and provide them with professional development that equips them to use AI effectively.

Closing: Academic integrity is still crucial.

This article isn't an argument for letting students mindlessly do all of their work with artificial intelligence just because it's easier and faster and available to them.

I hope it helps us begin a conversation about what academic integrity looks like in a world where AI exists. 

This issue doesn't have a clear-cut solution. And any solutions we come up with will require nuance -- and adjustments as technology changes and our world adapts to it.

If we don't like our present-day reality, the best course of action isn't to start punishing students who are perpetuating what we don't like.

We need to stop. Observe. Learn. Envision the future. And we need to continuously ask ourselves: "What's best for the students ... and what best positions them for the future?"

We have to be willing to act. And be willing to get it wrong sometimes, because that's what is bound to happen when we use our "tomorrow glasses" to plan for our students' future. 

In the end, we have to realize we're all in this together and we have to work together to find our best steps forward.

  • JF says:

    As an IB graduate, I took Theory of Knowledge and the papers required in that class focus deeply on personal opinion and philosophy. My concern with AI being used to create work for this class is that it negates the very purpose of the class. ToK aims to have students think deeply about their perspectives. Utilizing a tool to write a paper about your own perception is really an inaccurate representation. One of the goals of the IB program is to produce deep thinkers – and utilizing AI to generate essays on your own thoughts is basically the opposite. It’s also an international program with high pressure and difficult requirements and it’s absolutely unfair for some students to have an advantage. Papers are graded internationally by multiple graders through the IB and submitted to a rigorous standard far beyond the typical grading process. We were frequently reminded that submitting work that was not our own was grounds for dismissal from the program – as it should be. Obviously discussions on the ethics of AI are important AND we also need to hold students to a high standard of accurately representing their own work.

  • Marc Bernier says:

The difference between adults using AI in their fields and students using AI to do their work is that the adults have some background through which they can filter the information and work provided by the AI. Students need to learn how to analyze and synthesize data to create a product that a teacher, professor or supervisor can be confident is their own work so that the person can then evaluate that work and help the student see areas in need of improvement. Even the use of search engines with simple cut-and-paste functions has cut into the students’ ability and willingness to interact with and think about the information they are providing and claiming an understanding of. Prompting and passing material in is a far cry from researching, reading, comparing, analyzing, and creating a product. Helping students realize that the information they are quoting from is often inaccurate and/or incomplete is something we already struggle with in our digital world. Common use of AI is going to make this a much more difficult problem to address when teachers are unable to even tell from where the information was garnered.

    What sense of accomplishment or pride can one take from prompt-to-publish material?

  • Jason says:

    Much of this seems focused on the essentially ‘legal’ wording of the school’s attempt to address the problem. While I don’t disagree that there could be some improvements in their policy articulation, focusing on a handbook that the majority of students and parents do not read until after they have already committed an offense may not be the best avenue to correcting the true issue at the heart of this situation.

    When students take short-cuts in their academic endeavors, it can be due to lack of understanding possibly, but overwhelmingly it is because grades are the tail that wags the dog from the perspective of students and parents. Skills acquired and content learned generally are not a high priority for most of these folks, as their objective is just the golden ticket to get through the hoop in order to proceed on to whatever their actual goal may be (ranking in class, gpa, university admission, career path, etc.).

    The best way I know to combat this is simply communication. If the teacher can emphasize the true goal of each assignment, lesson, assessment and unit and how it may benefit the student, then hopefully the student can better appreciate the “why” and possibly focus on something more than just the hoop of a grade in order to better support the ‘future’ version of themselves. I may be wrong (and generally am), but I would venture to guess most folks in education got into the gig with the best of intentions for helping youth achieve and fulfill their potential. I doubt dealing with the minutia of grades and consequences of policy violation ever inspired anyone on either end.

    Parents, on the other hand, are probably a lost cause as their malleability is most likely well expired.
