The AI hype is everywhere -- on TV commercials, on social media, and even on billboards.
"AI will save you time. AI will automate this and that."
The pundits are all over AI, too, making predictions about how it'll revolutionize the world and the workforce. (And I might scream if I hear one more person say "AI won't take your job, but someone who uses AI will.") 😱
AI is making its way into classrooms ... schools ... education. It has its promises, but there are perils, too.
In an effort to balance out the AI hype in education, Ken Shelton and Dee Lanier wrote the book The Promises and Perils of AI in Education: Ethics and Equity Have Entered the Chat.
Promises and perils?
Promises: "On the one hand, AI holds the potential to democratize knowledge, personalize learning, and bridge socioeconomic divides. It can offer adaptive instruction, tailored feedback, and even virtual tutoring, potentially leveling the playing field for students from diverse backgrounds."
Perils: "[...] AI could worsen existing inequalities or create new forms of disadvantages if we don't prioritize equity and justice. [...] scrutinizing its potential impact on power dynamics and resource distribution within the educational landscape."
I got an early copy of this book and started digging in. This isn't a book you can knock out in an afternoon. It makes you think. The topics and commentary aren't light. But it touches on lots of concerns about AI in education that we should be considering.
Here are five things I drew a star next to in my book. (Note: Headlines and commentary are mine, while quotes in italics are directly from the book.)
1. AI bias can impact students and schools in ways we might not expect.
AI could impact who gets hired next at your school -- in negative ways that you might not expect. More and more businesses (including educational institutions) are using AI-driven programs to sort through the resumes of prospective job applicants.
Quietly, in the background, AI models prioritize certain things. You start to notice those priorities in the results they produce.
Example: Search for "professional hairstyles" and "unprofessional hairstyles" in a search engine.
From the book: "Pay attention to hair texture, length, styles, and the race/ethnicity of the people modeling those hairstyles."
"If an algorithm used for hiring processes is trained on datasets that equate 'professionalism' with certain hairstyles, you've built a biased system right from the start."
If AI bias is potentially shaping hiring decisions, we have to ask ourselves where else it comes into play across educational institutions.
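To make that concrete, here's a toy sketch (my own illustration, not from the book) of how a "model" trained on biased labels just echoes those labels back. The hairstyles and labels below are hypothetical:

```python
from collections import Counter

# Hypothetical, deliberately biased training data: (hairstyle, labeled "professional")
training_data = [
    ("straight", True), ("straight", True), ("straight", True),
    ("locs", False), ("locs", False),
    ("afro", False),
]

# "Training" here is just counting how often each hairstyle got the label
professional_counts = Counter()
totals = Counter()
for hairstyle, labeled_professional in training_data:
    totals[hairstyle] += 1
    if labeled_professional:
        professional_counts[hairstyle] += 1

def predict_professional(hairstyle: str) -> float:
    """Return the learned probability that a hairstyle is 'professional'."""
    if totals[hairstyle] == 0:
        return 0.5  # no data, no opinion
    return professional_counts[hairstyle] / totals[hairstyle]

print(predict_professional("straight"))  # 1.0 -- the bias in the labels, echoed back
print(predict_professional("locs"))      # 0.0 -- biased in, biased out
```

No real hiring system is this simple, but the failure mode is the same: the model can only learn the patterns (and prejudices) baked into its training labels.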
2. Keep humans in the loop in high-stakes decisions in education.
Admissions in schools. High-stakes testing. School leaders have lots of big decisions to make.
Even little decisions matter. Every day, teachers choose what to highlight in lessons with students. They choose what feedback to offer students on their assignments.
When teachers let AI do too much of that work for them, it removes humanity from the equation.
It also takes the decision-making out of human hands.
From the book: "While AI promises invaluable insights, retaining meaningful human oversight remains essential for ethical application in high-stakes educational decisions."
"For consequential decisions like tracking into advanced or remedial paths, AI informed human discernment is imperative. Beyond performance data, human guidance advisors incorporate motivations, challenges, and goals. Multifaceted success cannot be reduced to numbers.
"Humans must be empowered to override flawed AI."
3. AI doesn't help us break down historical prejudices. It only reinforces them.
"An AI's foundation is its data and if that foundation reflects historical prejudices, the system echoes them, warping potential into a tool for perpetuating injustice."
Everybody loves it when AI predicts things and creates things for us. But when it does, it's using pre-existing data.
What happens when that pre-existing data is harmful to others -- or it's skewed based on injustice?
"This biased and replicated pattern of interpretations of behavior and desires for compliance and conformity trigger disciplinary actions, suspensions, and ultimately feed into criminal records.
"Now, imagine AI systems unknowingly becoming cogs in this unjust machine -- recommending biased educational materials, being used to determine learning pathways for students, being used as an 'intervention mechanism,' or reinforcing stereotypes during student assessment.
"This calls for decisive action."
In my notes in the margin, I wrote: "Bad actions become bad data, which fuel bad responses by AI."
4. AI poses personal and academic threats to students.
The harms that unchecked and misapplied AI can do to students are all over the place ...
- AI detectors. They're flawed (and there are much better ways to respond to suspected AI use than detectors). "AI detectors aimed at preventing cheating can lead to false accusations."
- Deepfakes. Images, videos, and audio can be created that put words in students' mouths -- or make them look like they're in compromising situations. This synthetic media conversation isn't just about artists and musicians. "We see not just stolen words and melodies, but lives potentially uprooted, reputations shattered on the altar of synthetic fabrication."
So, what do we do? Here are two steps we can take, as shared in the book ...
- Media literacy "must transcend rote antiquated lessons on plagiarism and copyright laws. It must become a shield against manipulation masquerading as progress."
- Critical thinking and questioning: "We have to equip our teachers and administrators with the latest tools to dissect, analyze, and question the very pixels on their screens."
5. Protect student data and privacy.
Personal data -- and big data -- are a commodity. They power business decisions -- and they train powerful AI models that businesses thrive on.
As data becomes more and more valuable to businesses, we have to protect our students' data and privacy.
From the book, a caution to tread carefully when using AI: "What specific AI technology am I using, and how does it process student data? Why am I using this tool, and does the purpose justify the use of potentially sensitive student information?"
"When considering AI platforms, especially popular LLMs, resist the urge to either demonize them or assume digital natives instinctively know their safe use.
"Instead, implement structured guidance. Equip students with the knowledge to avoid divulging PII (personally identifying information) while interacting with LLMs. This emphasizes the need for an ethics framework within education."