Webinar Recap: What Educators Get Wrong About AI (And How to Get It Right)

October 28, 2025 · Author: Selina Bradley · Read time: 5 min


Instructors feel overwhelmed. Administrators feel the pressure from all sides. Students are already using AI, with or without guidance.

The AI conversation in education isn’t just loud; it’s confusing, full of hype, and riddled with fear.

When we polled the hundreds of educators who attended our live webinar, “What Educators Get Wrong About AI (And How to Get It Right),” the top concerns were clear: academic integrity and the “de-skilling” of students.

To address these critical issues, we launched our new five-part Human-Centered AI Series, starting with this foundational session. Our goal is to cut through the noise and replace hype with a shared understanding. Our speakers, Packback CTO Dr. Craig Booth and former educator and Senior Product Manager Barbara Kenny, provided a mental model for leading on AI rather than just reacting to it.

Missed it? You can watch the full on-demand recording here. Or, if you only have five minutes, here are the four biggest takeaways you need to build an intentional, evidence-based AI strategy.

Key Takeaway 1: AI Is Not One Thing (Not All AI is Generative AI Like ChatGPT)

One of the biggest mistakes educators make is using “AI” and “ChatGPT” interchangeably. Why does this distinction matter?

A Venn diagram illustrating the nested relationship of AI concepts. From largest to smallest: Machine Learning, Neural Networks, Deep Learning, and Gen AI (Generative AI). Expert Systems is a separate, overlapping circle. A legend indicates Rule-Based, Predictive, and Generative categories.

AI is a massive field, and you’ve been interacting with it for decades. In the webinar, Dr. Booth and Barbara broke down the history of AI into four main categories that still exist today:

  • Expert Systems (1960s): These are simple, rule-based “if-then” systems. Think of the 1960s chatbot ELIZA, which was designed to act like a therapist by following grammatical rules to turn your statements into questions (see the sketch after this list).
  • Machine Learning (1980s-90s): This is AI that learns from patterns in data. You use it every day. It’s the spam filter in your inbox, the fraud detection on your credit card, and the recommendation engine on Netflix.
  • Deep Learning (2010s): A more advanced form of machine learning that uses “neural networks” inspired by the human brain. This is the technology that allowed a Google AI to “teach itself” to identify cats in YouTube videos just by analyzing millions of unlabeled images.
  • Generative AI (2020s): This is the new kid on the block. It’s a subset of deep learning. As Dr. Booth puts it, GenAI models (or LLMs) are “extremely turbocharged autocompletes.”
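
To make “rule-based” concrete, here is a minimal, hypothetical sketch in the spirit of ELIZA. The patterns below are our own illustration for this recap, not the original 1966 script, which used a far larger set of hand-written rules.

```python
import re

# Hypothetical if-then rules in the spirit of ELIZA (our own illustration).
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(statement: str) -> str:
    """Apply the first matching rule to turn a statement into a question."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when no rule fires

print(respond("I feel overwhelmed by AI."))
# -> Why do you feel overwhelmed by AI?
print(respond("My students already use it."))
# -> Tell me more about your students already use it.
# Note the clumsy second reply: the system follows rules; it understands nothing.
```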

This 5-Minute Recap Barely Scratches the Surface

Understanding these building blocks is the first step to moving from fear to intentional, evidence-based decision-making. Watch the full on-demand webinar now to stop navigating the hype and get the clear, foundational model you need to lead.

Key Takeaway 2: LLMs are “Compelling Illusions”

So, what is a Large Language Model (LLM) really doing?

It is not “thinking.” It is not “knowing.” It is not “understanding.”

It is a deep learning model trained on vast volumes of text to do one thing: predict what word comes next.

When you give it a prompt like “The first person to walk on the moon was…,” it doesn’t know the answer. It’s answering a statistical question: “Given the text I’ve been trained on, what word is most likely to follow that sequence?”
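
To make that prediction concrete, here is a toy bigram model of our own, vastly simpler than a real LLM, which conditions on the entire preceding context with a deep neural network rather than on a single previous word. The mechanics, though, are the same in spirit: count what tends to follow what, then emit the likeliest continuation.

```python
from collections import Counter, defaultdict

# Toy "training corpus": two sentences. A real LLM trains on trillions of words.
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "everyone says the first person on the moon was neil armstrong ."
).split()

# Count which word follows which in the training text.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower: a statistical guess, not knowledge."""
    return followers[word].most_common(1)[0][0]

# "Autocomplete" from a prompt word, one likeliest word at a time.
word, output = "moon", ["moon"]
for _ in range(3):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # -> moon was neil armstrong
```

The model prints the right answer, but only because that sequence was frequent in its training text. It has no concept of the moon, of walking, or of Neil Armstrong.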

The result feels like magic, but as Dr. Booth shared from a foundational paper by Murray Shanahan, it’s a “compelling illusion.”

Key Takeaway 3: AI Has a “Jagged Frontier”

The fluency of LLMs hides their flaws. Because they are statistical models and not thinking beings, they are often confidently wrong.

This is AI’s “jagged frontier.” It can generate a fluent essay on Shakespeare but fail at a simple logic puzzle. Barbara Kenny highlighted three key failure points:

  1. Hallucinations: AI “makes things up” with 100% confidence. In one infamous example, ChatGPT insisted the word “strawberry” has only two ‘r’s (it has three; see the one-liner after this list).
  2. Lack of Common Sense: AI doesn’t understand physics or cause and effect. Barbara shared an AI-generated video of a person “blowing out” birthday candles, but the flames didn’t extinguish. It’s a “plausible” image that’s physically wrong.
  3. Embedded Bias: An AI trained on the internet will reproduce the internet’s biases. Studies and user tests show these models often reinforce gender and racial stereotypes, associating “doctors” with men and “nurses” with women, even when it’s not prompted to.
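
That letter-counting miss also previews Takeaway 4: counting is a deterministic task, and an ordinary program (a hypothetical aside of ours, not something from the webinar) gets it right every time because it computes an answer instead of predicting plausible text.

```python
# Deterministic code counts exactly; a statistical text predictor doesn't
# count at all. It predicts likely words.
word = "strawberry"
print(f"'{word}' has {word.count('r')} r's")  # -> 'strawberry' has 3 r's
```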

These flaws are precisely why a clear, human-centered AI policy is non-negotiable.

Key Takeaway 4: Match the Tool to the Task (and Keep Humans in the Loop)

This is the final, crucial takeaway.

  • If AI is not one thing (Takeaway 1)…
  • And if the most popular new AI is a “compelling illusion” (Takeaway 2) that fails in unpredictable ways (Takeaway 3)…

…Then you must match the right tool to the right task.

As Barbara explained, there is a core tension: as AI models become more complex (from simple rules to generative language), they feel more personalized, but they become far less transparent.

You can’t use an opaque, “black box” LLM for high-stakes, consequential decisions like grading. This is where educators must be kept in the loop. At Packback, our philosophy is that AI should supplement, not substitute, human judgment.

This was just Part 1. Now that you have the foundation, it’s time to build your ethical framework. Register now for Part 2: Teaching with Integrity in the Age of AI, where we partner with The League for Innovation in the Community College to move from theory to practice.

Live webinar: Human-Centered AI Series Pt. 2, “Teaching with Integrity in the Age of AI: How to Build Your Ethical Framework.” Thurs., Nov. 20, 2025, 12 pm CT / 10 am PT. Register now.