Webinar Recap: Roots of Responsible AI – An Educator’s Guide to Ethical Innovation

Tired of feeling one step behind AI? This on-demand webinar is for educators who want to move from a reactive stance of policing plagiarism to a proactive position of pedagogical leadership. The real risk of AI in the classroom is not just cheating, but the erosion of trust and critical thinking in what we call "Dead Education Theory." Discover a framework that shifts the focus back to the student's writing process, and master a five-step method for crafting ethical prompts that puts you, not the algorithm, in control of the learning outcome.
The arrival of generative AI in education has felt like a tidal wave. One minute you were focused on fostering critical thinking, and the next, you were wondering whether you were reading a student’s work or a chatbot’s.
You’re not alone.
In a recent webinar, “Roots of Responsible AI: An Educator’s Guide to Ethical Innovation,” we polled the audience, and the results were telling. When asked whether they had received enough guidance from their institutions on responsible AI use, attendees were split right down the middle. That uncertainty leaves faculty frustrated, administrators fearful of the next headline-grabbing AI misstep, and students utterly confused about what’s acceptable.
What if you could move past the fear and confusion? What if you had a practical framework to not only navigate but harness AI to create a more engaging and effective learning environment?
This isn’t just a hypothetical. It’s the new reality for educators who are shifting their perspective on AI from a threat to a tool. Packback’s Chief Revenue Officer, Iain Atkinson, and Chief Technology Officer, Dr. Craig Booth, broke down the core issues and provided a clear path forward.
The Real Problem with AI in Education (It’s Not What You Think)
The knee-jerk reaction to AI has been to lock it down and focus on detection. But as Iain and Craig discussed, this “gotcha” approach is a losing battle that erodes trust. The deeper issue isn’t just about catching plagiarism; it’s about the erosion of the learning process itself.
We’re facing the rise of what Dr. Booth calls “Dead Education Theory,” a chilling parallel to the “Dead Internet Theory” where AI bots are creating content for other AI bots, leaving humans out of the loop entirely.
“Educators are creating rubrics, prompts, assignments…using AI. Students are responding to those prompts and assignments using AI. Faculty are using AI to evaluate…and sort of never in that loop shall two humans meet in the middle,” Iain explained.
This cycle of AI-generated assignments being completed by AI and then graded by AI is a fast track to cognitive offloading, where the essential struggle of learning is outsourced to a machine.
From Fear to Framework: A Better Way Forward
So, how do we reclaim the learning process? The webinar offered a powerful alternative: a student-first, prevention-focused approach rooted in what we here at Packback call Instructional AI. This isn’t just another AI tool; it’s a pedagogical framework built on three core pillars:
- Transparency: Every piece of feedback and coaching a student receives is visible to them and their instructor, building a foundation of trust.
- Consistency: The feedback students receive is consistent across all their work, creating a reliable path for improvement.
- Prevention Over Detection: Instead of just catching plagiarism after the fact, the focus is on coaching students through the writing process to prevent it from happening in the first place.
This approach is about amplifying educators, not replacing them, and ensuring that AI serves as a scaffold for learning, not a shortcut.
The Art and Science of a Perfect Prompt
One of the most practical takeaways from the webinar was a deep dive into prompt engineering because, as Dr. Booth bluntly put it:
“mediocre prompts lead to mediocre output.”
So what does a good prompt look like? The webinar outlined a five-step process to transform a generic prompt like “Create a middle school science assignment about climate change” into a powerful, pedagogically sound tool for learning.
The secret lies in layering five elements together:
- Context: Tell the AI who it is and the background of the task.
- Task: Clearly articulate what the AI needs to do with actionable verbs.
- Constraints: Set boundaries for length, tone, and format.
- Ethics Cues: Guide the AI to be inclusive and responsible.
- Iterate: Refine your prompt based on the output.
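The layering above can be sketched in a few lines of code. This is a minimal illustration of the idea, not the webinar’s exact template: the function name, labels, and example wording are all hypothetical, and the fifth step (Iterate) happens outside the code, in your review of the model’s output.

```python
# A minimal sketch of layering the first four prompt elements into one prompt.
# The wording below is illustrative, not the webinar's exact text.

def build_prompt(context: str, task: str, constraints: str, ethics_cues: str) -> str:
    """Assemble a layered prompt from its labeled parts."""
    sections = [
        ("Context", context),
        ("Task", task),
        ("Constraints", constraints),
        ("Ethics cues", ethics_cues),
    ]
    return "\n\n".join(f"{label}: {text}" for label, text in sections)

prompt = build_prompt(
    context="You are an experienced middle school science teacher.",
    task="Draft a one-week assignment in which students analyze local climate data.",
    constraints="Keep it under 300 words, use a friendly tone, and format it as numbered steps.",
    ethics_cues="Use inclusive examples and avoid assuming students have home internet access.",
)
print(prompt)
# Step five, Iterate: review the model's response and refine these four
# fields until the output matches your rubric and your pedagogical goals.
```

The point of structuring a prompt this way is that each layer carries a distinct pedagogical decision, so revising one (say, tightening the constraints) doesn’t mean rewriting the whole prompt from scratch.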
This isn’t just about getting a better answer from a machine. It’s about infusing your pedagogical values directly into the technology.
Your Exclusive Toolkit is Waiting
The webinar was packed with actionable insights, but we saved the best for last.
For a limited time, when you watch the on-demand recording of the “Roots of Responsible AI” webinar, you’ll get exclusive free access to The Educator’s Responsible AI Prompt Library. This practical toolkit includes ready-to-use prompts for everything from a Socratic AI Tutor to a Source Trustworthiness Evaluator, all built on the principles of responsible AI.
Stop feeling like you’re a step behind the technology. It’s time to take control and confidently lead your students in the age of AI.

Tired of the AI Guessing Game? Here’s a Roadmap for Responsible AI in Your Classroom.
Watch the webinar to:
- Get a deeper understanding of the five pillars of responsible AI: fairness, privacy, equity, transparency, and human oversight.
- See a live demonstration of how to build a powerful, rubric-aligned assignment from scratch.
- Unlock your exclusive copy of our AI Prompt Library for Educators.
Watch the On-Demand Webinar Now and Get Your Free AI Prompt Library