Blog

Webinar Recap: Roots of Responsible AI – An Educator’s Guide to Ethical Innovation

July 1, 2025 · Author: Selina Bradley · Read time: 5 min

Tired of feeling one step behind AI? This on-demand webinar is for educators who want to move from a reactive stance of policing plagiarism to a proactive position of pedagogical leadership. The real risk of AI in the classroom is not just cheating but the erosion of trust and critical thinking, a phenomenon we call "Dead Education Theory." Discover a framework that shifts the focus back to the student's writing process, and master a five-step method for crafting ethical prompts that puts you, not the algorithm, in control of the learning outcome.

The arrival of generative AI in education has felt like a tidal wave. One minute you were focused on fostering critical thinking, and the next, you were wondering whether you were reading a student's work or a chatbot's.

You’re not alone.

In a recent webinar, "Roots of Responsible AI: An Educator's Guide to Ethical Innovation," we polled the audience, and the results were telling. When asked whether they had received enough guidance from their institutions on responsible AI use, attendees were split right down the middle. That profound sense of uncertainty leaves faculty frustrated, administrators fearful of the next headline-grabbing AI misstep, and students utterly confused about what's acceptable.

What if you could move past the fear and confusion? What if you had a practical framework to not only navigate but harness AI to create a more engaging and effective learning environment?

This isn’t just a hypothetical. It’s the new reality for educators who are shifting their perspective on AI from a threat to a tool. Packback’s Chief Revenue Officer, Iain Atkinson, and Chief Technology Officer, Dr. Craig Booth, broke down the core issues and provided a clear path forward.

Webinar slide: "Tracking paste events and chasing AI detection won't save the essay. But rebuilding trust in student work by shifting focus from catching shortcuts to encouraging process might. Your thoughts matter. Your effort counts. Show us how you got there."

The Real Problem with AI in Education
(It’s Not What You Think)

Illustration of "Dead Education Theory": three robots in a circular loop, representing a broken academic cycle in which AI writes assignments, AI completes them, and AI evaluates them, with no human thought involved.

The knee-jerk reaction to AI has been to lock it down and focus on detection. But as Iain and Craig discussed, this “gotcha” approach is a losing battle that erodes trust. The deeper issue isn’t just about catching plagiarism; it’s about the erosion of the learning process itself.

We’re facing the rise of what Dr. Booth calls “Dead Education Theory,” a chilling parallel to the “Dead Internet Theory” where AI bots are creating content for other AI bots, leaving humans out of the loop entirely.

“Educators are creating rubrics, prompts, assignments…using AI. Students are responding to those prompts and assignments using AI. Faculty are using AI to evaluate…and sort of never in that loop shall two humans meet in the middle,” Iain explained.

This cycle of AI-generated assignments being completed by AI and then graded by AI is a fast track to cognitive offloading, where the essential struggle of learning is outsourced to a machine.     

From Fear to Framework: A Better Way Forward

So, how do we reclaim the learning process? The webinar offered a powerful alternative: a student-first, prevention-focused approach rooted in what we here at Packback call Instructional AI. This isn’t just another AI tool; it’s a pedagogical framework built on three core pillars:

Infographic: the Packback approach to responsible AI and its three pillars for student success, a student-first approach with writing coaching, transparency in detection to improve academic integrity, and consistency of feedback to track student growth.

  1. Transparency: Every piece of feedback and coaching a student receives is visible to them and their instructor, building a foundation of trust.
  2. Consistency: The feedback students receive is consistent across all their work, creating a reliable path for improvement.
  3. Prevention Over Detection: Instead of just catching plagiarism after the fact, the focus is on coaching students through the writing process to prevent it from happening in the first place.

This approach is about amplifying educators, not replacing them, and ensuring that AI serves as a scaffold for learning, not a shortcut.

The Art and Science of a Perfect Prompt

One of the most practical takeaways from the webinar was a deep dive into prompt engineering because, as Dr. Booth bluntly put it:

“mediocre prompts lead to mediocre output.”

So what does a good prompt look like? The webinar walked through a five-step process that transforms a generic prompt like "Create a middle school science assignment about climate change" into a powerful, pedagogically sound tool for learning.

The secret lies in layering these five elements together:

  • Context: Tell the AI who it is and the background of the task.
  • Task: Clearly articulate what the AI needs to do with actionable verbs.
  • Constraints: Set boundaries for length, tone, and format.
  • Ethics Cues: Guide the AI to be inclusive and responsible.
  • Iterate: Refine your prompt based on the output.
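To make the layering concrete, here is a minimal sketch of how the first four steps could be assembled into a single prompt. This is an illustration only: the `build_prompt` helper and all of its placeholder strings are our own invention, not Packback's actual prompts, and the iteration step happens outside the code, where you review the output and revise your inputs.

```python
# A minimal sketch of the four prompt layers described above.
# All strings are illustrative placeholders, not actual Packback prompts.

def build_prompt(context: str, task: str, constraints: str, ethics_cues: str) -> str:
    """Layer context, task, constraints, and ethics cues into one prompt.

    The fifth step, Iterate, is a human activity: inspect the model's
    output and revise these four inputs until the result fits your goals.
    """
    sections = [
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Ethics: {ethics_cues}",
    ]
    return "\n".join(sections)

prompt = build_prompt(
    context="You are a 6th-grade science teacher planning a unit on climate change.",
    task="Design a homework assignment with short-answer questions.",
    constraints="Keep a 6th-grade reading level; limit to five questions.",
    ethics_cues="Use neutral, inclusive language suitable for all students.",
)
print(prompt)
```

Keeping each layer as a separate, labeled section makes it easy to iterate: when the output misses the mark, you can tell at a glance which layer to tighten.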


Example of effective AI prompt engineering for educators: a "Final Prompt" broken into four parts, Context (a 6th-grade science unit), Task (design a homework assignment), Constraints (reading level, number of questions), and Ethics (use neutral, inclusive language).

This isn’t just about getting a better answer from a machine. It’s about infusing your pedagogical values directly into the technology.

Your Exclusive Toolkit is Waiting

The webinar was packed with actionable insights, but we saved the best for last.

For a limited time, when you watch the on-demand recording of the “Roots of Responsible AI” webinar, you’ll get exclusive free access to The Educator’s Responsible AI Prompt Library. This practical toolkit includes ready-to-use prompts for everything from a Socratic AI Tutor to a Source Trustworthiness Evaluator, all built on the principles of responsible AI.

Webinar slide, "Overview of Prompt Library Document": on the left, "Prompt Troubleshooting Steps" for common issues such as an AI output that doesn't match the task; on the right, "Library of Prompts for Inspiration," featuring an example "Source Trustworthiness Evaluator" prompt that helps students build media literacy through critical questioning.

Stop feeling like you’re a step behind the technology. It’s time to take control and confidently lead your students in the age of AI.

Illustration: a computer monitor showing a text prompt that has successfully generated classroom documents, signified by a large checkmark.

Tired of the AI Guessing Game? Here’s a Roadmap for Responsible AI in Your Classroom.

Watch the webinar to:

  • Get a deeper understanding of the five pillars of responsible AI: fairness, privacy, equity, transparency, and human oversight.
  • See a live demonstration of how to build a powerful, rubric-aligned assignment from scratch.
  • Unlock your exclusive copy of our AI Prompt Library for Educators.

Watch the On-Demand Webinar Now and Get Your Free AI Prompt Library