
Webinar Recap: Teaching with Integrity – Building Your Ethical AI Framework

November 24, 2025 | Author: Peter Lannon | Read time: 3 min

As AI continues reshaping higher education, faculty and administrators are facing tough questions:
What’s ethical? What’s allowed? And how do we use AI responsibly without losing the human essence of teaching?

In our recent webinar, Teaching with Integrity: Building an Ethical AI Strategy for Education, Packback’s Dr. Craig Booth (CTO), Barbara Kenny (Senior Product Manager, AI), and Dr. Rufus Glasper (President & CEO, League for Innovation in the Community College) explored how institutions can move from reactive AI policies to intentional, values-driven strategies that protect academic integrity and empower learning.

Why “Teaching with Integrity” Matters Now

As Dr. Booth opened the session, he noted that educators are “under pressure from all sides.” Faculty are overwhelmed by new tools, administrators are facing uncertainty, and students—already using AI daily—are doing so with uneven guidance.

The path forward, he argued, begins with a shared foundation: understanding that AI in education is a values issue. Every policy or practice stems from what a community believes about learning, authorship, and the role of educators in an AI-rich world.

From Values to Governance: Building an Ethical Framework

One of the most resonant visuals from the webinar—the Values → Principles → Policy → Practice framework—illustrated how institutions can translate big ideas into everyday decisions.


The takeaway: Values define what we stand for. Principles show those values in action. Policy formalizes them. And practice makes them real in the classroom.

This layered approach ensures that institutional AI strategies don’t just check compliance boxes—they reflect the beliefs and culture of the learning community they serve.

Reclaiming the Human Center of Learning

Throughout the conversation, the panelists reinforced a key idea: ethical AI use means keeping people in the loop.

Barbara Kenny emphasized the importance of explainability and transparency, noting that educators should always be able to understand and challenge AI-generated feedback or classifications.

Dr. Glasper built on that idea from an institutional perspective: “Administrators have to model trust and transparency. If faculty feel supported, they’ll be far more willing to innovate responsibly.”

This shift from fear to understanding helps educators reclaim the human center of learning. When AI is used to guide, not replace, instructor judgment, it enhances trust rather than undermining it.

What’s Next: Building an Ethical AI Roadmap

The webinar concluded with a reminder that the key to building an ethical AI strategy is progress with integrity.

Dr. Booth shared that Packback’s mission is to “replace hype and fear with understanding,” helping institutions implement AI tools that promote originality, curiosity, and critical thinking while reducing the grading burden on educators.

For administrators and faculty leaders, that means asking a different set of questions:

  • Does this tool support authentic learning?
  • Is it transparent in how it provides feedback?
  • Does it enhance instructor agency and student growth?

Those are the questions that separate short-term reactions from long-term, responsible innovation.

Watch the Full Webinar On-Demand