Webinar Recap – Assignment Strategies That Prevent Cognitive Offloading

August 18, 2025 · Author: Selina Bradley · Read time: 5 min

In the constant stream of conversations about AI and academic integrity, it’s easy to get stuck on the topic of cheating. But a deeper, more strategic challenge is facing institutional leaders: the risk of “cognitive offloading.” This is where students delegate the hard work of thinking to AI, completing assignments without wrestling with ideas, reflecting, or truly learning. The result is a hollowed-out educational process where compliance replaces cognition.

This challenge was the focus of our recent webinar, “Designing for Depth: Assignment Strategies That Prevent Cognitive Offloading.” The full, nuanced discussion is packed with insights for any educator navigating this new reality.

The Core Challenge: Moving Beyond Policing to Pedagogy

The conversation, moderated by Packback CEO Kelsey Behringer, brought together two educators with distinct expertise to tackle this problem:

  • Dr. Stephen LeMay, a Professor of Supply Chain and Marketing at the University of West Florida, who is preparing students for a professional world where AI is already a ubiquitous tool.
  • Keith Hollowell, a Literature Professor at Virginia Commonwealth University who focuses on building the foundational writing and analysis skills essential for academic success.

Their shared conclusion was powerful: the most effective path forward isn’t about banning tools, but about fundamentally rethinking assignment design to make deep thinking a core requirement. Without those critical thinking skills, students cannot evaluate AI’s output, rendering the tool ineffective.

This is a widespread concern. We polled the 138 educators who attended live, and a staggering 82% reported they have already updated their assignments to account for AI: 47% have made small adjustments, and 35% have significantly redesigned their courses. Educators are clearly on the front lines, actively seeking new strategies for responsible AI in education.

Navigating this new world requires a map. Our AI Policy Roadmap is a practical guide for administrators and faculty to build flexible, future-ready AI policies that champion learning over fear.

The Playbook: Two Field-Tested Strategies to Keep Humans Engaged

To maintain cognitive effort, our panelists shared two powerful, field-tested approaches to assignment design.

Strategy #1: Make the Tool the Subject!

For his “Business Analytics with AI” course, Dr. LeMay made AI the centerpiece of the assignment by deploying a “take chances, make mistakes, get messy” approach. Instead of banning AI, he made it the specimen for dissection!

“What it did was force them to look at the solution to the problem many times,” Dr. LeMay explained. The project resulted in students writing around 5,000 words, not because of a word count, but because the process demanded a high level of understanding to critique the solutions effectively.

In his “Multi-AI Consultation” project, students had to:

  • Use three different AI tools to solve a real-world problem in a specific business domain, such as marketing or finance, documenting all of their prompts and the AIs’ outputs.
  • Compare the outputs, evaluating the AIs against one another on accuracy, usability, speed, and relevance, and scoring each one.
  • Apply and reflect on the process, justifying their final choice and documenting what they learned about the tools themselves in a reflective report, noting successes and conflicts.
[Slide: A case study of Dr. Stephen LeMay’s assignment strategy for preventing cognitive offloading. The project requires students to compare multiple AI tools and write a reflective report, fostering critical thinking.]

This brilliant move shifted the focus from getting a simple answer to critically evaluating the quality of AI-generated answers.

Strategy #2: Map the Thinking Process, Not Just the Final Product!

Picking up on a fantastic thread from the live chat about whether we should still call improper AI use “cheating,” Keith Hollowell shared a different tactic for students building foundational skills in responsible AI use. He focuses on making the students’ thinking process visible, valuing their journey over a polished, but potentially soulless, final product.

“I want to bring back humanity into their writing. And so much of their humanity is their student voice.”
– Keith Hollowell, Virginia Commonwealth University

He breaks down large, intimidating papers – the kind that often tempt students to use AI – into smaller, process-oriented steps:

  1. Evidence Selection & Justification:
    Students submit their quote selections and write a justification explaining how each piece of evidence supports their central argument. This makes their reasoning visible.
  2. Draft-Aloud Dictations:
    He has students record themselves reading their papers aloud to catch awkward phrasing and mistakes that are common in AI-generated text.
  3. Checkpoints and Reflection:
    He assesses the “paper trail” of learning by having students submit artifacts such as annotated bibliographies, research notes, and drafts that show their intellectual journey.

This process-based approach makes it far harder to offload the real work of thinking, connecting, and analyzing to a machine. The work is more manageable, provides more opportunities for feedback, and values the student’s unique voice and analytical process.

Ready to build your own pedagogical toolkit? A responsible AI strategy starts with a solid foundation. Get your guide to Ethical & Pedagogical AI to unpack the frameworks that empower responsible AI in education for both faculty and students.

The Dead Education Theory: A Warning for Us All

[Infographic: The “Dead Education Loop,” in which an AI-created prompt leads to an AI-generated student response that is then graded by AI, with the human missing from the process entirely.]

The biggest topic of the day was the high-level risk we all face: “Dead Education Theory.” This is a cycle where an educator uses AI to create an assignment, the student uses AI to complete it, and an AI grades it. In this automated loop, the human is missing entirely, and the purpose of education evaporates.

Confronting this requires more than just clever assignments; it requires a clear institutional vision. As the chat conversation highlighted, educators are concerned about policy and need institutional support to navigate this new world.

Watch the Full Conversation

The principles outlined here are just the starting point. The full webinar is where the detailed strategies, the “how-to” of assignment design, and the deeper discussion of institutional adoption unfold. To get the complete picture of how to build a resilient pedagogical strategy for the AI era, watch the complete webinar on demand, “Designing for Depth: Assignment Strategies That Prevent Cognitive Offloading,” and get your copy of the downloadable slide deck.

Watch the on-demand webinar: “Designing for Depth: Assignment Strategies That Prevent Cognitive Offloading.”