2026 Predictions for AI in Higher Education

December 10, 2025 | Author: Peter Lannon | Read time: 7 min

Insights from Packback’s Instructional AI Experts

Artificial intelligence is no longer a scattered set of experiments in higher education – it has been firmly cemented as an essential part of institutional strategy (for better and for worse). Over the past two years, we’ve seen a dramatic shift from anxiety and reaction to strategy and intentionality. In 2026, that shift will deepen. AI will stop being the thing higher education talks about and become the thing it plans alongside, repositioned as a tool to drive institutional strategy.

To understand where the field is headed, we spoke with four of our internal experts – Kelsey Behringer (CEO), Craig Booth (CTO), Oliver Short (Director of Product & Design), and Barbara Kenny (Senior Product Manager) – about their predictions for how AI will transform teaching, learning, and institutional strategy in the coming year.

The Era of “Responsible Use AI” – Kelsey Behringer, Chief Executive Officer

Packback CEO Kelsey Behringer believes that 2026 will be the year higher education finally defines what responsible AI means in practice. “In 2025, educators were overwhelmed by noise,” she explained. “There was excitement, fear, and experimentation, but little clarity.
The institutions that succeed in 2026 will be those that define, operationalize, and communicate what responsible AI use looks like within their context.”

Kelsey believes the future of responsible AI practice begins with people. “Institutions must keep humans in the loop,” she said. “AI should solve problems, improve outcomes, and do no harm. When students feel disconnected from the human side of learning, they disengage.”

This “human in the loop” philosophy mirrors Packback’s approach to building Instructional AI. Instruction-first tools like Packback center on transparency, feedback, and pedagogy, not automation for automation’s sake. Kelsey predicts that 2026 will bring new models of partnership between faculty and AI, in which instructors focus on mentoring and critical dialogue while AI handles repetitive mechanics like grammar feedback or rubric alignment.

Her closing thought: “Responsible AI is more than mere compliance; it’s about the trust that must be cultivated and protected between student and educator. The institutions that earn that trust will unlock AI as a tool to serve their mission.”

From Panic to Pedagogy – Barbara Kenny, Senior Product Manager

Barbara Kenny sees higher education in the middle of an evolution with AI. “We’ve gone from panic to practicality.” She predicts that by the end of 2026, AI literacy – the ability to understand, critique, and responsibly apply AI – will be embedded across every degree program. “AI literacy will become as essential as digital literacy,” Barbara said. “It’s about the practice of using these powerful AI tools with integrity and intention.”

Barbara believes this shift will fundamentally change how institutions assess learning. “The idea of authentic student work is evolving,” she explained. “We’ll stop defining authenticity by whether something was AI-free and instead look at how students demonstrate their thinking process.” This redefinition will spark innovation in assessment.
Simulations, oral defenses, AI co-authored writing, and collaborative projects will become common. “The most forward-thinking institutions will value transparency in students’ learning journeys, giving students the freedom to approach problems creatively and demonstrate how they arrived at their conclusions, not just the final product,” Kenny said.

Packback’s Engagement Insights already support this evolution by helping faculty see how students engage and iterate over time. “When you make the learning process visible, you make it valuable by inviting deeper engagement and more meaningful outcomes,” she added. “That’s the future of authentic work.”

The “AI-First Curriculum” – Craig Booth, Chief Technology Officer

For Craig Booth, the next major milestone will be the emergence of what he calls the AI-first curriculum: a model in which AI literacy and instructional support are embedded throughout every course design. “By the end of 2026, a non-insignificant number of institutions will move beyond pilot projects to fully integrated AI-first programs,” Craig predicts. “We’ll see AI touch everything from personalized practice to assessment workflows and student success analytics.”

He is clear that the focus should not be on detecting AI use but on understanding it. “Detection is a dead end,” he said. “Integrity will be redefined around the process. We should assess how students got there, not just what they turned in.”

This shift, he argues, will also re-center the human purpose of education. “AI gives us a chance to focus on what machines can’t do: curiosity, creativity, collaboration, communication. Those are the skills employers rank above even technical ability in importance.”

Booth likens this moment to the “spreadsheet moment” in business, when automation freed professionals from routine calculation and allowed them to focus on higher-level strategy. “AI can do the same for education,” he said.
“It can lift the administrative burden so educators can focus on teaching and human connection.”

Want to see what human-centered AI looks like in action? Join our upcoming Human-Centered AI in Education webinar series, where Packback’s technology and pedagogy experts pull back the curtain on how ethical, explainable AI can strengthen teaching, learning, and institutional outcomes. Register Now

The Rise of Mission-Based AI Adoption – Oliver Short, Director of Product

Oliver Short’s prediction focuses on why institutions use AI. “In 2024 and 2025, AI adoption was mostly compliance-based,” he said. “Schools were responding to pressure – from policy, from peers, or from the headlines. In 2026, we’ll see a transition to mission-based adoption.”

Oliver believes institutions that frame AI as a lever for their educational mission, rather than just a tool for efficiency, will pull ahead. “AI can advance the core goals of higher education: access, equity, and lifelong learning,” he said. “When adoption is guided by those values, it becomes transformative.”

He also believes AI will play a pivotal role in addressing the growing debate around the value of a degree. “AI can help students chart their own learning paths,” Oliver explained. “It can show them how their skills connect to real-world opportunities, making education feel more personal and purposeful.”

One of Oliver’s most striking predictions centers on evolving student perceptions of AI culture. “By late 2026, using generative AI for everything will become taboo. Peer-to-peer accountability will emerge organically. I think student perception of generative tools will sour slightly as they become more AI-literate and aware of the pitfalls of overuse.
When learners recognize the cognitive offloading that comes with rampant dependence on AI, meaningful student effort and productive struggle will make a comeback.”

That cultural correction, he believes, will push institutions to make AI literacy central to every classroom, not optional. “We’ll see AI become part of how students build agency, not avoid it.”

What This Means for Institutions

Across every expert conversation, one theme came through clearly: intentionality. AI will be used in 2026 and beyond; that much is certain. The question education needs to ask is how, why, and to what end.

Institutions that thrive will:

- Embed AI literacy into every program and discipline.
- Redefine academic integrity around transparency, process, and feedback.
- Measure engagement and retention as the leading indicators of AI’s success.
- Adopt mission-based AI strategies aligned with institutional goals and values.

Kelsey put it best: “The most successful institutions will adopt AI with unique and strategic purpose – with the goal to solve THEIR problems and serve THEIR mission, not someone else’s, and certainly not with the goal to simply adopt AI.”

As higher education steps into 2026, the opportunity is not to fight the presence of AI, but to shape it into a responsible, empowering, and human-centered force for learning.

Ready to explore what responsible AI looks like at your institution? Connect with Packback to see how Instructional AI can help your faculty and students thrive in the age of AI. 👉 Schedule a Conversation with Our Team