NASPA Reflections: The Five Questions Every AI Decision Must Answer
by Claire L. Brady, EdD
At NASPA 2026, I had the privilege of serving as the conference's "AI Executive in Residence," facilitating conversations with student affairs leaders about one of the most pressing leadership questions facing higher education today: how we guide the adoption of artificial intelligence on our campuses.
Across presentations, hallway conversations, and VPSA consultations, one theme surfaced again and again. The challenge is not simply understanding the technology. The real challenge is leading institutions through the cultural, ethical, and strategic decisions that AI demands.
Over the next few posts, I’m sharing reflections inspired by several of the mini-keynotes I delivered during the conference. Each explores a different leadership question emerging from this moment:
Are we chasing AI or leading it?
How do institutions adopt AI without compromising their values?
What questions should guide responsible AI decisions?
Where is the line between student support and surveillance?
My hope is that these reflections help student affairs leaders move from reacting to the technology toward shaping how it serves our mission and our students.
The Five Questions Every AI Decision Must Answer
One of the most common questions I hear from higher ed leaders right now is surprisingly simple: How do we know if we’re making the right decisions about AI?
It’s an understandable concern. Artificial intelligence is evolving quickly, and institutions are feeling pressure from multiple directions at once. Students are already using AI tools in their daily lives and academic work. Staff are experimenting quietly to save time or improve efficiency. Faculty are reconsidering long-standing assumptions about teaching, learning, and assessment. Employers are increasingly signaling that AI literacy will be expected of graduates entering the workforce.
In this environment, it can feel tempting to focus primarily on the technology itself—what the newest tool does, how it works, or whether peer institutions have adopted it yet.
But the most important leadership questions about AI are rarely technical.
They are values questions.
During a recent conversation with student affairs leaders, I shared a framework I often use when institutions are considering new AI tools or systems. Before adopting any AI-enabled technology, leaders should pause and ask five simple but powerful questions.
Does this serve our mission?
Higher education institutions exist for a reason: to educate students, expand knowledge, and support human development. AI should strengthen that mission, not distract from it. If a new tool primarily promises efficiency without clearly improving learning or student support, leaders should proceed carefully.
Does it strengthen human judgment?
The most responsible uses of AI augment professional expertise rather than replace it. AI can help analyze information, surface patterns, or reduce administrative burden. But decisions that affect students’ lives should always include human oversight and professional judgment. When automation quietly replaces expertise, institutions risk undermining both trust and quality.
Does it advance equity?
AI has the potential to expand access to information and support services, but it can also amplify existing biases if systems are not designed carefully. Leaders must ask who benefits most from an AI system—and who might be unintentionally excluded or harmed. Equity cannot be addressed after implementation; it must be built into the design and governance of these systems from the beginning.
Does it build trust?
Students, faculty, and staff should understand when AI is being used and how it influences decisions that affect them. Transparency about data use, system limitations, and oversight mechanisms is essential. If leaders cannot clearly explain how a system works and why it is being used, the institution may not yet be ready to deploy it responsibly.
Does it improve the student experience?
Ultimately, the goal of most student affairs work is to help students feel supported, connected, and capable of navigating their educational journey. AI should make that experience stronger—not colder or more bureaucratic. The best uses of AI free up time for the human relationships that matter most.
Taken together, these five questions form a practical test. They shift the conversation away from tool adoption and toward mission alignment. They also remind leaders that every AI decision is, at its core, a reflection of institutional priorities and values.
AI will continue to reshape higher education in ways we are only beginning to understand. The institutions that navigate this transformation well will not necessarily be the ones adopting the most tools or moving the fastest. They will be the ones asking better questions—and ensuring that technology serves their mission rather than the other way around.