NASPA Reflections: Leading AI Without Losing Your Values

by Claire L. Brady, EdD

At NASPA 2026, I had the privilege of serving as the conference’s “AI Executive in Residence”, facilitating conversations with student affairs leaders about one of the most pressing leadership questions facing higher education today: how we guide the adoption of artificial intelligence on our campuses.

Across presentations, hallway conversations, and VPSA consultations, one theme surfaced again and again. The challenge is not simply understanding the technology. The real challenge is leading institutions through the cultural, ethical, and strategic decisions that AI demands.

Over the next few posts, I’m sharing reflections inspired by several of the mini-keynotes I delivered during the conference. Each explores a different leadership question emerging from this moment:

  • Are we chasing AI or leading it?

  • How do institutions adopt AI without compromising their values?

  • What questions should guide responsible AI decisions?

  • Where is the line between student support and surveillance?

My hope is that these reflections help student affairs leaders move from reacting to the technology toward shaping how it serves our mission and our students.


Leading AI Without Losing Your Values

At NASPA 2026, I facilitated a session titled Leading AI Without Losing Your Values. The conversation started with a simple question I asked the room: What is actually keeping you up at night about AI?

The responses were immediate and honest. Leaders spoke about the fear of moving too fast without guardrails. Others worried about moving too slowly and falling behind. Some raised concerns about whether staff and students are truly prepared for what’s coming. Others pointed to the possibility that AI could widen equity gaps, erode student trust, or replace the human relationships that sit at the center of student affairs work.

All of these concerns are real.

What struck me most in the conversation, however, is that many of the tensions leaders are experiencing have less to do with the technology itself and more to do with the values decisions embedded in how institutions adopt it.

Too often, AI adoption in higher education follows a familiar pattern: speed is prioritized over reflection, tool experimentation happens without clear mission alignment, and innovation moves faster than governance structures can keep up. Efficiency begins to overshadow relationships, and equity considerations appear only after implementation rather than shaping decisions from the beginning.

Student affairs leaders understand why that approach is risky.

Our work has always required us to balance competing values—innovation and responsibility, access and convenience, efficiency and care. AI simply makes those tensions more visible. In many ways, artificial intelligence doesn’t create new institutional values as much as it exposes the ones that already exist.

Every AI decision is ultimately a values decision.

One of the most useful leadership tools in this moment is what I call the mission alignment test. Before adopting a new AI system, institutions should ask a few simple but powerful questions: Does this improve learning? Does it improve student support? Does it strengthen relationships between students and the institution? And does it advance our mission in a meaningful way?

If the answer to those questions is unclear, it is often a signal that the institution is moving faster than its values framework can support.

Equity is another area where values-based leadership becomes essential. AI systems have the potential to expand access to support, increase responsiveness, and reduce administrative burdens that pull staff away from students. At the same time, these systems can also amplify bias, widen gaps in opportunity, and unintentionally exclude the students who most need support. The difference is rarely the technology itself. It is the intentionality of the design and governance decisions surrounding it.

In my work with campuses, I often emphasize that AI should augment human expertise rather than replace it. When used thoughtfully, these tools can reduce administrative burden, help staff respond more quickly to student needs, and free up time for the relational work that defines student affairs. But when institutions treat automation as a substitute for human judgment, trust erodes quickly.

Transparency is another critical leadership responsibility. Students, faculty, and staff should be able to understand when AI is being used, what data informs it, and how decisions are made. If leaders cannot explain an AI system clearly, the institution is likely not ready to implement it responsibly.

Ultimately, the institutions that navigate AI well will not be the ones moving the fastest. They will be the ones most clear about what they refuse to compromise.

Because in the end, every AI decision becomes a referendum on institutional values: whether we prioritize mission over speed, equity over expediency, and relationships over convenience.

Artificial intelligence will reshape higher education. The real leadership question is whether we will shape that transformation with intention.

Next: NASPA Reflections: Stop Chasing AI. Start Leading.