NASPA Reflections: Stop Chasing AI. Start Leading.

by Claire L. Brady, EdD

At NASPA 2026, I had the privilege of serving as the conference’s AI Executive in Residence, facilitating conversations with student affairs leaders about one of the most pressing leadership questions facing higher education today: how we guide the adoption of artificial intelligence on our campuses.

Across presentations, hallway conversations, and VPSA consultations, one theme surfaced again and again. The challenge is not simply understanding the technology. The real challenge is leading institutions through the cultural, ethical, and strategic decisions that AI demands.

Over the next few posts, I’m sharing reflections inspired by several of the mini-keynotes I delivered during the conference. Each explores a different leadership question emerging from this moment:

  • Are we chasing AI or leading it?

  • How do institutions adopt AI without compromising their values?

  • What questions should guide responsible AI decisions?

  • Where is the line between student support and surveillance?

My hope is that these reflections help student affairs leaders move from reacting to the technology toward shaping how it serves our mission and our students.


Stop Chasing AI. Start Leading.

During one of the sessions I delivered at NASPA 2026, I opened with a simple observation: many institutions are still chasing AI rather than leading it.

That distinction matters more than it might appear at first glance.

Across higher education, I see a familiar pattern emerging. Leaders are reacting to headlines, experimenting with tools without a clear strategy, launching pilots without governance structures, and often mistaking activity for meaningful progress. At the same time, many institutions are waiting for certainty — hoping that the technology will stabilize before they make significant decisions.

But AI is not waiting for higher education to feel comfortable.

On campuses right now, staff are experimenting with AI in silos, students are already using these tools—often quietly and without guidance—and faculty are rapidly adapting their teaching practices. Employers are signaling that AI literacy will be a baseline expectation for graduates. Meanwhile, ethical questions around bias, privacy, labor disruption, and environmental impact are becoming more complex, not less. The technology itself continues to evolve at an extraordinary pace.

In this environment, leadership cannot wait for perfect clarity.

The difference between chasing AI and leading AI often comes down to motivation. Chasing is usually anxiety-driven. Institutions feel pressure to keep up with peers, respond to vendor pitches, or show progress to governing boards. AI becomes a slide in a presentation or another software tool added to an already crowded technology ecosystem.

Leading AI looks different. It is more intentional and ultimately far more sustainable. Instead of focusing primarily on tools, leading institutions focus on habits, systems, and culture. AI becomes integrated into real workflows—how teams communicate, analyze information, support students, and make decisions—rather than existing as isolated experiments across campus.

This is why I often remind colleagues that AI is not simply another technology rollout. It represents a structural shift in how work, learning, and student support happen across institutions. In many ways, it feels like higher education’s version of the iPhone moment: a change that reshapes expectations, behaviors, and possibilities simultaneously.

Student affairs leaders are particularly well positioned to help guide this transformation. Our work sits at the intersection of student behavior, institutional culture, and high-trust relationships. We often see shifts in student needs before other parts of the institution do, and we regularly navigate the human consequences of institutional decisions. Issues of access, equity, wellbeing, and trust are already central to our work.

That perspective is essential as institutions begin integrating AI into advising, communications, student engagement, and support services.

The greatest challenges in AI adoption are not technical. They are human. Building trust, navigating cultural change, establishing governance, and ensuring ethical decision-making are leadership responsibilities.

In my work with campuses, I often frame this challenge through five pillars of AI readiness: 1) a shared institutional vision, 2) clear governance structures, 3) capacity and skill development, 4) equity by design, and 5) systems for feedback and learning. Responsible experimentation should sit on top of these foundations rather than replace them.

Without these pillars in place, institutions often struggle. Vision without governance becomes aspiration. Governance without capacity leads to frustration. Capacity without equity risks scaling existing disparities. And pilot projects without mechanisms for learning rarely lead to meaningful institutional change.

Ultimately, the institutions that thrive with AI will not be the ones moving the fastest. They will be the ones moving with intention—grounded in mission, guided by values, and willing to invest in the human work of leadership.

AI will reshape higher education. The question is whether we will shape that transformation thoughtfully.
