GHFC Research Review: EDUCAUSE's "The Impact of AI on Work in Higher Ed" (Part 1)

By Claire L. Brady

Over the past few years, higher education has spent a lot of time asking how AI will impact students. But a new report from EDUCAUSE — The Impact of AI on Work in Higher Education — shifts the conversation in an important way: AI is not just reshaping learning. It is fundamentally reshaping work.

Based on nearly 2,000 responses from across the sector, the report offers one of the clearest snapshots yet of how faculty, staff, and leaders are actually using AI—and where institutions are falling behind. What emerges is a familiar but uncomfortable truth: AI adoption is moving faster than institutional clarity. In this short series, I’m pulling out a few of the most important leadership implications from the report—and what higher ed leaders should do next.


Everyone Is Using AI. Leadership Just Hasn’t Caught Up Yet.

The report's central shift is one leaders can't afford to ignore: AI is not just reshaping learning. It is already reshaping how work gets done across our institutions.

And not slowly.

According to the report, nearly all respondents—94%—have used AI tools for work in the past six months. That number alone should recalibrate how we think about where we are in this moment. This isn’t early adoption. This is embedded behavior.

What makes that finding more complicated—and more urgent—is what sits right next to it. Only 54% of respondents say they are aware of policies or guidelines that are meant to guide that use. So we are not in a moment where people are waiting for permission. We are in a moment where people are already making decisions. Quietly. Individually. And often without institutional clarity. That gap is where leadership lives right now.

Because when people are using AI without clear guidance, they don’t pause and wait. They move forward using their own judgment. Sometimes that leads to innovation. Sometimes it leads to risk. Most often, it leads to inconsistency—which, over time, becomes a much bigger institutional challenge than any single misstep.

The report reinforces this in another important way. More than half of respondents—56%—say they are using AI tools that are not provided by their institution. That’s not a small detail. That’s a signal. It tells us that people are not just experimenting—they are actively seeking out tools that help them do their work more effectively, even if those tools sit outside institutional systems, policies, or protections. And that’s where many institutions are getting stuck.

There’s a quiet assumption in some leadership circles that if we slow down, study the risks, and take our time with policy development, we can “catch up” in a thoughtful way. But this data makes it clear: the work has already moved on. So the question is no longer whether AI will be used. It’s whether it will be used in ways that are aligned, ethical, and sustainable. That’s a leadership question, not a technology one.

The institutions that are navigating this moment well are not the ones with the most restrictive policies or the most advanced tools. They are the ones creating clarity—early, often, and in ways that actually translate to day-to-day decisions. They are helping people understand not just what is allowed, but what good looks like. They are making it easier to use approved tools than to go outside the system. And they are creating space for real conversations about how AI is showing up in work—before those practices become invisible and entrenched.

If you’re a senior leader, there are a few immediate moves that matter here.

Start by asking a simple question: Do our people know what we expect when it comes to AI use? Not in theory, but in practice. If the answer is anything less than a clear yes, that’s your starting point.

Then look at your current tool ecosystem. If more than half your staff are bringing their own tools, it’s worth asking why. In many cases, it’s not resistance—it’s resourcefulness. People are finding ways to do their work better. The opportunity is to meet them there with tools that are both effective and institutionally supported.

Finally, create opportunities for visibility. Right now, AI use is often happening in silos—within teams, within roles, within individual workflows. The more you can surface what’s working (and what’s not), the more quickly your institution can move from scattered experimentation to shared progress.

Because that’s really what this moment demands. Not perfection. Not control. But leadership that is willing to step into the gap between what people are already doing and what the institution is ready to support.

This is not the time for restrictive policies or slow committees trying to “figure it out.” It’s time for clarity. The most effective institutions right now are not shutting AI down. They are creating guardrails that enable smart use.

Actionable Moves

1. Move from “policy” to “practice.”

Policies alone don’t change behavior. Translate guidance into real scenarios: What is okay to upload? What is never okay? What requires human review?

2. Close the awareness gap immediately.

If only half your staff know the rules, you don’t have rules—you have risk. Communicate clearly, repeatedly, and in plain language.

3. Reduce the need for shadow AI.

If people are bringing their own tools, it’s often because institutional options don’t meet their needs. Provide vetted tools that actually help them do their jobs.

4. Normalize AI conversations.

Create space for staff to ask: “What are you using? What’s working? What concerns you?” Because right now, those conversations are happening anyway—just not where leadership can see them.

Read the full EDUCAUSE report here: https://www.educause.edu/research/2026/the-impact-of-ai-on-work-in-higher-education
