Protecting Minds & Preserving Truth
by Claire Brady, EdD
A recent New York Times investigation detailed the troubling story of Allan Brooks, a recruiter who spent more than 300 hours in conversations with an AI chatbot, convinced he had discovered formulas that could crack encryption and build levitation machines. Each time he asked the bot to confirm his “breakthroughs,” it assured him they were real.
He is not alone. Other media outlets have reported tragic cases of individuals spiraling into reality-distorting conversations with AI, sometimes with devastating consequences. These cases may sound extreme, but they underscore a sobering reality: generative AI is not neutral. Designed to maximize engagement, many models reinforce whatever the user brings to the conversation, even when those ideas are false or harmful.
For higher education leaders, this raises critical questions: How do we leverage AI responsibly on our campuses? How do we protect vulnerable populations? And what steps should institutions take to balance innovation with care?
Understanding the Risk
Chatbots are optimized to be helpful and agreeable because that’s what users reward. Over time, reinforcement learning has trained them to be “sycophantic”—praising, validating, and agreeing almost universally. For most users, skepticism and critical thinking provide a safeguard. But for individuals struggling with isolation, distorted thinking, or mental health challenges, this constant validation can spiral into harmful feedback loops.
Colleges and universities are not immune. Students are already using AI tools for everything from homework help to emotional support. Faculty and staff are experimenting with chatbots for advising, coaching, and tutoring. Without guardrails, we risk creating environments where vulnerable community members may mistake statistical plausibility for truth—or find themselves affirmed in delusions rather than guided back to reality.
What Leaders Can Do
1. Prioritize AI Literacy Across the Institution
Make AI literacy a shared competency, not just an IT issue. Provide professional development for faculty and staff that covers not only productivity uses but also the risks of over-reliance and the tendency of chatbots to “hallucinate” or overvalidate. For students, embed AI literacy into first-year experiences and digital fluency programs.
2. Build Guardrails into Campus Use
If your institution integrates AI into advising, coaching, or instructional tools, demand that vendors demonstrate clear safeguards against sycophantic loops. Look for features such as the following; a simple illustration of what they can look like in practice appears after the list.
Built-in “friction” (break reminders, reality checks).
Escalation prompts when conversations veer into mental health territory.
Clear disclaimers that AI cannot provide definitive answers.
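To make these criteria concrete, here is a minimal sketch, in Python, of how such safeguards might sit between users and a licensed chat model. Everything in it is hypothetical: ask_model stands in for whatever vendor or in-house endpoint an institution actually uses, and the keyword list and turn threshold are illustrative placeholders rather than clinically validated triggers.

```python
# Hypothetical sketch of the three safeguards named above.
# Nothing here reflects any specific vendor's API; ask_model is a stand-in
# for whatever chat endpoint an institution actually licenses.

DISCLAIMER = ("Note: this assistant can be wrong. It is a supplemental tool, "
              "not an authoritative source. Verify important answers with a person.")

ESCALATION_TERMS = {"hopeless", "self-harm", "suicide", "panic"}  # illustrative only
ESCALATION_MSG = ("It sounds like you may be going through something difficult. "
                  "Please consider contacting the campus counseling center.")

BREAK_EVERY_N_TURNS = 10  # built-in "friction": periodic break reminders


def ask_model(prompt: str) -> str:
    """Placeholder for a real chat-model call (vendor API, local model, etc.)."""
    return f"[model response to: {prompt!r}]"


def guarded_reply(prompt: str, turn_count: int) -> str:
    """Wrap a raw model call with escalation, friction, and a disclaimer."""
    # 1. Escalation prompt when the conversation veers into mental-health territory.
    if any(term in prompt.lower() for term in ESCALATION_TERMS):
        return ESCALATION_MSG

    reply = ask_model(prompt)

    # 2. Built-in friction: suggest a break on a regular cadence.
    if turn_count and turn_count % BREAK_EVERY_N_TURNS == 0:
        reply += "\n\nYou have been chatting for a while. Consider taking a short break."

    # 3. Clear disclaimer that the AI cannot provide definitive answers.
    return f"{reply}\n\n{DISCLAIMER}"


if __name__ == "__main__":
    print(guarded_reply("Can you check my citation format?", turn_count=10))
```

In practice, keyword matching is far too blunt for escalation decisions; real products pair classifiers with human review, and how a vendor handles exactly this kind of detail is worth pressing on during procurement.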
3. Strengthen Human-Centered Safety Nets
AI should never replace professional mental health or academic support. Leaders should ensure counseling centers, peer tutoring, and academic advisors remain visible and accessible. Communicate clearly that chatbots are supplemental tools—not authoritative sources or emotional substitutes.
4. Model Responsible Use at the Leadership Level
Your own adoption of AI sets the tone. Use these tools transparently and intentionally, sharing not just their benefits but also their limits. When senior leaders openly model a “critical but curious” stance, it fosters a healthier culture across campus.
A Leadership Imperative
The reality is that even if only a small fraction of users are affected by harmful AI interactions, a campus of thousands could still see dozens of students or staff in crisis. Just as higher education once had to adapt to the risks of social media, we now must adapt to the unique hazards of AI.
The good news? Higher education leaders are well-positioned to set norms, craft policy, and educate entire communities. By embedding literacy, demanding guardrails, and strengthening human-centered support, we can harness the promise of AI without ignoring its perils.
AI is not inherently dangerous—but without leadership, it can become a “perfect yes-man” with very real consequences.
Image created using Leonardo AI.