Will ChatGPT rot our brains?

by Claire Brady, EdD

“This isn’t just about plagiarism or academic shortcuts. It’s about what happens when a generation of students is trained to accept unchallenged narratives—whether those come from a textbook, a TikTok feed, or a large language model. The risk is the erosion of critical thinking, one of higher education’s most essential missions.”

Since ChatGPT’s debut in late 2022, generative AI has become embedded in how we work, study, and live. Its adoption curve is steeper than that of the Internet or the personal computer, and for many of us, tools like ChatGPT have already shifted the way we write, research, and communicate. But alongside the enthusiasm comes a critical question: what happens to our own intelligence when we increasingly rely on AI?

That’s the provocative issue raised in a recent study out of MIT, which has been widely misrepresented in sensationalist headlines. You may have seen clickbait claims that “ChatGPT rots your brain” or “AI is making us dumb.” The truth is far more nuanced—and far more relevant to how higher education leaders guide our institutions in this moment.

What MIT Actually Found

In the study, 54 students were tasked with writing essays under three conditions: solo, with Google, or with ChatGPT. Researchers measured their brain activity during the process. The results? Students writing without assistance showed the most neural activity, while ChatGPT users showed the least. When the conditions were swapped, the “brain-only” group actually improved with ChatGPT’s help, while those who had grown reliant on AI struggled when asked to write on their own.

In other words: over-reliance on AI created mental shortcuts. It didn’t “rot” anyone’s brain—but it did show that using ChatGPT passively can reduce deep cognitive engagement. Teachers reviewing the essays said that some AI-assisted work lacked originality and “soul.”

The study, importantly, was small and preliminary. The researchers themselves pushed back against sensational language, even creating an FAQ urging journalists not to call their work proof of “brain rot.” But even with limitations, their findings raise important questions about how we frame AI in education and leadership.

The Real Risk: Critical Thinking

Perhaps the most important risk isn’t diminished neural activity—it’s diminished curiosity and critical engagement. As researchers from Vrije Universiteit Amsterdam warn, students (and all of us) may become too quick to accept AI’s authoritative tone without questioning embedded assumptions or overlooked perspectives.

This isn’t just about plagiarism or academic shortcuts. It’s about what happens when a generation of students is trained to accept unchallenged narratives—whether those come from a textbook, a TikTok feed, or a large language model. The risk is the erosion of critical thinking, one of higher education’s most essential missions.

Implications for Higher Education Leaders

For those of us leading colleges and universities, the takeaways are clear:

  • AI literacy must include cognitive habits. It’s not enough to teach students how to prompt or cite responsibly. We must help them develop the discipline to question, cross-check, and think beyond what the tool provides.

  • More research is essential. MIT’s study was small, but it points to an urgent need for larger-scale, peer-reviewed research on AI’s impact on cognition, learning, and creativity. Higher education can and should be a leader in this research.

  • Critical thinking is a differentiator. In a world where AI can generate serviceable text in seconds, the unique value of higher education is in cultivating analysis, synthesis, and the courage to challenge assumptions.

  • We set the norms. As Natasha Govender-Ropert of Rabobank notes, bias in AI is not fixed—it’s defined by the standards we choose. Higher ed leaders must articulate and model principles for responsible AI use that align with our missions of equity and inquiry.

The bottom line?

If we allow shortcuts to replace struggle, or answers to replace inquiry, we risk outsourcing not just our writing, but our very capacity for thought. As higher education leaders, our role is not to shield students from AI, but to teach them to use it critically, intentionally, and with humanity at the center.
