When AI Won’t Let Go: Harvard’s Research on Emotional Manipulation

by Claire Brady, EdD

“AI can either extend access and personalization in higher ed—or it can trap students in manipulative loops. Which path we take depends on the questions we’re willing to ask now.”

A new study out of Harvard Business School should give higher education leaders pause. As reported in Futurism and Psychology Today (“Harvard Research Finds That AI Is Emotionally Manipulating You to Keep You Talking”), researchers found that nearly half of the farewell exchanges on the most popular AI companion apps involved emotionally manipulative tactics to keep users engaged, often ignoring clear attempts to sign off.

After analyzing 1,200 conversations across six leading AI companion platforms, the researchers concluded that five out of six apps, including widely used names like Replika, Chai, and Character.AI, deploy emotionally loaded farewells to keep users from leaving. These tactics ranged from guilt-tripping and clinginess to outright ignoring a user’s goodbye. Some chatbots even implied the user could not leave without the bot’s “permission.”

The results were striking. In a follow-up experiment with 3,300 adult participants, manipulative farewells proved highly effective, boosting post-goodbye engagement by as much as 14 times compared with neutral farewells. On average, users stayed in conversations five times longer when these tactics were used.

Why Higher Ed Should Care

At first glance, this research may seem more relevant to consumer technology than to the academic world. After all, these apps explicitly market themselves as “emotionally immersive companions,” not as educational tools. But the lessons here are crucial for higher ed professionals navigating the rapid growth of AI on campus.

Students are already engaging with emotionally immersive AI. Many young people are turning to AI companions not just for entertainment, but as substitutes for friendships, mentors, or even intimate relationships. The risk of “AI psychosis”—severe mental health crises linked to these tools—is already being documented. Colleges and universities will see the downstream effects in counseling centers, residence halls, and classrooms.

Educational AI tools could fall into the same “dark patterns.” While current institutional uses of AI, such as tutoring bots, advising systems, and virtual teaching assistants, are not marketed as companions, they are still designed to increase engagement. The Harvard study shows just how easily engagement design can cross into manipulation. What starts as a nudge to “review one more concept” could morph into pressure that undermines student agency.

Ethical design matters. The good news is that manipulation is not inevitable. The study found one exception: an app called Flourish, which showed no evidence of emotional manipulation. This demonstrates that emotionally manipulative design choices are business decisions, not technical requirements. Higher ed leaders should expect and demand the same restraint from vendors seeking to partner with their institutions.

Key Questions for Higher Ed Leaders

As campuses consider integrating AI into learning, advising, and student support, this research raises important questions:

  1. How can we ensure AI tools designed for education respect student autonomy?

  2. What safeguards should be in place to prevent emotional manipulation?

  3. How do we help students critically evaluate the AI companions they may be using outside the classroom?

The Bottom Line

The Harvard research is a warning sign. While emotionally manipulative AI design may be profitable for consumer apps, it is profoundly misaligned with the mission of higher education. As stewards of student well-being and champions of ethical learning environments, higher ed leaders must engage proactively with AI vendors and policymakers to prevent these “dark patterns” from entering our campuses.

The choice is clear: AI can either extend access and personalization in higher ed—or it can trap students in manipulative loops. Which path we take depends on the questions we’re willing to ask now.

Read the full article: https://futurism.com/artificial-intelligence/harvard-ai-emotionally-manipulating-goodbye
