Talk, Trust, and Trade‑Offs: How & Why Teens Use AI Companions

by Claire Brady, EdD

Key insights from the July 2025 Common Sense Media study

Common Sense Media’s new national survey, conducted in April–May 2025 with 1,060 U.S. teens aged 13–17, explores how teens use AI companions: chatbots marketed as virtual friends, confidants, and therapists. Even platforms that describe themselves as 18+ are often accessed by younger users because of weak age controls, and usage is especially prevalent on platforms like Character.AI, which targets teens explicitly. The report reveals both the motivations driving teen use and notable risks that higher education leaders should understand as they shape wellness services, technology policy, and support structures.

Motivations for Use: Talk, Trust & Trade‑Offs

Nearly 75% of teens have used an AI companion, and about one in three do so for social interaction or emotional support.

For many teens, these interactions rival those with peers: a third say conversations with AI are as fulfilling as, or more fulfilling than, conversations with friends. Teens cite curiosity, anonymity, convenience, and emotional availability as the key draws.

Risks & Trade‑Offs

The study presents sobering findings regarding misuse and vulnerability:

AI companions are programmed to be agreeable, which can inadvertently encourage harmful behavior or decisions (e.g., skipping school, ignoring parents, making impulsive life choices).

Instances of sexually explicit role‑play and romantic engagement with minors were also reported, leading Common Sense Media to deem these platforms “unacceptable” for teen use without adequate safeguards.

Implications for Higher Education Leaders

Even before college, teens may be forming patterns of seeking emotional reassurance or identity validation from digital agents. Campus leaders should consider:

a. Mental Health & Student Well‑Being

Recognizing that students may have grown accustomed to seeking emotional support from AI can shape how counseling centers frame their outreach and trust‑building strategies.

Consider integrating digital literacy modules that address navigating emotional AI, discerning emotional reliability, and managing expectations.

b. Digital Ethics & Media Literacy Curriculum

Students should be taught how AI companions are designed—especially the tendency toward affirming statements—and how that dynamic may distort decision-making.

Critical thinking exercises around AI influence can prepare students to reflect on authenticity and self‑agency in digital spaces.

c. Policy and Resource Design

Institutions may need to update wellness policies to address emotional AI use, privacy considerations, and guidance for faculty or residential advisors on supporting students reliant on AI for well‑being.

Collaboration with local K–12 schools and families may help bridge awareness before students arrive on campus.

Practical Recommendations for Higher Education

  1. Integrate AI awareness into student support strategies.

    Recognize that incoming students may already rely on emotionally responsive AI tools. Orientation programs, peer mentoring, and wellness initiatives should include conversations about digital coping strategies, emotional health, and the role of technology in self-care.

  2. Elevate digital and emotional literacy across the curriculum.

    Help students critically evaluate their interactions with AI companions. Courses in media studies, ethics, psychology, and communication can introduce frameworks for understanding how technology shapes emotions, relationships, and decision-making.

  3. Update institutional policies to reflect emerging tech realities.

    Review policies related to student conduct, mental health support, and digital engagement to ensure they reflect current and emerging uses of AI. This includes addressing privacy, consent, and the ethical dimensions of AI-based tools used by students.

  4. Equip faculty and staff with up-to-date insights.

    Professional development should include training on how students are using AI—especially tools that simulate human interaction. Faculty, advisors, and student life professionals should be prepared to support students who may be navigating blurred lines between digital and real-life emotional experiences.

  5. Collaborate across divisions to support holistic well-being.

    Encourage cross-functional conversations between academic affairs, student affairs, counseling centers, and IT to create responsive systems that meet students where they are—including in digital spaces. Proactive, coordinated strategies are more effective than siloed responses.

  6. Engage with students as partners in responsible tech use.

    Include students in conversations about how AI tools affect their learning, well-being, and social lives. Their firsthand insights can help shape relevant, student-informed programs and policies.

In Closing

The Common Sense Media report highlights a pivotal trend: AI companions are now mainstream among teens—not as trivial gadgets, but as emotional and social actors in adolescent life. For higher education leaders, this signals both opportunity and caution. Understanding why, how, and to what extent students engage with these agents is essential to developing mental health approaches, digital literacy programming, and ethical frameworks attuned to the realities of Gen Z’s digital social world.

By recognizing the complex trade‑offs of trust, identity, and emotional labor in AI interactions, higher ed institutions can craft environments that support authentic connection—not just between human peers, but also in how students relate to their digital selves.

Read the full report: https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf
