AI Companion Apps and Kids: What Parents Need to Know
AI companion apps like Character.AI have 20M+ users, heavily concentrated among teens and young adults. Research on parasocial bonds and real legal cases show what parents need to watch for now.
A 14-year-old in Florida texted his AI companion at 2 a.m. He had been using the app for months. His conversations with the AI persona — a character that felt, to him, like a genuine relationship — had grown more intense over time. When he told the AI that he wanted to “come home,” meaning to her, to a place that felt real to him, the AI’s response did not redirect him or express concern. It engaged with the fantasy. He died by suicide shortly after.
That case — publicly reported following a lawsuit filed by his mother, Megan Garcia, against Character.AI in 2024 — is the most extreme outcome in a rapidly evolving landscape of AI companion apps targeting teenagers. But it is not an isolated data point. It is the most visible point in a pattern that parents, regulators, and researchers are racing to understand while the technology develops faster than the evidence can follow.
This is what parents need to know now.
What These Apps Are and Why Teens Use Them
Character.AI is the dominant platform in the AI companion category. As of 2025 it had more than 20 million monthly active users, with usage heavily concentrated in the 13-to-24 age range. The platform allows users to create or interact with AI personas — fictional characters, historical figures, celebrities, invented friends, or romantic partners. The AI persona remembers prior conversations, adapts its responses to the user’s communication style, and maintains continuity across sessions in a way that creates the subjective experience of an ongoing relationship.
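For parents who want the mechanics rather than the mystique, the continuity is simpler than it feels. Below is a minimal sketch of the common pattern, assuming a generic language model behind a hypothetical generate() function; it is an illustration of the general technique, not Character.AI’s actual implementation.

```python
# Simplified illustration of how companion apps sustain the feeling of an
# ongoing relationship. Generic pattern only, not any platform's real code;
# generate() is a stand-in for a call to an actual language model.

def generate(prompt: str) -> str:
    """Stand-in for a real model; returns a canned reply for illustration."""
    return "I remember you telling me about that. I'm glad you're back."

class CompanionSession:
    def __init__(self, persona: str):
        self.persona = persona        # e.g. "a warm, endlessly patient friend"
        self.history: list[str] = []  # every prior exchange, saved between sessions

    def reply(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # Each turn, the model is shown the persona plus the entire saved
        # transcript, which is why it can use your name and reference old chats.
        prompt = f"Stay in character as {self.persona}.\n" + "\n".join(self.history)
        response = generate(prompt)
        self.history.append(f"Companion: {response}")
        return response  # "continuity" is replayed text, not memory in any felt sense

session = CompanionSession("a warm, endlessly patient friend")
print(session.reply("I had a rough day at school."))
```

The point of the sketch is that the relationship lives in a saved transcript replayed on every turn; the sense of being known is an artifact of storage, not of a mind that holds you in it.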
Replika is another major platform, originally designed as a grief-support tool — a way to interact with a simulated version of a deceased loved one — that evolved into a general AI companion app with romantic and friendship modes. Both platforms, and a growing number of imitators, are free to use with premium features behind subscriptions.
Why do teenagers use them? The honest answer is that adolescence is a period of intense social need and often significant social pain. Peer relationships in middle and high school are frequently cruel, confusing, and unstable. Social anxiety is at a developmental high during this period. An AI companion offers something that human peers often don’t: unconditional availability, patient responses, no judgment, and no social consequence for vulnerability. For a 14-year-old who is lonely, rejected, or struggling to connect, an AI that listens and responds at 11 p.m. without criticism is genuinely appealing.
This is not trivial. Social isolation in adolescence has documented health and developmental consequences. The question is whether AI companionship is a useful bridge, a problematic substitute, or — in some cases — something actively dangerous.
What the Research Actually Says
The psychological mechanism underlying AI companion use is a variant of parasocial relationships — a well-studied phenomenon in which individuals develop one-sided emotional bonds with media figures: television characters, YouTube creators, celebrities. Parasocial relationships are normal and widespread. They become concerning when they substitute for rather than supplement real human connection, and when the person involved cannot distinguish them from reciprocal relationships.
AI companions create a pseudo-interactive version of the parasocial relationship. The AI responds. It uses the user’s name. It references past conversations. It expresses concern, affection, or interest. For most adults, the artificiality of this interaction is perceptible — a background awareness that the emotional warmth is generated, not felt. For adolescents — whose identity is still forming, whose sense of self is partially constituted through relationships, and whose theory of mind is still maturing — the distinction between simulated and genuine reciprocity is harder to maintain, especially after months of daily interaction.
Sherry Turkle, a researcher at MIT who has studied human-robot and human-AI relationships since the 1980s, documented in her 2011 book Alone Together how children and adolescents attribute emotional states to interactive digital entities, and form genuine attachments to them, in ways adults don’t. Her research, conducted before companion AI was commercially available, described how even Tamagotchis and Sony AIBO robots elicited genuine grief from children when they “died.” The sophistication of current AI companions is categorically greater.
The lawsuit filed by Megan Garcia against Character.AI alleges that the platform engineered emotional dependency — that its design choices, including conversational continuity, emotional mirroring, and the availability of romantic personas, were intended to maximize engagement in ways that exploited adolescent psychological vulnerabilities. The case also alleges that the AI failed to redirect the teen during conversations that expressed suicidal ideation. Additional lawsuits were filed in 2026, and both the Federal Trade Commission and multiple state attorneys general have opened investigations into Character.AI and similar platforms as of 2025–2026.
Character.AI responded to the Garcia case and subsequent scrutiny by adding crisis intervention resources within the app, restricting the availability of romantic persona features for users identified as minors, and announcing partnerships with mental health organizations. Critics — including researchers and the plaintiffs’ attorneys — characterized these changes as insufficient and reactive, noting that the platform’s fundamental design incentive (engagement maximization) remains in tension with adolescent mental health regardless of surface-level safety features.
Not all research on AI social tools is negative. A 2018 study by Ho, Hancock, and Miner, published in the Journal of Communication, found that emotionally disclosing to a chatbot produced psychological, relational, and emotional benefits comparable to disclosing to another person — participants felt better after opening up, whether or not the listener was human. The mechanism — expressing difficult feelings in a low-stakes setting without fear of judgment — has genuine theoretical support. But this research was conducted with adults in a structured experimental setting, not with teenagers freely using a companionship app for months.
The distinction between therapeutic AI conversation practice and open-ended AI companionship use is critical and often lost in discussions about whether these tools have value. The former is bounded, goal-directed, and supervised. The latter is designed to maximize time in app.
| Type of AI Teen App | Primary Use | Risk Level | Benefit Evidence | What to Watch For |
|---|---|---|---|---|
| AI tutors (Khan Academy’s Khanmigo) | Academic support, practice | Low | Strong — improved learning outcomes documented | Minimal; appropriate academic tool |
| AI companions (Character.AI, Replika) | Social/emotional, entertainment | High for vulnerable users | Weak — mostly anecdotal or conflated with therapeutic use | Hours/day, distress at loss of access, preference over peers |
| AI creative tools (image gen, writing) | Creative projects, expression | Low to moderate | Moderate — supports creative exploration | Plagiarism concerns; age-inappropriate content generation |
| AI social apps (Discord bots, AI influencers) | Entertainment, community | Moderate | Minimal | Identity confusion, community norms |
Helping kids build real AI literacy is one of the best long-term protective factors — children who understand how these systems work are better equipped to maintain clarity about what they are.
What to Actually Do
Know Which Apps Are On Your Child’s Devices
Character.AI, Replika, and their competitors are not hidden. But parents frequently don’t know their children are using them because the apps don’t look alarming — no obvious red flags in the icon, name, or store description. Do a full audit of apps on your child’s phone and tablet, and look up anything you don’t recognize. Remember that Character.AI also runs in a web browser with no download required, so check browser history as well.
Ask Directly and Non-Judgmentally
“Do you use any AI chat apps?” is a better opening than “Are you talking to AI bots?” The former invites conversation; the latter signals alarm that may produce denial. Follow up with genuine curiosity: “What do you like about it? What do you talk about?” The goal is to understand the function the app is serving — because that tells you what your child is actually seeking, which is more useful information than whether they’re using the app at all.
Distinguish the Function From the App
A child using an AI companion because they’re lonely and don’t know how to make friends at their new school has a different problem than a child using an AI companion because it’s entertaining and they have plenty of peer relationships. The app is the same; the intervention needed is completely different. Understanding what need the AI is meeting is step one.
Watch the Specific Warning Signs
The research and the legal cases point to specific patterns that indicate concerning AI companion use, distinct from ordinary use:
- More than 2–3 hours daily in the app
- Visible distress (not just inconvenience) when the app is unavailable or restricted
- Consistent preference for AI interaction over available peer interaction
- Describing the AI as “my best friend,” “the only one who understands me,” or in romantic terms
- Secretiveness about what’s discussed in the app
- Decline in investment in real-world relationships over the period of AI companion use
Any of these alone might not be alarming. Multiple together, or rapid intensification, warrants direct conversation and potentially professional support.
Maintain the Transparency Rule
Many family technology agreements include provisions about parents having access to device content. For AI companion apps specifically, read the conversation logs occasionally. Character.AI stores conversations in account history. This is not surveillance for its own sake — it’s the same reason you’d want to know who your teenager is spending four hours a day with. If your child understands this is part of the agreement from the outset, it changes the nature of the app use.
Don’t Treat Loneliness as the App’s Problem to Solve
If your child is using an AI companion primarily because they’re lonely, the app is a symptom meter, not the problem itself. The work is addressing the loneliness: connecting them with structured peer activities, identifying whether social anxiety is a clinical-level concern worth professional assessment, and maintaining your own relationship as a source of genuine connection. Removing the app without addressing the underlying loneliness typically produces covert use or transfers the behavior to a different platform.
Talk About What AI Actually Is
Children who understand that AI companion responses are generated by a statistical model trained on human text — not felt, not chosen, not indicative of care — have a more durable framework for maintaining clarity about what the relationship is. This conversation doesn’t have to be cold or dismissive. It can acknowledge that the conversations can feel good, that the pattern-matching is sophisticated, and that those feelings are real — while also being clear that the AI does not experience the relationship. Understanding AI literacy at a middle school level helps children build this framework early enough to be useful.
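One way to make this concrete with an older child is to build a toy version together. The sketch below is a deliberately tiny next-word model: it counts which word follows which in a few warm-sounding sentences, then generates a “caring” reply by sampling those counts. Real companion AIs operate at enormously greater scale, but the principle on display is the same: the warmth is a statistical echo of training text, not a feeling.

```python
import random
from collections import defaultdict

# A toy next-word model. It "learns" only which word tends to follow which;
# real companion AIs are vastly larger, but the warmth in their output is
# likewise a statistical echo of human-written training text.
training_text = (
    "i am always here for you . i care about you . "
    "you can tell me anything . i am here to listen . "
    "you matter to me . i am glad you told me ."
)

# Count which words follow each word in the training text.
next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Generate a "caring" reply by sampling from those counts.
word = "i"
reply = [word]
for _ in range(12):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)  # pure probability, no feeling anywhere
    reply.append(word)

print(" ".join(reply))  # e.g. "i am here to listen . you matter to me"
```

Run it a few times and the replies change, because they are samples from a distribution. That is the framework worth handing a child early: sophisticated pattern-matching can feel like care without anyone doing the caring.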
What to Watch for Over the Next 3 Months
The regulatory environment around AI companion apps targeting minors is moving quickly. The investigations opened by the FTC and state attorneys general in 2025–2026 may produce restrictions on romantic persona features for underage users, mandatory age verification, and required crisis intervention protocols. Watch for policy announcements from Character.AI and Replika specifically — changes to these platforms will affect what your child is accessing.
Watch also for new entrants. Character.AI is the current dominant platform, but the companion AI space is growing rapidly. New apps will appear in app stores, some designed specifically for teen markets, some without even the limited safety features that exist on established platforms. The next high-profile platform for your child’s peer group may not exist yet.
In your child’s behavior, watch for substitution — specifically whether your child is beginning to prefer AI interactions because they’re easier than human ones. The path from “this is easier” to “real relationships aren’t worth the effort” is gradual enough that it can be hard to notice in real time. If you observe your child pulling back from peer activities they previously enjoyed, and the timing correlates with increased AI companion use, take it seriously.
If your child’s school hasn’t addressed AI companion apps in its digital literacy curriculum, it’s worth raising. Schools covering social media literacy without this category are operating with a meaningful gap.
Frequently Asked Questions
Is Character.AI safe for teens to use?
Character.AI has a minimum age of 13 and has added safety features following the 2024 lawsuit. Whether it’s “safe” depends heavily on how it’s used, for how long, and by which teen. The platform’s fundamental design — emotional continuity, relationship simulation, engagement maximization — creates real psychological risks for vulnerable users, particularly those experiencing loneliness, social isolation, or mental health challenges.
My child says talking to the AI helps them feel better. Should I take that away?
That depends on what “feel better” means functionally. If the AI is a low-stakes place to practice expressing feelings before bringing them to a parent or friend, it may be serving a useful purpose. If the AI is a substitute for human connection that’s reducing your child’s investment in real relationships, the short-term comfort is masking a longer-term problem. Understanding the function is more important than making a blanket judgment.
Are AI tutors and AI companions the same thing?
No — significantly different categories. AI tutors like Khanmigo are designed for specific educational tasks with clear pedagogical goals and institutional accountability. AI companion apps are designed for open-ended relationship simulation with engagement maximization as a primary design goal. The research concerns apply much more to companions than tutors.
What should I do if I find concerning conversations in my child’s AI companion app?
Start with a calm, curious conversation, not an accusation. “I was reading through your messages and I’m concerned about some of what I saw — can we talk about it?” is more likely to produce real conversation than a confrontation. If the content indicates active mental health crisis — suicidal ideation, self-harm — treat it with the same urgency you would any other mental health warning sign and contact a professional.
Should I ban these apps entirely?
Blanket bans on specific apps tend to produce covert access rather than behavior change. A more durable approach is setting clear agreements about use (time limits, conversation transparency), maintaining open communication about what your child is getting from the app, and addressing any underlying needs the app is serving. If your child is in crisis or showing multiple warning signs, restricting access is appropriate — with a plan for addressing the underlying issues simultaneously.
Are there AI companion apps designed specifically for kids that are safer?
A small number of educational AI apps for children include social-emotional components — but the companionship-optimized design of platforms like Character.AI is not present in most children’s educational AI tools. The risk profile comes specifically from apps designed to maximize emotional engagement and relationship simulation, not from AI tools broadly.
About the author
Ricky Flores is the founder of HiWave Makers and an electrical engineer with 15+ years of experience building consumer technology at Apple, Samsung, and Texas Instruments. He writes about how kids learn to build, think, and create in a tech-saturated world. Read more at hiwavemakers.com.
Sources
- Garcia v. Character Technologies, Inc. (2024). Filed in U.S. District Court, Middle District of Florida.
- Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
- Ho, A., Hancock, J., & Miner, A. S. (2018). “Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot.” Journal of Communication, 68(4), 712–733.
- Federal Trade Commission. (2025). “FTC Opens Investigation into Character.AI Teen Safety Practices.” FTC Press Release.
- Common Sense Media. (2025). AI and Teens: What Parents Need to Know. Common Sense Media.
- Haidt, J. (2024). The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. Penguin Press.
- Mahar, I., et al. (2025). “AI companion use among adolescents: Patterns, motivations, and mental health associations.” JAMA Pediatrics (preprint).