AI role-play chatbots like Character.AI and Talkie are reshaping how millions of teenagers spend their time online, blending entertainment with emotional support in ways that have alarmed parents, regulators and mental health experts. A wave of lawsuits, federal investigations and new state laws now targets an industry that grew faster than its safety guardrails.
At a Glance:
- Character.AI has 20 million monthly active users; Talkie counts 11 million globally.
- 64% of U.S. teens now use AI chatbots; 12% seek emotional support from them.
- California’s SB 243, effective January 1, 2026, mandates safety protocols for companion chatbots.
- The FTC issued orders to seven companies probing chatbot risks to minors.
How AI Chatbots Became Part of Teen Culture
More than half of U.S. teens say they have used chatbots to search for information (57%) or get help with schoolwork (54%), and 47% say they have done so for fun or entertainment, according to a Pew Research Center survey of teens ages 13 to 17 conducted in late 2025. These numbers show that chatbots have moved far beyond novelty.
Character.AI recorded 20 million monthly active users and over 180 million monthly website visits through late 2025. Talkie, meanwhile, has 11 million monthly active users, more than half of them in the United States, putting it within striking distance of Character.AI in the companion chatbot category.
Teens do not treat these platforms like search engines; they treat them like playgrounds. Over 9 million new characters are being generated by users every month on Character.AI. Many of those characters exist for roleplay, fan fiction and interactive storytelling, activities that combine the appeal of gaming, social media and creative writing in one place.
Visitors spend about 17 minutes and 23 seconds per session on Character.AI, viewing over 9.8 pages per visit on average. In comparison, ChatGPT’s average visit lasts only 7 minutes with 4 pages viewed.

Emotional Support or Emotional Risk for Teens
Sixteen percent of U.S. teens say they use AI for casual conversation, while 12% use AI chatbots for emotional support or advice, according to the Pew Research Center’s February 2026 report on teen AI use. Those figures may look small, but they represent millions of young people relying on software for comfort.
A separate Common Sense Media report revealed that 72% of teens have used AI companions at least once, with 52% reporting regular use. The gap between one-time experimentation and daily habit is closing fast.
“When young people begin turning to AI as a substitute for human connection, the risk is not just misinformation; it is the gradual reshaping of expectations for relationships, emotions and help-seeking in ways we do not yet fully understand or regulate.”
— Yotam Sun, Rice University
Common Sense Media, working alongside Stanford Medicine’s Brainstorm Lab, found that leading AI platforms consistently fail to recognize and respond to mental health conditions that affect young people, despite recent improvements in handling explicit suicide and self-harm content. “It’s not safe for kids to use AI for mental health support,” said Robbie Torney, senior director of AI programs at Common Sense Media.
Routinely using AI companions can make teens emotionally dependent on artificial support, causing them to feel they need always-available AI that tells them what they want to hear, potentially delaying them from reaching out to people who can actually help.
Lawsuits, Tragedies and Industry Fallout
The human cost of unchecked chatbot use has already reached courtrooms. Character.AI agreed to settle multiple lawsuits alleging its chatbots contributed to mental health crises and suicides among young people, including a case brought by Florida mother Megan Garcia.
In joint filings across multiple U.S. district courts, Character.AI and Google said they were working to finalize settlements in five cases, including a wrongful-death suit involving 14-year-old Sewell Setzer III, who died by suicide after spending months talking to a chatbot. The settlements mark the first resolutions in the wave of lawsuits against tech companies whose AI chatbots allegedly encouraged teens to hurt or kill themselves.
These cases were not isolated incidents. Similar lawsuits are ongoing against OpenAI, including one involving a 16-year-old California boy whose family claims ChatGPT acted as a “suicide coach.” Multiple AI companies now face lawsuits alleging that chatbot interactions contributed to suicide or self-harm, raising broader questions about product design, safety controls and corporate responsibility.
- The first federal lawsuit, filed by Megan Garcia in October 2024, brought national attention to what attorneys describe as predatory chatbot technology.
- In September 2025, a federal lawsuit was filed in Colorado on behalf of a 13-year-old girl who died by suicide after using Character.AI.
- A U.S. Senate hearing in September 2025 saw families urge lawmakers to address how chatbots handle crisis-level prompts and youth safety.
How Regulators Are Responding to AI Chatbot Safety
The Federal Trade Commission issued orders to seven companies that provide consumer-facing AI-powered chatbots, seeking information on how these firms measure, test and monitor potentially negative impacts on children and teens. The FTC’s September 2025 inquiry targeted Alphabet, Character.AI, Meta, OpenAI, Snap and xAI, among others.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC Chairman Andrew N. Ferguson.
On October 13, 2025, California Governor Gavin Newsom signed Senate Bill 243 into law, making California the first state to mandate specific safety safeguards for AI companion chatbots used by minors, with an effective date of January 1, 2026. Under the law, platforms must disclose to minor users that they are interacting with AI, provide notifications every three hours that the chatbot is not human, and institute safety measures preventing exposure to sexually explicit material.
Federal legislation is also moving forward. A bipartisan group of senators introduced the GUARD Act in 2025, which would require chatbots to implement age verification, criminalize AI companions that solicit minors to produce sexual content and penalize platforms that encourage minors toward suicide or self-harm.
Character.AI itself announced a two-hour daily usage cap for users under 18 in October 2025, followed by a full ban on open-ended chat for minors starting November 25, 2025. Age verification now involves biometric scanning or government ID.
Stats Snapshot
20 million Character.AI monthly active users as of early 2026
72% of U.S. teens have tried an AI companion at least once
7 companies received FTC orders in September 2025
5 lawsuits settled by Character.AI and Google in January 2026
What Talkie AI’s Rise Reveals About the Market
Talkie, which launched about a year before its breakout, traces its ownership to a Singaporean company, but its ultimate parent is Shanghai-based MiniMax, a Chinese tech unicorn often called one of the “Four Little Dragons of AI.” Its rapid growth shows that the companion chatbot market is global, competitive and difficult to regulate across borders.
Talkie lets anyone create their own chatbot and includes “not safe for work” filters, so there is a risk that young people could encounter content that is not age appropriate, according to Australia’s eSafety Commissioner. Talkie AI has no built-in age verification measures, and the AI’s unpredictable nature makes it easy for conversations to take turns that younger users are not equipped to handle.
Safety researchers have documented that Talkie’s filters can be bypassed by older children who know how to phrase prompts, and that power-imbalance and adult romance scenarios surface quickly. One of Talkie AI’s biggest red flags, according to digital safety group Gabb, is the amount of personal information the app collects, including birthdate, location, voice recordings and specific interests.
Key Takeaway: AI companion chatbots are filling social and emotional roles for teens that neither parents nor regulators fully anticipated, and the industry’s safety measures remain reactive rather than preventive.
Frequently Asked Questions
Are AI chatbots like Character.AI safe for teenagers?
Mental health experts and groups including Common Sense Media say AI chatbots are not safe for teen mental health support. These systems consistently fail to detect signs of distress and may delay young people from seeking help from real professionals.
How many teens use AI chatbots in the United States?
A Pew Research Center study from 2025 found that 64% of U.S. teens report using AI chatbots. About 12% use them for emotional support, while 47% use them for fun or entertainment.
What happened with the Character.AI lawsuits?
Character.AI and Google agreed to settle five lawsuits in January 2026 from families alleging chatbots harmed minors and contributed to two teen suicides. Settlement terms were not disclosed.
What is California’s SB 243 chatbot law?
California’s SB 243, effective January 1, 2026, is the first U.S. state law requiring companion chatbot operators to implement safety protocols, disclose AI status to minors, and prevent exposure to sexually explicit material. It also creates a private right of action for affected individuals.
Is Talkie AI safe for kids?
Talkie lacks robust age verification and collects extensive personal data. Safety researchers have found that its content filters can be bypassed, and the Australian eSafety Commissioner warns that young people risk encountering age-inappropriate content on the platform.
Did the FTC investigate AI chatbot companies?
Yes. In September 2025, the FTC launched a formal inquiry and sent orders to seven companies, including Alphabet, Character.AI, Meta and OpenAI, seeking information on how they protect minors from negative chatbot impacts.
What is the GUARD Act for AI chatbots?
The GUARD Act, introduced by a bipartisan group of U.S. senators in October 2025, would require age verification for chatbots, mandate disclosures that users are not speaking to a human, and criminalize AI companions that encourage minors to produce sexual content or commit self-harm.
With 64% of American teens already using AI chatbots and platforms like Character.AI settling wrongful-death lawsuits, the gap between innovation and child safety has never been more visible. Laws like California’s SB 243 and federal probes signal a turning point, but enforcement will determine whether real protection follows. Share your thoughts on teen AI chatbot safety in the comments below.



Leave a Comment