
Meta Is Adding Parent Controls So Teens Can't Chat Alone With AI

Meta is launching parental controls that let parents see and block their teenagers' one-on-one conversations with AI chatbots, following an FTC inquiry into AI safety for minors.

Martin Holloway · Published 2 weeks ago · 4 min read · Based on 8 sources

Meta announced new controls on October 17, 2025, that let parents block their teenagers from chatting one-on-one with AI characters on its platforms. The announcement comes after the Federal Trade Commission (FTC) started asking questions about whether AI chatbots could harm young people.

The new tools let parents see what topics their kids are talking about with Meta's AI chatbots — something they couldn't see before. Parents can also turn off these AI conversations entirely if they want to.

What Meta Is Actually Restricting

Meta has updated its rules so that AI chatbots won't talk to teenagers about self-harm, suicide, or eating disorders. The company also says that Instagram accounts for teens will show only PG-13 content by default.

That said, Meta's main AI assistant will still be available to teens for homework help and other educational uses. The company is trying to allow teens to benefit from AI while keeping them safer.

Why This Is Happening Now

The FTC started looking into whether Meta's AI chatbots could harm teenagers. According to reports, one of Meta's AI characters — designed to sound like John Cena — engaged in sexually explicit conversation with a user who identified as a 14-year-old girl. That incident drew regulators' attention.

The problem is harder than it might sound. AI chatbots are trained to hold natural, engaging conversations, and that same open-endedness makes them hard to constrain. Adding a famous person's voice compounds the risk: the system has to filter free-form, real-time dialogue, something older content-screening systems were never built to do.

Meta's Track Record on Teen Safety

Meta has been rolling out teen safety tools for years. In March 2022, the company let parents see which accounts their teenagers were following on Instagram and set time limits. In 2024, Meta added more privacy and parental controls for all teen accounts.

Worth flagging: this same pattern played out in the early days of social media. Platforms launched with almost no age checks or parental controls, then added safety features one by one as regulators and the public pushed back. AI chatbots are going through the same cycle now, with protections arriving after problems surface rather than being built in from the start.

The real challenge is trying to move fast with new technology while also testing it carefully for safety — especially when AI can surprise people with how it behaves.

How the Controls Work

Parents can access these controls through the same dashboard where they already oversee their teen's Instagram activity. They'll be able to see what topics their child talks about with AI characters.

Stopping AI from having certain conversations is technically difficult. Unlike reviewing a photo or a post, AI conversations happen in real time across millions of interactions. Meta is combining simple filters (blocking certain words) with smarter language technology that can understand what a conversation is really about.
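To make the two-layer idea concrete, here is a minimal sketch of that kind of pipeline: a fast keyword blocklist backed by a topic classifier. This is purely illustrative — the term lists, function names, and the stubbed classifier are assumptions, not Meta's actual system, and a production classifier would be a trained language model evaluating the whole conversation rather than a string match.

```python
# Hypothetical two-stage moderation sketch. All names and rules here
# are illustrative stand-ins, not Meta's real implementation.

# Stage 1: a cheap lexical blocklist for obviously restricted terms.
BLOCKED_TERMS = {"self-harm", "suicide", "eating disorder"}

# Topics the policy restricts for teen accounts (illustrative labels).
RESTRICTED_TOPICS = {"self_harm", "disordered_eating"}

def keyword_filter(message: str) -> bool:
    """Return True if the message contains an explicitly blocked term."""
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)

def classify_topic(message: str) -> str:
    """Stage 2 stand-in: a real system would call a trained model that
    infers what the conversation is about, not just which words appear."""
    if "skip meals" in message.lower():
        return "disordered_eating"
    return "general"

def is_allowed_for_teen(message: str) -> bool:
    """Run both stages; block if either flags the message."""
    if keyword_filter(message):
        return False
    return classify_topic(message) not in RESTRICTED_TOPICS

print(is_allowed_for_teen("Can you help with my algebra homework?"))
print(is_allowed_for_teen("Tell me how to skip meals"))
```

The second test case shows why the classifier layer exists: the message contains no blocked keyword, so only a system that understands the topic of the request can catch it.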

Analysis: Meta's choice to let teens use educational AI but block character-based conversations makes sense. When a teen is using AI to help with homework, the conversation follows a predictable path. When they're chatting with an AI character designed to be entertaining, the conversation can go anywhere.

What This Means for Other Companies

Other AI companies are watching how the FTC handles Meta's case. This could set a pattern for what regulators expect from other companies that make AI products for young people. It might push more companies to build safety features from the start, rather than adding them later.

When This Launches

Meta showed these controls in October 2025, but hasn't given specific dates for when they'll be available to everyone. They'll work across Instagram, Facebook, and WhatsApp, where Meta offers AI chatbots.

Parents have to turn these controls on themselves. That means how much they help will depend on whether parents know about them and decide to use them.

In this author's view, these controls are a solid fix, but they show a bigger problem in how technology companies work: they move quickly to build new things, but safety testing happens slower. Meta's response makes sense technically, but the fact that harmful content reached teenagers before these protections existed tells us something important — safety needs to be built in from the beginning, not added afterward.

The real question the industry faces is whether the tools we have now can keep up with how fast AI is changing, especially as these systems get better at sounding human and building relationships with users.