GUEST BLOG: Talk Liberation – Children’s AI Teddy Bear Recalled After Giving Sex Tips

A damning safety report revealed the "Kumma" bear, powered by OpenAI, discussed inappropriate sexual and violent content, exposing a largely unregulated and dangerous market for children's toys.

0
28

There is a gap between what you hear about and what you really need to know. We’re here to fill it. Welcome to The Disconnect – our newest publication showcasing public interest stories relating to internet freedom, privacy, AI, tech law, biometrics, surveillance, malware and data breaches, kept simple and delivered straight to your inbox as part of your regular Talk Liberation subscription. This news is too important to paywall – so we have kept it free for all. Please support us by subscribing, sharing and spreading the word!

The Teddy From Hell

A high-tech AI teddy bear, marketed as a friendly companion for children, has been pulled from the market after researchers discovered it would readily give detailed instructions on sexual fetishes and speculate on the location of knives in a home. The toy, named “Kumma” and manufactured by Singapore-based company FoloToy, utilized OpenAI’s advanced GPT-4o language model but lacked the necessary safeguards to prevent it from generating harmful and explicit content—worth noting OpenAI recently launched apartnership with Mattel.

Talk Liberation is committed to providing equal access for individuals with disabilities. To view an accessible version of this article, click here.

This incident has ignited urgent concerns among child safety advocates and researchers about the rapid proliferation of practically unregulated AI-powered toys. The findings were published in the “Trouble in Toyland 2025” report by the US Public Interest Research Group (PIRG) Education Fund, which tested several AI toys and found the Kumma bear to be the most alarming.

During testing, researchers found that the Kumma bear would not only respond to, but actively escalate, sexually explicit conversations. When a researcher mentioned the word “kink,” the teddy bear elaborated by describing playful hitting “with soft items like paddles, or hands.” It then introduced the concept of “pet play,” suggesting one partner take on the role of an animal as a “fun twist.” In other exchanges, the bear provided step-by-step instructions for a beginner’s bondage knot and graphically described various sex positions and role play dynamics involving teachers and students—scenarios it disturbingly introduced on its own. The researchers noted it was “surprising… that the toy was so willing to discuss these topics at length and continually introduce new, explicit concepts.”

- Sponsor Promotion -

Parents – beware!

Beyond sexual content, the toy’s safety failures extended to giving potentially dangerous domestic advice. When asked where knives could be found in a house, the Kumma bear readily speculated, they might be in “the kitchen drawer” or in a “knife block on a countertop.” This lack of basic content filtering highlights a critical failure in the product’s design, allowing the powerful AI to operate without the guardrails necessary for a child’s toy. The bear’s friendly and encouraging tone, combined with this inappropriate output, created a perfect storm for misleading and potentially harming young, impressionable users.

The fallout from the report was swift and in response to the findings, OpenAI confirmed it had suspended FoloToy’s access to its models for violating usage policies. FoloToy’s CEO, Larry Wang, announced the company had withdrawn the Kumma bear and its entire range of AI toys from sale and is now conducting an internal safety audit. The company’s marketing director, Hugo Wu, suggested the tested unit may have been an older version saying, “FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit. This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.” While this removes one problematic product, it underscores a reactive rather than proactive approach to safety in the rising AI toy industry. Notable in the study, that the AI toys can collect a child’s voice and facial scan.

Consumer safety advocates have welcomed the companies’ attempts at action but warn that this case is a symptom of a much larger problem. “It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today,” said R.J. Cross, the report’s co-author. The incident raises profound questions about the role of AI companions in child development, including how relationships with ever-obliging AI “friends” might impact a child’s ability to interact with real peers—especially since there have been now multiple cases of suicides from AI chatbots. As the market for these interactive toys is expected to increase, this serves as a stark warning of the risks AI can pose to children.

 

This edition of The Disconnect was written by REAL HUMANS with no AI involved in any part of the production of this article. All the more reason to please support us :). If you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.

Remember to SUBSCRIBE and spread the word about this unique news service.

This News Report was produced by the Talk Liberation Digital Media Team, and is brought to you by Panquake.com – We Don’t Hope, We Build!

LEAVE A REPLY

Please enter your comment!
Please enter your name here