OpenAI is preparing to let ChatGPT generate erotic content for verified adult users. CEO Sam Altman framed the move as part of a broader principle to “treat adult users like adults,” granting them more freedom to use the technology as they wish. It marks a notable shift for a company that spent years enforcing strict content filters to block sexually explicit material, and that has faced backlash from parents who allege that interactions with ChatGPT contributed to their children’s suicides. OpenAI attributes the change to persistent user demand, and it signals an unsurprising but new chapter in the evolution of mainstream AI assistants, which are increasingly designed to present human-like personas.

The technical details of the venture remain intentionally vague. Altman has avoided calling the new function “erotic mode,” suggesting instead a loosening of restrictions behind enhanced age-verification systems. Altman writes, “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).” Exactly what will be allowed – authored erotic fiction, interactive risqué conversation, or something akin to a customizable AI companion – is still undefined. The ambiguity leaves OpenAI room to adjust, but it has drawn intense scrutiny from critics who question how the company could possibly enforce consistent, safe boundaries with users.
Perils for Children and Digital Privacy Pitfalls
Central to the controversy is the challenge of age verification itself. OpenAI says it is developing an “age-prediction system” to gate access, defaulting to a restricted experience when a user’s age is uncertain. However, existing age-gating technology is notoriously easy to circumvent. Without robust, transparent methods for excluding minors, the policy could expose younger users to inappropriate content despite the company’s assurances, while also pushing the service toward more invasive surveillance in the name of verification.
Privacy is another critical concern. Intimate AI interactions generate highly sensitive personal data that would likely be stored by a billion-dollar tech company and unknown third parties – potentially forever. A leak or breach could produce unprecedented personal exposure if or when security fails. Commercial data collected online is already widely shared with Big Data firms or subject to third-party agreements, be they commercial or governmental. The idea that these conversations would never be retained, repurposed, or misused is fantastical, yet that belief is paramount to engaging users and keeping their faith in such a deeply vulnerable context.
Shiny Toys Over Ethics
Beyond access control, interactive AI may pose unique psychological risks compared to static adult content. The conversational nature of chatbots could foster unhealthy parasocial attachments, distorting users’ expectations of real human intimacy and deepening social isolation, with consequences for society that are not yet fully understood. Even members of OpenAI’s own expert safety council expressed concern that the rollout was announced without their consultation and lacks independent validation of its mental health safeguards. Jan Leike, who co-led OpenAI’s safety-focused superalignment team before departing for Anthropic, quit in 2024 saying, “Over the past years, safety culture and processes have taken a back seat to shiny products.”
Altman has defended the move, insisting that safety, particularly for teens, remains a priority and that no policies related to mental health are being loosened. He positions the change as analogous to an R-rated movie – content gated for adult consumption. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he said.
As OpenAI navigates this next phase, society is stepping into a charged debate that pits user autonomy against a tech giant’s responsibility – or its potential for exploitation. There are also deeper questions about the dangers of AI for children and vulnerable people: targeting lonely, isolated humans seeking AI companionship creates a potentially profitable, yet disturbingly dystopian, emerging market.
This edition of The Disconnect was written by REAL HUMANS with no AI involved in any part of the production of this article. All the more reason to please support us :). If you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.