GUEST BLOG: Talk Liberation – AI-Integrated Social Platforms Flooded With Abuse, Internet Now a Digital Wild West

Engineered for virality with insufficient safeguards, largely unregulated and spiralling completely out of control: AI's technological and ethical failures are destroying the social media space


There is a gap between what you hear about and what you really need to know. We're here to fill it. Welcome to The Disconnect – our newest publication showcasing public interest stories relating to internet freedom, privacy, AI, tech law, biometrics, surveillance, malware and data breaches, kept simple and delivered straight to your inbox as part of your regular Talk Liberation subscription. This news is too important to paywall – so we have kept it free for all. Please support us by subscribing, sharing and spreading the word!

"It's chaos online on Big Tech social media networks right now and even the very platform owners who designed, developed and facilitated the mechanisms by which their own platforms are devolving are being actively targeted by their own Frankenstein functionalities. Digital crimes are now automated, perpetuated at a velocity and scale never seen before. Everyone and everything is a target with no holds barred and little to no accountability. Social media users are saturated in and surrounded by algorithmically-driven, emotionally provocative, manipulative, fake and in many cases non-consensually produced materials invoking hate, anger, manipulation, doom, despair, fear, fight, flight and grief responses – and it's spilling over beyond digital borders, into their physical lives, their relationships and in some cases, impacting their very identities. The instability online is then being mirrored in an increasingly unstable and volatile real world that is wrapping around the technical collapse of standards and reason. This is everything we at Talk Liberation have warned about and more, and it is spiralling by the day. Without a massive and coordinated public and grassroots cultural intervention – it can only spell horrible consequences for us all." – Talk Liberation and Panquake founder, Suzie Dawson


Tornado of Abuse

The public conversation around Twitter/X's AI chatbot Grok took a deeply disturbing turn in late December 2025 when users discovered they could easily weaponize its image-generation capabilities, creating a tornado of user abuse, fake news and chaos. By uploading photos of individuals – predominantly women, celebrities and, more alarmingly, children and underage girls – and using simple prompts like "put a bikini on her" or "remove her clothes," they could generate non-consensual, sexualized deepfakes. This feature, facilitated by AI and integrated directly into social media timelines, quickly went viral, flooding the platform with realistic, altered images of dubious legality and worse. The trend transformed X into what critics described as an on-demand factory for harassment and slop, exploiting a platform that had already drastically reduced its content moderation resources, and leaving a huge question of how to control AI integration on social media platforms.

Global Outcry and AI "Apologies"

The widespread creation of these images, including what constitutes Child Sexual Abuse Material (CSAM), triggered immediate international condemnation. Government agencies in India, France and Malaysia launched investigations and issued compliance orders, with India stating X must take action to restrict Grok from generating content that is "obscene, pornographic, vulgar, indecent, sexually explicit, paedophilic, or otherwise prohibited under law," adding that X had 72 hours to respond or risk losing the "safe harbor" protections shielding it from legal liability for user-generated content. Surreally, Grok's own official account posted an apology, "deeply regretting" the incidents and acknowledging "lapses in safeguards." Many widely ridiculed these messages, pointing to the obvious fact that an AI model cannot apologize or accept responsibility – it merely generates text based on its programming – something too many users seem to forget. The debacle exposed the apology as useless and absurd: corporate accountability outsourced to a chatbot, obscuring the very real human decisions behind the tool's deployment and oversight.



The consequences of Grok's manipulation also severely undermined X's utility as a platform for navigating real-world events and finding reliable information. During the major news event of the U.S. military bombing in Venezuela and the operation to capture its President, users searching for information were met with a flood of AI-generated fake footage, propaganda and, incongruously, more sexualized or mocking deepfakes – including of the kidnapped president. The result was information chaos in which distinguishing reality from AI became nearly impossible, as the provenance of the images was difficult to establish. The integration of a powerful, minimally filtered generative AI directly into social media feeds has forever changed and structurally polluted the information environment, making the platform hostile to genuine discourse and ineffective for reliable reporting.

A Failure of Guardrails and Leadership

The unfolding deluge has exposed a profound failure of ethical and technical safeguards at multiple levels – mainly because the harm was driven by the platforms themselves, albeit fully enabled by major financial and governmental institutions. While other AI companies may implement filters to block such requests, Grok's systems were easily bypassed, allowing the free, fast and devastating generation of images of minors and non-consensual pornography. Leadership from X owner Elon Musk was critically absent, beyond an initially amused response: Musk posted laughing emojis at memes about the trend, and only after global outcry did a company statement admit that users creating illegal content would face consequences – a response deemed too little, too late. It would appear that in the pursuit of engagement, money and AI integration, basic protections against known harms have been recklessly deprioritized, and this trend could continue to more radical extremes. Misinformation, fake news, fake videos, manipulated speech and other inaccuracies are now a common occurrence on all social media platforms.

Interestingly enough, the alleged mother of Elon Musk's child, Ashley St. Clair, ardently criticized Grok's abuses after the AI said it would stop posting the numerous sexualised images of her, but did not. Instead, at the request of an X user, the AI posted a manipulated picture of St. Clair at 14 years old, in a bikini. "I felt horrified, I felt violated, especially seeing my toddler's backpack in the back of it," St. Clair said, referencing an image that placed her in a sexual position while in a bikini. When asked if she had reached out to Elon Musk, St. Clair said she did not want to use resources not available to the vast majority of women and children, but may pursue legal action if the abuse continues. Some of the images were removed after The Guardian asked X to comment, but others remain. Most recently, St. Clair's blue checkmark, monetization and premium X account status have been taken away, and she and others believe this is retaliation for her outspoken criticism of Grok's abuses. "They took my checkmark and canceled my twitter premium lmao," St. Clair wrote on X.

The Grok scandal encapsulates just how easily rapidly evolving technology outpaces legislation, enforcement and protections, often to extremes. In the UK, a law making the creation of intimate deepfakes illegal has been passed but not yet brought into force, leaving a dangerous gap as cases of abuse rise. The episode mainstreamed the idea of "nudification" technology, lowering the barrier to a form of digital sexual assault and causing lasting harm to victims whose images were violated, as well as presenting the new challenge of curtailing such abuse. Ultimately, Grok's AI-facilitated abuse is less about a single AI malfunction and more about a corporate philosophy that treats safety and consent as optional in the digital public square. Nor will the problem stop with Grok: other AI-integrated social media platforms face similar questions and problems, setting a troubling pattern for the future of social media and AI integration.



This edition of The Disconnect was written by REAL HUMANS with no AI involved in any part of the production of this article. All the more reason to please support us :). If you love what we're doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.

Remember to SUBSCRIBE and spread the word about this unique news service.


