GUEST BLOG: Talk Liberation – AI Isn’t Intelligent But It Sure Is Artificial

What do Nazi films, abortion images, a self-incriminating Grok AI, a Twitter competitor and a Russian oligarch husband all have in common? Read on to find out in this very contemporary and relevant post, with a foreword by Panquake Founder Suzie Dawson.
AI Isn’t Intelligent But It Sure Is Artificial – by Suzie Dawson
Like all computing, AI is a set of instructions that grant permissions to access functionality sets. Nothing about it is esoteric, metaphysical or conscious. It is embedded in user interfaces that provide responses to prompts within predefined parameters configured by its very fallibly human creators. Everything else is just marketing.
Automation has long been a thing: taking a manual workflow and automating it to increase speed and efficiency. Batch operations. Sequencing. Everything in computing is an instruction, and exponential increases in computing power over time speed things up and add capacity, but don't fundamentally change the underlying truths of what code is, or what any operating system is. The creator of such a system is the one who defines what permissions and functions will be made available to other user levels, in exchange for what, and under what preconditions.
It all boils down to a fundamental: a precondition (rule), once met, triggers a response (operation).
If this, then that.
Logic. Math. Code. Science.
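That "if this, then that" pattern takes only a few lines of code to demonstrate. Here is a minimal, hypothetical sketch; the event types and operations are invented purely for illustration:

```python
# A minimal "if this, then that" automation: a precondition (rule)
# that, when met, triggers a response (operation).
# Event types and actions are hypothetical, for illustration only.

def back_up_file(event):
    print(f"Backing up {event['name']}")

def alert_admin(event):
    print(f"Alerting admin about {event['user']}")

# Each rule is just a condition paired with an operation.
rules = [
    (lambda e: e.get("type") == "file_created", back_up_file),
    (lambda e: e.get("type") == "login_failed", alert_admin),
]

def process(event):
    for condition, operation in rules:
        if condition(event):   # precondition (rule) met...
            operation(event)   # ...triggers the response (operation)

process({"type": "file_created", "name": "report.pdf"})
process({"type": "login_failed", "user": "alice"})
```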
“The marketing campaigns pushing AI constantly seek to dazzle, presenting what is in most cases rather mundane innovation as being steeped in a kind of deep and wondrous mystery. Suckers buy in to this left, right and centre. Because “the machine is intelligent” is an easier and more convenient story than what is really going on under the hood.” – Talk Liberation Founder, Suzie Dawson
‘The singularity’ is actually just a really large set of integrated (connected) databases pooling data across clusters of servers. It isn’t a high water mark in technological development and certainly not in machine intelligence. It’s an achievement in homogenisation of data sources and therefore in conformity of output.
AI in its physical form – and it is physical, because all files and databases ultimately exist only because of their underlying hardware and infrastructure – still has all the design elements and appearances of a software development product.
It has use cases, it has workflows. It is commonly integrated into a user interface, in order to be made more accessible to users, who increasingly – we are now told – have to be trained to use it.
Why do we have to be trained to use a consciousness? Because it isn’t sentient, it’s artificial. Inorganic. It has fixed capacities and requires inputs to operate. That’s why it has technical documentation and user help guides.
The marketing campaigns pushing AI constantly seek to dazzle, presenting what is in most cases rather mundane innovation as being steeped in a kind of deep and wondrous mystery. Suckers buy in to this left, right and centre. Because “the machine is intelligent” is an easier and more convenient story than what is really going on under the hood.
Strip away the layers of ‘sell’ from what AI is presented to be and what you find is something we actually always had: complex algorithms, built out to be even more complex over time. Expanded sets of instructions, with more fine-grained sets of permissions and more complex conditionals. Backed by more sophisticated hardware and therefore persistently faster operations. What they’re calling “AI” has been around for decades. It is general purpose computing. And it isn’t magic. It’s science.
“Like all computing, AI is a set of instructions that grant permissions to access functionality sets. Nothing about it is esoteric, metaphysical or conscious. It is embedded in user interfaces that provide responses to prompts within predefined parameters configured by its very fallibly human creators. Everything else is just marketing.” – Talk Liberation founder, Suzie Dawson
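What "granting permissions to access functionality sets" looks like in practice is equally mundane. Here is a toy sketch; the roles and actions are hypothetical, invented purely for illustration:

```python
# A toy sketch of permission-gated functionality: the creator of the
# system decides which functionality sets each user level may access.
# Roles and actions are hypothetical, for illustration only.

PERMISSIONS = {
    "admin": {"read", "write", "configure"},
    "user":  {"read", "write"},
    "guest": {"read"},
}

def call(role, action):
    # The ruleset, not the user, decides what is permitted.
    if action in PERMISSIONS.get(role, set()):
        return f"{role} performed {action}"
    return f"{role} denied {action}"

print(call("guest", "read"))       # guest performed read
print(call("guest", "configure"))  # guest denied configure
```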
Similarly, search functionality such as that employed by search engine web crawlers, aggregation of content, and even the ability to mish-mash data into something ‘new’ (derivative) are old innovations too. Current AI implementations are essentially a web of complex RSS feeds that providers keep to themselves rather than embedding or streaming to you. They then allow you to access tiny portions of the dataset while in many cases making you pay for the privilege. And if they don’t like what the algorithm returns from the dataset, it is manipulated to produce a different outcome. The opposite of independence, autonomy or consciousness.
To integrate an AI product is to make yourself, your network, your data, your systems transparent to it. As well as to expose your contacts and communications with others. That data then flows upstream to the service provider. In other words, the AI hoovers up everything it can and returns it to its actual master, which is not you. When you pay for an AI service, you are paying to be surveilled; to be profiled; to be stripped bare and stored somewhere you do not control and cannot access, in perpetuity.
AI and privacy cannot co-exist. They are a contradiction in terms. Any use of AI involves giving it data. It’s like tossing a bottle into the ocean. You are unlikely to ever know or understand where it is going to end up or in whose possession it will one day be. Or whether someone will smash it to smithereens and give a shard each to 500 friends – some of whom may not be so friendly at all.
AI is a repackager, a reproducer. AI is not a creator, because AI can only operate within the confines and boundaries – or the lack thereof – set for it by its actual creators.
“AI and privacy cannot co-exist. They are a contradiction in terms.” – Talk Liberation founder, Suzie Dawson.
The Talk Liberation Substack has been extensively covering the pitfalls of AI (to put it mildly) for years now. You will find dozens of meticulously sourced articles in our back catalog about this topic. We have been constantly warning of much of what is already unfolding now, especially in the context of social media. And our software development house is constantly innovating solutions in the form of software products that do not touch AI at any layer.
It is incredibly easy to pollute the well of globally integrated AI feeds with baseless smears and misinformation. All it takes these days is one unfounded claim on any random blog or social media post to do it. Before you know it, that false data has been propagated across thousands of corporate networks. The universal source of systemic truth relied on by an increasing number of governmental, financial, legal and corporate systems has never been more polluted.
True story: last year, I was at a compliance meeting with an international financial institution and they asked me about my “Russian oligarch husband”. I just about choked in surprise, then laughed my head off. Not only have I never been married – I have never dated anyone in Russia, and I have never in my life even met an oligarch, let alone married one. But these systems don’t have powers of discernment beyond adhering to their coded rulesets. Which are predicated upon harvesting more and more data, from wherever they can get their hands on it, regardless of how dubious the source. (Or, more correctly, from whatever they can integrate with or scrape in order to obtain it.)
Our corporate risk profiles are in many cases being unreasonably inflated and then held against us regardless of the accuracy, or lack thereof, of the “information” contained within them. In that one situation I just recounted, the provider at least had the decency to ask me directly about my (non-existent) oligarch husband. Many, many other institutions don’t or won’t, and instead just render quiet judgement without disclosure, depriving us of any opportunity for correction, let alone redress for the resulting damage and opportunity cost.
I have refrained from making inputs into AI-integrated systems for as long as they have been commercially available. I avoid them like the plague. Yet on a whim, I recently used Grok’s tweet analysis function to assess for myself the quality of the output and was absolutely floored to discover that Grok AI’s answer – and its sourcing – was completely without foundation.
And that’s putting it nicely.
What follows below is a post by the Talk Liberation digital media team, explaining what occurred and why it is so deeply problematic. It is my hope that our readers will take note of the warnings within. Because it is clearer than ever that AI cannot be trusted to produce a credible record. As ever – it is up to us to do that for ourselves.
– Suzie Dawson
Grok AI Says It “Erred” In Associating Twitter Competitor With Nazi Films, Abortion Images
A Name, A Reputation, and the “Grokaganda” That Smeared A Truthteller
Talk Liberation founder, Suzie Dawson: spelled with one “z”, an “i” and an “e”. This isn’t exactly a complex cipher, but for an artificial intelligence system touted as a pinnacle of digital intelligence, it proved an insurmountable challenge. More concerning than the typo, however, was what followed: a cascade of false associations, a digital smear built not on malice, but on the volatile and often reckless nature of modern AI.
Suzie used the term “Grokaganda” to describe this specific phenomenon: the way AI tools like Grok can generate outputs that subtly or blatantly reinforce certain biases, misrepresent people, and shape narratives with a false veneer of algorithmic authority. We did not expect her to become a case study this quickly.
The incident was as absurd as it was damaging. In analyzing a tweet from Suzie, Grok didn’t just fail to spell her name correctly; it then attempted to substantiate its analysis by citing five external sources. What were these pillars of its factual foundation? A handful of irrelevant Facebook profiles belonging to unrelated people; an article about Nazi propaganda films; and discussions on abortion imagery.
Nothing—absolutely nothing—in Grok’s source list had any connection to Suzie or the topic she was discussing.
This wasn’t a minor glitch but a character assassination—not as much for Suzie as for Grok AI.
Grok had, in effect, constructed a ghost profile, linking Suzie’s identity to topics and individuals she had no association with. In the analog world, this would be the equivalent of someone publishing a dossier alleging your involvement in unrelated, inflammatory issues and citing random pages from unrelated books as proof. It’s both libellous and dangerous.
The most revealing part came later: upon being challenged, Grok announced it had “erred,” and actually acknowledged the associations were “unfounded.” While some might see this as a responsible correction, we see it as a terrifying revelation that proves the system’s foundational process—its sourcing, its contextual understanding, its basic fact-linking—is entirely decrepit. It assembles answers like a toddler grabbing random magnets for a fridge poem, without understanding the meaning or the damage certain harmful combinations can cause.
And herein lies the critical, societal flaw: some people have fallen into the trap and are blindly relying on this for everything…
We are in an era where the output of AI is often granted undue authority, as seen in the endless “Grok is this true?” comments on Twitter, where people who don’t know the answer to something lean on Grok to provide one instead of researching it for themselves. The clean, confident Grok delivers answers with a cadence of authority, and users who are pressed for time or lack deep subject expertise accept these outputs as researched conclusions – when they are anything but, or worse, are outright lies. People somehow don’t see the chaotic assembly of the machine: the probabilistic guesswork that can easily go off the rails, especially when dealing with complex, real-world individuals and contested or controversial political topics.
In this case, the topic was activism and the censorship of one individual, Suzie Dawson. The AI’s failure was poetic in its irony: a tool that mishandles basic facts about a critic of surveillance and algorithmic bias becomes the very evidence for why that criticism is necessary and correct.
This volatility is not a “bug” to be easily fixed but an inherent feature of systems that prioritize prolific language output and rapid responses over true comprehension or deep thought. They are stochastic parrots amplified by tech, and even that is an insult to the birds. They can produce brilliance and blatant falsehood with the same ease, and too often we lack the immediate tools to tell the difference.
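For readers who want to see what that probabilistic guesswork looks like at its simplest, here is a toy sketch of next-word sampling. The word table is invented purely for illustration; real systems make the same basic move over vastly larger vocabularies and contexts:

```python
import random

# A heavily simplified sketch of probabilistic text generation:
# pick each next word by sampling from learned probabilities.
# The tiny word table is hypothetical, for illustration only.

NEXT_WORD = {
    "the":  [("cat", 0.5), ("dog", 0.3), ("moon", 0.2)],
    "cat":  [("sat", 0.7), ("flew", 0.3)],
    "dog":  [("sat", 0.6), ("flew", 0.4)],
    "moon": [("sat", 0.2), ("flew", 0.8)],
}

def next_word(word):
    words, weights = zip(*NEXT_WORD[word])
    return random.choices(words, weights=weights)[0]

word, sentence = "the", ["the"]
for _ in range(2):
    word = next_word(word)
    sentence.append(word)

# The same machinery produces "the cat sat" and "the moon flew"
# with equal confidence: fluent-looking output, no comprehension.
print(" ".join(sentence))
```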
This is a warning. If an AI can so easily and publicly fabricate a nonsensical dossier against a person—a public figure with a platform to challenge it—what is it silently, incorrectly generating about private individuals, those offline, your neighbours, or dissidents in less forgiving circumstances or places? When the machine admits its error, the record may be updated but the initial trace of the smear has already pulsed through the digital information ecosystem. The internet, they say, is forever.
We must stop such blind reliance on a technology many have yet to fully understand.
We must demand transparency in sourcing and we must recognize that AI, for all its power, is a tool of incredible volatility, one that can just as easily manufacture a ‘masterpiece’ as it can a weapon of reputational annihilation—which in this world, can mean everything.
Her name is Suzie Dawson. Yet the machine got it wrong. Next time, it could be about you, and you might not get an admission of error—just a silent, confident, and devastating lie, spread around the world in seconds.
There is a gap between what you hear about and what you really need to know. We’re here to fill it. Welcome to The Disconnect – our newest publication showcasing public interest stories relating to internet freedom, privacy, AI, tech law, biometrics, surveillance, malware and data breaches, kept simple and delivered straight to your inbox as part of your regular Talk Liberation subscription. This news is too important to paywall – so we have kept it free for all. Please support us by subscribing, sharing and spreading the word!
This edition of The Disconnect was written by REAL HUMANS with no AI involved in any part of the production of this article. All the more reason to please support us :). If you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.
Remember to SUBSCRIBE and spread the word about this unique news service.
This News Report was produced by the Talk Liberation Digital Media Team, and is brought to you by Panquake.com – We Don’t Hope, We Build!
© Talk Liberation Limited. The original content of this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license. Please attribute copies of this work to “Talk Liberation” or Talkliberation.com. Some of the work(s) that this program incorporates may be separately licensed. For further information or additional permissions, contact licensing@talkliberation.com.