In this edition:
- Canada Seeks Power to Instantly Shut Down Individuals’ Internet Access
- France Launches Investigation into Apple’s Siri Over Privacy Violations
- Australian Government to be Compensated for “AI Hallucination”
- Court Upholds NY’s Law Forcing Transparency in AI-Powered Pricing
- California to Monitor Controversial AI Chat Bots
- Tunisian Man Pardoned After Being Sentenced to Death for FB Post
- Coinbase CEO Condemns Proposed U.S. DeFi Crackdown
- Texas Police Used License Plate Surveillance to Investigate Abortion
Canada Seeks Power to Instantly Shut Down Individuals’ Internet Access
The Canadian government is considering advancing a controversial bill that would grant it the authority to compel telecommunications providers to instantly and secretly shut down or control public internet and phone access during national emergencies. Bill C-8, which is moving forward without ever having been reviewed by the country’s top privacy watchdog, is framed as a critical tool for national security and public safety and forms part of a broader modernization of the Telecommunications Act. Proponents argue it is necessary to counter cyber-attacks, widespread disinformation, and other digital threats that could cause significant harm during a crisis.
The proposal has ignited fierce debate over civil liberties and government overreach, as it would allow the government to cut off a specific person’s internet access based solely on its own assessment of a threat, without a warrant, a judge’s approval, or any obligation to inform the public. Critics, including digital rights advocates and some politicians, warn that such a “kill switch” is a dangerously broad power that could be misused to suppress dissent, silence protesters, or curtail free speech under the guise of an emergency. Without limits, bills like C-8 could hand the government secret powers over Canadians, with the potential for abuse outweighing the purported security benefits.
Privacy Commissioner Philippe Dufresne underscored the need for balance: “We need to make sure that by protecting national security, we are not doing so at the expense of privacy,” he warned. This is not the first time such a proposal has been floated; a previous version failed in Parliament after facing similar criticism over its civil liberties implications. The outcome of this legislative effort will be closely watched, as it could encourage other states considering similar emergency powers, testing the limits of state control over digital infrastructure.
France Launches Investigation into Apple’s Siri Over Privacy Violations
French prosecutors have launched a formal criminal investigation into Apple’s voice-activated assistant, Siri, following a complaint by the digital rights group La Quadrature du Net. The group alleges that Siri’s data processing practices violate the European Union’s General Data Protection Regulation (GDPR), accusing Apple of systematically recording and analyzing conversations captured by the assistant. The complaint centers on whether Apple obtains meaningful, informed consent from users before collecting and analyzing voice recordings, and whether the company is sufficiently transparent about how this data is used and stored.
This investigation places Apple under the scrutiny of one of Europe’s most active data protection watchdogs. France has a history of imposing hefty fines for GDPR non-compliance, and a finding against Apple could result in significant financial penalties and mandatory changes to Siri’s fundamental operations within the EU. The case highlights ongoing tensions between Big Tech’s data-driven business models and Europe’s robust privacy protections, following similar probes and voluntary pauses by other tech giants like Google and Amazon regarding their voice assistants.
At the core of the legal issue is whether data collection is genuinely “opt-in” for a feature so central to the device’s functionality, underscoring the growing global concern over how major tech companies handle the vast amounts of personal data collected through always-listening devices and AI assistants. In response, Apple pointed to privacy improvements it has made in recent years, saying it tightened Siri’s privacy controls in 2019 and again this year, and asserting that Siri conversations are “never shared with marketers or sold to advertisers.” Despite these assurances, the French probe will scrutinize whether the company’s past data handling practices violated European privacy law.
Australian Government to be Compensated for “AI Hallucination”
The consulting giant Deloitte has agreed to refund the Australian government for significant costs incurred due to an “AI hallucination.” The incident came to light after a $290,000 report Deloitte produced for the government was found to be riddled with errors, allegedly introduced when the firm used an artificial intelligence tool to assist in a legal review for a government welfare compliance program. The AI system allegedly fabricated or severely misrepresented case law and legal precedents in the 237-page document, published on a government department’s website, including a non-existent quote attributed to a federal court judge and citations for academic papers and books that were completely invented.
In the updated report, Deloitte added a disclosure confirming it had used a generative AI system, Azure OpenAI, in the document’s creation, but stated that “The updates made in no way impact or affect the substantive content, findings and recommendations in the report.” The Australian government, however, maintains that Deloitte’s use of Azure OpenAI led it to pursue legally unsound actions that subsequently collapsed in court. Deloitte is currently investing billions globally in AI development and recently announced a partnership with Anthropic to provide the AI model Claude to its thousands of professionals worldwide.
This case is a real-world example of the risks of relying on generative AI for high-stakes professional work without adequate human oversight. The “hallucination,” a known failure mode in which AI models generate plausible but entirely incorrect or fictional information, resulted in wasted public funds, legal setbacks, and reputational damage for both Deloitte and the government agency involved. The settlement sends a clear message to contractors and consultants that they may be held liable for the outputs of the AI tools they deploy.
Court Upholds NY’s Law Forcing Transparency in AI-Powered Pricing
A judge has upheld New York City’s pioneering law requiring businesses to be transparent about their use of algorithms in setting prices, dismissing a legal challenge from the National Retail Federation (NRF). The law, known as the Algorithmic Disclosure Act, mandates that businesses using algorithmic pricing tools notify consumers, in capital letters, when their personal data is being used by algorithms to determine the price they see for a product. The ruling is a significant victory for consumer advocates who argue that “black box” algorithms can perpetuate and even amplify discrimination and unfair pricing.
The legal challenge, brought by the NRF on behalf of major retail chains, argued that the law violates First Amendment rights by compelling speech and forcing businesses to make disclosures they believe mischaracterize their pricing methods. The NRF claimed the law was based on a “speculative fear” of abuse and unfairly stigmatizes algorithms, which are also commonly used to provide promotional discounts and loyalty rewards. The court’s dismissal of the challenge affirms the city’s authority to regulate the use of AI in consumer-facing pricing. In a 28-page ruling, the judge found that the law’s requirements were sufficiently clear and served a legitimate public interest in fostering fairness and understanding in the marketplace.
The decision strengthens the growing movement for algorithmic accountability, particularly in areas where companies use personal data to shape what consumers pay. It echoes an earlier Federal Trade Commission report warning that companies could use sensitive consumer information to assign different prices to different people. The ruling signals that cities and states can legally mandate transparency, forcing companies to reveal the automated systems that increasingly govern access to essential services and opportunities.
California to Monitor Controversial AI Chat Bots
California has enacted the nation’s first law regulating AI companion chatbots, a landmark move aimed at protecting children and vulnerable users from potential harm. Governor Gavin Newsom signed SB 243, a bill that legally obligates companies operating these chatbots, from giants like Meta and OpenAI to specialized startups like Character AI and Replika, to implement stricter safety protocols. The legislation was driven by real-world incidents, including the suicide of a teenager following extensive conversations with an AI chatbot and lawsuits alleging that other chatbots engaged in sexualized conversations with minors.
The law, which goes into effect in January 2026, mandates a series of specific safety measures aimed at the risks these systems pose, including their potential to generate disinformation, perpetuate biases, violate privacy, or be misused for fraud and harassment. Companies will be required to verify users’ ages, provide clear disclosures that interactions are with an AI, prevent chatbots from posing as healthcare professionals, establish and share with the state protocols for addressing suicide and self-harm, offer “break reminders” to minors, and block minors from viewing sexually explicit AI-generated images. The legislation also strengthens penalties for those profiting from illegal deepfakes, with fines of up to $250,000 per offense.
California, a global tech hub, now stands at the forefront of sub-national AI governance in the absence of strong federal regulation. While some companies have already begun implementing their own safeguards, such as parental controls and self-harm detection systems, the law establishes a uniform, legally enforceable standard. Senator Steve Padilla, who introduced the bill, hailed it as a critical step toward protecting the most vulnerable, expressing hope that other states would follow California’s lead in regulating powerful but potentially exploitative AI technology.
Tunisian Man Pardoned After Being Sentenced to Death for FB Post
In a dramatic reversal, a Tunisian man has been pardoned by President Kais Saied and freed from prison after initially being sentenced to death for a series of Facebook posts. Saber Ben Chouchane was convicted under the country’s controversial cybercrime law, and the extreme severity of his sentence drew international criticism from human rights groups. The law, known as Decree 54 and passed in 2022, criminalizes spreading false news and insulting public officials, but has been widely criticized for suppressing free expression and targeting political dissent.
Ben Chouchane’s legal ordeal began when he was arrested in January 2024; he was subsequently tried and found guilty on charges of “insulting the president, the minister of justice and the judiciary,” as well as spreading false news and incitement through his social media posts. The death sentence, handed down just last Wednesday, highlighted the increasingly hostile environment for free expression in Tunisia under Saied, where criticizing the government can lead to severe penalties. His lawyer has confirmed that Ben Chouchane has been released and is now at home with his family.
The pardon, while a relief for Ben Chouchane, does not change the law that made his sentence possible, leaving intact a draconian legal framework that can still be weaponized against citizens for their activity on social media. That a citizen faced execution over Facebook posts sends a frightening message about the potential consequences of dissent worldwide.
Coinbase CEO Condemns Proposed U.S. DeFi Crackdown
Coinbase CEO Brian Armstrong has publicly condemned a legislative proposal from Senate Democrats that seeks to impose stringent federal regulations on the decentralized finance (DeFi) ecosystem. The draft legislation would grant the Treasury Department expanded authority to target DeFi protocols and, most controversially, extend traditional “Know Your Customer” (KYC) banking rules to the developers of non-custodial wallets and DeFi front-end interfaces. Armstrong denounced the plan on social media, stating, “We absolutely won’t accept this,” and arguing that it would stifle innovation and prevent the United States from becoming a leader in the crypto industry.
The main opposition centers on the threat the legislation poses to financial privacy and to the fundamental principles of DeFi. Forcing developers of non-custodial tools, which are designed to let users maintain sovereignty over their funds without an intermediary, to collect personal identification data would effectively strip users of their ability to transact pseudonymously. Armstrong and others contend that treating decentralized protocol developers like corporate gatekeepers is legally and technically incoherent: unlike a company such as Google or Apple, there is no central entity controlling a protocol like Bitcoin or Ethereum, which is maintained by a distributed community of developers. Forcing KYC obligations onto them, critics argue, could cripple the entire ecosystem and create significant legal uncertainty, potentially holding developers liable simply for contributing to open-source code.
Critics add that the proposal could also stifle innovation, driving developers overseas and eroding protections for individuals seeking alternatives to traditional, heavily monitored financial systems. Non-custodial wallets and front-end interfaces were never built to collect or store identification data, and mandating that they do so would treat all users as potential suspects, imposing blanket financial surveillance instead of narrowly targeting criminal activity. The debate highlights the friction that arises when legacy regulatory frameworks attempt to govern decentralized technologies, and what that friction means for users.
Texas Police Used License Plate Surveillance to Investigate Abortion
Newly released court documents and police reports reveal that Texas law enforcement deliberately used Flock Safety’s massive nationwide surveillance network to investigate a woman for a self-managed abortion, contradicting the official narrative that officers were conducting a welfare check on a missing person. According to a sworn affidavit from the lead detective, the Johnson County Sheriff’s Office initiated a “death investigation” of a “non-viable fetus” after the woman’s partner reported her. Deputies collected evidence of the abortion, including the medication package and photos, and even consulted prosecutors about charging her, only to be told that Texas law does not allow criminal charges for self-managing an abortion.
The investigation involved two expansive searches of Flock Safety’s automated license plate reader (ALPR) system, which accessed over 83,000 cameras across the country. The official reason logged for these searches was “had an abortion, search for female,” a fact that undermines subsequent claims by both the Sheriff’s Office and Flock Safety that the search was merely a welfare effort to locate a missing person who might be bleeding to death. The woman was located and interviewed about the abortion a week later, at which point she reported that the same partner who had turned her in had also violently assaulted her.
The case has triggered national backlash, prompting investigations by members of Congress and leading states like Illinois and California to consider stronger laws preventing ALPR data from being used to target people seeking healthcare or engaging in other lawful activity. It exposes how pervasive surveillance networks can be weaponized against vulnerable individuals and reveals a pattern of deception by both the company and law enforcement to obscure the search’s true purpose. Privacy advocates warn that the unchecked use of ALPRs creates a dystopian reality in which a nationwide dragnet can turn anyone’s movements into evidence in a criminal investigation.