GUEST BLOG: Talk Liberation – The New Rules: Who Governs AI, Redefining Privacy, and Our Digital Future


In this edition:

  • Dangers of FBI’s Subpoena of (Non-Profit) Internet Archive
  • The Next Legal Frontier: Who Pays When an AI Lies?
  • Tiny Town Takes on AI Giant for US Nuclear Program
  • DHS Tests Powerful New Surveillance Tool at College Football Games
  • EU Proposal Poses Largest Cuts to Privacy Rights in Years
  • “Protecting Children” or Eroding Privacy? Labour Minister Backs Mass Message Scanning
  • How Congress Was Wining and Dining with AI Lobbyists This Summer
  • The AI Boom Needs Land, Power, and Workers. Is Your Town Next?


Dangers of FBI’s Subpoena of (Non-Profit) Internet Archive

The FBI is actively attempting to uncover the identity of the anonymous owner behind the web archiving service Archive.today. A federal subpoena was issued to the site’s domain registrar, Tucows, demanding extensive customer information, including names, addresses, and billing details. The order also seeks telephone records, payment information, and network session data, stating it is part of a federal criminal investigation without specifying a particular crime.

Archive.today is a popular but anonymous service that allows users to take permanent snapshots of web pages, functioning similarly to the Wayback Machine but with a noted lack of formal rules. A primary use of the service is to bypass paywalls on news sites, a practice that has drawn significant criticism from the media industry for enabling access to copyrighted content without payment. For over a decade, the site has operated without major interference, financed by donations and managed anonymously, despite occasional domain-related issues.


This legal action has prompted the long-quiet official Archive.today X account to post the subpoena alongside the word “Canary,” a traditional warning signal of imminent danger. The release of the subpoena by the site itself indicates the operators perceive a serious threat to their anonymity and the service’s continued existence. This case echoes other legal actions against archiving tools, following the recent takedown of the similar paywall-bypass service 12ft.io.

The Next Legal Frontier: Who Pays When an AI Lies?

Conservative activist Robby Starbuck filed a $15 million defamation lawsuit against Google, alleging its Bard AI chatbot falsely linked him to sexual assault allegations and to white nationalist Richard Spencer. The legal action mirrors Starbuck’s previous successful strategy against Meta, where a similar claim over AI hallucinations resulted in a confidential settlement and an advisory role for him at the company. The lawsuit emerges amid a related incident in which Senator Marsha Blackburn accused Google’s open-source Gemma AI model of generating false and defamatory claims, including a fabricated sexual misconduct accusation against her. In response to the senator’s complaint, Google promptly removed the Gemma model from its public-facing AI Studio platform, though it remains available to developers.

The case highlights significant and unresolved legal challenges surrounding AI-generated defamation, as no U.S. court has yet awarded damages in an AI chatbot defamation case. A notable precedent was set when a court ruled in favor of OpenAI in a similar suit, finding that the plaintiff failed to prove “actual malice,” a key requirement for defamation cases involving public figures. Google has pushed back against Starbuck’s claims, filing a motion to dismiss the suit. Senator Blackburn, however, has characterized these errors not as harmless glitches but as evidence of a “consistent pattern of bias against conservative figures” within Google’s AI systems.

While the legal precedent currently favors tech giants, Starbuck’s success in securing an influential advisory role at Meta suggests that the strategic goal may be corporate influence rather than a courtroom victory. Google’s removal of Gemma demonstrates the immediate operational impact that political scrutiny can have on the deployment of AI systems, even experimental ones. Ultimately, these cases underscore the ongoing tension between the rapid development of AI, the persistent problem of model “hallucinations,” and the growing demands for accuracy and fairness from both lawmakers and the public.

Tiny Town Takes on AI Giant for US Nuclear Program

The city of Ypsilanti, Michigan, is actively opposing the construction of a proposed $1.2 billion data center, a collaborative project between the University of Michigan and the Los Alamos National Laboratory (LANL), part of the National Nuclear Security Administration. Residents and city council members are raising ethical objections, citing the facility’s intended service to a nuclear weapons laboratory nearly 1,500 miles away. For many, like Councilmember Amber Fellows, whose Japanese family was impacted by nuclear weapons, the project represents a troubling modernization of nuclear arms rather than “scientific progress.” The community’s opposition culminated in an official city council resolution demanding a permanent halt to the project, aligning the city with the international “Mayors for Peace” initiative, an organization of cities opposed to nuclear weapons, founded by the former mayor of Hiroshima.

Beyond moral concerns, residents fear significant environmental and practical consequences, pointing to documented cases where data centers have led to skyrocketing local energy bills, strained power grids, and noise pollution. Further, the project’s vague purpose has fueled suspicion: the University of Michigan calls it a “high-performance computing facility” that won’t “manufacture” weapons but refuses to confirm whether its research will support nuclear weapons science. The lack of transparency is exacerbated by the fact that 84% of LANL’s budget is allocated specifically to nuclear weapons, with only $40 million set aside for “science,” casting doubt on the project’s promised focus on clean energy and other research.

The University of Michigan has been evasive, but the reality is that LANL is knee-deep in AI, having partnered with OpenAI and, more recently, NVIDIA to build two new supercomputers named “Mission” and “Vision.” Ypsilanti officials are now preparing a multi-pronged fight, vowing to pressure township boards to deny permits and to challenge the state legislature and university trustees who approved public funding. The community is determined to confront institutional power, refusing on moral grounds to host any project tied to nuclear weapons.

DHS Tests Powerful New Surveillance Tool at College Football Games

The Department of Homeland Security is deploying its expansive Homeland Security Information Network (HSIN) in an unlikely place: major college football games, transforming them into hubs of surveillance. Public records show that the platform, which is also used to surveil protests, operates as a centralized system that integrates live video from CCTV, drones, and body cameras, subjecting crowds to constant monitoring. The system also collects and mines vast amounts of personal data and has been used to facilitate facial recognition searches via companies like Clearview AI. Effectively, DHS has built and refined HSIN’s powerful surveillance capabilities on the back of college campuses and their large events.

Documents show the tech was used to monitor pro-Palestinian protests at Ohio State University, where campus camera feeds were live-streamed directly to DHS command centers. Internal reports also confirm years of use at football games at schools including Ohio State, Georgia Tech, and the University of Central Florida, where drone and security camera feeds were integrated in real time. An Event Action Plan for an Ole Miss football game detailed how 11 law enforcement agencies, including a military rapid-response team, were coordinated through the HSIN platform.

The system also raises significant concerns about “mission creep,” where tools built for specific event security could evolve into a permanent apparatus for broader state surveillance. While DHS promotes HSIN as a secure communications tool for public safety, its evolution into a live-video surveillance and data-mining hub has occurred with little public knowledge or oversight. The secrecy is compounded by a lack of transparency from both DHS and the universities regarding the full scope of data collection and camera access. This has allowed a powerful post-9/11 surveillance tool to become a normalized yet almost entirely invisible feature of U.S. campus life.

EU Proposal Poses Largest Cuts to Privacy Rights in Years

The European Commission is advancing a radical overhaul of the General Data Protection Regulation (GDPR) through a “fast-track” omnibus procedure intended for minor adjustments, a move that could severely undermine core data protection principles. Leaked drafts reveal proposals that would narrow the definition of “personal data,” effectively creating loopholes for companies that use pseudonymized data and exempting vast sectors like online tracking from regulation. The reform would also grant AI companies like Google and OpenAI carte blanche to use Europeans’ personal data for training their models, essentially favoring AI development over individual privacy rights. These changes amount to a major rollback of privacy protections a decade after the GDPR’s adoption, despite most EU member states having explicitly asked not to reopen the law.

The proposed overhaul appears driven by the AI race and external pressure, particularly from Germany, which submitted a non-paper demanding significant changes that are reflected in the leaked draft. Core data subject rights would be critically weakened: the rights to access, correction, and deletion would be limited to “data protection purposes” only, allowing companies to reject requests from employees in disputes or from journalists as “abusive.” Protections for sensitive data, such as health information or sexual orientation, would be reduced to apply only when such data is “directly revealed,” leaving individuals vulnerable to having this information deduced from other data points. The draft also dangerously expands remote access to personal devices, proposing up to ten legal bases for companies to pull data from smartphones and PCs, potentially without the user’s full consent.

This aggressive deregulation attempt, described as “death by a thousand cuts,” is packaged under the guise of helping small businesses but primarily benefits large tech corporations while creating significant legal uncertainty. The rushed process, which gave internal units only five days to review a 180-page draft, produced proposals that critics say violate the European Charter of Fundamental Rights and overturn established Court of Justice case law. Critics also argue that such a poorly drafted and extreme proposal will not only harm users but will likely face lengthy delays in the European Parliament and Council. The final decision on whether this draft becomes the European Commission’s official position will be revealed when the “Digital Omnibus” is officially presented on November 19th.


“Protecting Children” or Eroding Privacy? Labour Minister Backs Mass Message Scanning

A campaign led by the UK’s Internet Watch Foundation (IWF), and endorsed by politicians like Jess Phillips and the National Crime Agency, is pushing for the implementation of “upload prevention” systems. The invasive tech would involve client-side scanning, in which every file or image on a user’s device is checked against a database of known sexual abuse material before it is sent and encrypted. The IWF frames this as a necessary child safety measure, arguing that tech companies must not use encryption as a “shield to avoid responsibility.” However, the system effectively ends private communications by turning every personal device into a checkpoint: lawful content could be falsely flagged, and the scanning infrastructure could easily be expanded to target other content. A rough sketch of the mechanism appears below.
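To make the mechanism concrete, here is a minimal, hypothetical sketch of the hash-matching pattern that client-side scanning relies on. Real deployments use perceptual hashes (PhotoDNA-style) that also catch visually similar images, not the cryptographic hash used as a stand-in here, and every name and value below is illustrative rather than any vendor’s actual implementation.

    import hashlib

    # Hypothetical blocklist of known-content digests. Real systems ship
    # perceptual hashes that match near-duplicates; SHA-256 only matches
    # byte-identical files and is used here purely for illustration.
    BLOCKLIST = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def scan_before_send(payload: bytes) -> bool:
        """Return True if the payload may be sent, False if it is blocked."""
        return hashlib.sha256(payload).hexdigest() not in BLOCKLIST

    def encrypt(payload: bytes) -> bytes:
        # Placeholder for a real end-to-end encryption step; in the
        # proposed design, encryption happens only AFTER the scan passes.
        return bytes(b ^ 0x5A for b in payload)

    def send_message(payload: bytes) -> None:
        if not scan_before_send(payload):
            raise PermissionError("upload blocked by client-side scan")
        ciphertext = encrypt(payload)
        print(f"sending {len(ciphertext)} encrypted bytes")

    send_message(b"an ordinary, lawful photo")

The ordering is the entire controversy: the scan inspects plaintext on the device before encryption, so end-to-end encryption no longer guarantees that only the recipient sees the content, and nothing technical stops the opaque blocklist from growing to cover new categories.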

This initiative is empowered by the UK’s Online Safety Act, which gives regulators like Ofcom the authority to pressure tech companies into compliance. The government’s stance is that the “design choices of platforms,” such as implementing end-to-end encryption, cannot be an excuse for failing to combat crimes against children. This approach mirrors the controversial “Chat Control” proposal previously debated in the European Union, which faced massive backlash for violating fundamental privacy rights and was ultimately stalled. The UK is now proceeding with a similar logic, recasting widespread surveillance as a public safety necessity and challenging the principle of secure encryption.

Such proposals are highly divisive and are opposed by legal experts, technologists, and digital rights groups who argue they fundamentally undermine privacy. Once the technical capability for client-side scanning is installed, the potential for mission creep is significant, as nothing would stop governments from expanding the categories of “harmful” content they monitor. The UK’s approach to online regulation is becoming increasingly invasive: the government recently tried to compel Apple to install a back door into its encrypted iCloud backups under the Investigatory Powers Act.

How Congress Was Wining and Dining with AI Lobbyists This Summer

During the August 2024 recess, numerous congressional staffers accepted sponsored trips to destinations like Aspen and Silicon Valley for conferences and company tours hosted by major tech firms. These excursions, detailed in congressional travel disclosures, provided lobbyists and executives from companies like Google, Meta, Palantir, and SpaceX direct access to influential government aides. The agendas for these events were heavily skewed toward the industry’s perspective, typically lacking any balancing viewpoints from consumer advocates or government watchdog organizations. This private mingling occurred as the tech industry’s federal lobbying spending, led by Meta, Amazon, and Alphabet, is projected to reach record-breaking levels this year.

The tech giants are simultaneously deploying immense financial resources to influence policy across multiple fronts, extending beyond direct lobbying to shape the political landscape. Meta has established a new super PAC, the American Technology Excellence Project, which is backed by tens of millions of dollars to combat state-level regulations it deems unfavorable. In a similar move, venture firm Andreessen Horowitz (a16z) and OpenAI have committed to a $100 million super PAC aimed at preventing the enactment of stricter AI regulations. This coordinated spending blitz demonstrates a comprehensive strategy to sway both federal and state policymakers through financial pressure.

These activities collectively highlight a powerful influence operation where informal, sponsored access for congressional staff is reinforced by massive political spending. The goal for these companies is to present their arguments on artificial intelligence and other technologies directly to the aides who help draft legislation, without significant countervailing voices. Ultimately, these efforts are designed to fend off potential laws that could impose stricter rules on the rapidly evolving AI industry and other tech sectors.

The AI Boom Needs Land, Power, and Workers. Is Your Town Next?

The AI data center boom is absorbing a historic level of capital, with tech giants like Microsoft, Alphabet, Meta, and Amazon projected to spend a collective $370 billion in 2025, reshaping the U.S. economy. This massive investment is fueling a stock market surge, as AI-related stocks have accounted for 75 percent of S&P 500 returns since ChatGPT’s launch. However, the spending is also supported by risky accounting, as companies are depreciating their expensive Nvidia chips over six years, a timeframe that may be unrealistic given the rapid pace of AI advancement. Some firms like Meta are even turning to special purpose vehicles and corporate debt to fund their massive projects without overburdening their balance sheets.
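As a rough, hypothetical illustration of why the depreciation schedule matters: under straight-line depreciation, a chip purchase is expensed in equal slices over its assumed useful life, so stretching that assumption directly shrinks the annual expense hitting the income statement. The dollar figures below are invented for the example.

    # Straight-line depreciation: cost is expensed in equal annual slices
    # over the assumed useful life of the asset.
    def annual_depreciation(cost: float, useful_life_years: int) -> float:
        return cost / useful_life_years

    fleet_cost = 10_000_000_000  # hypothetical $10B GPU purchase

    for life_years in (3, 6):
        expense = annual_depreciation(fleet_cost, life_years)
        print(f"{life_years}-year schedule: ${expense / 1e9:.2f}B expensed per year")

    # 3-year schedule: $3.33B expensed per year
    # 6-year schedule: $1.67B expensed per year

Doubling the assumed life halves the yearly expense and flatters reported profits; the risk is that GPUs made obsolete in three years are still sitting on the books in six.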

This unprecedented expansion is placing immense strain on the U.S. energy grid, as data centers housing tens of thousands of power-intensive GPUs demand colossal amounts of electricity. Analysts warn that the U.S. is not building grid capacity fast enough, making it likely that newly built data centers will lack the power to operate and that electricity rates will soar for local communities. The scale of the challenge is highlighted by the fact that China added nearly nine times more renewable energy capacity than the U.S. last year, while also subsidizing its own tech firms. This energy crunch has prompted OpenAI to formally warn the White House that power generation limits threaten America’s competitive edge in AI.

The current labor market shows signs of strain, with tech companies announcing thousands of layoffs even as they report record profits, suggesting a strategic reallocation of resources. While there is evidence that AI is automating some entry-level roles, the more immediate impact on jobs comes from capital being funneled into data center construction instead of sectors such as manufacturing, which has been shedding jobs. The concentration of investment in AI infrastructure means less funding is available for other areas of the economy, creating a lopsided and vulnerable economic landscape.

