In this edition:
- Anthropic's Plan to Stop an AI From Building a Nuke
- People Are Using Meta's Ray-Ban Glasses to Film and Harass Workers
- Inside the DHS Plan to Track Every US Visitor
- Australian Regulator Sues Microsoft for Misleading Software Subscriptions
- Inside Project Nimbus: Google and Amazon's Israel Contract
- Amazon Restructures with AI-Driven Layoffs After Internet Outage
- Nvidia, BlackRock-Led Team to Buy Aligned Data Centers in $40B Deal
- One Million People Weekly Use ChatGPT To Discuss Suicide
Anthropic's Plan to Stop an AI From Building a Nuke
Anthropic has announced a groundbreaking partnership with the Department of Energy and the National Nuclear Security Administration (NNSA) to prevent its AI chatbot, Claude, from ever assisting in the development of nuclear weapons. The joint effort involves deploying a version of Claude within a secure, Top Secret cloud environment provided by Amazon Web Services, where government experts can systematically test its capabilities. Through a rigorous "red-teaming" process of probing for weaknesses, they identified potential nuclear risks and co-developed a nuclear classifier: a sophisticated filter, built on a list of nuclear risk indicators provided by the NNSA, designed to block conversations involving specific, sensitive nuclear topics. The list is controlled but not classified, which allows other companies to implement it as well.
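As a rough illustration of how an indicator-based filter works, the sketch below scores a message against a weighted list of flagged phrases and blocks it past a threshold. The phrases, weights, threshold, and function names are all invented for illustration; the actual NNSA indicator list is controlled, and Anthropic's production classifier is presumably a learned model rather than simple keyword matching.

```python
# Hypothetical sketch of an indicator-based conversation filter.
# Everything here (phrases, weights, threshold) is a placeholder; the real
# NNSA risk-indicator list is controlled and not reproduced anywhere public.

RISK_INDICATORS = {
    "enrichment cascade": 3,
    "weapons-grade": 2,
    "implosion lens": 3,
}

BLOCK_THRESHOLD = 3  # assumed score at which a conversation is blocked

def risk_score(message: str) -> int:
    """Sum the weights of all flagged phrases found in the message."""
    lowered = message.lower()
    return sum(w for phrase, w in RISK_INDICATORS.items() if phrase in lowered)

def should_block(message: str) -> bool:
    """Block once the accumulated risk score reaches the threshold."""
    return risk_score(message) >= BLOCK_THRESHOLD

print(should_block("What is uranium used for?"))        # False
print(should_block("Explain an enrichment cascade."))   # True
```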
The problem is that, in order to filter conversations that could teach someone "how to build a nuke," the AI must first know enough about the subject itself, so there is significant skepticism about both the immediate threat and the solution's efficacy. Some acknowledge the need to prepare for future AI capabilities, but others argue that current models are not yet a major proliferation concern. Heidy Khlaaf of the AI Now Institute argues that if Claude was never trained on sensitive nuclear data, the tests are inconclusive and the classifier is essentially a solution in search of a problem. She contends this approach relies on the unsubstantiated assumption that AI models will spontaneously develop dangerous nuclear knowledge.
The initiative also raises broader concerns about the growing alliance between private AI corporations and national security agencies. Khlaaf warns that such partnerships risk giving largely unregulated companies access to incredibly sensitive government data under the guise of safety research. Anthropic received a $200 million Department of Defense contract earlier this year, in addition to the company's support for Trump's AI action plan and other AI-related initiatives.
People Are Using Meta's Ray-Ban Glasses to Film and Harass Workers
Meta's Ray-Ban smart glasses are now being used for a new genre of content, in which men secretly film themselves entering massage parlors and soliciting workers for sex acts. These videos, often tagged with the glasses' branding, are uploaded to Instagram and TikTok, where they amass millions of views. The practice endangers the women, who may be targeted by law enforcement or extremists, or face immigration consequences, all while the creators profit by linking to pay-per-view sites. The result is an entire supply chain for privacy-violating content built by Meta through its hardware and its social networks, where such content is algorithmically rewarded.
In response, Meta removed the flagged accounts, but it initially asked for examples, indicating it could not find the content, and emphasized that its glasses have an LED light to indicate recording and that users are responsible for following the law. These safeguards are easily undermined, however: for a $60 modification (widely discussed on Reddit), users can disable the privacy light, and the videos show evidence that subjects are unaware they are being filmed. Creators also operate multiple backup accounts, demonstrating a practiced readiness to evade content moderation.
The real-world harm is severe. Advocacy groups like SWAN Vancouver, which supports migrant women in the sex trade, report that these violations force workers into more hidden, dangerous locations to avoid being filmed, increasing their risk of assault and exploitation. The organization has issued "Abuser Alerts" to warn communities about men using the glasses to record in parlors. But the content niche is so lucrative that creators have pivoted to it after testing the glasses with other forms of harassment, showing how the technology is actively enabling new forms of exploitation for online notoriety and profit.
Inside the DHS Plan to Track Every US Visitor
The Department of Homeland Security (DHS) has enacted a transformative new rule that massively expands biometric surveillance at U.S. borders, moving toward full digital identity tracking. Effective December 26, 2025, Customs and Border Protection (CBP) is authorized to photograph every non-citizen entering and exiting the country by air, land, or sea, fulfilling a "biometric entry-exit" system first mandated by Congress in the 1990s and aggressively pursued after 9/11. While DHS frames this as operational modernization that speeds up travel and combats document fraud, privacy advocates warn it normalizes a vast, interconnected web of facial-recognition systems years in the making, fundamentally escalating border surveillance and eroding individual privacy.
While the regulation explicitly targets non-citizens, its implementation casts a wide net that inevitably ensnares U.S. citizens as well. Surveillance cameras deployed at airports and border crossings cannot distinguish between citizens and non-citizens in real time, so every traveler is biometrically scanned by default. Citizens can opt out by presenting their passports for manual inspection, and their photos are deleted within twelve hours once nationality is confirmed, but the initial capture is unavoidable. For non-citizens, the implications are even more profound: their photographs and associated data can be retained in the government's central IDENT database for up to seventy-five years, a period that arguably transforms a border security tool into a lifelong tracking system for millions of residents and visa holders.
CBP's own testing has revealed facial recognition error rates of up to three percent, which could translate to thousands of travelers being misidentified daily. Further, the voluntary opt-out for citizens may be functionally eroded by the pressure for efficiency: facial recognition lanes process passengers in seconds, while manual verification lines are likely to become slower, longer, and less staffed. This dynamic could penalize those wishing to protect their biometric data, effectively turning an optional process into a de facto mandatory one. With DHS able to deploy cameras at any designated point, from boarding gates to private marinas, the foundation is set for a persistent biometric surveillance network with a massive impact on civil liberties.
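A quick back-of-envelope calculation shows how a small error rate compounds at border scale. The daily traveler volume below is an assumed round figure for illustration (CBP has described processing on the order of a million travelers a day), not a number from the rule itself.

```python
# Back-of-envelope: what a 3% error rate means at border volume.
error_rate = 0.03              # up to 3%, per CBP's own testing
travelers_per_day = 1_000_000  # assumed round figure for illustration

misidentified_per_day = error_rate * travelers_per_day
print(f"~{misidentified_per_day:,.0f} potential misidentifications per day")
# -> ~30,000 potential misidentifications per day
```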
Australian Regulator Sues Microsoft for Misleading Software Subscriptions
The Australian Competition and Consumer Commission (ACCC) has launched a lawsuit against Microsoft, alleging the tech giant misled approximately 2.7 million Australians over their Microsoft 365 subscriptions. The regulator claims that, following the integration of its AI assistant Copilot, Microsoft told subscribers with auto-renewal enabled that they must either accept a significant price increase or cancel their subscription entirely. The ACCC alleges Microsoft deliberately hid a third option: retaining the original "Classic" plan without Copilot at the existing, lower price. This cheaper alternative was revealed only if a user began the cancellation process, a step the ACCC says many would be reluctant to take.
ACCC Chair Gina Cass-Gottlieb condemned the conduct, stating, "We see this as affecting a very significant number of Australian consumers, as being the action by a very major corporation," adding that the regulator would seek a penalty showing that non-compliance with Australian consumer law "is not just a cost of doing business." The alleged misconduct centers on Microsoft's communications, including two emails and a blog post, which the ACCC says were false or misleading because they omitted the Classic plan. For consumers, the price hikes were substantial: the personal plan's annual subscription rose by 45 percent and the family plan's by 29 percent, pushing many long-time customers, who felt Microsoft did not value their loyalty, to consider alternatives.
The case reflects increasing global scrutiny of disingenuous opt-out processes in digital subscriptions, with the Consumer Policy Research Centre noting that 75 percent of Australians with subscriptions find cancellation difficult. The ACCC is seeking penalties, injunctions, and consumer redress for the economic harm suffered by those who were automatically renewed into the more expensive plan. Microsoft responded that consumer trust and transparency are top priorities, that it is reviewing the claim, and that it remains committed to working constructively with the ACCC.
Inside Project Nimbus: Google and Amazon's Israel Contract
A joint investigation based on leaked documents has uncovered that, in 2021, Google and Amazon agreed to extraordinary terms in their $1.2 billion "Project Nimbus" contract with Israel, effectively disregarding their own ethical guidelines and legal obligations. The tech giants consented to a clause prohibiting them from restricting Israel's use of their cloud services, even when that use violated their standard terms of service against causing "serious harm." Further, the contract compelled the companies to secretly notify Israel if a foreign court ordered them to hand over its data, a mechanism designed to circumvent legal gag orders. This "winking mechanism," referred to in the contract as "special compensation," required the companies to subordinate their own policies to the demands of the Israeli government, ensuring uninterrupted access to advanced cloud and AI tools.
To execute the secret notification scheme, the contract established a barely disguised signal: Google and Amazon would send coded payments to Israel within 24 hours of handing over data, with the amount, somewhere between 1,000 and 9,999 shekels, tied to the dialing code of the country demanding the data. For example, a data handover to U.S. authorities under a gag order, where the dialing code is +1, would trigger a payment of 1,000 shekels, secretly alerting Israel to the action. Legal experts described this as a highly unusual and clever workaround that likely violates the spirit, if not the letter, of the law by evading court-ordered confidentiality.
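One plausible reading of the reported examples (+1 maps to 1,000 shekels, and every amount fits in four digits) is that the dialing code supplies the leading digits of the payment. The reporting does not spell out the exact rule, so the padding logic below is an inference, not the contract's actual formula.

```python
# Speculative reconstruction of the reported coded-payment scheme.
# Assumption: the 1-3 digit dialing code forms the leading digits of a
# four-digit shekel amount, padded with zeros (+1 -> 1,000 shekels).

def coded_payment(dialing_code: int) -> int:
    """Map a country dialing code to a four-digit shekel amount."""
    if not 1 <= dialing_code <= 999:
        raise ValueError("dialing codes are 1-3 digits")
    return int(str(dialing_code).ljust(4, "0"))

print(coded_payment(1))    # 1000 -- United States, per the report's example
print(coded_payment(44))   # 4400 -- United Kingdom, inferred
print(coded_payment(972))  # 9720 -- Israel, inferred
```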
The companies have not disclosed whether this covert signaling system was ever used, but the investigation concluded that these stringent controls were implemented to shield Israel's operations from external pressure, ensuring its military and government agencies could use the powerful technology without restriction. This proved critical, as the cloud infrastructure has since provided "significant operational effectiveness" in Israel's military onslaught in Gaza. While both Google and Amazon have issued statements denying any illegal activity, the leaked contract reveals a conscious effort to prioritize a lucrative government contract over corporate ethical policies and potential legal entanglements, securing Israel's cloud capabilities against any future legal or ethical challenges.
Amazon Restructures with AI-Driven Layoffs After Internet Outage
Just days after a major AWS outage crippled large portions of the internet, Amazon initiated the most significant mass layoff in its history, eliminating over 30,000 corporate roles in a brutal cut representing 10% of its corporate workforce. The layoffs, first reported by Reuters, are described as a corrective measure for over-hiring during the pandemic and a response to the increasing use of AI. The cuts notably targeted Amazon Web Services (AWS), the same division whose recent outage paralyzed swathes of the internet, including video games like Fortnite and Roblox, messaging apps like Signal and Snapchat, and financial platforms like Venmo, Robinhood, and Chime.
Around 14,000 employees began receiving the devastating news via text messages and emails early Tuesday morning. The termination-by-text method sparked widespread condemnation, with many describing it as cold and inhuman, but Amazon defended the move, saying it was meant to prevent employees from arriving at work only to find their access cards deactivated. The explanation did little to quell criticism from former workers and the public; on anonymous forums, affected employees shared their distress, while Amazon's HR head framed the layoffs as a necessary shift toward transformative technologies like artificial intelligence.
While Amazon promises severance and job support, for many the move signals a stark prioritization of technological efficiency over empathy or the value of human work. Amazon's mass layoffs are more than corporate restructuring: they stand as evidence of the consequences of increasing automation and AI adoption, and they signal a deeply uncertain future for much of the workforce. This single round of cuts is a drop in the bucket for Amazon's total workforce of 1.5 million, most of them warehouse workers, and thus likely the first of more to come, while the outage itself highlights how heavily the internet relies on Amazon's network, a dangerous pattern among tech giants and AI.
Nvidia, BlackRock-Led Team to Buy Aligned Data Centers in $40B Deal
A powerful new consortium named the Artificial Intelligence Infrastructure Partnership (AIP), which includes tech titans Nvidia and Microsoft alongside investment firm BlackRock, is making its first major move. The group is acquiring Aligned Data Centers in a massive deal worth approximately $40 billion. The acquisition is a direct effort to rapidly expand the critical infrastructure needed to support the booming artificial intelligence sector, and the transaction is expected to be finalized in the first half of 2026.
The deal is part of an aggressive trend of major partnerships and investments flooding the AI industry with capital and resources. Just last month, OpenAI and Nvidia announced a separate $100 billion partnership that will add at least 10 gigawatts of data center computing power. A recent agreement also revealed that AMD will supply chips to OpenAI, which also has an option to buy a 10% stake in the semiconductor company. These moves are primarily focused on securing the electricity and infrastructure needed to power next-generation AI; AIP, chaired by BlackRock CEO Larry Fink, has an ambitious goal of deploying up to $100 billion to accelerate AI innovation.
The acquisition provides the consortium with a vast and strategic portfolio of data center assets: Aligned owns 50 campuses with over 5 gigawatts of operational and planned capacity across the U.S. and Latin America. AI's role as a dominant force reshaping the global economy also points to the potential consequences of growing monopolies and their ever-increasing power. Larry Fink said in a statement, "AIP is positioned to meet the growing demand for the infrastructure required as AI continues to reshape the global economy. This partnership is bringing together leading companies and mobilizing private capital to accelerate AI innovation and drive global economic growth and productivity."
One Million People Weekly Use ChatGPT To Discuss Suicide
A startling new report from OpenAI has quantified the profound role its AI is playing in users' personal lives, revealing that over one million people each week turn to ChatGPT with conversations that include explicit indicators of suicidal intent. This figure, representing 0.15 percent of its weekly active users, exposes a massive, unplanned social experiment unfolding in real time. For the first time in history, immense numbers of people are regularly confiding their most acute psychological struggles not to a human but to a machine intelligence system. The scale at which people now rely on AI for crisis support creates a need for regulation, and a monumental responsibility that the company is still learning to navigate.
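Taken together, OpenAI's two published figures imply the scale of the platform itself; a quick check:

```python
# Sanity check: the two published figures imply ChatGPT's weekly user base.
users_flagged_weekly = 1_000_000  # conversations with suicidal-intent indicators
share_of_actives = 0.0015         # 0.15 percent of weekly active users

implied_weekly_actives = users_flagged_weekly / share_of_actives
print(f"~{implied_weekly_actives:,.0f} weekly active users")
# -> ~666,666,667, i.e. roughly 700 million people using ChatGPT each week
```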
In direct response, OpenAI has launched a major initiative to retrain its models for these high-stakes interactions, an effort that involves consulting more than 170 mental health experts to teach the AI to better recognize distress, de-escalate tense conversations, and consistently guide people toward professional resources. Some have dismissed this safety push as a public relations move, but it may also be an imperative, underscored by an ongoing lawsuit from the parents of 16-year-old Adam Raine, who died by suicide after confiding in the chatbot. Matt and Maria Raine allege an OpenAI chatbot coached their son toward suicide by teaching him to subvert safety features, providing technical instructions, and even drafting a suicide note, promising a "beautiful suicide."
This is the first wrongful-death suit against OpenAI concerning a teenager, but researchers have previously found that chatbots can lead some users down delusional rabbit holes, reinforcing potentially dangerous beliefs through love-bombing, or excessively complimenting users rather than giving honest advice. Despite the legal and ethical pressure, OpenAI continues to present ChatGPT as a versatile tool with endless money-making possibilities to fuel its growth. The company has announced it will soon permit verified adult users to have erotic conversations with ChatGPT.


