In this edition:
- U.S. Government Wants to Put AI in Charge of Nuclear Power Plants
- Porn Sites Impacted By Age Verification Legislation
- U.S. Dept of War to Deploy Google's Gemini for Pentagon Use
- Transatlantic Digital Partnership: Canada & EU Forge New AI and Digital ID Framework
- U.S. Proposes Collecting Social Media History & DNA from Tourists
- It happened: U.S. Citizen Scanned by ICE's Facial Recognition
- YouTube Wants AI to Decide if Your Channel Should Exist
- Texas Will Start Testing "RobotTaxis" Without Drivers
U.S. Government Wants to Put AI in Charge of Nuclear Power Plants
At the International Atomic Energy Agency's International Symposium on Artificial Intelligence, Rian Bahran, Deputy Assistant Secretary at the U.S. Department of Energy, presented his dream vision of a future where nuclear energy powers artificial intelligence and artificial intelligence symbiotically shapes nuclear energy. The vision centers on a "$1 trillion integrated platform" leveraging AI to design new reactor materials, generate regulatory safety analyses, and manage plant operations with a target of less than five percent human intervention during normal functions. The strategy aims to create a utopian "virtuous cycle of peaceful nuclear deployment," where nuclear power fuels the massive data centers required for advanced AI, and AI accelerates the deployment of nuclear plants. Bahran said, "The goal is simple: to double the productivity and impact of American science and engineering within a decade."
Proponents, including IAEA leadership, frame the plan as an "atoms for algorithms" alliance, essential for meeting energy demands and advancing scientific productivity. IAEA Director General Rafael Grossi stated: "Two forces are reshaping humanity's horizon at an unprecedented pace: the rise of Artificial Intelligence and the global transition towards clean, reliable energy. The world's energy map is being redrawn before our eyes. We can now say with clarity: the AI revolution, through its scale and speed, was always going to choose nuclear energy as a partner. The only question was when? Today, we know that the answer is now."
The aggressive push is allegedly driven by the urgent power requirements of the AI industry: operating large-scale AI models demands immense and constant electricity, creating pressure to rapidly build a new, reliable foundation for power generation. The problem is that conventional nuclear plant construction, by contrast, is a slow, costly, and heavily regulated process, often spanning a decade or more. The proposed solution is to use AI to break the gridlock by dramatically shortening design, licensing, and construction timelines.
But the proposed minimization of human oversight raises obvious worries among some nuclear experts. Heidy Khlaaf of the AI Now Institute warns that using generative AI for nuclear safety licensing while aiming for minimal human control is "unheard of" in other safety-critical frameworks and represents a dangerous shift. Another argument is that over-reliance on AI erodes essential human expertise and the safety culture within the nuclear industry. The core apprehension is that an over-automated system could allow small, AI-generated errors to cascade into serious failures before human operators can intervene, with potentially catastrophic results.
Porn Sites Impacted by Age Verification Legislation
With Missouri now the 25th state to enact restrictive age verification legislation, a profound shift in digital privacy has solidified: half of U.S. states have laws that mandate adults surrender biometric data or personal identification to access legal pornography. This trending legislation, now active in states including Missouri, Louisiana, Texas, Florida, and Ohio, fundamentally alters the contract of anonymous online consumption. The laws, which typically apply to sites where more than one-third of content is deemed "harmful to minors," compel platforms to implement third-party verification checks, framing the move as a necessary shield for children. This hastily constructed legal structure, however, may be less a barrier for minors than a sweeping dragnet for adult user data, creating significant security and privacy repercussions.
Pornhub and its network of related sites blocked access in Missouri, replacing their homepages with a video of performer Cherie DeVille speaking on privacy risks and the dangerous effects of age verification. In a statement, the porn giant said, "While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting users." The reach of these verification checks extends far beyond adult content. In states like Wyoming, South Dakota, Mississippi, and Ohio, broadly written statutes have forced even nascent social platforms like Bluesky to implement age gates. Users there must submit to a facial scan by the third-party company Yoti or upload a photo of a credit card to prove they're over 18.
This change creates a dangerous effect, where any site hosting user-generated content could be forced to collect highly sensitive biometric or government-issued identity documents. The security risks of centralizing troves of sensitive data are already materializing. Identity verification services present a tantalizing target for hackers: in October, Discord announced a breach at a third-party vendor handling age-verification appeals, potentially exposing the government ID photos of approximately 70,000 UK users. This points to the inherent vulnerability of creating new, attractive databases of personal information. The very laws purportedly designed to protect the vulnerable are effectively constructing new honeypots of private data, ripe for exploitation.
U.S. Dept of War to Deploy Google's Gemini for Pentagon Use
The U.S. Department of War is set to unleash Google's Gemini for Government through its new GenAI.mil platform, giving Pentagon employees full access to Google's generative AI tools. The War Department claims the initiative fosters an "AI-first" workforce by leveraging generative AI to enhance efficiency and readiness across the enterprise, extending world-class, highly secure AI models to all civilian, contractor, and military personnel. The aim is to advance the objectives outlined in the White House's AI Action Plan, described as a "strategic roadmap" for securing U.S. global dominance in artificial intelligence by accelerating innovation in the private sector, building vast infrastructure, and leading international AI diplomacy and security. But the other side of the coin is a deepening reliance on AI in warfare, which could carry vast consequences for the public.
In July, President Donald Trump issued a directive to focus on attaining unparalleled tech superiority in AI, and since then the Department has deployed AI capabilities to desktops across the Pentagon and U.S. military installations worldwide, along with extensive training. According to the Department, the inaugural platform on GenAI.mil facilitates intelligent workflows, encourages innovation, and accelerates a shift toward AI-enabled operations, which it describes as positive. Emil Michael, Under Secretary of War for Research and Engineering, said, "There is no prize for second place in the global race for AI dominance." Developed by the AI Rapid Capabilities Cell within the War Department's Office of Research & Engineering, GenAI.mil aligns with the Department's focus on restoring deterrence through tech superiority.
Gemini for Government claims reliable outputs and a lower risk of AI hallucinations by combining natural language conversation with retrieval-augmented generation (RAG) that integrates real-time Google searches, giving Washington's workers an advantage but also making them heavily reliant on AI tech and all its vulnerabilities. In truth, a core risk of government reliance on platforms like Gemini is that overly confident but factually incorrect outputs (hallucinations), or responses manipulated through security flaws such as prompt injection, could directly inform official decisions, planning, or documents, introducing significant operational and legal vulnerabilities. Secretary of War Pete Hegseth stated on X, "The future of American warfare is here, and it's spelled AI." Adding to the AI race, several other companies, including xAI, OpenAI, Anthropic, and Scale AI, also signed contracts with the Pentagon in 2025.
Transatlantic Digital Partnership: Canada & EU Forge New AI & Digital ID Framework
In a significant move toward creating a wider digital dragnet, Canadian and European Union officials have formalized a new partnership linking digital identity systems, AI development, and the management of online information. This bilateral cooperation, activated at the inaugural Canada-EU Digital Partnership Council meeting in Montreal, is anchored by two new Memoranda of Understanding (MoUs) covering Digital Credentials and Artificial Intelligence. Forged between Canada's Minister of AI and Digital Innovation, Evan Solomon, and European Commission Executive Vice-President Henna Virkkunen, the pact aims to establish standards for joint tech advancement, promote innovation in strategic sectors, and foster what both governments call "trust" and "information integrity" in the digital sphere.
The partnership's primary focus is on creating interoperable digital identity systems. The agreement establishes a working forum to conduct joint experiments and test "digital identity wallets," software tools meant to allow citizens to securely store and present government-verified credentials across borders. The collaboration directly supports the EU's flagship European Digital Identity (EUDI) Wallet initiative while aligning with Canada's ongoing efforts through its Pan-Canadian Trust Framework. The technical foundation for this new system would rely on established global standards like the W3C's Verifiable Credentials and the EU's eIDAS regulation, ensuring that any future national digital wallet in Canada is built for seamless international compatibility from its inception.
Parallel to the digital identity initiative, the AI memorandum seeks to align the two regions' approaches to artificial intelligence governance and infrastructure, with goals to boost innovation and their economies via AI research, semiconductor supply chains, and quantum technology partnerships. While officially framed as a means to "accelerate AI adoption" and build "advanced AI models for the public good," it also facilitates the cross-jurisdictional flow of data, which not only raises oversight questions but could directly endanger public privacy and trample on user consent. The partnership also extends into the realm of information policy, with both sides pledging cooperation on "enhancing information integrity online," a measure officially aimed at countering foreign manipulation and generative AI risks but one that may prioritize controlled narratives over open public discourse. This reflects a global trend toward more centralized control of public digital information flow, with little mention of the public's freedoms.
U.S. Proposes Collecting Social Media History & DNA from Tourists
The United States government has proposed a significant expansion of the personal information it collects from international visitors who do not require a visa. Under the new plan announced by U.S. Customs and Border Protection (CBP), travellers from 42 allied nations in the Visa Waiver Program would be required to provide up to five years of their social media history and ten years of email addresses. This mandatory data collection would replace a previous, optional question about social media on the Electronic System for Travel Authorization (ESTA) application, aiming to enhance security screening for visitors from countries like the United Kingdom, Germany, Japan, and Australia. At the same time, Washington is also proposing to expand data collection on visa-free travellers from allied nations to include their DNA. This proposal is linked to an executive order from President Trump, signed in January 2025, aimed at enhancing screening to prevent potential national security threats.
The scope of data collection extends far beyond social media, encompassing an expansive range of personal and biometric data. Authorities plan, "when feasible," to add what they term "high-value data fields," including telephone numbers from the past five years, email addresses from the last decade, extensive family background, and metadata from photographs. Most critically, the plan includes the collection of biometrics such as fingerprints, iris scans, and DNA, which would mark a monumental shift in border screening practices. This potential DNA collection would amount to the most extreme biometric data regime for short-term travellers in the world, placing the U.S. far beyond existing global norms.
The CBP states the move is to comply with an executive order for stricter screening to prevent national security threats, though the notice did not specify what officials would be looking for in the social media accounts. The proposal clearly holds significant privacy and diplomatic implications that go beyond simply identifying foreign visitors. Unlike fingerprints, DNA can reveal sensitive health and genetic information, potentially placing the relatives of travelers under indirect surveillance. The plan also raises international legal questions, as many VWP member countries have strong data protection laws, particularly concerning genetic information. The public has 60 days to submit formal comments on these proposed changes before they are finalized by the U.S. Department of Homeland Security.
It happened: U.S. Citizen Scanned by ICE's Facial Recognition
Exemplifying the domestic use of new immigration enforcement tech, a U.S. citizen named Jesus Gutiérrez was detained and had his face scanned by Immigration and Customs Enforcement (ICE) agents on a Chicago street after leaving the gym. Agents stopped him as he was walking and asked where he was going and whether he had his ID on him, which he did not. They handcuffed him, put him in a gray Cadillac SUV with no license plates, and then used a mobile app called Mobile Fortify, which runs facial scans against a vast government database of over 200 million images, to identify him. The app, which ICE and Customs and Border Protection (CBP) agents use on their work phones, is designed to identify individuals who may be subject to deportation by pulling data from FBI records and other sensitive databases. Gutiérrez's account, corroborated by his U.S. passport, demonstrates that the tech is being used not just at borders but against people inside the United States.
This incident is one of many, and it raises significant scrutiny related to due process, racial profiling, and, most vitally, the accuracy of this technology. An internal Department of Homeland Security document acknowledges the app can be used on U.S. citizens, and congressional testimony indicates ICE may prioritize a biometric match from the app over traditional proof of citizenship, including a birth certificate. Critics, including the ACLU, argue the practice, often based on stops in specific neighborhoods or targeting people of certain ethnicities (a tactic referenced in a recent Supreme Court opinion that essentially allows it), is a profound violation of rights. They warn that the technology is susceptible to glitches and misidentifications, potentially leading to wrongful detentions of American citizens.
Federal authorities have offered limited defense of the program, with DHS Assistant Secretary Tricia McLaughlin stating, "DHS is not going to confirm or deny law enforcement capabilities or methods." However, they admit agents must complete training and "consider all circumstances" before making a final status determination. For citizens like Gutiérrez, who described the experience as akin to being kidnapped, the encounter represents a real and frightening expansion of surveillance, and a trampling of rights, where simply walking down the street can result in a digital identity check against a secretive government database, a check you cannot refuse.
YouTube Wants AI to Decide if Your Channel Should Exist
In a public vision for the platform's future, YouTube CEO Neal Mohan champions artificial intelligence as a dual force for creation and moderation. In an interview with Time magazine he argued that AI will empower a new class of creators who lack traditional production skills while simultaneously enhancing the platform's ability to identify and remove violative content at scale. His plan positions YouTube not just as a host for content but as an active arbiter, suggesting, "there will be good content and bad content, and it will be up to YouTube and our investment in technology and the algorithms to bring that to the fore." Consequently, the platform is set to shift toward a model where AI both generates the material that fills the site and serves as the primary judge of what remains.
But such a vision starkly contrasts with numerous documented cases where automated systems have erroneously punished creators, calling the technology's precision into question. Several channels, including Enderman, Scrachit Gaming, and 4096, were terminated after being mistakenly linked by YouTube's systems to other, violative channels, actions the company later reversed following public backlash. In other bewildering instances, a creator's video was age-restricted for "graphic content" for simply laughing, and tutorials on installing Windows 11 were removed for being "harmful or dangerous." YouTube's public statements on these incidents have been inconsistent at best, at times outright denying that automated enforcement played a role, even though its official policy documents admit to using both automation and humans for these tasks.
The fallout from these mistakes, or forms of censorship, has ignited deep distrust within the creator community, who complain that the appeal process for unjust decisions is slow and opaque, and that public outcry on social media is often the most effective strategy. Prominent creators like MoistCr1TiKaL, who has denounced the CEO's statements, have condemned the AI tools as a "scourge," arguing that automated systems should never act as "judge, jury, and executioner" over human expression. YouTube's push for full automation in content policing, critics argue, represents a fundamental shift away from its original ethos of "giving everyone a voice," instead creating an environment where expression must be "algorithmically compliant." The real divide lies between YouTube's corporate love affair with the idea of an AI-managed ecosystem and the real-world impact of systems that currently fail to distinguish between harmless laughter and graphic violence, or worse, purposely distort them to manage a narrative.
Texas Will Start Testing "RobotTaxis" Without Drivers
Tesla has taken a risky and arguably dangerous step toward its long-promised commercial robotaxi service by removing all human safety monitors from its test fleet in Austin, Texas. The move, announced by CEO Elon Musk on X after videos of empty Model Y vehicles circulated online, supposedly marks the transition from supervised testing to fully autonomous operation. The company, which started offering monitored rides six months ago, has been incrementally scaling back human oversight, first moving monitors from the passenger to the driver's seat before eliminating them entirely. Tesla's cryptic social media post, "Slowly, then all at once," suggests a phased but accelerating push toward offering public rides in the driverless vehicles.
The company's milestone arrives amid heightened scrutiny due to safety concerns and Elon Musk's historically ambitious but often revised and overstated timelines. The small Austin test fleet of 25-30 vehicles has been involved in at least seven documented crashes since testing began, though details remain sparse because Tesla heavily redacts its reports to federal safety regulators. Musk's projection back in July that the fleet would serve "half of the population of the U.S." by year's end has been significantly tempered, with a revised November target of merely doubling the Austin fleet to about 60 cars. Choosing Texas for this key testing phase is strategic, as its lack of specific regulations for driverless vehicles presents fewer bureaucratic hurdles than states like California.
The advancement of Tesla's proprietary robotaxi service also magnifies challenges surrounding its older promises about vehicle hardware and owner participation. Musk has long discussed a future where owners could add their personal Teslas to a shared robotaxi network, a vision complicated by the company's own admission that millions of existing cars lack the necessary hardware for full autonomy. As Tesla moves this testing forward, its ability to safely manage a driverless fleet at scale while navigating past overstatements will be critical to its viability against established competitors. More importantly, its reputation for burying traces of major errors may not be so easy to sweep under the carpet this time.


