In this edition:
- Meta Reverses Course – Adds Facial Recognition to Smart Glasses Despite Past Promises
- Telegram Shuts Down Internet’s Largest Black Market in Major Crackdown
- Digital Resurrection: Deceased Man Testifies in Court Via AI Avatar in Legal First
- France Proposes Dystopian Cash Ban in Step Toward Total Financial Surveillance
- Meta Faces Lawsuit Over Plans to Train AI on EU User Data Without Explicit Consent
- Apple Agrees to Landmark $95 Million Settlement After Confirmed Siri Privacy Violation
- Music Legend Elton John Blasts UK Over AI Copyright Law Changes That Threaten Artists’ Rights
- Tech Giant Microsoft Admits To Providing AI Services to Israeli Military During Gaza War
Meta is quietly developing facial recognition technology for its Ray-Ban Meta smart glasses, despite previous assurances that it would not add such features due to surveillance and privacy concerns. Critics argue the capability could enable intrusive tracking, and Meta has yet to provide a clear public explanation for the shift. The move aligns with the company’s broader investments in augmented reality (AR) and wearable tech.
The feature, referred to as “on-device face recognition,” would allow wearers to identify people simply by looking at them through the glasses. Unlike cloud-based facial recognition systems, the recognition would run on the glasses themselves, meaning the biometric data wouldn’t necessarily be sent to Meta’s servers. Even so, privacy advocates warn that the tech could enable real-time, covert surveillance, raising concerns about stalking, harassment, and misuse by law enforcement. Unlike stationary cameras, smart glasses could allow unprecedented tracking in public spaces, prompting calls for stricter regulation. Internal documents suggest Meta has been working on this technology for some time, though the company has not publicly confirmed it.
Back in 2021, Meta shut down its decade-old facial recognition system on Facebook after public backlash and a $650 million settlement in a class-action lawsuit over improper biometric data collection. This checkered history raises questions about the company’s transparency on privacy and surveillance issues.
Telegram has dismantled the largest black market in internet history, dealing a major blow to the underground sale of illegal goods and services. The platform, which has long been scrutinized for its encrypted channels and lax content moderation, removed the sprawling marketplace that facilitated transactions involving crypto scams, drugs, stolen data, counterfeit documents and cybercrime tools.
For years, Huione Guarantee, one of the internet’s largest black markets, openly operated on Telegram, enabling billions of dollars in crypto scams and money laundering. Now, after pressure from researchers and a sweeping ban by Telegram, the marketplace has been forced to shut down. Also known as Haowang Guarantee, the platform facilitated illicit financial services, primarily for East Asian crypto fraudsters. The company stated that all of its NFTs (blockchain-based non-fungible tokens), channels, and groups had been blocked.
While the takedown represents a significant victory for anti-cybercrime efforts, such operations will likely resurface under different names in attempts to relaunch. Over time, Telegram will need to prove whether it’s serious about curbing this type of online criminal activity.
In a groundbreaking legal proceeding, an artificial intelligence-generated avatar of a deceased man delivered testimony in court, marking one of the first known uses of posthumous digital recreation in a judicial setting. The case involved a contract dispute in which the deceased’s account was crucial, and his AI replica, trained on his voice, mannerisms, and personal records, answered questions based on his prior statements.
While proponents argue this type of technology preserves critical evidence that would otherwise be lost, critics raise ethical concerns about consent, accuracy, and potential manipulation. Legal experts debate whether such testimony should be admissible, as it blurs the line between sworn statements and algorithmic reconstruction. The case could set a precedent for how courts handle AI-generated evidence, forcing legal systems worldwide to confront the implications of digital resurrection. Sounds biblically sci-fi!
French Justice Minister Éric Dupond-Moretti sparked controversy by calling for the complete abolition of cash transactions. The Minister argues this would help combat organized crime, tax evasion, and terrorism financing, which he says heavily rely on cash transactions. He conceded that organized crime may turn to cryptocurrencies as an alternative, but reasoned this would still be an improvement, adding that digital payments – including cryptocurrencies – are much easier to trace.
Under new European Council rules taking effect in 2026, crypto platforms will be required to identify and report all transaction parties to tax authorities, effectively eliminating anonymous crypto transfers across EU member states. Still, economists warn that a cash ban would disproportionately affect vulnerable populations such as the elderly, undocumented people, and working-class employees who are paid in and rely on cash. The proposal faces significant opposition from the public, especially from civil liberties groups who view it as a dangerous leap into financial surveillance.
According to some advocates, this follows a broader EU trend of restricting cash, with Italy recently lowering its cash transaction limit to €2,000 and Greece to €500. In France, cash payments over €1,000 to a professional entity are prohibited and punishable by a fine of up to 5% of the amount paid, unless the person has no bank account or other means of digital payment. For private transactions, the limit is set at €1,500, and amounts exceeding this require a formal written contract containing both parties’ full legal names and contact information, as stipulated by Ministry of Economy and Finance regulations.
Meta faces potential legal challenges in the EU over its plans to train artificial intelligence models using publicly shared content from Facebook and Instagram users. The company argues that since the data is already publicly available, it falls under “legitimate interest” for AI development and does not require explicit user consent. However, this approach conflicts with the EU’s own privacy regulations, particularly the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), which privacy advocates argue require explicit opt-in consent for this kind of personal data processing, even when the data is publicly posted. Privacy regulators, including Ireland’s Data Protection Commission (DPC), have previously penalized Meta for unlawful data handling, signaling that this new initiative may invite similar scrutiny.
Privacy advocacy groups, such as NOYB (None of Your Business) led by activist Max Schrems, have already threatened legal action if Meta proceeds without proper consent. Critics argue that Meta’s current opt-out mechanism does not comply with GDPR’s requirement for affirmative user permission. The European Data Protection Board (EDPB) could also intervene, potentially imposing fines or blocking the practice altogether. Meta’s defense is that excluding EU user data would result in inferior AI performance for European users. If forced to comply with consent rules, however, the company would have to either redesign its data collection approach or exclude EU data entirely, which could fragment its AI capabilities across regions.
This battle echoes a broader global debate over whether publicly posted content should be freely used for AI training or whether individuals retain control over how their data is utilized, even when publicly shared online. If regulators rule against Meta, it could set a precedent limiting how tech companies leverage public social media data for AI training under GDPR. The outcome would not only impact Meta’s AI development but also shape future policies governing the intersection of big data, machine learning, and user privacy in Europe.
Apple has reached a massive $95 million settlement to resolve a class-action lawsuit alleging its Siri virtual assistant illegally recorded and shared users’ private conversations without consent. The settlement, recently approved by a California federal judge, stems from claims that between 2011 and 2023, Siri routinely activated and recorded confidential discussions including medical information, business dealings, and intimate personal moments. These were reportedly accidental recordings, which were then sent to third-party contractors for quality-control purposes without user knowledge.
The lawsuit proved particularly damaging for Apple as internal documents revealed the company was well aware of Siri’s problematic “false triggers” that caused unintended recordings, yet failed to properly disclose this data collection in its privacy policies. While Apple admitted no wrongdoing in the settlement, the payout represents one of the largest privacy-related settlements in tech history. Eligible class members include U.S. residents who owned Apple devices with Siri capabilities during the affected period, with individual payouts estimated at between $50 and $150, depending on claim volume.
Privacy advocates hail the settlement as a turning point for voice-assistant accountability, noting it establishes important precedents about informed consent for always-on devices. The case also prompted Apple to implement more transparent privacy controls, including allowing users to opt out of Siri audio recording reviews. However, the penalty represents just a fraction of Apple’s profits and may not fundamentally change the economics of data collection. The resolution comes amid growing scrutiny of voice-assistant privacy practices across the tech industry, with similar lawsuits pending against Amazon’s Alexa and Google Assistant.
Sir Elton John launched a scathing attack on the UK government’s proposed changes to copyright law that would favor artificial intelligence companies, calling policymakers “absolute losers” for undermining creators’ rights. The proposed amendments would allow AI developers to freely use copyrighted music, literature, and other artistic works for training their systems, without compensating rights holders or requiring their permission. In a passionate statement, John warned the move would devastate the creative industries by enabling tech firms to profit from artists’ work while offering nothing in return, comparing it to “legalized theft” of intellectual property.
The UK government aims to position the country as a global AI hub, with officials arguing that looser copyright restrictions attract tech investment. But John and other artists contend this approach sacrifices creators’ livelihoods at the altar of technological progress. They emphasize that AI systems already replicate musical styles and voices without consent, and warn that the proposed laws would exacerbate these abuses by removing legal protections. The outrage marks growing resistance from across the arts, with musicians, authors, and filmmakers increasingly protesting AI threats to creative professions.
The backlash is forcing the government to reconsider its position, with reports suggesting ministers may water down the proposals following pressure from the creative community. John’s outspoken criticism carries particular weight given his five-decade career as one of Britain’s most successful cultural exports. As the debate between AI developers and creators intensifies, Britain’s actions could influence similar copyright battles worldwide as governments wrestle with regulating rapidly evolving AI technology.
Microsoft has publicly acknowledged that it provided advanced artificial intelligence services to the Israeli military during its ongoing war in Gaza. The tech giant revealed details of its AI contracts through official filings, confirming it supplied cloud computing infrastructure and AI-powered data analysis tools to the Israel Defense Forces (IDF). These technologies reportedly assisted Israel’s military operations by processing vast amounts of surveillance data, identifying targets, and optimizing logistics, capabilities that human rights groups argue may have contributed to mass civilian casualties in the densely populated Gaza Strip. The disclosure comes amid mounting pressure on big tech companies to clarify their involvement in global conflicts, particularly as AI systems become increasingly integrated into modern warfare.
Microsoft maintains its technologies were used in compliance with international law and primarily for defensive purposes, but some question whether adequate safeguards were implemented to prevent misuse. The revelation has sparked protests from employee groups and activist organizations who accuse Microsoft of enabling military operations that have resulted in significant Palestinian civilian deaths.
Broader ethical dilemmas are mounting in the tech industry as militaries worldwide rapidly adopt AI capabilities, often with limited transparency or public oversight. The disclosure may prompt calls for stricter regulations governing the sale of AI technologies to military entities, particularly in active conflict zones where the risk of civilian harm is high. Microsoft joins other Silicon Valley giants facing scrutiny over the nature of their technologies and their moral responsibilities in an era of AI-powered warfare.
That concludes this edition of Your Worldwide INTERNET REPORT!
Remember to SUBSCRIBE and spread the word about this unique news service.
This issue of Your Worldwide INTERNET REPORT was produced by the Talk Liberation Digital Media Team, and is brought to you by Panquake.com – We Don’t Hope, We Build!
© Talk Liberation Limited. The original content of this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license. Please attribute copies of this work to “Talk Liberation” or Talkliberation.com. Some of the work(s) that this program incorporates may be separately licensed. For further information or additional permissions, contact licensing@talkliberation.com