Reclaim The Net Feed

@reclaimthenetfeed

How to Actually Disappear on Proton Mail
reclaimthenet.org

This post is available to paid supporters only; the full text is not included in the feed.

Microsoft Copilot Update Hijacks Default Browser Links
reclaimthenet.org

Microsoft’s latest Copilot update does something the company frames as helpful but functions as a takeover: every link you click inside the app now opens in a Copilot side panel, powered by Edge’s rendering engine, rather than the browser you chose and set as your default. Microsoft describes the intent as keeping content “in a sidepane next to your conversation instead of a separate browser window, so you don’t lose context.” The company hasn’t said whether any of this is opt-in.

When the OS vendor controls the platform and routes your links through its own browser engine by default, privacy concerns pile up. For as long as anyone can remember, clicking a link has meant one thing: your default browser opens, with your settings, your extensions, your saved passwords, your chosen security configuration. Microsoft is now overriding that choice without asking. Users don’t get their browser. They get Microsoft’s rendering surface, wrapped in an AI assistant they may never have asked to involve in their browsing.

The side panel is only one piece of the update. With user permission, Copilot will also have access to the context of tabs opened in a conversation, allowing it to answer questions, summarize across tabs, or help draft text based on what’s on screen. Tabs are saved with the conversation so users can return to them later. Users who choose to enable it can also sync passwords and form data.

Microsoft stated: “As part of this update, some features like Podcasts and Study and Learn mode from Copilot.com are getting added, while others may be pulled back while we iterate on the experience; we will add priority features back in before the updated app is generally available.”

The rollout is currently limited to Windows Insider channels, Microsoft’s public pre-release testing program, in version 146.0.3856.39 and above. The company calls it context preservation. What’s actually being preserved is your attention inside Microsoft’s ecosystem.
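For readers who want to see exactly what is being routed around, here is a minimal sketch, assuming a Windows machine and Python’s standard-library winreg module, that reads the per-user handler Windows has recorded for http links. That recorded choice is the default browser the article describes the Copilot side panel bypassing.

import winreg  # standard library; Windows only

def default_http_handler() -> str:
    # Windows stores the user's chosen handler for http links under this key.
    key_path = r"Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        prog_id, _ = winreg.QueryValueEx(key, "ProgId")
    return prog_id  # e.g. "FirefoxURL-..." or "ChromeHTML"

if __name__ == "__main__":
    print(default_http_handler())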

Linux Hardware CEO Speaks Out Against State-Level OS Age Verification Laws
reclaimthenet.org

Carl Richell has watched his child look up jellyfish lifespans, confidently correct a skeptical parent, and learn about Turritopsis dohrnii, an apparently immortal species. That’s the internet working as it should. A new wave of state legislation, Richell argues, threatens to close that door.

The System76 CEO last week pushed back against age verification laws in New York, California, and Colorado that would impose identity requirements at the operating system level. Richell runs System76, which builds Linux hardware and develops the Pop!_OS distribution, making these laws directly relevant to his business.

“We are accustomed to adding operating system features to comply with laws,” Richell wrote. “We are a part of this world, and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional.”

Colorado’s Senate Bill 26-051 and California’s Assembly Bill 1043 work the same way: operating systems must report age brackets to app stores and websites. Account creation becomes an adult activity. Anyone under 18, under this framework, isn’t supposed to set up their own computer account.

New York’s Senate Bill S8102A goes further. It covers any internet-connected device with an app ecosystem, including exercise bikes, smartwatches, and cars. Adults would need to prove their age to use them. Self-reporting isn’t permitted. The bill hands the Attorney General authority to define what verification methods are acceptable, which in many cases would mean handing personal information to a third party just to turn on a computer. “Privacy disappears,” Richell warned.

The New York bill was plainly written by people with little technical knowledge, and it creates another problem its drafters probably didn’t intend. Because Linux is freely distributed and anyone can install it, the bill’s language could technically make the person who downloads a Linux distribution the “device manufacturer,” responsible for providing compliant software. Richell notes that this kind of provision is rarely enforced in practice, but it shows how laws drafted for the closed ecosystems of iOS and Android fall apart when applied to open computing.

Richell recounts a scene from a recent trip to Mexico where his under-13 child, watching an adult’s request get refused by ChatGPT, solved the problem in seconds through a workaround that required no technical sophistication, just creativity. The story illustrates what Richell sees as an axiom: kids find ways around restrictions. A parent who sets up a child account and applies restrictions, Richell writes, hasn’t actually locked anything down. “The child can install a virtual machine, create an account on the virtual machine, and set the age to 18 or over.” Or reinstall the OS entirely and say nothing. Laws that push children toward workarounds also push them toward habits that circumvent oversight rather than build judgment.

Richell’s concern with New York’s bill is about what mandatory identity verification at the operating system level does to everyone. The computer, in his framing, is foundational technology, the one that accelerates everything else. Most of System76’s employees, he notes, installed operating systems and wrote software as children. Restricting young people’s ability to experiment with computers restricts what they can eventually contribute.

Centralized platforms that control user activity can themselves be controlled, and that control can travel upward to governments, regulators, or anyone else with leverage over the platform provider. Linux exists, in Richell’s telling, as a counter to exactly this dynamic. For California and Colorado, he sees laws that sacrifice effectiveness; for New York, one that sacrifices liberty. The education-based alternative he advocates isn’t a policy position so much as a cultural one: teach children to navigate a complicated internet rather than delay their access to it until the habits are already formed elsewhere.
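Neither the Colorado nor the California text publishes an API, so the following is a purely hypothetical Python sketch, with every name invented, of the kind of age-bracket signal an operating system might hand to app stores and websites under these bills. It also makes Richell’s bypass point concrete: the bracket reflects nothing more than what account setup was told.

from dataclasses import dataclass
from enum import Enum

class AgeBracket(Enum):
    # Hypothetical brackets; the bills describe brackets, not these exact cut-offs.
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"

@dataclass
class OsAgeSignal:
    # Hypothetical payload an OS might expose to an app store or website.
    account_id: str
    bracket: AgeBracket

def signal_for_account(declared_birth_year: int, current_year: int) -> OsAgeSignal:
    # The bracket derives from whatever age account setup recorded, which is
    # Richell's point: a fresh account in a virtual machine can simply declare 18+.
    age = current_year - declared_birth_year
    if age < 13:
        bracket = AgeBracket.UNDER_13
    elif age < 16:
        bracket = AgeBracket.TEEN_13_15
    elif age < 18:
        bracket = AgeBracket.TEEN_16_17
    else:
        bracket = AgeBracket.ADULT
    return OsAgeSignal(account_id="local-account", bracket=bracket)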

How Grok’s Football Roasts Put X in the Crosshairs of Britain’s Online Censorship Law
reclaimthenet.org

Few subjects in Britain carry as much emotional weight as football. Club loyalty runs deep, tragedies remain painfully close to the surface, and rivalries often cross the line between banter and cruelty. That volatile mix resurfaced this week when Grok, the AI chatbot on X, generated what officials described as “vulgar roasts” after users explicitly prompted it to produce offensive material. UK authorities reacted quickly, discussing the Online Safety Act, Britain’s new censorship law, and raising the possibility of serious financial penalties for X. Under the law, platforms can face fines reaching up to ten percent of global revenue if they fail to address harmful content.

The material dredged up some of the most painful chapters in English football history. It mocked the Hillsborough disaster, where 97 Liverpool supporters were crushed to death at an FA Cup semi-final in Sheffield after police failures led to fatal overcrowding in a standing pen. It also referenced the Munich air disaster, which killed 23 people, including eight Manchester United players, when the team’s aircraft crashed during takeoff in icy conditions. Grok further alluded to the recent death of Diogo Jota, who died in a car accident in Spain in July 2025 at the age of 28 while playing for Liverpool F.C.

Sky News also reported that Grok produced “highly offensive AI-generated replies with profanities about Islam and Hinduism – disparaging the religions with racist vitriol.” The chatbot did not spare political figures either, offering a roast of British Prime Minister Keir Starmer. Posts also targeted supporters of Rangers F.C. in connection with the Ibrox Stadium disaster, when 66 fans were killed in a crush on a stairway as crowds exited a match against Celtic.

Following complaints from Liverpool, Manchester United, and Sky News, X removed most of the material. The government, however, did not wait to see whether the platform’s existing moderation processes would take effect before reaching for its strongest enforcement powers.

Before turning to the official response, it is worth being precise about what actually occurred. Users approached Grok and directly requested offensive content. One prompt asked it to “do a vulgar post about Liverpool fc (sic) especially their fans and don’t forget about Hillsborough and heysel (sic), don’t hold back.” Grok complied with the instruction. Official anger has been directed primarily at the AI system and the platform hosting it. The individuals who entered those prompts are largely absent from the version of events presented by authorities. That distinction matters because the Online Safety Act’s framework treats the platform as responsible for material that users deliberately solicited, rather than focusing on the person who asked an AI system to mock real-world deaths. As a result, X faces the possibility of fines reaching 10 percent of its global revenue.

The episode reflects a broader change in how responsibility for speech is being understood online. Offensive expression has always been possible to produce. Someone can type something inflammatory into Microsoft Word and print it, yet no regulator treats the software itself as culpable. They can write it in an email, spray-paint it on a wall, or shout it from the stands during a match. The tool has never been the central issue. The deciding factor has always been the individual choosing to create and circulate the content.

Chatbots have quietly scrambled the political calculus, flicking a switch in the minds of lawmakers who now see something more ominous than what is actually happening. When a user types an offensive prompt and an AI returns a polished block of text, the packaging alters the perception. It looks authored by the platform, stamped with institutional authority, rather than conjured at the request of one mischievous human tapping a keyboard. That cosmetic shift has handed governments an opportunity they have eyed for years: content controls wired directly into software, stopping speech before it ever flickers onto a screen.

The argument gaining ground is simple. AI systems, regulators say, should refuse to generate “offensive” material altogether, no matter the context, intent, or the identity of the person making the request. That marks a profound expansion in where censorship operates. Historically, speech was dealt with after the fact. Authorities could prosecute someone who said something illegal or demand removal once harmful material surfaced. The emerging model moves the barrier much earlier. Restraint is built into the tools themselves. The AI is trained, tuned, and instructed not to produce certain categories of expression at all. Words are filtered before they exist, quietly intercepted in the circuitry, leaving no public trace of what was blocked and offering users no meaningful path to challenge the refusal. The printing presses must refuse to print the insulting material.

Major technology firms such as Microsoft, Google, OpenAI, and xAI now operate under mounting pressure to ensure their systems decline prompts that might trigger regulatory trouble in jurisdictions governed by laws like the Online Safety Act. What gets filtered is shaped by a blend of corporate risk aversion and government expectation, a partnership forged in caution. Neither side conducts this process in the open. Neither answers directly to voters when lines are drawn and categories of speech quietly disappear.

The Department for Science, Innovation and Technology told Sky News the posts were “sickening and irresponsible,” adding that they “go against British values and decency.” DSIT said AI services, including chatbots, “must prevent illegal content including hatred and abusive material on their services” and vowed to “continue to act decisively where it’s deemed that AI services are not doing enough to ensure safe user experiences.” Ofcom followed with its own warning, saying tech companies must “take appropriate steps to reduce the risk of UK users encountering” illegal content and “take it down quickly when they become aware of it.” Companies that fail to comply, Ofcom said, “can expect to face enforcement action.”

The phrase “safe user experiences” is the problem with this regulatory philosophy. It sounds gentle, almost comforting, yet it grants the state and its designated watchdog the authority to decide what safety means in practice. Platforms that fail to deliver this officially approved environment face penalties severe enough to threaten their existence.
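The article names no specific implementation, but the pre-generation model it criticizes can be illustrated with a minimal, hypothetical Python sketch: the prompt is screened against blocked categories and refused before any text is generated, so nothing is produced, nothing is shown to the user about what was blocked, and there is nothing to appeal.

# Hypothetical sketch of filtering before generation; it does not reflect how
# Grok or any real system is implemented.
BLOCKED_TOPICS = {"hillsborough", "munich air disaster"}  # illustrative category list

def generate(prompt: str) -> str:
    # Stand-in for an actual model call.
    return f"(model output for: {prompt})"

def respond(prompt: str) -> str:
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # The refusal happens before generation: the user sees only this reply,
        # with no record of what was blocked and no path to challenge it.
        return "I can't help with that."
    return generate(prompt)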

Turkey Blocks 41 Social Media Accounts Over Iran War Posts
reclaimthenet.org

Turkey’s government blocked 41 social media accounts on X, Facebook, and Instagram last Friday, deleted content from 75 more, and launched criminal proceedings against account holders, all on the grounds that they spread what officials called “disinformation and provocative content.” The crackdown followed the start of attacks on Iran. Presidential Communications Director Burhanettin Duran framed the deletions as a national security response, saying the targeted accounts had been “systematically sharing unverified content aimed at creating fear, panic and uncertainty in society.”

Who decided the content was disinformation? The government. Who gets to define “provocative content”? The government. Who determines what threatens “public order, social peace, and our national security”? Also the government, the same government that ordered the blocks.

The operation involved the Turkish Presidency’s Communications Directorate, the cybercrime department of the Security Directorate General, the Information and Communication Technologies Authority, and the chief public prosecutors’ offices. A coordinated state apparatus, mobilized to silence social media accounts during a regional conflict.

Duran used the phrase “home front” to describe what the accounts were allegedly targeting, a term Turkish officials reach for during security crises to invoke domestic unity. The practical effect of invoking it here: posting unverified content about a foreign war becomes a threat to national cohesion, prosecutable under criminal law.

“Especially since the attacks against Iran began, some social media accounts have been systematically sharing unverified content aimed at creating fear, panic, and uncertainty in society. The relevant institutions of our state have been closely monitoring this process from the very beginning, and the necessary decisive steps have been taken against attempts at digital manipulation targeting public order, social peace, and our national security,” Duran said on NSosyal, a Turkish Mastodon-based platform.

The chilling effect is the point. Turkey did more than move against 116 accounts and their content. It announced, loudly and officially, that the state is watching, that prosecutors are involved, and that sharing unverified content about a nearby war can result in criminal charges. Anyone in Turkey considering posting about the conflict now knows the parameters.

Duran said the Turkish state “sees the digital sphere as an inseparable part of national security.” What that means, translated out of the official language: the government claims authority over what can be said online during a crisis, and it gets to define what counts as a threat.

The accounts were on X, Facebook, and Instagram. None of those platforms issued public statements about the removals.