Reclaim The Net Feed
@reclaimthenetfeed

Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online. The TRUMP AMERICA AI Act, officially titled the “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act,” bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years. All of it is wrapped in a national AI framework that the bill ties to President Trump’s December Executive Order.

The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race. What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously. We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued for the content that users post. Without Section 230, platforms become legally responsible for what their users post, which means anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend. Blackburn’s bill repeals it entirely, after a two-year transition period. Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.” AI platforms would be incentivized to heavily monitor users.
Enforcement doesn’t sit only with federal regulators; state attorneys general and private actors both get standing to sue. The downstream effect on publishing is direct. Once liability protections go, platforms can no longer host content neutrally. Reporting on contentious subjects doesn’t need to be factually wrong to become a liability problem. It just needs to be frameable as “harmful.” The predictable result: platforms tighten policies, reduce reach, or quietly stop hosting the material that exposes them most. The bill requires AI developers to prevent “reasonably foreseeable harms” from their systems. “Harm,” “foreseeable,” and “contributing factor” are not defined in fixed terms. They get decided after the fact, by regulators and courts working from evolving interpretations. An AI output can be judged unlawful under standards that didn’t exist when the system produced it. For developers, the rational response is aggressive preemptive restriction: building systems that refuse more, flag more, and generate less of anything that might one day attract a lawsuit. Blackburn frames the bill as clearing up a “patchwork of state laws” through a single national standard. The agencies empowered to define and enforce that standard: the FTC, DOJ, NIST, and Department of Energy. Rather than competing state-level experiments, this creates a centralized governance structure where a handful of federal bodies set the rules for AI development across the entire country. Blackburn’s framework absorbs several existing proposals wholesale. Each one carries its own surveillance and censorship architecture. The Kids Online Safety Act (KOSA) brings algorithmic systems under federal oversight. Platforms would be required to modify personalized recommendation engines, disable infinite scrolling and autoplay, and restrict notification systems to prevent “compulsive usage.” This goes beyond content moderation. 
It regulates how information gets ranked, delivered, and amplified at the system level. The NO FAKES Act creates new liability for AI-generated replicas of individuals’ voices or likenesses, and extends that liability to platforms that knowingly host unauthorized material. Anyone can sue. Platforms that fail to comply with takedown requirements face substantial fines. The GUARD Act mandates age verification for AI chatbot makers, bans minors from access, and requires additional child safety measures. Age verification at this scale means identity verification. The data collected to confirm someone isn’t a minor doesn’t disappear after the check. The AI LEAD Act introduces federal liability standards covering defective design, failure to warn, and strict liability for AI products deemed “unreasonably dangerous,” the same framework being imported into the broader bill. The bill explicitly declares that training AI models on copyrighted works is not fair use. That single provision opens the door to litigation against virtually every major AI developer. It also establishes liability for unauthorized use of a person’s voice or likeness in AI-generated content, covering both training and deployment. NIST gets directed to develop national standards for content provenance and watermarking of AI-generated media, with requirements that AI providers allow content owners to attach provenance data to their work and prohibitions on its removal. The infrastructure this builds tracks the origin and authenticity of digital content across platforms at a technical level. Surveillance is the word for it, even when it’s being sold as authentication. Removing Section 230 and introducing broad legal exposure creates a system where platforms and AI developers live under constant litigation risk tied to content, outputs, and system behavior. That converts platform self-censorship from a choice into a survival strategy. The bill doesn’t need government agents flagging articles. 
It just needs to make the legal cost of hosting difficult reporting high enough that platforms do the math themselves. The post Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification appeared first on Reclaim The Net.
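The provenance-and-watermarking infrastructure the bill describes does not exist yet; NIST would be directed to design it. As a purely illustrative sketch of the core idea, binding provenance data to content so that tampering with either is detectable, here is a toy version. The key, field names, and format below are all assumptions invented for illustration; real schemes such as C2PA manifests use asymmetric signatures and certificate chains, not a shared secret.

```python
import hashlib
import hmac
import json

# Demo-only shared secret; a real provenance scheme would use asymmetric
# signatures so anyone can verify without being able to forge records.
SIGNING_KEY = b"demo-key-not-for-real-use"

def attach_provenance(content: bytes, creator: str) -> dict:
    """Return a provenance record covering the content hash and creator."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
    }
    # Sign a canonical serialization of the record's fields.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Reject if the content was altered or the record's fields were edited."""
    claimed = {k: v for k, v in record.items() if k != "sig"}
    if claimed.get("sha256") != hashlib.sha256(content).hexdigest():
        return False  # content no longer matches the signed hash
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("sig", ""))
```

The bill's prohibition on removing provenance data is what turns a scheme like this into tracking infrastructure: the signed record travels with the content across every platform that handles it.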

The $4 Tool That Can Unmask Anonymous Accounts, and the Habits That Give You Away
reclaimthenet.org

This post is for paid supporters of Reclaim The Net.

Russia Deploys Internet Whitelist in Moscow, Blocking Foreign Sites
reclaimthenet.org

Russia’s government has decided which websites its citizens are allowed to read. The mechanism for enforcing that decision is now operational in Moscow. Since March 6, mobile internet in the capital has been intermittently cut. Some areas are still offline. St. Petersburg residents were warned this week to expect the same. The official justification is protection against Ukrainian drone attacks, which use cell towers for navigation; the same explanation has been offered across Russia for months. What’s actually being tested is a “whitelist” system: a government-compiled list of approved platforms that remain accessible when mobile internet is shut down. Everything not on the list simply disappears. According to local press, only pre-approved Russian platforms, including social media, marketplaces, taxi and delivery apps, telecom services, and government websites, remain accessible when mobile internet is restricted. Foreign news sites, independent media, and anything outside the approved perimeter are gone. The technical backbone is deep packet inspection, or DPI. Telecom providers use it to block most internet traffic while letting approved services through. It’s the same technology that authoritarian governments have used for years to filter the internet at the infrastructure level. Russia has been rolling it out region by region since at least last summer. Moscow is just the most visible deployment yet. The whitelist includes mobile operator sites, pro-Kremlin media, government bodies, marketplaces, and the Russian social networks VKontakte, Odnoklassniki, and Max. Notably absent is anything that might let citizens read something the government hasn’t approved.
To get on the list at all, companies must meet strict requirements, including routing traffic through Russian infrastructure, hosting servers domestically, and ensuring users cannot conceal their IP addresses. The structure effectively excludes foreign platforms by design and creates a surveillance requirement for anyone who wants to remain accessible. The whitelists have been working patchily, with some of the approved websites plagued by malfunctions and accessibility problems. According to Monitor Runet, whitelists have so far been introduced in fifty-seven of Russia’s eighty-plus regions, likely because not all telecom operators have yet installed the DPI systems used to configure whitelisting. Russian authorities have not confirmed the rollout. No official from the Ministry of Digital Development, Roskomnadzor, or the telecom operators has made a public statement. A source from the Digital Development Ministry told the RBC business daily that the Moscow internet outages were a test of the ability to block access to sites not on the “white list,” saying: “This testing has been going on in the regions for some time, and it has now reached Moscow.”
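The whitelist model described above inverts ordinary blocking: instead of maintaining a list of banned sites, everything is dropped unless explicitly approved. A minimal sketch of that default-deny logic, with hypothetical domains (real enforcement runs on DPI appliances inside operator networks, typically matching TLS SNI fields and IP ranges rather than hostnames in application code):

```python
# Hypothetical approved list; the real list is government-compiled and
# reportedly includes operator sites, state media, and Russian platforms.
APPROVED = {"vk.com", "ok.ru", "max.ru", "gosuslugi.ru"}

def is_allowed(hostname: str) -> bool:
    """Allow a host only if it, or a parent domain, is on the whitelist.

    Default-deny: anything not explicitly approved is dropped.
    """
    parts = hostname.lower().rstrip(".").split(".")
    # Check the full name and every parent domain (m.vk.com -> vk.com).
    return any(".".join(parts[i:]) in APPROVED for i in range(len(parts) - 1))
```

The default-deny direction is the significant design choice: a new or unknown site is unreachable until the government acts, rather than reachable until it does.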

Google’s Android Sideloading Will Now Require 10 Steps and a 24-Hour Wait
reclaimthenet.org

Google has detailed what it calls an “advanced flow” for installing apps from unverified developers once mandatory developer registration kicks in later this year. The process requires enabling developer mode, surviving multiple scare screens, restarting your device, and then waiting a full day before you can do what you used to do freely. That’s the concession. The original plan, announced last August, was blunter: no more installing apps from developers who haven’t handed Google their legal name, address, email, phone number, and in some cases a copy of their government ID. After developers and users pushed back, Google said it would preserve a path for “power users.” This is that path.

Here’s what the path actually looks like:

1. Tap your build number seven times to enable developer mode.
2. Navigate to Settings, find Developer Options, scroll to “Allow Unverified Packages,” and flip the toggle.
3. Answer a screen that asks if you’re being coerced.
4. Enter your PIN.
5. Restart your phone.
6. Wait 24 hours.
7. Return to the menu.
8. Scroll past more warnings.
9. Choose “Allow temporarily” (seven days) or “Allow indefinitely.”
10. Confirm you understand the risks.

Now you can install apps. That’s ten steps, including a mandatory overnight delay, to do something Android users have always been able to do with a single settings toggle.

Google isn’t hiding that the friction is intentional. Matthew Forsythe, Google’s director of product management and app safety, promised a “high-friction” process earlier this year, and he has delivered. Each additional step is a calculated bet that some percentage of users will give up, decide it’s not worth it, and stick to verified apps. That’s what “high-friction” means: fewer people sideload. Forsythe published a blog post framing the whole architecture as anti-scam design.
The waiting period is there, he explains, to break the manufactured urgency scammers use when pressuring victims into installing malicious software. The restart cuts off remote access a scammer might be using. The scare screens ensure you’re acting freely. It’s difficult to argue against protecting vulnerable people from phone fraud. That’s the point. The same logic that justifies the waiting period also justifies, in principle, restricting sideloading further, requiring government ID to unlock it, or removing the option entirely. The justification doesn’t have a natural stopping point, and the friction doesn’t have a floor. What the framing also does is rebrand users who sideload. Previously, Android’s openness was a feature Google marketed. Now, anyone who wants to install an app outside the Play Store is implicitly in the category of someone who might be a scam victim or a scammer’s target: someone who needs to be slowed down, warned repeatedly, and given time to reconsider. The audience for sideloading, in Google’s new framing, is not a developer, a privacy-conscious user, or someone installing open-source software. It’s someone who might be being manipulated. Keep Android Open, the developer-led campaign opposing the verification program, looked at the advanced flow and called it what it is: not a solution. The full installation process runs through Google Play Services rather than the Android OS itself. That means Google can modify, restrict, or remove it at any time without an OS update and without any user consent. The friction level today is not the floor. Whatever access Google provides, Google can take away, tighten, or redesign without asking.

EU Admits X’s Open Data Skews Disinformation Findings While Fining Platform for Restricting Researchers
reclaimthenet.org

The EU’s own diplomatic service has published a report admitting that X makes its data more accessible to researchers than other major platforms, and then used that admission to brand X the primary channel of “foreign information manipulation and interference” against the bloc. The European External Action Service (EEAS) put this in writing. The media ran with the conclusion and buried the caveat. The fourth annual FIMI Threats report, released this month, found that “88% of instances were concentrated on the platform X. The presence of CIB networks, the ease of creation of fabricated accounts, but also more straightforward access to data, explains this concentration. Most of the major social media platforms restrict access to data that would allow for assessing the magnitude of information manipulation activities.” Read that again. The EEAS is telling you that X appears dominant in its findings partly because X lets researchers see what’s happening, while other platforms don’t. Facebook, TikTok, Instagram, YouTube: their data is locked down. So the manipulation happening there goes unmeasured. X gets flagged precisely because it’s more open. That context was largely absent from the headlines that followed. Polskie Radio ran with “Social network X is the main channel of disinformation against the EU and politicians are the biggest targets.” Plataforma Media went with “X (Twitter) main disinformation channel against EU.” Neither headline mentioned that the EU’s own analysts acknowledged a significant part of this concentration reflects X’s comparatively open data environment, not just the actual prevalence of manipulation on the platform. The timing makes this worse. Three months before the FIMI report landed, the European Commission fined X €120 million under the Digital Services Act.
One of the three violations cited was the failure to provide access to public data for researchers. X’s terms of service prohibit eligible researchers from independently accessing its public data, including through scraping. What’s more, X’s processes for researchers’ access to public data impose unnecessary barriers, effectively undermining research into several systemic risks in the European Union. So the EU fined X for restricting researcher access to data. Then the EEAS published a report crediting X’s comparatively open data access as a reason it dominates the FIMI numbers. Both things happened. Neither position was retracted, and the Commission’s fine remains on the books. The contradiction gets sharper when you look at what was happening in Germany around the same time. Two NGOs, Democracy Reporting International (DRI) and the Society for Civil Rights (GFF), sued X under the DSA for refusing to hand over data ahead of Germany’s February 2025 federal election. “Other platforms have granted us access to systematically track public debates on their platforms, but X has refused to do so,” said Michael Meyer-Resende of DRI. A Berlin court sided with the NGOs and ordered X to comply. The funding behind that lawsuit is worth noting. DRI’s largest single funder is the European Commission itself, which provided €5.7 million in 2023 alone. The same institution that fined X €120 million for DSA non-compliance is also the primary financial backer of the group that just won a court order forcing X to comply with the DSA. GFF’s funding trail has its own texture. The Mozilla Foundation granted money to GFF specifically to support “enforcement of research data access based on the DSA,” the precise legal mechanism at the center of this lawsuit. Mozilla’s revenue comes overwhelmingly from Google, via a search engine deal. DuckDuckGo also appears on GFF’s donor list. The same pattern repeated in February this year.
A Berlin court ordered X to hand over data on Hungarian election activity to researchers, again ruling in favor of DRI after X refused. Hungary votes in April. X’s performance in this area was serious enough to form part of the basis of the European Commission’s €120 million fine decision, which found that X accepts only 4.7 percent of the data access requests it receives. That’s the Commission’s own figure. Most formal research requests to X get rejected. And yet, according to the EEAS, the platform still provides “more straightforward access to data” than its competitors. Which means the others are offering even less. The platforms that accept close to zero research requests are shielded from FIMI statistics entirely. Their manipulation problems don’t show up in the numbers because researchers can’t get at the data to find them. The FIMI report covered 540 incidents detected throughout 2025. The EEAS is careful to note that identified trends should not be interpreted as exhaustive, as the analysis remains shaped by the focus and scope of monitoring efforts. That disclaimer appears in the small print. The headline number, 88% on X, does not come with it. What the EU has built here is a measurement system that rewards opacity. Platforms that restrict data access don’t show up in the statistics. They’re not transparent enough to be monitored. X, which at least allows more data to flow than the alternatives, becomes the visible target. More visibility equals more accountability equals more blame. Close your data off and disappear from the count.