Reclaim The Net Feed

@reclaimthenetfeed

House Committee Passes Child “Safety” Bills That Push National Age Verification Surveillance

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

A House committee voted Thursday to advance three child safety bills, bundling them toward the floor in a package that passed. The votes were close: 28-24 for the KIDS Act, 26-23 for the App Store Accountability Act. Sammy’s Law also cleared the committee. The Children and Teens’ Online Privacy Protection Act (COPPA 2.0) never got a House vote, but the Senate Commerce Committee passed its version unanimously.

The KIDS Act, sponsored by Rep. Brett Guthrie (R-KY), rolls several laws into one. It includes a version of the Kids Online Safety Act (KOSA) stripped of the “duty of care” provision that defined the Senate’s bipartisan take on the bill. That provision required platforms to actively mitigate risks to minors. The House version drops it.

Several Democrats voted against the package, though largely for the wrong reasons. Their concern was that the bills would block states from passing stronger online protections for young users. KOSA has been introduced in various forms for years without ever passing.

Rep. Alexandria Ocasio-Cortez (D-NY) argued the KIDS Act uses child safety as cover for something else entirely. “What Big Tech lobbyists want is a national surveillance program where they can harvest the private and personal data of every American with zero actual protections for people,” she said.

Ocasio-Cortez is right in the sense that the broader project effectively creates a surveillance network where users of each platform would be de-anonymized on sign-up and their usage tied to a real-world ID. However, it is largely governments that are pushing for this; some Big Tech players are actually against it.

Ocasio-Cortez called out Discord specifically, which delayed age verification plans after user backlash over privacy and security concerns, and over its partnership with third-party verification platform Persona.

“[Discord] tried to roll out this idea of a data verification or an age verification technique, but they did it in this way that was also very emblematic of what we’re against here today,” Ocasio-Cortez said. “What’s more shocking is that Discord made the decision to move forward with this after they had been hacked, and at least 70,000 users had their data stolen.”

Discord acknowledged last year that a number of government ID images were exposed in a hack affecting a third-party customer service provider it has since dropped.

The package includes age verification requirements for app store downloads, purchases, and access to adult content online. The KIDS Act also restricts platform design features that “result in compulsive usage” and requires AI chatbot makers to tell minors they’re talking to a machine, not a person. The App Store Accountability Act adds age gating at the app store level, aiming to stop minors from downloading age-restricted content before they ever reach a platform. Sammy’s Law would require large social media companies to let parents manage their child’s account and interactions through a third-party tool.

Age verification at the app store level has become a proxy war between competing tech interests. Meta supports it because it shifts the compliance burden away from its platforms. Apple and Google are lobbying against it. The same fight has played out in Utah and Louisiana.

The child safety framing is the excuse here. These bills create a substantial data collection infrastructure in the name of protecting minors. Age verification at scale means platforms, app stores, or third-party services collecting identity documents from millions of users, storing them, and eventually losing them to breaches. Discord’s experience is the predictable outcome of building these systems. The question of who holds that data, under what legal protections, and what happens when it leaks is less prominent in the debate than the bill sponsors would prefer.

The bills also contain vague mandates to prevent design features that cause “compulsive usage,” leaving regulators broad authority to define what counts. Infinite scroll. Notification badges. Recommendation algorithms. All of these could fit the definition, applied selectively or expansively depending on who’s doing the applying.

The Privacy Phone Is Going Mainstream: The Deal That Could Change De-Googled Phones Forever

This post is for paid supporters of Reclaim The Net.

UAE Threatens Residents With Prison for Sharing Unverified News During Missile Strikes

The UAE government has decided this is the moment to threaten its 10 million residents with prison for sharing the wrong information online. On February 28, as Iranian ballistic missiles and drones struck the country for the first time, the UAE Public Prosecution announced that publishing or circulating “rumors and information from unknown sources through social media platforms or any other technological means” is a criminal offense under federal law. The warning extends to anyone who reposts such content, not just people who create it.

“Information is a responsibility, and spreading rumors is a crime,” the Public Prosecution said, directing residents to get their information “solely from official and accredited sources.” In a country under active missile bombardment, the government has just told its residents that the only permitted source of information about what’s happening to them is the government.

Federal Decree-Law No. 34 of 2021 on Countering Rumors and Cybercrimes was already on the books; this isn’t a new emergency measure rushed through in a crisis. It carries a minimum sentence of one year in prison and a fine of at least AED 100,000 (around $27,000) for sharing false, misleading, or unverified information online. Share something the authorities determine “incited public opinion” against the UAE government, or share it during a declared crisis, and the minimum jumps to two years and AED 200,000.

The law has been used before. Authorities in Ras Al Khaimah referred seven people to prosecution for social media content deemed to have “undermined community security and stirred public opinion.” The posts that triggered that prosecution were described as inaccurate. The standard for what counts as inaccurate, undermining, or rumor-spreading is set by the same authorities enforcing the law.

The Public Prosecution’s statement draws no distinction between deliberate disinformation and honest mistakes. It draws no distinction between fabricated propaganda and a resident sharing a video of fires near their home to warn family abroad. The legal standard is “unverified information from unknown sources.” Everything that isn’t an official press release potentially qualifies.

The US Embassy in Abu Dhabi even told American citizens in a March 1 alert that publishing or circulating “rumors, false news, or news from unknown sources” could expose them to prosecution under UAE law. That warning carries real weight when the official position on what’s happening is controlled exclusively by the government prosecuting the war.

The UAE is home to one of the largest expatriate populations anywhere in the world. Most residents have family elsewhere. Sharing updates during a crisis isn’t rumor-mongering. It’s what people do. Under this law, doing what people do can get you imprisoned.

The Public Prosecution doesn’t need to actively prosecute thousands of residents for the law to work. The threat is enough. A resident filming debris near their building thinks twice before posting it. A worker injured at an airport thinks twice before texting a video to a friend abroad who might share it. A journalist covering the story thinks twice about including details that haven’t been confirmed by official sources first.

The message to the UAE’s millions of residents is, as the original reporting notes, unambiguous: in a crisis, what you retweet can be as legally consequential as what you write. What it doesn’t mention is that the government gets to decide, after the fact, what counts as a rumor.

TikTok Says Privacy Makes Users Less Safe

Over the past five years, the largest social platforms settled on a clear position about private messaging: lock it down. Facebook turned on end-to-end encryption. Instagram and Messenger did the same. X joined the club. Yes, metadata is still an issue and the protocols used matter, but generally speaking, the move was toward more privacy for actual message content.

TikTok looked at that trend and made a different choice. Then it scheduled a briefing in London with the BBC to explain the reasoning. The explanation was safety.

TikTok belongs to ByteDance, a Chinese technology company that operates under Beijing’s jurisdiction. China maintains strict limits on end-to-end encryption inside its borders. TikTok, after its own review of the issue, reached the same policy outcome for its messaging system. Alan Woodward, a cybersecurity professor at Surrey University, raised that point directly. The company’s “Chinese influence might be behind the decision,” he said, adding that end-to-end encryption is “largely banned in China.” TikTok declined to engage with that suggestion, of course. The remark hung in the air. It’s worth adding that TikTok’s US operation has given no indication that it is moving toward private messaging standards either.

End-to-end encryption is simple in theory. Only the people in a conversation can read the messages. The platform running the service cannot access the content. Governments cannot request it. Engineers inside the company cannot view it.

TikTok’s system operates differently. Messages on the platform remain readable to the company. Employees can access them under defined circumstances. Law enforcement agencies can request them through legal channels. TikTok argues that readable messages allow the company to identify harmful activity.

The debate turns on a basic technical fact. “We can read your messages to catch predators” and “we can read your messages” describe the same system. The second statement applies to everyone on the platform. Every message becomes part of a structure that remains accessible inside the company. Employees can make mistakes, systems can face breaches, and legal demands differ across countries. Each factor creates another path through which private communication may be accessed.
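The distinction can be sketched as a toy model in Python. The XOR “cipher” here is purely illustrative, not real cryptography, and every name in it is invented for the sketch; the only point it makes is where the key lives in each design.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad-style cipher: NOT real cryptography, illustration only.
    return bytes(a ^ b for a, b in zip(data, key))

# --- End-to-end model: the key exists only on the two devices. ---
shared_key = secrets.token_bytes(32)   # agreed between the endpoints
plaintext = b"meet at noon"
ciphertext = xor(plaintext, shared_key)

# The platform relays only the ciphertext. Without shared_key it cannot
# recover the content, and neither can anyone who breaches or compels it.
server_sees = ciphertext
assert server_sees != plaintext
assert xor(server_sees, shared_key) == plaintext  # only the endpoints decrypt

# --- Server-readable model (TikTok-style): encrypted in transit at most,
# but stored where the company can read it. Employees, breaches, and
# legal demands all become paths to the content.
stored_on_server = plaintext
assert stored_on_server == plaintext
```

In the first model, the platform is a blind courier; in the second, every policy about who may read messages is enforced only by the company’s own rules, which is exactly the structural point the article is making.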

Google Opens Android App Distribution, Cuts Play Store Fees After Epic Antitrust Loss

Google has spent years deciding what software a billion Android users are allowed to install. A jury said it was illegal. Now, after losing in court twice and facing regulators on three continents, the company is changing how it runs Android’s app distribution. Whether the changes go far enough is still before a federal judge.

The core of what Google announced: rival companies can register as app stores, pay a one-time fee, and offer their own catalogs on Android devices. Google’s standard 30% cut on developer revenue drops to as low as 10% on recurring subscriptions, with a flat 5% option for developers who just want to use Google’s billing system. Developers can also route customers to external payment processors entirely.

“Anybody can launch a competitive app store now,” said Epic Chief Executive Officer Tim Sweeney. “Google is opening up Android all the way with robust support for competing stores, competing payments, and a better deal for all developers,” Sweeney posted on X. That sentence would have been unremarkable a few years ago. For most of Android’s history, it wasn’t true.

How Google kept the gates closed

A jury concluded in 2023 that Google’s Android policies violate antitrust law. The evidence showed a company that cut deals with developers, manufacturers, and carriers to ensure Google Play remained the only practical way to distribute apps on Android. Rival stores faced what the court called “install frictions,” an arsenal of warning screens and technical barriers designed to make alternatives feel unsafe or cumbersome. The message to users was that other app stores are a risk.

Judge James Donato issued an injunction ordering Google to open its app catalog to rival stores, ban preferential treatment for Google’s own services, and let developers steer customers to cheaper payments elsewhere. Google appealed. The Ninth Circuit upheld the ruling. Google asked the Supreme Court for relief. The Supreme Court said no.

The new proposal, filed Wednesday in San Francisco federal court, follows the shape of Donato’s original order. Google and Epic said it should resolve concerns the judge raised about an earlier settlement he described as a “sweetheart deal” for Google. Donato still needs to approve it.

What changes, and what it means

The practical changes are no small deal. Registered app stores will get a neutral install screen, equal treatment to Google Play on Android devices, and access to distribute their own app catalogs. The friction that made sideloading feel dangerous to ordinary users is supposed to disappear.

That matters beyond price competition. App stores that operate outside Google’s rules can carry software Google won’t. That includes apps from developers who’ve been removed from Play, apps that serve communities Google’s content policies don’t accommodate, and alternatives to Google’s own services that Google has historically had every incentive to disadvantage. When one company controls distribution on a platform used by billions of people, it controls more than commerce. It controls what software exists in practice for most of the world’s Android users.

Google has faced EU fines for competition violations. The European Commission flagged the company in March 2025 for blocking developers from steering users to offers outside the Play Store, with potential fines reaching 10% of global annual revenue. The UK’s Competition and Markets Authority designated Google as having Strategic Market Status in mobile platforms, giving regulators power to mandate exactly the kind of changes announced Wednesday. Fee changes for the US, UK, and EU are expected by June, with Australia, Korea, and Japan following before the end of 2026.

The revenue question

Documents from the Epic litigation put Google Play’s 2020 revenue at $14.66 billion. Alphabet doesn’t break out figures for the Play Store separately. Analysts estimated the combined effect of litigation and new regulations could cost Google around $1 billion in gross profit. Google’s willingness to settle, after years of fighting these changes through every available court, tells you what the stakes looked like from inside the company. Whether that remains true once legal scrutiny fades, and whether Google enforces its new openness with the same energy it once spent closing the platform down, are questions the next few years will answer.

Still, a looming problem

The court victory and the settlement, though, only go so far. Google has a separate policy in the works that would effectively reassert gatekeeping control through a different mechanism. Starting September 2026, any app installed on a certified Android device must be registered by a Google-verified developer. No registration means no installation, even for apps distributed entirely outside Google Play, through stores like F-Droid or the Amazon Appstore. That means government-issued ID, agreement to Google’s terms, and a $25 fee, just to reach users on hardware Google doesn’t own, through stores Google doesn’t run. A coalition of organizations signed an open letter to Sundar Pichai demanding the policy be scrapped.