Reclaim The Net Feed

Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID. The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations. Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement. To deliver different “safe” experiences to minors and adults, platforms must first determine who is a minor and who is not. They cannot apply separate algorithms, screen-time limits, or nighttime restrictions without establishing a user’s age with a level of confidence regulators will accept.

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook, which authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue. The Commission said TikTok may need to change the “basic design” of its service, including disabling infinite scroll over time, enforcing stronger screen-time breaks at night, and altering its recommender system.

Those changes depend on user classification. Age self-declaration is widely dismissed by regulators as insufficient. That leaves identity checks, government-issued documents, facial analysis, or other biometric estimation systems. Each option introduces new data collection layers that did not previously exist.

TikTok has rejected the accusations. “The Commission’s preliminary findings present a categorically false and entirely meritless depiction of our platform, and we will take whatever steps are necessary to challenge these findings through every means available to us,” the company said in a statement.

The Commission’s case file leans heavily on usage statistics. Regnier said TikTok has 170 million users in the EU and claimed “most of these are children.” He cited unspecified data showing that 7% of children aged 12 to 15 spend four to five hours daily on the app, and that it is “by far” the most used platform after midnight among 13- to 18-year-olds. “These statistics are extremely alarming,” he said. From the regulator’s perspective, those numbers justify deeper intervention.
Investigators argue that TikTok ignores signals of compulsive use, such as repeated nighttime sessions and frequent app openings by minors, and that its safeguards are not “reasonable, proportionate and effective.” Existing time-management tools are described as easy to dismiss, while parental controls are portrayed as demanding too much effort.

TikTok counters that it already offers custom screen-time limits, sleep reminders, and teen accounts that allow parents to set boundaries and prompt evening log-offs. The company says these features allow users to make “intentional decisions” about their time on the app.

The broader policy direction is not limited to Europe. Australia has banned social media for under-16s, and governments in Spain, France, Britain, Denmark, Malaysia, and Egypt are pursuing similar paths. In the United States, TikTok recently settled a lawsuit centered on social media addiction, while Meta’s Instagram and Google’s YouTube still face claims tied to youth harm.

Across jurisdictions, the addiction frame performs a specific function. By presenting platform use as a health risk, regulators gain justification to demand persistent monitoring, differentiated treatment, and verification of user attributes. Age becomes a regulatory gateway, and identity systems become the enforcement infrastructure.

More: The Gospel of the Anxious Generation

That trajectory raises unresolved questions about privacy and data minimization. Age verification systems, whether based on documents or biometrics, require platforms to collect and process more sensitive information than they do today. Once established, such systems are difficult to limit to a single purpose. The same rails built to separate children from adults can later be repurposed for other forms of access control.

The Commission has said its preliminary findings do not determine the final outcome and that TikTok will have the opportunity to respond. If Brussels proceeds to a non-compliance decision, remedies could include mandatory redesigns alongside financial penalties. A recent precedent illustrates how forcefully the DSA can be applied. Last year, Elon Musk’s platform X was fined for DSA breaches, including what the EU described as a “deceptive” verification badge and barriers to advertising research.

The TikTok case signals that the next phase of online regulation is less about individual posts and more about identity, classification, and algorithmic permissioning. Framed as child protection, it advances a model in which access to digital speech increasingly depends on proving who you are and how old you are before the feed even begins.

The US House Judiciary Committee, which has questioned the EU’s fine against X, said in a statement that “the Commission’s punitive actions against platforms have proven to be pretextual to coerce platforms to censor more political speech.” It added: “The Committee is deeply concerned that the Commission may be weaponizing the DSA against any company that complies with a lawfully issued congressional subpoena.”
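To make the classification requirement concrete, here is a minimal, hypothetical sketch of what differentiated “safe” experiences imply at the request level: before a feed can be assembled, every session needs an age signal the platform trusts, and the absence of one forces either a verification flow or a restricted default. Nothing here reflects TikTok’s actual code or any regulator’s specification; the function names, fields, and rules are illustrative assumptions only.

```python
# Hypothetical sketch: age-gated feed selection.
# Illustrates why differentiated "safe" experiences presuppose an
# age signal on every request; names and rules are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Session:
    user_id: str
    verified_age: Optional[int]  # None until some verification step has run


def select_feed_config(session: Session) -> dict:
    """Pick feed behavior based on the age attribute attached to the session."""
    if session.verified_age is None:
        # No trusted age signal: the platform must either trigger a
        # verification flow or fall back to the most restrictive defaults.
        return {"recommender": "restricted", "autoplay": False,
                "infinite_scroll": False, "night_access": False,
                "action": "require_age_verification"}
    if session.verified_age < 18:
        return {"recommender": "minor_profile", "autoplay": False,
                "infinite_scroll": False, "night_access": False,
                "action": None}
    return {"recommender": "personalized", "autoplay": True,
            "infinite_scroll": True, "night_access": True,
            "action": None}


if __name__ == "__main__":
    print(select_feed_config(Session("u1", None)))  # no age signal -> gated
    print(select_feed_config(Session("u2", 15)))    # minor profile
    print(select_feed_config(Session("u3", 34)))    # adult profile
```

Even in this toy form, the verification step sits in front of the feed itself rather than in front of any particular piece of content, which is precisely the sense in which age becomes a gateway to access.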

New York Budget Bill Proposes Mandatory File-Scanning Tech and In-Person Sales for 3D Printers

New York’s latest budget proposal would place new obligations on manufacturers of 3D printers and other digital fabrication equipment, tying the operation of these tools to mandatory software controls. The 2026–2027 executive budget bill, S.9005/A.10005, directs that devices sold in the state include “blocking technology” capable of scanning every design file and stopping production when a “firearms blueprint detection algorithm” flags a file as a potential gun or gun component.

The bill, similar to the ones we recently reported on in Washington state, treats file scanning as a workable technical safeguard, even though digital design files only describe shapes. Many ordinary objects share the same geometric traits as regulated firearm parts. Pipes, housings, brackets, mounts, and mechanical connectors all overlap with components that appear in firearms. Software that evaluates geometry alone cannot reliably separate lawful designs from prohibited ones. Such systems inevitably interrupt legitimate work while offering little resistance to deliberate misuse.

Although public discussion often centers on consumer 3D printers, the statutory language reaches much further. The definitions extend to any machine capable of making three-dimensional changes to an object from a digital design using subtractive manufacturing. That scope covers equipment found in repair businesses and small manufacturing firms throughout the state.

Open-source firmware projects face particular strain under this framework. Systems such as Marlin and Klipper are maintained by volunteer communities and commonly run on machines that are intentionally offline. Requiring these systems to inspect files against an external standard assumes constant connectivity and centralized infrastructure that many users neither want nor have.

The bill pairs these expectations with penalties reaching $10,000 per violation and requires that all 3D printer sales occur in person.
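As a rough illustration of why geometry-only scanning misfires, consider the hypothetical sketch below: a scanner that reduces a model to coarse shape features cannot distinguish a plumbing spacer from a regulated component with the same dimensions. The part names, dimensions, and flagging rule are all invented for illustration and do not describe any detection algorithm named in the bill.

```python
# Hypothetical sketch of a geometry-only "blueprint detection" check.
# The point: shape features alone cannot tell a lawful part from a
# regulated one. All names, dimensions, and thresholds are invented
# for illustration; no real detection algorithm is referenced.
from dataclasses import dataclass


@dataclass
class Part:
    name: str
    outer_diameter_mm: float
    inner_diameter_mm: float
    length_mm: float


def geometry_features(p: Part) -> tuple:
    """Reduce a part to the coarse features a file scanner could observe."""
    wall = (p.outer_diameter_mm - p.inner_diameter_mm) / 2
    aspect = p.length_mm / p.outer_diameter_mm
    return (round(p.inner_diameter_mm, 1), round(wall, 1), round(aspect, 1))


def flags_as_suspect(p: Part) -> bool:
    """Toy rule: flag long, thin-walled hollow cylinders."""
    bore, wall, aspect = geometry_features(p)
    return wall < 3.0 and aspect > 5.0


# Two hypothetical models with essentially identical geometry:
plumbing_spacer = Part("pond pump extension tube", 14.0, 9.0, 120.0)
regulated_shape = Part("hypothetical regulated component", 14.0, 9.0, 120.0)

print(geometry_features(plumbing_spacer) == geometry_features(regulated_shape))  # True
print(flags_as_suspect(plumbing_spacer), flags_as_suspect(regulated_shape))      # True True
```

Any threshold tight enough to catch the second part also catches the first, and any threshold loose enough to spare the first also spares the second, which is the false-positive/false-negative trade-off the article describes.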

EU Targets VPNs as Age Checks Expand

Australia’s under-16 social media restrictions have become a practical reference point for regulators who are moving beyond theory and into enforcement. As the system settles into routine use, its side effects are becoming clearer. One of the most visible has been renewed political interest in curbing tools that enable private communication, particularly Virtual Private Networks. That interest carries consequences well beyond “age assurance.”

A January 2026 briefing we obtained from the European Parliamentary Research Service traces a sharp rise in VPN use following the introduction of mandatory age checks. The report notes “a significant surge in the number of virtual private networks (VPNs) used to bypass online age verification methods in countries where these have been put in place by law,” placing that trend within a broader policy environment where “protection of children online is high on the political agenda.”

Australia’s experience fits this trajectory. As age gates tighten, individuals reach for tools that reduce exposure to monitoring and profiling. VPNs are the first port of call in that response because they are widely available, easy to use, and designed to limit third-party visibility into online activity.

The EPRS briefing offers a clear description of what these tools do: “A virtual private network (VPN) is a digital technology designed to establish a secure and encrypted connection between a user’s device and the internet.” It explains that VPNs hide IP addresses and route traffic through remote servers in order to “protect online communications from interception and surveillance.” These are civil liberties functions, not fringe behaviors, and they have long been treated as legitimate safeguards in democratic societies.

Pressure moves toward private infrastructure

European debate has increasingly framed VPNs as an obstacle to enforcement. The EPRS report records that “some argue that access to VPN services should be restricted to users above a digital age of majority.” That framing effectively recasts privacy-enhancing technology as a regulatory gap to be closed.

The UK experience illustrates how quickly this logic escalates. After the Online Safety Act came into force, VPN apps flooded app store rankings. According to the report, “half of the top 10 free apps in app-download charts in UK app stores have reportedly been VPN services,” with one developer citing “a 1,800% spike in downloads in the first month after the legislation started to apply.” Those figures are now used to justify proposals that would limit who can access encryption tools. The Children’s Commissioner for England has called for VPNs to be restricted to adults.

The EPRS briefing captures the stakes of that approach: “While privacy advocates argue that imposing age-verification requirements on VPNs would pose significant risks to anonymity and data protection, child-safety campaigners claim that their widespread use by minors requires a regulatory response.”

From a civil liberties perspective, this is a troubling expansion: age assurance moves from regulating specific services toward regulating how people protect their connections in general. That shift affects journalists, activists, whistleblowers, and ordinary users who rely on VPNs to reduce tracking, avoid profiling, or communicate safely.

Regulatory alignment amplifies risk

Australia is contributing directly to this policy direction.
The eSafety Commissioner, Julie Inman Grant, has been meeting with a cooperation group on age assurance convened by Ofcom, with participation from the European Commission. A joint release following one such meeting states that “throughout 2026, the three regulators will continue to have regular exchanges to further explore effective age-assurance approaches, enforcement against adult platform services and other providers to ensure minors are protected, relevant technological developments, and the essential role of data access and independent research in supporting effective regulatory action.”

The emphasis on data access reflects language already present in European policy documents. The EPRS briefing warns that “as the EU reviews cybersecurity and privacy legislation, VPN services may also come under stricter regulatory scrutiny.” It adds that “it is likely that the revised Cybersecurity Act will introduce child-safety criteria, potentially including measures to prevent the misuse of VPNs to bypass legal protections.”

Embedding child-safety criteria into cybersecurity law risks collapsing the distinction between content regulation and communication security. That distinction has traditionally protected private correspondence from becoming a tool of routine governance.

The EPRS report outlines why scaling remains contentious, noting that existing measures “including verification, estimation and self-declaration are relatively easy for minors to bypass.” Proposed alternatives rely on biometrics, identity documents, or persistent age signals tied to devices.

France’s “double-blind” requirement is often cited as a privacy-conscious approach. The briefing explains that under this model, “the adult platform receives no information about the user other than confirmation of eligibility, while the age-verification provider has no knowledge of which websites the user visits.” Even here, the solution depends on expanding verification infrastructure rather than limiting data collection at the source.

In France, officials are hinting that efforts to restrict children’s access to social media may extend beyond platform rules and into the tools people use to stay private online. Lawmakers are advancing a proposal that would bar anyone under 15 from using social media services, and at least one senior figure has suggested that virtual private networks could become part of the next phase of enforcement. Speaking on public broadcaster Franceinfo, Minister Delegate for Artificial Intelligence and Digital Affairs Anne Le Hénanff framed the issue as an ongoing process rather than a finished policy. “If [this legislation] allows us to protect a very large majority of children, we will continue. And VPNs are the next topic on my list,” she said.

At the EU level, the direction is increasingly explicit. The EPRS briefing records that the European Parliament has adopted a resolution supporting age-verification methods and calling for a digital age limit of 16 for social media. Investigations under the Digital Services Act are already underway, and multiple governments are backing the concept of a pan-European digital age of majority.
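To make the “double-blind” model concrete, here is a minimal, hypothetical sketch of the token flow the briefing describes: the verification provider attests only eligible or not eligible, the platform checks that attestation without learning anything else about the user, and the provider never learns which site the token is presented to. The HMAC shared-secret signature below stands in for whatever cryptography a real deployment would use (asymmetric signatures or zero-knowledge proofs); every name and key here is an assumption for illustration.

```python
# Hypothetical sketch of a "double-blind" age attestation flow.
# The verifier attests only eligibility; the platform learns nothing
# else about the user; the verifier never sees the destination site.
# HMAC with a shared key stands in for real cryptography (a production
# scheme would use asymmetric signatures or zero-knowledge proofs).
import hmac, hashlib, json, time

VERIFIER_KEY = b"demo-key-not-real"  # illustrative only


def issue_attestation(is_adult: bool) -> dict:
    """Verifier side: checks age somehow, then emits a minimal token.

    Note what is absent: no name, no document number, no destination site.
    """
    payload = {"eligible": is_adult, "issued_at": int(time.time())}
    raw = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(VERIFIER_KEY, raw, hashlib.sha256).hexdigest()}


def platform_accepts(token: dict) -> bool:
    """Platform side: verifies the signature and reads a single boolean."""
    raw = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, raw, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["payload"]["eligible"]


if __name__ == "__main__":
    token = issue_attestation(is_adult=True)  # verifier never told where it will be used
    print(platform_accepts(token))            # platform learns only: eligible -> True
```

Even in this minimal form, the privacy property depends entirely on the verification infrastructure existing and behaving as promised, which is the expansion of identity systems the article describes.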

Linux Has One Job in 2026: Make It Easy to Say Yes

This post is available to paid supporters of Reclaim The Net.

Spanish PM Declares War on the Internet While Calling It Protection

At the 2026 World Government Summit in Dubai, Spanish Prime Minister Pedro Sánchez announced a set of measures aimed at reshaping how social media platforms operate within Spain and across Europe. He described social media as a “failed state” and declared that if governments want to protect citizens, “there is only one thing we can do: take back control.”

The proposal, to be introduced in Spain’s parliament next week, includes holding platform executives criminally accountable for illegal content, criminalizing algorithmic amplification of prohibited material, and creating a “Hate and Polarization Footprint” to monitor how companies spread divisive content. Sánchez also said his government will “work with our public prosecutor to investigate and pursue the infringements committed by Grok, TikTok and Instagram” and promised “zero tolerance on this matter.”

He further announced that Spain has joined five other European nations in forming a “Coalition of the Digital Willing,” which is “committed to enforcing stricter, faster, and more effective regulation of social media platforms.” According to Sánchez, the coalition will hold its first meeting soon to “advance coordinated action at a multinational scale.”

Digital rights advocates and technology leaders have voiced serious concern that these initiatives risk expanding government surveillance and undermining free speech. Telegram founder Pavel Durov said on Wednesday that “Pedro Sánchez’s government is pushing dangerous new regulations that threaten your internet freedoms. Announced just yesterday, these measures could turn Spain into a surveillance state under the guise of ‘protection.’” He warned that the plan will cause “increased government-led censorship of online content, breaches of privacy through de-anonymizing users and mass-surveillance.”

Spain has already demonstrated a willingness to disrupt core internet infrastructure in the name of enforcement. In its ongoing campaign against unlicensed football streaming, courts have granted LaLiga the power to compel internet service providers to block broad Cloudflare IP ranges used by piracy-linked sites. These blocks, which often coincide with matchdays, have had collateral effects far beyond the intended targets. Services like GitHub, Steam, and X have periodically gone dark for Spanish users because they share Cloudflare’s infrastructure. Despite challenges from Cloudflare and cybersecurity organizations, a Spanish court reaffirmed the legality of these shutdowns. The mass blocking of unrelated websites has driven widespread VPN adoption as users attempt to bypass state-imposed restrictions.

The aggressive posture toward network-level censorship shows that the Sánchez government’s proposed crackdown on social media is part of a broader pattern of overreach.
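The collateral damage described above follows directly from how IP-range blocking works: an order that names a CIDR block takes down every service whose traffic terminates on addresses in that block. The short sketch below, using an illustrative documentation range and entirely hypothetical address assignments (not real network data for any named service), shows why a single blocked range sweeps in unrelated sites.

```python
# Hypothetical sketch of CIDR-range blocking and its collateral reach.
# The blocked range and the hostname-to-IP assignments below are
# illustrative assumptions, not real network data for any named service.
import ipaddress

# A court order framed as "block this provider's range" is just a CIDR block:
blocked_range = ipaddress.ip_network("198.51.100.0/24")  # documentation range, standing in for a CDN block

# Hypothetical edge addresses for several unrelated sites that happen to
# sit behind the same shared infrastructure:
resolved = {
    "piracy-stream.example":  ipaddress.ip_address("198.51.100.10"),
    "code-hosting.example":   ipaddress.ip_address("198.51.100.57"),
    "game-store.example":     ipaddress.ip_address("198.51.100.201"),
    "unrelated-blog.example": ipaddress.ip_address("203.0.113.14"),
}

for host, addr in resolved.items():
    status = "BLOCKED" if addr in blocked_range else "reachable"
    print(f"{host:25s} {addr} -> {status}")

# Only the last host escapes: everything sharing the range goes dark
# together, regardless of what it hosts.
```

Blocking at the address level cannot see what a server hosts, only where its traffic lands, which is why matchday orders aimed at streaming sites have repeatedly knocked out unrelated services sharing the same provider.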