Reclaim The Net Feed

@reclaimthenetfeed

SCOTUS Rejects Citizen Journalist’s Case Against Officials Who Arrested Her for Asking Police Questions
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Priscilla Villarreal built a following in the way modern news often grows now: not through printing presses or broadcast towers, but through a Facebook page that drew more than 200,000 people into its orbit. In Laredo, Texas, under the name La Gordiloca, she reported quickly, conversationally, sometimes uncomfortably close to the raw edge of events.

In 2017, she texted a police officer to confirm the identities of two victims, one from a suicide, one from a car accident. She received answers. She published them. Months later, she was arrested.

The law used against her had been sitting unused for 23 years. It makes it a felony to solicit nonpublic information from a government official “with intent to obtain a benefit.” In Villarreal’s case, authorities argued that the benefit was popularity: more followers, more attention, more reach. In other words, doing well at the job became the job’s alleged crime.

A state judge dismissed the charges, finding the statute too vague to stand. That might have sounded like a resolution, the system correcting itself in the end. Instead, it became the beginning of a second act.

Villarreal filed a civil rights lawsuit against the officials involved in her arrest. The response was immediate and familiar within legal circles: “Qualified immunity.” The doctrine protects government officials from liability unless there is already a court decision declaring nearly identical conduct unconstitutional. No case had ever addressed the idea of arresting a journalist for asking a question over text.

A three-judge panel initially sided with Villarreal, stating, “If the First Amendment means anything, it surely means that a citizen journalist has the right to ask a public official a question, without fear of being imprisoned. Yet that is exactly what happened here: Priscilla Villarreal was put in jail for asking a police officer a question.
If that is not an obvious violation of the Constitution, it’s hard to imagine what would be.”

The clarity of that statement did not last. The full 5th Circuit reversed the decision. In a 9-7 ruling, the court concluded that the officers and prosecutors could reasonably believe they were enforcing the law. Judge Edith Jones wrote that it was inappropriate to “portray her as a martyr for the sake of journalism,” adding that Villarreal had skirted the Texas law “to capitalize on others’ tragedies to propel her reputation and career.” The focus moved: not just what happened, but who it happened to.

The Supreme Court Steps Aside

When the case reached the Supreme Court, the justices declined to hear it. Villarreal’s First Amendment claim effectively ended on Monday. We obtained a copy of the order list for you here.

Justice Sonia Sotomayor dissented. Her words were a reminder of how ordinary the underlying act had been. “This case implicates one of the most basic journalistic practices of them all: asking sources within the government for information. Each day, countless journalists follow this practice, seeking comment, confirmation, or even ‘scoops’ from governmental sources,” she wrote. “This was a blatant First Amendment violation. No reasonable officer would have thought that he could have arrested Villarreal, consistent with the Constitution, for asking the questions she asked.”

She described the outcome as “a perverse scheme in which officials can arrest someone for protected activity, decline to appeal a trial court’s decision declaring the statute unconstitutional (as the county did here), and use qualified immunity to avoid liability by citing back to that statute.”

The structure revealed by the case is difficult to ignore. An arrest is made under a questionable law. Charges are later dropped. No definitive ruling emerges on the arrest itself. When challenged, officials point to the lack of prior rulings as protection. The result is a kind of legal loop.
The act may be unconstitutional, yet no one is held accountable for treating it as if it were not.

What remains is the ripple effect. Journalism depends on questions, sometimes persistent and inconvenient ones. When asking those questions carries even a distant possibility of arrest, the calculation changes. Not dramatically, not all at once, but enough. Enough to hesitate. Enough to reconsider sending the message at all.

The post SCOTUS Rejects Citizen Journalist’s Case Against Officials Who Arrested Her for Asking Police Questions appeared first on Reclaim The Net.

The Verdict Against Meta and Google That Could End the Anonymous Internet
reclaimthenet.org

A Los Angeles jury has found Meta and YouTube negligent in the design of their platforms and awarded $3 million to a plaintiff identified as K.G.M., a young woman who testified that years of near-constant social media use contributed to depression, anxiety, and body dysmorphia. The jury assigned 70% of the responsibility to Meta and 30% to YouTube. Punitive damages came to another $6 million.

The verdict is being reported as a landmark for child safety. It also represents a significant legal mechanism for dismantling anonymous internet access, built in plain sight, with bipartisan enthusiasm and a CEO’s enthusiastic assistance.

K.G.M.’s attorneys built their claim not around what users posted, for which Section 230 of the Communications Decency Act largely shields platforms from liability, but around how the platforms were designed: infinite scroll, algorithmically amplified notifications, engagement loops engineered to maximize time on site. The argument treats social media architecture the way product liability law treats a car without brakes: a defective product that the public needs to be protected from. If that framing survives appeal, the plaintiffs in more than 1,600 similar cases pending nationwide will inherit a tested legal theory for bypassing Section 230 protections entirely.

That is a structural change to internet liability law, driven by trial lawyers and a still-contested body of science on social media’s mental health effects. The science is genuinely disputed. The word “addiction” is doing substantial legal and rhetorical work here. When social media gets classified as a drug, access to it becomes a regulatory and medical matter for the government to step in and fix. Who uses it, under what conditions, and who gets verified become questions for authorities rather than individuals.
Regulating an addictive product and regulating speech should not be the same thing, but the surveillance infrastructure required to enforce either is identical: identity verification, access controls, and a system that follows users across every platform they use. Which brings us to what Mark Zuckerberg said on the stand.

Zuckerberg spent more than five hours testifying in Los Angeles Superior Court, becoming visibly agitated under cross-examination. The plaintiff’s attorneys presented internal emails, including a 2015 estimate that 4 million users under 13 were on Instagram, approximately 30% of all American children aged 10 to 12. An old email from former public policy head Nick Clegg was read into the record: “The fact that we say we don’t allow under-13s on our platform, yet have no way of enforcing it, is just indefensible.” Zuckerberg acknowledged the slow progress: “I always wish that we could have gotten there sooner.” When pressed on age verification, he told jurors he did not understand why it was difficult.

His proposed solution is the detail that deserves the most attention. Multiple times, Zuckerberg argued that verification should happen not inside individual apps but at the operating system level, handled by Big Tech gatekeepers Apple and Google. He told the jury that operating system providers “were better positioned to implement age verification tools, since they control the software that runs most smartphones.” He elaborated: “Doing it at the level of the phone is just a lot cleaner than having every single app out there have to do this separately.” He added that it “would be pretty easy for them” to implement.

This is not a proposal merely to verify the ages of Instagram users. It is a proposal to verify the identity of every smartphone user, for every app, at the OS layer. It applies to every app installed on the device, every website accessed through the phone’s browser, and every message sent through any app on the phone.
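What makes the OS-level model so sweeping is structural: a single identity record, established once at the operating system layer, that every app launch consults. No such API exists; the sketch below is purely illustrative, and every name in it (`VerifiedIdentity`, `launch_app`, and so on) is invented for this example, not drawn from any real Apple or Google interface.

```python
from dataclasses import dataclass

@dataclass
class VerifiedIdentity:
    """Hypothetical OS-level identity record, tied to the vendor account."""
    account_id: str
    is_adult: bool

class OperatingSystem:
    """Illustrative gatekeeper: identity is checked once, then consulted
    for every app on the device rather than per-app."""

    def __init__(self, identity: VerifiedIdentity):
        self._identity = identity  # established at the OS layer, not in any app

    def launch_app(self, app_name: str, requires_adult: bool) -> str:
        # Every launch passes through the same central identity check,
        # which is what makes an OS-level gate inescapable per-device.
        if requires_adult and not self._identity.is_adult:
            return f"{app_name}: blocked (age gate)"
        return f"{app_name}: launched as {self._identity.account_id}"

# An unverified (or minor) user is gated in every app at once:
minor_device = OperatingSystem(VerifiedIdentity("user-123", is_adult=False))
print(minor_device.launch_app("SocialApp", requires_adult=True))
print(minor_device.launch_app("WeatherApp", requires_adult=False))
```

The point of the sketch is the coupling: once the identity record lives at the OS layer, "age verification for social media" and "identity attached to everything the device does" are the same mechanism.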
Zuckerberg proposed this from the witness stand while simultaneously solving his own legal problem. If Apple and Google own age enforcement, platforms like Meta are no longer responsible for it. The liability shifts to Cupertino and Mountain View. Two companies already under serious antitrust scrutiny for their control of app distribution would be handed new authority as identity gatekeepers for the internet. The man under oath, under pressure, handed a high-profile public endorsement to a national digital ID layer baked into the two operating systems running the overwhelming majority of the world’s smartphones. Legislators will use it.

The infrastructure for this is already under construction. California’s SB 976 mandates age verification systems for social media platforms statewide, with implementation rules due by January 2027. The Ninth Circuit has declined to rule on whether those requirements violate the First Amendment until those regulations are finalized. Age verification for lawful online speech is advancing in California without a constitutional answer. The Kids Online Safety Act, pending federally, would direct agencies to develop verification at the device or operating system level, precisely the framework Zuckerberg promoted from the stand. New York’s SAFE For Kids Act permits facial analysis as an alternative to government ID submission, meaning biometric data is collected simply to access a social media feed.

These laws require identity databases, and identity databases get breached. A Discord-related breach last year exposed approximately 70,000 government-issued IDs submitted through a third-party customer support system, with attackers claiming the number was higher. Every ID check creates a future breach waiting to happen.
Anonymous and pseudonymous speech online protects real people: whistleblowers, abuse survivors, political dissidents, people exploring medical questions or identities they are not ready to attach their legal names to, and journalists protecting sources. Mandatory identity verification at the OS level ends all of that for everyone. The stated goal is to protect children from Instagram. The mechanism ends anonymous internet access for every adult who owns a phone.

Meanwhile, a separate New Mexico jury found Meta in violation of state consumer protection law this week, imposing a $375 million penalty after New Mexico Attorney General Raúl Torrez built a case by posing as children on the platforms and documenting the sexual solicitations they received. The jury determined Meta engaged in what it described as “unconscionable” trade practices and made false or misleading statements about child safety. Meta said it “disagrees with the verdict and will appeal,” adding: “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.” The $375 million fine is a fraction of Meta’s $201 billion revenue in 2025.

The chain from these verdicts to surveillance architecture runs through a single word: “addiction.” A public health emergency follows from that classification. Emergency powers follow from the emergency. Age verification follows from emergency powers. OS-level ID checks follow from age verification. Each step is presented as protecting children. What gets built is a surveillance system for everyone, unless more people wake up to it.

The post The Verdict Against Meta and Google That Could End the Anonymous Internet appeared first on Reclaim The Net.

Apple Forces UK iPhone Age Checks in iOS 26.4
reclaimthenet.org

With iOS 26.4, Apple has turned every iPhone in the UK into an identity checkpoint. The update, released March 24, requires all UK users to confirm they’re 18 or older before accessing certain features and services on their Apple Account. UK communications regulator Ofcom called it “a real win for children and families.” The infrastructure being built is more of a problem than that framing suggests.

Apple has, without warning, placed a gatekeeper on the devices of 35 million UK users who paid good money for full-featured smartphones and now find themselves holding something closer to a supervised children’s tablet. It’s a corporate ultimatum: hand over sensitive personal data or lose functionality you already paid for.

The verification prompt appears immediately after the update installs. Apple checks whether your account already has a credit card linked or whether the account has existed long enough to establish you as an adult. For many existing users, the process is essentially automatic. For everyone else, the options narrow quickly: link a credit card, scan a government-issued photo ID, or accept that your account defaults to teen restrictions, with Apple’s Web Content Filter and Communication Safety features switched on across all browsers, messaging apps, and FaceTime, monitoring communications.

Web Content Filter blocks websites Apple classifies as explicit, operating across Safari and third-party browsers alike. Communication Safety scans incoming and outgoing images and videos for nudity. Both activate silently for anyone who hasn’t cleared the adult threshold. Skip verification, or lack a credit card and a government ID, and Apple decides what you’re allowed to see. Users without a credit card or government ID have no other path; reports from UK users confirm it. Scan the card, upload the ID, or live with restricted access.
The system doesn’t offer alternatives.

Ofcom praised the rollout in a statement, saying it had coordinated extensively with Apple and others on age assurance under the Online Safety Act: “Apple’s decision that the UK will be one of the first countries in the world to receive new child safety protections on devices is a real win for children and families…We’ve worked closely with Apple and other services to ensure they can be applied in a variety of contexts in order to ensure users are protected. This will build on the strong foundations of the Online Safety Act, from widespread age checks that keep young people away from harmful content, to blocking high-risk sites and stepping up action against child sexual abuse material.”

Notably, Apple wasn’t legally required to do this. The Online Safety Act’s age verification obligations apply to platforms and adult content sites, not to device operating systems or app stores. Apple chose to go further, and Ofcom chose to celebrate it.

The reality of the UK’s wider age verification push is that it hasn’t worked. VPN usage spiked dramatically when the Online Safety Act came into force, with NordVPN reporting a 1,000% increase in UK sign-ups and Proton VPN seeing 1,400% more in the first days after enforcement. Some users bypass facial age-scanning on websites by holding a photo of an older person to the camera.

The UK isn’t the endpoint. Apple has been watching age verification legislation build momentum globally, with US federal pressure, state-level requirements beginning in Utah, and ongoing industry lobbying all pointing in the same direction. iOS 26.4 makes the UK a test case. The system designed here will likely expand if other jurisdictions get their way.
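The fallback order described in the article reduces to a short chain of checks: existing account signals first, then a government-ID scan, then teen restrictions by default. A minimal sketch of that decision logic follows; the function name, parameter names, and status labels are all hypothetical, invented for illustration rather than taken from Apple's actual (non-public) implementation.

```python
def resolve_account_status(has_linked_credit_card: bool,
                           account_old_enough: bool,
                           completed_id_scan: bool) -> str:
    """Illustrative only: the access level a UK Apple Account ends up
    with under the fallback order the article describes."""
    if has_linked_credit_card or account_old_enough:
        # Existing signals clear the adult threshold; for many
        # long-standing accounts this step is effectively automatic.
        return "adult"
    if completed_id_scan:
        # The only remaining verification path: a government photo ID.
        return "adult"
    # Skipping verification leaves Web Content Filter and
    # Communication Safety switched on by default.
    return "teen-restricted"
```

The sketch makes the article's point about missing alternatives concrete: every branch ends in either surrendered personal data or restricted access; there is no path that preserves both privacy and full functionality.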

Missouri v. Biden Consent Decree: US Government Admits Pressuring Social Media Platforms to Censor Protected Speech
reclaimthenet.org

The Trump administration and the plaintiffs in Missouri v. Biden signed a consent decree on Monday, ending one of the most significant First Amendment lawsuits in recent memory with a formal, court-enforceable admission: the federal government pressured social media platforms to silence protected speech, and it cannot do so again. We obtained a copy of the consent decree for you here.

The decree lands at the end of years of litigation that began when the States of Missouri and Louisiana, joined by Gateway Pundit publisher Jim Hoft, Dr. Aaron Kheriaty, and activist Jill Hines, filed suit alleging that Biden administration officials had run what their legal filings described as “a coordinated censorship operation emanating from the highest levels of government.” The lawsuit survived a Supreme Court ruling in 2024 that blocked a preliminary injunction on standing grounds, with the majority explicitly declining to rule on the merits. The case returned to the district court in Louisiana, discovery continued, and eventually both sides concluded that settling was preferable to prolonged litigation.

The consent decree prohibits the Surgeon General, the CDC, and CISA from taking any action, formal or informal, direct or indirect, to threaten Facebook, Instagram, X, LinkedIn, or YouTube with punishment unless those platforms delete content containing protected speech. The decree also bars those agencies from unilaterally directing or vetoing the platforms’ content moderation decisions. The agreement runs for ten years and is enforceable by the named plaintiffs if violated.

The decree’s preamble makes plain what the government is conceding.
Quoting President Trump’s Executive Order 14149, signed on his first day back in office, the document states that the previous administration had “[t]rampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.” It continues: the Federal Government “infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.” The government signed that.

On the question of “misinformation,” the decree is equally direct. The parties agreed that “government, politicians, media, academics, or anyone else applying labels such as ‘misinformation,’ ‘disinformation,’ or ‘malinformation’ to speech does not render it constitutionally unprotected.” Citing the Supreme Court’s United States v. Alvarez, the agreement acknowledges that some false statements are inevitable in open public discourse, and the First Amendment protects that space.

The real scope of what the Biden administration built has been documented through the litigation’s discovery process. Government agencies and the White House directed social media platforms to remove viewpoints conflicting with federal messaging on COVID-19, the 2020 election, and the Hunter Biden laptop story. The FBI ran weekly calls with major tech companies ahead of the 2020 election. At its peak, the government’s real-time content monitoring flagged 2.5 percent of all tweets on Twitter as “potential misinformation.” In a single request, the FBI demanded that Twitter delete 929,000 tweets it characterized as foreign speech. Mark Zuckerberg publicly acknowledged in 2024 that the pressure campaign existed and that he regretted Facebook’s participation in it.
The decree is, without question, a limited instrument. It covers only the named plaintiffs’ social media accounts on those five platforms, and only those three agencies. It does not bind every federal department or protect every American’s posts.

John Vecchione, senior litigation counsel at the New Civil Liberties Alliance and one of the plaintiffs’ attorneys, offered his own accounting of what four years of litigation produced: “This case began with a suspicion that blossomed into fact, that led to Congressional hearings and an Executive Order that government censorship of Americans’ social media posts should end. Freedom of speech has been powerfully preserved by our clients, past and present, who initiated this suit.”

A consent decree is only as useful as the court’s willingness to enforce it and the plaintiffs’ ability to detect violations and bring them to court promptly. The decree gives the named agencies fifteen business days to remedy any identified violation before the plaintiffs can seek court relief, and limits judicial remedies to retracting the offending statements and voiding their effect. That is a constrained enforcement mechanism, though it is a real one. Vecchione has indicated he will publish an op-ed explaining the mechanics of how future enforcement would work, and NCLA notes the plaintiffs retain the right to return to court if the government violates the terms.

What is genuinely significant here is the admission itself. The government of the United States has agreed, in a binding legal document, that labeling speech “misinformation” does not strip it of constitutional protection, that coercing private companies to delete protected posts violates the First Amendment, and that the previous administration did exactly that. For years, official channels denied the operation existed. Congressional hearings documented it anyway. An executive order condemned it.
Now, a consent decree formally acknowledges it and places enforceable limits on its recurrence, at least for the plaintiffs who spent four years litigating to get this far. The consent decree awaits final approval from Judge Terry Doughty of the Western District of Louisiana.

Ohio’s Second Attempt at Adult Website Age ID Verification Advances
reclaimthenet.org

Ohio is taking a second shot at forcing adult websites to verify users’ ages, and this time the legislature is trying to close the legal escape route that let adult websites and others walk away from the first attempt. The Innocence Act, House Bill 84, passed the Ohio House on March 18 and moved to the Senate the following day. We obtained a copy of the bill for you here.

The bill requires any company that “sells, delivers, furnishes, disseminates, provides, exhibits, or presents any material or performance that is obscene or harmful to juveniles on the internet” to deploy age verification. There are no carve-outs for platforms that host third-party content.

That shelter is exactly what Aylo, Pornhub’s parent company, claimed under Ohio’s original age verification law. Section 230 of the Communications Decency Act shields platforms from liability for content posted by their users, and Aylo argued that hosting user-generated content made it an “interactive computer service” under that definition, exempting it from Ohio’s age-gating requirements. The argument worked. The original law’s language mirrored the federal statute closely enough that Aylo and other adult platforms successfully sidestepped enforcement entirely.

HB 84 rewrites those definitions to cut off that route. It also replaces the criminal penalties from an earlier version of the bill, which included misdemeanor charges for minors who bypassed content blocks, with civil fines reaching $100,000 per day for noncompliance. Enforcement falls to Ohio Attorney General Dave Yost, whose office worked with Republican state Reps. Steve Demetriou and Josh Williams on the bill’s drafting. The measure passed the House Technology and Innovation Committee unanimously before advancing to a floor vote, and a path to Governor Mike DeWine’s signature looks clear.
The age verification these laws require is worth examining directly. To access legal content as an adult, users must submit identity documents, biometric data, or other credentials to platforms or third-party verification services. That data then exists somewhere, held by someone, subject to breach, subpoena, and uses that weren’t disclosed at the point of collection. The stated goal is to protect children. The actual mechanism is a database of adults who watch pornography, linked to verifiable identities.

Demetriou introduced an earlier Innocence Act version that imposed criminal penalties on minors who circumvented age blocks, a provision that treated teenagers as criminals for doing what teenagers do online. That’s gone from HB 84. What remains is the identity verification infrastructure itself, framed as child protection while functioning as a surveillance requirement for adult content consumption.

Ohio isn’t alone in pursuing this, but it is among the states most determined to make it work regardless of the legal obstacles that keep appearing in the way.