Reclaim The Net Feed
@reclaimthenetfeed

UK Foreign Affairs Committee Calls for Government Agency to Police Online “Disinformation”
reclaimthenet.org

The UK’s Foreign Affairs Committee wants the government to build a new censorship agency. The proposed “National Counter Disinformation Centre” would be given the power to identify and act against speech the state considers “disinformation,” placed on a statutory footing, and modeled on bodies like Sweden’s Psychological Defence Agency, which once ran a public campaign warning citizens about the dangers of memes.

The committee’s report, published on March 27, 2026, goes further than a single new body. It calls for new censorship rules in a forthcoming Representation of the People Bill to target AI-generated content and “the creation and dissemination of disinformation.” It wants amendments to the Online Safety Act that would force platforms to publicly display where user accounts were created and whether the user connected through a VPN. It wants more money for the FCDO’s Hybrid Threats Directorate. And it wants the government to review the National Security Act’s foreign interference offense because, apparently, an existing law that carries up to 14 years in prison isn’t strict enough.

Committee chair Emily Thornberry framed the entire project in the language of war. “It is the new warfare and open liberal democracies are sitting ducks,” she said. “From pushing provable lies, to planting false seeds of doubt, disinformation is the weapon of choice of hostile states seeking to destabilise democracies.”

If “disinformation” is a weapon, then censoring it becomes a defense. If identifying incorrect speech is warfare, then creating a government agency to police the information space becomes national security. The metaphor lets you skip past every difficult question about who defines “disinformation,” who gets targeted when definitions are vague, and what happens when a government agency tasked with identifying false speech starts to decide that inconvenient speech qualifies.

The report itself cites the idea that “Elon Musk’s influence is potentially greater in the UK than that of Russia’s,” placing the owner of a social media platform in the same threat category as hostile states. That framing tells you something about the committee’s actual concerns. The problem isn’t limited to covert Russian bot networks planting fabricated stories. It extends to the owner of a platform making editorial decisions that the committee doesn’t like.

The foreign interference offense that the committee wants reviewed already covers “misrepresentation,” which the National Security Act 2023 defines broadly enough to include “presenting information in a way which amounts to a misrepresentation, even if some or all of the information is true.” You can be prosecuted for presenting true information in a way the government considers misleading, provided the other elements of the offense are met. The committee’s complaint is that the “foreign power condition” is too hard to prove, which suggests they want the law applied more widely, with a lower threshold for establishing a foreign link.

Under the Online Safety Act, platforms are already required to “effectively mitigate and manage the risk” of their services being used for priority offenses like foreign interference. The committee’s proposed review signals a push to make those obligations bite harder, potentially requiring platforms to censor a wider range of speech labelled as foreign-adjacent.

The VPN and location transparency proposal is quietly one of the most significant recommendations. The committee wants an Online Safety Act amendment requiring platforms to share publicly the region where an account was created, the region where it’s based, and whether the user connects via VPN. An opt-out would be available, but the default is disclosure. This targets anonymous and pseudonymous speech directly. If you post political commentary from behind a VPN, that fact would be visible to everyone viewing your account. The chilling effect is immediate and by design. Users who value privacy, who have legitimate reasons to obscure their location (journalists, whistleblowers, domestic abuse survivors, anyone who doesn’t want to be doxed), would be flagged as suspicious by the very fact that they’re using basic privacy tools.
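To make the mechanics concrete, here is a minimal sketch of the kind of disclosure logic such an amendment would force platforms to build. Everything in it is hypothetical (the field names, the IP ranges, the badge format): the report prescribes an outcome, not an implementation.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Hypothetical list of VPN/datacenter ranges, of the kind platforms
# license from commercial IP-intelligence vendors.
KNOWN_VPN_RANGES = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

@dataclass
class Account:
    handle: str
    creation_region: str  # region recorded at signup
    current_region: str   # region inferred from recent connections
    opted_out: bool       # the committee's proposed opt-out

def public_badge(account: Account, connecting_ip: str) -> dict:
    """Build the publicly visible disclosure for an account profile."""
    if account.opted_out:
        # Opting out hides the details; whether the opt-out itself
        # would be publicly visible is not specified in the report.
        return {"handle": account.handle, "disclosure": "withheld by user"}
    ip = ip_address(connecting_ip)
    uses_vpn = any(ip in net for net in KNOWN_VPN_RANGES)
    return {
        "handle": account.handle,
        "created_in": account.creation_region,
        "based_in": account.current_region,
        "vpn": uses_vpn,
    }

print(public_badge(Account("@example", "UK", "UK", opted_out=False), "198.51.100.7"))
# -> {'handle': '@example', 'created_in': 'UK', 'based_in': 'UK', 'vpn': True}

Even this toy version exposes the design problem: VPN detection by IP-range lookup is approximate, so ordinary users on corporate networks or carrier-grade NAT would inevitably be mislabelled as VPN users, publicly.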
The report also calls for the government to force platforms to hand over data to researchers “free of charge and without cumbersome restrictions.” The committee frames this as transparency, but the direction of the research pipeline matters. Researchers who study “disinformation” frequently conclude that platforms aren’t censoring enough. Giving them unrestricted access to platform data, with government backing, creates a pressure loop where academic findings become the justification for more content removal. Platforms would also be forced to publish annual reports on “the detection of artificial amplification and foreign interference and the subsequent actions taken to remove such content,” creating a built-in incentive to demonstrate that they’re censoring at the scale the government expects.

Seven government departments currently have responsibilities touching on what the committee calls “foreign information manipulation and interference.” The report complains about fragmentation and slow progress, proposing the new centre as a fix. The model they admire is the National Cyber Security Centre, housed within GCHQ. A “disinformation” centre built on the GCHQ template, with statutory powers and intelligence agency proximity, would have the tools and the institutional culture to treat speech as a threat vector.

The whole report treats speech as a security problem and government censorship as the solution, wrapped in enough references to Russia and national defense that questioning any of it risks looking naïve. But none of the powers it proposes comes with an expiry date: a government body that decides what counts as “disinformation,” mandatory location exposure for social media users, lower thresholds for prosecuting speech as foreign interference, and forced data access for researchers who study “misinformation.”

Meta To Comply With Florida Age Verification Digital ID Law
reclaimthenet.org

Meta agreed to comply with Florida’s age verification law, HB 3, and will begin purging accounts belonging to children under 14 starting in May. The company’s capitulation comes ahead of an April 8 deadline set by Florida Attorney General James Uthmeier, who threatened litigation against any platform still refusing to verify the ages and identities of its users. Uthmeier is now pressuring Snapchat, Roblox, Discord, and TikTok to do the same.

What Florida calls child protection is also the construction of a statewide identity verification system for the internet. Meta is one of the biggest companies lobbying for age verification checks at the app store level. HB 3 bans under-14s from social media entirely and requires parental consent for 14- and 15-year-olds. But to block minors, platforms first need to determine who is and isn’t a minor. That means age-checking everyone, adults included. The surveillance burden falls on millions of people who have every legal right to use these services without proving who they are.
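The age bands themselves are trivial to express; here is a minimal sketch, assuming the platform already holds a verified birth date for every user. The function and field names are hypothetical, and the assumption is the whole point: the hard, privacy-invasive part of HB 3 is not the comparison below but producing a trustworthy birth date in the first place.

from datetime import date

def hb3_account_status(birth_date: date, parental_consent: bool, today: date) -> str:
    """Classify an account under HB 3's age bands (hypothetical helper)."""
    # Age in whole years, accounting for whether the birthday has passed.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 14:
        return "prohibited"  # account must be terminated under HB 3
    if age < 16:
        # 14- and 15-year-olds need parental consent.
        return "allowed" if parental_consent else "needs_parental_consent"
    return "allowed"  # adults pass, but only after proving their age

print(hb3_account_status(date(2013, 6, 1), False, date(2026, 5, 1)))  # prohibited
print(hb3_account_status(date(2011, 3, 1), True, date(2026, 5, 1)))   # allowed

Every branch of that function requires the same input, which is why an under-14 ban is, in practice, an identity check on adults.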
The law, signed by Governor Ron DeSantis in 2024 and passed with bipartisan support, spent two years tied up in court before becoming enforceable this March.

Uthmeier appeared on Fox & Friends to announce Meta’s compliance. “I can confirm we heard from Meta, and they have announced they will be complying with our law effective in early May,” he said. The fines for non-compliance sit at $50,000 per violation, a number Uthmeier says could reach the “billions” if platforms fail to remove unchecked accounts. State officials expect hundreds of thousands of accounts to be suspended next month. For companies that refuse to comply, Uthmeier’s office has promised to seek heavy damages and injunctive relief in court.

Uthmeier framed this as straightforward child safety. “They know that kids are suffering on these applications. They know the predators are getting to kids. So we’re encouraging companies, ‘come in, sit down. Let’s work together, let’s protect our kids at all costs,’” he said.

What he didn’t address is what happens to the identity data that platforms will need to collect from every user in Florida to comply. The law doesn’t specify which verification methods count as “reasonable,” leaving platforms to decide how much personal information to harvest and how long to keep it. Government IDs, biometric scans, payment credentials, behavioral profiling: all of these are on the table, and none of them come with meaningful retention limits.

Discord is one of the platforms Uthmeier singled out. The company’s earlier experiment with government ID-based age verification already resulted in a breach exposing over 70,000 government-issued IDs, a preview of what mandatory identity collection across every major platform could look like at scale.

The legal challenge to HB 3, brought by the Computer & Communications Industry Association and NetChoice in Computer & Communications Industry Association v. Moody, is still active. A federal district judge in Tallahassee initially blocked the law in June 2025, ruling it was “likely unconstitutional” as a restriction on protected speech. Two judges on the 11th Circuit reversed that in November, staying the injunction and finding Florida likely to succeed.

Florida is constructing that identity system now, behind a child-safety rationale that makes the long-term privacy costs easy to ignore.

Alberta Bill Would Fine Political Deepfakes $10,000 Without Satire Exemptions
reclaimthenet.org

Alberta’s government wants the power to fine people $10,000 for creating a political deepfake. The bill makes no distinction between a fake video designed to suppress votes and a satirical meme poking fun at the premier.

Justice Minister Mickey Amery tabled Bill 23, the Justice Statutes Amendment Act, 2026, on March 30. The legislation would prohibit individuals and entities from creating or distributing deepfakes that are likely to mislead voters about the conduct or statements of a party leader, minister, leadership or nomination contestant, MLA candidate, the chief electoral officer, the election commissioner, Elections Alberta employees, or election officers. We obtained a copy of the bill for you here.

The ban’s reach is notable for what it doesn’t say. There is no carve-out for satire, no exemption for parody, no protection for political memes. A deepfake clearly labelled as humor could still be prosecuted if someone, somewhere, decided it was “likely to mislead voters” about a politician’s statements. Who decides what’s likely to mislead? The election commissioner, the same office empowered by the bill to issue directions to stop the creation, distribution, or publication of content it deems in violation.

Officials said the prohibition would apply at all times, not only during the election cycle. The ban operates year-round, every year, regardless of whether Albertans are anywhere near a ballot box. It applies to content about sitting politicians even when no one is voting.

“We know that deepfake technology is going to continue to improve, and the distinction between what is reality and what is fake is becoming more and more difficult to distinguish,” Amery said.

Alberta’s bill takes a different approach: rather than relying on existing fraud and election interference laws to prosecute genuine bad actors, it creates a broad new category of banned speech and gives a government appointee the power to enforce it.

“Bill 23 ensures that our elections will remain fair and honest,” Amery said. “This is why Bill 23 will prohibit the creation and distribution of deepfakes that are likely to mislead voters about the statements or conduct of a candidate. Public confidence is essential to a healthy democracy.”

The phrase “likely to mislead” is where the real power sits. A deepfake of a premier singing a ridiculous song, obviously fake to any viewer, could technically be argued to mislead someone about the premier’s “conduct.” A satirical clip of a justice minister saying something absurd could be classified as a misleading depiction of their “statements.” The legislation provides no guidance on how to distinguish a genuine attempt at voter suppression from a political joke that happens to use AI-generated media.

Individuals who violate the rules face fines of up to $10,000, entities up to $100,000, and additional fines could be imposed for each day of non-compliance. Those are serious penalties for speech that may well be constitutionally protected under the Canadian Charter.

The chilling effect is predictable. An Alberta resident thinking about making a satirical AI video about their MLA now has a strong incentive not to bother. The government doesn’t need to prosecute anyone for the law to work exactly as a speech restriction always works, by making people think twice before they speak.
The deepfake ban also happens to be buried inside a much larger piece of legislation that quietly reshapes how Albertans can challenge their own government. Bill 23 would create a 12-month blackout period before and after provincial elections for starting or continuing a citizen initiative petition. It would also repeal deadlines for the government to call a referendum for any future successful policy or constitutional petition. A citizen petition that gathers enough signatures no longer comes with any deadline for the government to actually act on it. A petition delayed long enough is a petition that never matters.

Alberta already has laws against fraud and election interference. The question is whether a province needs a new law that bans a broad category of political expression, with vague definitions and no protections for satire or parody, enforced by fines that would bankrupt most individuals.

Opposition parties have indicated tentative support for the bill, which is unsurprising. The deepfake provisions will probably pass. They’ll sit on the books alongside the citizen petition restrictions, the removed referendum deadlines, and the expanded government oversight of the signature verification process. Bill 23 gives the Alberta government more tools to control what citizens say about their politicians and fewer obligations to respond when citizens try to hold those politicians accountable.

Brazil’s Justice Moraes Ordered Global Takedowns of American Users’ Speech, House Report Reveals
reclaimthenet.org

We have long covered how Brazilian Supreme Court Justice Alexandre de Moraes has been issuing global takedown orders to American social media companies since at least 2020, demanding they delete accounts and posts worldwide, including for users in the United States. A new House Judiciary Committee report, built on nonpublic documents, now maps the full scope of that operation. We obtained a copy of the report for you here.

The orders target political dissent. Moraes’s first documented global order, from July 2020, told Meta to delete 16 Facebook profiles everywhere to stop “continued dissemination of fraudulent news (fake news), slanderous accusations, threats and offenses imbued with animus . . . that affect the honor and safety of the FEDERAL SUPREME COURT.” The speech he wanted erased was criticism of his own court.

The targets include Americans. Bruno Aiub, a Florida-based podcaster known as “Monark,” saw roughly 40 accounts ordered deleted across 24 platforms in June 2024, with daily fines of about $18,500. Moraes also issued secret orders to Spotify between 2023 and 2024 demanding the removal of Aiub’s podcast. Brazilian journalist Allan Dos Santos, also US-based, triggered a harsher response. When X refused to block his account, Moraes froze the platform’s assets, cut off payment processing, and ordered X to cease operating in Brazil. Rumble was shut down for the same reason.

Brazil’s censorship agency, the CIEDDE, even flagged posts about US presidents for deletion. One April 2025 post was targeted because it said Trump was “going to expose that bandit dressed as a judge [Justice Moraes] here in Brazil, as well as the interference/fraud in the 2022 elections.” Others accused Biden and USAID of involvement in Brazilian election fraud. X refused to comply.

Stanford’s Cyber Policy Center hosted a September 2025 roundtable that brought together censorship officials from Brazil, Australia, the EU, and the UK. The event, revealed by a whistleblower, was framed as discussing “compliance and enforcement of existing regulations related to online trust and safety.” The attendees included officials who have directly targeted American speech.

Brazil’s Supreme Court also stripped platforms’ liability protections in June 2025. Justice Gilmar Mendes called the ruling a potential “paradigm for the world” for “how to deal with social media.”

If a foreign judge can order worldwide deletion of posts that criticize him, and platforms comply to keep market access, every government with a large consumer base holds a veto over speech in the United States.

OkCupid Gave 3M Users’ Photos to AI Firm, FTC Says
reclaimthenet.org

Nearly three million people uploaded photos to OkCupid expecting those images would stay on a dating app. Instead, the photos ended up training facial recognition software, handed over by the company’s own founders to an AI firm they’d personally invested in.

Match Group settled a Federal Trade Commission lawsuit last week over the transfer, which the agency says violated OkCupid’s privacy policy and was actively covered up for years. The consent decree permanently bars Match Group and OkCupid from misrepresenting their data practices and puts them under compliance reporting for a decade. The settlement carries no financial penalty. Three million users’ photos, demographic profiles, and location data were funneled to a facial recognition company with zero restrictions on use, and the regulatory consequence is a promise not to lie about it again.

The data transfer happened in September 2014. Clarifai, an AI company building image recognition systems, asked OkCupid for a large dataset of user photos. The request wasn’t routed through a business development team or vetted by legal. OkCupid’s founders were financially invested in Clarifai, and the ask came on that basis, one investor helping out another. OkCupid’s president and chief technology officer were directly involved in the transfer, and one of the founders allegedly sent the photos from his personal email account, bypassing any corporate oversight or audit trail.

No contract governed the handoff. No restrictions were placed on what Clarifai could do with the data. Clarifai never provided any business services to OkCupid. OkCupid’s privacy policy at the time told users the company wouldn’t share personal information with third parties except as described in the policy, or when users were given a chance to opt out. Neither applied here. The photos, the location data, and the demographic details went to a facial recognition startup because insiders wanted them to, and nobody asked the people in those photos whether that was acceptable.

Clarifai’s CEO and founder later said his company used the OkCupid images to build a service that could identify the age, sex, and race of detected faces. Dating profile pictures, uploaded by people looking for romantic connections, became raw material for technology that could be sold to police departments, government agencies, and military operations.

When The New York Times reported on the arrangement in 2019, OkCupid’s response was carefully evasive. The company told the paper that Clarifai had contacted OkCupid about a possible collaboration and that no commercial agreement had been entered into. That framing was technically true and functionally misleading. There was no commercial agreement because the data was given away for free, a favor between a company and its founders’ investment. The FTC alleged that OkCupid did not address whether Clarifai had gained access to photos without consent, and described the response as part of a broader pattern of concealment. The agency said it ultimately had to enforce its Civil Investigative Demand in federal court after OkCupid obstructed the investigation.
Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said, “The FTC enforces the privacy promises that companies make.” He added, “We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through.”

“The alleged conduct at issue does not reflect how OkCupid operates today,” said OkCupid spokesperson Michael Kaye. “Over the years, we have further strengthened our privacy practices and data governance to ensure we meet the expectations of our users.” Match Group and Clarifai did not immediately respond to requests for comment.

The settlement, filed March 30, 2026, in the US District Court for the Northern District of Texas, permanently prohibits misrepresenting data collection, use, and disclosure practices. Match Group did not admit wrongdoing. The Commission vote was 2-0.

What the settlement doesn’t do is more revealing than what it does. There’s no fine. There’s no requirement to delete the data Clarifai received. There’s no penalty for the twelve years of alleged concealment. There’s no action against Clarifai itself. The settlement creates a compliance framework that only produces consequences if Match Group violates the order going forward, meaning the first violation is effectively free.