Reclaim The Net Feed
@reclaimthenetfeed

OkCupid Gave 3M Users’ Photos to AI Firm, FTC Says
reclaimthenet.org


Nearly three million people uploaded photos to OkCupid expecting those images would stay on a dating app. Instead, the photos ended up training facial recognition software, handed over by the company’s own founders to an AI firm they had personally invested in. Match Group settled a Federal Trade Commission lawsuit last week over the transfer, which the agency says violated OkCupid’s privacy policy and was actively covered up for years. The consent decree permanently bars Match Group and OkCupid from misrepresenting their data practices and puts them under compliance reporting for a decade. The settlement carries no financial penalty. Three million users’ photos, demographic profiles, and location data were funneled to a facial recognition company with zero restrictions on use, and the regulatory consequence is a promise not to lie about it again.

The data transfer happened in September 2014. Clarifai, an AI company building image recognition systems, asked OkCupid for a large dataset of user photos. The request wasn’t routed through a business development team or vetted by legal. OkCupid’s founders were financially invested in Clarifai, and the ask came on that basis, one investor helping out another. OkCupid’s president and chief technology officer were directly involved in the data transfer, and one of the founders allegedly sent the photos from his personal email account, bypassing any corporate oversight or audit trail. No contract governed the handoff. No restrictions were placed on what Clarifai could do with the data. Clarifai never provided any business services to OkCupid.

OkCupid’s privacy policy at the time told users the company wouldn’t share personal information with third parties except as described in the policy, or when users were given a chance to opt out. Neither applied here.
The photos, the location data, and the demographic details went to a facial recognition startup because insiders wanted them to, and nobody asked the people in those photos whether that was acceptable. Clarifai’s CEO and founder later said his company used the OkCupid images to build a service that could identify the age, sex, and race of detected faces. Dating profile pictures, uploaded by people looking for romantic connections, became raw material for technology that could be sold to police departments, government agencies, and military operations.

When The New York Times reported on the arrangement in 2019, OkCupid’s response was carefully evasive. The company told the paper that Clarifai had contacted OkCupid about a possible collaboration and that no commercial agreement had been entered into. That framing was technically true and functionally misleading. There was no commercial agreement because the data was given away for free, a favor between a company and its founders’ investment. The FTC alleged that OkCupid did not address whether Clarifai had gained access to photos without consent, and described the response as part of a broader pattern of concealment. The agency said it ultimately had to enforce its Civil Investigative Demand in federal court after OkCupid obstructed the investigation.

Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said, “The FTC enforces the privacy promises that companies make.” He added, “We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through.”

“The alleged conduct at issue does not reflect how OkCupid operates today,” said OkCupid spokesperson Michael Kaye. “Over the years, we have further strengthened our privacy practices and data governance to ensure we meet the expectations of our users.” Match and Clarifai did not immediately respond to requests for comment.
The settlement, filed March 30, 2026, in the US District Court for the Northern District of Texas, permanently prohibits misrepresenting data collection, use, and disclosure practices. Match Group did not admit wrongdoing. The Commission vote was 2-0.

What the settlement doesn’t do is more revealing than what it does. There’s no fine. There’s no requirement to delete the data Clarifai received. There’s no penalty for the twelve years of alleged concealment. There’s no action against Clarifai itself. The settlement creates a compliance framework that only produces consequences if Match Group violates the order going forward, meaning the first violation is effectively free.

How Your Weather App Became a Surveillance Machine — and How to Escape It
reclaimthenet.org


This post is for paid supporters of Reclaim The Net.

US & EU Negotiate Biometric Data-Sharing Deal
reclaimthenet.org


Washington wants to run European fingerprints through American databases, and the EU is considering it. The Department of Homeland Security and the European Union are in formal negotiations over an arrangement that would give DHS direct query access to biometric records held by EU member states, a level of access that Brussels has never granted to a non-EU country for border security purposes.

The deal sits inside DHS’s Enhanced Border Security Partnership program, which effectively tells Visa Waiver Program countries to open their biometric databases or risk losing visa-free travel privileges. Washington has set a December 31, 2026, deadline for EBSP agreements to be operational. After that, DHS reviews each country’s compliance. Countries that fail to meet expectations risk suspension from the VWP, which would reimpose visa requirements on their citizens.

When DHS encounters a traveler, asylum seeker, visa applicant, or anyone flagged during immigration processing, it would query a participating country’s database using that person’s biometrics. A match returns fingerprints and related identity data to DHS. If there’s no match, there’s no data transfer. That sounds targeted until you consider the volume. Twenty-four of the EU’s twenty-seven member states participate in the VWP. DHS wants query access to all of them.

What makes this negotiation unusual is its layered structure. The EU Council authorized negotiation of an EU-level framework agreement in December 2025, setting the legal conditions for data transfers. Individual member states would then sign their own implementing arrangements with DHS, identifying which databases are involved and setting operational terms. The framework creates the legal permission. The bilateral deals create the pipeline. The scope of data under discussion extends well beyond fingerprints and passport photos.
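The hit/no-hit query model described above can be sketched in a few lines. This is a purely illustrative sketch: the database, record shape, and `query` function are all hypothetical stand-ins, not any real DHS or EU system or API.

```python
# Illustrative sketch of the hit/no-hit biometric query model described above.
# Everything here (Record, MEMBER_STATE_DB, query) is hypothetical; no real
# DHS or EU system, schema, or API is being modeled.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    template: str   # stand-in for a fingerprint template
    identity: dict  # related identity data held by the member state

# Hypothetical member-state database, keyed by fingerprint template.
MEMBER_STATE_DB = {
    "template-abc": Record("template-abc", {"name": "Example Person"}),
}

def query(template: str) -> Optional[dict]:
    """Return fingerprints and identity data only on a match.

    On no match, nothing leaves the database: the "no match, no data
    transfer" rule in the proposed arrangement.
    """
    record = MEMBER_STATE_DB.get(template)
    if record is None:
        return None  # no match: no data is transferred
    return {"fingerprints": record.template, "identity": record.identity}

print(query("template-abc"))  # match: fingerprints and identity returned
print(query("template-xyz"))  # no match: None
```

The point the sketch makes concrete is that the privacy property lives entirely in the `query` function's no-match branch, and the volume concern lives in who gets to call it, and how often.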
Draft documents show that European countries may be compelled to transfer “special categories” of personal data to US border authorities, including political opinions, trade union memberships, and information on sex life. The draft allows this if transfers are “strictly necessary and proportionate,” and the DHS gets to argue what qualifies. The European Data Protection Supervisor described it as the first EU agreement involving large-scale sharing of personal data, including biometrics, with a third country.

European negotiators are pushing for limits on bulk data harvesting, meaningful human oversight when automated decisions produce adverse outcomes, tight controls on what happens to data after DHS receives it (particularly transfers onward to third countries), and some form of legal remedy for Europeans whose data gets misused. The EU also wants reciprocity, meaning European authorities would get to query American databases rather than just supplying data in one direction. That last demand tells you something about the power dynamics here. The current proposal asks Europe to feed information into a system it can’t search itself.

Reconciling those European demands with what DHS actually wants may prove functionally impossible. The tensions run through every major detail. How long can DHS retain transferred biometric data? Does the agreement cover targeted border checks, or does it enable something closer to systematic screening of entire populations? And what legal redress can a German or French citizen realistically obtain under US law when their data is mishandled? According to draft documents, EU states would have to resolve disputes through national and international courts, rather than through a joint oversight committee. For a European whose fingerprints end up in a US enforcement database by mistake, the path to correction runs through American courts, a system with no equivalent of data subject rights.
The DHS is simultaneously expanding its domestic biometric surveillance capabilities. CBP signed a $225,000 contract with Clearview AI for facial recognition technology and access to a database of over 60 billion publicly available images scraped from the internet. And biometrics aren’t the only data DHS is reaching for. CBP proposed in December 2025 to require travelers from the 42 Visa Waiver Program countries to submit DNA and provide five years of social media history as a mandatory condition of entry, along with all email addresses from the past decade and phone numbers used over the past five years.

Apple Removes Bitchat from China App Store at Cyberspace Administration Order
reclaimthenet.org


Apple deleted Bitchat from the China App Store, acting on a direct order from the Cyberspace Administration of China. Jack Dorsey, who created the app, posted a screenshot of Apple’s removal notice to X with a short caption: “bitchat pulled from the china app store.”

The notice Apple sent to Dorsey is almost a copy-paste of the one it sent to Damus three years earlier. The language is identical. The accusation is identical. The CAC determined that Bitchat violates Article 3 of the Provisions on the Security Assessment of Internet-based Information Services with Attribute of Public Opinions or Capable of Social Mobilization. That regulation, enacted in 2018, requires any online service capable of influencing public opinion or organizing collective action to undergo a government security assessment before going live. If a service hasn’t submitted to that assessment, the CAC can order it pulled. It targets the capacity for “public opinions” and “social mobilization.” The Chinese government has decided that the ability to communicate outside state-approved channels is itself a security threat, and Apple consistently treats that determination as sufficient grounds for deletion.

Bitchat is a peer-to-peer messaging app that operates over Bluetooth mesh networks. It requires no internet connection, no phone number, no email address, and no user account. Messages are end-to-end encrypted and stored only on the devices involved. There are no central servers to subpoena, no user databases to hand over, and no content moderation pipeline for the CAC to plug into. Dorsey built the initial version over a single weekend in July 2025, coding it with Goose, Block’s open-source AI assistant. He published a white paper on GitHub and opened a TestFlight beta that hit its 10,000-user cap within hours. That design is precisely the problem from Beijing’s perspective.
China’s internet censorship apparatus depends on having a chokepoint. WeChat, the country’s dominant messaging platform, has censorship tools baked into its architecture. The government can monitor conversations, flag keywords, and delete content before users even see it. Bitchat offers none of those control surfaces. The app makes censorship structurally impossible on a technical level because there is nothing between sender and receiver for the state to intercept, filter, or read.

The app had already proved its usefulness in exactly the scenarios that make governments nervous. Protesters in Madagascar downloaded it 70,000 times in a single week during September 2025. Nepalese users pulled down nearly 50,000 copies on September 8 alone, after their government shut down social networks during anti-corruption demonstrations. Downloads spiked in Uganda and Iran during internet blackouts in January 2026. Dorsey noted publicly that Russia was at one point the app’s largest user base by country, as citizens looked for alternatives to state-surveilled messaging platforms.

Apple’s compliance with the removal order was, as usual, immediate and without public comment beyond the boilerplate notification to the developer. The notice told Dorsey that his app “includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines.” Apple cited Section 5 (Legal) of those guidelines, which requires apps to comply with local laws. Apple has refined this process into a routine. When the CAC ordered WhatsApp, Threads, Signal, and Telegram removed from the China App Store in April 2024, Apple complied and issued a statement saying, “We are obligated to follow the laws in the countries where we operate, even when we disagree.” The company has removed VPN apps, news apps, a Quran app, and tens of thousands of games from its China storefront over the past several years, all at government request.
Apple positions itself publicly as a defender of privacy and an advocate for human rights. The company’s own human rights commitment states that it believes in “the critical importance of an open society in which information flows freely.” And yet Apple functions as the enforcement arm of the CAC’s censorship decisions whenever those decisions concern the Chinese market. The company doesn’t challenge the orders. It doesn’t delay.

Apple controls the only door into every iPhone on the planet. Unlike Android, which allows users to install software from the web or third-party stores, iOS locks app distribution to the App Store. Apple decides what gets listed, what stays, and what disappears. That makes every removal order from a government functionally absolute for anyone using an iPhone in that country. When the CAC told Apple to pull Bitchat, it wasn’t asking Apple to remove one option among many. It was asking Apple to cut off access entirely, and Apple had the architectural power to do exactly that. No sideloading, no alternative storefront, no workaround short of jailbreaking the device or having already installed the app before the order came through.

Kiwi Farms Challenges DMCA Subpoenas as Tools to Unmask Anonymous Speech
reclaimthenet.org


A new lawsuit filed in the Southern District of New York offers a clean example of something that keeps happening and keeps getting ignored: the Digital Millennium Copyright Act being used to censor speech and unmask anonymous speakers. The case is Lolcow LLC v. Fong-Jones, filed on March 12, 2026, and it pits the operator of the web forum Kiwi Farms against Liz Fong-Jones, an activist and field Chief Technology Officer at SaaS observability platform Honeycomb, who has been filing DMCA subpoenas in an attempt to identify anonymous forum users.

The content Fong-Jones wants removed is a screenshot of one of her Bluesky posts and an edited version of her headshot, both related to what she has previously described publicly as a “consent accident.” Forum users posted and discussed those images. Fong-Jones responded by claiming copyright ownership and filing DMCA subpoenas to force the site to hand over the identities of the people who posted them.

The copyright claims seem thin. Kiwi Farms operator Joshua Moon argues that the screenshot is a derivative work over which Fong-Jones holds no copyright, and that the edited headshot represents a textbook case of fair use, given that the image has no commercial value and was modified specifically for purposes of criticism and commentary. That argument carries weight. Courts have long recognized that transformative use of images for commentary or ridicule sits comfortably within fair use protections.

What makes this case useful as a case study is less the copyright question itself and more the mechanism being exploited. The DMCA subpoena process, codified in Section 512(h), allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users just by asking a court clerk to issue one and attaching a copy of the infringement notice.
No judge needs to review the merits. No adversarial hearing takes place first. The anonymous speaker can be exposed before they even know a claim has been filed. This is a feature that has been routinely abused to strip anonymity from people whose real offense is saying something someone powerful doesn’t like. The EFF has documented this pattern extensively, describing copyright law as “an all-too common method used to silence lawful speech,” and the DMCA subpoena is the sharpest version of that weapon because it sidesteps the normal protections a lawsuit would require.

The standard DMCA process works like this, and it tells you everything you need to know about why Fong-Jones chose it: posts must come down first, and then the user who posted them must reveal their real identity and accept the risk of being sued and threatened, or else the posts stay down permanently. For anonymous forum users, the options are brutal. You either give up your identity to someone with a demonstrated history of legal aggression, or you accept the permanent censorship of your speech. The chilling effect is the point.

Fong-Jones has campaigned against Kiwi Farms for years. Moon characterizes the DMCA approach as the latest tactic after technological and financial efforts to force the site offline have failed, describing it as probing “legalistic avenues” to achieve what deplatforming campaigns could not. The copyright route is more dangerous, and it’s designed to be. Instead of pressuring companies, it goes after individual users. If Fong-Jones’s DMCA subpoenas succeed, anonymous forum participants could be exposed by name. Lolcow LLC’s suit seeks a declaratory judgment that the copyright complaints are false, along with attorney’s fees. There’s an important precedent worth understanding here.
A 2022 ruling in the Northern District of California confirmed that the First Amendment’s protections for anonymous speech apply even in DMCA cases, and that copyright holders issuing subpoenas must meet the Constitution’s test before identifying anonymous speakers. In that case, involving an anonymous Twitter user who posted photos while criticizing a private equity billionaire, the court held that a two-step test must be applied before unmasking anyone. The court found that even if the copyright holder had established a valid infringement claim, the subpoena should still be quashed because the user’s interest in remaining anonymous outweighed the copyright interest.

The parallel to the Kiwi Farms situation is obvious. Fong-Jones claims copyright over a Bluesky screenshot and an edited headshot. The forum users posted those images as part of discussion and commentary. The question is whether copyright law should be available as a shortcut to unmask and silence anonymous speakers engaged in commentary about a public figure. Courts have thankfully been skeptical of exactly this kind of move. The EFF has argued that courts “must apply robust First Amendment safeguards to prevent DMCA abuse that either seeks to suppress speech or to identify anonymous speakers.” The Northern District of California agreed, ruling that “it is possible for a speaker’s interest in anonymity to extend beyond the alleged infringement.”

Moon has been open about the financial stakes, noting that approximately $125,000 was crowdfunded three years ago for litigation costs, that most of it has been spent, and that the site’s annual legal expenses exceed $25,000. He’s asking users to contribute to a legal fund, and the outcome could determine whether the site survives. If Fong-Jones wins the copyright dispute, Moon says, “the site is gone.” The broader pattern here matters more than any individual piece of forum content.
Copyright claims are becoming a preferred weapon for people who want to censor speech they find offensive or embarrassing, and the DMCA gives them a loaded gun. The subpoena process was designed to help copyright holders protect their creative works. Here, Fong-Jones is accused of using it to try to identify anonymous people who posted a screenshot of a public social media post and a mocked-up photo.

Lolcow LLC has already shown a willingness to fight these free speech battles. The company, alongside 4chan, filed a separate lawsuit in August 2025 challenging the UK’s Online Safety Act, arguing that enforcement of the Act against American companies is “intended to deliberately undermine the First Amendment and American competitiveness.”