DeepLinks from the EFF

@deeplinks

EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea
www.eff.org

Wisconsin’s S.B. 130 / A.B. 105 is a spectacularly bad idea. It’s an age-verification bill that effectively bans VPN access to certain websites for Wisconsinites and censors lawful speech. We wrote about it last November in our blog “Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing,” but since then, the bill has passed the State Assembly and is scheduled for a vote in the State Senate tomorrow. In light of this, EFF sent a letter to the entire Wisconsin Legislature urging lawmakers to reject this dangerous bill. You can read the full letter here.

The short version? This bill both requires invasive age verification for websites that host content lawmakers might deem “sexual” and requires that those sites block any user who connects via a Virtual Private Network (VPN). VPNs are a basic cybersecurity tool used by businesses, universities, journalists, veterans, abuse survivors, and ordinary people who simply don’t want to broadcast their location to every website they visit.

As we lay out in the letter, Wisconsin’s mandate is technically unworkable. Websites cannot reliably determine whether a VPN user is in Wisconsin, a different state, or a different country (see the sketch at the end of this post). So, to avoid liability, websites face an unfortunate choice: over-block IP addresses commonly associated with commercial VPNs, block all Wisconsin users’ access, or impose restrictions nationwide.

The bill also creates a privacy nightmare. It pushes websites to collect sensitive personal data (e.g., government IDs, financial information, biometric identifiers) just to access lawful speech. At the same time, it broadens the definition of material deemed “harmful to minors” far beyond the narrow categories courts have historically allowed states to regulate (namely, explicit adult sexual materials), sweeping in material that merely describes sex or depicts human anatomy. This approach invites over-censorship, chills lawful speech, and exposes websites to vague and unpredictable enforcement. That combination—mass data collection plus vague, expansive speech restrictions—is a recipe for data breaches and constitutional overreach.

If you live in Wisconsin, now is the time to contact your State Senator and urge them to vote NO on S.B. 130 / A.B. 105. Tell them protecting young people online should not mean undermining cybersecurity, chilling lawful speech, and forcing residents to hand over their IDs just to browse the internet. As we said last time: Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.
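To make the over-blocking problem concrete, here is a toy sketch of the IP-geolocation check a site might bolt on to comply. Everything in it (the addresses, the ranges, the lookup tables) is invented for illustration. Because a commercial VPN's exit address says nothing about where the person behind it actually is, the only way for a site to avoid liability is to block the exit address outright:

```python
import ipaddress

# Hypothetical geolocation table: maps a network to where it is registered.
GEO_DB = {
    ipaddress.ip_network("198.51.100.0/24"): "US-WI",  # a Wisconsin home ISP
    ipaddress.ip_network("203.0.113.0/24"): "US-OR",   # a VPN provider's exit
                                                       # servers in Oregon
}

# Hypothetical list of ranges known to belong to commercial VPNs.
KNOWN_VPN_RANGES = {ipaddress.ip_network("203.0.113.0/24")}


def lookup_region(addr: str) -> str | None:
    """Return the registered region for an address, if we have one."""
    ip = ipaddress.ip_address(addr)
    for net, region in GEO_DB.items():
        if ip in net:
            return region
    return None


def must_block(addr: str) -> bool:
    """Naive 'compliance' logic a site might adopt to avoid liability."""
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in KNOWN_VPN_RANGES):
        # The person behind this exit could be in Milwaukee, Portland, or
        # Paris -- the address cannot tell us, so the site blocks everyone.
        return True
    return lookup_region(addr) == "US-WI"


print(must_block("198.51.100.7"))  # True: a Wisconsin resident, as intended
print(must_block("203.0.113.50"))  # True: any VPN user anywhere, over-blocked
```

Real geolocation databases are far richer than this toy table, but they share the same blind spot: they locate networks, not people.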

San Jose Can Protect Immigrants by Ending Flock Surveillance System
www.eff.org

(This appeared as an op-ed published February 12, 2026 in the San Jose Spotlight, written by Huy Tran (SIREN), Jeffrey Wang (CAIR-SFBA), and Jennifer Pinsof.)

As ICE and other federal agencies continue their assault on civil liberties, local leaders are stepping up to protect their communities. This includes pushing back against automated license plate readers, or ALPRs, which are tools of mass surveillance that can be weaponized against immigrants, political dissidents and other targets. In recent weeks, Mountain View, Los Altos Hills, Santa Cruz, East Palo Alto and Santa Clara County have begun reconsidering their ALPR programs. San Jose should join them. This dangerous technology poses an unacceptable risk to the safety of immigrants and other vulnerable populations.

ALPRs are marketed as a way to promote public safety. But their utility is debatable, and they come with significant drawbacks. They don’t just track “criminals.” They track everyone, all the time. Your vehicle’s movements can reveal where you work, worship and obtain medical care. ALPR vendors like Flock Safety put the location information of millions of drivers into databases, allowing anyone with access to instantly reconstruct the public’s movements.

But “anyone with access” is far broader than just local police. Some California law enforcement agencies have used ALPR networks to run searches related to immigration enforcement. In other situations, purported issues with the system’s software have enabled federal agencies to directly access California ALPR data. This is despite the promises of ALPR vendors and clear legal prohibitions.

Communities are saying enough is enough. Just last week, police in Mountain View decided to turn off all of the city’s Flock cameras, following revelations that federal and other unauthorized agencies had accessed their network. The cameras will remain inactive until the City Council provides further direction. Other localities have shut off the cameras for good. In January, Los Altos Hills terminated its contract with Flock following concerns about ICE. Santa Cruz severed relations with Flock, citing rising tensions with ICE. Most recently, East Palo Alto and Santa Clara County are reconsidering whether to continue their relationships with Flock, given heightened concern for the safety of immigrant communities.

California law prohibits local police from disclosing ALPR data to out-of-state or federal agencies. But at least 75 California police agencies were sharing these records out of state as recently as 2023. Just last year, San Francisco police allowed out-of-state agencies access, and 19 of those searches were related to ICE. Even without direct access, ICE can exploit local ALPR systems. One investigation found more than 4,000 cases where police had run searches on behalf of federal law enforcement, including for immigration investigations.

Compounding the risk, law enforcement routinely searches these networks without first obtaining a warrant. In San Jose, police aren’t required to have any suspicion of wrongdoing before searching ALPR databases, which contain a year’s worth of data representing hundreds of millions of records. In a little over a year, San Jose police logged more than 261,000 ALPR searches, or nearly 700 searches a day, all without a warrant. Two nonprofit organizations, SIREN and CAIR California, represented by Electronic Frontier Foundation and the ACLU of Northern California, are currently suing to stop San Jose’s warrantless searches of ALPR data.

But this is only the first step. A better solution is to simply turn these cameras off. San Jose cannot afford delay. Each day these cameras remain active, they collect sensitive location data that can be misused to target immigrant families and violate fundamental freedoms. That risk is materializing across California. City leaders must act now to shut down ALPR systems and make clear that public safety will not come at the expense of privacy, human dignity or community trust.

Related Cases: SIREN and CAIR-CA v. San Jose

New Report Helps Journalists Dig Deeper Into Police Surveillance Technology
www.eff.org

Report from EFF, Center for Just Journalism, and IPVM Helps Cut Through Sales Hype

SAN FRANCISCO — A new report released today offers journalists tips on cutting through the sales hype about police surveillance technology and reporting accurately on costs, benefits, privacy, and accountability as these invasive and often ineffective tools come to communities across the nation.

The “Selling Safety” report is a joint project of the Electronic Frontier Foundation (EFF), the Center for Just Journalism (CJJ), and IPVM.

Police technology is often sold as a silver bullet: a way to modernize departments, make communities safer, and eliminate human bias from policing with algorithmic objectivity. Behind the slick marketing is a sprawling, under-scrutinized industry that relies on manufacturing the appearance of effectiveness, not measuring it. The cost of blindly deferring to advertising can be high in tax dollars, privacy, and civil liberties.

“Selling Safety” helps journalists see through the spin. It breaks down how policing technology companies market their tools, and how those sales claims — which are often misleading — get recycled into media coverage. It offers tools for asking better questions, understanding incentives, and finding local accountability stories.

“The industry that provides technology to law enforcement is one of the most unregulated, unexamined, and consequential in the United States,” said EFF Senior Policy Analyst Matthew Guariglia. “Most Americans would rightfully be horrified to know how many decisions about policing are made: not by public employees, but by multi-billion-dollar surveillance tech companies who have an insatiable profit motive to market their technology as the silver bullet that will stop crime. Lawmakers often are too eager to seem ‘tough on crime’ and journalists too often see an easy story in publishing law enforcement press releases about new technology. This report offers a glimpse into how the police-tech sausage gets made so reporters and lawmakers can recognize the tactics of glossy marketing pitches, manufactured effectiveness numbers, and chumminess between companies and police.”

“Surveillance and other police technologies are spreading faster than public understanding or oversight, leaving journalists to do critical accountability work in real time. We hope this report helps make that work easier,” said Hannah Riley Fernandez, CJJ’s Director of Programming.

“The surveillance technology industry has a documented pattern of making unsubstantiated claims about technology,” said Conor Healy, IPVM's Director of Government Research. “Marketing is not a substitute for evidence. Journalists who go beyond press releases to critically examine vendor claims will often find solutions are not as magical as they may seem. In doing so, they perform essential accountability work that protects both taxpayer dollars and civil liberties.”

EFF also maintains resources for understanding various police technologies and mapping those technologies in communities across the United States.

For the “Selling Safety” report: https://www.eff.org/document/selling-safety-journalists-guide-covering-police-technology
For EFF’s Street-Level Surveillance hub: https://sls.eff.org/
For EFF’s Atlas of Surveillance: https://www.atlasofsurveillance.org/

Contact: Beryl Lipton, Senior Investigative Researcher, beryl@eff.org

Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans
www.eff.org

The New York Times reported that Meta is considering adding face recognition technology to its smart glasses. According to an internal Meta document, the company may launch the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

This is a bad idea that Meta should abandon. If adopted and released to the public, it would violate the privacy rights of millions of people and cost the company billions of dollars in legal battles.

Your biometric data, such as your faceprint, is among the most sensitive data a company can collect. The risks include mass surveillance, data breach, and discrimination. Adding this technology to glasses on the street also raises safety concerns.

This kind of face recognition feature would require the company to collect a faceprint from every person who steps into view of the camera-equipped glasses in order to find a match (a toy sketch of that matching step appears at the end of this post). Meta cannot possibly obtain consent from everyone—especially bystanders who are not Meta users.

Dozens of state laws consider biometric information to be sensitive and require companies to implement strict protections to collect and process it, including affirmative consent.

Meta Should Know the Privacy and Legal Risks

Meta should already know the privacy risks of face recognition technology, having abandoned related technology and paid nearly $7 billion in settlements a few years ago.

In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates.

Two years before that, in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. This included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.

In March 2021, the company agreed to a $650 million settlement of a class action brought by Illinois consumers under the state's strong biometric privacy law. And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.

Privacy Advocates Will Continue to Focus Our Resources on Meta

Meta’s conclusion that it can avoid scrutiny by releasing a privacy-invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.

Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called Mobile Fortify. And Amazon Ring just experienced a huge backlash when people realized that a feature marketed for finding lost dogs could one day be repurposed for mass biometric surveillance.

The public will continue to resist these privacy-invasive features. And EFF, other civil liberties groups, and plaintiffs’ attorneys will be here to help. We urge privacy regulators and attorneys general to step up and investigate as well.
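For readers unfamiliar with how 1:N face matching works, here is a minimal sketch (not Meta's pipeline; the embedding function, gallery, and threshold are all stand-ins) showing why the biometric must be extracted from every face the camera sees before the system can decide whether anyone matches:

```python
import numpy as np

# Toy sketch of 1:N face matching -- not Meta's pipeline. embed_face stands
# in for the neural network that turns face pixels into a unit vector (a
# "faceprint"); the gallery and the match threshold are invented.

EMBED_DIM = 128

def embed_face(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in embedding: flatten, pad/truncate to EMBED_DIM, normalize."""
    v = face_pixels.astype(float).ravel()[:EMBED_DIM]
    v = np.pad(v, (0, EMBED_DIM - v.size))
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# Faceprints of enrolled users that the glasses would match against.
rng = np.random.default_rng(0)
gallery = {"enrolled_user": embed_face(rng.standard_normal((16, 8)))}

def identify(face_pixels: np.ndarray, threshold: float = 0.8) -> str | None:
    # The crucial step: a faceprint is computed for *every* face in view --
    # enrolled users and non-consenting bystanders alike -- because the
    # system cannot know whether a face matches until after it has already
    # extracted the biometric.
    probe = embed_face(face_pixels)
    scores = {name: float(probe @ emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# A stranger walks past: no match is returned, but their faceprint was
# computed all the same.
print(identify(rng.standard_normal((16, 8))))  # None (almost surely)
```

The bystander's faceprint is computed whether or not a match exists; consent never enters the loop.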

Discord Voluntarily Pushes Mandatory Age Verification Despite Recent Data Breach
www.eff.org

Discord has begun rolling out mandatory age verification, and the internet is, understandably, freaking out. At EFF, we’ve been raising the alarm about age verification mandates for years. In December, we launched our Age Verification Resource Hub to push back against laws and platform policies that require users to hand over sensitive personal information just to access basic online services. At the time, age gates were largely enforced only in jurisdictions where the law mandated them. Now they’re landing on platforms and in places where they’re not required.

Beginning in early March, users who are either (a) estimated by Discord to be under 18, or (b) unknown to Discord because it lacks enough information about them, may find themselves locked into a “teen-appropriate experience.” That means content filters, age gates, restrictions on direct messages and friend requests, and the inability to speak in “Stage channels,” the large-audience audio spaces that power many community events. Discord says most adults may be sorted automatically through a new “age inference” system that relies on account tenure, device and activity data, and broader platform patterns. Those whose age can’t be estimated for lack of information, or who are estimated not to be adults, will be asked to scan their face or upload a government ID through a third-party vendor if they want to avoid the default teen account restrictions.

We’ve written extensively about why age verification mandates are a censorship and surveillance nightmare. Discord’s shift only reinforces those concerns. Here’s why:

The 2025 Breach and What's Changed Since

Discord literally won our 2025 “We Still Told You So” Breachies Award. Last year, attackers accessed roughly 70,000 users’ government IDs, selfies, and other sensitive information after compromising Discord’s third-party customer support system. To be clear: Discord is no longer using that system, which routed ID uploads for age verification through its general ticketing system. It now uses dedicated age verification vendors (k-ID globally and Persona for some users in the United Kingdom). That’s an improvement. But it doesn’t eliminate the underlying potential for data breaches and other harms.

Discord says that it will delete records of any user-uploaded government IDs, and that any facial scans will never leave users’ devices. But platforms are closed-source, audits are limited, and history shows that data (especially this ultra-valuable identity data) will leak—whether through hacks, misconfigurations, or retention mistakes. Users are being asked to simply trust that this time will be different.

Age Verification and Anonymous Speech

For decades, we’ve taught young people a simple rule: don’t share personal information with strangers online. Age verification complicates that advice. Suddenly, some Discord users will be asked to submit a government ID or facial scan to access certain features if the age-inference technology fails. Discord has said on its blog that it will not associate a user’s ID with their account (using that information only to confirm their age) and that identifying documents won’t be retained. We take those commitments seriously. However, users have little independent visibility into how those safeguards operate in practice or whether they are sufficient to prevent identification. Even if Discord can technically separate IDs from accounts (a minimal sketch of that pattern follows below), many users are understandably skeptical, especially after the platform’s recent breach involving age-verification data.
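For what it's worth, the data-minimization pattern Discord describes (derive a single fact from the document, keep the fact, discard the document) can be sketched in a few lines. This is hypothetical code, not Discord's or its vendors' implementation:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch -- not Discord's or its vendors' code. The pattern:
# derive one fact (over 18 or not) from the document, keep the fact,
# never persist the document.


@dataclass(frozen=True)
class VerificationResult:
    over_18: bool      # the only fact the platform stores
    checked_on: date   # when the check happened


def verify_and_discard(id_document: bytes, birthdate: date) -> VerificationResult:
    # A real vendor would extract the birthdate from the document itself;
    # it is passed in here to keep the sketch self-contained.
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    result = VerificationResult(over_18=age >= 18, checked_on=today)
    # The document is never written anywhere; it simply goes out of scope.
    # The unavoidable risk lives above this line, while the raw ID existed
    # in memory and in transit.
    return result


print(verify_and_discard(b"<scanned id bytes>", date(2001, 6, 15)))
```

Even in this best case, the raw document exists in memory and in transit while the check runs, and users must take the deletion on faith; that faith is exactly what the 2025 breach eroded.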
For people who rely on pseudonymity, being required to upload a face scan or government ID at all can feel like crossing a line. Many people rely on anonymity to speak freely. LGBTQ+ youth, survivors of abuse, political dissidents, and countless others use aliases to explore identity, find support, and build community safely. When identity checks become a condition of participation, many users will simply opt out. The chilling effect isn’t only about whether an ID is permanently linked to an account; it’s about whether users trust the system enough to participate in the first place. When you’re worried that what you say can be traced back to your government ID, you speak differently—or not at all. No one should have to choose between accessing online communities and protecting their privacy.

Age Verification Systems Are Not Ready for Prime Time

Discord says it is trying to address privacy concerns by using device-based facial age estimation and separating government IDs from user accounts, retaining only a user’s age rather than their identity documents. This is meant to reduce the risks of collecting and retaining such sensitive data. But even when privacy safeguards are in place, another problem remains: no current technology is fully privacy-protective, universally accessible, and consistently accurate.

Facial age estimation tools are notoriously unreliable, particularly for people of color, trans and nonbinary people, and people with disabilities. Stories of people bypassing these tools have proliferated online. And when the systems get it wrong, users may be forced into appeals processes or required to submit more documentation, such as government-issued IDs, which would exclude those whose appearance doesn’t match their documents and the millions of people around the world who don’t have government-issued identity documents at all. Even newer approaches (age inference, behavior tracking, financial database checks, digital ID systems) expand the web of data collection and carry their own tradeoffs around access and error. That’s the challenge: the technology itself is not fit for the sweeping role platforms are asking it to play. The toy triage policy below makes the accuracy tradeoff concrete.
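To see why estimation error cascades into ID checks, consider a toy triage policy. The error margin and cutoffs here are invented, not any vendor's real numbers:

```python
# Toy triage policy -- the error margin and cutoffs are invented, not any
# vendor's real numbers. Estimators return an age with error bars, so
# platforms add a buffer, and everyone inside it gets escalated.

ESTIMATOR_ERROR_YEARS = 3          # assumed typical estimation error
ADULT_CUTOFF = 18

def triage(estimated_age: float) -> str:
    if estimated_age >= ADULT_CUTOFF + ESTIMATOR_ERROR_YEARS:
        return "pass"              # confidently adult, no further checks
    if estimated_age < ADULT_CUTOFF - ESTIMATOR_ERROR_YEARS:
        return "teen-experience"   # confidently a minor
    # Adults aged roughly 18-21 land here, along with anyone the model
    # misjudges -- which skews toward the groups it serves worst.
    return "escalate-to-id-check"

print(triage(25.0))  # 'pass'
print(triage(19.0))  # 'escalate-to-id-check': an adult pushed into ID upload
print(triage(13.0))  # 'teen-experience'
```

Everyone who lands in the ambiguous band gets escalated, and the escalation path is precisely the ID upload that excludes people who have no documents or whose appearance doesn't match them.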
The Aftermath

Discord reports over 200 million monthly active users and is one of the largest platforms gamers use to chat. The video game industry is larger than movies, TV, and music combined, and Discord is an almost-default option for gamers looking to host communities. Many communities, including open-source projects, sports teams, fandoms, friend groups, and families, use Discord to stay connected. If communities or individuals are wrongly flagged as minors, or asked to complete the age verification process, they face a difficult choice: submit to facial scans or ID checks, or accept a more restricted “teen” experience. For those who decline, the result can mean reduced functionality, limited communication tools, and the chilling effects that follow.

Most importantly, Discord did not have to “comply in advance” by requiring age verification for all users, whether or not they live in a jurisdiction that mandates it. Other social media platforms and their trade groups have fought back against more than a dozen age verification laws in the U.S., and Reddit has now taken the legal fight internationally. For a platform with as much market power as Discord, voluntarily imposing age verification is unacceptable.

So You’ve Hit an Age Gate. Now What?

Discord should reconsider whether expanding identity checks is worth the harm to its communities. But in the meantime, many users are facing age checks today. That’s why we created our guide, “So You’ve Hit an Age Gate. Now What?” It walks through practical steps to minimize risk, such as:

- Submit the least amount of sensitive data possible.
- Ask: What data is collected? Who can access it? How long is it retained?
- Look for evidence of independent, security-focused audits.
- Be cautious about background details in selfies or ID photos.

There is unfortunately no perfect option, only tradeoffs. And every user will have their own unique set of safety concerns to consider. Amidst this confusion, our goal is to help keep you informed so you can make the best choices for you and your community.

In light of the harms imposed by age-verification systems, EFF encourages all services to stop adopting them when they are not mandated by law. And lawmakers around the world who are considering bills that would make Discord’s approach the norm for every platform should watch this backlash and similarly move away from the idea. If you care about privacy, free expression, and the right to participate online without handing over your identity, now is the time to speak up. Join us in the fight.