DeepLinks from the EFF

@deeplinks

Victory! Pen-Link's Police Tools Are Not Secret

www.eff.org

In a victory for transparency, the government contractor Pen-Link agreed to disclose the prices and descriptions of surveillance products that it sold to a local California sheriff's office. The settlement ends a months-long California public records lawsuit involving the Electronic Frontier Foundation and the San Joaquin County Sheriff's Office, and it provides further proof that the surveillance tools used by governments are not secret and shouldn't be treated that way under the law.

Last year, EFF submitted a California public records request to the San Joaquin County Sheriff's Office for information about its work with Pen-Link and its subsidiary Cobwebs Technologies. Pen-Link went to court to try to block the disclosure, claiming that the names of its products and their prices were trade secrets. EFF later entered the case to obtain the records it requested.

The Records Show the Sheriff Bought Online Monitoring Tools

The records disclosed in the settlement show that in late 2023, the Sheriff's Office paid $180,000 for a two-year subscription to the Tangles "Web Intelligence Platform," a Cobwebs Technologies product that allows the Sheriff to monitor online activity. The subscription allows the Sheriff to perform hundreds of searches and requests per month. The sources of information include the "Dark Web" and "Webloc," according to the price quotation. According to the settlement, the Sheriff's Office was offered but did not purchase a series of other add-ons, including "AI Image processing" and "Webloc Geo source data per user/Seat."

Have you been blocked from receiving similar information? We'd like to hear from you.

The intelligence platform overall has been described in other documents as analyzing data from the "open, deep, and dark web, to mobile and social." And Webloc has been described as a platform that "provides access to vast amounts of location-based data in any specified geographic location." Journalists at multiple news outlets have chronicled Pen-Link's technology and have published Cobwebs training manuals demonstrating that its products can be used to target activists and independent journalists. Major local, state, and federal agencies use Pen-Link's technology.

The records also show that in late 2022 the Sheriff's Office purchased some of Pen-Link's more traditional products, which help law enforcement execute and analyze data from wiretaps and pen registers after a court grants approval.

Government Surveillance Tools Are Not Trade Secrets

The public has a right to know what surveillance tools the government is using, whether the government develops its own products or purchases them from private contractors. There are a host of policy, legal, and factual reasons why the surveillance tools sold by contractors like Pen-Link are not trade secrets. Public information about these products and their prices helps communities have informed conversations and make decisions about how their government should operate.

In this case, Pen-Link argued that its products and prices are trade secrets partially because governments rely on the company to "keep their data analysis capabilities private." The company argued that clients would "lose trust" and that governments might avoid "purchasing certain services" if the purchases were made public. This troubling claim highlights the importance of transparency. The public should be skeptical of any government tool that relies on secrecy to operate.
Information about these tools is also essential for defendants and criminal defense attorneys, who have the right to discover when these tools are used during an investigation. In support of its trade secret claim, Pen-Link cited terms of service that purported to restrict the government from disclosing its use of this technology without the company's consent. Terms like these cannot be used to circumvent the public's right to know, and governments should not agree to them.

Finally, for surveillance tools and their prices to be protected as trade secrets under the law, they have to actually be secret. Pen-Link's tools and their prices, however, are already public across the internet—in previous public records disclosures, product descriptions, trademark applications, and government websites.

Lessons Learned

Government surveillance contractors should consider the policy implications, reputational risks, and waste of time and resources involved in attempting to hide from the public the full terms of their sales to law enforcement. Cases like these, known as reverse public records act lawsuits, are troubling because a well-resourced company can frustrate public access merely by filing the case. Not every member of the public, researcher, or journalist can afford to litigate their public records request. Without a team of internal staff attorneys, it would have cost EFF tens of thousands of dollars to fight this lawsuit.

Luckily, in this case EFF had the ability to fight back. And we will continue our surveillance transparency work. That is why EFF required some attorneys' fees to be part of the final settlement.

Related Cases: Pen-Link v. County of San Joaquin Sheriff's Office

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

www.eff.org

The Ninth Circuit upheld an important limitation on Digital Millennium Copyright Act (DMCA) subpoenas that other federal courts have recognized for more than two decades. The DMCA, a misguided anti-piracy law passed in the late nineties, created a bevy of powerful tools, ostensibly to help copyright holders fight online infringement. Unfortunately, those powerful tools are ripe for abuse by "copyright trolls," unscrupulous litigants who game the system at everyone else's expense.

The DMCA's "notice and takedown" regime is one of these tools. Section 512 of the DMCA creates "safe harbors" that protect service providers from liability, so long as they disable access to content when a copyright holder notifies them that the content is infringing, and fulfill some other requirements. This gives copyright holders a quick and easy way to censor allegedly infringing content without going to court.

Section 512(h) is ostensibly designed to facilitate this system by giving rightsholders a fast and easy way of identifying anonymous infringers. It allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users, just by asking a court clerk to issue one and attaching a copy of the infringement notice. In other words, they can wield the court's power to override an internet user's right to anonymous speech without permission from a judge.

It's easy to see why these subpoenas are prone to misuse. Internet service providers (ISPs)—the companies that provide an internet connection (e.g., broadband or fiber) to customers—are obvious targets. Often, copyright holders know the Internet Protocol (IP) address of an alleged infringer, but not their name or contact information. Since ISPs assign IP addresses to customers, they can often identify the customer associated with one.

Fortunately, Section 512(h) has an important limitation that protects users. Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief. As the Ninth Circuit held:

"Because a § 512(a) service provider cannot remove or disable access to infringing content, it cannot receive a valid (c)(3)(A) notification, which is a prerequisite for a § 512(h) subpoena. We therefore conclude from the text of the DMCA that a § 512(h) subpoena cannot issue to a § 512(a) service provider as a matter of law."

This decision preserves the understanding of Section 512(h) that internet users, websites, and copyright holders have shared for decades. As EFF explained to the court in its amicus brief:

"[This] ensures important procedural safeguards for internet users against a group of copyright holders who seek to monetize frequent litigation (or threats of litigation) by coercing settlements—copyright trolls. Affirming the district court and upholding the interpretation of the D.C. and Eighth Circuits will preserve this protection, while still allowing rightsholders the ability to find and sue infringers."

EFF applauds this decision. And because three federal appeals courts have all ruled the same way on this question—and none has disagreed—ISPs all over the country can feel confident about protecting their customers' privacy by simply throwing improper DMCA 512(h) subpoenas in the trash.

From Book Bans to Internet Bans: Wyoming Lets Parents Control the Whole State's Access to The Internet

www.eff.org

If you've read about the sudden appearance of age verification across the internet in the UK and thought it could never happen in the U.S., take note: many politicians want the same or even stricter laws here. On July 1, laws took effect in South Dakota and Wyoming requiring any website that hosts any sexual content to implement age verification measures. These laws could capture a broad range of non-pornographic content, including classic literature and art, and expose a wide range of platforms, of all sizes, to civil or criminal liability for not age-verifying every user. That includes social media networks like X, Reddit, and Discord; online retailers like Amazon and Barnes & Noble; and streaming platforms like Netflix and Rumble—essentially, any site that allows user-generated or published content without gatekeeping access based on age.

These laws expand on the flawed logic of last month's troubling Supreme Court decision, Free Speech Coalition v. Paxton, which gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed "harmful to minors." Wyoming and South Dakota seem to interpret that decision as license to require age verification—and impose potential legal liability—on any website that contains ANY image, video, or post with sexual content that could be interpreted as harmful to minors. Platforms or websites may be able to comply by implementing an "age gate" within certain sections of their sites where, for example, user-generated content is allowed, or at the point of entry to the entire site.

Although these laws are in effect, we do not believe the Supreme Court's decision in FSC v. Paxton gives them any constitutional legitimacy. You do not need a law degree to see the difference between the Texas law—which targets sites where a substantial portion (one-third) of content is "sexual material harmful to minors"—and these laws, which apply to any site that contains even a single instance of such material. In practice, it is the difference between burdening adults with age gates for websites that host "adult" content, and burdening the entire internet, including sites that allow user-generated or published content.

Lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of "harmful to minors" and use other methods to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature. Books like The Bluest Eye by Toni Morrison, The Handmaid's Tale by Margaret Atwood, and And Tango Makes Three have all been swept up in these crusades—not because of their overall content, but because of isolated scenes or references.

Wyoming's law is particularly extreme: rather than providing for enforcement by the Attorney General, HB0043 is a "bounty" law that deputizes any resident with a child to file civil lawsuits against websites they believe are in violation, effectively turning anyone into a potential content cop. There is no central agency, no regulatory oversight, and no clear standard.
Instead, the law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands by suing websites that contain a single example of objectionable content. Most other state age verification laws allow individuals to make reports to state Attorneys General, who are responsible for enforcement, and some include a private right of action allowing parents or guardians to file civil claims for damages. The Wyoming law, by contrast, is similar to laws in Louisiana and Utah that rely entirely on civil enforcement.

This is a textbook example of a "heckler's veto," where a single person can unilaterally decide what content the public is allowed to access. It is clear that the Wyoming legislature designed the law this way in a deliberate effort to sidestep state enforcement and avoid an early constitutional court challenge, as many other bounty laws—targeting people who assist in abortions, drag performers, and trans people—have done. The result? An open invitation from the Wyoming legislature to weaponize its citizens, and the courts, against platforms big or small. Because when nearly anyone can sue any website over any content they deem unsafe for minors, the result isn't safety. It's censorship.

Imagine a Wyomingite stumbling across an NSFW subreddit or a Tumblr fanfic blog and deciding that it violates the law. If that resident were the parent of a minor, they could sue the platform, potentially forcing those websites to restrict or geo-block access for the entire state in order to avoid the cost and risk of litigation. And because there's no threshold for how much "harmful" content a site must host, a single image or passage could be enough. That also means your personal website or blog—if it includes any "sexual content harmful to minors"—is at risk.

This law will likely be challenged, and eventually halted, by the courts. But because the state cannot enforce it, those challenges will not come until a parent sues a website. Until then, its mere existence poses a serious threat to free speech online. Risk-averse platforms may over-correct, over-censor, or even restrict access to the state entirely just to avoid the possibility of a lawsuit, as Pornhub has already done. And should sites impose age-verification schemes to comply, those schemes will be a speech and privacy disaster for all state residents.

And let's be clear: these state laws are not outliers. They are part of a growing political movement to redefine terms like "obscene," "pornographic," and "sexually explicit" as catchalls to restrict content for adults and young people alike. What starts in one state and one lawsuit can quickly become a national blueprint. Age verification laws like these have relied on vague language, intimidating enforcement mechanisms, and public complacency to take root. Courts may eventually strike them down, but in the meantime, users, platforms, creators, and digital rights advocacy groups need to stay alert, speak up against these laws, and push back while they can. When governments expand censorship and surveillance offline, it's our job at EFF to protect your access to a free and open internet.
Because if we don't push back now, the messy, diverse, and open internet we know could disappear behind a wall of fear and censorship. Ready to join us? Urge your state lawmakers to reject harmful age verification laws. Call or email your representatives to oppose KOSA and any other proposed federal age-checking mandates. Make your voice heard by talking to your friends and family about what we all stand to lose if the age-gated internet becomes a global reality. Because the fight for a free internet starts with us.

New Documents Show First Trump DOJ Worked With Congress to Amend Section 230

www.eff.org

In the wake of rolling out its own proposal to significantly limit a key law protecting internet users' speech in the summer of 2020, the Department of Justice under the first Trump administration actively worked with lawmakers to support further efforts to stifle online speech. The new documents, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were talking with Senate staffers working to pass speech- and privacy-chilling bills like the EARN IT Act and PACT Act (neither became law). DOJ officials also communicated with an organization that sought to condition Section 230's legal protections on websites using age-verification systems if they hosted sexual content.

Section 230 protects users' online speech by protecting the online intermediaries we all rely on to communicate, from blogs and social media platforms to educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.

DOJ's work to weaken Section 230 began before President Donald Trump issued an executive order targeting social media services in 2020, and officials in DOJ appeared to be blindsided by the order. EFF was counsel to plaintiffs who challenged the order, and President Joe Biden later rescinded it. EFF filed two FOIA suits seeking records about the executive order and the DOJ's work to weaken Section 230.

The DOJ's latest release provides more detail on a general theme that has been apparent for years: the DOJ in 2020 flexed its powers to try to undermine or rewrite Section 230. The documents show that, in addition to meeting with congressional staffers, DOJ was critical of a proposed amendment to the EARN IT Act, with one official stating that it "completely undermines" the sponsors' argument for rejecting DOJ's proposal to exempt so-called "Bad Samaritan" websites from Section 230. Further, DOJ reviewed and proposed edits to a rulemaking petition asking the Federal Communications Commission to reinterpret Section 230. That effort never moved forward, given that the FCC lacked any legal authority to reinterpret the law.

You can read the latest release of documents here, and all the documents released in this case are here.

Related Cases: EFF v. OMB (Trump 230 Executive Order FOIA)

President Trump's War on "Woke AI" Is a Civil Liberties Nightmare

www.eff.org

The White House's recently unveiled "AI Action Plan" wages war on so-called "woke AI"—including large language models (LLMs) that provide information inconsistent with the administration's views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called "Preventing Woke AI in the Federal Government," released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration's ideological agenda. The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported "ideological biases" like "diversity, equity, and inclusion." This heavy-handed censorship will not make models more accurate or "trustworthy," as the Trump Administration claims; it is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access.

While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those changes often roll down to users. That would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate and far more likely to cause harm, especially in the hands of the government.

Less Accuracy, More Bias and Discrimination

It's no secret that AI models—including generative AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in the data they are "trained" on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will "learn" to discriminate against those groups. In other words: garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them.

This is true across different types of AI. For example, "predictive policing" tools trained on arrest data that reflects overpolicing of Black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated: LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women.

These models aren't just biased—they're fundamentally incorrect. Race and gender aren't objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflect trends in the training data that could be caused by bias or chance—not some "objective" reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often.
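To make the "garbage in, garbage out" point concrete, here is a minimal sketch (using entirely hypothetical synthetic data, not any real system or dataset) of how a model trained on biased historical decisions reproduces that bias at prediction time:

```python
# A minimal sketch of "garbage in, garbage out." All data here is synthetic
# and hypothetical; it stands in for historical decisions that penalized
# one group relative to another.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)          # a legitimate feature (e.g., a test score)
group = rng.integers(0, 2, size=n)  # a protected attribute (0 or 1)

# "Garbage in": historical labels give group 1 worse outcomes at equal skill.
p_positive = 1.0 / (1.0 + np.exp(-(skill - 1.0 * group)))
label = rng.random(n) < p_positive

model = LogisticRegression().fit(np.column_stack([skill, group]), label)

# "Garbage out": identical skill, different predictions based on group alone.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
# roughly [0.62, 0.38] -- the model has "learned" the historical penalty
```

Nothing in this toy model "decided" to discriminate; it simply reproduced the pattern baked into its training labels, which is exactly how real-world models absorb historical bias.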
Efforts to reduce bias-induced errors will ultimately make models more accurate, not less.

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people's personal freedom and access to financial resources, healthcare, housing, and more. The White House's AI Action Plan calls for a massive increase in agencies' use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology. We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools.

In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone's rights at risk. And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm.

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.