DeepLinks from the EFF

@deeplinks

The UK Has It Wrong on Digital ID. Here’s Why.

In late September, the United Kingdom’s Prime Minister Keir Starmer announced his government’s plans to introduce a new digital ID scheme in the country, to take effect before the end of the Parliament (no later than August 2029). The scheme will, according to the Prime Minister, “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like people’s name, date of birth, nationality or residency status, and photo to verify their right to live and work in the country. This is the latest example of a government creating a new digital system that is fundamentally incompatible with a privacy-protecting and human rights-defending democracy. This past year alone, we’ve seen federal agencies across the United States explore digital IDs to prevent fraud, the Transportation Security Administration accept “digital passport IDs” on Android, and states contract with mobile driver’s license (mDL) providers. And as we’ve said many times, digital ID is not for everyone, and policymakers should ensure better access for people with or without a digital ID. But instead, the UK is pushing forward with its plans to roll out digital ID in the country. Here are three reasons why those policymakers have it wrong.

Mission Creep

In his initial announcement, Starmer stated: “You will not be able to work in the United Kingdom if you do not have digital ID. It's as simple as that.” Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme's proposed introduction in 2028, rather than retrospectively.
The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it's possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services. The government may also be able to request information from workplaces on who is registering for employment at that location, or collaborate with banks to aggregate different data points to determine who is self-employed or not registered to work. It potentially leads to situations where state authorities can treat the entire population with suspicion of not belonging, and would shift the power dynamics even further towards government control over our freedom of movement and association.

And this is not the first time that the UK has attempted to introduce digital ID: politicians previously proposed similar schemes intended to control the spread of COVID-19, limit immigration, and fight terrorism. In a country increasingly deploying other surveillance technologies like face recognition, this raises additional concerns about how digital ID could lead to new divisions and inequalities based on the data obtained by the system. These concerns compound the underlying narrative that digital ID is being introduced to curb illegal immigration to the UK: that digital ID would make it harder for people without residency status to work in the country because it would lower the possibility that anyone could borrow or steal the identity of another.
Not only is there little evidence to prove that digital ID will limit illegal immigration, but checks on the right to work in the UK already exist. This narrative is nothing more than inflammatory and misleading; Liberal Democrat leader Ed Davey noted this would do “next to nothing to tackle channel crossings.”

Inclusivity is Not Inevitable, But Exclusion Is

While the government announced that their digital ID scheme will be inclusive enough to work for those without access to a passport, reliable internet, or a personal smartphone, as we’ve been saying for years, digital ID leaves vulnerable and marginalized people not only out of the debate but ultimately out of the society that these governments want to build. We remain concerned about the potential for digital identification to exacerbate existing social inequalities, particularly for those with reduced access to digital services or people seeking asylum. The UK government has said a public consultation will be launched later this year to explore alternatives, such as physical documentation or in-person support for the homeless and older people; but it’s short-sighted to think that these alternatives are viable or functional in the long term. For example, UK organization Big Brother Watch reported that only about 20% of Universal Credit applicants can use online ID verification methods. These individuals should not be an afterthought attached to the end of an announcement for further review. If a tool does not work for those without access to essentials such as the internet or a physical ID, then it should not exist.

Digital ID schemes also exacerbate other inequalities in society; abusers, for example, could prevent others from getting jobs or proving other statuses by denying them access to their ID.
In the same way, the scope of digital ID may be expanded and people could be forced to prove their identities to different government agencies and officials, which may raise issues of institutional discrimination when phones fail to load, or when the Home Office has incorrect information on an individual. This is not an unrealistic scenario considering the frequency of internet connectivity issues, or circumstances like passports and other documentation expiring.

Attacks on Privacy and Surveillance

Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would “absolutely have very strong encryption” and wouldn't be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that “data associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdom” and that “the government will work closely with expert stakeholders to make the programme effective, secure and inclusive.” But if digital ID is needed to verify people’s identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will show an array of personal information that would otherwise not be available or exchanged. This would create a rich environment for hackers or hostile agencies to obtain swathes of personal information on those based in the UK. And if previous schemes in the country are anything to go by, the government’s ability to handle giant databases is questionable.
Notably, the eVisa’s multitude of failures last year illustrated the harms that digital IDs can bring, with issues like government system failures and internet outages leading to people being detained, losing their jobs, or being made homeless. Checking someone’s identity against a database in real time requires a host of online and offline factors to work, and the UK is yet to take the structural steps required to remedy this.

Moreover, we know that the Cabinet Office and the Department for Science, Innovation and Technology will be involved in the delivery of digital ID and are clients of U.S.-based tech vendors, specifically Amazon Web Services (AWS). The UK government has spent millions on AWS (and Microsoft) cloud services in recent years, and the One Government Value Agreement (OGVA)—first introduced in 2020, which provides discounts on cloud services by contracting with the UK government and public sector organizations as a single client—is still active. It is essential that any data collected is not stored or shared with third parties, including through cloud agreements with companies outside the UK.

And even if the UK government published comprehensive plans to ensure data minimization in its digital ID, we would still strongly oppose any national ID scheme. Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID, and both the public and civil society organizations in the country are against this.

Ways Forward

Digital ID regimes strip privacy from everyone and further marginalize those seeking asylum or undocumented people. They are pursued as a technological solution to offline problems but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. We cannot base our human rights on the government’s mere promise to uphold them.
On December 8th, politicians in the country will debate a petition that gathered almost 3 million signatures rejecting mandatory digital ID. If you’re based in the UK, you can contact your MP to oppose the plans for a digital ID system. The case for digital identification has not been made. The UK government must listen to people in the country and say no to digital ID.

EFF’s Holiday Gift Guide

Technology is supercharging the attack on democracy and EFF is fighting back. We’re suing to stop government surveillance. We're fighting to protect free expression online. And we're building tools to protect your data privacy. Help support our mission with new gear from EFF's online store, perfect gifts for the digital rights defender in your life. Take 20% off your order today with code BLACKFRI. Thanks for being an EFF supporter!

Liquid Core Dice are perfect for tabletop games. The metal clear-view EFF display tin contains a seven-piece set of sharp-edge dice. These glittery dice will show that you roll with the crew protecting our civil liberties online.

Celebrate equity and accessibility with this tactile braille sticker that depicts the fiery figure of Lady Justice with braille characters reading "justice" and "EFF." With this embossed sticker, you won't just be showing off your support for justice, you'll actually be able to feel it.

Applaud reproductive rights with this gift bundle hailing your data privacy and personal freedom. The bundle includes all items featuring our mascot for choice and privacy, Lady Lock: the "My Body, My Data, My Choice" tote bag, a "Honey, I Encrypt Everything" sticker, and a heat-changing mug that reveals its secret slogan when hot.

Explore the mysteries of the web with an iconic Bigfoot de la Sasquatch lapel pin—privacy is a "human" right! Continue the journey with campfire tales from The Encryptids, the rarely-seen creatures who’ve become digital rights legends. This sparkling cloisonné pin measures 1.5 inches tall and features a high-quality spring backing.

Find all these items, plus t-shirts, hoodies, beanies, and more at the EFF Online Shop. And as always, you can donate to EFF and give the gift of membership to the digital rights defender or newbie in your life.

Shop Now: Support Digital Rights with Every Purchase

Are you hoping for delivery by December 25 in the continental U.S.?
Please place your order by Thursday, December 10. Email us with any questions.

Privacy is For the Children (Too)

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series that explains digital ID and the pending use case of age verification. Here, we cover alternative frameworks on age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile political climate. You can read the first two posts here and here.

Observable harms of age verification legislation in the UK, US, and elsewhere

As we witness the effects of the Online Safety Act in the UK and over 25 state age verification laws in the U.S., it has become even more apparent that mandatory age verification is more of a detriment than a benefit to the public. Here’s what we’re seeing:

- Scope creep is already occurring, with website shutdowns and excessive requests from various websites.
- Data is being handed to age assurance providers with questionable privacy policies.
- Older teenagers (voting age is set to be 16 in the UK) are having their ability to communicate online blocked on various platforms.
- Adults aren’t able to use the internet to access content that is lawfully available to them if they do not agree to third-party usage of their information.

It’s obvious: age verification will not keep children safe online. Rather, it is a large proverbial hammer that nails everyone—adults and young people alike—into restrictive parameters of what the government deems appropriate content. That reality is more obvious and tangible now that we’ve seen age-restrictive regulations roll out in various states and countries. But that doesn’t have to be the future if we turn away from age-gating the web. Keeping kids safe online (or anywhere IRL, let’s not forget) is a complex social issue that cannot be resolved with technology alone.
The legislators responsible for online age verification bills must confront that they are currently addressing complex social issues with a problematic array of technology. Most of policymakers’ concerns about minors' engagement with the internet can be sorted into one of three categories:

- Content risks: The negative implications of exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm.
- Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information, or problematic overuse of a service.
- Contact risks: The potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.

Parental controls—which already exist!—can help

These three categories of possible risks will not be eliminated by mandatory age verification—or any form of techno-solutionism, for that matter. Mandatory age checks will instead block access to vital online communities and resources for those people—including young people—who need them the most. It’s an ineffective and disproportionate tool to holistically address young people’s online safety.

However, these risks can be partially addressed with better-utilized and better-designed parental controls and family accounts. Existing parental controls are woefully underutilized: according to one survey of 1,000 parents, adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. Making parental controls more flexible and accessible, so parents better understand the tools and how to use them, could increase adoption and address content risk more effectively than a broad government censorship mandate.

Recently, Android made its parental controls easier to set up.
It rolled out features that directly address content risk by assisting parents who wish to block specific apps and filter out mature content from Google Chrome and Google Search. Apple also updated its parental control settings this past summer by instituting new ways for parents to manage child accounts and giving app developers access to a Declared Age Range API, through which parents can declare an age range and apps can respond to the range established in a child account, without handing over a birthdate. This gives parents some flexibility, with age-range information beyond just "13+". A diverse range of tools and flexible settings provide the best options for families and empower parents and guardians to decide and tailor what online safety means for their own children—at any age, maturity level, or type of individual risk.

Privacy laws can also help minors online

Parental controls are useful in the hands of responsible guardians. But what about children who are neglected or abused by those in charge of them? Age verification laws cannot solve this problem; these laws simply shift the potential for abuse of power to the state. To address social issues, we need more efforts directed at the family and community structures around young people, and initiatives that can mitigate the risk factors of abuse instead of resorting to government control over speech.

While age verification is not the answer, those seeking legislative solutions can instead focus their attention on privacy laws—which are more than capable of assisting minors online, no matter the state of their at-home care. Comprehensive data privacy, which EFF has long advocated for, is perhaps the most obvious way to keep the data of young people safe online. Data brokers gather a vast amount of data and assemble new profiles of information as a young person uses the internet. These data sets also contribute to surveillance and teach minors that it is normal to be tracked as they use the web.
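To illustrate the declared-age-range idea, here is a hypothetical TypeScript sketch. This is not Apple's actual Declared Age Range API; the type and function names are invented for illustration. The point is architectural: an app receives only a coarse, parent-declared range, never a birthdate, and tailors its features to that range.

```typescript
// Hypothetical sketch (names invented; not Apple's actual API): an app
// receives only a coarse, parent-declared age range and gates features
// accordingly. No birthdate ever crosses the API boundary.

type AgeRange = { min: number; max: number | null }; // e.g. {min: 13, max: 15}

// Enable direct messages only when the entire declared range is 16 or older.
function allowDirectMessages(range: AgeRange): boolean {
  return range.min >= 16;
}

// Pick a content filter level from the lower bound of the declared range.
function contentFilterLevel(range: AgeRange): "strict" | "moderate" | "off" {
  if (range.min < 13) return "strict";
  if (range.min < 18) return "moderate";
  return "off";
}

// Example: a parent declared the 13-15 range for a child account.
const declared: AgeRange = { min: 13, max: 15 };
console.log(allowDirectMessages(declared)); // false
console.log(contentFilterLevel(declared));  // "moderate"
```

The design choice worth noting is data minimization: the app can make every safety decision it needs from a range, so collecting an exact birthdate would add privacy risk without adding capability.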
Banning behavioral ads would remove a major incentive for companies to collect as much data as they do and sell it to whoever will buy it from them. For example, many age-checking tools use data brokers to perform “age estimation” on the emails used to sign up for an online service, further incentivizing a vicious cycle of data collection and retention. Ultimately, privacy-encroaching companies are rewarded for years of mishandling our data with lucrative government contracts. Over time, these systems create much more risk for young people, online and offline, from online surveillance, particularly in authoritarian political climates.

Age verification proponents often acknowledge that there are privacy risks, but dismiss the consequences by claiming the trade-off will “protect children.” These systems don’t foster safer online practices for young people; they encourage increasingly invasive ways for governments to define who is and isn’t free to roam online. If we don’t re-establish ways to maintain online anonymity today, our children’s internet could become unrecognizable and unusable, not only for them but for many adults as well.

Actions you can take today to protect young people online:

- Use existing parental controls to decide for yourself what your kid should and shouldn’t see, who they should engage with, etc.
- Discuss the importance of online privacy and safety with your kids and community.
- Provide spaces and resources for young people to flexibly communicate with their schools, guardians, and community.
- Support comprehensive privacy legislation for all.
- Support legislators’ efforts to regulate the out-of-control data broker industry by banning behavioral ads.
- Join EFF in opposing mandatory age verification and age-gating laws—help us keep your kids safe and protect the future of the internet, privacy, and anonymity.

EFF to Arizona Federal Court: Protect Public School Students from Surveillance and Punishment for Off-Campus Speech

Legal Intern Alexandra Rhodes contributed to this blog post.

EFF filed an amicus brief urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on campus.” We argued that students need private digital spaces beyond their school’s reach to speak freely, without the specter of constant school surveillance and punishment.

Surveillance Software Exposed a Bad Joke Made in the Privacy of a Student’s Home

The case, Merrill v. Marana Unified School District, involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: “GANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,” which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.

Within the hour, the student’s mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The student’s mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the student’s lack of disciplinary record or history of violence, the student was ultimately suspended over the draft email—even though he was physically off campus at the time, before school hours, and had never sent the email.
After the student’s suspension was unsuccessfully challenged, the family sued the school district, alleging infringement of the student’s right to free speech under the First Amendment and violation of the student’s right to due process under the Fourteenth Amendment.

Public School Students Have Greater First Amendment Protection for Off-Campus Speech

The U.S. Supreme Court has addressed the First Amendment rights of public school students in a handful of cases. Most notably, in Tinker v. Des Moines Independent Community School District (1969), the Court held that students may not be punished for their on-campus speech unless the speech “materially and substantially” disrupted the school day or invaded the rights of others. Decades later, in Mahanoy Area School District v. B.L. by and through Levy (2021), in which EFF filed a brief, the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” The Ninth Circuit has further held that off-campus speech is only punishable if it bears a “sufficient nexus” to the school and poses a credible threat of violence.

In this case, therefore, the extent of the school district’s authority to regulate student speech is tied to whether the high schooler was on or off campus at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.
Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speech—and makes all student speech—automatically “on campus” for purposes of justifying punishment under the First Amendment.

Schools Provide Students with Valuable Digital Tools—But Also Subject Them to Surveillance

EFF supports the plaintiffs’ argument that the student’s speech was “off campus,” did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to reject a rule that the use of a school-issued device or cloud account always makes a student’s speech “on campus.” Our amicus brief supports the plaintiffs’ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.

As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app. Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. These surveillance technologies collectively can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just draft) to the websites they visit.

School Digital Surveillance Chills Student Speech and Further Harms Students

In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as “on campus,” even if the student is geographically off campus or outside of school hours.
First, we pointed out that such a rule will result in students having no reprieve from school authority, which runs counter to the Supreme Court’s admonition in Mahanoy not to regulate “all the speech a student utters during the full 24-hour day.” There must be some place that is “off campus” for public school students even when using digital tools provided by schools; otherwise, schools will reach too far into students’ lives.

Second, we urged the court to reject such an “on campus” rule to mitigate the chilling effect of digital surveillance on students’ freedom of speech—that is, the risk that students will self-censor and choose not to express themselves in certain ways or access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching and has greater legal authority to punish them because they are always “on campus,” students will undoubtedly curb their speech.

Third, we argued that such an “on campus” rule will exacerbate existing inequities in public schools among students of different socio-economic backgrounds. It would distinctly disadvantage lower-income students, who are more likely to rely on school-issued devices because their families cannot afford a personal laptop or tablet. This creates a “pay for privacy” scheme: lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections.

Fourth, such an “on campus” rule will incentivize public schools to continue eroding student privacy by subjecting them to near-constant digital surveillance.
The student surveillance technologies schools use are notoriously privacy-invasive and inaccurate, causing various harms to students—including unnecessary investigations and discipline, disclosure of sensitive information, and frustrated learning.

We urge the Arizona District Court to protect public school students’ freedom of speech and privacy by rejecting this approach to school-managed technology. As we said in our brief, students, especially high schoolers, need some sphere of digital autonomy, free of surveillance, judgment, and punishment, as much as anyone else—to express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes.

✋ Get A Warrant | EFFector 37.17

Even with the holidays coming up, the digital rights news doesn't stop. Thankfully, EFF is here to keep you up to date with our EFFector newsletter! In our latest issue, we’re explaining why politicians’ latest attempts to ban VPNs are a terrible idea; asking supporters to file public comments opposing new rules that would make bad patents untouchable; and sharing a privacy victory—Sacramento has been forced to end its dragnet surveillance program of power meter data.

Prefer to listen in? Check out our audio companion, where EFF Surveillance Litigation Director Andrew Crocker explains our new lawsuit challenging the warrantless mass surveillance of drivers in San Jose. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.