Reclaim The Net Feed

@reclaimthenetfeed

Edmonton Becomes First in Canada to Test Facial Recognition Body Cameras in Police Pilot Program
reclaimthenet.org

When fifty Edmonton, Alberta, police officers stepped onto city streets this week, they carried more than their standard-issue equipment. Clipped to their uniforms were small black devices capable of something no other police body camera in Canada has done before: recognizing faces. The Edmonton Police Service (EPS) has become the country’s first force to test facial recognition-equipped body cameras, entering a new and deeply contested phase of public surveillance.

The pilot program, which runs through the end of December, puts the technology directly into daily policing. The cameras, built by Axon Enterprise, the American company behind the ubiquitous Taser and many of North America’s police tech systems, connect field officers to a biometric network that scans faces against EPS’s existing database of mugshots.

Acting Superintendent Kurt Martin presented the move as a practical, safety-oriented upgrade. With more than 6,300 individuals currently flagged for serious offenses, he said, the technology could help officers “recognize people who have outstanding warrants for serious offenses.” For Martin, this is not about creating a digital dragnet but about closing cases faster and reducing risk in uncertain encounters.

The mechanics of the system reveal its complexity. When a camera records, every face within roughly four meters enters a digital pipeline where software compares it to known offenders in the EPS database. Images without a match are supposed to be erased. The facial recognition feature is not always active; it remains off during routine patrols and switches on only when enforcement begins or during later investigative review. Yet the question remains: how much discretion can a human exercise once the algorithm has made a suggestion?
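The match-then-delete flow described here can be sketched in a few lines. Everything below is a hypothetical illustration: the embedding format, the similarity measure, the 0.9 threshold, and the watchlist contents are all invented for this sketch, since Axon has not published the system’s internals.

```python
from dataclasses import dataclass

# Assumed similarity cutoff -- real systems tune this and it is not public.
MATCH_THRESHOLD = 0.9

@dataclass
class Detection:
    face_id: str
    embedding: tuple  # simplified stand-in for a biometric template

# Stand-in "database of mugshots": watchlist id -> stored embedding.
WATCHLIST = {"suspect-123": (0.9, 0.1)}

def similarity(a, b):
    """Toy cosine similarity between two embeddings, for illustration only."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def process_frame(detections):
    """Compare each detected face to the watchlist. Per the stated policy,
    non-matching templates are simply not retained (nothing is stored here)."""
    alerts = []
    for det in detections:
        best = max(
            ((fid, similarity(det.embedding, emb)) for fid, emb in WATCHLIST.items()),
            key=lambda t: t[1],
            default=(None, 0.0),
        )
        if best[1] >= MATCH_THRESHOLD:
            alerts.append((det.face_id, best[0]))
        # else: the template is discarded rather than kept
    return alerts
```

The sketch makes the article’s central question concrete: the "deletion" of non-matches is just the absence of a store step, a policy choice in code rather than a physical guarantee.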
Not everyone is convinced this experiment is ready for the real world. What makes Edmonton’s pilot remarkable is how it integrates facial recognition directly into an officer’s line of sight. Traditional systems rely on fixed cameras at airports or stadiums, where surveillance is static and predictable. Body-worn cameras, by contrast, move through neighborhoods, homes, and private businesses, gathering footage that reflects the rhythms of daily life.

Even if non-matching faces are deleted, the simple act of scanning them reframes what it means to appear in public space. Such technology edges society toward continuous, automated observation, a state in which anonymity in public becomes a relic. The EPS pilot is only a few weeks old, yet it is already a bellwether for Canadian law enforcement.

Australia’s Top Censor Warns of Surveillance While Hypocritically Expanding It

At a press conference that could have passed for a comedy sketch, Australia’s “eSafety” Commissioner Julie Inman Grant and Social Services Minister Tanya Plibersek stood before the cameras and solemnly warned the nation about the perils of surveillance. Not from government programs or sweeping digital mandates, but from smart cars and connected devices.

The irony was not lost on anyone paying attention. Both Grant and Plibersek are enthusiastic backers of the country’s new online age verification law, the so-called Social Media Minimum Age Bill 2024, a law that has done more to expand digital surveillance than any gadget in a Toyota. The legislation bans under-16s from social media and requires users to prove their age through “assurance” systems that often involve facial scans, ID uploads, and data analysis so invasive it would make a marketing executive blush. But on the same day she cautioned the public about the dangers of “connected” cars sharing sensitive information with third parties, Grant’s agency was publishing rules that literally require social media platforms to share sensitive data with third parties.

During the press conference, Grant complained that “it’s disappointing” that YouTube and other platforms hadn’t yet released their guidance on how they’ll implement verification. She announced that eSafety will begin issuing “gathering information notices” on December 10, demanding details from companies about how they plan to comply once her expanded powers take effect. She also warned that some of the smaller apps users are migrating to may soon “become age-restricted social media platforms.”

The Office of the Australian Information Commissioner (OAIC) explains that compliance under the law can involve “age estimation” using facial analysis, “age inference” through data modeling of user activity, or “age verification” with government ID.
All three options amount to building a surveillance apparatus around everyday users. Facial recognition, voice modeling, behavioral tracking; pick your poison. Most platforms outsource this work to private firms, which means the same sensitive data the law claims to protect is immediately handed to a commercial intermediary. Meta, for example, relies on Yoti, a third-party ID verification company. Others use firms like Au10tix, which famously left troves of ID scans exposed online for over a year.

The law includes what politicians like to call “strong privacy safeguards.” Platforms must collect only the data necessary for verification, must destroy it once it’s used, and must never reuse it for other purposes. It’s the same promise every company makes before it gets hacked or “inadvertently” leaks user data. Even small dating apps that claimed to delete verification selfies “immediately after completion” managed to leak those same selfies. In every case, the breach followed the same pattern: grand assurances, then exposure.

Julie Inman Grant calls it protecting the public. Tanya Plibersek calls it social responsibility. The rest of us might call it what it actually is: institutionalized data collection, dressed in the language of child safety.
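The three OAIC routes reduce to three very different data flows, which a rough sketch makes visible. The function names, the behavioral signal, and the inferred ages below are all hypothetical; real deployments delegate each step to vendors such as Yoti or Au10tix, whose models and thresholds are proprietary.

```python
from datetime import date

MIN_AGE = 16  # the Social Media Minimum Age Bill's cutoff

def estimate_age(face_scan: bytes) -> int:
    """'Age estimation': a vision model guesses age from a facial scan.
    Placeholder only; a real system runs a proprietary vendor model here."""
    raise NotImplementedError("vendor model goes here")

def infer_age(activity: dict) -> int:
    """'Age inference': guess age from behavioral data. The signal name
    and the returned ages are invented purely for illustration."""
    return 13 if activity.get("watches_kids_content") else 25

def verify_age(dob: date, today: date) -> int:
    """'Age verification': compute exact age from a government ID's DOB --
    the most accurate route, and the one that demands the most data."""
    years = today.year - dob.year
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1  # birthday has not yet occurred this year
    return years

def may_hold_account(age: int) -> bool:
    return age >= MIN_AGE
```

Even in this toy form, the trade-off the article describes is visible: the estimation and inference routes avoid an ID upload but require a face scan or a behavioral profile instead; every route ingests something sensitive.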

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship. The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties. The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.” He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.” They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work. Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0. The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.
Guthrie praised the TAKE IT DOWN Act, signed by President Trump in May, as a model of legislative virtue, despite the fact that digital rights groups have flagged it for censorship risks and missing safeguards. “Countless other harms exist,” Guthrie warned, “and it is our responsibility to find a solution.” The phrase “find a solution” is Capitol Hill’s version of the blue screen of death: a signal that the machine has crashed, but no one wants to admit it.

The only people complaining were Democrats, and not because the bills threatened privacy or free expression. Their gripe was that the bills didn’t go far enough. Rep. Jan Schakowsky (D-IL) called the current proposals “really frustrating,” adding that “the legislation that has been offered by the Republicans does not do the job” and “we have a long, long way to go to protect our children.” Rep. Kathy Castor (D-FL) joined in, labeling the Republican-backed versions of COPPA and KOSA “weak,” “ineffectual,” and “a slap in the face to the parents, the experts, and the advocates.” She called for Congress to strengthen the bills, which in practice means adding even more enforcement power to age verification and content moderation regimes. Rep. Lori Trahan (D-MA) said the proposals had been “gutted and co-opted by Big Tech.” That line played well for the cameras, but it rang hollow coming from a party that spent years demanding that tech companies police speech harder and faster. It was a bipartisan competition to see who could sound more outraged about the dangers of screens.

The private sector witnesses tried to slow the rush toward universal ID checks. Paul Lekas of the Software & Information Industry Association gently reminded lawmakers that while the Supreme Court had upheld some forms of age verification for “unprotected sexually explicit material,” broader mandates could be unconstitutional.
He offered “age estimation” as a compromise: a technology that guesses your age from data like facial features or online behavior, a system that sounds like surveillance with a smile.

Kate Ruane from the Center for Democracy & Technology (CDT) took a sharper tone. She warned that the bills carried “significant privacy risks” and lacked “sufficient guardrails to protect and require security practices within the age assurance requirements.” In other words, the proposals could create a sprawling, insecure ID infrastructure in the name of keeping kids safe. Still, CDT, an organization that has previously supported crackdowns on “disinformation,” wasn’t opposed to the idea of more regulation. They just wanted it done right. Every side at this hearing had a version of “we agree with censorship, but only ours.”

Marc Berkman, CEO of the Organization for Social Media Safety, did his part for the cause, declaring that “we need to find consensus and pass meaningful social media safety legislation this year to protect our children.” The phrase “meaningful legislation” is another Beltway placeholder, something you say when the substance doesn’t hold up but the sentiment polls well. Joel Thayer of the Digital Progress Institute endorsed the whole package, particularly the App Store Accountability Act, the SCREEN Act, and KOSA. His testimony provided cover for lawmakers eager to claim bipartisanship while ignoring the reality that the bills collectively amount to a new system of mandatory content filters and identity checks.

Then came Rep. Neal Dunn (R-FL), who pitched his Safe Messaging for Kids Act like a public service announcement from 1999. Dunn warned that ephemeral messages, the kind that disappear after you send them, were a “dangerous feature.” His bill would ban social media platforms from offering them to “any user they know is a minor under 17.” That standard all but forces companies to collect proof of age from everyone or treat all users like minors.
The hearing was supposedly about protecting children, but every fix circled back to tagging and tracking everyone. Rep. Jay Obernolte (R-CA) wrapped up the policy side with a tidy summary: age verification, he said, is “key to the protections we’re trying to provide here.” His preferred version would happen at the operating system level, meaning your phone, computer, or console would verify your age for everything else.

By the end, it was hard to tell whether lawmakers believed their own talking points or were simply keeping up with the moral fashion of the day. The hearing wasn’t about solving anything; it was about signaling concern. Each side spoke about protecting children while ignoring the real tension between privacy, speech, and power. As always, the session ended without progress and without reflection on what happens when the government starts requiring people to prove who they are to speak, read, or log in.

How One Patent Envisions a Nation Under Surveillance

In August 2022, a company named Flock Group Inc., already running a nationwide web of license-plate cameras, secured a US patent that quietly sketches out the next logical step: total video integration. It is called US 11,416,545 B1, a bureaucratic title for what amounts to a manual on building a surveillance organism.

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package. The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.” The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them. Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.” TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial. Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.
But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability protections.

Next up is the Parents Over Platforms Act, which reads like a spy’s dream version of child safety. The idea is to require mobile app stores and developers to “assure” user ages through “commercially reasonable efforts.” Developers must “determine whether a user is an Adult or a Minor with a reasonable level of certainty.” How they’re supposed to do that without collecting more personal data is unclear. Privacy advocates might want to sit down for this one.

The bill’s co-sponsor, Republican Erin Houchin of Indiana, says it comes from personal experience. Her daughter, age 13, “hacked around our parental controls” and started chatting with strangers. “My goal is to put parents back in the driver’s seat,” she says. Fair enough, but that driver’s seat now comes with a dashboard full of federal switches and levers. If passed, the bill would have parents input their children’s ages into the app store, which would then transmit the “Age Signal” to every app. Kids under 13 would be locked out of restricted platforms. The potential for data errors and cross-app confusion seems baked in, but Congress appears unbothered.

Rounding out the trio is the Education Not Endless Scrolling Act, which would slap a 50 percent tax on digital ad revenue over $2.5 billion. The money would fund tutoring programs, local journalism, and technical education. Auchincloss explains, “This is for the major social media corporations, not the recipe blogs.” He adds, “These social media corporations have made hundreds of billions of dollars making us angrier, lonelier, and sadder, and they have no accountability to the American public.” The proposal reads like a moral tax: the government will collect penance for every click.
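The flow the bill describes, in which a parent records an age with the app store and the store passes an “Age Signal” to each app, can be sketched roughly. The field names, the Adult/Minor labels, and the lockout logic below are assumptions for illustration; the bill specifies no wire format.

```python
RESTRICTED_MIN_AGE = 13  # the bill would lock under-13s out of restricted apps

class AppStore:
    """Hypothetical store-side ledger of parent-entered ages."""

    def __init__(self):
        self._ages = {}  # user_id -> age entered by a parent

    def register_age(self, user_id: str, age: int):
        self._ages[user_id] = age

    def age_signal(self, user_id: str) -> dict:
        """The 'Age Signal' transmitted to every app. Note that it
        discloses an age bracket for all users, not only children."""
        age = self._ages.get(user_id)
        if age is None:
            return {"user": user_id, "status": "Unknown"}
        return {"user": user_id, "status": "Adult" if age >= 18 else "Minor", "age": age}

def restricted_app_allows(signal: dict) -> bool:
    """A restricted platform rejects unknown ages and under-13s, which
    pushes it toward demanding an age record for everyone."""
    return signal["status"] != "Unknown" and signal.get("age", 0) >= RESTRICTED_MIN_AGE
```

The sketch shows where the cross-app confusion comes from: one parent-entered value fans out to every app, so a single data-entry error, or a missing record, misclassifies the user everywhere at once.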
Both Auchincloss and Houchin frame their effort as a bipartisan stand for the children, launching a “Kids Online Safety Caucus” to formalize their alliance. Houchin puts it simply: “Good policy supersedes politics.” It’s a line you usually hear right before an entire generation of digital policy disasters.

The timing is no accident. Congress is now flooded with “child safety” bills, and Auchincloss says he’s tired of waiting. “I don’t like to be passive or wait for the ground to shift,” he says. “I am trying to be an earthquake.” It’s a fitting metaphor, though he might consider what happens after the shaking stops. Once the dust settles, the UnAnxious Generation may find that the cure for digital anxiety looks a lot like preemptive censorship and surveillance wrapped in a moral crusade.