Reclaim The Net Feed

@reclaimthenetfeed

Edmonton Police Turned Body Cameras Into Facial Recognition Surveillance Tools
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Dozens of Edmonton, Alberta, police officers spent December 2025 patrolling with body cameras that silently scanned every face within four meters, comparing captures against a watchlist of roughly 7,000 people. The cameras, manufactured by Axon Enterprise and powered by facial recognition from Corsight AI, ran automatically whenever an officer pressed record. No one being scanned was asked or told.

Body cameras were sold to the public as accountability tools that watch police on behalf of citizens. Edmonton’s pilot inverts that promise. The same cameras now watch citizens on behalf of police. EPS’s own privacy assessment acknowledges this, stating that “the continuous scanning of faces for comparison against a watchlist constitutes proactive surveillance.”

Newly obtained documents reveal that the privacy impact assessment EPS submitted to Alberta’s privacy watchdog contains troubling language around data sharing. The assessment says data shared with Axon will be anonymized “whenever possible,” but adds that “data required to aid in assessing the success or failures associated with the technology will be shared when / if required.” Gideon Christian, an associate professor of AI and law at the University of Calgary, called that phrasing dangerously vague. “‘Whenever possible’ is a very loose and ambiguous phrase,” he said.

Kate Robertson, a senior research associate with the University of Toronto’s Citizen Lab, called this “likely the most high risk algorithmic surveillance program that I have observed to date in Canada.”

A system outage caused by a “critical fault” prevented matches for seven days, and EPS requested a three-week extension to collect enough data for a potential second phase involving real-time officer notifications. Whether that extension was approved remains unknown. EPS refused to answer questions.
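The article does not describe Corsight's internals, but watchlist screening of this kind generally works by reducing each detected face to a numeric embedding and flagging any capture whose similarity to a stored embedding exceeds a threshold. The sketch below illustrates only that general pattern; the identifiers, toy 4-dimensional vectors, and the 0.8 threshold are all invented for illustration (real systems use embeddings with hundreds of dimensions and tuned thresholds).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def screen(capture, watchlist, threshold=0.8):
    """Return (person_id, score) for the best watchlist match at or
    above the threshold, or None if nothing matches."""
    best = max(((pid, cosine(capture, emb)) for pid, emb in watchlist.items()),
               key=lambda t: t[1])
    return best if best[1] >= threshold else None

# Hypothetical watchlist of face embeddings keyed by person ID.
watchlist = {
    "person_0001": [0.9, 0.1, 0.3, 0.2],
    "person_0002": [0.1, 0.8, 0.5, 0.1],
}

print(screen([0.88, 0.12, 0.31, 0.19], watchlist))  # matches person_0001
print(screen([0.0, 0.0, 1.0, 0.0], watchlist))      # no match, returns None
```

The point the sketch makes concrete is that every passerby's face is run through this comparison, match or not; only the threshold decides whether an alert fires.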
The post Edmonton Police Turned Body Cameras Into Facial Recognition Surveillance Tools appeared first on Reclaim The Net.

FTC Settlement: Ad Agencies Agree to Stop “Brand Safety” Collusion to Defund Media Outlets
reclaimthenet.org

Three of the world’s biggest advertising conglomerates have agreed to stop colluding to defund media outlets whose politics they didn’t like. The Federal Trade Commission and Texas Attorney General Ken Paxton, joined by seven other states, filed a complaint and simultaneous settlement against Dentsu US, GroupM Worldwide (WPP’s media-buying arm), and Publicis on April 15, accusing them of running what amounts to a coordinated censorship operation through the advertising supply chain.

Starting in 2018, these agencies, which collectively control over $81 billion in ad-buying power, agreed to adopt identical “brand safety” standards that treated so-called “misinformation” as a category of content too dangerous for any advertiser to touch. They did this through two industry groups: the American Association of Advertising Agencies’ Advertiser Protection Bureau, and the World Federation of Advertisers’ Global Alliance for Responsible Media, better known as GARM. The result was a shared “Brand Safety Floor” that could starve publishers of revenue without any single company having to take public responsibility for the decision.

One ad agency executive described the arrangement’s origins by saying, “the major holding companies came together under the 4As and agreed that brand safety is so important, that we must combine efforts, become one voice, and stop sending potential mixed signals.” The 4As vice president put it even more bluntly: “When it comes to brand and consumer safety, media agencies have to put competition aside.” Put competition aside. That is an antitrust violation described as a virtue.

GARM operated under explicit secrecy. According to the complaint, GARM told the six largest global advertising holding companies that discussions about brand safety were governed by a principle: “The first rule of Fight Club is: You do not talk about Fight Club.
The second rule of Fight Club is: You do not talk about Fight Club.” GARM leadership wanted “the agencies [to] speak as a single entity to describe how they’re tailoring plans and buys.” At a retrospective on GARM’s third anniversary, participants celebrated what they called “uncommon collaboration,” praising how the agencies came together to “collaborate not compete on safety.”

The word “safety” is a misnomer. What they were actually collaborating on was a system to cut off ad revenue from publishers whose content fell below their agreed-upon standard for acceptable speech. And who got to define what was acceptable? Organizations like NewsGuard, the Global Disinformation Index, Check My Ads, and Media Matters for America. The complaint describes these groups as having “sought to elevate concerns within the digital advertising industry about what they viewed as ‘misinformation,’ in order to deprive certain sites of the digital ad revenue they needed to survive.”

The Global Disinformation Index was founded because its creators believed the 2016 US presidential election and the Brexit referendum were caused by media disinformation, a problem they decided could be solved by going after those media companies’ advertisers. Check My Ads announced in 2022 that it was “launching the first effort to permanently block” conservative media figures like Charlie Kirk, Glenn Beck, and Steve Bannon “from the ad industry,” in an article titled “Here’s our plan to defund the insurrectionists.” Media Matters ran campaigns pressuring advertisers to pull spending from Fox News and later from Elon Musk’s X.

The chilling effect of this arrangement went well beyond the individual publishers who lost revenue. When the three largest ad-buying agencies in the country all agree to use the same criteria for excluding websites, the definition of “brand safe” becomes industry-wide orthodoxy.
Publishers who might have survived one agency’s disapproval couldn’t survive all of them acting in concert. News outlets, commentators, and social media platforms were the primary targets. A House Judiciary Committee report found that GARM discussed putting center-right outlets, including Breitbart News, Daily Wire, and Fox News, on advertising exclusion lists.

An internal GARM communication, quoted in earlier FTC proceedings, captured the thinking. John Montgomery, then-executive vice president of Global Brand Safety, wrote to GARM leader Rob Rakowitz: “There is an interesting parallel here with Breitbart. Before Breitbart crossed the line and started spouting blatant misinformation, we had long discussions about whether we should include them on our exclusion lists. As much as we hated their ideology and bullshit, we couldn’t really justify blocking them for misguided opinion. We watched them very carefully and it didn’t take long for them to cross the line.”

FTC Chairman Andrew Ferguson framed the case in both antitrust and speech terms. “The ad agencies’ brand-safety conspiracy turned competition in the market for ad-buying services on its head,” he said.
“The antitrust laws guarantee participation in a market free from conduct, such as economic boycotts, that distort the fundamental competitive pressures that promote lower prices, higher-quality products, and increased innovation.” Ferguson added that the collusion “deprived advertisers of the benefits of differentiated brand-safety standards that could be tailored to their unique advertising inventory.” He went further: “This unlawful collusion not only damaged our marketplace, but also distorted the marketplace of ideas by discriminating against speech and ideas that fell below the unlawfully agreed-upon floor.”

Paxton called the scheme “an egregious attempt to control public opinion and silence those who speak out against the liberal elites and powerful corporations.” He added: “I will continue to lead the fight against viewpoint suppression and protect the speech of Americans from corrupt manipulation.”

Under the proposed settlement, filed in US District Court for the Northern District of Texas, all three agencies must stop using exclusion lists and coordinated agreements to restrict ad spending based on political viewpoints or social commentary. They cannot enter into or enforce agreements that restrict business with media publishers based on political or social commentary content, and they cannot direct or limit ad spending based on political viewpoints, ideological viewpoints, or DEI commitments. A court-appointed monitor will oversee compliance. The settlements require court approval to take effect.

Tuta Announces Quantum-Resistant Encrypted Cloud Storage, Tuta Drive
reclaimthenet.org

Privacy company Tuta is launching an encrypted cloud storage service, and it comes with something most competitors can’t offer: encryption that’s designed to survive quantum computers. Tuta Drive enters early access today as an invite-only beta, built on the same hybrid cryptographic protocol the German company deployed in Tuta Mail back in early 2024. That protocol, TutaCrypt, pairs conventional algorithms with quantum-resistant ones, which means files uploaded to Tuta Drive are encrypted with math that current computers can’t break and future quantum machines shouldn’t be able to either.

Every file gets encrypted on your device before it leaves. Tuta’s servers never see the unencrypted version. In a zero-knowledge architecture like this, even a government subpoena can’t produce readable files, because the company genuinely doesn’t have the keys.

The product has been under development for nearly three years. Tuta started the PQDrive research project in July 2023, working alongside the University of Wuppertal to build post-quantum encryption into a cloud storage system from the ground up. By early 2024, the cryptography was proven enough for email, making Tuta Mail the first provider worldwide to ship quantum-safe encryption by default. Now that same protocol extends to file storage.

“With Tuta Drive, we are taking the next step towards offering a full private digital workspace,” said Arne Möhle, CEO of Tuta. “Today, more than ten million citizens and businesses, including journalists, whistleblowers and activists use Tuta Mail as an alternative to insecure email offered by mainstream providers.

“Adding an encrypted cloud storage to Tuta will enable them to also store their files securely. This invite-only beta release accumulates all our efforts of the last years.
In July 2023, we started an extensive research project with the goal to update the Tuta cryptography to a hybrid protocol with traditional and quantum-resistant algorithms. We achieved this in beginning of 2024, making Tuta Mail the first quantum-safe email provider worldwide. And today we are proud to announce that we are ready to add a Drive solution to Tuta that makes use of the same cryptography.”

Intelligence agencies and sophisticated attackers are already harvesting encrypted data in bulk, banking on the assumption that quantum computers will eventually crack today’s encryption. It’s called “harvest now, decrypt later,” and it transforms every file you store in a conventional cloud service into a future liability. Your medical records, legal documents, financial statements, business plans, anything uploaded to Google Drive or Dropbox today sits behind encryption that a sufficiently powerful quantum computer could shred. The files don’t need to be interesting right now. They just need to still be sensitive in ten or fifteen years, which most of them will be.

Google, Microsoft, and Dropbox don’t offer end-to-end encryption on their cloud storage by default. They encrypt files in transit and at rest, sure, but they hold the keys. That means they can read your files, law enforcement can compel them to hand files over in readable form, and a breach of their systems exposes actual content. The privacy promise amounts to trusting that they won’t look and that nobody else will successfully break in. It’s a bet that gets worse every year as progress in quantum computing accelerates.

Tuta Drive’s hybrid encryption sidesteps this entirely. The protocol combines CRYSTALS-Kyber (a NIST-standardized post-quantum key encapsulation mechanism) with elliptic curve cryptography, layered over AES-256 symmetric encryption. If someone breaks the quantum-safe algorithm, the conventional encryption still holds.
If someone breaks the conventional encryption, the quantum-safe layer still holds. An attacker would need to defeat both simultaneously, which is the whole point of a hybrid approach.

The beta is bare-bones for now. It works through the web interface on desktop and mobile, with native apps and a sync client coming later. Users can upload and store files, with sharing features planned. That’s not a lot of polish, but the encryption underneath is the part that actually matters, and Tuta has been hardening it for years across email, calendar, and contact data before extending it to file storage.

Tuta is based in Germany, which means European data protection law applies. More meaningfully, the zero-knowledge architecture makes that jurisdiction question less important than it would be for a service that can actually read your data. When a provider holds no usable decryption keys, the legal framework governing data requests becomes somewhat academic. You don’t have to trust Tuta’s promises about privacy. You have to trust the math, which is open source and available for anyone to audit on GitHub.

During the closed Tuta Drive beta, participants can test core functionality and submit feedback to shape what the final product looks like. Given how long the privacy community has waited for quantum-resistant cloud storage from a provider that isn’t headquartered in a Five Eyes country, the beta can’t come soon enough.
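TutaCrypt's exact construction isn't spelled out here, but the hybrid pattern it follows can be sketched: feed both a classical shared secret and a post-quantum shared secret into a key derivation function (HKDF, RFC 5869), so the resulting AES-256 key can only be recovered by an attacker who breaks both exchanges. The function names and stand-in random secrets below are illustrative only; a real implementation would take its inputs from actual X25519 and Kyber/ML-KEM exchanges.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869): extract a pseudorandom key, then expand it."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Combine a classical and a post-quantum shared secret into one
    32-byte AES-256 key. Both inputs are needed to recover the key."""
    return hkdf_sha256(classical_secret + pq_secret,
                       salt=b"\x00" * 32,
                       info=b"hybrid-file-key")

# Stand-in secrets for the sketch; in practice these would come from
# an X25519 ECDH exchange and a Kyber/ML-KEM encapsulation.
classical = os.urandom(32)
post_quantum = os.urandom(32)
key = hybrid_key(classical, post_quantum)
```

Because HKDF mixes the concatenated secrets through a one-way function, knowing one input (say, a quantum-recovered ECDH secret) reveals nothing about the derived key while the other input stays secret, which is exactly the "defeat both simultaneously" property described above.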

UK Southport Inquiry Pushes Mass Surveillance and VPN Restrictions
reclaimthenet.org

On July 29, 2024, a teenager walked into a children’s Taylor Swift-themed dance class in Southport, England, and murdered three young girls with a knife. He injured ten others. It was, by any measure, one of the most horrifying attacks on British soil in recent memory, and what followed should have been a reckoning with the catastrophic state failures that let it happen. Instead, the British government looked at the smoldering aftermath and decided the real enemy was the internet, and the solution just so happens to be the mass surveillance and censorship proposals the government is already working on.

After the attack, outrage on social media turned to protests. Protests became riots. And the state’s response landed with a speed and ferocity that it had never managed to direct at, say, the agencies that let a known danger walk free for years. A former childminder named Lucy Connolly was jailed for 31 months for a single post on X. That is three months longer than the sentence given to a man who physically attacked a mosque during the same period of unrest.

The UK was already a country where arrests for “offensive” social media posts had nearly doubled in seven years, climbing from 5,502 in 2017 to 12,183 in 2023. The overall conviction rate for those arrests was falling at the same time. Police were locking people up for what they typed at a rising rate, while the number of convictions that actually stuck was going down.

The Southport riots became the accelerant. A House of Commons Home Affairs Committee report used the unrest to call for a “new national system for policing” with enhanced capabilities to surveil social media activity, framing public anger as a problem of online “misinformation” rather than a consequence of the state’s own failures.
The state was dodging accountability by demanding censorship and surveillance and blaming the internet for unrest. And now, months later, Sir Adrian Fulford’s Southport Inquiry Phase 1 report has arrived, and it takes the whole dynamic further still. Not just further toward punishing people for what they say online, but toward watching everything they do online, and everything they buy offline, too.

The report itself is 763 pages across two volumes, published on 13 April, with 67 recommendations. Its central finding is devastating. The attack “could have been and should have been prevented.” Multiple state agencies failed repeatedly to act on years of warning signs. The attacker’s parents bore “considerable blame” for not reporting Axel Rudakubana’s worsening behavior. Sir Adrian identified five areas of systematic failure, including critical breakdowns in information sharing and a repeated tendency to excuse the attacker’s behavior on the basis of his autism spectrum disorder.

The factual record of those failures is staggering. The attacker was referred to the Prevent counter-terrorism program three times between 2019 and 2024, with each referral closed without sustained action. He purchased weapons, including three machetes, as well as ingredients to make the poison ricin. Police responded to five calls at the family home. And in March 2022, when the attacker was found on a bus with a knife, admitting he wanted to stab someone and thinking about poison, he was simply returned home with advice to hide the knives. The report said that had this incident been judged in light of the attacker’s past risk, he would have been arrested, and his possession of an al-Qaeda manual and ricin seeds would have come to light.

You might think the resulting 67 recommendations would focus on making sure the people who are paid to protect children actually protect them. Some of them do.
But a significant chunk has nothing to do with fixing the human laziness that ultimately killed three girls, and everything to do with building an internet surveillance apparatus that would make the average dystopian novelist blush.

Recommendation 12 asks the government to “consider systems to detect and report concerning online behaviour and suspicious combinations of purchases.” It lists VPN use alongside name changes as behavioral red flags worth automated detection. The same recommendation wants reporting systems for “concerning purchases of dangerous but legal items (e.g., sledgehammers, bow and arrows and smoke grenades)” and “concerning combinations of purchases (e.g. castor beans, alcohol, and laboratory equipment).” Anyone who has ever renovated a kitchen, taken up archery as a hobby, or ordered laboratory glassware because they fancied making gin is now, apparently, a person of interest.

Recommendation 24 goes after VPNs directly, asking Phase 2 to “consider age verification for the use of Virtual Private Network (VPN) software and other options to avoid VPNs being used to circumvent the age-related protections in the Online Safety Act 2023.”

Recommendation 20 calls for “mandatory reporting and information-sharing about suspicious behaviour” around knife sales, alongside “strengthening online age-verification and age verified delivery standards” and “prohibiting some online sales.”

Recommendation 19 tells Amazon to “improve its measures to prevent children from making purchases,” to “review its systems for recording details of the recipient to ensure that an accurate record of the recipient is obtained,” and to “audit its training of age verified deliveries for drivers, in particular for Amazon Flex drivers.” Amazon is being told to collect more data about everyone who receives a parcel.
The company already uses “trusted ID verification services to check name, date of birth and address details whenever an order is placed for these bladed items” and has “an age verification on delivery process that requires drivers to verify the recipient’s age through an app on their devices.”

Recommendation 22 tells Lancashire County Council to ensure frontline staff “have access to effective tools and guidance to identify and respond to” online risks, specifically naming “the risks associated with the use of Virtual Private Networks, which can enable children to bypass the safeguards established under the Online Safety Act 2023.” It asks the Department of Health and Social Care to consider whether “reforms to national guidance, policy or training are required.” Social workers are now expected to treat VPN use as a safeguarding red flag. The same tool, you will recall, that Parliament itself told its own members to install on their phones.

Here is where the whole thing becomes genuinely absurd. VPN use in Britain exploded because the government’s own Online Safety Act censorship law forced it. When age verification rules took effect in July 2025, Proton VPN reported a sustained 1,800 percent increase in UK sign-ups. Five VPN apps hit Apple’s UK App Store top 10 within days. Millions of ordinary people downloaded privacy tools to avoid handing their biometric data to random websites as the government’s own rules demanded. And the government’s response to this entirely predictable mass adoption of privacy software is to propose restricting privacy software.

The House of Lords voted in January to ban VPN use by under-18s, backing an amendment to the Children’s Wellbeing and Schools Bill by 207 votes to 159.
Labour’s Lord Knight acknowledged that VPNs could “undermine the child safety gains of the Online Safety Act” but warned that age-gating them could be “extremely problematic.” He noted: “My phone uses a VPN, following a personal device cyber consultation offered by this Parliament. VPNs can make us more secure, and we should not rush to deprive children of that safety.” For now, MPs haven’t gone along with it. But the rejected proposals are only one implementation of such ideas.

So Parliament tells its own members to use VPNs. Parliament then votes to ban children from using VPNs, which would require age checks and giving up privacy. And a public inquiry now wants social workers to flag VPN use as a risk indicator. Age verification amounts to requiring adults to give up their personal or biometric data to access lawful content.

This is the throughline that connects Southport to the wider censorship machine. The government passes laws requiring identity verification to access legal content. People use privacy tools to avoid handing their identity to strangers. The government then classifies those privacy tools as suspicious. At each step, the scope of surveillance expands and the definition of “concerning behavior” gets broader, and at no point does anyone go back and fix the actual agencies that let a teenager with an al-Qaeda manual and ricin seeds, three machetes, and multiple Prevent referrals walk free for years.

The rest of the surveillance proposals are not aimed at known threats. They are aimed at the whole population. They propose systems to track what you browse, what you buy, and whether you dare to use a VPN, then flag combinations that some algorithm decides look suspicious.

The Southport Inquiry confirms what the arrest statistics, the sentencing disparities, and the legislative agenda already made obvious. Britain has developed a very specific institutional reflex.
When its agencies fail catastrophically, the state responds by expanding surveillance of the general population. When the public expresses anger about those failures, the state responds by censoring the expression of that anger. The definition of “offensive” keeps expanding. And the people who actually had the information needed to prevent a massacre keep their jobs.

What failed at Southport was not a lack of data. It was not the absence of purchase-tracking algorithms. It was not that VPNs exist. What failed was human beings in positions of authority who saw danger, documented it, filed the paperwork confirming they’d seen it, and then closed the case and went home.

Building a national internet surveillance system won’t change that. Age-gating the privacy tools that Parliament recommends to its own members won’t change that. Nothing in this report’s surveillance wishlist addresses the reason three girls are dead, which is that the system already knew, and the system chose to do nothing.

California’s “Stop Nick Shirley Act” Would Penalize Journalism
reclaimthenet.org

California’s Assembly Privacy and Consumer Protection Committee voted 11-2 on April 7 to advance a bill that would let employees and volunteers at immigration service organizations demand the deletion of their images and personal information from the internet, backed by civil penalties starting at $4,000 and the threat of criminal charges. AB 2624, authored by Assemblywoman Mia Bonta, is already being called the “Stop Nick Shirley Act.” We obtained a copy of the bill.

The bill arrives just weeks after investigative video creator Nick Shirley published a 40-minute video on alleged hospice fraud in California that racked up 42 million views on X. Other investigations have found that a single program is causing the state to lose an alleged $6 billion in fraud annually. Shirley had already reported on over $110 million in Somali daycare fraud in Minnesota in December 2025, with empty facilities billing taxpayers while kids were nowhere to be found. His California reporting uncovered an alleged $170 million in similar fraud in daycares and hospices, with ghost operations registered to empty lots and strip malls. Sacramento’s response to this flood of documented waste and abuse was not an audit, not an investigation into the programs themselves, but a bill to make it harder to film the people running them.

Under AB 2624, anyone affiliated with an organization providing “designated immigration support services” can send a written demand prohibiting the publication of their personal information or image online. That demand remains effective for four years, even after the person leaves the organization. If the demand is ignored, the person can go to court for an injunction or declaratory relief. Fines run up to three times the actual damages, with a floor of $4,000, meaning the minimum penalty triples to $12,000 in cases where a takedown demand is defied.
If a journalist or anyone else is accused of posting information with the intent to incite harm, they face criminal charges and fines of $10,000.

The definition of “designated immigration support services” is broad enough to cover almost any organization that touches immigrant communities. The bill defines these services as those provided to the immigrant population, including legal representation, legal assistance, advocacy, case management, humanitarian relief, immigration resources, referrals, translation services, counseling services, and health care. That’s a definition wide enough to include organizations that have been at the center of documented fraud investigations, and to give them a legal tool to suppress the documentation.

The bill also creates an address confidentiality program modeled on California’s existing Safe at Home program for domestic violence survivors. Bonta defended this at the committee hearing, saying the program “allows participants to keep their home and work addresses out of public records, giving them a critical layer of protection and privacy in an environment where their personal safety is increasingly at risk.” She told the committee that “individuals who provide immigrant support services … are facing targeted harassment” and that “advocates and workers are receiving death threats, being targeted at court houses and facing coordinated online doxxing campaigns.”

There’s a problem with framing the bill as a safety measure, though, and it’s built right into the text. The “reasonable fear” standard required to trigger these protections is defined and enforced by the person claiming the fear, not by a court, not by law enforcement. Any employee or volunteer at a qualifying organization can send a written demand to suppress publication. They don’t need to prove a threat was made. They don’t need to file a police report. They need to cite a “reasonable fear” and put it in writing. The demand is valid for four years.
And the mechanism for enforcing it is a lawsuit against whoever published the information.

What’s missing from the bill is just as telling as what’s in it. There is no exemption for journalists. Assemblymember Carl DeMaio raised this directly during the committee hearing, telling Bonta, “You do not provide an exemption for journalists.” He pointed to specific investigative work, noting that “posting video, like (Republican Assembly Member Alexandra) Macedo, in her investigation, posted a video of, what, 90 fake hospices, and Mr. Shirley had dozens or, you know, fifty, sixty fake ‘learning’ centers for the Somali community in Minnesota. Posting the video apparently would be punishable under your law.”

DeMaio also said that the bill makes no distinction between independent citizen journalists and reporters at established outlets. “There’s no differentiation,” DeMaio said. “It says any individual who does this, any corporation, any business who posts a video, full stop. There’s no ‘Well, there’s an exemption for journalists.'”

Bonta pushed back on this characterization. “In your scenario, Assemblyman DeMaio, the folks who were investigating that, these were reporters, journalists,” she said during the hearing. “They were not subjecting any particular organization to violence or threats of violence. That is the nature of this bill.”

But the nature of the bill is exactly what’s at issue. The bill doesn’t require that a journalist actually threaten someone. It requires that someone at a covered organization feel threatened enough to send a letter. And once that letter is sent, the journalist faces penalties for publishing. The organization that receives bad press gets to decide whether the press constitutes a threat, and then the law penalizes the press. This is prior restraint dressed up as privacy protection.
Prior restraint, stopping speech before it happens rather than addressing genuinely harmful speech after the fact, is the form of censorship that sits closest to the core of what the First Amendment was written to prevent. AB 2624 creates a tool where a letter from a subject of reporting can trigger legal liability for that reporting before any court determines whether the report was harmful, threatening, or even inaccurate.

DeMaio put it bluntly. “California Democrats are trying to intimidate citizen watchdog journalists and protect waste and fraud happening in far-Left-wing NGOs,” he said in a statement. “AB 2624 can only be described as the ‘Stop Nick Shirley Act’ — a bill designed to silence citizen journalists exposing fraud and abuse of taxpayer dollars.” He added that “instead of fixing the fraud problems being uncovered, Sacramento politicians are trying to shut down the people exposing them.”

His full statement went further. “AB 2624 would allow activists and taxpayer-funded organizations to demand the removal of video evidence — even if it captures misconduct in plain view — and threatens journalists with massive financial penalties,” DeMaio said. “If this bill becomes law, the message is clear to every journalist in California: expose corruption and you will be punished. AB 2624 is an unconstitutional direct attack on transparency and the First Amendment – and it needs to be defeated.”

Shirley responded on X with a similar read of the situation. “California is trying to pass a bill that would criminalize investigative journalism with misdemeanors, $10,000 fines, imprisonment, and content takedown,” he wrote. He noted that “under AB 2624, government-funded entities like the Somali ‘Learing’ Daycare centers would be protected from being exposed if they operated inside California.”

The chilling effect here doesn’t even require enforcement.
A journalist considering an investigation into a taxpayer-funded immigration services organization now has to weigh the possibility that filming employees in a public space could trigger a written demand, a lawsuit, an injunction, and thousands of dollars in penalties. The organizations being investigated get a new tool for suppressing coverage that has nothing to do with the merits of the investigation. The question stops being “is the reporting accurate?” and becomes “did someone at the organization feel threatened enough to send a letter?”

That’s a standard built for abuse. The organizations with the most to hide are the ones most motivated to send the letter. And the penalty for ignoring it is a lawsuit, which costs money and time even if you win.

The bill passed the Assembly Privacy and Consumer Protection Committee on April 7 with an 11-2 vote, with Republican Assemblymembers Alexandra Macedo and Carl DeMaio casting the only no votes. It was then referred to the Assembly Judiciary Committee and is still working through the California Assembly.