Reclaim The Net Feed

Von der Leyen Defends EU Censorship Rules Amid Criticism, Targets X, Meta, Apple, and TikTok Under DSA, DMA, and AI Act

EU Commission President Ursula von der Leyen has fired another shot across the bows of major US tech companies such as X, Meta, and Apple, as well as China's TikTok, stating that the bloc would proceed with enforcing its online rules "without fear." Among those rules are the Digital Services Act (DSA), long regarded by critics as a censorship law, as well as the AI Act and the Digital Markets Act (DMA). The reason the EU would feel any fear in the first place, prompting von der Leyen to offer assurances that investigations against the tech companies will continue, is the position the Trump administration has taken: that these EU rules are tools of censorship that also stand in the way of innovation in Europe. In a statement to Politico, von der Leyen claimed that the EU applies the rules "fairly, proportionally, and without bias," adding, "We don't care where a company's from and who's running it. We care about protecting people." But the treatment of X (and, to a lesser degree, Meta) compared with others under investigation seems to tell a different story. Elon Musk's association with the US administration has drawn various forms of pressure and vilification of both Musk and X in the EU, which is reportedly ready to fine his company one billion euros. X is accused of non-compliance with the DSA for not censoring content the EU deems "disinformation," "harmful," or "unlawful." Meanwhile, many other companies are being investigated for alleged breaches of the DMA and face significantly lower fines. Observers read this as evidence of political bias, despite von der Leyen's assurances that this cannot possibly be the case. That impression is further amplified by the fact that Meta could receive treatment similar to X's.
And that comes after Mark Zuckerberg publicly admitted that the previous US administration (the one Brussels heavily preferred to the current one) pressured Meta to censor via third parties like "fact-checkers," an admission that ultimately led the giant to drop its "fact-checking" program in the US, to the continuing chagrin of the EU. Also at odds with declarations of a "fair and unbiased" approach to everyone is the recently announced Democracy Shield initiative, allegedly needed to counter "disinformation," which mentions X and Meta by name.

UK Conservatives Target “Non-Crime Hate Incidents” in Crime Bill Amendment as Police Admit No Evidence of Impact on Crime

The UK's opposition Conservatives are hoping to push through an amendment to the Crime and Policing Bill that would end the practice of police recording and acting on "non-crime hate incidents" (NCHIs). NCHIs have been a source of much controversy in recent years, both for intimidating and suppressing lawful speech and for diverting significant police time and resources away from real crime toward largely trivial matters. Though they have been logged in their thousands, 13,000 last year alone, which translates to 30,000 hours of police time, NCHIs only began attracting mainstream media attention with some high-profile cases. More: Welcome to Britain, Where Critical WhatsApp Messages Are a Police Matter. Announcing the amendment, Shadow Home Secretary Chris Philp wrote this week in The Telegraph that the UK is "supposed to be the home of free speech – and a country where the police chase criminals, not law-abiding members of the public." And once the amendment is before parliament, Philp expects the vote to "smoke out" those MPs from the ruling Labour Party who prefer to control speech (under the guise of combating hate speech) rather than protect freedom of expression.
The Conservatives appear, for now, to be making free speech a focal point of their political strategy; party leader Kemi Badenoch also declined to mince words when commenting on NCHIs, referring to them as "wasted police time chasing ideology and grievance instead of justice." The Free Speech Union (FSU), a public interest group, welcomed Philp's announcement and provided a timeline of the introduction and strengthening of the NCHI phenomenon, singling out 2014, when the College of Policing issued the Hate Crime Operational Guidance, "which formalized the idea that NCHIs would be recorded (and retained) against individuals' names." That guidance provided a uniquely broad and vague "definition" of NCHIs: "Any non-crime incident which is perceived by the victim or any bystanders to be motivated by hostility or prejudice based on a protected characteristic: race or perceived race, religion or perceived religion (…)" It is worth quoting more perplexing statements from the guidance: "The victim does not have to justify or provide evidence of their belief, and police officers or staff should not directly challenge this perception. Evidence of the hostility is not required." Another thing apparently "not required" is for police to know what help, if any, the roughly 250,000 NCHIs recorded since 2014 have been in fighting hate crime. As The Telegraph wrote earlier this week, "most (police) forces have admitted they carry out no analysis of the data and so have little idea as to their effectiveness in detecting and preventing hate crime." Even if the Conservatives' amendment is adopted, the Crime and Policing Bill is burdened by other major issues, such as granting all UK police forces greater access to citizens' "driving license information" and laying the foundations for police use of more than 50 million driving license photos in facial recognition searches.

India’s National Medical Commission Mandates Facial Recognition and GPS Tracking for Medical Faculty

When the bell rings across India’s medical colleges on May 1, signaling a new day of lectures, dissections, and rounds, a new kind of observer will quietly take its place at the front of every classroom. Unlike students, it won’t scribble notes or ask questions. This observer doesn’t blink, doesn’t forget, and most importantly, doesn’t trust. It’s the latest mandate from the National Medical Commission (NMC): a facial recognition system (FRS) tethered to GPS location tracking, rolled out to ensure faculty attendance is logged down to the exact time and place. But this isn’t just about headcounts. It’s about who gets to watch, who must comply, and what happens when trust is supplanted by tracking. To the NMC, this shift is a stride toward modernization. The old Aadhaar-linked fingerprint systems, clunky and prone to manipulation, are being phased out in favor of tech that promises seamless integration and real-time oversight. However, for faculty across India, the message lands differently. “Forcing faculty members to share their real-time location is not only unjustified but also offensive,” wrote the Medical Teachers Association of Bundelkhand Medical College in a letter dated April 19. “We are professionals, not subjects of suspicion… The NMC is not a moral policing agency.” Their frustration is echoed widely. The concern isn’t about adapting to technology. It’s about the kind of power that technology now wields. GPS tracking doesn’t just confirm your presence; it charts your movement, stores your routines, and gradually reshapes what privacy means inside a workplace. Dr. Sarvesh Jain, president of the association, made the stakes crystal clear in his remarks to EdexLive. “If everyone is right, then Pegasus should be installed on all our devices. This is not about right or wrong, it’s about privacy, which is a Constitutional right. 
I may have ‘n’ number of secrets, and as long as I’m within the law, they’re my business.” For educators like Jain, this isn’t about hiding wrongdoing. It’s about resisting a creeping presumption of guilt that now comes with every login and swipe. To understand the undertow here, it helps to rewind a few years. Until 2020, medical education in India was overseen by the Medical Council of India (MCI), a somewhat unruly but independent body. In a sweeping reform aimed at rooting out inefficiencies and corruption, the MCI was dissolved, and the National Medical Commission took its place. The change, though administrative on paper, was more than just structural. “Earlier, we had the Medical Council of India, which was independent. Today, NMC is a body formed and appointed by the government, and is now acting like a surveillance agency,” said Jain. It’s a sentiment that gets at the heart of the disquiet. The shift to NMC has been accompanied by an unmistakable centralization of control. Regulation, once a system of peer review and academic standards, increasingly feels like a mechanism of enforcement. The classroom, once governed by collegiality and discretion, is beginning to resemble a monitored space. Not everyone sees the FRS mandate as a step too far. Supporters argue that something had to give. The MSc Medicine Association (TMMA), for example, believes the system is overdue. Their defense hinges on an uncomfortable truth: ghost faculty and fake attendance records have plagued Indian medical institutions for years.

Privacy Ends Where the Cell Tower Begins

This article is available exclusively to supporters of Reclaim The Net.

NSF Terminates Hundreds of “Misinformation”-Related Grants, Impacting Researchers Tied to Online Speech Flagging Initiatives like EIP and CIP

A large wave of funding cancellations from the National Science Foundation (NSF) has abruptly derailed hundreds of research projects, many of which were focused on so-called “misinformation” and “disinformation.” Late Friday, researchers across the country received emails notifying them that their grants, fellowships, or awards had been rescinded, an action that stunned many in the academic community and ignited conversations about the role of the government in regulating research into online speech. Among those impacted was Kate Starbird, a prominent figure in the “disinformation” research sphere and former Director of the University of Washington’s Center for an Informed Public. The Center, which collaborated with initiatives like the Election Integrity Partnership and the Virality Project, both known for coordinating content reporting to social media platforms, had ties to federal agencies and private moderation efforts. Starbird expressed dismay over the NSF’s move, calling it “disruptive and disheartening,” and pointed to a wider rollback in efforts to police digital content, citing reduced platform transparency and the shrinking of “fact-checking” operations. Grants that were cut included studies like one probing how to correct “false beliefs” and another testing intervention strategies for online misinformation. These projects, once backed by taxpayer dollars, were part of a growing field that often overlaps with content moderation and speech policing, a fact acknowledged even by Nieman Lab, which admitted such research helps journalists “flag false information.” The timing of the cancellations raised eyebrows. The NSF’s action followed a report highlighting how the Trump administration was reevaluating $1.4 billion in federal funding tied to misinformation research.
That investigation noted NSF’s involvement in these programs but gave no indication that revocations were imminent. The NSF stated on its website that the grants were being terminated because they “are not aligned with NSF’s priorities,” naming projects centered on diversity, equity, inclusion, and misinformation among those affected. A published FAQ further clarified the agency’s new direction, referencing an executive order signed by President Donald Trump. It emphasized that NSF would no longer support efforts aimed at combating “misinformation” or similar topics if such work could be weaponized to suppress constitutionally protected speech or promote preferred narratives. Some researchers, like Boston University’s Gianluca Stringhini, found multiple projects abruptly defunded. Stringhini, who had been exploring AI tools to offer users additional context about social media content (a method akin to the soft content warnings platforms deployed during the pandemic), was left unsure about the full scope of consequences for his lab. The NSF had long been foundational to this field, playing a key role in launching the initiatives that shaped how digital discourse was studied and potentially influenced. According to Starbird, about 90% of her early research was NSF-funded. She cited the agency’s vital support in forging cross-institutional collaborations and developing infrastructure for examining information integrity and technological design. The mass termination of these grants signals a pivotal shift in the federal government’s stance on funding initiatives that blur the lines between research and regulation of public speech. What some see as necessary oversight to prevent narrative enforcement, others view as a dismantling of essential tools used to navigate complex digital environments. Either way, the message from Washington is clear: using federal dollars to police speech, even under the guise of scientific inquiry, is no longer a priority.