Reclaim The Net Feed

@reclaimthenetfeed

Losing Their Grip: Why Anti-“Misinformation” Crusaders Are Mourning the End of Control

In the brave new world of the University of Washington’s Center for an Informed Public (CIP), it seems like “informed” is synonymous with “watched.” Birthed to combat the wildfires of online “misinformation,” CIP and its partners – including the defunct Election Integrity Partnership (EIP) and the short-lived Virality Project – expected to be celebrated as defenders of truth. Instead, they became poster children for what happens when watchdogs get a little too cozy with power, diving into an experiment that teetered between public good and Orwellian oversight.

Election Integrity Partnership: A Marriage of “Good Intentions” and Government Influence

The Election Integrity Partnership, a coalition that included CIP as a key player, kicked off its operations with a noble-sounding mission: to shield our fragile electoral systems from the scourge of fake news. For the discerning reader, the term “integrity” in the name may raise eyebrows; it’s reminiscent of government programs cloaked in the language of virtue, their real work a little murkier. Partnering with government entities and social media giants like Facebook and then-Twitter, EIP set out to identify and “mitigate” misleading content related to elections. In other words, it assumed the job of selectively filtering out the lies, or as critics would say, the truths that didn’t toe the right political line.

For a while, EIP was in its element, functioning as a digital triage unit, purging the internet of what it deemed harmful content. But what started as “informational integrity” quickly became a federal hall monitor, policing citizens’ Facebook posts and Twitter threads with all the subtlety of a sledgehammer. Conservatives, in particular, saw this as more of a censorship scheme than a public service. Their view? EIP wasn’t there to inform – it was there to enforce.

The Consequences of Playing Speech Police

Predictably, the backlash came hard and fast. Between accusations of censorship, lawsuits, and subpoenas, EIP got hit with more legal troubles than a tech startup in a copyright infringement scandal. And when all was said and done, EIP disbanded, its ambitions buckling under the weight of public scrutiny and political pressure.

The New York Times, ever the mournful observer of lost social crusades, called it a tragedy for public discourse. It framed the dissolution as a loss for those who believe in “responsible” information regulation, i.e., those who think someone should be appointed arbiter of truth, as long as it’s the “right” someone. The lawsuit-laden disbandment sent a message: Americans are more than a little skeptical about government agencies and their academic friends lurking behind the scenes, flagging speech like a hall monitor on a power trip. The public isn’t too keen on playing along with institutional gatekeepers telling them which “facts” are allowed to stand.

CIP’s Retreat: Education Over Eradication?

With EIP gone, CIP has had to pivot. It has retreated from the frontlines of digital speech enforcement, now favoring a softer approach – “educating” the public on misinformation rather than erasing it outright. Translation: CIP now hosts workshops and seminars where it teaches researchers and civilians alike about the nature of disinformation, sidestepping its prior role as a social media referee.
This rebranding effort is essentially CIP’s way of saying, “We’re not here to censor, promise.” Yet the academic world’s “shift to education” sounds suspiciously like the fox retreating from the henhouse after getting caught. CIP’s pivot reflects the current climate, one in which watchdogs like it have to tread carefully or risk losing all influence. Now, they’re not shutting people up; they’re merely explaining why certain ideas are wrong, a move that feels less aggressive but still keeps CIP’s finger on the scale of public opinion.

The Larger Implications: Free Speech in the Crosshairs

CIP’s saga shines a harsh light on the deepening tensions between free speech advocates and so-called “disinformation” experts. On one side, you have entities like the New York Times wringing their hands, lamenting the “tragedy” of these anti-misinformation efforts falling apart. The Times warns of a future in which misinformation spreads unchecked, as though without EIP, social media will devolve into an apocalyptic pit of lies. On the other side, you have critics of censorship, those who see CIP’s previous activities as a government-endorsed grab at control, cloaked in the language of public safety.

Now, we find ourselves in a new chapter, with CIP toeing the line carefully, offering lessons in “awareness” rather than flagging posts. This so-called “nuanced understanding” might sound respectable, but it still hinges on a central belief: certain ideas are dangerous enough to warrant intervention, even if the means have shifted from banning to benign “educating.” In short, CIP may be keeping a lower profile, but its ambitions haven’t changed – they’ve merely gone underground.

So what do you get when you hand the keys to social discourse over to government-aligned bodies like EIP? For starters, the inevitable slide toward an overzealous surveillance state. Free speech advocates have been beating this drum for a while, and they aren’t wrong: schemes like EIP carry the perfect storm of potential for overreach and abuse. It’s the classic “trust us” move from government and corporate giants who assure the public that they’re only flagging content for “our own good.” But when a government body is allowed to sift through online conversations, the notion of “our good” quickly morphs into “their control.” The result? People start censoring themselves, fearing that one wrong post might put them on a watchlist or see them “fact-checked” into silence.

These watchdog groups claim to target misinformation, but they often mistake dissenting views for danger and critique for conspiracy. The very act of monitoring speech creates a chilling effect, where the public might think twice before posting on sensitive subjects. After all, who wants to risk getting flagged by an algorithm armed with both the moral zeal and clumsiness of a hammer trying to nail jelly to a wall?

Transparency and Accountability – Or the Lack Thereof

And then there’s the lack of transparency – a time-honored tradition in institutions that insist they know best. When EIP was in full swing, it wasn’t as if users got an email detailing who decided their post was a threat to democracy or what precise reasoning went into labeling it “misinformation.” Instead, decisions were made in rooms far from public view, with opaque policies and an ever-shifting definition of what “misinformation” even means.
Political or corporate interests could easily influence this moderation, and, surprise, surprise – with little oversight, the system quickly looks more biased than benevolent. The arbitrary and often political nature of these decisions only stokes public distrust, especially when it’s the very voices challenging authority that find themselves most frequently muzzled. It’s the internet equivalent of a teacher who can’t explain why certain kids always get detention – people quickly learn not to ask questions and to go along with the rules, but that doesn’t mean they believe in the fairness of the process.

Democracy’s Achilles’ Heel: Stifling Discourse in the Name of Truth

In democratic societies, free speech is a cornerstone. The ability to voice different viewpoints, even those that shake the system, is essential for a healthy public sphere. When bodies like EIP take it upon themselves to deem what’s acceptable for public consumption, we’re left with a sanitized marketplace of ideas – one in which only the ideas that align with sanctioned narratives get a seat at the table. If only certain perspectives survive the cut, we end up with voters fed a curated set of “truths,” unable to challenge, investigate, or even consider alternatives.

And it’s not just a hypothetical fear. History has repeatedly shown that the silencing of controversial or dissenting voices only deepens public division. Ironically, the very thing these “integrity” initiatives aim to prevent – public polarization – often worsens when people feel their speech is being filtered. With an overpowered referee deciding which facts to keep on the field, the game of democracy itself suffers.

The Slippery Slope: Setting the Stage for Future Censorship

The question becomes: once government-linked entities start moderating our conversations, where does it end? Today, it’s about “election integrity.” Tomorrow, it could be “economic stability” or “public health.” Every crisis invites a new round of justifications for more speech control. After all, if misinformation on elections is a threat to democracy, couldn’t misinformation on any number of other issues pose a similar threat? Accepting censorship in any form opens a Pandora’s box of future government interference, each intervention creating new precedents that make the next round of censorship feel more routine.

The free speech argument here is simple: even if an opinion is wrong, unpopular, or offensive, it deserves protection. The minute we concede that it’s acceptable to police ideas – especially when the policing is done by bodies connected to government interests – we make it all the easier for future, more dangerous limitations to slip into place.

The Real Effectiveness Question: Censoring Ideas or Fanning the Flames?

Then there’s the effectiveness issue. Does suppressing “misinformation” really work, or does it just make it more insidious? Efforts like EIP may well reduce the volume of “dangerous” content on mainstream platforms, but that content doesn’t just vanish. Ideas banned in one place tend to bubble up elsewhere – often in online echo chambers where censorship only serves to validate radical viewpoints, feeding a cycle of resentment and extremism. The disinformation crusade might actually be doing more harm than good, driving misinformation underground where it becomes even harder to address. The government’s digital eraser may scrub certain ideas from view, but it often intensifies belief among those already suspicious of authority.
For them, censorship itself becomes “proof” of suppression, amplifying distrust and cementing conspiratorial thinking. In trying to stamp out the “lies,” EIP and its ilk may have simply fueled the fire.

In the end, the dissolution of the Election Integrity Partnership is perhaps less a blow to public discourse than a win for the democratic spirit. As the Center for an Informed Public pivots from censoring to educating, we’re reminded that the battle against misinformation doesn’t require speech suppression. It requires trust in the public’s ability to sift truth from nonsense – a trust that, in a healthy democracy, should never be in short supply.

EU Tightens Social Media Censorship Screw With Upcoming Mandatory “Disinformation” Rules

What started out as the EU’s “voluntary code of practice” concerning “disinformation” – affecting tech and social media companies – is now set to turn into a mandatory code of conduct for the most influential and widely used ones.

The news was revealed by the Irish media regulator, specifically Paul Gordon, an official of its digital services arm, who spoke to journalists in Brussels. The EU Commission has yet to confirm that January will be the date when the current code is “formalized” in this way.

The legislation that would enable the “transition” is the controversial Digital Services Act (DSA), which critics often refer to as the “EU online censorship law,” and whose enforcement started in February of this year.

The “voluntary” code is currently signed by 44 tech companies. Should it become mandatory in January 2025, it will apply to those the EU defines as Very Large Online Platforms (VLOPs) – those with at least 45 million monthly active users in the 27-nation bloc. Currently, the number of such platforms is said to be 25.

In its present form, the DSA obligates online platforms to carry out “disinformation”-related risk assessments and reveal what measures they are taking to mitigate any risks those assessments reveal. But when the code switches from “voluntary” to mandatory, the obligations will grow to include other requirements: demonetizing the dissemination of “disinformation”; having platforms, civil society groups, and fact-checkers “effectively cooperate” during elections, once again to address “disinformation”; and “empowering” fact-checkers. This refers not only to spreading “fact-checking” across the EU member countries but also to making VLOPs finance these groups – despite the fact that many of the most prominent “fact-checkers” have been consistently accused of fostering censorship rather than checking content for accuracy in an unbiased manner.

The code was first introduced (in its “voluntary” form) in 2022, with Google, Meta, and TikTok among the prominent signatories. The rules originate from a “strengthened” EU Code of Practice on Disinformation based on the Commission’s Guidance issued in May 2021.

“It is for the signatories to decide which commitments they sign up to and it is their responsibility to ensure the effectiveness of their commitments’ implementation,” the EU said at the time – that would have been the “voluntary” element – while the Commission also stressed that it had not “endorsed” the code.

It appears the EC is now about to “endorse” the code, and then some – there are active preparations to make it mandatory.

Memes Under Siege: China’s Crackdown on Online Youth Dissent

The Cyberspace Administration of China (CAC) has initiated a sweeping two-month campaign to scrub content from the internet that it classifies as “harmful,” adhering to directives from Chinese Communist Party censors. On Thursday, a statement from the CAC detailed plans to eliminate social media posts and other online material that doesn’t align with the Party’s stringent views on what constitutes acceptable discourse.

A recent report by the Global Times, a state-controlled publication, disclosed that the focus of this draconian crackdown will include content that sensationalizes disasters and extreme events or spreads falsehoods regarding public policies and societal matters. This implies that any dissenting views about Chinese economic conditions under Xi Jinping’s regime, particularly those that are critical or negative, will be targeted and likely removed from online platforms. Notably, experts who have previously raised questions about Xi’s economic strategies have seen their analyses vanish from public view, suggesting that the culling of millions more social media entries should pose little challenge to the authorities.

The campaign extends beyond economic commentary, as Chinese officials have also expressed frustration over memes generated by a discontented populace that criticize the ongoing scandals in the housing market, high levels of youth unemployment, and the tepid recovery following the Wuhan coronavirus pandemic. Authorities are particularly intent on dismantling memes that paint the Communist Party as exploitatively viewing citizens merely as resources to be harvested.

Expressions of despair by the youth are evident in their online complaints, describing their era as the “garbage time of history” and equating their situation with the final, uneventful moments of a basketball game in which victory is impossible. The young, facing an unemployment rate that exceeded 21% before such data ceased being published, feel sidelined in a game they cannot win. Many young adults, feeling crushed by the system, have adopted the “lying flat” movement, rejecting societal expectations to partake in the corporate grind in favor of minimalistic living with their parents and performing odd jobs for small earnings.

The CAC’s latest announcement indicates it will target and erase any “negative” speech touching on sensitive issues such as housing, education, healthcare, and food safety. Moreover, the censorship extends to users accused of hurling “malicious insults” or those who “stigmatize regions, professions, and groups.” Such actions, according to the CAC, fuel pessimism and fear, potentially inciting hostility among different societal factions.

The directive also aims to tackle content that may amplify “occasional extreme incidents” or controversial events, along with any commentary that could be seen as stoking regional disparities or discrimination. The censorship drive will also address the spread of fabricated stories about public crises and disasters that could incite panic, covering everything from biased reports to complete fabrications designed to deceive the public.

Such comprehensive censorship activities are not new; around the same time last year, the CAC embarked on a mission against puns and unconventional language usage, which it claimed confused the public and undermined minors’ ideological values.
Dissidents, however, had turned to puns and similar wordplay as a method to subtly critique the regime and evade censorship. The CAC is confident it will not only address politically sensitive content but also purge the internet of “explicit and vulgar content.”

Just last week, the CAC also proposed regulations that would extend China’s “Great Firewall” to global satellite internet services like Starlink, demanding they comply with China’s censorship laws or face severe economic repercussions. This move could potentially bring international satellite communications under the purview of Chinese censors, reflecting the regime’s desire to extend its control beyond terrestrial digital communications.

When Text Becomes a Crime: How Transcribing Movies Led to Jail Time in Japan

Beware of transcribing movies – the task may be not just tedious but, at least in Japan, also dangerous. In that country, it can be considered a form of piracy, and it has recently resulted in the arrests of three people said to be working for the same “company.” However, reports about the case do not specify what kind of business entity this concerns – other than that money was earned in the process.

The charges against the suspects are based on their involvement in transcribing the movies Godzilla Minus One and Overlord III without first obtaining permission from the copyright holders. This allegedly went on for about a year, ending in February 2024.

While in much of the world media content “piracy” is mostly tied to video and audio, Japan has no fair use rule. For that reason, the fact that the transcriptions – text – were used to write online articles, which generated ad money, landed the three men in jail.

In the case of Godzilla Minus One, the use of information from the transcriptions contested by the rights holders includes the mention of character names, quotes, descriptions of scenes, and the like. The “Overlord III case” also involved “relevant images” being added to the articles, once again based on the transcription. This is considered a first-of-its-kind incident even in Japan.

CODA – a Japanese organization “for content holders to cooperate in taking measures against piracy” – revealed that the arrests happened on October 29. “This case was investigated by the Miyagi Prefectural Police, and CODA coordinated with the affected rights holders, which led to this crackdown,” the group announced.

CODA views the use of text obtained in this way as a “serious crime” on a par with what are known as “spoiler sites,” TorrentFreak writes. According to the Japanese organization, once a person reads text transcribed from a movie, their desire to “pay a fair price for content” gets “reduced.” And that conjecture is clearly enough to get people arrested.

Now, the police are treating this as a case of conspiracy involving “company employees and management” that allegedly attracted a lot of clicks, and consequently generated revenue, which the movies’ copyright owners – the Toho and Kadokawa corporations – believe is rightfully theirs.

Canada’s Digital ID Drama Heats Up as Regulators Sidestep Parliament

Canadian regulators plan to move ahead with introducing a national digital ID without parliament’s involvement. Leaving parliament out of approval and oversight is sure to add to the existing controversy around digital ID, which has in the past been criticized and even rejected by a number of Canadian MPs and parliamentary committees. That opposition, on the other hand, might explain why the regulators would rather take a route that bypasses lawmakers, however risky such maneuvering is for the proper democratic process. Critics are now calling this (another) example of the Canadian Liberal government’s overreach.

Reports about these goings-on stem from Shared Services Canada (SSC), a government IT agency, recently announcing how work on setting up a digital ID system for the whole country was progressing, while presenting it as essentially no different from current forms of obligatory ID (for instance, Canada’s equivalent of the social security number in the US).

But opponents in parliament, and beyond, have for years consistently reiterated that the scheme is fraught with dangers not comparable, in either depth or breadth, to those affecting traditional ID systems. These dangers range from data security and the cost of implementation to the various ways centralized databases containing people’s most sensitive personal information can be abused – from security breaches to the risk of digital IDs being turned into effective tools for government mass surveillance and control of the entire population’s behavior.

SSC and other digital ID backers address these issues almost in passing, while selling the benefits to the public as more convenience via unified access to government services, and even as something “empowering” citizens. However, what the most prominent individuals and organizations pushing for global digital ID adoption (like Bill Gates, Tony Blair, the EU, and the WEF…) present as a way to usher in more equity and equality is seen by critics as creating exactly the opposite effect. “Segregation and discrimination” is how one report out of Canada put it, the context being the recent Covid vaccine “passports” and the treatment received by citizens who decided against taking the jab.