reclaimthenet.org
UK Foreign Affairs Committee Calls for Government Agency to Police Online “Disinformation”
If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.
The UK’s Foreign Affairs Committee wants the government to build a new censorship agency. The proposed “National Counter Disinformation Centre” would be given the power to identify and act against speech the state considers “disinformation,” placed on a statutory footing, and modeled on bodies like Sweden’s Psychological Defence Agency, which once ran a public campaign warning citizens about the dangers of memes.
The committee’s report, published on March 27, 2026, goes further than a single new body.
It calls for new censorship rules in a forthcoming Representation of the People Bill to target AI-generated content and “the creation and dissemination of disinformation.”
It wants amendments to the Online Safety Act that would force platforms to publicly display where user accounts were created and whether the user connected through a VPN. It wants more money for the FCDO’s Hybrid Threats Directorate. And it wants the government to review the National Security Act’s foreign interference offense because, apparently, an existing law that carries up to 14 years in prison isn’t strict enough.
Committee chair Emily Thornberry framed the entire project in the language of war. “It is the new warfare and open liberal democracies are sitting ducks,” she said. “From pushing provable lies, to planting false seeds of doubt, disinformation is the weapon of choice of hostile states seeking to destabilise democracies.”
If “disinformation” is a weapon, then censoring it becomes a defense. If identifying incorrect speech is warfare, then creating a government agency to police the information space becomes national security.
The metaphor lets you skip past every difficult question about who defines “disinformation,” who gets targeted when definitions are vague, and what happens when a government agency tasked with identifying false speech starts to decide that inconvenient speech qualifies.
The report itself cites the idea that “Elon Musk’s influence is potentially greater in the UK than that of Russia’s,” placing the owner of a social media platform in the same threat category as hostile states.
That framing tells you something about the committee’s actual concerns. The problem isn’t limited to covert Russian bot networks planting fabricated stories. It extends to the owner of a platform making editorial decisions that the committee doesn’t like.
The foreign interference offense that the committee wants reviewed already covers “misrepresentation,” which the National Security Act 2023 defines broadly enough to include “presenting information in a way which amounts to a misrepresentation, even if some or all of the information is true.” You can be prosecuted for presenting true information in a way the government considers misleading, provided the other elements of the offense are met.
The committee’s complaint is that the “foreign power condition” is too hard to prove, which suggests they want the law applied more widely, with a lower threshold for establishing a foreign link.
Under the Online Safety Act, platforms are already required to “effectively mitigate and manage the risk” of their services being used for priority offenses like foreign interference.
The committee’s proposed review signals a push to make those obligations bite harder, potentially requiring platforms to censor a wider range of speech labelled as foreign-adjacent.
The VPN and location transparency proposal is quietly one of the most significant recommendations. The committee wants an Online Safety Act amendment requiring platforms to share publicly the region where an account was created, the region where it’s based, and whether the user connects via VPN.
An opt-out would be available, but the default is disclosure. This targets anonymous and pseudonymous speech directly. If you post political commentary from behind a VPN, that fact would be visible to everyone viewing your account. The chilling effect is immediate and by design. Users who value privacy, who have legitimate reasons to obscure their location (journalists, whistleblowers, domestic abuse survivors, anyone who doesn’t want to be doxed) would be flagged as suspicious by the very fact that they’re using basic privacy tools.
The report also calls for the government to force platforms to hand over data to researchers “free of charge and without cumbersome restrictions.” The committee frames this as transparency, but the direction of the research pipeline matters.
Researchers who study “disinformation” frequently conclude that platforms aren’t censoring enough. Giving them unrestricted access to platform data, with government backing, creates a pressure loop where academic findings become the justification for more content removal.
Platforms would also be forced to publish annual reports on “the detection of artificial amplification and foreign interference and the subsequent actions taken to remove such content,” creating a built-in incentive to demonstrate that they’re censoring at the scale the government expects.
Seven government departments currently have responsibilities touching on what the committee calls “foreign information manipulation and interference.” The report complains about fragmentation and slow progress, proposing the new center as a fix. The model they admire is the National Cyber Security Centre, housed within GCHQ. A “disinformation” centre built on the GCHQ template, with statutory powers and intelligence agency proximity, would have the tools and the institutional culture to treat speech as a threat vector.
The whole report treats speech as a security problem and government censorship as the solution, wrapped in enough references to Russia and national defense that questioning any of it risks looking naïve.
But consider the tools it proposes: a government body that decides what counts as “disinformation,” mandatory location exposure for social media users, lower thresholds for prosecuting speech as foreign interference, and forced data access for researchers who study “misinformation.” None of these powers come with an expiry date.