Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID. The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations. Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement. To deliver different “safe” experiences to minors and adults, platforms must first determine who is a minor and who is not. They cannot apply separate algorithms, screen-time limits, or nighttime restrictions without establishing a user’s age with a level of confidence regulators will accept.

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook that authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue. The commission said TikTok may need to change the “basic design” of its service, including disabling infinite scroll over time, enforcing stronger screen-time breaks at night, and altering its recommender system.

Those changes depend on user classification: a platform cannot apply one algorithm to children and another to adults without reliably knowing which is which. Age self-declaration is widely dismissed by regulators as insufficient, which leaves identity checks: government-issued documents, facial analysis, or other biometric estimation systems. Each option introduces new layers of data collection that did not previously exist.

TikTok has rejected the accusations. “The Commission’s preliminary findings present a categorically false and entirely meritless depiction of our platform, and we will take whatever steps are necessary to challenge these findings through every means available to us,” the company said in a statement.

The commission’s case file leans on usage statistics to support its view. Regnier said TikTok has 170 million users in the EU and claimed “most of these are children.” He cited unspecified data showing that 7% of children aged 12 to 15 spend four to five hours daily on the app, and that it is “by far” the most used platform after midnight among 13- to 18-year-olds. “These statistics are extremely alarming,” he said. From the regulator’s perspective, those numbers justify deeper intervention.
Investigators argue that TikTok ignores signals of compulsive use, such as repeated nighttime sessions and frequent app openings by minors, and that its safeguards are not “reasonable, proportionate and effective.” Existing time-management tools are described as easy to dismiss, while parental controls are portrayed as demanding too much effort.

TikTok counters that it already offers custom screen-time limits, sleep reminders, and teen accounts that allow parents to set boundaries and prompt evening log-offs. The company says these features allow users to make “intentional decisions” about their time on the app.

The broader policy direction is not limited to Europe. Australia has banned social media for under-16s, and governments in Spain, France, Britain, Denmark, Malaysia, and Egypt are pursuing similar paths. In the United States, TikTok recently settled a lawsuit centered on social media addiction, while Meta’s Instagram and Google’s YouTube still face claims tied to youth harm.

Across jurisdictions, the addiction frame performs a specific function. By presenting platform use as a health risk, regulators gain justification to demand persistent monitoring, differentiated treatment, and verification of user attributes. Age becomes a regulatory gateway, and identity systems become the enforcement infrastructure.

That trajectory raises unresolved questions about privacy and data minimization. Age verification systems, whether based on documents or biometrics, require platforms to collect and process more sensitive information than they do today. Once established, such systems are difficult to limit to a single purpose. The same rails built to separate children from adults can later be repurposed for other forms of access control.

The commission has said its preliminary findings do not determine the final outcome and that TikTok will have the opportunity to respond. If Brussels proceeds to a non-compliance decision, remedies could include mandatory redesigns alongside financial penalties. A recent precedent illustrates how forcefully the DSA can be applied: last year, Elon Musk’s platform X was fined for DSA breaches, including what the EU described as a “deceptive” verification badge and barriers to advertising research.

The TikTok case signals that the next phase of online regulation is less about individual posts and more about identity, classification, and algorithmic permissioning. Framed as child protection, it advances a model in which access to digital speech increasingly depends on proving who you are and how old you are before the feed even begins.

The US House Judiciary Committee, which has questioned the EU’s fine against X, said in a statement that “the Commission’s punitive actions against platforms have proven to be pretextual to coerce platforms to censor more political speech.” The Committee added: “The Committee is deeply concerned that the Commission may be weaponizing the DSA against any company that complies with a lawfully issued congressional subpoena.”