Elon Musk’s X just became the first platform ever fined under the EU’s landmark Digital Services Act (DSA), and despite what Musk and his political allies are already claiming, this is not about “Brussels hating free speech.” It is about a set of shady, deceptive business practices that break EU law and make it easier for scammers and liars to manipulate the platform.
What the EU is actually punishing
The headline number is attention‑grabbing: a 120 million euro fine, the first DSA penalty against any major platform. But the more important story is why X was fined. EU officials say the platform breached key transparency obligations in three main areas: its blue check “verification” system, its opaque advertising practices, and its decision to keep independent researchers blocked from accessing crucial public data.
Put simply, this is not a case about the EU ordering X to take down specific posts or banning certain viewpoints. It is about how X designs its products and systems in ways that mislead users and hide information that EU law requires to be public.
1. Blue check deception
Before Musk’s takeover of Twitter, a blue check carried a simple meaning: this account is who it says it is, and has been vetted by the platform. That verification layer wasn’t perfect, but it helped ordinary users quickly distinguish genuine public figures, journalists, and institutions from impostors.
Under Musk, that system was flipped on its head. Now, almost anyone willing to pay can buy “verification” without meaningful identity checks. Regulators argue this change actively misleads users about authenticity. When you see a blue check today, you can no longer assume the account is credible or even located where it claims to be.
We have already seen the consequences. Blue‑checked accounts posing as grassroots MAGA activists have turned out to be operated from abroad, exploiting the borrowed legitimacy of that little icon to push polarizing narratives. (This pattern is consistent with what researchers have documented about foreign‑run political accounts using paid verification to appear more trustworthy.) This is exactly the kind of misleading design the DSA is meant to address.
2. Ad transparency deficiencies
The DSA also cares about how platforms make money. Very large online platforms like X are required to maintain a clear, searchable database of all the ads they run: who paid for them, whom they targeted, and when they ran. That database is supposed to be usable by journalists, watchdogs, and ordinary citizens who want to understand who is trying to influence them and how.
According to the European Commission, X’s ad repository falls far short of that standard. Officials say it is incomplete and riddled with design barriers that make it extremely difficult to audit. If you cannot easily see who is paying for which ads and targeting which communities, it becomes much harder to spot scams, manipulative political campaigns, or covert influence operations. That opacity is central to why X is facing this fine.
3. Researcher access
One of the most important parts of the DSA is its requirement that big platforms provide vetted researchers with access to public platform data. The idea is simple: researchers should be able to inform the public about what is happening online, and platforms should not be allowed to grade their own homework.
X has taken the opposite path. Since Musk’s takeover, the company has sharply restricted data access and made it prohibitively expensive for independent researchers to work with public data that was once comparatively accessible. The Commission argues this undermines the DSA’s core oversight system and keeps regulators, journalists, and academics in the dark about how Musk’s decisions affect users and democratic processes.
The “censorship” lie vs. reality
Politicians like JD Vance and Marco Rubio are framing this enforcement action as proof that the EU wants to silence dissenting voices and crush conservative speech online. Musk and his allies regularly lean on this narrative, using “free speech” rhetoric to deflect from scrutiny of his product choices and revenue strategies.
But if you look at what the EU has actually done here, the censorship narrative falls apart. The decision does not demand that X remove particular posts, block specific viewpoints, or demote entire political ideologies. Instead, it targets deceptive interface design, missing information about advertising, and restrictions on independent oversight—issues that would be familiar in any consumer protection or financial regulation case.
If a bank mislabels financial products, or a food company obscures ingredients and safety data, regulators do not call that “censorship.” They call it fraud, deception, or a breach of consumer law. Digital platforms, especially those with huge power over our information environment, are finally being treated more like other powerful industries.
So when you see conservatives yelling that this decision is “censorship,” remember what is actually being scrutinized: a business model built on confusing users, obscuring the money trail, and shutting out watchdogs. We should all be demanding that platforms be held to the same standards we expect from banks or food companies: clear labels, no tricks, and real accountability when they break the rules. That is not censorship; it is the beginning of treating the information environment as the critical public infrastructure it has become.






