According to TheRegister.com, the European Commission has launched a formal investigation into X under the Digital Services Act (DSA) over concerns that its Grok AI model could generate sexually explicit imagery, including sexualized images of children. The probe, announced by Commission executive vice-president Henna Virkkunen, follows public outcry and will assess whether X properly mitigated risks when deploying Grok in the EU. Officials noted that since discussions began, X has made changes, such as turning off image generation for non-paying users. The DSA allows fines of up to 6% of global annual turnover, which would be roughly $174 million for X based on its estimated $2.9 billion in annual revenue. Separately, the Commission has extended an existing December 2023 proceeding against X to cover the impact of switching its recommendation system to Grok.
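That fine ceiling is simple arithmetic, and worth a quick sanity check. Here's a minimal Python sketch: the 6% cap comes from the DSA itself, while the $2.9 billion figure is only the revenue estimate cited above, not an audited number.

```python
# Back-of-the-envelope check on the DSA fine ceiling for X.
# The 6% cap is set by the DSA; the revenue figure is the
# article's estimate, not a confirmed or audited number.

DSA_FINE_CAP = 0.06                    # max fine: 6% of global annual turnover
estimated_revenue_usd = 2.9e9          # X's estimated annual revenue (per the article)

max_fine = DSA_FINE_CAP * estimated_revenue_usd
print(f"Maximum DSA fine: ${max_fine / 1e6:.0f} million")  # -> Maximum DSA fine: $174 million
```

The real number would hinge entirely on what counts as "global annual turnover" for X at the time of any decision, so treat $174 million as an order-of-magnitude figure, not a prediction.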
The Core Issue
Here’s the thing: this isn’t just about one creepy feature. It’s about the fundamental obligation under the DSA for “very large online platforms” to proactively assess and mitigate systemic risks. The Commission’s statement is brutally clear—they’re investigating whether X treated the rights of Europeans, especially women and children, as “collateral damage.” That’s a powerful, damning framing. It suggests regulators believe X may have rushed Grok’s image-gen tools to market with inadequate guardrails, a move that looks reckless at best and deeply negligent at worst. And let’s be honest, given X’s chaotic trajectory since Musk took over, does anyone find that surprising?
Broader Than One Headline
What’s interesting is that the Commission has stressed this investigation is “much broader than the specific incident.” That earlier incident, detailed in reports from The Guardian, sparked the initial fury. But now regulators are looking at the whole process. Did X do a legitimate risk assessment before launch? What mitigation measures did they consider and discard? The fact that they quickly gated the feature behind a paywall after regulators came knocking isn’t a great look; it feels reactive, not proactive. This probe is a test case for how the DSA’s “systemic risk” framework applies to generative AI features baked into social platforms.
The Stakes and The Fines
So, what’s on the line? Potentially a lot of money. That 6% of global turnover figure is no joke. X has already been fined €120 million under the DSA for other violations. But the financial penalty is almost secondary to the operational one. The DSA gives the EU power to demand fundamental changes to how a service is run. Could they force X to submit future AI features for pre-approval? Or mandate specific, auditable content moderation systems? That kind of oversight is what tech giants fear more than a one-time fine. And it’s exactly why the U.S. Trade Representative is reportedly slamming the EU’s approach as “discriminatory.” We’re seeing a major regulatory philosophy clash play out in real time.
A Pattern of Problems
Look, this Grok investigation doesn’t exist in a vacuum. It’s an extension of the December 2023 proceedings, which were already examining X’s overall compliance. The Commission is basically connecting dots, arguing that the move to Grok-based recommendations itself could be a systemic risk. Think about it: if your core algorithm that surfaces content is powered by a model you can’t fully control, what does that mean for disinformation or illegal content? This is a company that’s dismantled much of its former trust and safety infrastructure. Now they’re leaning harder into AI. From a regulator’s chair, that probably looks like pouring gasoline on a fire they already asked you to put out. The outcome here will set a huge precedent for how AI integrates into social media—or if it can, without crossing the EU’s red lines.
