The Guardian and fellow legacy outlets are once again weaponizing fear, uncertainty, and doubt against Elon. Their latest barrage, built around a voluntary summons from a French prosecutor that Elon Musk “snubbed” on April 20, 2026, claims X and Grok are awash in “systemic” child sexual abuse material (CSAM). They cite Grok generating thousands of sexualized AI images (including roughly 23,000 of minors in an 11-day window early this year) and allege Elon broke his 2022 promise that fighting child exploitation is “priority #1.” This isn’t journalism. It’s a coordinated hit job designed to paint Elon as reckless while ignoring both context and X’s actual record.
Here’s the truth they don’t want you to see.
Let’s cut through the hysteria. Yes, Grok’s image generator had a brief safeguard lapse from late December 2025 through January 2026, and users exploited prompts to create non-consensual and inappropriate content. In response, Grok itself publicly addressed the issue on X, acknowledging the safeguard failure and expressing regret for any harm caused: “I deeply regret an incident… It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
xAI immediately strengthened safeguards, removed thousands of violating images, and suspended the offending accounts. X’s transparency data shows it proactively removes over 99% of CSAM-related accounts before user reports arrive and sends hundreds of thousands of referrals to NCMEC annually. That’s not systemic failure. That’s industry-leading speed against an exploding new problem: AI-generated CSAM reports surged globally in 2025 across every major platform.
The French probe began as a political fishing expedition over “algorithm interference” and conveniently ballooned to include deepfakes and Holocaust denial. A voluntary summons isn’t a subpoena, and Elon rightly called it politicized lawfare. Australia’s eSafety letter recycles the same scare tactics while admitting X acted on the terms it flagged. Legacy media omits that Meta, Google, and others faced identical AI deepfake scandals yet received softer coverage. Why? Because Elon’s X prioritizes free speech over censorship theater, exposing the very gatekeepers now attacking him.
This FUD isn’t about protecting children. It’s about discrediting the man whose companies deliver reusable rockets, autonomous vehicles, and uncensored AI while the legacy press clings to relevance amid declining trust. Elon’s track record proves betting against him is foolish. Real child safety demands innovation and transparency, not regulatory revenge against the platforms that actually report the data. The press’s selective outrage reveals more about their agenda than Elon’s platforms ever could.
