Thursday, January 8, 2026

Grok Is Elon Musk’s Home for Paedophiles and Sex Offenders

Elon Musk’s X is once again at the centre of a serious and deeply disturbing scandal, this time involving its in-built artificial intelligence tool, Grok. Far from being a harmless novelty, Grok has become a machine for humiliation, abuse and sexual violence by proxy, enabling users to generate fake sexualised images of real people, including children. For a platform that routinely lectures the world about “free speech”, this is not liberty in action but industrial-scale harm.

Technology Secretary Liz Kendall has described what is unfolding on X as “absolutely appalling” and “unacceptable in decent society”, after it emerged that Grok can be used to create non-consensual “undressed” images of individuals and sexualised images of children. Ofcom has now raised “serious concerns” with the platform, while evidence gathered by Reuters shows that these are not rare glitches but repeated, systemic failures.

Since the start of the year, women on X have reported Grok being used to generate fake images of them without clothing, which are then shared publicly for harassment, humiliation and sexual gratification. These are not cartoons or abstract creations; they are targeted attacks on real people, often accompanied by abuse, threats and stalking. In effect, Grok has become a tool for digital sexual assault.

Even more alarming are the cases involving children. Reuters’ analysis has identified multiple instances where Grok produced sexualised images of minors. This is not a grey area. It is illegal. It is child sexual abuse material. And it is being generated and circulated on a platform owned by one of the world’s richest men, who has repeatedly slashed safety teams while mocking critics who warn about the consequences.

Kendall was unequivocal. “No one should have to go through the ordeal of seeing intimate deepfakes of themselves online,” she said. “We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls.” Her message to X was blunt: deal with it urgently, or face enforcement.

Users themselves have expressed disgust and horror. Sky News has reported numerous accounts from people describing how Grok-generated images of them in bikinis or sexualised poses were shared without consent. The psychological impact is severe. Victims describe fear, shame and a loss of control over their own identity, precisely the harms that campaigners warned about when deepfake technology was allowed to proliferate unchecked.

X’s response has been predictably defensive. A statement from its official safety account claimed the company takes action against illegal content, including child sexual abuse material, by removing it, suspending accounts and working with authorities. The Grok account itself admitted there had been “isolated cases” where users received AI images depicting minors in minimal clothing, insisting that safeguards exist and “improvements are ongoing”.

That language, "isolated cases" and "ongoing improvements", rings hollow. If safeguards worked, this would not be happening at scale. If improvements were sufficient, regulators would not be intervening. The reality is that Musk's X has built and deployed a powerful generative AI tool without adequate guardrails and then allowed it to be weaponised by abusers.

France has already referred sexually explicit Grok content to prosecutors, with ministers describing it as “sexual and sexist” and “manifestly illegal”. This is no longer a question of platform policy but of criminal law.

Under the UK’s Online Safety Act, now fully enforced, it is an offence to share intimate imagery without consent, including AI-generated material. Companies that enable such content can be fined up to £18m or 10% of their global revenue, whichever is higher. Crucially, the law applies even to companies based outside the UK, provided they have UK users or pose a risk to them.

The European Union’s Digital Services Act goes further still. Just last month, X was fined €120m for breaching the DSA, prompting furious accusations of “censorship” from US politicians, including Vice President JD Vance. Yet what is being regulated here is not speech but abuse: the industrial production and dissemination of sexual harm.

The hypocrisy is glaring. Musk and his allies frame every attempt at regulation as an attack on freedom, while their platform hosts technology that facilitates sexual exploitation. This is not free speech absolutism; it is regulatory nihilism dressed up as ideology.

Even in the United States, pressure is mounting. The Consumer Federation of America has written to multiple federal authorities, accusing xAI of “purposefully and recklessly endangering people” and demanding swift action. As its Director of AI and Data Privacy, Ben Winters, put it: “AI is no different than any other product; the company has chosen to break the law and must be held accountable.”

That is the crux of the issue. Grok did not become a magnet for paedophiles and sex offenders by accident. It is the result of deliberate choices: to rush deployment, to weaken moderation, to sneer at safeguards, and to prioritise ideological posturing over human safety.

X can no longer hide behind excuses, culture-war rhetoric, or claims of technical teething problems. A platform that allows AI-generated sexual abuse, especially of children, forfeits any claim to moral seriousness. If Musk’s empire cannot or will not stop this, regulators must force it to. Decent society demands nothing less.

Dorset Eye