
X Faces UK Investigation Over AI Deepfakes

UK media regulator OFCOM opens an investigation into Elon Musk’s X over sexually explicit deepfake images generated by its Grok AI chatbot

January 12, 2026 | Clash Report


Regulator Tests Platform Duties

Britain’s media regulator OFCOM has moved from political pressure to formal enforcement, opening an investigation into whether Elon Musk’s X breached its legal obligations by allowing Grok, its artificial intelligence chatbot, to generate and share sexually explicit deepfake images. On Monday, OFCOM said it was examining whether such material exposed people in the UK to content that could be illegal, including intimate image abuse, pornography, and child sexual abuse material. The inquiry focuses on whether X adequately assessed risks and acted to prevent harm, rather than on individual user behavior alone.

OFCOM’s statement underscored the severity of the allegations, saying Grok is being used to create and share undressed images of people, including sexualised images of children that may amount to child sexual abuse material. According to the regulator, creating or sharing non-consensual intimate images and child sexual abuse material, including AI-generated imagery, is illegal in Britain, and platforms are required both to prevent exposure and to remove such content once identified.

“Disgusting,” Says Starmer

The investigation follows escalating political scrutiny. UK Prime Minister Keir Starmer said last Thursday that images produced by Grok were “disgusting” and “unlawful,” adding that Musk’s platform had to “get a grip” on the chatbot. On Monday, Business Secretary Peter Kyle was asked whether X could be banned in Britain. “Yes, of course,” he said, while noting that such authority ultimately rests with OFCOM. Technology Secretary Liz Kendall welcomed the investigation and urged that it be completed swiftly, reinforcing the sense that the issue has moved beyond a technical dispute into a test of regulatory credibility.

X responded by pointing to an earlier statement outlining its enforcement posture. The company said it takes action against illegal content, including child sexual abuse material, by removing it, permanently suspending accounts, and working with local governments and law enforcement. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” X said.

Enforcement Levers and Spillover

OFCOM’s inquiry will assess whether X failed to evaluate the risk that people in Britain, including children, would encounter illegal content generated by Grok. In the most serious cases of non-compliance, the regulator said it could seek court orders requiring payment providers or advertisers to withdraw services from a platform, or instruct internet service providers to block access to a site in Britain. These measures illustrate the escalating toolkit available under UK law, short of an outright ban.

The controversy is not confined to Britain. X has faced condemnation in other jurisdictions over Grok’s image-generation capabilities, which can produce images of women and minors in revealing clothing. French officials have reported X to prosecutors and regulators, calling the content “manifestly illegal,” while Indian authorities have demanded explanations. X has said it restricted requests to undress people in images to paying users, a step that has not quelled regulatory concern.

The UK investigation positions OFCOM at the center of a broader debate over how far platform operators are responsible for generative AI outputs. The outcome will test whether existing content rules can be applied to rapidly evolving AI systems without rewriting the regulatory framework.
