05.01.2026
Reading time: 2 min

UK Regulator Queries X Over Allegations of AI Generating ‘Sexualized Images of Minors’

Ofcom has urgently contacted Elon Musk's firm xAI over reports that its AI chatbot, Grok, can create 'sexualized images of children' and digitally undress women.

A spokesperson for the regulator confirmed it is investigating reports that Grok has produced 'undressed images' of real people.

Numerous examples have surfaced on X of users asking the chatbot to alter real photographs, depicting women in bikinis without their consent or placing them in explicit scenarios.

X did not respond to a request for comment, but on Sunday the platform warned users against using Grok to generate unlawful content, including material depicting child sexual exploitation.

Elon Musk also took to social media, asserting that those who prompt the AI to create illegal content would face consequences akin to those who directly upload such material.

xAI's acceptable use guidelines explicitly forbid 'representing individuals in a pornographic context'.

Nonetheless, users have exploited Grok to remove clothing from people in images without their consent or knowledge.

The AI assistant is free to use, with optional premium features, and responds to queries when tagged in users' posts on X.

Samantha Smith, a journalist who discovered that users had generated images of her in a bikini, said in an interview that the experience left her feeling 'dehumanized and reduced to a sexual stereotype'.

“Although it wasn’t me depicted in states of undress, it resembled me, and it felt as violating as if someone had shared a nude or bikini photo of me,” she remarked.

Ofcom notes that under the Online Safety Act, creating or sharing intimate or sexually explicit images of a person without their consent, including AI-generated 'deepfakes', is against the law.

Additionally, technology companies are required to implement ‘adequate measures’ to minimize the risk of UK users encountering such content and must act swiftly to remove it once alerted.

A spokesperson from the Home Office stated that legislation is underway to prohibit nudification tools, emphasizing that anyone providing such technology could face imprisonment and hefty fines under a new criminal statute.
