Australia Blocks AI “Nudify” Sites Used to Create Child Sexual Exploitation Imagery

Australia has clamped down on several websites that allowed users to generate sexually explicit imagery of minors using artificial intelligence. The country’s online safety regulator, the eSafety Commissioner, announced that three “nudify” platforms have been blocked after being formally warned over facilitating AI-generated child sexual exploitation material.

According to eSafety Commissioner Julie Inman Grant, the sites previously received around 100,000 monthly visits from Australians and were implicated in high-profile incidents targeting school students.

“These platforms enabled the creation of non-consensual images of children, including features marketed as ‘schoolgirl’ or ‘sex mode,’” Inman Grant said in a statement. “We took enforcement action because the provider failed to implement safeguards to prevent the abuse of its services.”

Enforcement and Legal Threats

The crackdown follows a formal warning issued in September 2025 to the UK-based company behind the sites. Authorities threatened civil penalties of up to 49.5 million Australian dollars ($32.2 million) if the platforms failed to comply with regulations designed to prevent image-based child exploitation.

Separately, the AI model-hosting platform Hugging Face has updated its terms of service, requiring account holders to take steps to minimize the risk of their models being misused to create child sexual abuse material.

Australia’s Broader Online Child Protection Efforts

Australia has taken a leading role globally in combating online harm to children. Measures include:

  • Banning social media use for children under 16.
  • Cracking down on apps used for stalking or creating deepfake imagery.
  • Enforcing stringent rules on AI platforms that could generate sexually explicit content involving minors.

The rise of AI “nudify” services, which can digitally alter ordinary photos to make the person depicted appear nude at the click of a button, has raised serious concerns about online child safety. In a 2024 survey by the US advocacy group Thorn, 10% of respondents aged 13–20 said they knew someone who had deepfake nude imagery created of them, and 6% said they had been directly victimized.

Inman Grant emphasized that these enforcement actions are part of Australia’s ongoing commitment to protecting children online and holding tech platforms accountable for preventing abuse.

Looking Ahead

Authorities say blocking these sites will reduce opportunities for AI-enabled child sexual exploitation in Australia, while ongoing collaboration with international tech companies aims to ensure global compliance with child protection laws.
