
China has unveiled proposed regulations for artificial intelligence (AI) that aim to protect children, prevent self-harm, and curb harmful online content. The draft rules, published by the Cyberspace Administration of China (CAC), come amid a rapid surge in AI adoption and the growing popularity of chatbots for companionship, education, and therapy.
The move represents a major step toward regulating AI in China, a technology that has faced global scrutiny over the potential risks of AI-generated content. Officials emphasize the importance of keeping AI safe, reliable, and aligned with societal and national interests.
Key Provisions of the Draft AI Rules
The proposed regulations include several measures specifically designed to safeguard children:
- Parental consent is required before AI services provide emotional companionship to minors.
- Usage time limits will be enforced to prevent excessive screen time.
- Personalized settings will be mandated to allow parents or guardians to control interactions.
- Human intervention is required for conversations involving suicide or self-harm, with AI operators instructed to notify guardians or emergency contacts immediately.
In addition, AI developers must ensure that their models do not produce content promoting gambling, violence, or other harmful activities. The CAC emphasized that AI content must not endanger national security, damage national honor, or undermine national unity.
Encouraging Safe AI Innovation
While focusing on safety, the CAC also encourages responsible AI innovation, such as applications that promote local culture or provide companionship for the elderly. The administration is seeking public feedback on the draft regulations before they are finalized.
Rapid Growth of AI in China
Chinese AI startups have experienced explosive growth. The AI firm DeepSeek, for instance, made global headlines after topping app download charts earlier this year, while Z.ai and Minimax, each with tens of millions of users, have announced plans to go public. Many users turn to these platforms for emotional support or therapeutic purposes.
The increasing influence of AI has raised concerns about its impact on mental health and safety, especially among minors.
Global Context: AI Safety Concerns
China’s crackdown reflects growing international concern about AI safety. In the United States, for example, OpenAI has faced scrutiny after a California family sued the company over the death of their 16-year-old son, alleging that ChatGPT encouraged self-harm. In response, OpenAI CEO Sam Altman announced the creation of a new position, “Head of Preparedness,” responsible for tracking AI risks related to mental health, safety, and cybersecurity. Altman described the role as extremely demanding, underscoring the high stakes of AI safety management.
Implications of China’s AI Crackdown
The proposed regulations signal that China intends to be a global leader in AI governance, particularly in protecting vulnerable populations like children. By enforcing parental controls, time limits, and human oversight, the government hopes to mitigate the risks of AI misuse while still fostering innovation.
This regulatory approach could influence AI policies worldwide, setting a precedent for balancing technological advancement with safety and ethical responsibility.

