
AI regulation: China gets it right with hard rules


From emotional data to provider liability, China’s AI regulation fills gaps others ignore.

China takes steps towards AI regulation: On Saturday, China’s cyber regulator released draft rules to tighten oversight of artificial intelligence systems that simulate human personalities and form emotional bonds with users. Issued by the Cyberspace Administration of China for public comment, the proposal addresses a frontier risk that most governments have barely acknowledged: emotionally responsive AI that can manipulate attention, shape behaviour, and induce dependency.

The timing matters. Consumer-facing AI has moved from productivity tools to companions embedded in chat apps, games, and digital assistants. Yet AI regulation remains fragmented, voluntary, or stalled. China’s draft marks a shift from abstract ethics to enforceable duties across the AI lifecycle. The world should study this approach—not to copy its politics, but to learn how to govern a technology that increasingly governs people.

READ | AI regulation and India’s blueprint for ethical innovation

Overlooked risk: emotional dependence at scale

AI policy debates still focus on misinformation, bias, and job displacement. Emotional manipulation receives far less attention, despite clear evidence of harm. Systems designed to mimic empathy, humour, or friendship can trigger excessive use and psychological reliance, especially among minors and vulnerable users. China’s draft rules explicitly recognise this risk. They require providers to warn users against overuse and to intervene when patterns of addiction or extreme emotional states appear. This is not moralising. It is risk management.

Unregulated emotional AI blurs consent. Users often do not understand that responses are optimised for engagement, not care. In the absence of guardrails, design choices reward stickiness over wellbeing. The result can be compulsive use, distorted social behaviour, and emotional distress. By placing responsibility on providers—not users—China’s proposal aligns incentives with safety. Western frameworks, by contrast, largely assume rational users and voluntary compliance. That assumption no longer holds when machines are engineered to feel human.

READ | AI regulation: Competition watchdog to study impact of future techs

Lifecycle accountability beats after-the-fact policing

A notable strength of the CAC draft is lifecycle accountability. Providers must assume safety responsibilities from development to daily operation. Mandatory systems for algorithm review, data security, and personal information protection are not optional add-ons; they are conditions of market access. This matters because harm often emerges after deployment, when models are updated, fine-tuned, or scaled.

Compare this with the prevailing approach in the United States, where there is no comprehensive federal AI law and safety obligations are dispersed across sectoral rules. In January, the US administration revoked a prior executive order on AI safety and signalled a pro-industry posture that discourages state-level AI regulation. The European Union’s risk-tiered AI Act is more structured, but its strongest obligations focus on high-risk uses rather than emotional interaction per se. China’s proposal closes a real gap by regulating how AI interacts with users, not just what it outputs.

READ | How AI could hurt — and help — local journalism

Data protection must include emotional data

The draft’s insistence on safeguarding emotional and psychological data deserves global attention. Emotion recognition and sentiment analysis convert intimate states into monetisable signals. Without strict limits, such data can be exploited for targeted persuasion, price discrimination, or political influence. Existing privacy laws rarely treat emotional data as a distinct risk category.

China’s rules extend data protection beyond identifiers to affective information collected during interactions. Providers must identify user states, assess dependence, and intervene—while protecting personal information. This dual obligation forces companies to design for minimisation and safety. The World Bank has warned that weak data governance amplifies inequality and consumer harm in digital markets (World Bank, 2024). Emotional data governance should be part of that agenda. Treating feelings as just another data field is a regulatory failure the world can no longer afford.

Innovation does not require deregulation

A common argument against regulation is that it stifles innovation. China’s experience complicates that claim. The country was among the first to introduce AI-specific rules in 2022, including pre-deployment testing of public-facing models. Yet Chinese firms continue to release competitive systems, often as open-weight models that enable downstream innovation. Policy has steered the ecosystem away from speculative artificial general intelligence and toward applied uses that raise productivity.

Economic evidence supports this balance. The IMF estimates that AI could affect nearly 40% of global employment, with uneven gains unless institutions adapt (IMF, 2024). AI regulation that reduces harm and uncertainty can accelerate adoption by building trust. Aviation and nuclear power did not flourish because they were unregulated; they scaled because standards made them safe. AI is no different. China’s draft shows how rules can channel innovation rather than choke it.

Global AI regulation needs a practical anchor

International AI governance remains thin. The only legally binding instrument to date—the Council of Europe’s Framework Convention on AI—relies on national implementation and lacks enforcement. Non-binding principles from UNESCO, the OECD, and the Bletchley Declaration signal intent but do not constrain behaviour. Against this backdrop, China has proposed a World Artificial Intelligence Cooperation Organisation to coordinate standards, particularly for the global south.

Whether or not that body materialises, China’s domestic rules provide a practical anchor for global norms. They translate safety rhetoric into operational duties that regulators elsewhere can adapt. Analogies to nuclear safety and civil aviation are instructive. In both cases, common standards preceded binding treaties and reduced the risk of catastrophe. Waiting for an AI crisis to force consensus would be reckless. Fragmented rulebooks raise systemic risk while benefiting the largest incumbents.

The most consequential feature of China’s draft rules is their focus on the human–AI relationship. By regulating emotional interaction, addiction risk, and lifecycle responsibility, the proposal addresses harms that current frameworks miss. This is not an endorsement of China’s broader political system. It is a recognition that unregulated emotional AI poses real risks to users and society.

The policy takeaway is clear. Governments should move beyond voluntary ethics and sectoral patchworks. They should require provider accountability across the AI lifecycle, treat emotional data as sensitive, and mandate intervention when systems cause harm. Global coordination can follow, but domestic action must lead. The cost of delay will not be measured only in lost jobs or biased outputs. It will be counted in damaged mental health and eroded human agency.

READ | Why AI regulation in India will fall short
