While the EU focuses on fundamental rights and the US on market-driven safety, China treats AI governance as an extension of state security. The regulatory pace in Beijing has been blistering, resulting in the world's first enforceable laws specifically targeting recommendation algorithms, deepfakes, and generative AI.
For global enterprises, understanding this framework is critical not just for market entry, but for recognizing the "Beijing Model" that is influencing regulations in the Global South.
## The Vertical Strategy: Regulation by Application
China does not have a single "AI Act" yet. Instead, it issues targeted regulations for specific AI applications as they mature.
### 1. The Algorithm Registry (2022)
Under the Provisions on the Management of Algorithmic Recommendations, companies must file details of their algorithms with the Cyberspace Administration of China (CAC).
- The Mandate: Service providers with "public opinion properties" or "social mobilization capabilities" must register.
- What is Filed: Basic logic of the algorithm, training data sources, and self-assessment reports on security.
- Impact: Companies like ByteDance (Douyin/TikTok), Tencent, and Alibaba have effectively "opened the black box" to state regulators.
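The categories of information filed with the registry can be sketched as a simple record. This is an illustrative data structure only, not the CAC's actual filing schema; all field names are assumptions:

```python
from dataclasses import dataclass, asdict

@dataclass
class AlgorithmFiling:
    """Hypothetical record mirroring the filing categories described
    above: basic logic, training data sources, and a security
    self-assessment. Field names are illustrative."""
    provider: str
    algorithm_name: str
    basic_logic: str                  # plain-language description of how it ranks/recommends
    training_data_sources: list[str]  # where the training data comes from
    security_self_assessment: str     # reference to the self-assessment report

filing = AlgorithmFiling(
    provider="ExampleCo",
    algorithm_name="feed-ranker-v2",
    basic_logic="Ranks short videos by predicted watch time and interaction rate.",
    training_data_sources=["user interaction logs", "licensed content metadata"],
    security_self_assessment="self-assessment-2024-Q1.pdf",
)
print(asdict(filing)["algorithm_name"])
```

The point of the exercise is exactly what the "black box" line suggests: the regulator, not the end user, receives this structured view of the system.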
### 2. Deep Synthesis Provisions (Deepfakes)
Enforced since January 2023, these rules target AI that generates or manipulates content (video, voice, text).
- Watermarking: Mandatory "explicit" labeling for AI-generated content.
- Consent: Strict requirements for using biometric data (face/voice) of real people.
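The "explicit labeling" requirement for text output can be illustrated with a minimal sketch. The label string and helper below are hypothetical, not a prescribed format:

```python
AI_LABEL = "[AI-generated]"  # hypothetical visible label; the rules mandate
                             # explicit labeling but not this exact string

def label_synthetic_text(text: str) -> str:
    """Prepend a user-visible marker to AI-generated text, in the
    spirit of the Deep Synthesis labeling mandate."""
    if text.startswith(AI_LABEL):
        return text  # already labeled; avoid stacking duplicate markers
    return f"{AI_LABEL} {text}"

print(label_synthetic_text("A synthesized voice greeting."))
```

For images and video the same idea applies as a visible watermark overlay rather than a text prefix.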
### 3. Generative AI Measures (2023)
The Interim Measures for the Management of Generative AI Services (effective August 15, 2023) govern the Chinese equivalents of ChatGPT and Midjourney.
The "Core Values" Clause: Article 4 explicitly states that content generated by AI must:
- Reflect "Socialist Core Values."
- Not contain content that subverts state power or endangers national security.
- Not promote discrimination or false information.
## The Compliance Hurdle: The Security Assessment
Before a Generative AI service can be released to the public in China, it must pass a Security Assessment. This is effectively a state licensure system.
The Assessment Process:

| Step | Requirement |
|---|---|
| 1. Training Data Audit | Firms must demonstrate the "legitimacy" of data sources and IP rights. |
| 2. Keyword Filtering | Implementation of robust filters for illegal/sensitive content. |
| 3. Model Filing | Submission of model parameters and safety test results to the CAC. |
| 4. Real-name Verification | Users must be verified (usually via mobile number) to use the service. |
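Step 2 of the assessment, keyword filtering, reduces in its simplest form to checking model output against a blocklist. This is a minimal sketch; the blocklist terms are placeholders, and production systems rely on large, regularly updated lists plus classifiers:

```python
import re

# Placeholder blocklist; real deployments use extensive lists
# maintained by compliance teams, not two sample terms.
BLOCKLIST = {"forbidden_term", "sensitive_term"}

def passes_keyword_filter(output: str) -> bool:
    """Return True only if no blocklisted term appears in the output."""
    tokens = set(re.findall(r"\w+", output.lower()))
    return BLOCKLIST.isdisjoint(tokens)

print(passes_keyword_filter("an ordinary answer"))
print(passes_keyword_filter("this has forbidden_term inside"))
```

Exact-token matching like this is easily evaded (homophones, character splits), which is why filed safety test results also cover more robust classifier-based moderation.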
## China vs. The West: The "Safety" Divergence
The term "AI Safety" means different things in different capitals.
| Concept | Western Focus (EU/US) | Chinese Focus (Beijing) |
|---|---|---|
| Primary Harm | Bias, Discrimination, Existential Risk | Social Instability, Dissent, Content Quality |
| Transparency | Explainability to users | Visibility to the State (Registry) |
| Enforcement | Fines & Market Access | License Revocation & Criminal Liability |
## Enterprise Action Plan: Operating in the Zone of Control
For multinational corporations (MNCs) operating in China, the "One Strategy Fits All" approach is dead.
1. Data Partitioning: Ensure that training data for Chinese models is physically and logically separated to comply with cross-border data transfer laws (PIPL).
2. Algorithm Filing: If your app uses recommendations (feeds, search rankings) in China, verify if you trigger the CAC Registry requirement.
3. Local Partners: The Generative AI licensure process heavily favors domestic entities. Partnering with local tech giants (Baidu, Tencent) is often the only viable path for foreign models.
4. Content Liability: Implement strict content moderation layers on top of the model output. In China, the service provider (not the AI) is liable for generated content.
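Point 4, the moderation layer, amounts to never returning raw model output to the user. A minimal wrapper sketch, with a placeholder `moderate()` check (real systems combine keyword filters, classifiers, and human review):

```python
def moderate(text: str) -> bool:
    """Placeholder moderation check; 'banned_phrase' is a stand-in
    for a real blocklist/classifier pipeline."""
    banned = ("banned_phrase",)
    return not any(term in text for term in banned)

def serve_generation(model_fn, prompt: str) -> str:
    """Wrap the model call so nothing unmoderated reaches the user.
    The service provider, not the model, carries liability for output."""
    output = model_fn(prompt)
    if not moderate(output):
        return "[content withheld pending review]"
    return output

reply = serve_generation(lambda p: "a safe answer", "hello")
print(reply)
```

The design point is the ordering: moderation sits between generation and delivery, so a failed check blocks the response rather than merely logging it.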
For a broader view of global divergence, see Global AI Governance. To see how India is carving its own path, read India's AI Strategy.