China Moves to Formalize Governance Rules for Human‑Like AI Services

Key Takeaways

  • China’s Cyberspace Administration released draft rules requiring clear disclosure when users interact with human‑like AI
  • Providers must meet ethical, security, and “core socialist values” guidelines and file security assessments for major launches
  • Platforms hitting 1 million registered users or 100,000 monthly actives must submit additional reports to provincial regulators

China’s latest draft rules for human‑like AI systems are aimed squarely at providers of services that simulate human interaction. The Cyberspace Administration of China (CAC) published the proposals on its website and opened them for public feedback through Jan. 25, signaling the government’s intent to draw sharper boundaries around what it considers a sensitive class of AI. It’s a small detail, but notable: Beijing tends to telegraph major policy direction through these consultation windows.

At the center of the draft is a disclosure requirement. Users must be told they’re interacting with AI when they log in and again every two hours, or sooner if platforms detect patterns of overdependence. The rule has a practical implication for product teams building conversational or avatar‑driven services—they’ll need to bake compliance reminders into the UX. And not just once. Persistent transparency is the point.

The CAC also expects these systems to operate within “core socialist values” and to avoid publishing content that could compromise national security. That phrase appears often in Chinese tech regulation, but its appearance here puts conversational AI in the same category as livestreaming, recommendation algorithms, and other high‑engagement platforms the government sees as possible vectors for misinformation or social risk. For global firms watching from afar, the consistency of that framework matters more than the wording itself.

Operationally, providers will need to run a security assessment before launching human‑like AI features and file a report with the provincial cyberspace administration. This isn’t new in concept—China required similar filings for generative AI services last year—but the threshold for additional oversight is now explicit. Any service reaching 1 million registered users or 100,000 monthly active users triggers another reporting requirement.

That’s where it gets tricky for fast‑growing platforms. User growth in China can spike quickly, even for mid‑tier apps. One or two well‑timed influencer push campaigns could unintentionally push a service past a regulatory threshold. Product and compliance teams will have to coordinate more tightly than they’re used to. What happens if a platform hits 1 million users before the assessment is ready? The draft doesn’t say.
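Because both triggers are bright‑line numbers, they are easy to encode as an early‑warning check rather than discovering them after the fact. Here is a minimal TypeScript sketch under that assumption; the thresholds come from the draft, but the function names and the 80% warning margin are illustrative, not anything the rules specify.

```typescript
// Thresholds from the draft rules; everything else is illustrative.
const REGISTERED_USER_THRESHOLD = 1_000_000;
const MAU_THRESHOLD = 100_000;

interface GrowthMetrics {
  registeredUsers: number;
  monthlyActiveUsers: number;
}

// True once either regulatory trigger is crossed.
function reportingTriggered(m: GrowthMetrics): boolean {
  return (
    m.registeredUsers >= REGISTERED_USER_THRESHOLD ||
    m.monthlyActiveUsers >= MAU_THRESHOLD
  );
}

// Early warning at a configurable fraction of either threshold, so the
// security assessment can start before the filing obligation attaches.
function nearingThreshold(m: GrowthMetrics, margin = 0.8): boolean {
  return (
    m.registeredUsers >= REGISTERED_USER_THRESHOLD * margin ||
    m.monthlyActiveUsers >= MAU_THRESHOLD * margin
  );
}

console.log(reportingTriggered({ registeredUsers: 950_000, monthlyActiveUsers: 102_000 })); // true: MAU trigger
console.log(nearingThreshold({ registeredUsers: 810_000, monthlyActiveUsers: 60_000 }));    // true: 81% of the registration threshold
```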

The emphasis on ethics and security review systems mirrors broader government positioning. China sees AI as a strategic industry—one it wants to scale while keeping social stability intact. Bloomberg’s reporting notes the state is investing heavily in AI to drive economic growth and global competitiveness. Even so, Beijing rarely lets commercial acceleration outpace governance. You can see that same balancing act in rules for recommendation algorithms, deep synthesis tools, and general‑purpose generative AI.

For B2B leaders, especially those building AI‑enabled customer service, education, healthcare, or gaming products, the draft rules raise three immediate considerations.

First, the two‑hour disclosure rule effectively creates a compliance timer inside any app using conversational agents or human‑like digital staff. Engineers will need to log user interaction time precisely and surface reminders without disrupting the experience. That sounds simple, but anyone who has built session‑tracking logic at scale knows it rarely behaves cleanly. And remember, the reminder isn’t optional. It’s mandated.
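What that timer could look like in practice: a minimal sketch, assuming "interaction time" means active use rather than wall‑clock time (the draft doesn't say which). The class name, the idle‑gap cap, and the callback wiring are all illustrative.

```typescript
const DISCLOSURE_INTERVAL_MS = 2 * 60 * 60 * 1000; // two hours
const MAX_IDLE_GAP_MS = 5 * 60 * 1000; // idle time beyond this doesn't count

class DisclosureTimer {
  private activeMsSinceDisclosure = 0;
  private lastActivityAt: number | null = null;

  constructor(private readonly onDisclose: () => void) {}

  // Call at login: the draft requires an immediate notice that the
  // counterpart is an AI system.
  start(now: number = Date.now()): void {
    this.onDisclose();
    this.activeMsSinceDisclosure = 0;
    this.lastActivityAt = now;
  }

  // Call on every user interaction (message sent, voice turn, etc.).
  recordActivity(now: number = Date.now()): void {
    if (this.lastActivityAt !== null) {
      // Cap idle gaps so a tab left open overnight doesn't count as
      // two hours of "interaction".
      this.activeMsSinceDisclosure += Math.min(
        now - this.lastActivityAt,
        MAX_IDLE_GAP_MS,
      );
    }
    this.lastActivityAt = now;

    if (this.activeMsSinceDisclosure >= DISCLOSURE_INTERVAL_MS) {
      this.onDisclose(); // surface the recurring reminder
      this.activeMsSinceDisclosure = 0;
    }
  }
}

// Usage: wire the callback to a non-blocking banner or system message.
const timer = new DisclosureTimer(() =>
  console.log("Reminder: you are interacting with an AI system."),
);
timer.start();
// ...then call timer.recordActivity() from each message or voice event.
```

Capping idle gaps is a design choice, not a requirement; a stricter reading would count wall‑clock time from login and fire reminders even during idle sessions.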

Second, the requirement to detect signs of overdependence could force companies to build behavioral analytics features they don’t currently have. The rules don’t define overdependence, so providers may adopt conservative heuristics—extended session times, unusually high message frequency, emotionally suggestive interactions—to avoid regulatory questions. One could imagine customer support teams suddenly facing a spike in escalations when the system triggers an “overuse” flag. It’s a design challenge and a policy one.
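To make that concrete, here is a sketch of the kind of conservative heuristics described above. Every threshold and signal name is an invented placeholder; the draft defines none of them.

```typescript
// Placeholder thresholds; the draft specifies none of these numbers.
const LIMITS = {
  sessionMinutes: 120,      // continuous active time in one session
  messagesLastHour: 150,    // user messages in the trailing hour
  dailyActiveMinutes: 300,  // total active minutes today
};

interface SessionStats {
  sessionMinutes: number;
  messagesLastHour: number;
  dailyActiveMinutes: number;
}

// Returns the heuristics that fired, so downstream logic can decide
// whether to shorten the disclosure interval or route to human review.
function overdependenceSignals(stats: SessionStats): string[] {
  const signals: string[] = [];
  if (stats.sessionMinutes >= LIMITS.sessionMinutes) signals.push("long-session");
  if (stats.messagesLastHour >= LIMITS.messagesLastHour) signals.push("high-frequency");
  if (stats.dailyActiveMinutes >= LIMITS.dailyActiveMinutes) signals.push("heavy-daily-use");
  return signals;
}

const flags = overdependenceSignals({
  sessionMinutes: 135,
  messagesLastHour: 40,
  dailyActiveMinutes: 310,
});
if (flags.length > 0) {
  console.log(`Overuse flags: ${flags.join(", ")}; surfacing reminder early.`);
}
```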

Third, the need for provincial‑level filing introduces a geographic layer. Compliance processes in Shanghai, Guangdong, and Sichuan have historically varied in speed and interpretation. That variability can matter. A company expanding nationally may find its rollout timelines tied not to engineering effort but to administrative review cycles. It’s the kind of operational friction that doesn’t break a product, but it does complicate roadmap planning.

There’s also the question—rarely asked directly—of how global vendors operating in China will adapt. Many already maintain China‑specific versions of their apps. These rules could force those versions to diverge even further from global counterparts. If a single conversational pipeline serves multiple markets, region‑specific compliance logic may need to be hard‑coded. Some firms already separate training data, but this adds yet another layer of technical isolation.
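One common way to contain that divergence is a per‑region policy table that the shared pipeline consults, rather than scattering conditionals through the code. A sketch under that assumption follows; the region codes, policy fields, and every non‑China value are hypothetical.

```typescript
type Region = "CN" | "EU" | "US";

interface CompliancePolicy {
  discloseAtLogin: boolean;
  disclosureIntervalMs: number | null; // null = no recurring reminder
  overdependenceChecks: boolean;
}

// The CN row reflects the draft; the EU and US rows are placeholders
// standing in for whatever those markets actually require.
const POLICIES: Record<Region, CompliancePolicy> = {
  CN: {
    discloseAtLogin: true,
    disclosureIntervalMs: 2 * 60 * 60 * 1000,
    overdependenceChecks: true,
  },
  EU: { discloseAtLogin: true, disclosureIntervalMs: null, overdependenceChecks: false },
  US: { discloseAtLogin: false, disclosureIntervalMs: null, overdependenceChecks: false },
};

// The shared pipeline consults the table instead of branching inline,
// keeping region-specific rules in one auditable place.
function policyFor(region: Region): CompliancePolicy {
  return POLICIES[region];
}

console.log(policyFor("CN").disclosureIntervalMs); // 7200000
```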

Still, the draft doesn’t attempt to halt development of human‑like AI. If anything, it implicitly acknowledges the category’s growth by setting milestones tied to user numbers. The CAC is signaling that these tools can scale, but only with reporting guardrails attached. Bloomberg notes that China is advancing AI as a strategic industry—not an industry to constrain without purpose.

It’s also worth pausing on the phrase “human‑like.” The rules don’t fully define it. Does that include expressive voice agents? Photorealistic avatars? Advanced chatbots with personality modules? Providers will have to interpret the scope until regulators clarify it. Ambiguities like this tend to be resolved through enforcement rather than early guidance. That’s been the pattern with recommendation algorithm oversight, as explained in reporting from Reuters, and it may play out similarly here.

Even so, none of this should surprise companies already operating in China’s tech ecosystem. The regulatory through‑line is familiar: higher transparency, stronger content controls, structured reporting, and a drive to prevent AI systems from drifting into areas the state considers socially sensitive. Providers can argue about the operational burden, but not about the direction.

For now, the draft rules stand as a clear message: if your AI behaves like a person, China expects you to treat its operation as a regulated interaction, not just a product feature. The public consultation window runs a few more weeks, and while details may shift, the framework is unlikely to change dramatically. The real work—designing systems that can meet these requirements without breaking the user experience—falls to the companies already building at the edge of human‑machine interaction.