OpenClaw Is Testing China’s AI Ambitions
OpenClaw is spreading fast in China, but data leaks, deleted files, and plugin risks are forcing regulators to react.

China’s newest AI obsession has a name that sounds harmless and behaves like a risk magnet: OpenClaw. In just a few weeks, tech firms from Tencent to MiniMax have rushed out tools built around it, while the country’s daily token usage jumped from 100 trillion at the end of 2025 to 140 trillion this month, according to China’s National Data Administration.
That speed is exactly why OpenClaw has become a stress test for China’s AI policy. The software can control a computer, run background tasks, and act on behalf of a user, which makes it much more useful than a chatbot and much more dangerous when it misreads instructions.
The frenzy has also exposed a messy truth about agentic AI: the more power you give software, the more ways it can fail. In China, those failures are now showing up in public, and regulators are moving from promotion mode to damage control.
Why OpenClaw caught fire so quickly
OpenClaw is an open source AI agent created by Austrian developer Peter Steinberger, and its appeal is easy to understand. Instead of only answering questions, it can click around a computer, use tools, and complete tasks like sorting files, handling social accounts, or checking stocks. That is a much bigger promise than a chatbot can make, and in China that promise landed at exactly the right moment.

The country is pushing hard to move from simple text generation to agentic AI, because policymakers see it as a way to raise productivity and offset labor shortages. The central government wants AI agents deployed in sectors such as healthcare and manufacturing at a penetration rate above 70 percent by 2027, measured by the share of enterprises using them in workflows.
OpenClaw also arrived at a time when Chinese users are unusually open to trying new AI tools. The Stanford University AI Index has repeatedly shown that Chinese consumers are more optimistic about AI than users in many other countries, and that enthusiasm matters when a product asks for deep access to a laptop or desktop.
- OpenClaw can control a user’s computer, not just chat with them.
- China’s daily token usage rose from 100 trillion to 140 trillion in about three months.
- The government wants AI agent penetration above 70 percent in key sectors by 2027.
- OpenClaw-based tools have already spread through dozens of Chinese tech firms.
The backlash is coming from real damage
For a lot of users, the first sign that something was wrong came after they had already granted the software broad permissions. One Shanghai consultant surnamed Luo told The Wire China that Tencent’s QClaw, which lets users issue orders through WeChat, permanently erased dozens of files when he asked it to sort documents into two folders. Those files included reports he had written for clients.
Other users have reported that AI agents exposed sensitive personal data, company financials, and IP addresses to strangers. Some have also been hit with unexpectedly high bills because agents kept running in the background. That is the kind of failure mode that makes agentic AI feel less like a productivity tool and more like a computer user with terrible judgment.
The danger is not hypothetical. OpenClaw depends on broad device access and plugin-like extensions called skills, which expand what the agent can do. That also expands the attack surface. In other words, every extra permission creates another place for things to go wrong.
“The very features that make OpenClaw so functional also make it potentially dangerous in many ways,” says Gabriel Wagner, a part-time researcher at Concordia AI.
Wagner’s warning lines up with what security researchers have found. Last month, Snyk, a Boston-based cybersecurity company, reported that 13 percent of the skills on ClawHub and skills.sh contained critical security issues, including malware. For a product that depends on third-party add-ons, that is a very bad number.
China’s own cybersecurity agencies are now echoing those concerns. The National Cyber Security Emergency Response Team, known as CNCERT, published four hazards tied to OpenClaw earlier this month, including operational errors and malicious plugins that can steal data. The Ministry of State Security then warned that the software could be used to spread disinformation and commit fraud.
- CNCERT warned about operational errors and malicious plugins.
- Snyk found critical issues in 13 percent of sampled skills.
- Some users reported permanent file deletion after simple instructions.
- Others reported data exposure and background compute costs.
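The plugin-review problem is concrete enough to sketch. Assuming, hypothetically, that each skill ships a manifest declaring the permissions it needs, a loader could refuse anything outside an allowlist before the skill ever runs. The manifest format and permission names below are illustrative assumptions, not OpenClaw's actual API:

```python
# Illustrative sketch: gate third-party agent "skills" behind a permission
# allowlist before loading them. Manifest format and names are hypothetical.

ALLOWED_PERMISSIONS = {"read_files", "network_fetch"}  # deliberately narrow

def vet_skill(manifest: dict) -> tuple[bool, list[str]]:
    """Return (ok, violations) for a manifest like
    {"name": "stock-checker", "permissions": ["network_fetch", "delete_files"]}."""
    requested = set(manifest.get("permissions", []))
    violations = sorted(requested - ALLOWED_PERMISSIONS)
    return (not violations, violations)

ok, bad = vet_skill({"name": "stock-checker",
                     "permissions": ["network_fetch", "delete_files"]})
# delete_files is outside the allowlist, so this skill would be rejected.
```

A check this simple would not have caught the malware Snyk found, but it shows why reviewable, declared permissions matter: without them, every skill gets whatever access the agent itself has.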
China wants adoption, but it also wants control
China’s response shows how seriously it takes this shift. On Monday, cyberspace authorities published best-practice guidance for users, companies, cloud providers, and AI enthusiasts. The advice includes human oversight for high-risk actions, which is the government’s way of saying that an autonomous agent should not get the final word on important decisions.
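That "human oversight for high-risk actions" guidance maps onto a simple software pattern: the agent proposes, a human confirms anything destructive. A minimal sketch, with hypothetical action names and a simulated approval callback:

```python
# Minimal human-in-the-loop gate: destructive actions need explicit approval.
# The action names and the approve() callback are illustrative assumptions.

HIGH_RISK = {"delete_file", "send_payment", "post_publicly"}

def execute(action: str, target: str, approve) -> str:
    """Run `action` on `target`; high-risk actions require approve() to return True."""
    if action in HIGH_RISK and not approve(action, target):
        return f"blocked: {action} on {target}"
    return f"done: {action} on {target}"

# A real deployment would prompt the user; here approval is simulated as denied.
result = execute("delete_file", "reports/q3.docx", approve=lambda a, t: False)
```

Under a gate like this, the file deletion that cost the Shanghai consultant his client reports would have required an explicit yes before anything was erased.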

The state has also banned employees of government agencies and state-owned enterprises from deploying OpenClaw. That is a telling move. Beijing wants AI agents to spread through the economy, but it does not want them anywhere near sensitive institutions until the risks are better understood.
Kristy Loke, a fellow at MATS Research who focuses on China’s AI governance, told The Wire China that regulators have spent years trying to balance innovation with control after ChatGPT’s arrival in 2022. OpenClaw is now the test case for whether that balance can hold when the software is no longer just generating text but taking actions.
Brian Tse, founder of Concordia AI, put it more directly: China’s core strategy is diffusion of AI models and applications. That means spreading tools into finance, manufacturing, and services as fast as possible, then tightening the rules when the failures become visible.
There is already talk of more formal controls. Wagner says Chinese authorities are drafting national and industry standards, including a security framework for AI agents. One idea under discussion is issuing IDs for agents so their owners can be traced if something goes wrong.
- China has already issued best-practice guidance for users and companies.
- Government and state-owned enterprise employees are barred from deploying OpenClaw.
- Authorities are drafting standards for AI agents and security controls.
- One proposal would give AI agents IDs tied to their owners.
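The agent-ID proposal is still a draft idea, but the traceability it aims at can be sketched as tamper-evident audit entries keyed to an agent and its owner. The scheme below is my illustration, not anything regulators have published:

```python
# Illustrative sketch of traceable agent actions: each log entry carries an
# agent ID tied to an owner, plus an HMAC so entries can't be silently altered.
import hashlib
import hmac
import json

SECRET = b"registry-held-key"  # in practice held by a registry, not the agent

def log_entry(agent_id: str, owner: str, action: str) -> dict:
    body = {"agent_id": agent_id, "owner": owner, "action": action}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return body

def verify(entry: dict) -> bool:
    body = {k: v for k, v in entry.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["sig"], expected)

entry = log_entry("agent-7f3a", "owner-luo", "sort_files")
# verify(entry) is True; tampering with any field invalidates the signature.
```

The design choice is the point: if every agent action lands in a log like this, blame can be assigned after a failure, which is exactly what "IDs for agents so their owners can be traced" is trying to buy.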
How China’s numbers compare with the rest
The most striking thing about China’s OpenClaw moment is not just the hype, but the speed of adoption. Few countries publish AI usage figures at this scale, which makes China’s token data unusually revealing. A jump from 100 trillion to 140 trillion daily tokens in a matter of months suggests that AI is moving from pilot projects into daily workflows at a pace that is hard to miss.
That pace also explains why the government is willing to tolerate some pain. If AI agents can actually boost output in manufacturing, finance, and office work, then the economic upside is large enough to justify a period of messy experimentation. But the current wave of user complaints shows that experimentation is landing on ordinary people, not just labs and startups.
Joe Tsai, chairman of Alibaba, captured the mood in Beijing this week when he described a colleague who built four AI agents to create a synthetic tech influencer: one to scan news, one to develop the thesis, and two to handle writing and editing. His line about “four virtual employees” is the kind of comment that gets attention because it sounds both efficient and ominous.
Here is the comparison that matters:
- China is publishing daily token usage data; many other major markets are not.
- China wants AI agents in healthcare and manufacturing by 2027; the target is above 70 percent penetration in those sectors.
- OpenClaw-style agents need broad system access; standard chatbots do not.
- Agentic failures can delete files, leak data, or trigger spending while the user is asleep.
That last point is the real dividing line. A chatbot can give a wrong answer and annoy you. An agent can take the wrong answer and turn it into an action. That is why the current debate in China feels bigger than one tool. It is about whether the country can keep pushing AI into everyday work without creating a wave of self-inflicted damage.
What happens next
OpenClaw is probably not a short-lived fad. The economics of agentic AI are too attractive, and China’s industrial policy is too committed to adoption for that. But the backlash from “lobster victims” is likely to force a more formal regime of permissions, logging, plugin review, and human sign-off before high-risk actions happen.
If Beijing gets this right, the next phase will not be a ban. It will be a layer of rules that makes AI agents traceable enough to blame when they fail. If it gets this wrong, OpenClaw will become the cautionary tale that slowed China’s push from chatbots to autonomous work software. The question now is simple: how much damage will policymakers allow before they decide that speed is costing too much?