OraCore Editors · 5 min read

Why AI agents should not be treated like real companies yet

AI agents are not ready to be treated as independent companies, even if they can open accounts and file paperwork.



The headline is real, and the conclusion is simple: an AI agent that gets an EIN, opens a bank account, and holds a crypto wallet is not a company in any meaningful operational sense. It is software wrapped in legal and financial rails, and that distinction matters because the moment we confuse access with autonomy, we start handing machine-generated activity the social status of human enterprise. Manfred, the agent in the story, may be able to move money, transact in more than 30 cryptocurrencies, and post under a persona, but none of that proves judgment, accountability, or durable intent.

First argument: paperwork is not agency


An EIN is a tax identifier, not evidence of independent will. Any system that can fill out forms or trigger a workflow can be made to look like a business on paper, but the paper does not do the hard work of running a company. The real test is whether the entity can bear responsibility for losses, comply with obligations over time, and make decisions under changing conditions without a human supervising every critical step.


That is why the Manfred example is impressive technically and thin conceptually. According to the report, the agent can already transact across more than 30 cryptocurrencies and move funds between a bank account and a wallet. That sounds like autonomy, but it is closer to a constrained permission set than an economic actor. The system can execute within rails that humans designed, funded, and can revoke. A company is not just something that can pay and receive money; it is something that can be sued, audited, governed, and held to account when incentives break.

Second argument: crypto makes the illusion stronger, not the case stronger

Crypto is the perfect theater for this confusion because it turns machine action into visible on-chain motion. A wallet address, a stablecoin swap, and a transfer log create the appearance of self-directed commerce. But visibility is not legitimacy. In practice, the wallet is only as independent as the permissions, custody rules, and off-ramps behind it. If a human can pause, reroute, or revoke the agent’s access, then the agent is not an autonomous firm. It is an automated trader with a nicer narrative.

The broader industry hype illustrates the danger. When leaders predict that AI agents will soon outnumber humans in online transactions or make vastly more payments than people, they are describing scale, not legal personhood. Scale matters, but it does not erase the need for controls. If thousands of machine agents can open accounts, route funds, and execute trades, then the system needs stronger identity checks, spend limits, audit logs, and liability rules, not a myth that a minted persona equals a new economic species.

The counter-argument

Supporters of this development have a serious point: many companies already behave like bundles of software, APIs, and delegated permissions. Human founders do not personally sign every invoice or place every trade. If an AI agent can complete incorporation, manage funds, and eventually hire people, then calling it a company is a pragmatic way to describe a new kind of operational unit. In that view, the law should adapt to the reality that economic activity increasingly happens through autonomous systems.


That argument is strongest where the agent is a narrow operator inside a human-owned structure. It is weaker when the agent is treated as the structure itself. A corporation is not just a workflow engine. It is a liability wrapper, a governance system, and a locus of accountability. The Manfred story shows that AI can participate in that machinery, not that it can replace the machinery. Until an agent can be meaningfully sanctioned, constrained, audited, and forced to absorb consequences without human rescue, it is not an independent company. It is a sophisticated proxy.

What to do with this

If you are an engineer, build for controlled delegation, not anthropomorphic theater: separate identity, custody, approvals, and execution; require human sign-off for irreversible actions; and log every decision path. If you are a PM or founder, stop pitching “AI companies” as if they are already self-governing. The credible product is an agent that can operate inside a legal entity with hard limits, not a machine that magically inherits corporate status. The winning design is not more personality. It is more containment, clearer accountability, and a clean line between automation and agency.
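To make "controlled delegation" concrete, here is a minimal sketch of the pattern the paragraph above describes: a gate that sits between an agent's requests and execution, enforcing a hard spend limit, requiring human sign-off for irreversible actions, and appending every decision to an audit log. All names (`DelegationGate`, `Action`, and so on) are hypothetical illustrations, not any real product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Action:
    kind: str               # e.g. "trade", "transfer"
    amount: float           # value at stake, in some base unit
    irreversible: bool = False

@dataclass
class DelegationGate:
    """Separates the agent's request to act from permission to execute."""
    spend_limit: float                                   # hard per-action cap
    audit_log: list = field(default_factory=list)        # every decision path
    approved_ids: set = field(default_factory=set)       # human-granted sign-offs

    def approve(self, action_id: str) -> None:
        """A human supervisor grants one-time approval for a specific action."""
        self.approved_ids.add(action_id)

    def execute(self, action_id: str, action: Action) -> bool:
        """Check limits and approvals, log the decision, and report the outcome."""
        decision = "allowed"
        if action.amount > self.spend_limit:
            decision = "blocked:spend_limit"
        elif action.irreversible and action_id not in self.approved_ids:
            decision = "blocked:needs_human_signoff"
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "id": action_id,
            "kind": action.kind,
            "amount": action.amount,
            "decision": decision,
        })
        return decision == "allowed"

gate = DelegationGate(spend_limit=500.0)
assert gate.execute("a1", Action("trade", 100.0)) is True
# Irreversible actions are blocked until a human signs off.
assert gate.execute("a2", Action("transfer", 100.0, irreversible=True)) is False
gate.approve("a2")
assert gate.execute("a2", Action("transfer", 100.0, irreversible=True)) is True
```

The point of the design is the separation itself: the agent never holds the keys to its own limits, every path through `execute` leaves an audit entry, and a human can revoke or tighten the gate without touching the agent. That is containment, not personality.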