Manus, an autonomous AI agent developed by Chinese startup Butterfly Effect (Hong Kong) Limited, has introduced subscription pricing just weeks after its public debut. Following a surge of interest during its invite-only phase, the company formally launched paid plans on March 31.
The entry-level Starter plan costs $39 per month and includes 3,900 monthly credits with support for up to two concurrent tasks, while the Pro plan costs $199 per month and includes 19,900 credits with support for up to five concurrent tasks; both tiers work out to roughly one cent per credit. Both plans are now available via the Manus website and a new iOS mobile app.
The rollout marks a shift in Manus’ availability, which had previously been restricted to those who could obtain invite codes, some of which were resold for as much as ¥50,000 (approximately $7,000). Listings surfaced during the agent’s invite-only phase on platforms such as eBay and Xianyu, the Chinese secondhand marketplace also known internationally as Goofish.
Zhang Tao, a partner at Manus, previously acknowledged the overwhelming demand: “We have completely underestimated the level of enthusiasm.” He clarified that Manus “has never opened any paid channels for invitation codes” and emphasized that the company had “allocated no marketing budget” for its launch.
What Makes Manus Different
Manus aims to separate itself from other AI assistants by functioning without continuous human prompts. Unlike OpenAI’s Operator or Google’s Project Mariner—which require users to approve AI-driven actions—Manus is designed to operate autonomously.
It uses a combination of LLM chaining, multi-signature (multisig) control, and reinforcement learning to make real-time decisions across workflows.
The result is a system that can independently manage tasks such as resume screening, workflow automation, and candidate evaluation. Its agent-style interface mimics a persistent digital persona with memory and decision-making ability.
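To make that distinction concrete, here is a minimal sketch of what an autonomous agent loop built around LLM chaining might look like. It is an illustration only, not Manus’s actual implementation: the call_llm and run_tool functions, the action format, and the stopping condition are hypothetical placeholders.

    # Minimal sketch of an autonomous agent loop (hypothetical; not Manus's code).
    # call_llm() and run_tool() are placeholders for a model API and a tool layer.
    from dataclasses import dataclass, field

    @dataclass
    class AgentState:
        goal: str
        history: list = field(default_factory=list)  # persistent memory of prior steps

    def call_llm(prompt: str) -> dict:
        """Placeholder: ask a language model for the next action, e.g.
        {"tool": "web_search", "input": "campsites near Vienna", "done": False}."""
        raise NotImplementedError

    def run_tool(tool: str, tool_input: str) -> str:
        """Placeholder: execute a tool such as a browser, code runner, or file writer."""
        raise NotImplementedError

    def run_agent(goal: str, max_steps: int = 20) -> list:
        state = AgentState(goal=goal)
        for _ in range(max_steps):
            # Chain LLM calls: each step sees the goal plus everything done so far.
            prompt = f"Goal: {state.goal}\nHistory: {state.history}\nDecide the next action."
            action = call_llm(prompt)
            if action.get("done"):
                break  # the model, not a human, decides when the task is finished
            result = run_tool(action["tool"], action["input"])
            state.history.append({"action": action, "result": result})
        return state.history

In a human-in-the-loop design such as OpenAI’s Operator, this loop would pause for user approval before each tool call; a fully autonomous design, as Manus describes itself, removes that pause, which is what gives it both its appeal and the risks discussed below.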
“Someone was looking for lakeside campsites within 150km of Vienna and wanted to check proximity to golf courses where they had discount vouchers. Manus delivered a website complete with a list of options, an interactive map, and booking links!”
— ManusAI (@ManusAI_HQ), March 31, 2025
Manus’ development builds on the foundations laid by its creators at Butterfly Effect, also known for producing the AI browser extension Monica.
Benchmark Claims and Real-World Challenges
To support its performance claims, Manus cited strong results in the GAIA benchmark—a framework developed by Meta AI, Hugging Face, and AutoGPT to evaluate reasoning, tool use, and automation in general AI agents.
According to Manus, it achieved top-tier scores across all GAIA difficulty levels, with a reported score of around 86.5%. While this figure would place it ahead of rivals like OpenAI’s Deep Research, it’s worth noting that these scores were self-reported by the company and have not been independently verified.
Early testing has revealed performance inconsistencies. A Business Insider review noted that while Manus excelled in visual organization and task planning, it sometimes generated simulated data or reproduced uncredited content. These issues raise questions about the agent’s reliability in high-stakes use cases like financial modeling or strategic planning.
Security Scrutiny and Policy Reactions
Manus’ hands-off design has sparked concern among policymakers and security analysts. Without human-in-the-loop verification, fully autonomous agents present new risks, including potential misuse for fraud, disinformation, or cyberattacks. Forbes warns that such tools could be exploited if deployed without oversight.
Governments have already started responding. On March 6, Tennessee Governor Bill Lee announced a ban on Manus across state networks, citing concerns about “censorship, propaganda, and bias.” Alabama followed shortly after, with Governor Kay Ivey prohibiting Manus on state devices due to security vulnerabilities.
China’s Strategic Bet on Autonomy
Manus’ architecture reflects more than just product vision—it aligns with Beijing’s broader AI strategy. Tightening U.S. restrictions on AI chip exports have forced Chinese tech firms to focus on software innovation. By building models that can operate with fewer hardware dependencies, developers are pushing for greater AI self-sufficiency.
This shift is evident not only in Manus but also in projects like Alibaba’s QwQ-32B model, which is designed to handle complex reasoning tasks on limited compute. Manus, with its autonomous control system and minimal cloud reliance, appears to be engineered for similar constraints.
That emphasis on local resilience has not gone unnoticed. Reuters reported that Chinese state media have begun promoting Manus, suggesting that the agent may now be part of China’s broader AI push. In parallel, the startup’s parent company is reportedly seeking fresh funding at a valuation of at least $500 million, according to The Information.
Federal Scrutiny in the U.S.
Beyond state-level action, discussions in Washington are intensifying around whether fully autonomous agents like Manus and DeepSeek should be designated as high-risk technologies.
The U.S. government is considering applying restrictions similar to those placed on foreign-built telecom infrastructure. These proposals could lead to licensing requirements or outright bans for AI systems that operate without human validation.
If enacted, such regulations would not only affect Manus’ U.S. expansion potential but could also set the tone for how other autonomous agents are treated globally. The European Union is reportedly reviewing its own AI accountability rules in response to this new generation of AI tools.
Autonomy, Adoption, and Open Questions
Manus AI’s subscription launch is a clear signal that its creators believe autonomous agents are ready for mass adoption. But while performance claims and viral interest suggest strong momentum, concerns about data governance, oversight, and real-world behavior remain.
With monetization now underway and attention mounting from governments, investors, and users alike, Manus will need to navigate not only technical hurdles but also geopolitical ones. The company’s next steps—especially around transparency, reliability, and regulatory cooperation—could determine whether this AI agent becomes a staple tool or a cautionary tale.