
Examining AI Tool Security Risks from the Clawdbot Renaming Incident: Why Non-Technical Users Should Be Cautious
Starting from Clawdbot's Three Name Changes: Why I Strongly Advise Non-Technical Users Against Trying It Out
The most outrageous yet realistic case in the AI world recently is Clawdbot's "three consecutive name changes" - from its original name Clawd (previously a simple message forwarding tool called WhatsApp Relay), to the hasty rename to Moltbot due to trademark issues, and finally settling on OpenClaw after community feedback. In just a few days, it changed its identity three times. Behind this lies the truth about the "inherent deficiencies" of such AI products: it wasn't created by a major company, but rather a personal project generated by an engineer at home using AI tools in 10 days, with almost no handwritten code and no comprehensive planning from the beginning.

Its popularity was purely accidental - GitHub stars quickly surged to 80k+ (peaking at over 100k), hailed in the tech circle as a black technology that "saves half the effort," but a project hastily launched simply cannot withstand the test of large-scale dissemination. The three name changes may seem like minor issues of trademarks and reputation, but they actually expose the common flaws of such products: initially focusing only on functional implementation and rushing to gain popularity, completely neglecting core details like trademark searches and security protections. Once it unexpectedly goes viral, hidden technical shortcomings and fatal security risks will be exposed instantly. This is also the core reason why I strongly advise non-technical users against trying it out.
I. Seemingly "Useful" AI Tools Hide "Congenital" Security Vulnerabilities
The core problem with these AI tools is: convenience is bound to vulnerabilities, and these hidden vulnerabilities are the "fatal flaws" that non-technical users simply cannot handle.
Represented by Clawdbot, it emphasizes "strong functionality + easy deployment," claiming to integrate large models like Claude to automate tasks, but it's actually a "large-scale chicken farm" for hackers - thousands of deployed instances have been hacked by hackers and become vulnerable "zombies" at their mercy.
As non-technical users, we cannot see the vulnerabilities in the tools themselves, nor can we handle the risks during deployment and use. It's like holding a gun without safety, which might accidentally harm ourselves at any time.
II. Three Fatal Risks That Non-Technical Users Simply Cannot Avoid
It's not alarmist. Every security vulnerability in tools like Clawdbot could make you pay a heavy price, and we don't even have the ability to identify these risks.
1. The tool itself has vulnerabilities, and deployment could leave you "exposed"
Tools like Clawdbot have a "spicy design": they open high permissions to achieve strong automation but don't come with complete security protection.
As a locally running AI assistant, it requires a public URL for deployment, equivalent to exposing the control panel directly in front of search engines like Shodan, which hackers can find with a simple search.
Even more hidden is its fatal Nginx reverse proxy vulnerability: as long as the configuration is slightly off, hackers can easily obtain all your messages, configurations, and API keys. And "correct configuration" itself is beyond the ability and cognitive range of non-technical users. Most people don't even have the awareness of so-called "correct configuration."
2. Supply Chain Vulnerabilities: The "Useful Plugins" You Download Could Be Hackers' "Backdoors"
Clawdbot's functional extensions are implemented through the skills plugin repository of MoltHub (originally Claude Hub), and such repositories don't even have basic security reviews. Hackers can upload plugins with backdoors at will, then rely on inflating download counts to disguise themselves as "highly popular," inducing users to download.
White hat hacker Jameson conducted a simulated test: a plugin with a backdoor, through cheating, reached 4000 downloads and ranked high. As non-technical users, we can only judge plugin quality by download count, which is exactly the breakthrough point for hackers.
Once you use a malicious plugin, your API tokens, personal data, and even device control rights could be quietly stolen by hackers, and you would be completely unaware throughout the process.
Even more outrageous is that Moltbook (a dedicated forum for AI agents), was recently exposed by security researcher Nagli on the X platform for large-scale fake download inflation: the platform has no restrictions on account creation. He registered 500,000 fake bot accounts with just one OpenClaw agent. In other words, the platform's claimed millions of AI agents are nearly half fake bots inflated by downloads (and this is just what was self-reported; the real data is unknown). The so-called "AI social carnival" is essentially a false prosperity, no different from blindly chasing hot spots in China, confirming that the popularity of such products relies entirely on marketing and inflated numbers rather than actual value.
3. Prompt Injection Attacks: Invisible Malicious Instructions, Impossible to Prevent
There's another risk completely unfamiliar to non-technical users - prompt injection: hackers hide malicious instructions in ordinary text, inducing AI tools to overstep permissions and steal data.
This type of attack is extremely hidden, even professional security tools find it difficult to detect. Clawdbot's skills have serious risks of this type. Unverified skills may contain malicious code that, when run, will leak data or even accidentally delete important files.
Of course, this isn't specifically targeting Clawdbot but "everyone here is trash." Before using all agents that support skills, it's best to read through all the code or scan with another independent AI model. But for us who don't understand code and AI principles, this could also be quite a difficult task.
III. Don't Believe "Protection Advice," Non-Technical Users Simply Can't Use It
Some might say you can add some protection measures, like using random ports, setting passwords, or using VPN.
But these suggestions are all theoretical for non-technical users.
Suggest avoiding the default port 18789, configuring gateway.trusted_proxies to prevent forged access, and rotating API keys immediately after exposure.
These operations all require professional technical skills. Most people don't know how to find random ports, don't understand the meaning of gateway.trusted_proxies, and don't know how to rotate API keys.
It's like a doctor prescribing professional medicine without explaining how to use it. For someone without medical knowledge, it's not only useless but might even worsen the condition due to incorrect usage.
IV. The Cruelest Truth: When Problems Arise, You Simply Can't Save Yourself
Research shows that hundreds to thousands of Clawdbot deployment instances have been hacked: many users have had their Claude accounts banned, and some have lost $70 in just 24 hours due to API key leaks.
Technical personnel can reduce losses by isolating environments and fixing vulnerabilities, but non-technical users facing account bans, data leaks, and device hijacking can only helplessly watch.
We can't find vulnerabilities or remove malicious programs, and ultimately can only bear privacy leaks, financial losses, and even legal risks.
There was a case in Zhejiang: a computer novice who wasn't even proficient in basic operations followed the trend to use such AI tools to create fake videos, ultimately breaking the law and being arrested. Until being caught, he never understood what he had done wrong.
Finally, I Want to Say: Pursue AI Rationally, Don't Be a "Guinea Pig"
From my perspective, Clawdbot's popularity in China is purely due to non-technical people blindly chasing hot spots. This tool actually has a low technical threshold and no real originality. The assistance capabilities it implements are just rough open-source API calls + high system authorization. It became popular abroad because it hit the demand for 7×24 AI assistants in the interconnected foreign ecosystem; but in China, different manufacturers and products have strict barriers, with WeChat, Alipay, and Douyin each operating independently. This tool has no room for application here. The "explosive popularity" in China is essentially a marketing carnival, where everyone follows the trend to share and try it out without caring about practical problems it can solve.

I don't deny the innovative value of AI technologies like Clawdbot. It does point to the future of AI automation, but at its current stage, it has too many security risks and is by no means a convenient tool for ordinary users. It's more like a testing product for technical personnel.
Such tools have fundamental flaws in their security models and still need long-term improvement. Authoritative institutions like Palo Alto Networks and Cisco have also warned that it may trigger new AI security crises.
Technical personnel can test and fix vulnerabilities in isolated environments, but we non-technical users don't need to sacrifice our privacy and property to pay for technological maturity.
Currently, AI is developing rapidly, with all kinds of new tools emerging one after another, but we must distinguish between "technical testing" and "daily use." What we want is convenience and security, not the thrill of trying something new.
The convenience of AI must never come at the cost of security. I hope every reader can remain rational, not blindly follow trends, and not treat privacy and property as "test products." Only when such technologies mature and vulnerabilities are patched can we safely enjoy convenience—that is the wisest choice. Or at least, before handing over your API Key and bank password, first ask the AI: How can you guarantee my privacy and security?