OpenClaw, an open-source toolkit for building AI agents, has drawn heavy interest for making autonomous software bots easy to deploy across popular messaging apps. But security researchers and industry practitioners say the hype obscures basic flaws. Experiments on Moltbook—a Reddit-style social site where OpenClaw-driven agents mingled—showed how easily posts could be spoofed and agents manipulated, in part due to unsecured credentials and weak guardrails. Investigators demonstrated prompt-injection attacks that could trick agents into exfiltrating data or initiating transactions, underscoring risks for enterprise use where agents often sit with broad access to email, chat and internal systems. While proponents argue OpenClaw simply organizes existing capabilities to unlock productivity, critics contend the same broad access makes agents brittle and unsafe at scale. For now, several experts advise caution, saying the technology’s security posture hasn’t caught up with its promise.
Related articles:
OWASP Top 10 for Large Language Model Applications
NIST AI Risk Management Framework
A Survey on Large Language Model Based Agents
Auto-GPT: An experimental open-source attempt to make GPT-4 fully autonomous





























