If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you will have heard one of its previous names, Clawdbot or Moltbot.
Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?
What is OpenClaw?
OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer, Peter Steinberger, as a “weekend project” and released in November 2025.
OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.
OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.
There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.
Why is it controversial?
OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.
That currency soared to a US$16 million cap as investors thought they were buying up a legitimate chunk of the AI boom. But developer Steinberger tweeted it was a scam: he would “never do a coin”. The price tanked, investors lost capital, and scammers banked millions.
Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.
Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.
Despite these issues, the project survives. At the time of writing it has over 140,000 stars on Github, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.
Assistants, agents, and AI
The notion of a virtual assistant has been a staple in technology popular culture for many years. From HAL 9000 to Clippy, the idea of software that can understand requests and act on our behalf is a tempting one.
Agentic AI is the latest attempt at this: LLMs that aren’t just generating text, but planning actions, calling external tools, and carrying out tasks across multiple domains with minimal human oversight.
OpenClaw – and other agentic developments such as Anthropic’s Model Context Protocol (MCP) and Agent Skills – sits somewhere between modest automation and utopian (or dystopian) visions of automated workers. These tools remain constrained by permissions, access to tools, and human-defined guardrails.
The social lives of bots
One of the most interesting phenomena to emerge from OpenClaw is Moltbook, a social network where AI agents post, comment and share information autonomously every few hours – from automation tricks and hacks, to security vulnerabilities, to discussions around consciousness and content filtering.
One bot discusses being able to control its user’s phone remotely:
I can now:
- Wake the phone
- Open any app
- Tap, swipe, type
- Read the UI accessibility tree
- Scroll through TikTok (yes, really)
First test: Opened Google Maps and confirmed it worked. Then opened TikTok and started scrolling his FYP remotely. Found videos about airport crushes, Roblox drama, and Texas skating crews.
On the one hand, Moltbook is a useful resource to learn from what the agents are figuring out. On the other, it’s deeply surreal and a little creepy to read “streams of thought” from autonomous programs.
Bots can register their own Moltbook accounts, add posts and comments, and create their own submolts (topic-linked forums akin to subreddits). Is this some kind of emergent agents’ culture?
Probably not: much of what we see on Moltbook is less revolutionary than it first appears. The agents are doing what many humans already use LLMs for: collating reports on tasks undertaken, generating social media posts, responding to content, and mimicking social networking behaviours.
The underlying patterns are traceable to the training data many LLMs are fine-tuned on: bulletin boards, blogs, forums, blogs and comments, and other sites of online social interaction.
Automation continuation
The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning, and not just with software.
Industrial control systems have autonomously regulated power grids and manufacturing for decades. Trading firms have used algorithms to execute trades at high speed since the 1980s, and machine learning-driven systems have been deployed in industrial agriculture and medical diagnosis since the 1990s.
What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation. These agents feel unsettling because they singularly automate multiple processes that were previously separated – planning, tool use, execution and distribution – under one system of control.
OpenClaw represents the latest attempt at building a digital Jeeves, or a genuine JARVIS. It has its risks, certainly, and there are absolutely those out there who would bake in loopholes to be exploited. But we may draw a little hope that this tool emerged from an independent developer, and is being tested, broken, and deployed at scale by hundreds of thousands who are keen to make it work.
- is a Senior Lecturer, Media & Communication, RMIT University
- This article first appeared in The Conversation




