OpenClaw
Personal AI assistant you run on your own devices. 20+ messaging channels, voice, cron jobs, browser control, and a skills system.
Open-source personal AI agents, honestly reviewed. No marketing fluff, just what you get when you self-host it.
TL;DR
- What it is: Open-source (MIT) personal AI assistant that connects to 20+ messaging platforms — WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and more — and can autonomously execute tasks on your own machine [README][2].
- Who it’s for: Developers and power users who want a persistent, self-hosted AI agent that acts rather than advises. Not for non-technical founders who have never touched a Linux terminal [1][5].
- Cost: Free to self-host. No SaaS tier, no per-message pricing. Your cost is a VPS, a Node.js runtime, and an API key from your AI provider of choice [README].
- Key strength: Breadth of channel integrations (20+ platforms), persistent memory that survives session restarts, proactive “heartbeat” scheduling, and a genuinely active open-source community. 320,000+ GitHub stars within months of launch [merged profile].
- Key weakness: Serious, documented security risks — prompt injection, supply chain attacks through the skills ecosystem, and unencrypted HTTP exposed to the internet by default. Gartner, CrowdStrike, and Cisco’s AI security team have all flagged it [3]. Not something you casually deploy on a machine with production credentials.
What is OpenClaw
OpenClaw is a self-hosted AI agent gateway. You install it on a machine you control, give it access to an LLM (OpenAI, Anthropic, or others), and it makes that AI reachable through every messaging app you already use — WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Google Chat, Microsoft Teams, Matrix, IRC, and a dozen more [README][2]. You send it a message from your phone; it executes the task on your computer and replies.
The project was built by Peter Steinberger, an Austrian engineer, as a weekend project in November 2025. It went viral. Within ten weeks it had 150,000 GitHub stars; the merged profile puts it at 320,100 today [1][merged profile]. The name has changed twice: it launched as “Clawdbot” (a play on Anthropic’s Claude), was renamed at Anthropic’s request over trademark concerns, briefly became “Moltbot,” and landed on “OpenClaw” in January 2026. The lobster emoji stuck throughout [1][3]. Steinberger has since joined OpenAI, which has committed to keeping the project open source [3].
The core concept is different from a chatbot. ChatGPT tells you how to reorganize your files. OpenClaw reorganizes your files. It runs shell commands, browses the web, reads and writes to your filesystem, and can chain tasks across tools. The “Gateway” process is just routing — the actual work is done by the agent runtime (called “Pi”) running on whatever hardware you point at it [README][2].
Why People Choose It
The reviews paint a consistent picture: OpenClaw wins on the “it actually does things” axis, and loses on the “is it safe to run near anything important” axis.
The appeal. User testimonials on the homepage read like accounts from people who have just watched something fundamental shift. One user quoted on the site: “It’s running my company.” Another: “This is the first time I have felt like I am living in the future since the launch of ChatGPT.” A Raspberry Pi user: “I just finished setting up OpenClaw on my Raspberry Pi with Cloudflare, and it feels magical.” That’s not marketing copy — those are real accounts of what happens when you give an LLM persistent memory, scheduled execution, and access to your actual machine [website].
The Japanese guide [1] breaks down why this feels different from prior AI tools. Three specific properties: execution (it runs commands, doesn’t just suggest them), persistent memory (your preferences and task history survive across sessions via a local MEMORY.md file), and proactive heartbeat (cron-style scheduled tasks that fire without a prompt from you). These three together produce something that behaves less like a chatbot and more like a background employee.
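The persistent-memory property is simple enough to sketch. The snippet below is not OpenClaw’s code — only the MEMORY.md file name comes from the guide; the helper functions are hypothetical — but it shows why a plain Markdown file on disk survives a process restart where a chat context window does not.

```python
from pathlib import Path

# File name from the guide; the helpers below are hypothetical illustrations.
MEMORY_FILE = Path("MEMORY.md")

def remember(note: str) -> None:
    """Append a preference or fact; it survives restarts because it lives on disk."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall() -> list:
    """Load all remembered notes at session start."""
    if not MEMORY_FILE.exists():
        return []
    return [line[2:].strip()
            for line in MEMORY_FILE.read_text(encoding="utf-8").splitlines()
            if line.startswith("- ")]

# Session 1 writes a preference...
remember("User prefers replies in German")
# ...the process can exit and restart; session 2 still sees it:
print(recall())
```

The point is the storage medium, not the code: anything the agent writes to a local file is there the next time it wakes up, with no vendor-side conversation history involved.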
Versus cloud AI assistants. ChatGPT, Claude, and Gemini all run on vendors’ servers. Every prompt, every document you paste in, every calendar event you ask about — passes through their infrastructure. OpenClaw runs on your hardware, uses your API keys, and stores your memory locally. For anyone handling client data, proprietary code, or just preferring not to have their daily schedule live on third-party servers, that’s a real difference [2].
Versus n8n and Activepieces. Those are workflow automation tools — you build flows in a UI and connect apps. OpenClaw is fundamentally different: it’s a conversational agent that can be instructed in natural language to do things, rather than a visual pipeline you configure in advance. They’re not really competing for the same user.
Features
Channel coverage:
- 20+ messaging platforms supported as first-class channels: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Google Chat, Microsoft Teams, Matrix, Feishu, LINE, Mattermost, Nextcloud Talk, Nostr, Synology Chat, Tlon, Twitch, Zalo, WeChat, WebChat [README]
- Plugin channels (Matrix, Nostr, Twitch, Zalo, others) bundled or installable separately [2]
- Single Gateway process serves all channels simultaneously
Agent runtime:
- Shell command execution, file management, browser control [1][README]
- Persistent memory via MEMORY.md and TOOLS.md — survives session restarts, learns your preferences over time [1][2]
- Proactive scheduling: cron-style heartbeat tasks that run without waiting for a prompt [1]
- Multi-agent routing: separate sessions per agent, workspace, or sender [2]
- Human-in-the-loop approval gates: you can require sign-off before the agent executes sensitive actions [5]
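The “heartbeat” concept — tasks that fire on a schedule without any prompt from you — can be sketched in a few lines. This is an illustrative stand-in, not OpenClaw’s scheduler; the interval-based API is invented for the example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Heartbeat:
    """Minimal cron-like loop: each task fires when its interval elapses.
    Illustrative only -- OpenClaw's actual scheduler is not shown in the sources."""
    tasks: list = field(default_factory=list)

    def every(self, seconds, fn):
        # next_due is tracked per task, so tasks fire independently
        self.tasks.append({"interval": seconds, "fn": fn,
                           "next_due": time.monotonic() + seconds})

    def tick(self):
        """Run all due tasks; a daemon would call this in a loop."""
        now = time.monotonic()
        fired = []
        for t in self.tasks:
            if now >= t["next_due"]:
                t["fn"]()
                t["next_due"] = now + t["interval"]
                fired.append(t["fn"].__name__)
        return fired

def check_inbox():
    print("heartbeat: checking inbox")

hb = Heartbeat()
hb.every(0.01, check_inbox)  # every 10 ms, for demonstration
time.sleep(0.02)
fired = hb.tick()
```

The difference from a chatbot is entirely in that loop: the agent acts because a timer elapsed, not because a human typed something.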
Voice and mobile:
- Voice input/output on macOS, iOS, and Android [README]
- iOS and Android companion apps that pair with the Gateway for camera and Canvas workflows [README]
- Voice wake support (docs reference, feature status: beta) [README]
Control UI:
- Browser dashboard at http://127.0.0.1:18789/ for chat, settings, and session management [2][5]
- Remote access via Tailscale or SSH tunnel [2]
Skills ecosystem:
- Installable skills via ClawHub — community-contributed extensions [1][3]
- VirusTotal partnership for skill security scanning (recently announced) [website]
Multi-model support:
- OpenAI, Anthropic, and other providers; OAuth profile rotation with fallbacks [README]
- Routing rules: e.g., send complex tasks to Claude, fast searches to Gemini [1]
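The routing idea (“complex tasks to Claude, fast searches to Gemini”) reduces to a lookup table with a fallback chain. The sketch below uses made-up model labels following the article’s example; it is not OpenClaw’s actual config format.

```python
# Hypothetical routing table following the article's example;
# not OpenClaw's config format, and the model labels are placeholders.
ROUTES = {
    "complex": ["claude-sonnet", "gpt-4o"],      # primary, then fallback
    "search":  ["gemini-flash", "claude-haiku"],
}
DEFAULT = ["gpt-4o-mini"]

def pick_model(task_type: str, available: set) -> str:
    """Return the first reachable model for a task type, falling back in order."""
    for model in ROUTES.get(task_type, DEFAULT):
        if model in available:
            return model
    raise RuntimeError(f"no provider available for {task_type!r}")

# If the primary provider is unavailable, the fallback is chosen:
print(pick_model("complex", available={"gpt-4o", "gemini-flash"}))
```

The fallback chain is what the OAuth profile rotation buys you in practice: a provider outage degrades to a second-choice model instead of a dead assistant.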
Pricing: SaaS vs Self-Hosted Math
OpenClaw has no SaaS tier. There is no subscription, no managed cloud, no per-message pricing. It’s MIT-licensed software you install yourself [README][merged profile].
Your actual costs:
Self-hosted:
- OpenClaw license: $0
- VPS or home server: $5–20/month (or $0 on a Raspberry Pi or spare Mac mini you already own) [1]
- LLM API costs: varies by provider and usage. OpenAI and Anthropic charge per token. A moderately active personal assistant running primarily Claude Sonnet might cost $10–30/month depending on task volume — this is the real cost to model in your budget
- Your time to set up and maintain
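The $10–30/month figure is easy to sanity-check yourself. Every number below — per-token prices and daily volume — is an illustrative assumption, not a quoted rate; substitute your provider’s current pricing.

```python
# All numbers are illustrative assumptions -- substitute your provider's rates.
INPUT_PRICE_PER_MTOK = 3.00    # USD per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # USD per million output tokens (assumed)

def monthly_cost(input_tok_per_day: int, output_tok_per_day: int, days: int = 30) -> float:
    """Estimated monthly API spend for a given daily token volume."""
    daily = (input_tok_per_day / 1e6) * INPUT_PRICE_PER_MTOK \
          + (output_tok_per_day / 1e6) * OUTPUT_PRICE_PER_MTOK
    return round(daily * days, 2)

# A moderately active assistant: ~100k input + 20k output tokens per day
print(monthly_cost(100_000, 20_000))
```

Under those assumed rates a 100k-in/20k-out daily volume lands around $18/month — inside the band quoted above, and a reminder that heartbeat tasks running around the clock multiply this quickly.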
Comparison to commercial alternatives:
- ChatGPT Plus: $20/month, server-side, conversation history under OpenAI’s control
- Claude Max: $100/month, same constraints
- Zapier AI agents: usage-based, can run $50–200+/month at meaningful task volumes
If you already pay for a Claude or OpenAI subscription, the marginal cost of self-hosting OpenClaw is effectively just a cheap VPS. The token costs through the API will likely exceed what you were paying anyway, but you get more control, no rate limits tied to a consumer plan, and the ability to automate background tasks that don’t require you to be at a keyboard.
Deployment Reality Check
The recommended install path is npm install -g openclaw@latest followed by openclaw onboard --install-daemon. The onboarding wizard walks through gateway setup, channel pairing, and skill installation [README][5].
Runtime requirements: Node 24 (Node 22.16+ minimum). Works on macOS, Linux, and Windows via WSL2 — native Windows is not recommended and reportedly unstable [5].
What you actually need:
- A Linux machine or macOS box with Node 24
- API keys for your AI provider
- A way to expose the gateway externally if you want mobile access (Tailscale is the cleanest option; raw port forwarding without HTTPS is specifically flagged as a risk by CrowdStrike [3])
- About 30–60 minutes for a technical user following the docs; 2–4 hours if you’ve never configured a Node daemon before
Windows-specific: WSL2 (Ubuntu 24.04) is the supported path on Windows. The Japanese guide [5] walks through the full setup: wsl --install -d Ubuntu-24.04, Node via n, permission fix for global npm packages, then the standard OpenClaw install. The openclaw-bridge tool handles Windows-to-WSL communication so the browser control UI at localhost:18789 works from your Windows browser [5].
What can go sideways:
- The skills ecosystem (ClawHub) is an active attack surface. A malware campaign called “ClawHavoc” distributed malicious skills through it; the recommendation from security researchers is to install only skills you’ve reviewed [3].
- CrowdStrike found many internet-exposed OpenClaw instances running HTTP without TLS [3]. If you expose the gateway port without a reverse proxy and HTTPS, you’re broadcasting your API keys and session data in plaintext.
- The 40+ security patches issued through February 2026 include RCE (remote code execution), prompt injection, and session hijacking fixes [3]. The project moves fast, which means the patch cadence is active — but it also means the codebase had serious vulnerabilities that needed patching.
- The “Moltbook” social platform associated with the project had 1.5 million API tokens leaked in one incident [3]. That’s a separate product, but it’s context for the operational security posture of the project ecosystem.
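CrowdStrike’s plaintext-HTTP finding is worth internalizing as a rule you can check before exposing anything. The function below is a sketch of that rule — plain HTTP is tolerable only on loopback — not an OpenClaw feature; the port 18789 is the dashboard port mentioned above.

```python
from urllib.parse import urlparse

LOOPBACK = {"127.0.0.1", "localhost", "::1"}

def exposure_risk(gateway_url: str) -> str:
    """Classify a gateway URL: plain HTTP is acceptable only on loopback.
    Sketch of the rule described in the article, not an OpenClaw feature."""
    u = urlparse(gateway_url)
    if u.scheme == "https":
        return "ok"
    if u.scheme == "http" and u.hostname in LOOPBACK:
        return "ok-local-only"
    return "risky-plaintext"

print(exposure_risk("http://127.0.0.1:18789/"))    # the local dashboard
print(exposure_risk("http://203.0.113.7:18789/"))  # a public IP over HTTP
```

If your gateway URL falls in the last category, API keys and session data cross the wire in cleartext — exactly the deployments CrowdStrike found at scale.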
Pros and Cons
Pros
- MIT license, truly self-hosted. Your data stays on your hardware. No vendor can raise prices, change terms, or access your conversation history [README][2].
- Broadest messaging channel coverage in the category. 20+ platforms in a single Gateway process is genuinely impressive and practically useful [README].
- Persistent memory that actually works. The MEMORY.md/TOOLS.md approach means the agent accumulates context over time rather than starting fresh each session. Multiple users describe this as the feature that makes it feel qualitatively different [1][2].
- Proactive scheduling. Heartbeat tasks run without you sending a prompt. This is what makes it feel like an agent rather than a chatbot [1].
- 320,000+ GitHub stars in under five months — one of the fastest-growing open source projects in recent memory, which means community, skills, tutorials, and bug reports are abundant [merged profile][1].
- Human-in-the-loop controls. You can require approval before sensitive actions, which meaningfully reduces the blast radius of agent errors [5].
- Multi-model routing. Not locked to one provider; can route different task types to different models [README][1].
Cons
- Documented, serious security risks. Gartner called it an “unacceptable cybersecurity risk.” Cisco’s AI security team named it as a bad example. CrowdStrike found widespread unencrypted deployments. These aren’t hypothetical warnings — they’re specific findings from organizations that study this [3].
- Prompt injection is acknowledged as out of scope. The official security documentation classifies prompt injection as “not a vulnerability by design.” On a tool that autonomously browses the web and executes commands, this is a significant position [3].
- Skills supply chain is an active attack surface. ClawHavoc malware was distributed through the skills ecosystem. Review every skill before installing it [3].
- RCE vulnerabilities in the history. 40+ security patches through February 2026 include remote code execution fixes. Don’t run this on a machine with access to production systems or credentials you can’t afford to lose [3].
- Not beginner-friendly. Despite the polished onboarding wizard, getting OpenClaw running correctly with a stable daemon, external access, and proper TLS requires real systems knowledge. The non-technical founder use case is not well served here [5].
- No managed cloud option. If you want OpenClaw’s capabilities without the ops burden, there’s no official hosted tier. You either self-host or you don’t use it.
- The skills ecosystem is still immature compared to what the platform promises. ClawHub has growing content but many integrations require custom configuration or community-maintained skills that may be abandoned [1].
Who Should Use This / Who Shouldn’t
Use OpenClaw if:
- You’re a developer or power user comfortable with Linux, Node.js, and managing a daemon process.
- You want a persistent AI agent that works across the messaging apps you already use, without paying per-task or trusting a vendor with your data.
- You have hardware to run it on — a spare Mac mini, a Raspberry Pi, a cheap VPS — and you understand that the LLM API costs are where your actual spend goes.
- You’re building on top of it: custom skills, multi-agent setups, IoT integrations.
- You want control over which AI model handles which tasks, with fallback routing.
Don’t use OpenClaw if:
- You have any machine running it connected to production databases, payment systems, or credentials with broad access. The RCE history and prompt injection exposure make this a real risk, not theoretical [3].
- You’re a non-technical founder who heard about it on Twitter and wants an AI assistant that “just works.” The setup requires genuine technical skill; the security model requires active judgment about what access to grant.
- You work in a regulated industry. Gartner’s warning and the Belgian Cybersecurity Centre’s advisory mean your compliance team will likely say no anyway [3].
- You need a guaranteed SLA or a support tier. This is community open source; when something breaks you’re debugging it yourself.
Try a safer alternative if:
- You want the self-hosted AI agent concept but with a more conservative security posture — NanoClaw (sandbox-first, no root) or Cloudflare-based MoltWorker are cited as alternatives with narrower attack surfaces [3].
- You want AI automation without autonomous computer access — n8n or Activepieces handle structured workflow automation with less ambient risk.
Alternatives Worth Considering
- NanoClaw — fork focused on security: runs in containers by default, doesn’t request root, minimal permissions. Fewer skills available, but the security model is more defensible [3].
- MoltWorker — runs on Cloudflare Workers; no local machine access required, which eliminates the RCE and local filesystem risk categories entirely [3].
- ChatGPT Operator / Agents (OpenAI) — vendor-hosted, no self-deployment, but OpenAI handles the security surface. You trade data sovereignty for not having to run a daemon.
- n8n — workflow automation, not conversational agents, but if your use case is “connect apps and automate tasks,” n8n is more mature, more audited, and better documented for production deployments.
- Claude.ai Projects — persistent context across conversations, no setup required, Anthropic handles security. Not agentic in the same way, but covers the “remember my preferences” use case without any deployment overhead.
Bottom Line
OpenClaw is genuinely impressive as an engineering artifact and as a demonstration of what self-hosted AI agents can do. The messaging channel breadth, persistent memory, and proactive scheduling are all real capabilities that make it feel different from a chatbot. If you’re a developer who wants to run an AI agent on your own hardware and you’re comfortable managing the security surface carefully — isolated from sensitive credentials, behind proper TLS, with skills reviewed before installation — it’s worth experimenting with.
But the security track record through early 2026 is not the track record of software you deploy near anything important. Forty-plus patches including RCE, a malware campaign in the skills ecosystem, unencrypted deployments found at scale, and a design decision to treat prompt injection as out of scope — that combination deserves real weight before you hand this thing access to your file system and your API keys. The promise is there. The operational maturity to match it is still catching up.
If the security research and self-hosting complexity are the blockers — or if you want an AI setup done correctly for your business without carrying the ops burden — that’s exactly the kind of deployment upready.dev handles for clients.
Sources
- [1] AI Worker (note.com) — “[2026 Update] OpenClaw Complete Introductory Guide: From Installation to Practical Use” (Feb 11, 2026). https://note.com/ai__worker/n/nd8e0f87145b5
- [2] OpenClaw Official Docs (Japanese) — “OpenClaw — docs.openclaw.ai”. https://docs.openclaw.ai/ja-JP
- [3] TechGym (techgym.jp) — “What Is OpenClaw? Security Risks and Safer Alternatives Explained”. https://techgym.jp/column/openclaw/
- [4] Skywork AI (skywork.ai) — “2026 Edition: What Is OpenClaw? A Complete Look at the Tool Expanding What AI Agents Can Do, with Comparisons” (Apr 16, 2026). https://skywork.ai/skypage/ja/openclaw-ai-agent-innovation/2044719080312340481
- [5] AIエージェントナビ (aiagent-navi.com) — “[2026 Update] How to Run OpenClaw as a Windows Daemon and Hand Tasks Off to AI: A Concrete Setup Guide”. https://aiagent-navi.com/ai-agent/openclaw-wsl-installation-guide/
Primary sources:
- GitHub repository and README: https://github.com/openclaw/openclaw (320,100+ stars, MIT license)
- Official website: https://openclaw.ai
- Documentation: https://docs.openclaw.ai