
OpenClaw Security Risks

OpenClaw can read files, run commands, and connect to external services, which means incorrect defaults can be dangerous. Security starts with trust boundaries, not with a single setting.

OpenClaw guide | Updated 2026 | Practical setup steps

What You Need to Know

The primary threat surfaces are inbound messages, third-party skills, and credential exposure. Treat every DM, group message, and community-authored skill as untrusted input until explicitly verified. Use dmPolicy: "paired-only" or strict allowlists from day one, and never expose your assistant to public channels without understanding the implications.
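As a minimal sketch, the pairing-only stance might look like this in openclaw.json. Only dmPolicy is named above; the allowlist field name and value here are illustrative assumptions, so check your version's config reference for the exact schema:

```json
{
  "dmPolicy": "paired-only",
  "dmAllowlist": ["trusted-contact-id"]
}
```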

Skill safety is the biggest practical risk. Community skills from ClawHub can execute arbitrary code on your machine. Before installing any skill, inspect the source for red flags: obfuscated code, hidden network calls, curl | bash patterns, base64-encoded payloads, or privileged filesystem access without a clear reason. Prefer bundled skills for security-critical setups.
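A rough pre-install check can grep a skill's source tree for the red-flag patterns listed above. This is a heuristic sketch (the patterns and the helper name are this guide's own, not part of OpenClaw), and it is no substitute for actually reading the code:

```shell
# scan_skill DIR: grep a skill's source tree for common red-flag patterns.
# Returns 1 (and prints the matches) if anything suspicious is found.
scan_skill() {
  grep -rEn \
    -e 'curl[^|]*\|[[:space:]]*(ba)?sh' \
    -e 'wget[^|]*\|[[:space:]]*(ba)?sh' \
    -e 'base64[[:space:]]+(-d|--decode)' \
    -e 'eval[[:space:]]' \
    "$1" && return 1 || return 0
}
```

Obfuscated code will not always match simple patterns like these, which is exactly why a clean scan still means "read the source," not "safe to install."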

Sandbox mode is your strongest containment layer. Enable sandboxEnabled: true for non-main sessions to restrict file access, command execution, and network calls to a controlled boundary. The sandbox now rejects hardlink alias escapes and enforces mkdirp boundary checks, but it is not foolproof; treat it as defense-in-depth, not a guarantee.
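In openclaw.json this is a one-line boolean; a minimal sketch, assuming sandboxEnabled sits at the level where your session settings live (where exactly it nests depends on your version's schema):

```json
{
  "sandboxEnabled": true
}
```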

Credential hygiene matters more than most users expect. Set chmod 600 on openclaw.json and chmod 700 on the credentials directory. Never store API keys in workspace files that the AI can read. Use the new openclaw secrets workflow (audit, configure, apply, reload) to manage secrets through ref-only auth profiles instead of inline values.
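The file-permission part of this can be scripted. A sketch, assuming a ~/.openclaw layout (the paths are illustrative; adjust them to wherever your install keeps its config and credentials):

```shell
# Illustrative paths; adjust to your install's actual locations.
CONFIG="$HOME/.openclaw/openclaw.json"
CREDS="$HOME/.openclaw/credentials"

mkdir -p "$CREDS"      # ensure the directory exists before locking it down
touch "$CONFIG"

chmod 600 "$CONFIG"    # owner read/write only
chmod 700 "$CREDS"     # owner-only access to the credentials directory

# Verify (GNU stat; on BSD/macOS use `stat -f '%Lp'` instead):
stat -c '%a %n' "$CONFIG" "$CREDS"
```

Permissions only protect against other local users and careless backups; keys the assistant can read from workspace files are exposed regardless, which is why the ref-only secrets workflow matters.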

Regular audits catch drift. Run openclaw doctor --deep --yes weekly, openclaw security audit --deep after adding new skills or channels, and review AGENTS.md constraints whenever you change your assistant's scope. The goal is a setup where the assistant has exactly the permissions it needs and nothing more: least privilege applied to an AI agent.
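The weekly cadence can be automated with cron; a sketch of a crontab entry (the schedule is an arbitrary choice, and the commands are the ones from this guide):

```shell
# Weekly deep health check, Mondays at 09:00 (edit with `crontab -e`)
0 9 * * 1  openclaw doctor --deep --yes

# Run manually after adding skills or channels:
#   openclaw security audit --deep
```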