How to Run OpenClaw in Docker: Complete Setup Guide (2026)

Step-by-step guide to running OpenClaw, the open-source AI assistant, in a Docker container. Covers installation, configuration, Telegram setup, dashboard access, and production hardening.

Alex Daro

OpenClaw is an open-source personal AI assistant with over 219,000 GitHub stars. It connects to every messaging platform you already use — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Microsoft Teams — and provides a single agent you control. Running OpenClaw in Docker is the fastest path to a self-hosted AI assistant that stays isolated from your host system.

This guide covers everything from first docker build to a working multi-channel assistant accessible from your phone.

Why Docker for OpenClaw

Running OpenClaw directly on your machine works fine for development, but Docker offers clear advantages for anything beyond casual use:

  • Isolation — OpenClaw has shell access, browser control, and file system tools. A container limits the blast radius.
  • Reproducibility — Pin an image tag and get identical behavior across machines.
  • Portability — Run the same image on your Mac, a Linux VPS, or a cloud instance.
  • Cleanup — Remove the container and your host stays clean; persistent state lives only in the two directories you mount.

If you need stronger isolation than Docker provides — hardware-enforced memory encryption with cryptographic attestation — see our guide on running OpenClaw in a Trusted Execution Environment.

Prerequisites

  • Docker Desktop (macOS/Windows) or Docker Engine (Linux)
  • Node.js 22+ (for the OpenClaw CLI, used for initial setup)
  • An API key for your preferred model provider (Anthropic, OpenAI, or ChatGPT OAuth)
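Before proceeding, a quick preflight check for the Node.js requirement can save a confusing failure later (a sketch; assumes node is already on your PATH):

```shell
# Fail fast if the installed Node.js is older than 22.
node -e '
  const major = Number(process.versions.node.split(".")[0]);
  if (major < 22) {
    console.error(`Node 22+ required, found ${process.version}`);
    process.exit(1);
  }
  console.log(`Node ${process.version} OK`);
'
```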

Quick Start with Docker Compose

OpenClaw ships with Docker Compose support out of the box. This is the recommended path.

Clone and Build

git clone https://github.com/openclaw/openclaw.git
cd openclaw

Run the setup script that handles the Docker Compose configuration:

./docker-setup.sh

This creates two mount points on your host:

  • ~/.openclaw: Configuration, credentials, memory
  • ~/openclaw/workspace: Files the agent can read and write
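Under the hood, the generated Compose service wires these mounts up roughly like this (a sketch; the actual service names and options come from the file the script generates):

```yaml
services:
  openclaw-gateway:
    image: openclaw:local
    ports:
      - "18789:18789"
    volumes:
      - ~/.openclaw:/root/.openclaw                    # config, credentials, memory
      - ~/openclaw/workspace:/root/openclaw/workspace  # files the agent can read and write
```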

The setup script launches the onboarding wizard, which walks you through model selection and channel configuration.

Onboarding Choices That Matter

The wizard asks a lot of questions. Here are the ones that affect your Docker deployment:

Onboarding mode — Choose manual for full control over each step.

Model provider — You have two main options:

  1. API key (Anthropic or OpenAI): Straightforward but usage is billed per token. OpenClaw can burn through tokens quickly when using tools.
  2. ChatGPT OAuth: Authenticates against your existing ChatGPT subscription, which caps your spend at whatever you already pay for ChatGPT. The OAuth flow prints a localhost callback URL that will fail to load in your browser, because the redirect target lives inside the container; copy the full URL from the failed page and paste it back into the wizard.

Tailscale — Skip this on first setup. You can add it later if you want remote access.

Verify the Container

After setup completes:

docker ps

You should see a container running the openclaw:local image, typically named openclaw-openclaw-gateway-1.

Manual Docker Setup

If you prefer not to use the setup script, you can run OpenClaw directly:

docker build -t openclaw:local .

Create the config directory and a minimal configuration:

mkdir -p ~/.openclaw
{
  "agent": {
    "model": "anthropic/claude-opus-4-6"
  },
  "gateway": {
    "port": 18789,
    "bind": "0.0.0.0"
  }
}

Save that as ~/.openclaw/openclaw.json, then run:

docker run -d \
  --name openclaw \
  -p 18789:18789 \
  -v ~/.openclaw:/root/.openclaw \
  -v ~/openclaw/workspace:/root/openclaw/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-your-key-here \
  openclaw:local \
  node dist/index.js gateway --port 18789 --verbose
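If you prefer to script the config step, you can write and validate the same file in one shot (a sketch; assumes python3 is available on the host for the JSON check):

```shell
# Write the minimal config and confirm it parses as valid JSON
# before handing it to the container.
mkdir -p ~/.openclaw
cat > ~/.openclaw/openclaw.json <<'EOF'
{
  "agent": {
    "model": "anthropic/claude-opus-4-6"
  },
  "gateway": {
    "port": 18789,
    "bind": "0.0.0.0"
  }
}
EOF
python3 -m json.tool ~/.openclaw/openclaw.json > /dev/null && echo "config OK"
```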

Connect Messaging Channels

OpenClaw's value is that it meets you where you already are. Here is how to wire up the most common channels.

Telegram

Telegram is the lowest-friction channel for a Docker deployment because it only requires a bot token — no QR codes or device linking.

  1. Open Telegram and message @BotFather
  2. Send /newbot and follow the prompts
  3. Copy the bot token

Add the token to your config:

{
  "agent": {
    "model": "anthropic/claude-opus-4-6"
  },
  "channels": {
    "telegram": {
      "botToken": "123456:ABCDEF-your-token"
    }
  }
}

Restart the container and message your bot. OpenClaw will respond with a pairing code. Approve it:

docker compose run --rm openclaw-cli pairing approve telegram <CODE>

Slack

Set up a Slack app with Socket Mode enabled, then add both tokens:

{
  "channels": {
    "slack": {
      "botToken": "xoxb-your-bot-token",
      "appToken": "xapp-your-app-token"
    }
  }
}

Discord

Create a Discord bot in the Developer Portal and add the token:

{
  "channels": {
    "discord": {
      "token": "your-discord-bot-token"
    }
  }
}

WhatsApp

WhatsApp requires a QR code scan for initial device linking. Run the login command:

docker compose run --rm openclaw-cli channels login

This stores session credentials in ~/.openclaw/credentials. Subsequent container restarts reuse the existing session.

Access the Web Dashboard

OpenClaw includes a web UI on port 18789. Get an authenticated URL:

docker compose run --rm openclaw-cli dashboard --no-open

This prints a URL with a ?token=... parameter. Open it in your browser.

If you see a "pairing required" disconnection error, you need to approve the web client:

docker compose exec openclaw-gateway \
  node dist/index.js devices list

Find the pending request and approve it:

docker compose exec openclaw-gateway \
  node dist/index.js devices approve <REQUEST_ID>

Run Commands Inside the Container

The openclaw-cli service from Docker Compose handles most administrative tasks:

# Check status
docker compose run --rm openclaw-cli status
 
# Send a message
docker compose run --rm openclaw-cli message send \
  --to "+1234567890" --message "Hello from Docker"
 
# Talk to the agent directly
docker compose run --rm openclaw-cli agent \
  --message "What can you do?" --thinking high
 
# Restart the gateway
docker compose run --rm openclaw-cli gateway restart
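Typing the full docker compose run prefix gets old quickly; a small shell function (a convenience sketch, not part of OpenClaw) shortens it:

```shell
# Add to ~/.bashrc or ~/.zshrc; run from the cloned openclaw directory.
oc() {
  docker compose run --rm openclaw-cli "$@"
}
# usage: oc status
#        oc gateway restart
```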

For commands that need root access (installing packages, etc.):

docker compose exec -u root openclaw-gateway bash
apt-get update && apt-get install -y ripgrep

Production Hardening

A default Docker setup is fine for personal use. For anything more exposed, consider these steps.

DM Security

By default, OpenClaw uses pairing-based DM access — unknown senders get a pairing code and their messages are not processed until approved. This is safe.

To verify your DM security posture:

docker compose run --rm openclaw-cli doctor

Never set dmPolicy: "open" unless you explicitly intend for anyone to message your bot.
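In config terms, the safe default looks like this (a sketch; the exact placement of dmPolicy within the channel block is an assumption, so check your generated config):

```json
{
  "channels": {
    "telegram": {
      "botToken": "123456:ABCDEF-your-token",
      "dmPolicy": "pairing"
    }
  }
}
```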

Resource Limits

OpenClaw's browser tool and shell execution can consume significant resources. Add limits in your docker-compose.yml:

services:
  openclaw-gateway:
    deploy:
      resources:
        limits:
          memory: 4G
          cpus: "2.0"

Persistent Credentials

The ~/.openclaw volume mount already persists credentials across container restarts. Back up this directory regularly — it contains your WhatsApp session state, API tokens, and conversation history.
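A minimal backup can be a dated tarball of the state directory (a sketch; adjust the destination path to taste):

```shell
# Archive the OpenClaw state directory (credentials, sessions, history)
# into a dated tarball in the current directory.
mkdir -p ~/.openclaw   # no-op if it already exists
tar czf "openclaw-backup-$(date +%F).tar.gz" -C "$HOME" .openclaw
```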

Logging

Stream logs to a file for debugging:

docker compose logs -f openclaw-gateway > openclaw.log 2>&1 &

Or configure CloudWatch, Datadog, or any Docker logging driver in your Compose file.
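For example, Docker's built-in json-file driver supports log rotation directly in Compose (the values here are illustrative):

```yaml
services:
  openclaw-gateway:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate after 10 MB
        max-file: "3"     # keep at most three rotated files
```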

Common Issues

Container exits immediately — Check logs with docker compose logs openclaw-gateway. Usually a missing API key or malformed config file.

WhatsApp disconnects after restart — The credentials volume might not be mounted correctly. Verify the mount in docker ps output.

High token usage — OpenClaw uses tools aggressively. Set agents.defaults.thinkingLevel: "low" to reduce token consumption, or use ChatGPT OAuth to cap spend.
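The thinking-level override goes in your openclaw.json (a sketch; merge it into your existing config rather than replacing it):

```json
{
  "agents": {
    "defaults": {
      "thinkingLevel": "low"
    }
  }
}
```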

"pairing required" in dashboard — See the device approval steps above. The web client needs explicit approval.

Beyond Docker: Trusted Execution Environments

Docker provides process-level isolation, but the host operator can still inspect container memory, read mounted volumes, and intercept network traffic. If you handle sensitive credentials or need verifiable guarantees about what code is running, a Trusted Execution Environment (TEE) adds hardware-enforced isolation with cryptographic attestation.

Read our deep dive on why AI assistants need TEEs, or jump straight to running OpenClaw in a Treza TEE for a step-by-step deployment guide using AWS Nitro Enclaves.


Want hardware-level isolation? Treza provides managed TEE infrastructure that runs any Docker workload — including OpenClaw — inside AWS Nitro Enclaves with a single CLI command. Visit trezalabs.com to get started.

Ready to get started?

Get in touch to learn how Treza can help your team build privacy-first applications.