How to Run OpenClaw in a Trusted Execution Environment (TEE) with Treza
Deploy OpenClaw, the open-source personal AI assistant, inside a hardware-isolated Trusted Execution Environment using Treza's Nitro Enclave infrastructure. Full walkthrough from Docker image to cryptographic attestation.

Running AI assistants on bare metal or standard cloud VMs introduces a class of risk that most teams quietly accept: the host operator can inspect memory, intercept API keys, and tamper with the runtime. Trusted Execution Environments (TEEs) eliminate that risk at the hardware level. This guide walks through deploying OpenClaw — the open-source personal AI assistant with 219k+ GitHub stars — inside a Treza TEE backed by AWS Nitro Enclaves.
By the end you will have an OpenClaw gateway running in a hardware-isolated enclave with cryptographic attestation proving exactly what code is executing.
New to OpenClaw? Start with our guide to running OpenClaw in Docker. Want to understand TEE fundamentals first? Read why AI assistants need Trusted Execution Environments.
Why Run OpenClaw in a TEE?
OpenClaw connects to real messaging surfaces — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Microsoft Teams — and holds long-running sessions with access to tools like browser control, file operations, and shell execution. That means it handles:
- Model API keys (Anthropic, OpenAI, or OAuth tokens)
- Messaging credentials (Telegram bot tokens, WhatsApp session state, Slack app tokens)
- User conversation history across every connected channel
- Tool execution context including file system access and shell commands
On a standard VM, the cloud operator, a compromised hypervisor, or a privileged insider can access all of this. A Trusted Execution Environment creates a hardware-enforced boundary where even the host operating system cannot read enclave memory or tamper with the running process.
What a TEE Guarantees
| Property | What It Means |
|---|---|
| Memory isolation | Enclave RAM is encrypted and inaccessible to the host OS, hypervisor, or other tenants |
| Code integrity | Cryptographic measurements (PCR values) prove exactly which binary is running |
| Attestation | A hardware-signed document proves the enclave identity to remote parties |
| No persistent storage | Enclaves are ephemeral — no disk means no data remanence |
| No interactive access | No SSH, no shell — the only interface is a defined vsock channel |
For an AI assistant that manages credentials and executes tools on your behalf, these properties are not academic. They are the difference between trusting an operator's promise and trusting hardware-enforced cryptography.
Treza TEE Architecture
Treza provides managed TEE infrastructure built on AWS Nitro Enclaves. The architecture isolates your workload from the host while maintaining network connectivity through a secure proxy layer.
```
┌─────────────────────────────────────────────────┐
│               EC2 Parent Instance               │
│                                                 │
│  ┌───────────────┐     ┌──────────────────────┐ │
│  │ Parent Proxy  │◄───►│    Nitro Enclave     │ │
│  │ (vsock bridge)│     │                      │ │
│  │               │     │  ┌────────────────┐  │ │
│  │ • HTTP fwd    │     │  │ Enclave Proxy  │  │ │
│  │ • KMS proxy   │     │  │ (vsock ↔ TCP)  │  │ │
│  │ • Log stream  │     │  └───────┬────────┘  │ │
│  └───────────────┘     │          │           │ │
│                        │  ┌───────▼────────┐  │ │
│                        │  │    OpenClaw    │  │ │
│                        │  │    Gateway     │  │ │
│                        │  └────────────────┘  │ │
│                        └──────────────────────┘ │
└─────────────────────────────────────────────────┘
```
The enclave communicates with the outside world exclusively through vsock — a virtual socket that carries length-prefixed JSON messages. The parent proxy translates these into HTTP requests, KMS API calls, and CloudWatch log streams. There is no network stack inside the enclave itself.
Proxy Message Types
The vsock channel between the enclave and parent carries five message types:
- http_request — Outbound HTTP calls (model APIs, webhooks)
- kms_request — AWS KMS operations gated by attestation documents
- log — Application logs streamed to CloudWatch
- health_report — Periodic health status from the enclave
- pcr_request — Retrieves Platform Configuration Register values
This constrained interface means the OpenClaw process inside the enclave can reach external APIs but cannot be reached or inspected by anything on the host.
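The wire format of these frames is not spelled out beyond "length-prefixed JSON", so the prefix width and field names in the following Python sketch are assumptions; it only illustrates how such framing works in principle:

```python
import json
import struct

def encode_frame(msg: dict) -> bytes:
    """Serialize a message as a 4-byte big-endian length prefix plus a JSON body.
    (The prefix width is an assumption; the protocol is only described as
    'length-prefixed JSON'.)"""
    body = json.dumps(msg).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_frame(buf: bytes) -> tuple[dict, bytes]:
    """Parse one frame off the front of buf and return (message, remaining bytes)."""
    (length,) = struct.unpack(">I", buf[:4])
    return json.loads(buf[4:4 + length]), buf[4 + length:]

# Hypothetical outbound model-API call wrapped as an http_request message.
frame = encode_frame({"type": "http_request", "method": "POST",
                      "url": "https://api.anthropic.com/v1/messages"})
msg, rest = decode_frame(frame)
```

The parent proxy would read the prefix, pull that many bytes off the vsock stream, and dispatch on the message type.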
Prerequisites
Before deploying OpenClaw in a Treza TEE, you need:
- A Treza account with enclave access enabled
- The Treza CLI installed and authenticated
- A Docker image of OpenClaw configured for your use case
- Model provider credentials (Anthropic API key or OpenAI/ChatGPT OAuth)
Install the Treza CLI
```bash
npm install -g @treza/cli
treza config init
```
Build the OpenClaw Docker Image
OpenClaw ships with Docker support. Clone the repo and build a production image:
```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw
docker build -t openclaw:tee .
```
Alternatively, use the pre-built image and layer your configuration on top:
```dockerfile
FROM openclaw:local
COPY openclaw.json /root/.openclaw/openclaw.json
```
Your openclaw.json should contain the model configuration and any channel tokens. Since this will run inside a TEE, these secrets are protected by hardware isolation at runtime — they cannot be extracted from the enclave even by the host operator. (They still exist in the image itself, so push it to a private registry.)
```json
{
  "agent": {
    "model": "anthropic/claude-opus-4-6"
  },
  "channels": {
    "telegram": {
      "botToken": "${TELEGRAM_BOT_TOKEN}"
    }
  },
  "gateway": {
    "port": 18789,
    "bind": "0.0.0.0"
  }
}
```
Push the Image to a Registry
The image needs to be accessible from the Treza build pipeline:
```bash
docker tag openclaw:tee your-registry.com/openclaw:tee
docker push your-registry.com/openclaw:tee
```
Deploy OpenClaw to a Treza TEE
With the CLI authenticated and the image pushed, create the enclave:
```bash
treza enclave create \
  --name "openclaw-assistant" \
  --description "OpenClaw personal AI assistant in TEE" \
  --region us-east-1 \
  --provider aws-nitro-enclave \
  --image your-registry.com/openclaw:tee \
  --workload-type service \
  --health-path /health \
  --health-interval 30 \
  --aws-services kms \
  --expose-ports 18789
```
Let's break down what each flag does:
| Flag | Purpose |
|---|---|
| --provider aws-nitro-enclave | Selects AWS Nitro Enclaves as the TEE backend |
| --workload-type service | Long-running process with health checks (vs. batch or daemon) |
| --health-path /health | OpenClaw's gateway health endpoint |
| --health-interval 30 | Health check every 30 seconds |
| --aws-services kms | Enables KMS proxy for attestation-gated key operations |
| --expose-ports 18789 | Exposes the OpenClaw gateway port through the parent proxy |
What Happens During Deployment
When you run treza enclave create, the platform executes a multi-step workflow:
- Record creation — A DynamoDB entry is created with status PENDING_DEPLOY
- Workflow trigger — A DynamoDB Stream event triggers an AWS Step Functions execution
- Validation — The deployment request is validated (image accessibility, region availability)
- Infrastructure provisioning — Terraform creates an EC2 instance with Nitro Enclave support, builds the Enclave Image File (EIF) from your Docker image, and configures the proxy layer
- Enclave boot — The EIF is loaded into the Nitro Enclave, the vsock proxy starts, and health checks begin
- Status update — Status transitions to DEPLOYED
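In automation you can wait on this workflow by polling the enclave status until it reaches DEPLOYED. A minimal Python sketch; note that the terminal failure-state names (FAILED, TERMINATED) are assumptions, and in practice get_status would shell out to treza enclave get ... --json and read the status field:

```python
import time

def wait_for_deploy(get_status, timeout_s=600, interval_s=10):
    """Poll get_status() until the enclave reports DEPLOYED, failing fast on
    terminal states. (The FAILED/TERMINATED state names are assumptions.)"""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "DEPLOYED":
            return status
        if status in ("FAILED", "TERMINATED"):
            raise RuntimeError(f"deployment ended in state {status}")
        time.sleep(interval_s)
    raise TimeoutError("enclave did not reach DEPLOYED before the timeout")
```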
Monitor the deployment:
```bash
treza enclave get openclaw-assistant
```
Stream the logs:
```bash
treza enclave logs openclaw-assistant --type application
```
Verify the TEE with Cryptographic Attestation
Deployment alone is not enough. The value of a Trusted Execution Environment comes from attestation — cryptographic proof that the enclave is running exactly the code you expect.
Platform Configuration Registers (PCRs)
Every Nitro Enclave produces a set of PCR values that uniquely identify the running software:
| PCR | What It Measures |
|---|---|
| PCR0 | Hash of the Enclave Image File (EIF) — the exact binary |
| PCR1 | Hash of the Linux kernel and boot RAM filesystem |
| PCR2 | Hash of the application layer |
| PCR8 | Hash of the signing certificate (if the image is signed) |
Retrieve the PCR values for your enclave:
```bash
treza enclave get openclaw-assistant --json | jq '.pcrs'
```
Or use the Treza API:
```bash
curl -H "Authorization: Bearer $TREZA_API_KEY" \
  https://api.trezalabs.com/api/enclaves/{id}/pcrs
```
Attestation Verification
Treza provides an attestation verification endpoint that checks the integrity of the enclave and returns a trust assessment:
```bash
curl -H "Authorization: Bearer $TREZA_API_KEY" \
  https://api.trezalabs.com/api/enclaves/{id}/attestation/verify
```
The response includes:
```json
{
  "status": "VERIFIED",
  "integrityScore": 100,
  "trustLevel": "HIGH",
  "pcrs": {
    "PCR0": "a1b2c3d4...",
    "PCR1": "e5f6a7b8...",
    "PCR2": "c9d0e1f2..."
  },
  "attestationDocument": "base64-encoded-document"
}
```
A VERIFIED status with HIGH trust level confirms that the hardware attestation is valid, the PCR values match the expected image, and the enclave has not been tampered with.
Reproducible Builds for Auditable Attestation
For maximum trust, use reproducible Docker builds so that anyone can independently verify that a given PCR0 value corresponds to a specific OpenClaw commit:
```dockerfile
FROM node:22-slim@sha256:abc123... AS builder
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm build

FROM node:22-slim@sha256:abc123...
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
CMD ["node", "dist/index.js", "gateway", "--port", "18789"]
```
Pin your base images by digest, use lockfiles, and avoid non-deterministic build steps. This lets third parties rebuild the image and confirm the PCR0 hash matches.
Configure OpenClaw Channels Inside the TEE
Once the enclave is running, pair your messaging channels. Since there is no SSH access to the enclave, channel configuration happens through the OpenClaw config file baked into the Docker image or via the exposed gateway API.
Telegram
Add the bot token to your openclaw.json before building the image, then pair:
```bash
# From any machine that can reach the enclave's exposed port
openclaw pairing approve telegram <CODE>
```
WhatsApp
WhatsApp requires a QR code scan for initial pairing. For TEE deployments, run the initial WhatsApp linking step locally, export the credentials directory, and include it in the Docker image:
```dockerfile
COPY .openclaw/credentials /root/.openclaw/credentials
```
Gateway Dashboard
Access the OpenClaw web UI through the exposed port. Get the dashboard URL with token:
```bash
# Retrieve the gateway URL from enclave metadata
treza enclave get openclaw-assistant --json | jq '.endpoints'
```
TEE Security Model for AI Assistants
Running OpenClaw in a Trusted Execution Environment changes the security model fundamentally:
What the TEE Protects
- API keys and OAuth tokens are only accessible inside the enclave. The host operator cannot extract them from memory.
- Conversation history exists only in enclave RAM. When the enclave terminates, it is gone — no disk forensics possible.
- Tool execution (browser, shell, file operations) happens inside the enclave boundary. The host sees vsock traffic but cannot inspect the content.
- Model inference requests pass through the HTTP proxy, but the enclave can use TLS to encrypt them end-to-end with the model provider.
What the TEE Does Not Protect
- Network traffic patterns — The parent proxy sees connection metadata (destination IPs, timing, volume) even if it cannot read encrypted payloads.
- Side-channel attacks — While Nitro Enclaves mitigate many side channels, they are not formally proven against all microarchitectural attacks.
- Configuration errors — If you bake secrets into a public Docker image or expose the gateway without authentication, the TEE cannot save you from your own mistakes.
KMS Integration with Attestation
Treza's KMS proxy gates key operations on valid attestation. This means you can configure AWS KMS key policies that only allow decryption when the request comes from an enclave with specific PCR values:
```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "*" },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEqualsIgnoreCase": {
      "kms:RecipientAttestation:PCR0": "expected-pcr0-hash"
    }
  }
}
```
This creates a hardware-enforced secret: the KMS key can only be used by an enclave running the exact OpenClaw image you built.
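Every time you rebuild the image, PCR0 changes and this statement must be regenerated. A small helper that renders the statement for a given measurement; the function name is ours, but the condition key kms:RecipientAttestation:PCR0 is the one from the policy above:

```python
import json

def kms_policy_for_pcr0(pcr0_hex: str) -> str:
    """Render the key-policy statement that allows kms:Decrypt only for an
    enclave whose attestation document carries this PCR0 measurement."""
    statement = {
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "kms:Decrypt",
        "Resource": "*",
        "Condition": {
            "StringEqualsIgnoreCase": {
                # PCR0 = hash of the exact Enclave Image File you built
                "kms:RecipientAttestation:PCR0": pcr0_hex
            }
        },
    }
    return json.dumps(statement, indent=2)
```

Feed the output into your key-policy update step whenever a new image (and therefore a new PCR0) is deployed.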
Managing the Enclave Lifecycle
Monitor Health
```bash
treza enclave get openclaw-assistant
treza enclave logs openclaw-assistant --type all --limit 50
```
Pause and Resume
Pause the enclave to stop billing while preserving the configuration:
```bash
treza enclave pause openclaw-assistant
treza enclave resume openclaw-assistant
```
Note that pausing terminates the enclave process. All in-memory state (sessions, conversation history) is lost. This is a feature of TEEs, not a bug — ephemeral memory means no data remanence.
Update the Image
To deploy a new version of OpenClaw, build and push a new image, then recreate the enclave:
```bash
treza enclave terminate openclaw-assistant
treza enclave create \
  --name "openclaw-assistant" \
  --image your-registry.com/openclaw:tee-v2 \
  --workload-type service \
  --health-path /health \
  --health-interval 30 \
  --aws-services kms \
  --expose-ports 18789
```
The new enclave will produce different PCR values reflecting the updated image. Update any KMS key policies to reference the new PCR0 hash.
Terminate and Delete
```bash
treza enclave terminate openclaw-assistant
treza enclave delete openclaw-assistant
```
Use --force to skip confirmation.
Comparison: Docker vs. TEE Deployment
If you have read Simon Willison's guide to running OpenClaw in Docker, you might wonder what a TEE adds beyond containerization.
| Property | Docker | Treza TEE |
|---|---|---|
| Isolation | Process-level (namespaces, cgroups) | Hardware-level (encrypted memory, no host access) |
| Host access | Root can inspect container memory, volumes, network | Host OS cannot read enclave memory |
| Attestation | None — you trust the operator | Cryptographic proof of running code |
| Secrets | Stored in env vars or mounted volumes (readable by host) | Protected by hardware isolation and attestation-gated KMS |
| Persistence | Containers can mount persistent volumes | No persistent storage — fully ephemeral |
| SSH/shell | Available via docker exec | No interactive access — vsock only |
| Cost | Standard compute pricing | Nitro Enclave allocation on top of EC2 instance |
Docker is fine for local development and trusted infrastructure. A TEE is for when you need verifiable guarantees that no one — not even the infrastructure operator — can access your AI assistant's secrets or conversations.
What's Next
Treza is building toward on-chain attestation verification, where enclave PCR values are published to a smart contract and validated through a governance-driven approval process. This will enable:
- Decentralized trust — Anyone can verify an enclave's integrity without trusting Treza
- Attestation-gated smart contracts — DeFi protocols and DAOs can require TEE attestation before executing sensitive operations
- Reproducible build registries — A public mapping from source code commits to expected PCR values
If you are building AI agents that handle real credentials, real money, or real user data, running them in a Trusted Execution Environment is not a luxury — it is a baseline security requirement. Treza makes that accessible with a single CLI command.
Ready to deploy? Install the Treza CLI and spin up your first enclave:
```bash
npm install -g @treza/cli
treza config init
treza enclave create --name my-openclaw --image openclaw:tee --workload-type service
```
Visit trezalabs.com for documentation, or join the community to discuss TEE deployments for AI agents.
Related Reading
- How to Run OpenClaw in Docker: Complete Setup Guide — Start here if you are new to OpenClaw.
- Why AI Assistants Need Trusted Execution Environments — Deep dive into TEE concepts and the threat model for AI agents.