Why AI Assistants Need Trusted Execution Environments: Running OpenClaw in a TEE
AI assistants handle API keys, messaging credentials, and user conversations. A Trusted Execution Environment enforces hardware-level isolation that Docker and VMs cannot match. Here's what TEEs are, why they matter, and how to run OpenClaw inside one.

Every self-hosted AI assistant is a vault of secrets. OpenClaw — the open-source personal AI assistant with 219k+ GitHub stars — connects to WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and Microsoft Teams. It stores API keys for model providers, session tokens for messaging platforms, and the full conversation history for every channel. It runs tools that have shell access, browser control, and file system permissions.
If you run this on a standard cloud VM, the cloud operator can read all of it. If you run it in Docker, anyone with root on the host can docker exec into the container and extract every secret. Process-level isolation is not enough when the threat is a privileged insider or a compromised hypervisor.
Trusted Execution Environments (TEEs) solve this at the hardware level.
What Is a Trusted Execution Environment?
A TEE is a hardware-enforced isolated execution context. The CPU creates a region of encrypted memory that is inaccessible to any software running outside the enclave — including the operating system, the hypervisor, and other tenants on the same physical machine.
The three major TEE technologies in production today:
| Technology | Provider | Isolation Mechanism |
|---|---|---|
| AWS Nitro Enclaves | Amazon Web Services | Dedicated CPU cores + encrypted memory partition on Nitro hypervisor |
| Intel SGX | Intel | Hardware-encrypted enclaves within a process address space |
| AMD SEV-SNP | AMD | Full-VM encryption with integrity protection at the CPU level |
All three provide the same core guarantees, but Nitro Enclaves are the most practical for containerized workloads because they run standard Docker images without code modification.
Core TEE Guarantees
Memory isolation — Enclave RAM is encrypted by the hardware. The host operating system, hypervisor, and other VMs on the same physical machine cannot read or write enclave memory.
Code integrity — When an enclave boots, the hardware computes a cryptographic hash of everything loaded into it. These hashes, called Platform Configuration Registers (PCRs), uniquely identify the exact binary running inside.
Attestation — The TEE hardware can produce a signed attestation document that certifies the enclave's identity (PCR values) to a remote party. This lets third parties verify what code is running without trusting the operator.
No persistent storage — Enclaves are ephemeral. When they terminate, everything in memory is gone. No disk forensics, no data remanence.
No interactive access — There is no SSH into an enclave. Communication happens through a constrained channel (vsock for Nitro Enclaves). You cannot attach a debugger or dump memory.
Why Docker Is Not Enough
Docker containers provide namespace and cgroup isolation. This is useful — it limits the blast radius of a compromised process and makes deployments reproducible. But Docker isolation is enforced by the kernel, and anyone with kernel-level access bypasses it entirely.
| Threat | Docker | TEE |
|---|---|---|
| Root user on host reads container memory | Possible via /proc/<pid>/mem or docker exec | Blocked by hardware — enclave memory is encrypted |
| Cloud operator snapshots VM for inspection | Gets full container state including secrets | Snapshot contains encrypted memory — useless without hardware key |
| Hypervisor compromise | Full access to all guest memory | Blocked — hypervisor is outside the trust boundary |
| Malicious kernel module | Can intercept all container I/O | Cannot read enclave memory or tamper with execution |
| Supply chain attack on base image | Runs undetected | Detected via PCR mismatch in attestation |
For a personal AI assistant like OpenClaw, the practical implication is clear: Docker protects you from accidental interference, but a TEE protects you from deliberate attacks by privileged actors.
The AI Assistant Threat Model
Here is what an AI assistant like OpenClaw actually holds in memory during operation:
Credentials Layer
- Model API keys — Anthropic API key, OpenAI API key, or ChatGPT OAuth tokens
- Channel tokens — Telegram bot token, Slack bot + app tokens, Discord bot token, WhatsApp session state, Signal credentials
- Service credentials — AWS keys for KMS, Gmail Pub/Sub tokens, webhook secrets
Data Layer
- Conversation history — Every message from every connected channel
- Session context — Tool call results, file contents, browser snapshots
- User identity — Paired accounts, allowlists, pairing codes
Execution Layer
- Shell access — The exec tool can run arbitrary commands
- Browser control — CDP-based browser automation with full page access
- File system — Read/write access to the workspace directory
A compromise of any of these layers has cascading consequences. Stolen channel tokens give the attacker access to your messaging accounts. Stolen API keys let them spend your money. Stolen conversation history exposes every private discussion you have had through the assistant.
Running this workload inside a TEE means an attacker who compromises the host still cannot access any of it.
How TEEs Work in Practice
Nitro Enclaves (Used by Treza)
AWS Nitro Enclaves allocate dedicated CPU cores and memory to an isolated virtual machine that runs alongside the parent EC2 instance. The enclave has no network interface, no persistent storage, and no external access. Communication with the parent happens over vsock — a virtual socket that carries structured messages.
┌──────────────────────────────────────────────┐
│ EC2 Instance (Parent) │
│ │
│ ┌─────────────┐ vsock ┌────────────┐ │
│ │ Parent Proxy ├────────────►│ Nitro │ │
│ │ │ port 5000 │ Enclave │ │
│ │ • HTTP fwd │◄────────────┤ │ │
│ │ • KMS proxy │ │ [OpenClaw] │ │
│ │ • Log stream │ │ │ │
│ └─────────────┘ └────────────┘ │
│ │ │
│ ▼ │
│ Internet │
│ (Model APIs, messaging platforms) │
└──────────────────────────────────────────────┘
The parent proxy bridges the enclave to the network but cannot inspect the enclave's memory. HTTP requests from OpenClaw inside the enclave can use TLS to encrypt payloads end-to-end with the destination — the proxy sees ciphertext.
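The forwarding half of that parent proxy can be sketched in a few lines. This is an illustrative minimum, not Treza's implementation: the vsock port 5000 matches the diagram above, the upstream address is a placeholder, and a production proxy would add connection limits, timeouts, and per-destination routing. Note that the proxy only shuttles bytes; TLS terminates inside the enclave, so these buffers hold ciphertext.

```python
import socket
import threading

VSOCK_PORT = 5000              # port from the diagram above (assumption)
UPSTREAM = ("127.0.0.1", 443)  # placeholder TCP destination

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes its write side."""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            dst.shutdown(socket.SHUT_WR)
            return
        dst.sendall(chunk)

def serve() -> None:
    """Accept vsock connections from the enclave and splice each one
    onto a fresh TCP connection toward the internet."""
    listener = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    listener.bind((socket.VMADDR_CID_ANY, VSOCK_PORT))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        tcp = socket.create_connection(UPSTREAM)
        threading.Thread(target=pump, args=(conn, tcp), daemon=True).start()
        threading.Thread(target=pump, args=(tcp, conn), daemon=True).start()

# On the parent instance you would call serve(); it blocks forever.
```

The key property is what the sketch does not contain: there is no code path that parses, logs, or decrypts the payload, because the proxy never holds the keys to do so.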
Attestation Flow
When a TEE boots, the hardware generates an attestation document containing:
- PCR0 — Hash of the Enclave Image File (the complete binary)
- PCR1 — Hash of the kernel and boot filesystem
- PCR2 — Hash of the application layer
- Nonce — A random value to prevent replay attacks
- Hardware signature — Signed by the Nitro Security Module, which chains to an AWS root of trust
A remote party (like a KMS service or a smart contract) can verify this document to confirm:
- The enclave is running on real Nitro hardware (not a simulation)
- The exact code inside matches an expected hash
- The attestation is fresh (nonce check)
This is fundamentally different from trusting an operator's claim that they are running a specific version. The attestation is unforgeable without access to the hardware security module.
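The verification logic reduces to a freshness check and an identity check. A real Nitro attestation document is a CBOR-encoded COSE_Sign1 structure whose signature must first be verified against the AWS Nitro root certificate; the sketch below assumes that step is already done and the document is parsed into a dict, so it shows only the nonce and PCR0 comparisons. The field names mirror the attestation document layout but treat it as plain data.

```python
import hmac

def check_attestation(doc: dict, expected_pcr0: str, sent_nonce: bytes) -> bool:
    """Validate a parsed, signature-verified attestation document.

    Freshness: the nonce must echo the random value we sent.
    Identity: PCR0 must match the published enclave image hash.
    """
    if not hmac.compare_digest(doc.get("nonce", b""), sent_nonce):
        return False  # stale or replayed document
    pcr0 = doc.get("pcrs", {}).get(0, b"").hex()
    # Constant-time compare avoids leaking match position via timing.
    return hmac.compare_digest(pcr0, expected_pcr0)
```

Both comparisons use `hmac.compare_digest` so a partial match cannot be distinguished from a total mismatch by timing.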
Attestation-Gated Secrets
The most powerful use of attestation is gating access to secrets. AWS KMS supports key policies conditioned on enclave PCR values:
{
"Condition": {
"StringEqualsIgnoreCase": {
"kms:RecipientAttestation:PCR0": "a1b2c3..."
}
}
}
With this policy, the KMS key can only be used by an enclave running exactly the code with the matching PCR0 hash. If anyone modifies the image — even adding a single debug line — the PCR changes and the key becomes inaccessible.
For OpenClaw, this means you can encrypt your channel tokens and API keys with a KMS key that can only be decrypted inside a specific, verified enclave build. The secrets never exist in plaintext outside the TEE.
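For context, the condition above sits inside an ordinary KMS key policy statement. A fuller sketch might look like the following — the account ID, role name, and PCR0 value are placeholders, not values from a real deployment:

```json
{
  "Sid": "AllowDecryptOnlyFromAttestedEnclave",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/openclaw-parent-role" },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEqualsIgnoreCase": {
      "kms:RecipientAttestation:PCR0": "a1b2c3..."
    }
  }
}
```

The IAM role scopes who may call KMS at all; the attestation condition then narrows it further to one exact enclave build.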
Running OpenClaw in a TEE: Three Approaches
1. Self-Managed Nitro Enclaves
You can run Nitro Enclaves directly on any enclave-capable EC2 instance. This requires:
- An EC2 instance with enclave: true in the launch template
- Building an Enclave Image File (EIF) from your Docker image using nitro-cli build-enclave
- Running the enclave with nitro-cli run-enclave
- Implementing your own vsock proxy for network access
This gives you full control but requires significant infrastructure work: proxy implementation, health monitoring, log aggregation, attestation verification, and lifecycle management.
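For a sense of what the build-and-run steps look like, here is a sketch using nitro-cli. The image URI, file names, CID, and resource sizes are placeholders; check the flags against your installed nitro-cli version:

```shell
# Build an EIF from an existing Docker image (prints the PCR values).
nitro-cli build-enclave \
  --docker-uri your-registry.com/openclaw:latest \
  --output-file openclaw.eif

# Launch with dedicated resources. Debug mode zeroes the PCRs in
# attestation, so leave --debug-mode off for production builds.
nitro-cli run-enclave \
  --eif-path openclaw.eif \
  --cpu-count 2 \
  --memory 4096 \
  --enclave-cid 16

# Confirm the enclave is running.
nitro-cli describe-enclaves
```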
2. Managed TEE with Treza
Treza provides managed TEE infrastructure that handles the proxy layer, health checks, logging, attestation, and lifecycle management. You bring a Docker image and get a running enclave:
npm install -g @treza/cli
treza config init
treza enclave create \
--name "openclaw" \
--image your-registry.com/openclaw:latest \
--workload-type service \
--health-path /health \
--aws-services kms \
  --expose-ports 18789
Treza handles building the EIF, configuring the vsock proxy, streaming logs to CloudWatch, and exposing the attestation API. See our detailed walkthrough: Running OpenClaw in a Treza TEE.
3. Intel SGX / Azure Confidential Computing
If you are on Azure, Confidential Computing VMs use Intel SGX or AMD SEV-SNP. OpenClaw can run in a confidential container with Kata Containers. This approach is less mature for Docker workloads than Nitro Enclaves but offers similar guarantees.
Practical Considerations
Performance
Nitro Enclaves allocate dedicated CPU cores, so there is no virtualization overhead on compute. Memory encryption adds negligible latency (under 5% on most workloads). The main performance consideration is network I/O through the vsock proxy, which adds a few milliseconds per request — irrelevant for an AI assistant where model inference takes seconds.
Cost
A Nitro Enclave consumes resources from its parent EC2 instance. You are not paying extra for the enclave itself, but the parent instance needs enough CPU and memory to accommodate both the enclave allocation and the parent proxy. For OpenClaw, a c5.xlarge (4 vCPU, 8GB RAM) with 2 vCPUs and 4GB allocated to the enclave is a reasonable starting point.
Debugging
No SSH means no interactive debugging. You rely entirely on logs streamed through the vsock proxy. Structure your OpenClaw configuration to be verbose during initial setup:
{
"gateway": {
"verbose": true
}
}
Once stable, reduce verbosity to save on log storage.
Ephemeral State
Enclave memory is lost on termination. This means:
- Conversation history resets when the enclave restarts. Persist important context to an external store if needed.
- WhatsApp sessions need to be re-linked after enclave recreation (or baked into the image).
- Pairing approvals must be re-done unless stored externally.
This is a security feature — no data remanence — but it requires planning around state management.
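One way to plan around ephemerality is a snapshot/restore pair that runs on shutdown and boot. The sketch below is generic, not an OpenClaw API: state is serialized, passed through a caller-supplied encrypt function (in practice, wrapping with a KMS data key so the parent instance only ever stores ciphertext), and written to a path outside the enclave's memory.

```python
import json
from pathlib import Path
from typing import Callable

def snapshot_state(state: dict, path: Path,
                   encrypt: Callable[[bytes], bytes]) -> None:
    """Serialize assistant state and write it outside the enclave.
    `encrypt` should wrap the blob (e.g. with a KMS data key) so the
    external store only ever sees ciphertext."""
    path.write_bytes(encrypt(json.dumps(state).encode()))

def restore_state(path: Path, decrypt: Callable[[bytes], bytes]) -> dict:
    """Read back a snapshot after enclave recreation.
    Returns an empty state if no snapshot exists yet."""
    if not path.exists():
        return {}
    return json.loads(decrypt(path.read_bytes()))
```

Because decryption requires the attestation-gated KMS key, a restored snapshot is only readable inside another verified enclave — the ephemeral-memory guarantee is preserved even though state survives restarts.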
The Case for TEEs Beyond Security
TEEs are typically discussed in security terms, but they enable something more fundamental for AI agents: verifiable compute.
When an AI assistant runs inside a TEE with published PCR values, anyone can verify:
- Which model it is calling — The configuration is part of the image hash
- Which tools it has access to — Tool permissions are baked into the build
- That it has not been tampered with — PCR values are hardware-signed
This opens up use cases that require trust in the AI's execution environment:
- Regulatory compliance — Prove to auditors that sensitive data processing happens in an isolated, attested environment
- Multi-party AI agents — Run an agent that handles multiple parties' data with cryptographic proof that no party's data leaks to another
- On-chain verification — Smart contracts can require TEE attestation before executing transactions initiated by an AI agent
- Agent-to-agent trust — When AI agents interact with each other, TEE attestation provides a trust anchor that software-only isolation cannot
Getting Started
If you are new to OpenClaw, start with Docker:
How to Run OpenClaw in Docker — Complete setup guide from first install to multi-channel assistant.
When you need hardware-enforced isolation with cryptographic attestation:
Running OpenClaw in a Treza TEE — Step-by-step deployment to AWS Nitro Enclaves using Treza's managed infrastructure.
Treza provides managed TEE infrastructure for AI assistants, agents, and any Docker workload that needs hardware-level isolation. One CLI command to deploy. Cryptographic attestation out of the box. Visit trezalabs.com to deploy your first enclave.