Installing OpenClaw is not hard. The parts that trip people up are the same ones that trip people up on every self-hosted service: Node version mismatches, Docker networking assumptions, cloud firewalls, and the exact shape of the configuration file. This guide walks through every path I have used, with the gotchas I have hit, so your first gateway starts on the first try.
For the canonical command list and the latest flags, the authoritative source is always docs.openclaw.ai and the repository at github.com/openclaw/openclaw. This piece gives the operating context around those commands.
If you do not yet know what OpenClaw is, read the plain operator’s guide first. This piece assumes you are ready to install.
Before you start: what you actually need
Every platform needs the same baseline. Skip this and you will debug for hours:
- Node.js 22 or later. OpenClaw targets the current LTS line. Older Node versions will either refuse to start or misbehave in subtle ways.
- Git, for cloning the repository.
- A model API key for at least one LLM provider. Claude (Anthropic), GPT (OpenAI), Gemini (Google), or a local Ollama endpoint. You will configure this during setup. See using OpenClaw with Claude, GPT, Gemini, and Ollama for the details.
- A secure place for credentials. Environment variables, a secrets manager, or an encrypted config file. Do not commit tokens into source control.
- One channel choice, to start with. Slack is the most common; the built-in control UI works too. You can always add more later.
Once those are in hand, pick the platform that matches your environment.
Installing OpenClaw on Mac
Mac is the most common dev environment for new OpenClaw operators. The path is straightforward.
Step 1, install Node 22+
Use Homebrew if you do not already have Node:
brew install node@22
Verify:
node --version
You want v22.x or higher.
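If you want that check scripted, a minimal sketch; the version floor of 22 comes from the requirement above, so adjust it if the docs raise the minimum:

```shell
# node_major: extract the major component from node's version string,
# e.g. v22.11.0 -> 22
node_major() {
  printf '%s\n' "${1#v}" | cut -d. -f1
}

if command -v node >/dev/null 2>&1; then
  major=$(node_major "$(node --version)")
  if [ "$major" -ge 22 ]; then
    echo "Node $major: ok"
  else
    echo "Node $major: too old, need 22+" >&2
  fi
else
  echo "node not found on PATH" >&2
fi
```

Run it in the same shell you will later start the gateway from, since version managers like nvm can give different shells different Node versions.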
Step 2, clone the repository
git clone https://github.com/openclaw/openclaw
cd openclaw
Step 3, install dependencies
Follow the install commands in the repository README. The canonical instruction is at docs.openclaw.ai; the typical Node-project pattern is a package install followed by a build or start script.
Step 4, configure credentials
Create a config file or export environment variables for your chosen model provider. For Claude:
ANTHROPIC_API_KEY=sk-ant-...
For OpenAI:
OPENAI_API_KEY=sk-...
For Ollama on the same machine, usually no key is needed (a local endpoint at http://localhost:11434).
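One way to hold these: a dedicated env file with owner-only permissions, rather than exporting keys in your shell profile. A sketch — the file name openclaw.env is an assumption, and how the file is loaded into the gateway’s environment (source, Docker --env-file, systemd EnvironmentFile=) depends on how you run it:

```shell
# umask 077 makes newly created files owner-readable only (mode 600),
# so the key never sits world-readable on disk
umask 077
cat > openclaw.env <<'EOF'
ANTHROPIC_API_KEY=sk-ant-...
EOF
ls -l openclaw.env   # expect -rw-------
```

Keep this file out of source control alongside the rest of your secrets hygiene.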
Step 5, start the gateway
openclaw gateway
The gateway should report a status. If it does, you are running.
Mac gotchas
- Apple Silicon vs Intel. Homebrew installs to different prefixes (/opt/homebrew vs /usr/local). If you see “command not found” after a fresh install, check your shell PATH.
- Node installed via nvm or fnm. Make sure the shell where you run openclaw gateway has the same Node version you installed against.
- Mac mini as a server. Perfectly viable as a home-lab host. 16GB RAM comfortably runs OpenClaw plus a mid-sized local Ollama model; 8GB is tight once you add Ollama.
Installing OpenClaw on Windows
Windows is a first-class target. Two paths: native Windows, or WSL2. For most operators, WSL2 is cleaner.
Path A, WSL2 (recommended)
- Enable WSL2 and install Ubuntu 22.04 or later from the Microsoft Store.
- Inside WSL, install Node 22+ via nodesource or nvm.
- Proceed as you would on Linux: git clone, install, configure, openclaw gateway.
WSL2 gives you a clean Linux environment without leaving Windows. It is the path of least friction on Windows 11.
Path B, native Windows
- Install Node 22+ from nodejs.org.
- Install Git for Windows.
- Open PowerShell (or Windows Terminal).
- git clone https://github.com/openclaw/openclaw, cd openclaw, install, configure, start.
Windows gotchas
- Line endings. Git on Windows may convert line endings by default (core.autocrlf). For a Node project this rarely causes issues, but if something odd breaks, check.
- Firewall prompts. The gateway listens on a port; Windows will prompt the first time it binds. Allow it on private networks.
- File paths with spaces. Avoid them for the install directory. C:\openclaw is safer than C:\Users\Your Name\Documents\openclaw.
- Long paths. Windows still has path-length limits. Install to a short prefix.
Installing OpenClaw on Linux (server or home lab)
Linux is the production host for most real deployments. Pick your distribution, install Node 22+ with your distro’s Node packager (nodesource, dnf module, or apt), clone the repo, install, configure, and start. Canonical steps are at docs.openclaw.ai.
Running the gateway as a service
For production, you want the gateway under a process manager so it restarts on failure and on reboot. Two common choices:
- systemd, for a native Linux service. Write a unit file that runs openclaw gateway under a non-root user, points at your config, and enables auto-restart.
- pm2, for a Node-friendly process manager with log handling. pm2 start openclaw -- gateway, pm2 save, and pm2 startup to survive reboot.
Either works. systemd is my default for dedicated servers; pm2 is friendlier if you are managing multiple Node services on the same host.
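The systemd option can be sketched as a unit file. The paths, service user, and binary location below are assumptions; adjust them to where your install actually lives.

```ini
# /etc/systemd/system/openclaw.service -- a sketch, not canonical
[Unit]
Description=OpenClaw gateway
After=network-online.target
Wants=network-online.target

[Service]
# run as a dedicated non-root user
User=openclaw
EnvironmentFile=/etc/openclaw/openclaw.env
ExecStart=/usr/local/bin/openclaw gateway
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then systemctl daemon-reload followed by systemctl enable --now openclaw puts it under management.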
Linux gotchas
- Port binding on privileged ports. If you want the gateway on port 80 or 443, either run behind a reverse proxy (Nginx, Caddy) or grant the cap_net_bind_service capability. Do not run the gateway as root.
- File permissions on credentials. Restrict the config and environment files to the user that runs the gateway: chmod 600.
- Reverse proxy. For public-facing channels (HTTP mode for Slack, webhook endpoints), front the gateway with Nginx or Caddy for TLS termination and rate limiting.
Installing OpenClaw with Docker
Docker is the cleanest path if you want reproducible installs, isolation from the host, or a consistent environment across dev and production.
Step 1, pull or build the image
Check the OpenClaw docs for the current Docker image or Dockerfile. Build locally if an official image is not yet published:
git clone https://github.com/openclaw/openclaw
cd openclaw
docker build -t openclaw:local .
Step 2, prepare your config and secrets
Mount a config file and pass secrets as environment variables at runtime. Do not bake secrets into the image.
Step 3, run the gateway container
docker run -d \
--name openclaw \
--env-file ./openclaw.env \
-v $(pwd)/config:/app/config \
-p 3000:3000 \
openclaw:local gateway
Adjust the port and the config path per your setup. Confirm the container logs show a successful gateway start.
Docker Compose
For anything more than one container, use Compose. A typical stack is OpenClaw plus a reverse proxy (Caddy or Traefik) plus optional Ollama for local models. Check docker-compose.yml into your infra repo, and keep secrets in a .env file that stays outside source control.
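A minimal sketch of that stack, reusing the locally built image, port, and config path from the docker run example above — the service names and paths are placeholders, not canonical values:

```yaml
# docker-compose.yml -- a sketch; adjust image, port, and paths to your setup
services:
  openclaw:
    image: openclaw:local
    command: gateway
    env_file: .env              # secrets stay outside source control
    volumes:
      - ./config:/app/config
    ports:
      - "3000:3000"
    restart: unless-stopped

  ollama:                       # optional, for local models
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama

volumes:
  ollama:
```

With Ollama as a Compose service, the gateway reaches it at http://ollama:11434 on the Compose network rather than via localhost.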
Docker gotchas
- Host networking. If the agent needs to reach localhost services on the host (e.g. an Ollama endpoint at host.docker.internal:11434), configure the address accordingly. Do not assume localhost inside a container means the host machine.
- File permission mismatches. The user inside the container may not match the host user. Either run as the same UID/GID, or chown the mounted volumes.
- Timezone. The container defaults to UTC. If schedules and logs need local time, set TZ accordingly.
Installing OpenClaw in the cloud
“Cloud” here means “any hosted VM or container platform”. The install pattern is the same as the Linux or Docker instructions above; the cloud-specific considerations are what differ.
AWS
- Typical shape. EC2 instance, Elastic IP, Nginx or ALB in front, secrets in Systems Manager Parameter Store or Secrets Manager.
- Australian region. ap-southeast-2 (Sydney) by default for AU data residency.
- Container option. ECS on Fargate or an EC2-backed cluster, behind an ALB.
- Gotcha. Security groups must allow inbound on your chosen port (and outbound to the model API endpoints).
Azure
- Typical shape. Virtual machine in Australia East, VNet with an NSG, secrets in Key Vault.
- Container option. Azure Container Apps is the cleanest managed option; App Service also works.
- Gotcha. Managed identity is your friend for accessing other Azure resources; do not embed keys.
GCP
- Typical shape. Compute Engine in australia-southeast1, VPC with firewall rules, secrets in Secret Manager.
- Container option. Cloud Run for stateless variants; GKE for heavier deployments.
- Gotcha. Cloud Run’s cold-start behaviour matters for a gateway; use minimum instances > 0 for a responsive experience.
Oracle Cloud, DigitalOcean, Vultr, Hetzner
All viable. The calculus is the same as any self-hosted Node service: a small VPS runs OpenClaw comfortably for most real deployments. For high concurrency or heavy local-model use, size up RAM and CPU.
Cloud gotchas (universal)
- Firewalls. Both network and host-level. The gateway’s port must be reachable; outbound to your LLM provider must not be blocked.
- Egress costs. High-volume OpenClaw deployments can generate non-trivial outbound traffic if they fetch a lot of web content. Watch your egress bill.
- Secrets management. Use the cloud provider’s secrets service. Do not put tokens in user-data scripts that survive as metadata.
- Observability. Ship the gateway’s logs to CloudWatch, Azure Monitor, Google Cloud Logging, or your chosen platform-neutral stack (Grafana, Loki, Datadog). Do not rely on docker logs alone.
Verifying your install
Regardless of platform, after openclaw gateway starts, run this checklist.
- Gateway responds. A status endpoint or the startup logs should confirm the gateway is up.
- Model call works. Send a test prompt through the control UI or a channel. If the model call fails, the credentials or network path is wrong.
- Tool execution works. Configure one safe tool (file read, HTTP fetch to a known endpoint). Invoke it via an agent. Confirm the tool runs and the result returns.
- Channel delivery works. If you configured Slack, send a DM to the bot. Confirm the response. Details in OpenClaw Slack configuration.
- Audit is recording. Every action you just took should be in the logs with an identity, a timestamp, and an outcome. If not, fix the logging configuration before going further.
If all five pass, you have a healthy install.
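The first item can be scripted. A probe sketch, assuming an HTTP status endpoint on port 3000 — both the /status path and the port are assumptions, so check the docs for the gateway’s real health endpoint before relying on this:

```shell
# probe <port> -- prints "up" if something answers on /status, else "down"
probe() {
  if curl -fsS --max-time 5 "http://localhost:${1:-3000}/status" >/dev/null 2>&1; then
    echo up
  else
    echo down
  fi
}

probe 3000
```

Once confirmed against the real endpoint, the same one-liner can feed your monitoring or a systemd watchdog.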
The five most common install failures
In rough order of how often I have debugged each.
1. Wrong Node version
Symptom: the gateway will not start, or starts and crashes on the first request. Fix: confirm node --version on the exact shell where the gateway runs, and install 22+ via your package manager.
2. Missing or wrong model API key
Symptom: the gateway starts, the channel receives a message, but the agent never replies. Fix: check the environment variable is visible to the process, and that the key is valid. Re-issue a key to confirm.
3. Slack plugin disabled or “not available”
Symptom: the gateway accepts events but the Slack bot is silent. Fix: walk the troubleshooting in the Slack config guide. Usually tokens, event subscriptions, or a forgotten gateway restart after a config change.
4. Firewall blocks inbound or outbound
Symptom: the gateway is up but messages never arrive (inbound blocked) or the model call times out (outbound blocked). Fix: confirm the specific ports and the LLM provider endpoints are open end-to-end.
5. Docker networking surprises
Symptom: the gateway inside the container cannot reach your local Ollama instance, or host applications cannot reach the gateway. Fix: use host.docker.internal (Mac/Windows) or 172.17.0.1 (Linux) to reach the host from a container, and --network host or an exposed port to reach the container from the host.
Upgrading OpenClaw
Treat upgrades as normal operational work.
- Pull latest. git pull in your clone, or pull a newer Docker image.
- Review changelog. Look for breaking changes in the release notes.
- Re-install dependencies if package.json changed.
- Restart the gateway once the new version is in place.
- Re-run the verification checklist above.
For production, do this in staging first. Do not upgrade directly on the gateway your team is using live.
Hardware sizing, a rough guide
- Single operator, light use. A small VPS (1-2 vCPU, 2-4GB RAM) runs the gateway and a few agents fine.
- Team use in Slack, moderate volume. 2-4 vCPU, 4-8GB RAM is comfortable.
- Production with heavy tool use, concurrent channels, and local Ollama. 4+ vCPU, 16GB+ RAM. If you run a 7B local model alongside OpenClaw, 16GB is the minimum; 32GB is more realistic. Mac minis with M-series chips are genuinely good hosts for this.
The LLM calls, not the gateway, dominate resource usage. The gateway itself is lightweight.
Going public: putting OpenClaw on a public domain
If you need the gateway reachable from the internet (for HTTP-mode Slack, for webhooks, for a cloud-hosted control UI):
- Reverse proxy with TLS. Caddy is the fastest; Nginx is the most flexible. Let’s Encrypt for certificates.
- Tight firewall. Only the proxy-facing ports exposed. SSH limited to your admin IPs.
- Authentication on the proxy. Basic auth, OAuth, or IP allowlist for the control UI.
- Rate limiting. At the proxy layer. Agents in a loop can burn a lot of requests if something goes wrong upstream.
- DDoS protection. Cloudflare or equivalent if you expect adversarial traffic.
Do not expose the gateway directly to the internet. Always front it.
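As a sketch of that fronting, a minimal Caddyfile — the domain, upstream port, and password hash are placeholders, and basic_auth is the Caddy 2.8+ spelling (older releases call it basicauth):

```caddyfile
# Caddyfile sketch -- Caddy provisions Let's Encrypt certificates for the
# site address automatically; only the proxy is exposed, never the gateway
openclaw.example.com {
    basic_auth {
        admin <bcrypt-hash>   # generate with: caddy hash-password
    }
    reverse_proxy localhost:3000
}
```

Rate limiting and DDoS protection sit in front of this again, at a plugin or CDN layer, per the list above.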
When you want a walkthrough instead
This guide is comprehensive and self-directed. If you prefer a video walkthrough with somebody showing every click, the Udemy course covers install plus first-agent-in-production plus a handful of real workflows.
What to do next
If the gateway is running and verified: move on to configuring a channel. The Slack configuration guide is the most common first step. Alternatively, use the built-in control UI and add Slack later.
If your model choice is not yet settled: read using OpenClaw with Claude, GPT, Gemini, and Ollama before wiring the key in.
If you want to ship a first agent fast: start with the accounting-team case study for a concrete pattern, or the comprehensive OpenClaw 2026 guide for the playbook.
If you want professional help standing up an OpenClaw deployment: I run agent deployment engagements covering architecture, install, policy, first workflow, and handover.
Install is the easy part. Picking the right first workflow, bounding scope, and building the operator discipline to run it safely are where the real work is. This guide got you past the install.
Further reading: What is OpenClaw? A plain operator’s guide, Using OpenClaw with Claude, GPT, Gemini, and Ollama, OpenClaw Slack configuration, The comprehensive OpenClaw 2026 guide. Canonical docs: docs.openclaw.ai. Source: github.com/openclaw/openclaw.
Disclosure: the link to the OpenClaw AI Agents Install and Setup Guide is a Udemy referral link. I may earn a commission if you enrol, at no extra cost to you.