OpenClaw is an open-source autonomous AI agent framework that goes beyond chat. Instead of only responding to prompts, an OpenClaw agent can take actions: interacting with tools, files, APIs, and messaging platforms while running on infrastructure you control.
Because OpenClaw can do things — not just talk — it behaves more like application software than a typical chatbot. That distinction matters for how and where you run it, and how you think about setup from the start.
What Is OpenClaw?
OpenClaw connects a large language model (LLM) to a runtime that can execute skills. Skills are small, modular pieces of logic that define what the agent is allowed to do, such as summarizing text, organizing files, or interacting with external services.
Typical uses include:
- Automating repetitive personal workflows
- Acting as an always-on assistant in chat tools
- Experimenting with agent-based automation locally
OpenClaw is self-hosted by design. You decide where it runs, what it can access, and how much authority it has.
Supported Operating Systems
Before running any commands, it’s important to be clear about where OpenClaw runs.
Works well:
- Linux (Ubuntu, Debian, Arch, etc.)
- macOS (Intel and Apple Silicon)
Windows:
OpenClaw does not currently target native Windows as a primary platform.
The recommended and supported approach on Windows is to run OpenClaw inside a Linux environment using WSL 2 (Windows Subsystem for Linux).
From the OpenClaw perspective, WSL 2 behaves like a Linux machine, which avoids filesystem and permission issues common with native Windows paths.
In all cases, the commands below are run in a Linux or macOS terminal (or a Linux terminal inside WSL 2 on Windows).
Getting Started: Install and Run OpenClaw
Step 1: Get the OpenClaw Code (Run from Your Terminal)
Open a terminal on the machine where OpenClaw will run (Linux, macOS, or WSL 2 on Windows).
Download the official OpenClaw repository from GitHub:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
This creates a local copy of the OpenClaw project and moves you into its directory. If you don’t use Git, you can download the repository as a ZIP file from GitHub and extract it, then open a terminal in that extracted folder.
Step 2: Install the Required Runtime (Node.js)
OpenClaw runs on Node.js. You’ll need a modern, supported version.
Check whether Node.js is already installed:
node --version
npm --version
If Node.js is missing or outdated, install it using your operating system’s standard method (for example, a package manager on Linux/macOS or an official installer). Once Node.js is available, install OpenClaw’s project dependencies:
npm install
This installs only the packages OpenClaw needs inside the project directory.
Step 3: Choose Your Model Provider
OpenClaw does not include an AI model by default. You choose where the intelligence comes from. Common options include:
Cloud-hosted models (quickest to start):
- Anthropic (Claude) – strong reasoning and tool use
- OpenAI (GPT models) – widely supported
- Google Gemini – offers free tiers for experimentation, subject to usage limits
Local models (no cloud API required):
- Ollama – simple way to run models like Llama or Mistral locally
- LM Studio / vLLM – more advanced local inference setups
Managed gateways / enterprise options:
- LiteLLM – useful for routing, logging, or isolating API keys
- NVIDIA NIM – enterprise-focused inference services; compatibility depends on environment and may not suit all OpenClaw setups
If you want the fastest path to a working agent, start with a cloud provider you already have access to. If you care more about local control or offline experimentation, start with Ollama.
Step 4: Configure the Model for OpenClaw
For cloud providers, OpenClaw typically reads credentials from environment variables.
Example pattern (run in your terminal):
export OPENCLAW_MODEL=your_model_provider
export OPENCLAW_API_KEY=your_api_key
On Windows (inside WSL 2), the same commands apply. If you’re using a local model such as Ollama, you’ll install and run the model locally and configure OpenClaw to point to the local endpoint instead of providing an API key.
The goal is simply this: OpenClaw needs to know which model to use and how to reach it.
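To make that concrete, here is a sketch of how a config loader might resolve those settings. `OPENCLAW_MODEL` and `OPENCLAW_API_KEY` follow the pattern above; the `OPENCLAW_BASE_URL` variable, the `"ollama"` provider name, and the default port are illustrative assumptions, not OpenClaw's documented interface.

```javascript
// Sketch: resolving model configuration from environment variables.
// Variable names beyond OPENCLAW_MODEL / OPENCLAW_API_KEY are assumptions.
function resolveModelConfig(env = process.env) {
  const provider = env.OPENCLAW_MODEL;
  if (!provider) {
    throw new Error("OPENCLAW_MODEL is not set; the agent has no model to use");
  }
  if (provider === "ollama") {
    // Local models are reached over HTTP; no API key is required.
    return { provider, baseUrl: env.OPENCLAW_BASE_URL || "http://localhost:11434" };
  }
  if (!env.OPENCLAW_API_KEY) {
    throw new Error(`OPENCLAW_API_KEY is not set for provider "${provider}"`);
  }
  return { provider, apiKey: env.OPENCLAW_API_KEY };
}

// A cloud provider needs a key; a local provider needs an endpoint.
console.log(resolveModelConfig({ OPENCLAW_MODEL: "ollama" }));
```

The design point is the same either way: fail loudly at startup if the model is unreachable or unconfigured, rather than midway through a task.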
Step 5: Start the OpenClaw Agent
From inside the OpenClaw project directory, start the agent:
npm start
When OpenClaw starts, it initializes the agent runtime, loads any installed skills, and launches a local control interface. If startup fails, the terminal output will usually indicate whether a model configuration or runtime issue is the cause.
Step 6: Deploy a Super-Simple Agent Skill
Skills define what your agent can do. Start with something minimal to confirm everything works.
Create a new skill directory:
mkdir -p skills/hello_world
Inside this directory:
- SKILL.md describes the skill
- index.js contains the skill logic
After creating the folder, restart OpenClaw. It automatically discovers and loads skills placed in the skills/ directory.
At this point, you’ve confirmed that:
- OpenClaw runs correctly
- Your model is reachable
- Skills are detected and loaded
After the First Run
Once OpenClaw is running, slow down before adding more power.
A sensible next progression:
- Interact with the agent locally and observe its behavior thoroughly
- Use low-risk test accounts for external services
- Add one skill or integration at a time, and keep changes small so mistakes are easy to undo
OpenClaw is most useful when you build confidence gradually instead of connecting everything at once.
