This guide takes you from zero to a live agent as fast as possible. By the end you’ll have the gateway running and an agent responding to messages.
Documentation Index
Fetch the complete documentation index at: https://mintlify.com/operatoronline/standard-operator/llms.txt
Use this file to discover all available pages before exploring further.
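For example, you can pull the index down and skim the page list with plain curl (nothing Operator-specific is assumed here):

```bash
curl -s https://mintlify.com/operatoronline/standard-operator/llms.txt
```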
Download the binary
Download the precompiled binary for your platform from the Releases page. Place it somewhere on your PATH — ~/.local/bin is a good choice on Linux and macOS. Then verify the installation (a sketch follows the platform list).
- Linux (x86_64)
- Linux (ARM64)
- Linux (RISC-V)
- macOS (Apple Silicon)
- Windows (x86_64)
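A minimal verification sketch for Linux and macOS, assuming the download is a standalone binary named operator and that it accepts a version flag (both are assumptions; adjust to the release you grabbed):

```bash
# Put the binary on your PATH (~/.local/bin, as suggested above)
mkdir -p ~/.local/bin
mv ./operator ~/.local/bin/operator
chmod +x ~/.local/bin/operator

# Confirm it is found and runs (the exact flag name is an assumption)
operator --version
```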
Initialize your workspace
Run operator onboard (shown below) to create your configuration file and workspace directory at ~/.operator/. This creates:
- ~/.operator/config.json — your agent configuration
- ~/.operator/workspace/ — the agent’s sandboxed working directory
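The command on its own, run once from a terminal:

```bash
operator onboard
```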
Add your API key
Open ~/.operator/config.json and add your LLM provider credentials to the model_list array. The agents.defaults.model_name field controls which model the agent uses by default.
Operator identifies providers by the prefix in the model field — anthropic/, openai/, gemini/, ollama/, etc. No code changes are needed to switch models; just update model_name in agents.defaults.
To use a local Ollama model with no API key, set "model": "ollama/llama3" and "api_base": "http://localhost:11434/v1", then omit the api_key field.
Start the gateway
The gateway daemon connects your agent to configured channels (Slack, Telegram, Discord, etc.) and keeps it available continuously:
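This guide does not spell out the exact invocation, so treat the following as a sketch; the subcommand name is an assumption, and the CLI help will show the real one:

```bash
# Assumed subcommand name; run in a terminal, tmux session, or under systemd
operator gateway
```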
You’ll see the gateway start and connect to any channels you’ve enabled in config.json. Leave this running in a terminal, a tmux session, or a systemd service.
If you haven’t configured any channels yet, the gateway still starts successfully — it just won’t accept inbound messages from external platforms until you configure one. You can always interact via the CLI.
What’s next
Now that you have a running agent, explore what Operator OS can do:
Connect a channel
Add Slack, Telegram, Discord, or WhatsApp so you can message your agent from anywhere.
Configure models
Switch providers, configure load balancing, or point to a local Ollama instance.
Built-in tools
Give your agent access to DuckDuckGo, Brave Search, web fetch, and more.
Deploy with Docker
Run the gateway as a fully containerized service with Docker Compose.