OpenClaw ClawdBot supports Claude Opus 4.6, GPT-5, Kimi K2.5, GLM 5, MiniMax 2.5, local Ollama models, and OpenRouter integration for 300+ models.
OpenClaw ClawdBot is model-agnostic, supporting a wide range of cloud-based AI models from leading providers. Users can choose models based on their specific requirements for capability, speed, cost, and language specialization. The platform uses these models as its "brain" while providing the infrastructure for autonomous actions.
- **Claude Opus 4.6:** Latest flagship model with enhanced coding, planning, and debugging capabilities.
- **GPT-5:** Industry-leading general-purpose model with state-of-the-art performance.
- **Kimi K2.5:** Massive-context-window specialist with advanced multimodal capabilities.
- **GLM 5:** Open-source flagship with significantly improved coding abilities.
- **MiniMax 2.5:** Industry-leading multi-language coding specialist from Shanghai.
OpenClaw ClawdBot integrates with Ollama (announced February 1, 2026) to enable completely local, offline AI operation. Local models provide privacy benefits, eliminate API costs, and allow OpenClaw to run in air-gapped environments or on devices without internet connectivity.
- **Complete privacy:** All processing happens locally on your device. No data is sent to external servers, giving you complete control over your information and agent interactions.
- **Zero API costs:** No per-request charges or monthly subscriptions. A one-time hardware investment provides unlimited usage without recurring fees.
- **Offline operation:** Run OpenClaw without internet connectivity. Perfect for air-gapped environments, secure facilities, or unstable network conditions.
- **Full customization:** Fine-tune models on your own data, adjust parameters, and configure behavior without external limitations or restrictions.
```bash
ollama launch openclaw
```

A single command launches OpenClaw with Ollama integration. Models download automatically on first run.
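Once a local model is running, it can also be queried directly over Ollama's standard local REST endpoint (`http://localhost:11434/api/generate`). This is an illustrative sketch rather than part of OpenClaw's own API; the model name `llama3.3` is an example and must already be pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a request against a local Ollama server."""
    body = json.dumps({
        "model": model,    # e.g. "llama3.3" -- must already be pulled
        "prompt": prompt,
        "stream": False,   # ask for one JSON response instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires a running `ollama serve`:
# with urllib.request.urlopen(build_ollama_request("llama3.3", "Hello")) as resp:
#     print(json.loads(resp.read())["response"])
```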
- Meta's latest open-source model with strong reasoning and coding capabilities. An excellent general-purpose choice for local deployment.
- Alibaba's coding specialist with exceptional performance on programming tasks. Optimized for software development workflows.
- A Chinese-developed model with advanced reasoning capabilities and strong performance on complex multi-step tasks.
- Zhipu AI's latest open model with improved coding and reasoning. Available on Hugging Face for local deployment.
- Community-maintained open-source GPT variants, available in 20B and 120B parameter versions for different hardware capabilities.
- A local deployment of Moonshot's flagship model that retains its extended-context capabilities for long-form processing.
OpenClaw ClawdBot includes built-in OpenRouter integration, providing unified access to over 300 AI models from multiple providers through a single API. OpenRouter automatically routes requests to the most appropriate model based on task complexity, availability, and cost optimization.
- **300+ models, one API:** A single integration provides access to models from Anthropic, OpenAI, Google, Meta, Mistral, Cohere, and more. Switch models without changing code.
- **Unified billing:** One account, one bill for all providers. No need to manage separate API keys and subscriptions for each model provider.
- **Automatic failover:** Intelligent routing with automatic failover. If the primary model is unavailable, OpenRouter switches to backup models seamlessly.
- **Load balancing:** Distribute requests across providers for optimal availability and performance, avoiding rate limits and service disruptions.
The `openrouter/openrouter/auto` model automatically selects the most cost-effective AI for each request based on prompt complexity.
Learn more at OpenRouter.ai
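OpenClaw's own OpenRouter client is not shown in this document; as a sketch, the same auto-routing can be reached directly through OpenRouter's OpenAI-compatible chat endpoint, where the routing model's ID is `openrouter/auto`. The `OPENROUTER_API_KEY` variable name is an assumption for this example.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_auto_routed_request(prompt: str) -> urllib.request.Request:
    """Build a chat request that lets OpenRouter choose the model."""
    body = json.dumps({
        "model": "openrouter/auto",  # OpenRouter's ID for automatic routing
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            # Key name is an assumption; use whatever your setup exports.
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```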
This comparison helps you choose the right AI model based on your specific requirements for context length, processing speed, cost efficiency, and use case specialization.
| Model | Context Window | Speed | Cost | Best For |
|---|---|---|---|---|
| Claude Opus 4.6 | 200K tokens | Medium | Premium | Complex reasoning, autonomous actions, comprehensive tasks |
| Claude Sonnet 4.5 | 1M tokens (beta) | Fast | Moderate | Coding, agent workflows, balanced performance |
| GPT-5 | 128K tokens | Fast | Premium | General tasks, broad knowledge, creative work |
| GPT-5.3-Codex | 128K tokens | Very Fast | Premium | Agentic coding, software development, debugging |
| Kimi K2.5 | 2M+ tokens | Medium | Moderate | Long documents, extensive context, multimodal tasks |
| GLM 5 | 128K tokens | Fast | Budget | Chinese language, coding, cost-sensitive projects |
| MiniMax 2.5 | 256K tokens | Fast | Moderate | Multi-language coding, Rust/Golang/Java development |
| Llama 3.3 70B (Local) | 128K tokens | Medium | Free* | Privacy-focused, offline operation, no API costs |
*Local models require hardware investment but have zero per-request costs. Speed depends on hardware configuration.
Configuring AI models in OpenClaw ClawdBot is straightforward. Follow these steps to set up your preferred models, configure API access, and optimize for your specific use case.
Store API credentials securely using environment variables. Never hardcode keys in configuration files.
```bash
export ANTHROPIC_API_KEY="your-claude-key"
export OPENAI_API_KEY="your-gpt-key"
export MOONSHOT_API_KEY="your-kimi-key"
```
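An application consuming these variables can verify at startup that each expected key is present, without ever logging the key values. This is a generic sketch, not OpenClaw's actual startup code; the variable names match the exports above.

```python
import os

# Providers and the environment variables they are expected to use.
REQUIRED_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "moonshot": "MOONSHOT_API_KEY",
}

def load_api_keys() -> dict:
    """Return only the providers whose keys are set; never print key values."""
    found = {}
    for provider, var in REQUIRED_KEYS.items():
        value = os.environ.get(var)
        if value:
            found[provider] = value
    return found

missing = set(REQUIRED_KEYS) - set(load_api_keys())
if missing:
    print(f"Missing API keys for: {', '.join(sorted(missing))}")
```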
Select your preferred AI model in OpenClaw's configuration file. This model handles requests by default.
```json
{
  "defaultModel": "anthropic/claude-opus-4.6",
  "temperature": 0.7
}
```
Set backup models to ensure reliability. OpenClaw automatically switches if the primary model is unavailable.
```json
{
  "fallbackModels": [
    "anthropic/claude-sonnet-4.5",
    "openai/gpt-5",
    "openrouter/openrouter/auto"
  ]
}
```
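OpenClaw handles this failover internally; to illustrate the mechanism, a fallback chain like the one configured above can be walked as follows. `call_model` stands in for a real API call and is a hypothetical name for this sketch.

```python
def call_with_fallback(models, call_model, prompt):
    """Try each model in order; return (model, response) from the first success.

    `call_model(model, prompt)` is a stand-in for a real API call and is
    expected to raise an exception when a model is unavailable.
    """
    errors = {}
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as exc:   # a real client would catch specific error types
            errors[model] = exc    # remember why each model failed
    raise RuntimeError(f"All models failed: {errors}")

# Primary model first, then the configured fallbacks.
FALLBACKS = [
    "anthropic/claude-opus-4.6",
    "anthropic/claude-sonnet-4.5",
    "openai/gpt-5",
    "openrouter/openrouter/auto",
]
```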
Control costs by setting spending limits and request quotas. Prevent unexpected bills from runaway agent behavior.
```json
{
  "budgetLimits": {
    "dailySpend": 50,
    "monthlySpend": 500,
    "requestsPerHour": 100
  }
}
```
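OpenClaw enforces these limits for you; purely as an illustration of the mechanism, a minimal guard combining spend caps and an hourly request quota might look like this (spend amounts assumed to be in dollars; daily/monthly resets omitted for brevity).

```python
import time

class BudgetGuard:
    """Deny requests once spend caps or the hourly request quota are hit (sketch)."""

    def __init__(self, daily_spend, monthly_spend, requests_per_hour):
        self.limits = {"daily": daily_spend, "monthly": monthly_spend}
        self.requests_per_hour = requests_per_hour
        self.spend = {"daily": 0.0, "monthly": 0.0}
        self.request_times = []  # timestamps of requests in the last hour

    def allow(self, estimated_cost, now=None):
        """Return True and record the request if it fits every limit."""
        now = time.time() if now is None else now
        # Drop request timestamps older than one hour.
        self.request_times = [t for t in self.request_times if now - t < 3600]
        if len(self.request_times) >= self.requests_per_hour:
            return False
        if any(self.spend[p] + estimated_cost > self.limits[p] for p in self.limits):
            return False
        self.request_times.append(now)
        self.spend["daily"] += estimated_cost
        self.spend["monthly"] += estimated_cost
        return True

# Mirrors the budgetLimits values configured above.
guard = BudgetGuard(daily_spend=50, monthly_spend=500, requests_per_hour=100)
```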
Activate advanced capabilities like Claude's effort parameter, GPT's function calling, or Kimi's long-context mode based on your model choice.
```json
{
  "modelFeatures": {
    "claudeEffort": "high",
    "kimiLongContext": true,
    "gptFunctionCalling": true
  }
}
```
Different AI models excel at different tasks. Use these recommendations to select the optimal model for your specific OpenClaw ClawdBot use case and requirements.
Choose Claude Opus 4.6 when you need the most capable AI for complex multi-step reasoning, comprehensive analysis, strategic planning, and sophisticated autonomous actions. Best for high-stakes tasks requiring deep understanding.
Choose GPT-5 for general-purpose tasks requiring balanced performance across diverse domains. Excellent broad knowledge, strong creative capabilities, and reliable execution make it ideal for varied workflows.
Choose Kimi K2.5 when working with extensive documents, large codebases, or tasks requiring 2 million+ token context windows. Industry-leading context length enables processing entire books, repositories, or conversation histories.
Choose GLM 5 for Chinese language processing, bilingual workflows, or cost-sensitive projects. Open-source availability enables custom deployment and fine-tuning for specialized Chinese language applications.
Choose GPT-5.3-Codex for software development workflows requiring advanced coding capabilities. 25% faster than previous versions with exceptional debugging, code generation, and autonomous development features.
Choose local Ollama models for privacy-critical applications, air-gapped environments, or zero-cost unlimited usage. Complete data sovereignty with no external API calls or cloud dependencies.