
OpenRouter and OpenClaw — How the Model Backend Works

Published February 21, 2026

This guide explains how OpenClaw handles model routing on ClawCloud — specifically the difference between managed mode (default) and bringing your own key.

What OpenRouter is

OpenRouter is a routing layer that sits in front of multiple AI providers. Instead of managing separate API keys for Anthropic, OpenAI, and Google, you make requests to a single OpenRouter endpoint and it routes them to the right model.
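As a sketch of the single-endpoint idea, here is what two requests to different providers look like through OpenRouter's OpenAI-compatible chat completions endpoint. The API key and prompt are placeholders; only the model ID changes between providers:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build one request to the single OpenRouter endpoint.

    The provider is selected by the model ID prefix (anthropic/, openai/,
    google/...), not by which base URL you call.
    """
    body = json.dumps({
        "model": model,  # e.g. "anthropic/claude-sonnet-4"
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same endpoint, different providers — only the model ID changes:
req_a = build_request("anthropic/claude-sonnet-4", "hi", "sk-or-...")
req_b = build_request("openai/gpt-4.1-mini", "hi", "sk-or-...")
```

Because the endpoint is fixed, swapping providers is a one-string change rather than a new client, new key, and new request format.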

OpenClaw has native OpenRouter support. When you use managed mode on ClawCloud, your instance connects to OpenRouter — not directly to the model provider.

How managed mode works

When you deploy on ClawCloud without providing an API key, you're using managed mode. ClawCloud provisions a dedicated OpenRouter sub-key for your instance, with a credit limit that matches your plan:

  • Lite — $8/month
  • Pro — $25/month
  • Max — $60/month

This sub-key is scoped to your instance only. Your bot uses that credit budget, and the counter resets at the start of each billing cycle. You can see current usage in the dashboard.

The sub-key is created automatically at checkout and pushed to your server during provisioning. You don't interact with OpenRouter directly — ClawCloud handles the key lifecycle.
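The budget mechanics can be sketched as a toy model (this is illustrative, not ClawCloud's actual implementation; the plan limits are the ones listed above):

```python
from dataclasses import dataclass

# Monthly credit limits in USD, per the plans listed above.
PLAN_LIMITS = {"lite": 8.00, "pro": 25.00, "max": 60.00}

@dataclass
class SubKey:
    """Toy model of an instance-scoped OpenRouter sub-key."""
    plan: str
    spent: float = 0.0

    @property
    def remaining(self) -> float:
        return max(PLAN_LIMITS[self.plan] - self.spent, 0.0)

    def charge(self, cost: float) -> bool:
        """Record a request's cost; refuse it once the budget is exhausted."""
        if self.spent + cost > PLAN_LIMITS[self.plan]:
            return False
        self.spent += cost
        return True

    def reset(self) -> None:
        """Run at the start of each billing cycle."""
        self.spent = 0.0

key = SubKey(plan="pro")
key.charge(24.50)
print(key.remaining)  # 0.5 left this cycle
key.reset()
print(key.remaining)  # back to 25.0
```

The two properties that matter are visible here: the budget is scoped to one key (one instance), and a reset restores the full plan amount rather than rolling unused credit forward.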

[Image: ClawCloud dashboard credit usage bar showing monthly spend against plan limit]

Model naming in managed mode

OpenRouter uses slightly different model IDs than the native providers. OpenClaw translates these automatically, so the model you see in the interface is the model you get. There's no hidden routing or tier substitution.

If you run openclaw models list on your bot, the IDs shown are the OpenRouter-format IDs — for example, anthropic/claude-sonnet-4 rather than a provider-specific version string. The behavior and capabilities are identical to calling the provider directly.
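The translation OpenClaw performs can be sketched as a lookup table. The mapping entries and function name here are illustrative, not OpenClaw's actual table:

```python
# Illustrative mapping from native provider IDs to OpenRouter-format IDs.
NATIVE_TO_OPENROUTER = {
    "claude-sonnet-4": "anthropic/claude-sonnet-4",
    "gpt-4.1-mini": "openai/gpt-4.1-mini",
}

def to_openrouter_id(model_id: str) -> str:
    """Return the OpenRouter-format ID; pass through IDs already prefixed."""
    if "/" in model_id:  # already vendor-prefixed, OpenRouter style
        return model_id
    return NATIVE_TO_OPENROUTER.get(model_id, model_id)

print(to_openrouter_id("claude-sonnet-4"))            # anthropic/claude-sonnet-4
print(to_openrouter_id("anthropic/claude-sonnet-4"))  # unchanged
```

Since the translation is a pure renaming, nothing about the underlying model changes: same weights, same context window, same capabilities.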

BYOK — bringing your own key

If you already have an Anthropic, OpenAI, or Google API key, you can use it directly. This is the BYOK (bring your own key) mode.

With BYOK:

  • Your key connects directly to the provider, not through OpenRouter
  • Credit usage is billed to your provider account — ClawCloud doesn't track it
  • No managed credit limit applies
  • You control your own usage caps and spending alerts on the provider side

BYOK is set during the deploy wizard. You select the provider and paste your key. Switching modes after deployment requires re-provisioning.
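The practical difference between the two modes is which endpoint and key the instance talks to. A hedged sketch — the base URLs are the providers' public API bases, but the function and mode names are ours, not ClawCloud's configuration format:

```python
def backend_config(mode: str, provider: str = "", key: str = "") -> dict:
    """Pick the API base URL for a deployment mode.

    'managed' always talks to OpenRouter with the provisioned sub-key;
    'byok' talks straight to the chosen provider with your own key.
    """
    if mode == "managed":
        return {"base_url": "https://openrouter.ai/api/v1", "key": key}
    provider_bases = {
        "anthropic": "https://api.anthropic.com",
        "openai": "https://api.openai.com/v1",
    }
    return {"base_url": provider_bases[provider], "key": key}

print(backend_config("managed", key="sk-or-sub-...")["base_url"])
print(backend_config("byok", provider="anthropic", key="sk-ant-...")["base_url"])
```

This is also why switching modes requires re-provisioning: the mode determines the endpoint, key, and model-ID format baked in at deploy time, not just a runtime flag.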

Which mode to choose

Managed mode makes sense if you don't have an existing API account or want predictable monthly costs. The credit limits are conservative — they cover daily use on budget models, and moderate use on mid-tier models like Claude Haiku or GPT-4.1-mini.

BYOK makes sense if you already pay for API access at a provider, want to use models outside the managed catalog, or need no credit ceiling.

See the credits guide for a full breakdown of how credits are tracked and reset.

Troubleshooting

Bot says "model not found" after switching — In managed mode, use OpenRouter model IDs (e.g., anthropic/claude-sonnet-4). In BYOK mode, use the provider's native IDs (e.g., claude-sonnet-4-5 for Anthropic, without the vendor prefix). See the model switching guide for the full alias list.

Credit usage seems higher than expected — OpenRouter charges per token processed, including system prompts sent with every message. Longer system prompts increase cost. See the credits guide for tips on reducing per-message cost.
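To see why the system prompt matters, here is a back-of-the-envelope cost sketch. The per-token prices are placeholders, not any provider's real rates:

```python
def per_message_cost(system_tokens: int, user_tokens: int, output_tokens: int,
                     input_price: float, output_price: float) -> float:
    """Cost of one message; the system prompt is billed as input every time."""
    input_cost = (system_tokens + user_tokens) * input_price
    return input_cost + output_tokens * output_price

# Placeholder rates: $3 / 1M input tokens, $15 / 1M output tokens.
IN, OUT = 3 / 1_000_000, 15 / 1_000_000

cost_short = per_message_cost(500, 100, 300, IN, OUT)   # short system prompt
cost_long = per_message_cost(5000, 100, 300, IN, OUT)   # 10x longer system prompt
print(f"{cost_long / cost_short:.1f}x")  # → 3.1x per message, everything else equal
```

The system prompt rides along with every message, so trimming it pays off on each exchange, not just once.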

Want to switch from managed to BYOK (or vice versa) — Mode is set during deployment and applies for the instance's lifetime. Switching requires re-provisioning. See Destroy vs Regenerate.
