
OpenClaw hosting update: BYOK + backup free models

Published March 4, 2026

[Screenshot: OpenClaw hosting on ClawCloud, with Discord chat and a dashboard showing BYOK plus backup model support]

If you run OpenClaw on a cloud server, this update gives you a safer fallback path.

You keep BYOK as the primary model path, and you can still switch to free backup models in chat when you need continuity.

What changed for BYOK users

With BYOK + Backup enabled:

  • Your BYOK key stays primary for normal usage.
  • A separate backup key is configured for free-model fallback.
  • You can switch models in chat whenever you need temporary coverage.
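As a rough mental model of the selection order above, here is a minimal Python sketch. This is illustrative only: the function name, route labels, and logic are assumptions based on the behavior described here, not OpenClaw's actual code.

```python
def choose_route(primary_key_ok: bool, backup_enabled: bool, use_backup: bool) -> str:
    """Pick which credential path serves the next request (illustrative sketch)."""
    if use_backup and backup_enabled:
        return "backup-free"    # you switched models in chat for temporary coverage
    if primary_key_ok:
        return "byok-primary"   # normal usage stays on your own key
    if backup_enabled:
        return "backup-free"    # fallback keeps traffic moving
    raise RuntimeError("no usable model path configured")
```

The point of the ordering: an explicit in-chat switch wins, otherwise your BYOK key stays primary and the backup only takes over when the primary path is unavailable.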

[Screenshot: OpenClaw dashboard backup models panel, with BYOK (Google) visible and backup models enabled]

The dashboard view above shows exactly what this looks like in practice: BYOK stays active, backup models are visible on the same instance, and you can switch without changing billing mode.

How to switch in chat

Use these commands directly in Discord, Telegram, or Feishu chat:

/model list
/model status
/model openrouter/openai/gpt-oss-120b:free
/model status

/models is an alias for /model, so either command works.

When a model id itself contains slashes, keep the full provider-prefixed format (for example, openrouter/openai/gpt-oss-120b:free).
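A minimal sketch of how a bot might parse these commands. This is a hypothetical helper, not OpenClaw's real parser; it shows the /models alias and why slash-containing ids must be kept whole:

```python
def parse_model_command(text: str):
    """Parse a /model chat command into (action, argument). Hypothetical sketch."""
    parts = text.strip().split(maxsplit=1)
    if not parts or parts[0] not in ("/model", "/models"):  # /models aliases /model
        return None
    arg = parts[1].strip() if len(parts) > 1 else "status"
    if arg in ("list", "status"):
        return ("show", arg)
    # Everything after the command is one model id: splitting it on slashes would
    # break provider-prefixed ids like openrouter/openai/gpt-oss-120b:free.
    return ("switch", arg)
```

Splitting the input only once (maxsplit=1) is what keeps the full provider-prefixed id intact.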

[Screenshot: OpenClaw Discord chat switching to the free GPT OSS 120B model with the /model command]

Use backup mode when your primary key is rate-limited, when your provider has a temporary issue, or when you want lower-priority traffic to keep moving while you troubleshoot.
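In code terms, that fallback pattern looks roughly like this. The RateLimitError class and the send callable are placeholders for whatever your client actually raises and calls, not an OpenClaw API:

```python
class RateLimitError(Exception):
    """Placeholder for a provider's rate-limit or quota error."""

def send_with_fallback(send, primary: str, backup: str) -> str:
    """Try the BYOK primary model first; fall back to the free backup
    when the primary key is rate-limited."""
    try:
        return send(primary)
    except RateLimitError:
        return send(backup)   # lower-priority traffic keeps moving
```

The manual /model switch in chat achieves the same effect without code: you move traffic to the backup while you troubleshoot the primary.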

Recommended free models for OpenClaw backup

Two practical options:

  • openrouter/openai/gpt-oss-120b:free for general assistant traffic
  • openrouter/qwen/qwen3-coder:free for coding-heavy tasks

If you are unsure which to test first, start with openrouter/openai/gpt-oss-120b:free.

The free model catalog can change over time, so treat /model list as the source of truth for what is currently available on your instance.
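If you want to script that check, a small filter over the /model list output works. This assumes free-tier ids carry the :free suffix, as in the examples above; that convention is an assumption, not a guarantee:

```python
def free_models(catalog: list[str]) -> list[str]:
    """Filter model ids for the free tier, marked by the ':free' suffix."""
    return [m for m in catalog if m.endswith(":free")]
```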

Why this matters for OpenClaw hosting and VPS setups

For teams running OpenClaw hosting, OpenClaw VPS, or OpenClaw cloud deployments, this gives you a clean fallback path without re-deploying your server.

It is especially useful for OpenClaw managed hosting setups where uptime matters more than squeezing maximum model quality out of every request.

If you are comparing setup options, these pages help:

  • OpenClaw Hosting
  • OpenClaw VPS
  • OpenClaw Cloud

Limits to expect on free fallback

Free backup models are useful for uptime, but they are not equivalent to your paid primary model.

Expect:

  • slower responses during busy periods,
  • more variable output quality,
  • weaker reliability on long, multi-step reasoning.

For production-critical tasks, switch back to your BYOK primary model after the incident window.

Paid backup budget with BYOK (status)

Today, BYOK + Backup is optimized for free fallback models.

Support for paid backup budget alongside BYOK is planned, but not live yet. Until that ships, keep backup usage on :free models.

Learn the full flow

  • OpenClaw Configuration Guide: BYOK + Backup Free Models
  • How to Switch AI Models on Your OpenClaw Bot
  • How to Manage OpenClaw AI Credits on ClawCloud

Command behavior reference: OpenClaw Models CLI, OpenClaw slash commands.

Ready to deploy?

Skip the setup — your OpenClaw assistant runs on a dedicated server in under a minute.

Deploy Your OpenClaw
