OpenClaw 2026.4.14 is a quality release, not a flashy one. The official release summary says it focuses on model-provider work, explicit-turn improvements for the GPT-5 family, and channel-provider issues, plus core performance refactors. In practice, that means fewer strange GPT-5.4 dead ends, better Telegram forum-topic context, and less drift around Codex and custom OpenAI-compatible setups.
This is the kind of release you notice after a restart or a /model switch, or when a forum-topic reply stops acting weird. If you missed the earlier cycle, the OpenClaw 2026.3.28 upgrade guide covers the previous config-schema and channel work, and the OpenClaw 2026.3.24 upgrade guide covers heartbeat isolation, loop detection, and separate PDF/image routing.

What's Changed in OpenClaw 2026.4.14
GPT-5.4 turns recover more cleanly
The most useful fix in this release is not a new command. It is better behavior when GPT-5.4 runs get into awkward states.
OpenClaw 2026.4.14 now maps OpenAI's unsupported minimal reasoning effort to the supported low effort for GPT-5.4 requests, and it adds bounded continuation recovery when GPT-style runs come back with reasoning-only or otherwise empty turns. If you have ever watched a run clearly do work and still end without visible answer text, this is the class of fix that matters.
The config shape itself does not change:
{
  "agents": {
    "defaults": {
      "model": { "primary": "openai/gpt-5.4" },
      "thinkingDefault": "minimal"
    }
  }
}
Before 2026.4.14, that combination could still trip validation or stall on empty-turn recovery in embedded GPT-style runs. After 2026.4.14, OpenClaw normalizes the reasoning effort and gives the run one more bounded chance to produce a visible answer.
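The normalization and bounded recovery described above can be sketched roughly as follows. This is an illustrative approximation of the changelog's description, not OpenClaw's actual code: the function names, the result shape, and the single-retry bound are all assumptions.

```python
# Illustrative sketch only; names and shapes are assumptions, not OpenClaw internals.
MAX_CONTINUATIONS = 1  # "bounded" recovery: one extra attempt, never a loop

def normalize_effort(model: str, effort: str) -> str:
    """Map the unsupported 'minimal' reasoning effort to 'low' for GPT-5.4 models."""
    if model.startswith("openai/gpt-5.4") and effort == "minimal":
        return "low"
    return effort

def run_with_recovery(run_turn, prompt: str) -> dict:
    """Retry once when a turn comes back reasoning-only, with no visible text."""
    result = run_turn(prompt)
    attempts = 0
    while not result.get("text") and attempts < MAX_CONTINUATIONS:
        attempts += 1
        result = run_turn(prompt)  # one bounded continuation attempt
    return result
```

The point of the bound is that an empty turn gets exactly one more chance to produce visible output instead of looping indefinitely.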
The same release also lets github-copilot/gpt-5.4 use xhigh, bringing it into line with the rest of the GPT-5.4 family.
For the exact OpenAI and Codex model rules, the official model providers reference and models guide are still the source of truth.
Telegram forum topics finally keep their human names
Two Telegram topic fixes land together here.
First, OpenClaw now learns human topic names from Telegram forum service messages and exposes those names in agent context, prompt metadata, and plugin hook metadata. Second, 2026.4.14 persists those learned names across restarts instead of waiting to relearn them from later service traffic.
The Telegram config shape is the same as before:
{
  "channels": {
    "telegram": {
      "groups": {
        "-1001234567890": {
          "topics": {
            "99": {
              "requireMention": false,
              "systemPrompt": "Stay on topic."
            }
          }
        }
      }
    }
  }
}
What changes is the context quality around that topic. Before this release, the agent could know it was in topic 99 but lose the human topic name after a restart. After 2026.4.14, that label survives. If you use forum topics as separate support lanes, project threads, or customer queues, this is a real quality-of-life fix.
The official Telegram section of the configuration reference documents the current topic config shape.
Codex and model catalogs stop drifting
This one sounds minor in the changelog, but it is the kind of operator-facing fix that can save real debugging time.
OpenClaw 2026.4.14 adds forward-compat support for gpt-5.4-pro on the Codex side, so list and status views can surface it before the upstream catalog fully catches up. It also fixes the Codex provider catalog output so apiKey is included. Without that metadata, Pi's ModelRegistry validator could reject the provider entry and silently drop all custom models from models.json.
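The failure mode is easier to see as a sketch: a strict validator that requires apiKey metadata and rejects the whole provider entry when it is absent. The field names below are illustrative, not the actual ModelRegistry schema.

```python
# Hedged sketch of the pre-2026.4.14 failure mode; field names are assumptions.
def accept_provider(entry: dict) -> list[str]:
    """Return the provider's model ids, or [] if the entry fails validation."""
    required = ("name", "apiKey", "models")
    if any(key not in entry for key in required):
        return []  # one missing field silently drops every custom model
    return list(entry["models"])
```

A single missing apiKey field in the catalog output was enough to make every custom model in models.json vanish from list views, which is why the fix reads as small but matters in practice.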
If you use Codex, this is still the documented baseline config shape:
{
  "agents": {
    "defaults": {
      "model": { "primary": "openai-codex/gpt-5.4" }
    }
  }
}
After upgrading, the quick sanity check is simple:
openclaw models status
openclaw models list --provider openai-codex --plain
If Codex rows or custom models had been disappearing before, 2026.4.14 is worth the upgrade for this alone.
Custom OpenAI-compatible setup gets less brittle
One small fix in 2026.4.14 will matter to anyone using a stricter self-hosted or proxied endpoint.
OpenClaw now uses max_tokens=16 for custom OpenAI-compatible verification probes during onboarding. That is a tiny request, and that is the point. Some stricter endpoints were rejecting the earlier probe even though real inference would have worked.
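A probe of this kind, in the generic OpenAI-compatible /chat/completions shape, looks roughly like the sketch below. The endpoint, model id, and key are placeholders, and OpenClaw's actual probe internals are not shown here; only the max_tokens=16 cap comes from the release notes.

```python
# Hedged sketch of a tiny verification probe; everything except max_tokens=16
# is a placeholder assumption, not OpenClaw's implementation.
import json
import urllib.request

def build_probe(base_url: str, model_id: str, api_key: str) -> urllib.request.Request:
    """Build a minimal completion request; the small cap keeps strict endpoints happy."""
    body = {
        "model": model_id,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 16,  # the small cap 2026.4.14 standardizes on
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```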
The official onboarding flow stays the same:
openclaw onboard --non-interactive \
  --auth-choice custom-api-key \
  --custom-base-url "https://llm.example.com/v1" \
  --custom-model-id "foo-large" \
  --custom-api-key "$CUSTOM_API_KEY" \
  --secret-input-mode plaintext \
  --custom-compatibility openai
2026.4.14 also fixes two other proxy-ish edge cases in the same neighborhood. Image and PDF tool runs now normalize configured provider/model refs before media-tool lookup, and memory embeddings keep non-OpenAI provider prefixes instead of collapsing them. If you route OpenClaw through a custom OpenAI-compatible stack, that is exactly the sort of boring fix you want.
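The ref-normalization part can be sketched as a small split-with-fallback, which preserves custom provider prefixes instead of collapsing them. This is an assumption about the general shape of the fix, not OpenClaw's actual resolver.

```python
# Hedged sketch: split "provider/model" refs, keep non-OpenAI prefixes intact.
def normalize_ref(ref: str, default_provider: str = "openai") -> tuple[str, str]:
    """Return (provider, model) for refs like 'openai/gpt-5.4' or bare 'gpt-5.4'."""
    if "/" in ref:
        provider, model = ref.split("/", 1)
        return provider, model  # keep custom prefixes like 'myproxy/foo-large'
    return default_provider, ref
```

The interesting case is the second one: a ref like myproxy/foo-large must come back with its myproxy prefix intact, or media-tool and embedding lookups end up pointed at the wrong provider.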
The official onboard command reference and the custom providers section of the configuration reference cover the documented setup surface.
How ClawCloud Is Adopting OpenClaw 2026.4.14
ClawCloud's source tree is already aligned with this release in the places that matter most for managed deployments.
- GPT provisioning defaults now target openai/gpt-5.4.
- The generated allowlists include the GPT-5.4 family, with gpt resolving to openai/gpt-5.4 and gpt-mini to openai/gpt-5.4-mini.
- The current Linux and Windows provisioning paths both use the auth-choice wiring that current OpenClaw expects for Claude BYOK.
What is still pending is the image-level rollout. ClawCloud's compatibility tracker already marks the code and defaults as aligned for 2026.4.14, but the packaged Linux snapshot and Windows image still need the refreshed runtime as the default provisioned version for brand-new instances.
That is the honest state of adoption today: the platform code is ready, and the final rollout step is getting the refreshed runtime baked into fresh provisioning. The managed-session repair work from Fix: OpenClaw managed reply reliability on ClawCloud fits neatly into that same direction, because both changes reduce the odds of GPT-family state drifting into a broken reply path.
Upgrade Considerations
There is no official Breaking section for 2026.4.14. It reads as a compatibility release. Still, a few items deserve a deliberate post-upgrade check.
- Model behavior change: openai/gpt-5.4 now normalizes minimal thinking to low on embedded runs. If you previously forced low as a workaround, you can keep that config or simplify back to minimal.
- Telegram metadata fix: forum topic names are now learned and persisted. If your bot behavior depends on topic-specific instructions or hooks, test one topic before and after a restart.
- Codex catalog fix: openai-codex no longer drops custom models.json rows because of missing apiKey metadata. If custom models had vanished from openclaw models list, recheck after upgrading.
- Custom provider probe: stricter OpenAI-compatible endpoints now receive a very small onboarding verification request, which reduces false setup failures on self-hosted or proxied endpoints.
- Security hardening: 2026.4.14 tightens several allowlist and SSRF paths, including browser snapshot/screenshot routes and interactive-event authorization. If you depend on permissive edge behavior, run a smoke test after upgrading.
- ClawCloud-managed note: hosted users do not need to hand-edit GPT aliases or default GPT selections. That alignment work already lives in ClawCloud's managed defaults and allowlists.
How to Upgrade OpenClaw to 2026.4.14
Before you begin
Check what you rely on today.
If you use Telegram forum topics, Codex, custom OpenAI-compatible endpoints, or custom models in models.json, those are the flows worth testing first after the upgrade. If you are on ClawCloud, there is no new dashboard switch for this release.
Upgrade steps
npm install -g openclaw@2026.4.14
openclaw --version
openclaw doctor
After the package update, restart the OpenClaw process or service you normally run so the live gateway picks up the new build.
After upgrading
- Run openclaw models status and confirm your primary model still resolves as expected.
- If you use Codex, run openclaw models list --provider openai-codex --plain and make sure the expected rows are visible.
- If you use Telegram forum topics, send one test message in a topic, restart the gateway once, then confirm the topic name still shows up correctly in bot behavior and status views.
- If you use custom OpenAI-compatible endpoints, rerun one normal chat turn, plus one image or PDF flow if you rely on those tool routes.
Upgrade Checklist
- Confirm your current version with openclaw --version
- Install openclaw@2026.4.14
- Run openclaw doctor
- Restart the running OpenClaw process or service
- Run openclaw models status
- If you use Codex, list openai-codex models and confirm expected rows appear
- If you use Telegram forum topics, test one topic before and after a restart
- If you use custom OpenAI-compatible endpoints, run one normal chat turn and one media-tool flow
Frequently Asked Questions
Do I need to change my default model for 2026.4.14?
Usually no. This release does not introduce a new recommended default model in the public docs. It makes the GPT-5.4 family behave more consistently, especially around reasoning effort and empty-turn recovery.
Will my Telegram forum bot behave differently after this release?
Mostly it should behave more predictably. Topic names are learned, exposed to agent context, and persisted across restarts, which makes topic-specific routing and instructions less brittle.
How do I check whether Codex sees the right models after upgrading?
Run openclaw models status, then openclaw models list --provider openai-codex --plain. If custom models had been missing before, this release specifically targets that catalog path.
Do I need to rewrite my custom provider config?
Probably not. The bigger change is under the hood: onboarding verification now uses a tiny max_tokens=16 request, and OpenClaw is more careful about normalizing provider/model refs for media and memory paths.
Does ClawCloud already provision 2026.4.14 everywhere by default?
Not yet. ClawCloud's code, GPT defaults, and allowlists are already aligned for 2026.4.14. The remaining rollout step is baking the refreshed runtime into the packaged images used for fresh provisioning.
Related reading: OpenClaw 2026.3.28 upgrade guide, OpenClaw 2026.3.24 upgrade guide, and Fix: OpenClaw managed reply reliability on ClawCloud.