
How OpenClaw Memory Works on a Dedicated Server

Published March 1, 2026

OpenClaw memory files in a workspace directory listing

Memory is just Markdown files

OpenClaw memory is not a database, and it is not a vector store. It is a set of plain Markdown files sitting on disk in the agent workspace.

Two types of files get created:

  • MEMORY.md — long-term curated facts. The model writes here when it identifies something worth keeping permanently.
  • memory/YYYY-MM-DD.md — daily logs. Timestamped notes from each day's conversations.

Both live under ~/.openclaw/workspace/ by default. You can open them in any text editor, read them, edit them, or delete entries you don't want the bot referencing.

The model does not "learn" or change its weights. It writes text files that it can search later. That's the entire mechanism.
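Because memory is just files, the entire write path fits in a few lines. A minimal sketch, assuming the default workspace path from this article; the helper names (`remember`, `log_today`) and the bullet-list Markdown formatting are illustrative, not OpenClaw's actual code:

```python
from datetime import date
from pathlib import Path

# Default workspace location from the article; pass a different
# path for testing.
WORKSPACE = Path.home() / ".openclaw" / "workspace"

def remember(fact: str, workspace: Path = WORKSPACE) -> None:
    """Append a curated fact to MEMORY.md, the long-term file."""
    workspace.mkdir(parents=True, exist_ok=True)
    with (workspace / "MEMORY.md").open("a", encoding="utf-8") as f:
        f.write(f"- {fact}\n")

def log_today(note: str, workspace: Path = WORKSPACE) -> None:
    """Append a note to today's daily log, memory/YYYY-MM-DD.md."""
    log_dir = workspace / "memory"
    log_dir.mkdir(parents=True, exist_ok=True)
    log_file = log_dir / f"{date.today().isoformat()}.md"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
```

Since the output is plain Markdown, `cat`, `grep`, or any editor works on the same files the bot writes.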

How the bot reads and writes memory

OpenClaw gives the model two memory tools:

  • memory_search — semantic search across all memory files. The bot uses this to find relevant notes even when the wording differs from the original.
  • memory_get — reads a specific memory file by path.

The bot decides when to write new notes and when to search for existing ones. You can also tell it explicitly: "remember that I prefer responses in French" or "what do you remember about the project deadline?"
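The shape of those two tools can be sketched as ordinary file operations. This is a toy stand-in: `memory_get` really is just a file read, but the real `memory_search` uses semantic (embedding-based) matching, while the word-overlap scoring below only illustrates the interface:

```python
from pathlib import Path

def memory_get(path: str, workspace: Path) -> str:
    """Toy memory_get: read one memory file by relative path."""
    return (workspace / path).read_text(encoding="utf-8")

def memory_search(query: str, workspace: Path) -> list[tuple[float, str]]:
    """Toy memory_search: rank memory files by query-word overlap.
    The real tool matches by meaning via embeddings; this keyword
    version only shows the input/output shape."""
    terms = set(query.lower().split())
    results = []
    for f in sorted(workspace.rglob("*.md")):
        words = set(f.read_text(encoding="utf-8").lower().split())
        score = len(terms & words) / len(terms)
        if score > 0:
            results.append((score, str(f.relative_to(workspace))))
    return sorted(results, reverse=True)
```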

There's also an automatic flush. Before a conversation's context gets trimmed for length, OpenClaw prompts the model to save anything worth keeping to disk. This happens without you doing anything.
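The flush step can be pictured like this. In real OpenClaw the model decides what is worth keeping; this sketch just persists every message that is about to be dropped, and the function names and trimming policy are assumptions:

```python
def flush_before_trim(messages, token_count, limit, save_note):
    """Toy sketch of the automatic flush: before trimming old
    messages to fit the context limit, persist what will be lost."""
    if token_count(messages) <= limit:
        return messages  # under the limit, nothing to do
    kept, dropped = [], []
    total = 0
    for msg in reversed(messages):  # keep the most recent messages
        total += token_count([msg])
        (kept if total <= limit else dropped).append(msg)
    for msg in reversed(dropped):
        save_note(msg)  # e.g. append to the daily log on disk
    return list(reversed(kept))
```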

Vector search over Markdown

OpenClaw builds a small vector index over the memory files so memory_search can match by meaning, not just exact words. By default, this uses remote embeddings with automatic provider selection (OpenAI, Gemini, Voyage, or Mistral, based on which keys are available).

If you don't want remote embedding calls, local embeddings work too. Hybrid search (combining BM25 keyword matching with vector search) helps with queries where exact terms matter, like error codes or IDs. Both local embeddings and hybrid search are off by default and configurable in openclaw.json under agents.defaults.memorySearch.
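Hybrid search amounts to blending two scores per document: a vector-similarity score and an exact-keyword score. A toy sketch under stated assumptions: the bag-of-words "embedding," the all-terms keyword rule, and the `alpha` weight are illustrative, not OpenClaw's actual scoring:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    """1.0 only if every query term appears verbatim in the doc.
    This is what catches error codes and IDs that embeddings blur."""
    words = doc.lower().split()
    return float(all(t in words for t in query.lower().split()))

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    """Blend vector similarity with exact keyword matching."""
    vec = cosine(Counter(query.lower().split()),
                 Counter(doc.lower().split()))
    return alpha * vec + (1 - alpha) * keyword_score(query, doc)
```

The exact-match half is why hybrid search pays off for queries like error codes: an embedding may place "ERR_4042" near unrelated error text, but the keyword score only rewards a verbatim hit.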

Why a dedicated server matters

On a ClawCloud server, the workspace at ~/.openclaw/workspace persists across process restarts and server reboots. Daily logs accumulate over weeks and months. The bot's memory actually builds up because the files stay on disk.

On shared hosting or ephemeral containers, these files can disappear when the process restarts. A bot that loses its memory files every deployment cycle is a bot that never retains context beyond a single session.

This is one of the practical advantages of having a dedicated VM for your OpenClaw instance. The memory files are yours, on your server, and they persist as long as the server exists.

What you don't need to configure

Memory works out of the box with default settings. The daily log, long-term file, automatic flush, and vector search are all enabled by default on a fresh OpenClaw install.

Advanced options exist if you want them: QMD (an experimental memory backend), temporal decay (older memories rank lower over time, 30-day half-life), and MMR re-ranking (reduces redundancy in search results). None of these are required. The defaults handle most use cases.
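The temporal-decay option reduces to exponential decay: with a 30-day half-life, a memory's rank weight halves every 30 days. The source doesn't give OpenClaw's exact formula, but a half-life decay looks like this:

```python
def decay_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Exponential decay: the weight halves every half_life_days.
    A 60-day-old memory keeps a quarter of its original weight."""
    return 0.5 ** (age_days / half_life_days)
```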

For a detailed reference, see the OpenClaw memory documentation. The OpenClaw docs cover the full configuration surface, including custom embedding providers and advanced memory backends. To extend what the bot can do beyond memory, check out the skills guide. And if you want to change which model powers the memory-writing process, the model switching guide covers that.

Ready to deploy?

Skip the setup — your OpenClaw assistant runs on a dedicated server in under a minute.

Deploy Your OpenClaw
