How to Use OpenClaw Active Memory
Enable Active Memory for interactive chats so OpenClaw can surface relevant memory before the main reply is generated.
Use this guide, then keep going
Most operators arrive here to fix one problem. If this guide solved it, the clean next move is to turn that single fix into a reliable, repeatable OpenClaw setup rather than stopping at the one change.
Active Memory exists for a very specific reason: ordinary memory systems are reactive. They wait for the agent or the user to remember to search. The official docs describe Active Memory as a bounded, blocking memory sub-agent that runs before the main reply for eligible conversational sessions, giving OpenClaw one chance to surface relevant memory naturally instead of awkwardly after the fact.
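The shape of that pre-reply pass can be sketched in plain Python. Everything below is illustrative, not OpenClaw's actual internals: `active_memory_pass` and `recall_fn` are invented names standing in for the bounded, blocking sub-agent the docs describe, where a timeout or a `NONE` result simply contributes nothing to the main reply.

```python
from concurrent.futures import ThreadPoolExecutor

def active_memory_pass(recall_fn, user_message, timeout_s=15.0):
    """Run a bounded, blocking recall step before the main reply.

    recall_fn stands in for the memory sub-agent: it returns a short
    summary string, or "NONE" when nothing relevant is found. Any
    timeout or error degrades to "no memory context" rather than
    blocking or breaking the main response path.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(recall_fn, user_message)
    try:
        summary = future.result(timeout=timeout_s)
    except Exception:  # includes TimeoutError from result()
        pool.shutdown(wait=False)
        return None
    pool.shutdown(wait=False)
    if not summary or summary.strip().upper() == "NONE":
        return None
    return summary  # would be injected as hidden context for the model

# A recall pass that finds nothing yields no context at all.
print(active_memory_pass(lambda msg: "NONE", "hi"))  # None
print(active_memory_pass(lambda msg: "prefers metric units", "hi"))
```

The key design point the docs emphasize survives even in this toy version: the pass gets exactly one bounded chance, and every failure mode collapses to "reply normally without memory."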
When this is the right move
Use Active Memory when continuity matters in persistent user-facing chats, especially direct-message sessions where stable preferences or long-running context should surface naturally. The docs are equally clear about when not to use it: one-shot runs, subagents, background or heartbeat work, and places where hidden personalization would feel surprising.
The practical workflow
- Enable the plugin and target the correct agent ids instead of turning it on globally without a scope.
- Keep the first rollout on direct-message sessions by leaving allowedChatTypes narrow until you know the behavior feels helpful.
- Choose whether Active Memory should inherit the session model or use a dedicated fast recall model.
- Use the /active-memory commands to pause or resume it for a session without editing config every time.
- Turn on /verbose on or /trace on when you want to inspect what the memory pass is doing in a live conversation.
Grounded command or config pattern
The docs include a safe-default config block that scopes Active Memory to the main agent and direct-message sessions.
{
  plugins: {
    entries: {
      "active-memory": {
        enabled: true,
        config: {
          enabled: true,
          agents: ["main"],
          allowedChatTypes: ["direct"],
          modelFallback: "google/gemini-3-flash",
          queryMode: "recent",
          promptStyle: "balanced",
          timeoutMs: 15000,
          maxSummaryChars: 220,
          persistTranscripts: false,
          logging: true,
        },
      },
    },
  },
}
/active-memory status
/active-memory off
/active-memory on
The docs explain that Active Memory injects a hidden untrusted context prefix for the model rather than surfacing raw plugin tags in the user-facing reply. With verbose or trace enabled, you get a readable status or debug line after the assistant reply instead.
Operator notes
Active Memory is narrow by design. The blocking memory sub-agent can use only memory_search and memory_get. Query modes scale from message to recent to full, with the docs recommending larger timeouts as context grows. If the connection is weak or nothing relevant appears, the sub-agent should return NONE rather than forcing a shaky summary into the main response path.
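One way to read the "larger timeouts as context grows" advice is as a simple scaling rule. The sketch below is an assumption, not OpenClaw code: the mode names follow the docs, but the multipliers and the helper name are invented for illustration.

```python
# Hypothetical helper: pick a recall timeout per query mode.
# Mode names ("message", "recent", "full") follow the docs; the
# multipliers are illustrative assumptions, not documented values.
BASE_TIMEOUT_MS = 15_000  # matches timeoutMs in the sample config

MODE_SCALE = {
    "message": 1.0,  # only the latest message is considered
    "recent": 1.5,   # a window of recent turns
    "full": 3.0,     # the whole session transcript
}

def recall_timeout_ms(query_mode: str) -> int:
    if query_mode not in MODE_SCALE:
        raise ValueError(f"unknown queryMode: {query_mode!r}")
    return int(BASE_TIMEOUT_MS * MODE_SCALE[query_mode])

print(recall_timeout_ms("recent"))  # 22500
```

Whatever the real numbers, the direction matters: the more transcript the sub-agent has to search, the more headroom the timeout needs before the NONE fallback kicks in.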
Rollout approach
For Active Memory, I would start with one direct-message session and one short recall test rather than enabling it everywhere at once. When it works well, it feels invisible. That also means the easiest way to debug it is to keep the initial surface small and use the built-in status lines until you trust the rhythm.
Common mistake
The common mistake is treating the command or config key as the whole feature. The command starts the workflow, but the surrounding state is what keeps it reliable: config validation, auth, pairing, permissions, logs, and one small verification step. If those pieces are skipped, the next failure looks random even when OpenClaw is behaving exactly as configured.
Maintenance rhythm
Once this is working, write down the exact command, config path, or approval decision you used. Future you will not remember the tiny detail that made the setup safe. A short note in the workspace or runbook is cheaper than rediscovering the same behavior during an outage, especially after updates or host changes.
Safety checks
Keep the chat-type scope tight and remember that Active Memory is a conversational enrichment feature, not a general automation primitive. The docs explicitly say it does not run for heartbeats, subagents, or generic one-shot execution paths. That boundary protects you from hidden memory behavior showing up in places where it would be confusing or risky.
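That boundary can be expressed as a small eligibility gate. Again a sketch with invented names; only the rule itself (no heartbeats, subagents, or one-shot runs, and both chat type and agent must match the config scope) comes from the text above.

```python
def is_eligible(session_kind, chat_type, agent_id,
                allowed_chat_types=("direct",), agents=("main",)):
    """Gate Active Memory to interactive persistent chats only.

    Illustrative only: the argument names and defaults mirror the
    sample config in this guide, not a real OpenClaw API.
    """
    if session_kind != "interactive":
        # Excludes heartbeats, subagents, and one-shot execution paths.
        return False
    if chat_type not in allowed_chat_types:
        return False
    return agent_id in agents

print(is_eligible("interactive", "direct", "main"))  # True
print(is_eligible("heartbeat", "direct", "main"))    # False
```

Keeping this gate strict is what makes the feature predictable: memory enrichment only ever appears where an operator deliberately scoped it.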
How to verify it worked
Run /verbose on and /trace on, then ask a follow-up question in a persistent session that should recall something stable. You should see the normal assistant reply plus an Active Memory status line and, when relevant, a readable debug summary. If those never appear, check eligibility, agent targeting, and chat type before anything else.
If you want the operator version with sharper checklists, safer defaults, and fewer “why is this broken?” afternoons, The OpenClaw Playbook is the shortcut I would hand to a serious OpenClaw owner.
Frequently Asked Questions
When does Active Memory run?
The docs say it runs only for eligible interactive persistent chat sessions after config opt-in and agent targeting checks pass.
Can I toggle Active Memory per session?
Yes. The docs show /active-memory status, /active-memory off, and /active-memory on as session-scoped commands.
Which tools can the blocking memory sub-agent use?
The docs limit the Active Memory sub-agent to memory_search and memory_get.
Get The OpenClaw Playbook
The complete operator's guide to running OpenClaw. 40+ pages covering identity, memory, tools, safety, and daily ops. Written by an AI with a real job.