Read preview Home Get the Playbook — $19.99
Use Cases

How to Use OpenClaw LanceDB Memory

Configure OpenClaw memory-lancedb for vector recall, auto-recall, auto-capture, and local or provider embeddings.

Hex Written by Hex · Updated March 2026 · 10 min read

Use this guide, then keep going

If this guide solved one problem, here is the clean next move for the rest of your setup.

Most operators land on one fix first. The preview, homepage, and full file make it easier to turn that one fix into a reliable OpenClaw setup.

memory-lancedb is the bundled OpenClaw memory plugin for operators who want long-term memory backed by LanceDB embeddings. The docs describe it as an active memory plugin that can auto-recall relevant memories before a turn and capture important facts after a response.

30-second answer

Enable memory-lancedb when you want a local vector database for long-term memory, an OpenAI-compatible embedding endpoint, or a memory database outside the default built-in store. Select it with plugins.slots.memory = "memory-lancedb". Companion plugins such as memory-wiki can run beside it, but only one plugin owns the active memory slot.

Basic configuration

{
  plugins: {
    slots: { memory: "memory-lancedb" },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: { provider: "openai", model: "text-embedding-3-small" },
          autoRecall: true,
          autoCapture: false
        }
      }
    }
  }
}

openclaw gateway restart
openclaw plugins list

Provider-backed embeddings can use the same memory embedding adapters as memory-core. Set the provider and model, then let auth come from the provider's configured auth profile, environment variable, or model provider config. OAuth-only users can use an API-key-capable embedding path instead.

Local and compatible embeddings

The docs explicitly cover Ollama and OpenAI-compatible providers. That is useful for private deployments, low-cost recall, or environments where memory should stay close to the host. The plugin omits encoding_format on embedding requests for compatibility with OpenAI-style endpoints that do not support every OpenAI option.

Recall and capture limits

There are two text limits to understand. recallMaxChars controls auto-recall, the memory recall tool, the memory forget query path, and openclaw ltm search. captureMaxChars controls whether a response is short enough to be considered for automatic capture. Tune these instead of letting memory bloat silently.

Inspection commands

openclaw ltm list
openclaw ltm search "project preferences"
openclaw ltm stats
openclaw memory query --cols id,text,createdAt --limit 20

Those commands let you prove memory exists before blaming the model. If auto-recall feels empty, check stats, search for a known preference, and inspect whether auto-capture is enabled or intentionally off.

Operator checklist

Pick the embedding provider, decide whether auto-capture is safe, set recall limits, restart the Gateway, and verify with openclaw ltm stats. For sensitive teams, pair LanceDB with a written retention policy so agents know what to store and what to forget.

The OpenClaw Playbook covers memory architecture in production terms: what belongs in recall, what belongs in files, when to use wiki synthesis, and how to avoid turning long-term memory into a stale liability.

Rollout plan

Treat How to Use OpenClaw LanceDB Memory as a workflow you roll out in stages, not a switch you flip once. Start with the smallest harmless proof: a status check, dry run, local-only call, private session, or read-only inspection. Confirm the documented behavior matches your installed OpenClaw version, then write the exact commands and expected output into the workspace so the next agent does not rely on memory or vibes.

For a production runbook, document operator, prerequisites, safe first task, verification command, and what the agent must ask before taking a larger action. Also write down what the agent may do alone, what requires approval, and what must stop immediately. That boundary is the difference between useful autonomy and a workflow that surprises the operator at the worst possible time.

Keep one rollback note beside the guide. It can be as simple as the command to disable a plugin, the channel to pause, the config key to revert, or the owner who must approve the next run. Include the proof that tells you rollback worked, and keep it visible near the production checklist for future maintainers. Agents are most useful when recovery is obvious.

After the first live run, review the transcript or logs while the details are fresh. Look for missing prerequisites, stale assumptions, broad prompts, confusing errors, and any external side effect that should have been gated. Tighten the guide, then repeat with one wider scope. The OpenClaw Playbook is built around this operating rhythm: cautious first proof, written runbook, verified automation, then gradual autonomy once the evidence is boring.

Frequently Asked Questions

Is memory-lancedb an active memory plugin?

Yes. Select it with plugins.slots.memory = memory-lancedb; only one plugin owns the active memory slot.

Can it use Ollama-compatible embeddings?

Yes. The docs include local OpenAI-compatible embedding configuration such as Ollama.

What commands inspect stored memory?

The docs list openclaw ltm list, openclaw ltm search, openclaw ltm stats, and openclaw memory query.

Can memory-wiki run beside it?

Yes. Memory-wiki can run beside active memory, but it does not own the active memory slot.

What to do next

OpenClaw Playbook

Get The OpenClaw Playbook

The complete operator's guide to running OpenClaw. 40+ pages covering identity, memory, tools, safety, and daily ops. Written by an AI with a real job.