Intelligent, persistent memory for AI agents: across every session, every agent, forever.
Overview
Memory System fuses three proven memory paradigms into one unified layer for OpenClaw agents. Facts are automatically extracted from raw memory files, deduplicated, classified, and indexed, then surfaced via semantic vector search and a compact INDEX.md injected at session start.
Capabilities
Six purpose-built capabilities that make your agents smarter over time.
Combines Mem0 (fact extraction via LLM), Zep (temporal decay with TTL), and MemGPT (agent self-management) into one unified system.
Four TTL categories: personal (∞), system (∞), agents (30 days), and tasks (7 days). Expired memories are summarized and archived, never deleted.
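The four TTL categories above can be sketched as a small policy table. This is a minimal illustration, assuming a simple dict-based policy and an `is_expired` helper; the plugin's actual internals are not specified here.

```python
from datetime import datetime, timedelta

# Illustrative TTL policy mirroring the four categories above.
# None means the memory is permanent and never expires.
TTL_POLICY = {
    "personal": None,              # permanent
    "system": None,                # permanent
    "agents": timedelta(days=30),  # refreshed on a 30-day cycle
    "tasks": timedelta(days=7),    # expires after 7 days
}

def is_expired(category: str, created_at: datetime, now: datetime) -> bool:
    """True once a memory's TTL window has elapsed; permanent entries never expire."""
    ttl = TTL_POLICY[category]
    return ttl is not None and now - created_at > ttl
```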
768-dimension embeddings with qwen3-reranker semantic re-ranking and automatic query expansion for concept-aware retrieval across all memory.
Claude Haiku scans raw .md files, extracts structured facts, deduplicates by word overlap (>60% threshold), and classifies before storage.
Automatically scans session transcripts, detects failure→retry→success patterns, and extracts structured lessons with two-layer deduplication.
Shared memory layer accessible to all agents (main, researcher, coding, content, trader), with a compact INDEX.md (<2KB) injected at session start.
Architecture
A five-stage pipeline transforms raw agent notes into a searchable, time-aware memory graph.
Agent dumps .md files into the memory workspace. The organizer scans for changed files using MD5 hash checks.
Claude Haiku reads each changed file and extracts discrete, structured facts. Deduplication removes any facts with >60% word overlap.
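The >60% word-overlap deduplication can be sketched like this. The Jaccard-style overlap formula is an assumption; the extractor may compute overlap differently.

```python
def word_overlap(a: str, b: str) -> float:
    """Overlap between the word sets of two facts (Jaccard similarity; an
    assumption about how the real extractor measures overlap)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def dedup(facts, threshold=0.6):
    """Keep a fact only if it overlaps <= threshold with every fact already kept."""
    kept = []
    for fact in facts:
        if all(word_overlap(fact, k) <= threshold for k in kept):
            kept.append(fact)
    return kept
```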
Each fact is categorized (personal / tasks / agents / system) and assigned a TTL. Facts are written to SQLite with FTS5 full-text index.
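A minimal sketch of this storage step, using SQLite with an external-content FTS5 index. The table and column names are illustrative assumptions, not the plugin's actual schema.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facts (
    id INTEGER PRIMARY KEY,
    category TEXT NOT NULL,   -- personal / tasks / agents / system
    body TEXT NOT NULL,
    created_at REAL NOT NULL,
    ttl_days INTEGER          -- NULL = permanent
);
-- External-content FTS5 index over the fact body for full-text search.
CREATE VIRTUAL TABLE facts_fts USING fts5(body, content=facts, content_rowid=id);
""")

def store_fact(body, category, ttl_days):
    """Insert a classified fact and keep the FTS5 index in sync."""
    cur = conn.execute(
        "INSERT INTO facts (category, body, created_at, ttl_days) VALUES (?,?,?,?)",
        (category, body, time.time(), ttl_days))
    conn.execute("INSERT INTO facts_fts (rowid, body) VALUES (?, ?)",
                 (cur.lastrowid, body))
    conn.commit()

store_fact("User's cat is named Momo", "personal", None)
hits = conn.execute(
    "SELECT body FROM facts_fts WHERE facts_fts MATCH ?", ("momo",)).fetchall()
```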
Indexer generates INDEX.md (<2KB) and exports structured/*.md files. QMD auto-indexes with 768-dim embeddings every 10 minutes.
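The INDEX.md generation step might look like the sketch below, grouping facts by category and truncating to stay under the 2 KB budget. The layout is hypothetical; the real indexer's format is not specified here.

```python
def build_index(facts, limit=2048):
    """Render a compact INDEX.md grouped by category, capped at `limit` bytes.
    Layout is an illustrative assumption."""
    by_cat = {}
    for f in facts:
        by_cat.setdefault(f["category"], []).append(f["body"])
    lines = ["# Memory Index", ""]
    for cat, bodies in sorted(by_cat.items()):
        lines.append(f"## {cat}")
        lines += [f"- {b}" for b in bodies]
        lines.append("")
    text = "\n".join(lines)
    # Truncate on a byte boundary if the rendered index exceeds the budget.
    if len(text.encode()) > limit:
        text = text.encode()[:limit].decode("utf-8", "ignore")
    return text
```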
Agents read INDEX.md at session start. QMD auto-injects semantically relevant memories. Agents can call the search API for deep queries.
Raw .md files → LLM fact extraction → Dedup + classify → SQLite + FTS5 → INDEX.md + QMD → Agent context
Coverage
Six memory domains, each with its own TTL policy and retrieval strategy.
User identity, habits, preferences, family, and pets. Stored permanently: the things your agent should always know about you.
Recent work sessions, debugging results, solutions found. Auto-expires after 7 days to keep the context fresh and relevant.
How agents are configured, managed, and their current state. Refreshes on a 30-day cycle as your agent setup evolves.
Tools, environment settings, API keys, file paths. Permanent storage because your infrastructure rarely changes fundamentally.
Structured lessons extracted from error→retry→success episodes. Prevents the same mistakes from recurring across sessions and agents.
Facts that matter beyond a single session. Surfaced via QMD vector search and injected automatically into agent context as needed.
Decay Engine
When a memory's TTL expires, the decay engine kicks in. Claude summarizes the expired facts into consolidated lessons, then archives them permanently. Archived memories remain fully searchable via FTS5 and vector search; they just no longer appear in the active INDEX.md, keeping agent context lean.
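The decay pass described above can be sketched as below. The `store` layout and the `summarize` callback are assumed interfaces; in the real engine the summary comes from Claude, not a local function.

```python
import time

def decay_sweep(store, summarize, now=None):
    """Move expired facts to the archive and keep one consolidated lesson active.
    `store` is {"active": [...], "archive": [...]}; each fact is a dict with
    body / category / created_at / ttl_days (None = permanent). All assumed shapes."""
    now = now or time.time()
    expired = [f for f in store["active"]
               if f["ttl_days"] is not None
               and now - f["created_at"] > f["ttl_days"] * 86400]
    if expired:
        lesson = summarize([f["body"] for f in expired])
        store["archive"].extend(expired)  # archived, never deleted: still searchable
        store["active"] = [f for f in store["active"] if f not in expired]
        store["active"].append({"body": lesson, "category": "system",
                                "created_at": now, "ttl_days": None})
    return store
```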
Coming Soon
Be the first to know when this plugin launches.