- A2A Protocol (Google / Linux Foundation, 2025)
- GraphRAG (Microsoft, arXiv:2404.16130)
- MACLA Bayesian learning (arXiv:2512.18950)
- A-RAG hybrid search (arXiv:2602.03442)
Key difference: the research papers assume cloud APIs. SuperLocalMemory implements everything locally, with zero API calls.

## How Recall Works

A query for "authentication" triggers:
1. FTS5 full-text search
2. TF-IDF vector similarity
3. Graph traversal for related memories
4. Hierarchical expansion (parent/child context)
5. Hybrid ranking (combines all signals)
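To make the pipeline concrete, here is a minimal sketch of how two of those signals (FTS5 keyword search and TF-IDF similarity) can be blended into a hybrid ranking, using only Python's standard library. This is an illustration under assumptions, not SuperLocalMemory's actual code: the table name, tokenizer, and blend weights are hypothetical, and graph traversal and hierarchical expansion are omitted.

```python
import math
import sqlite3
from collections import Counter

memories = [
    "JWT authentication flow for the API gateway",
    "Next.js 15 uses Turbopack by default",
    "OAuth2 authentication tokens expire after one hour",
]

# --- Signal 1: FTS5 full-text search (hypothetical schema) ---
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE mem USING fts5(body)")
db.executemany("INSERT INTO mem(body) VALUES (?)", [(m,) for m in memories])

def fts_scores(query):
    # FTS5's bm25() returns lower-is-better values, so negate into a
    # positive "higher is better" score; rowids start at 1.
    rows = db.execute(
        "SELECT rowid, -bm25(mem) FROM mem WHERE mem MATCH ?", (query,)
    ).fetchall()
    return {rowid - 1: score for rowid, score in rows}

# --- Signal 2: TF-IDF cosine similarity (no embedding API needed) ---
def tokenize(text):
    return text.lower().split()

docs = [Counter(tokenize(m)) for m in memories]
df = Counter(term for d in docs for term in d)   # document frequency
N = len(docs)

def tfidf(counts):
    return {t: c * math.log((1 + N) / (1 + df[t])) for t, c in counts.items()}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vector_scores(query):
    q = tfidf(Counter(tokenize(query)))
    return {i: cosine(q, tfidf(d)) for i, d in enumerate(docs)}

# --- Hybrid ranking: weighted blend of both signals ---
def recall(query, w_fts=0.5, w_vec=0.5):
    fts, vec = fts_scores(query), vector_scores(query)
    combined = {
        i: w_fts * fts.get(i, 0.0) + w_vec * vec.get(i, 0.0)
        for i in range(len(memories))
    }
    return sorted(combined, key=combined.get, reverse=True)

print(recall("authentication"))  # memories mentioning "authentication" rank first
```

Blending keyword and vector scores like this lets exact-term matches and fuzzy similarity reinforce each other, which is the usual motivation for hybrid search.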
Performance: <50ms, even with 10K+ memories.

## Comparison

| Feature | SuperLocalMemory | Mem0/Zep/Letta |
|---|---|---|
| Privacy | 100% local | Cloud |
| Cost | Free | $40-50+/mo |
| Knowledge Graph | Yes | — |
| Pattern Learning | Bayesian | — |
| Multi-tool | 16+ | Limited |
| CLI | Yes | — |
| Works Offline | Yes | — |

## Real Usage

Cross-tool context:

```bash
# Save in terminal
slm remember "Next.js 15 uses Turbopack" --tags nextjs

# Later in Cursor, Claude auto-recalls via MCP
```

Project profiles:

```bash
slm switch-profile work-project
slm switch-profile personal-blog
# Separate memory per project
```

Pattern learning: after several sessions, Claude learns you prefer TypeScript strict mode, Tailwind styling, and Vitest testing, and starts suggesting them without being asked.

## Installation

```bash
npm install -g superlocalmemory
```

Auto-configures MCP for Claude Desktop, Cursor, and Windsurf, and sets up the CLI commands. That's it.

## Why Local-First Matters
- Privacy: Code never leaves your machine
- Ownership: Your data, forever
- Speed: <50ms queries, no network latency
- Reliability: Works offline, no API limits
- Cost: $0 forever
## Tech Stack
- SQLite (ACID, zero config)
- FTS5 (full-text search)
- TF-IDF (vector similarity, no OpenAI API)
- igraph (Leiden clustering)
- Bayesian inference (pattern learning)
- MCP (native Claude integration)
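The "Bayesian inference (pattern learning)" piece can be pictured with a classic Beta-Bernoulli model: each preference (say, "TypeScript strict mode") gets a Beta posterior that is updated every time a suggestion is accepted or rejected. The class, preference names, and threshold below are hypothetical illustrations, not SuperLocalMemory's API.

```python
class PreferenceModel:
    """Toy Beta-Bernoulli learner for per-preference acceptance rates."""

    def __init__(self):
        # Beta(1, 1) uniform prior: no opinion about any preference yet
        self.posteriors = {}

    def observe(self, preference, accepted):
        a, b = self.posteriors.get(preference, (1.0, 1.0))
        # Conjugate update: accept -> alpha += 1, reject -> beta += 1
        self.posteriors[preference] = (a + 1, b) if accepted else (a, b + 1)

    def probability(self, preference):
        # Posterior mean of the acceptance rate
        a, b = self.posteriors.get(preference, (1.0, 1.0))
        return a / (a + b)

    def should_suggest(self, preference, threshold=0.7):
        return self.probability(preference) >= threshold

model = PreferenceModel()
for _ in range(4):
    model.observe("typescript-strict", accepted=True)
model.observe("typescript-strict", accepted=False)
model.observe("yarn", accepted=False)

print(model.probability("typescript-strict"))   # 5/7, about 0.714
print(model.should_suggest("typescript-strict"))  # True
print(model.should_suggest("yarn"))               # False
```

The appeal of the conjugate update is that "learning" is just incrementing two counters per preference, which keeps it fast and fully local.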
## GitHub

https://github.com/varun369/SuperLocalMemoryV2

MIT License. Full docs in the wiki.
Current status: v2.4 stable. v2.5 (March) adds a real-time event stream, concurrent access, and trust scoring. v2.6 (May) adds the A2A Protocol for multi-agent collaboration.

Built by Varun Pratap Bhardwaj, Solution Architect, 15+ years of AI/ML experience.