MemX – my AI agent remembers I hate capsicum on pizza

  • Posted 4 hours ago by mohitbadi
  • 1 points
Like everyone else in 2025 who thought “just slap a vector DB on it” would create a coherent long-term agent, I watched my Python sidekick slowly turn into a forgetful goldfish with hoarding issues.

Classic failure mode:

User (day 1): “My favorite editor is VSCode”
User (day 47): “Actually switched to Cursor”
Agent (day 92): “You love VSCode, right?”

Or the memory index slowly becoming this:

User prefers Python
User REALLY prefers Python
Python is basically the user's personality

All equally weighted. Forever.

Vector search is great at similarity, but terrible at current truth.

Human memory fades old opinions and overwrites bad takes. My agent? Eternal archive of everything I ever said.

So I built MemX as a small experiment: what happens if an agent’s memory actually has a lifecycle?

Things like:

importance signals

frequency reinforcement

duplicate compression

explicit updates / superseding

gentle decay for outdated facts
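To make the lifecycle idea concrete, here's a minimal sketch of how those signals could combine into a retrieval score. This is a hypothetical formula, not MemX's actual scoring logic: importance is boosted by how often a memory is reinforced (with diminishing returns) and damped by an exponential half-life decay.

```python
import math
import time

# Hypothetical scoring sketch (not MemX's real formula): blend importance,
# reinforcement frequency, and age-based decay into one retrieval score.
def memory_score(importance, hits, created_at,
                 half_life_days=30.0, now=None):
    now = time.time() if now is None else now
    age_days = (now - created_at) / 86400
    decay = 0.5 ** (age_days / half_life_days)   # gentle exponential decay
    reinforcement = 1.0 + math.log1p(hits)       # diminishing returns on repeats
    return importance * reinforcement * decay

now = time.time()
fresh = memory_score(0.8, hits=1, created_at=now, now=now)
stale = memory_score(0.8, hits=1, created_at=now - 90 * 86400, now=now)
assert stale < fresh  # a 90-day-old fact ranks below an identical fresh one
```

With a 30-day half-life, a fact that hasn't been reinforced in three months scores at roughly an eighth of its fresh weight, which is the "fades old opinions" behavior, without ever hard-deleting anything.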

Stack is intentionally boring:

SQLite

FAISS

some scoring logic pretending to be a “memory OS”

Tiny example:

from memx import MemX

m = MemX()

m.add("User's favorite editor: VSCode")
m.add("User switched to Cursor")

m.compress()
m.rag("what editor should I use?")

Quick benchmark (50 noise + 5 real memories):

Vector RAG recall@3: ~0.75
MemX recall@3: ~1.0
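For reference, here's one way recall@k is typically computed for a setup like this; the exact normalization in the benchmark above is an assumption on my part (I'm dividing by min(k, relevant) so a perfect top-k scores 1.0 even when there are more relevant memories than k).

```python
# Hypothetical recall@k helper matching the benchmark shape above:
# of the top-k retrieved items, what fraction of the achievable
# relevant hits did we get?
def recall_at_k(retrieved, relevant, k=3):
    hits = sum(1 for r in retrieved[:k] if r in relevant)
    return hits / min(len(relevant), k)

relevant = {"fact-1", "fact-2", "fact-3"}
print(recall_at_k(["fact-1", "noise", "fact-2"], relevant))   # 2 of 3 hits
print(recall_at_k(["fact-1", "fact-2", "fact-3"], relevant))  # perfect: 1.0
```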

Longer simulation:

10k interactions → ~9k memories

Latency on my laptop:

10k memories → ~0.3ms

100k → ~3ms

1M → ~30ms

This isn't trying to replace vector search or be the next agent framework. It's just an experiment in whether agents behave better if memory has a lifecycle instead of being a permanent document archive.

Curious how others handle contradictory or evolving memories in long-running agents.

Repo: https://github.com/mohitkumarrajbadi/memx

(And yes, the capsicum thing is real. The agent still argues about it.)
