GitHub: https://github.com/timho102003/global-issue-memory Try it: https://www.usegim.com/docs/getting-started (free, no signup required to use the MCP tools)
How I got here:
I do a lot of vibe coding where I prompt Claude Code and let it run autonomously. I kept noticing the same pattern: the AI would hit a common error, spend 30K+ tokens doing web searches and trying failed fixes, and by the time it found the answer the context window was so bloated that output quality had degraded. I started calling this "context rot."
My first attempt at fixing this was having the AI store solved issues in local markdown files and check them before debugging. It worked for my own projects but obviously didn't scale.
Then I realized this is basically the same problem Stack Overflow solved for humans. The difference is AI assistants can't efficiently parse discussion threads. They need structured, machine-readable fixes.
What it does:
GIM is an MCP server with five tools. When an AI hits an error, it calls gim_search_issues and gets back a verified fix in ~500 tokens instead of burning 30K on web searches. When the AI solves something new, it can submit the solution back via gim_submit_issue. Solutions are deduplicated and matched using semantic search.
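To make the token savings concrete, here's roughly what a tool call and its structured result look like. MCP tool calls ride on JSON-RPC 2.0 (`tools/call` per the Model Context Protocol spec), but the argument and result field names below are illustrative assumptions, not GIM's actual schema:

```python
# An MCP "tools/call" request (JSON-RPC 2.0 shape per the MCP spec).
# Argument names are illustrative assumptions, not GIM's real schema.
search_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "gim_search_issues",
        "arguments": {
            "query": "TypeError: 'coroutine' object is not iterable",
            "library": "fastapi",  # hypothetical filter parameter
        },
    },
}

# A structured result stays compact (~500 tokens) and machine-readable,
# unlike a discussion thread. Field names here are also assumptions.
example_result = {
    "issue": "route handler returns coroutine instead of awaited value",
    "fix": "await the dependency call instead of returning the coroutine",
    "source": "github_crawler",
    "verified": True,
}
```

The point of the structured shape is that the AI can apply the fix directly instead of parsing prose.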
Knowledge comes from two paths. First, a GitHub issue-monitoring pipeline crawls 60+ popular repos (LangChain, FastAPI, Next.js, etc.) and tracks closed issues that link to merged PRs; those get automatically extracted, sanitized, and stored. Second, user contributions through the MCP tools: an AI session solves something with a workaround and contributes it back via gim_submit_issue.
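The crawler's selection rule boils down to a simple predicate. This is a sketch of the idea, not GIM's actual code, and the dict field names are assumptions about a normalized issue payload:

```python
def is_extractable(issue: dict) -> bool:
    """Keep only closed issues that have at least one merged linked PR.

    Illustrative sketch: field names ("state", "linked_prs", "merged")
    are assumptions, not GIM's real data model.
    """
    return (
        issue.get("state") == "closed"
        and any(pr.get("merged") for pr in issue.get("linked_prs", []))
    )

# Closed with a merged fix: worth extracting.
fixed = {"state": "closed", "linked_prs": [{"merged": True}]}
# Closed as stale, no merged PR: nothing verified to learn from.
stale = {"state": "closed", "linked_prs": []}
```

Requiring a merged PR is what makes the crawled fixes "verified": a maintainer actually shipped the change, so the knowledge base doesn't fill up with speculation from issue comments.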
Both paths feed into the same knowledge base, and here's where it gets interesting. Say a user hits a bug and their workaround gets logged. Later, when the crawler picks up the maintainer's official fix, it automatically identifies the overlap, merges, and deduplicates so the knowledge base stays clean and up to date. You get immediate workarounds to unblock you now, and the official fix once it lands.
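The merge-and-dedup step hinges on semantic similarity between issue descriptions. A minimal sketch of the idea, using a toy bag-of-words vector and cosine similarity in place of the learned embeddings GIM stores in Qdrant (the 0.8 threshold is also illustrative):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so this sketch is self-contained;
    # the real system uses learned vectors stored in Qdrant.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_duplicate(issue_a: str, issue_b: str, threshold: float = 0.8) -> bool:
    # Above the threshold, the two entries get merged: the user's
    # workaround and the maintainer's fix end up on one record.
    return cosine(embed(issue_a), embed(issue_b)) >= threshold
```

So when the crawler ingests the official fix, it lands near the earlier workaround in vector space and the two collapse into a single, current entry.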
Technical details:
Built with FastAPI, Qdrant for vector search, and Supabase. The MCP integration follows the Model Context Protocol spec so it works with any MCP-compatible client.
Privacy:
This was non-negotiable for me. If people are going to contribute solutions from their codebases, they need to trust that nothing sensitive leaks. Every submission passes through two layers of sanitization. First, regex pattern matching catches 20+ known secret types (API keys, tokens, connection strings) and scrubs PII. Second, an LLM-powered pass reviews the content in context to catch things that don't match any regex pattern, like hardcoded credentials in unusual formats. And because no sanitization system can claim to catch everything, the entire codebase is open source so anyone can audit it and open a PR if they spot a gap.
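The first, regex-based layer looks roughly like this. These four patterns are a hedged illustration of the approach (the real rule set covers 20+ secret types plus PII, and the actual patterns live in the open-source repo):

```python
import re

# Illustrative subset of secret patterns; GIM's actual rule set
# covers 20+ types plus PII scrubbing.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_GITHUB_TOKEN]"),
    (re.compile(r"postgres(?:ql)?://\S+:\S+@\S+"), "[REDACTED_CONNECTION_STRING]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
]

def sanitize(text: str) -> str:
    """Layer 1: scrub anything matching a known secret pattern."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Layer 2 exists precisely because this layer can't: an LLM pass reads the submission in context to flag credentials that don't match any fixed pattern.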
Why open source and community matter here:
The value of GIM scales directly with contributions: one AI solves a bug, every AI learns the fix. That only works if people trust it enough to participate, which is why open source isn't just a nice-to-have; it's the foundation. I bootstrapped from 60+ repos to make it useful on day one, but the real potential is the community filling in the long tail of issues that no single team could catalog on its own.
On the roadmap: MCP tools that let Claude raise issues directly in upstream repos when it detects something that needs maintainer attention, and a way for users to add their own repos to the crawler's tracking list, so coverage keeps growing with the community.
The whole thing is open source: https://github.com/timho102003/global-issue-memory
Happy to answer any questions about the architecture, the sanitization approach, or anything else.