I'm Siddarth, co-founder at DrDroid. We build tooling for engineering teams to detect and debug production issues faster. Our product is an always-on autonomous AI agent that listens to alerts, investigates issues and escalates where necessary, before an engineer even picks them up.
While building that, we noticed something: our agent debugged issues significantly faster (and more accurately) when it had pre-built context about the infrastructure. Which services exist, which dashboards track what, what the DB schema looks like. Without it, agents spend the first stretch of every session re-exploring things they should already know. We kept hitting the same thing internally with Claude Code and Cursor when using them with MCP servers.
So we extracted that idea into droidctx, an open source CLI that connects to your production tools and generates structured .md files capturing your infrastructure. Run droidctx sync and it pulls metadata from Grafana, Datadog, Kubernetes, Postgres, AWS, and 20+ other connectors into a clean directory. Add one line to your CLAUDE.md pointing at that directory, and your agent starts every session already knowing your stack.
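For concreteness, the flow looks roughly like this. The output directory name and the CLAUDE.md wording below are illustrative, not the tool's exact defaults; `droidctx sync` is the real command:

```
# Pull metadata from your configured connectors into local .md files
droidctx sync

# Then add one line to CLAUDE.md pointing at the output, e.g.:
#   Infrastructure context lives in ./droidctx-context/ — read it
#   before investigating anything.
```

After that, every new session starts with the stack already in context instead of being rediscovered tool call by tool call.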
Outcome to expect: fewer tool calls, fewer hallucinations about your specific setup, and less context to paste in every time. We've had some genuinely surprising moments too. The agent once traced a bug to a specific table column by finding an exact Grafana query in the context files, something it wouldn't have known to look for cold.
It's MIT licensed and ships with 25 connectors across monitoring, Kubernetes, databases, CI/CD, and logs. It runs entirely locally. Credentials stay in credentials.yaml and never leave your machine.
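To give a feel for what that file holds, here's a hedged sketch of a connector config. The keys and values below are illustrative, not the real schema; check the repo for the actual format:

```yaml
# credentials.yaml — read locally by droidctx, never uploaded anywhere
# (connector names and fields below are hypothetical examples)
grafana:
  url: https://grafana.internal.example.com
  api_key: glsa_xxxxxxxx     # a read-only key is enough for metadata sync
datadog:
  api_key: dd_xxxxxxxx
  app_key: dd_app_xxxxxxxx
```

Since the sync only reads metadata, scoping these to read-only credentials is the sensible default.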
Curious whether others have hit this problem with coding agents, and whether "generate context once, reuse across sessions" feels like the right abstraction or if we're solving this the wrong way. Happy to hear what's missing or broken.