The Problem: With 10+ frontier models now available (Claude Opus/Sonnet/Haiku, GPT-5/4o, Gemini 2.5, DeepSeek, etc.), developers burn time, tokens, and attention guessing which one to use. The right model for boilerplate is not the one you need for debugging, but testing both is expensive.
The Solution:
A lightweight MCP server that uses task-aware heuristics to instantly recommend a model based on:

- Task type: refactoring, debugging, architecture, boilerplate, or logic-heavy work
- Performance: speed vs. capability trade-offs
- Cost efficiency: right-sizing model selection to the job
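To make the idea concrete, here is a minimal TypeScript sketch of task-aware routing: a lookup table mapping task categories to a recommended model tier. The model names and rules below are illustrative assumptions, not architectgbt-mcp's actual heuristics.

```typescript
// Illustrative sketch of task-aware model routing (NOT the real
// architectgbt-mcp rules): cheap/fast tiers for rote work, frontier
// tiers for deep reasoning.

type TaskType =
  | "boilerplate"
  | "refactoring"
  | "debugging"
  | "architecture"
  | "logic";

interface Recommendation {
  model: string; // illustrative model name, not a real recommendation
  reason: string; // the speed/capability/cost trade-off behind the pick
}

// Heuristic table — entries are assumptions for demonstration only.
const RULES: Record<TaskType, Recommendation> = {
  boilerplate: { model: "claude-haiku", reason: "fast and cheap for rote generation" },
  refactoring: { model: "claude-sonnet", reason: "balanced speed and code understanding" },
  debugging: { model: "claude-opus", reason: "strongest multi-step reasoning" },
  architecture: { model: "claude-opus", reason: "long-horizon design trade-offs" },
  logic: { model: "claude-opus", reason: "logic-heavy chains of reasoning" },
};

function recommendModel(task: TaskType): Recommendation {
  return RULES[task];
}

console.log(recommendModel("boilerplate").model); // → "claude-haiku"
```

The real server layers performance and cost metrics on top of this kind of task classification; the sketch only shows the shape of the decision.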
Key Features:
- Integrates into your editor workflow (Claude Desktop, Cursor, Windsurf)
- One-line installation: `npx architectgbt-mcp`
- Task-specific recommendations across 8+ AI models
- Open source, MIT licensed
Installation:
Add the server to your MCP client's configuration (for Claude Desktop, this entry goes under the `mcpServers` key in `claude_desktop_config.json`):

```json
{
  "architectgbt-mcp": {
    "command": "npx",
    "args": ["architectgbt-mcp"]
  }
}
```
Why It Matters:
- Model Context Protocol is becoming the standard for AI tooling (see: Windsurf's architecture)
- With Anthropic, OpenAI, Google, and others releasing models monthly, an advisor that automates model selection is a productivity multiplier
- Developers can focus on coding, not on second-guessing which AI provider or model to use
GitHub: 3rdbrain/architectgbt-mcp
Would appreciate feedback from the HN community on use cases, improvements, or related problems in this space.