What we've evaluated:
- OpenAI's Evals framework: Works well for benchmarking but is awkward for custom use cases. Configuration goes through YAML files, which can get complex, and extending functionality requires diving deep into their codebase. Primarily designed for batch processing rather than real-time monitoring.
- LangSmith: Strong tracing capabilities but eval features feel secondary to their observability focus. Pricing starts at $0.50 per 1k traces after the free tier, which adds up quickly with high volume. UI can be slow with larger datasets.
- Weights & Biases: Powerful platform but designed primarily for traditional ML experiment tracking. Setup is complex and requires significant ML expertise. Our product team struggles to use it effectively.
- Humanloop: Clean interface focused on prompt versioning with basic evaluation capabilities. Limited eval types available and pricing is steep for the feature set.
- Braintrust: Interesting approach to evaluation but feels like an early-stage product. Documentation is sparse and integration options are limited.
What we actually need:
- Real-time eval monitoring (not just batch)
- Custom eval functions that don't require PhD-level setup (rough sketch after this list)
- Human-in-the-loop workflows for subjective tasks
- Cost tracking per model/prompt
- Integration with our existing observability stack
- Something our product team can actually use
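To make "custom eval functions" and "cost tracking" concrete, this is roughly the ceiling of complexity we want to deal with. It's a minimal sketch, not taken from any of the tools above; `EvalResult`, `check_grounded`, and the price table are made-up names and placeholder numbers:

```python
from dataclasses import dataclass

# Hypothetical example of the kind of eval we want to write without
# learning a whole framework. All names here are illustrative.

@dataclass
class EvalResult:
    passed: bool
    score: float
    notes: str = ""

# Rough per-1k-token prices; placeholder numbers, not vendor quotes.
COST_PER_1K_TOKENS = {"gpt-4o": 0.005, "gpt-4o-mini": 0.0006}

def check_grounded(answer: str, source_doc: str) -> EvalResult:
    """Toy faithfulness check: flag answer tokens that never appear in the source."""
    unsupported = [w for w in answer.split() if w.lower() not in source_doc.lower()]
    score = 1.0 - len(unsupported) / max(len(answer.split()), 1)
    return EvalResult(passed=score > 0.8, score=score,
                      notes=f"{len(unsupported)} unsupported tokens")

def estimate_cost(model: str, total_tokens: int) -> float:
    """Per-model cost tracking: multiply token count by a simple price table."""
    return COST_PER_1K_TOKENS.get(model, 0.0) * total_tokens / 1000
```

If writing and registering something like this takes more than a few minutes, the tool is already too heavy for our product team.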
Current solution:
Custom scripts plus monitoring dashboards for basic metrics, and weekly manual reviews in spreadsheets. It works, but it doesn't scale and we miss edge cases.
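For context, our current stopgap is roughly this shape: a script replays logged outputs through a few checks like the one sketched above and writes a CSV that someone reviews in a spreadsheet each week. The file names and fields are illustrative, and it reuses the hypothetical `check_grounded` from the previous sketch:

```python
import csv
import json
from datetime import date

def weekly_review_export(log_path: str = "llm_logs.jsonl",
                         out_path: str = f"eval_review_{date.today()}.csv") -> None:
    """Batch-score logged LLM outputs and dump a CSV for manual spreadsheet review."""
    rows = []
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            # Reuses check_grounded from the sketch above (illustrative only).
            result = check_grounded(record["answer"], record["source_doc"])
            rows.append({
                "prompt_id": record.get("prompt_id", ""),
                "model": record.get("model", ""),
                "score": round(result.score, 3),
                "passed": result.passed,
                "notes": result.notes,
            })

    if not rows:
        return
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

The batch part is fine; what we're missing is anything real-time, any sampling for human review, and anyone catching regressions between the weekly passes.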
Has anyone found tools that handle production LLM evaluation well? Are we expecting too much, or is the tooling genuinely immature? Especially interested in hearing from teams without dedicated ML engineers.