Energy-based Model (EBM) for enterprise AI security: ship it or keep tuning?

  • Posted 4 hours ago by ALMOIZ_MOHMED
  • 2 points
I've been building Energy-Guard OS for the past several months, and I want an honest opinion from people who actually understand the tradeoffs, because I'm stuck at a decision point.

What is it?

It's not a fine-tuned LLM. It's a production application of Energy-based Models (EBMs): an architecture that assigns an energy score to inputs rather than predicting tokens. Low energy = normal. High energy = threat or anomaly.

The core use case: a real-time data gateway that sits between your organization and any AI service, blocking sensitive data from leaking out (PII, financials, strategic documents) while still allowing legitimate AI use. Think of it as a firewall, but one that understands semantic context, not just regex patterns.

More about EBMs:

  • No hallucination (it scores, it doesn't generate)
  • Calibrated risk score, not a binary block/allow
  • Runs on modest hardware: currently 192.8 req/s on a single 4 vCPU / 16GB RAM machine
  • 411MB model size, under 700MB memory usage
  • Built from scratch on 7 production data sources

The honest test results (10,000+ cases, independent test suite):

Total Tests: 13,000
Valid Responses: 13,000
Success Rate: 100.0%
Overall Accuracy: 88.74%
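For readers who haven't worked with EBMs, here is a minimal sketch of what "calibrated risk score, not a binary block/allow" looks like at the gateway. Everything here is hypothetical: `energy_fn`, the thresholds, and the toy scorer are illustrative stand-ins, not Energy-Guard's actual model or API.

```python
# Hypothetical sketch of an EBM-style gateway decision.
# energy_fn stands in for the real scoring model; thresholds are illustrative.

def route_request(text: str, energy_fn, block_threshold: float = 0.8,
                  review_threshold: float = 0.5) -> str:
    """Map an energy score to a graded action instead of a binary block/allow."""
    energy = energy_fn(text)  # low energy = normal, high energy = threat/anomaly
    if energy >= block_threshold:
        return "block"
    if energy >= review_threshold:
        return "flag_for_review"
    return "allow"

# Toy scorer: pretend energy rises with occurrences of a sensitive marker.
toy_energy = lambda t: min(1.0, 0.3 * t.lower().count("ssn"))
print(route_request("quarterly roadmap draft", toy_energy))         # allow
print(route_request("employee SSN list: ssn ssn ssn", toy_energy))  # block
```

The graded middle band is what distinguishes this from a regex firewall: borderline inputs can be routed to human review rather than hard-blocked, which is also one lever for trading off the false-positive problem described below.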

Duration: 18.4s
Throughput: 704.5 req/s
Avg Latency: 17.6ms
P50 Latency: 17.9ms
P95 Latency: 32.0ms
P99 Latency: 33.8ms

Category Accuracy:

  • Financial Leak Detection: 100%
  • PII / Private Data: 100%
  • Strategic Data: 100%
  • Malicious Code: 95%
  • OWASP LLM Top 10: 87%
  • Multi-Turn Attacks: 67%
  • General Benign (False Positives): 66%
  • Overall: 88.7%

F1: 0.927 | Precision: 0.922 | Recall: 0.932 | Specificity: 0.740

The problem I'm facing:

After 2 months of tuning, I've gone from 74% to 88.7% overall accuracy. But I've hit a wall where improving one category hurts another. Specifically:

  • The false positive rate is too high for general/technical content (the system over-blocks benign code and text)
  • Multi-turn conversation attacks are at 67%; the model doesn't fully leverage conversation context yet
  • Every time I push one metric up, something else drops

My actual question:

Do I ship a limited beta now, restricted to the use cases where it performs at 95-100% (financial data, PII, strategic leaks), or do I keep tuning before any real-world exposure?

Why I want to ship:

  • Real-world data will teach me more than synthetic test cases
  • The high-value use cases already work extremely well
  • I've been optimizing against synthetic benchmarks for 2 months

Why I want to wait:

  • A 34% false positive rate on general content will frustrate users
  • Multi-turn is a known attack vector that's currently weak
  • First impressions matter

Website if you want to see more details: https://ebmsovereign.com/

All forms on the website are currently disabled except for emails, which will be available for testing within 24 hours.

I genuinely want to hear from people who've shipped security products or ML systems in production. What would you do?
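Edit: for anyone double-checking the metrics above, the numbers are internally consistent. F1 is the harmonic mean of precision and recall, and the 34% false positive rate on general content is simply 1 minus the 66% General Benign accuracy (the overall specificity of 0.740 implies a lower 26% FPR across all benign inputs in the suite):

```python
# Sanity check on the reported metrics.

# F1 as the harmonic mean of the posted precision and recall.
precision, recall = 0.922, 0.932
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.927, matching the reported F1

# 34% FPR on general content = 1 - 0.66 General Benign accuracy;
# overall specificity 0.740 implies 26% FPR over all benign inputs.
fpr_general = 1 - 0.66
fpr_overall = 1 - 0.740
print(round(fpr_general, 2), round(fpr_overall, 2))  # 0.34 0.26
```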

1 comment
