It runs notebooks in isolated pods and can validate them against deployed ML models on platforms like KServe, OpenShift AI, and vLLM. It also performs regression testing by comparing executed notebook outputs against a known-good "golden" run.
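The golden-comparison idea can be sketched in a few lines of Go: parse the `.ipynb` JSON for both notebooks and diff the per-cell outputs. This is a minimal illustration of the concept, not the operator's actual implementation; the struct fields mirror just the subset of the notebook schema needed here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Cell and Output mirror the small subset of the .ipynb JSON
// schema needed to compare execution results.
type Output struct {
	OutputType string   `json:"output_type"`
	Text       []string `json:"text,omitempty"`
}

type Cell struct {
	CellType string   `json:"cell_type"`
	Outputs  []Output `json:"outputs,omitempty"`
}

type Notebook struct {
	Cells []Cell `json:"cells"`
}

// compareOutputs returns indices of cells whose outputs differ
// between the executed notebook and the golden notebook.
func compareOutputs(executed, golden Notebook) []int {
	var diffs []int
	n := len(executed.Cells)
	if len(golden.Cells) < n {
		n = len(golden.Cells)
	}
	for i := 0; i < n; i++ {
		a, _ := json.Marshal(executed.Cells[i].Outputs)
		b, _ := json.Marshal(golden.Cells[i].Outputs)
		if string(a) != string(b) {
			diffs = append(diffs, i)
		}
	}
	return diffs
}

func main() {
	goldenJSON := `{"cells":[{"cell_type":"code","outputs":[{"output_type":"stream","text":["42\n"]}]}]}`
	executedJSON := `{"cells":[{"cell_type":"code","outputs":[{"output_type":"stream","text":["43\n"]}]}]}`

	var golden, executed Notebook
	json.Unmarshal([]byte(goldenJSON), &golden)
	json.Unmarshal([]byte(executedJSON), &executed)

	fmt.Println(compareOutputs(executed, golden)) // cell 0 differs
}
```

A real comparator would also need to normalize nondeterministic output (timestamps, memory addresses, execution counts) before diffing.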
The goal is to make notebooks more reliable and reproducible in production environments. It's built with Go and the Operator SDK.
We're looking for contributors. There are opportunities to work on features such as smarter error reporting, observability dashboards, and support for additional platforms.
GitHub: https://github.com/tosin2013/jupyter-notebook-validator-oper...