One friction point I keep running into is how to handle logging and evaluating my models. Right now I work in a Jupyter notebook: I train the model, then produce a few graphs of different metrics on the test set.
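For context, my current loop boils down to roughly this (toy sklearn stand-in for my actual model and data; in practice the metrics end up in matplotlib plots that I eyeball rather than anything saved or versioned):

```python
# Toy stand-in for my current workflow: train in a notebook cell,
# then compute a few test-set metrics that I normally just plot and eyeball.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever dataset the project uses.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

# These numbers currently live only in the notebook output,
# which is the part that feels fragile.
metrics = {
    "accuracy": accuracy_score(y_test, preds),
    "f1": f1_score(y_test, preds),
}
print(metrics)
```

Nothing about a run (hyperparameters, data version, metrics) survives outside the notebook, which is why comparing runs later feels vibes-based.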
This whole workflow seems to be standard among the folks in my program, but I can't shake the feeling that it's vibes-based and suboptimal.
I've got a few projects coming up and I want to use them as a chance to improve my approach to training models. What works for you? Are there any articles or libraries you would recommend? What do you wish junior engineers knew about this?
Thanks!