Thank you for that simple and clear explanation.👌
No better explanation could be given than this one. I can't thank you enough for your video.
As I understand it, you should continue to train as long as the loss on both the training data and the test/unseen data continues to decline. There will be a point where the two losses diverge, and ideally you should save the model just before that point. Beyond it, the loss on the training data keeps declining, but the model fails to generalize to the test/unseen data (a minimal sketch of this pattern follows below).
I'm looking forward to learning about regularization in the next video!!!
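This early-stopping pattern is easy to sketch in code. Below is a minimal, self-contained PyTorch illustration; the synthetic dataset, the `PATIENCE` value, and the network sizes are all illustrative assumptions, not details from the video:

```python
# Minimal early-stopping sketch: train while validation loss declines,
# snapshot the model at its best point, stop once the losses diverge.
# (Synthetic data, PATIENCE, and layer sizes are illustrative assumptions.)
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data, split into training and held-out validation sets.
X = torch.randn(200, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(200, 1)
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

PATIENCE = 10  # epochs to wait after the last validation improvement
best_val = float("inf")
best_state = None
epochs_without_improvement = 0

for epoch in range(500):
    # One training step on the (tiny) training set.
    model.train()
    optimizer.zero_grad()
    train_loss = loss_fn(model(X_train), y_train)
    train_loss.backward()
    optimizer.step()

    # Loss on unseen data.
    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()

    if val_loss < best_val:
        # Validation loss still declining: snapshot the model here.
        best_val = val_loss
        best_state = copy.deepcopy(model.state_dict())
        epochs_without_improvement = 0
    else:
        # Training loss may keep falling, but validation has stopped improving.
        epochs_without_improvement += 1
        if epochs_without_improvement >= PATIENCE:
            print(f"Stopping at epoch {epoch}; best val loss {best_val:.4f}")
            break

# Restore the snapshot taken just before the divergence point.
model.load_state_dict(best_state)
```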
Amazing visualization of the hyperparameters. Thanks a lot. 🙏🏻
You're very welcome :)
Thanks
Have a nice day!💐
You're welcome :)
Love from India
🙌
🤙