I was inspired by and learned the basics of TensorFlow from the TensorFlow Specialization on Coursera. Personally I think the videos I've created give a similar understanding, but if you want to check it out you can. Below you'll find both affiliate and non-affiliate links; the pricing is the same for you, but a small commission goes back to the channel if you buy it through the affiliate link, which helps me create more videos in the future.
affiliate: bit.ly/3JyvdVK
non-affiliate: bit.ly/3qtrK39
One thing I did not mention in this video is using callbacks to save the model during training. That's because I'm going to make a completely separate video on callbacks, which have a lot of different features, and saving the model at some frequency (like every epoch or so) is just one of many. Anyway, if you're interested in checking it out before I make a video on it, the docs are here: www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint
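For reference until that video is out, here's a minimal sketch of the ModelCheckpoint callback, assuming the TF 2.x tf.keras API used in this series (the paths and options are just placeholders, not from the video):

```python
import tensorflow as tf

# Save a checkpoint at the end of every epoch; filepath and options are examples.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/epoch_{epoch:02d}",  # formatted once per epoch
    save_weights_only=False,  # save the full model, not just the weights
    save_freq="epoch",        # can also be an integer number of batches
    monitor="val_loss",
    save_best_only=False,
)

# Then pass it to fit, e.g.:
# model.fit(x_train, y_train, validation_split=0.1, epochs=5,
#           callbacks=[checkpoint_cb])
```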
Great! Now I can do this during loooong training sessions and terminate the process from time to time without worrying that I'll lose everything. I can just pick up where I left off.
Thank you so much. This is very helpful.
Thank you for the great video. I switched to PyTorch after working with tf/keras for a while. My problem was that it was impossible, or very hard, to save and load models with custom losses. It was a pain even in tf 2.0, as 9 out of 10 times a model would not load. Not sure if things have changed. A video on that topic (saving/loading with custom losses) would be very useful. Thanks again!
To add: having a Keras layer with a Lambda function would also prevent a model from being loaded, even though it trained and saved just fine.
I'll look into that. Right now I haven't worked with custom losses in TensorFlow.
Aladdin Persson thank you!
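For what it's worth, the workaround I've usually seen for the custom-loss issue is passing the custom function through `custom_objects` at load time; a minimal sketch with a made-up loss name and placeholder paths:

```python
import tensorflow as tf

def my_custom_loss(y_true, y_pred):  # hypothetical custom loss
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Compile, train, and save as usual:
# model.compile(optimizer="adam", loss=my_custom_loss)
# model.save("saved_model/my_model")

# At load time, tell Keras how to resolve the custom name:
# model = tf.keras.models.load_model(
#     "saved_model/my_model",
#     custom_objects={"my_custom_loss": my_custom_loss},
# )
```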
Thanks for teaching. I want to use the weights in Android Studio, but I have no idea how to use the ".pb" or ".tflite" model weights in Android Studio. Would you like to share details about this part? That would help me a lot. :D
@Aladdin Persson: thanks again for this content..... suggestions for future videos: tf.data, callbacks, serving, etc.
Thank you for the suggestions. I have plans to cover tf.data and callbacks, but I'm not too familiar with TF Serving. Will look into that.
Can you please make videos on the remaining ML solutions for the Machine Learning with Andrew Ng course?
My love for that course is strong and it's one of the best courses I've ever taken, but I don't really have time to do that for now. If it helps, I do have a GitHub repository with solutions to the other assignments that you could read if you get stuck: github.com/AladdinPerzon/Courses/tree/master/MOOCS/Coursera-Machine-Learning
@@AladdinPersson It's ok. Thanks mate
Such a helpful video. How can I convert the subclassed model into TF Lite? Can you please help me out? Thanks in advance.
I am using the model sub-classing API for my deep learning model. I have saved the model successfully, but when I reload it, it asks for a configuration that was not saved, and I'm also unable to use predict_classes. How can I define these 2 methods in my Python BiLSTM-based class?
If you load a model and train it again, will it start its training from 0 again, or will it retain its previous training state?
If it starts from 0, what's the need of saving the model?
Of course, it's going to retain its previous weights
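To make that concrete, a tiny runnable sketch with a stand-in model and random data (the filename is just an example; when the full model is saved, the optimizer state is restored too, so fit() continues rather than starting over):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model and data, just to show the flow.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)

model.save("tiny_model.keras")  # full model: architecture + weights + optimizer state

# Later (even in a new process): reload and keep training from where it stopped.
restored = tf.keras.models.load_model("tiny_model.keras")
restored.fit(x, y, epochs=2, verbose=0)  # continues, does not start from zero
```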
What's the difference between model.save_weights and tf.train.Checkpoint? When do we use which?
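A rough sketch of the two, as I understand them (assuming TF 2.x tf.keras): model.save_weights is the Keras-level way to dump just the model's weights, while tf.train.Checkpoint is the lower-level API that can track arbitrary objects (optimizer, step counters, several models) together:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam()

# Keras-level: just the model's weights, restored with load_weights.
model.save_weights("ckpt/keras_weights")
model.load_weights("ckpt/keras_weights")

# tf.train.Checkpoint: tracks whatever objects you attach to it.
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer, step=tf.Variable(0))
path = ckpt.save("ckpt/tf_ckpt")   # returns e.g. "ckpt/tf_ckpt-1"
ckpt.restore(path)
```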
Sir, if we have saved the model as described above, how can we convert it to a TensorFlow Lite model so that it can be used in Android Studio? I have tried, but only using the TensorFlow Model Maker. I want to know how to convert from a SavedModel to .tflite.
I haven't studied this yet so I'm not sure; have you read this article: www.tensorflow.org/lite/convert
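For anyone landing here, the conversion described in that article boils down to something like this, assuming the model was exported in the SavedModel format shown in the video (the paths are placeholders):

```python
import tensorflow as tf

# Point the converter at the directory created by model.save(...).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/my_model")
tflite_model = converter.convert()

# Write the flatbuffer; this .tflite file is what Android Studio consumes.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```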
For some reason, after loading the model and running the test again, the accuracy of the second test is a lot lower. For example, with the model before saving I got an accuracy of 75%, but after testing the loaded model the accuracy was down to 25%. I'd be glad if someone who knows could tell me what the heck is going on. TY
I have the same problem, got an accuracy of 98% before saving the model but it dropped to 36% after loading it.
I am having the same issue, did you guys find any solution?
This is due to overfitting; the model is trained enough, so don't bother retraining!
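One way to narrow this down (not a definitive fix): check whether the loaded model gives the same predictions as the original on the exact same data. If it does, loading is fine and the difference comes from how the second evaluation is run (data order, preprocessing, metrics); if it doesn't, something was lost in the save/load step. A runnable sketch with a stand-in model (filename is just an example, swap in your own model and test data):

```python
import numpy as np
import tensorflow as tf

# Stand-in model and data, just to show the sanity check.
x_test = np.random.rand(16, 4).astype("float32")
model = tf.keras.Sequential([tf.keras.layers.Input(shape=(4,)),
                             tf.keras.layers.Dense(3, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.save("check_model.keras")
loaded = tf.keras.models.load_model("check_model.keras")

# True here means the weights survived the round trip, so the accuracy drop
# comes from the evaluation itself, not from saving/loading.
print(np.allclose(model.predict(x_test), loaded.predict(x_test)))
```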
Do you have a tutorial on your PyCharm setup?
model.save() without an extension no longer works. You have to use a .keras or .h5 extension, I believe.
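If anyone hits that, a minimal sketch of what the newer Keras releases expect (the filename is just an example):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(784,)),
                             tf.keras.layers.Dense(10)])
model.compile(optimizer="adam", loss="mse")

model.save("my_model.keras")  # explicit extension selects the native Keras format
restored = tf.keras.models.load_model("my_model.keras")
```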
Please, I am working on NMT with attention using TensorFlow. I want to save the model and the entire architecture to my Google Drive; I am using a Colab notebook.
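Not specific to NMT, but the usual Colab pattern is to mount Drive and point the save path at it; a sketch with placeholder paths (the attention model itself isn't shown):

```python
# Run inside a Colab notebook.
from google.colab import drive

drive.mount("/content/drive")

# After training, save everything somewhere under your Drive so it survives
# the Colab session:
# model.save("/content/drive/MyDrive/nmt_attention_model")

# For subclassed encoder/decoder models it is often easier to save weights only
# and rebuild the architecture from code before loading:
# model.save_weights("/content/drive/MyDrive/nmt_attention_weights")
```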
Is it possible to save the model with a different name?
In the part defining MyModel: in self.dense1 there's already activation='relu', so why do we have to use tf.nn.relu in the call function again?
Isn't it going through relu twice?
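For context, here's an illustrative sketch of the two equivalent ways to write that (not the exact code from the video). If the layer already has activation='relu', wrapping it in tf.nn.relu again does apply ReLU twice; since relu(relu(x)) == relu(x) the output is the same, but one of the two is redundant, so pick one per layer:

```python
import tensorflow as tf
from tensorflow.keras import layers

class MyModel(tf.keras.Model):  # illustrative subclassed model
    def __init__(self):
        super().__init__()
        # Option 1: bake the activation into the layer...
        self.dense1 = layers.Dense(64, activation="relu")
        # Option 2: ...or keep the layer linear and apply it in call().
        self.dense2 = layers.Dense(64)
        self.out = layers.Dense(10)

    def call(self, x):
        x = self.dense1(x)              # ReLU already applied by the layer
        x = tf.nn.relu(self.dense2(x))  # ReLU applied explicitly here
        return self.out(x)
```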
model.save_weights('saved_model/Sequential_API_Weights') — can anyone guide me on why I am receiving an error when I try it this way? The rest of the code is the same. The error says to use a .h5 file, and I just don't want to, because it gives me headaches not being able to find resources online for getting my weights out of the .h5 format.
Did you get it? I have the same issue.
@@jyothysankar7560 It gives an error but still saves it, and you can load it separately. What I have noticed is that using TensorFlow in Python gives a lot of errors but the code runs either way.
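For anyone else hitting this: as far as I can tell, older tf.keras saves in the TensorFlow checkpoint format when the path has no extension, while newer Keras releases insist the filename end in .weights.h5. A sketch of both (paths are examples, and the version behavior is my understanding, not from the video):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(784,)),
                             tf.keras.layers.Dense(10)])

# Older tf.keras (TF 2.x): no extension -> TensorFlow checkpoint format.
# model.save_weights("saved_model/Sequential_API_Weights")
# model.load_weights("saved_model/Sequential_API_Weights")

# Newer Keras (3.x): the filename must end in .weights.h5.
model.save_weights("sequential_api.weights.h5")
model.load_weights("sequential_api.weights.h5")
```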
I'm getting this error: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements `get_config` and `from_config` when saving. In addition, please use the `custom_objects` arg when calling `load_model()`.
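That message is Keras asking for the custom metric class to be serializable; here's a sketch of the pattern it wants, using a made-up toy metric (your own metric and model names will differ):

```python
import tensorflow as tf

class PositiveRate(tf.keras.metrics.Metric):  # hypothetical custom metric
    """Fraction of predictions above a threshold (toy example)."""

    def __init__(self, threshold=0.5, name="positive_rate", **kwargs):
        super().__init__(name=name, **kwargs)
        self.threshold = threshold
        self.positives = self.add_weight(name="positives", initializer="zeros")
        self.total = self.add_weight(name="total", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        preds = tf.cast(y_pred > self.threshold, tf.float32)
        self.positives.assign_add(tf.reduce_sum(preds))
        self.total.assign_add(tf.cast(tf.size(preds), tf.float32))

    def result(self):
        return tf.math.divide_no_nan(self.positives, self.total)

    # These two methods are what the error message is asking for:
    def get_config(self):
        config = super().get_config()
        config.update({"threshold": self.threshold})
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# And pass the class when loading:
# model = tf.keras.models.load_model(
#     "my_model", custom_objects={"PositiveRate": PositiveRate}
# )
```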
🔥🔥🔥
How do I use a trained neural network in another script for prediction? Can anyone please help, or share an example?
Are you referring to transfer learning?
No, when I reload my neural network in another Python script just with the load_model command, the accuracy of the same model decreases.