👉 Check out the blog post and other resources for this video:
🔗 deeplizard.com/learn/video/Zrt76AIbeh4
👀 Come say hey to us on OUR VLOG:
🔗 ua-cam.com/users/deeplizardvlog
I have watched almost all your tutorials on TensorFlow and Keras, and they are one of the best resources on the internet. Please do a series on RNNs, NLP, and related text-processing material in the future, thank you!
Thank you DeepLizard!! This is a concise and well-delivered exercise on transfer learning using MobileNet. I sincerely appreciate your effort and hard work putting this video together. I actually stumbled across this video and was so impressed that I have now subscribed to this channel and will start looking at the earlier videos as well. THANK YOU!
Great video, I am sad that the series is almost finished! You should make more!
P.S. I saw in the comments, and ran into this problem myself, that following the video's code results in:
"ValueError: functional api Shapes (None, None) and (None, 7, 7, 10) are incompatible".
Although I do not know exactly what I am doing, I changed the following line and it worked:
`x = mobile.layers[-2].output` (it was [-6])
Hope it helps!
Hey Charalampos, I was able to find out what is going on here.
In a newer version of TensorFlow, a new parameter was introduced to the GlobalAveragePooling2D layer. This is the last layer that we grab from the original MobileNet model when constructing our fine-tuned version. The new optional parameter for this layer is called keepdims. When it is set to True, the output shape of the layer will keep the same number of dimensions from the previous layer.
In the MobileNet model, the newer version of TensorFlow sets this parameter to True for the GlobalAveragePooling2D layer by default, causing the difference in output shapes. The new version is reshaping the output of this layer to be (None, 1, 1, 1024), as opposed to (None, 1024) if keepdims were set to False.
To make the deeplizard code run the same as what’s shown in the video, we need to get and store the output from the fifth to last layer of the model, rather than sixth, and we need to add our own Reshape layer before adding our output layer.
x = mobile.layers[-5].output
x = tf.keras.layers.Reshape(target_shape=(1024,))(x)
output = Dense(units=10, activation='softmax')(x)
Then run the remaining code the same, and you should now see that the output shapes match what is shown in the video.
I’ll update the code download and the corresponding web page for this episode soon with this info.
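For anyone who wants to see the fix in context, here is a minimal sketch of the updated model construction. It assumes the imports and the MobileNet loading line from the earlier episodes of this series, plus the 10-class output layer and the freeze-all-but-the-last-23-layers step used in this episode:
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras import Model

# Load the pretrained MobileNet model, as in the earlier episode
mobile = tf.keras.applications.mobilenet.MobileNet()

# Grab the output of the fifth-to-last layer (GlobalAveragePooling2D, which now uses keepdims=True)
x = mobile.layers[-5].output

# Reshape (None, 1, 1, 1024) back down to (None, 1024) to match what's shown in the video
x = tf.keras.layers.Reshape(target_shape=(1024,))(x)

# New 10-unit softmax output layer for the sign language digits
output = Dense(units=10, activation='softmax')(x)

# Build the fine-tuned model and freeze all but the last 23 layers, as in the episode
model = Model(inputs=mobile.input, outputs=output)
for layer in model.layers[:-23]:
    layer.trainable = False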
@@deeplizard It's still not working. Please update the code on the website.
@@deeplizard Thank you very much for this explanation. I was facing the same problem as the OP and this solved it!
Thank you so much Mandy... I went through your freeCodeCamp videos and reached this point. Those are amazing. Thank you so much for your hard work teaching us; the way you explain is amazing, and I'm watching all your tutorials one by one now. 😊😊😊😊😊
The way you explain things is so clear. Thank you, thank you, thank you :)
Learned new things. Thank you.
Thanks for all the great tutorials! You guys are amazing!
I am getting a ValueError: functional api Shapes (None, None) and (None, 7, 7, 10) are incompatible.
Can anyone please provide the new and updated code, or at least tell me from which MobileNet layer I should start removing?
Because in the latest MobileNet versions, global_average_pooling2d is fifth from last.
ValueError: functional api Shapes (None, None) and (None, 7, 7, 10) are incompatible. I'm getting this error and tried solving it through Stack Overflow, but no luck. Please share your thoughts on this or any suggestions.
Big thank you..
Got 100% accuracy on the test data of the signsDataset.
Please make a video using the ResNet50 model as well, the way you have used MobileNet. Still, thanks, this will be very useful ✌🏻
What exactly are "inputs" here in the functional model?
Thank you very much.
Does this also help when I want to add another class to a pretrained model?
Thank you for your tutorials, really great ones.
May I suggest an idea to make them even better: I think it would be nice to have some sort of graphical workflow so we can visualize the necessary steps for the different models or approaches.
once again thank you for your effort and time, much appreciated.
At 07:09... 0.000 balu plu plu plupluplu hahaha... Hey, great tutorial, enjoyed watching this.
😅😅
Good day, first of all, thanks for the great work you have done; it is all helpful.
I am trying to run the code; however, the cell model = Model(inputs=mobile.input, outputs=output) is giving NameError: name 'Model' is not defined. So I just want to know whether Model was defined somewhere in an earlier cell, because it is not working for me.
Try writing at the top of the cell:
`from tensorflow.keras import Model`, or just change that line to `model = tf.keras.Model(inputs=mobile.input, outputs=output)`.
Hope it works!
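For reference, here is a minimal sketch of both options, assuming mobile and output are already defined as in the episode:
# Option 1: import Model explicitly, then use it directly
from tensorflow.keras import Model
model = Model(inputs=mobile.input, outputs=output)

# Option 2: reference Model through the tf namespace instead
import tensorflow as tf
model = tf.keras.Model(inputs=mobile.input, outputs=output)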
A lifesaver for some interns' deadlines.
Wow, all of the courses are excellent...! Thanks Mandy and team.
Do you have any plans for a detailed certificate course on NLP with the latest techniques?
model = Model(inputs=mobile.input, outputs=output)
I got the error:
NameError: name 'Model' is not defined
I tried this and it is working now:
model = keras.Model(inputs=mobile.input, outputs=output)
By the way, big fan of yours... Thanks a lot for the collective intelligence :)
Thanks Aman!
For the import, did you follow the import statements from the earlier episode? That is where we import Model from tf.keras:
deeplizard.com/learn/video/OO4HD-1wRN8
@@deeplizard yes I did
But it was still showing me the above error
I think this was needed here, so I added it 🙂
Thanks a lot for replying
Really impressed by these kinds of updates and beautifully simple explanations
🥰
The best tutorials I could find on the internet to date
All the best for your awesome work 💫💫
Thanks Mandy for the great tutorials. I followed this episode with the same model and I get a val_accuracy of 0.98 within the first 5 epochs, and it keeps increasing to 0.99 within 30 epochs. Why is it so different from yours? The prediction results did match, so I'm a bit confused. Can you comment? Thanks.
I am following the same procedure as in the video, but my model accuracy (99%) and confusion matrix accuracy (16%) are different. Any suggestions?
Is there any chance you will do an AlphaGo explanation with TF in the future :)? Thanks Mandy
We may add it to our Reinforcement Learning course in the future :)
I got this when I tried to fit the model:
ValueError: Shapes (None, None) and (None, 7, 7, 10) are incompatible
Check the videos' corresponding blog pages for code updates:
deeplizard.com/learn/video/Zrt76AIbeh4
How can we use a live webcam instead of the prepared test set for recognition,
with VideoCapture...
How do we save the model to a .tflite file?
I have an interesting thing to ask. Is there a way we can use this model we've worked on for real-time object detection? Please like this comment if you answer, because YouTube doesn't notify me when someone replies.
Edit: Also, thanks for this series. It's been a really fun and informative ride.
Excellent video, but I can’t help wondering how many black-striped blouses she has, or what’s going to happen when this one wears out.
Two possibilities:
The simulation will break at the moment the shirt wears out.
or
The shirt is computer generated and will never wear out.
Am I the only person that feels weird while watching a girl with a bed in the background? (I'm single 😅)
I have the same ERROR: "ValueError: functional api Shapes (None, None) and (None, 7, 7, 10) are incompatible".
For those who have the same error as me, I fixed it without quite knowing how. Haha
MY NEW CODE:
x = mobile.layers[-5].output
x = tf.keras.layers.Reshape(target_shape=(1024,))(x)
output = Dense(units=5, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=output)
for layer in model.layers[:-23]:
    layer.trainable = False
model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=train_batches,
          steps_per_epoch=len(train_batches),
          validation_data=valid_batches,
          validation_steps=len(valid_batches),
          epochs=30,
          verbose=2
)
STARTING TO OUTPUT:
Epoch 1/30
27/27 - 11s - loss: 1.5371 - accuracy: 0.3902 - val_loss: 2.6568 - val_accuracy: 0.1963 - 11s/epoch - 404ms/step
Epoch 2/30
...
........
...........
Epoch 30/30
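After training like this, a minimal sketch of checking the fine-tuned model on held-out data might look like the following. The test_batches name and its non-shuffled DirectoryIterator setup are assumptions based on the iterators used earlier in this series:
import numpy as np

# Predict on the held-out test batches (assumed to be a non-shuffled DirectoryIterator)
predictions = model.predict(x=test_batches, steps=len(test_batches), verbose=0)

# Compare predicted class indices against the true labels stored on the iterator
predicted_classes = np.argmax(predictions, axis=-1)
test_accuracy = np.mean(predicted_classes == test_batches.classes)
print(f'Test accuracy: {test_accuracy:.4f}')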