This is an unusually well structured video. Not only do you go over the "what and why?", but you also provide a demonstration, visualisation and notebook file in case you wish to look it up yourself. Excellent work.
Thanks mate! :D
Great content. I watched your video just for the Convolutional Autoencoders part, but you didn't define them clearly and haven't made any further video on them. Btw, you teach amazingly in a very easy way. Love from Pakistan
Hi, you mentioned autoencoders are a jack of all trades; could you give an example of a feature selection or dimensionality reduction algorithm that outshines autoencoders?
Thank you
Love the way you explain. Thanks!
We can surely upscale the generated cat images using super-resolution techniques. Great video!
Yes, of course! Just remember that super resolution won't reproduce the exact image.
You said it is self-supervised learning, but can I use annotated data with this CNN autoencoder?
I have to do semantic segmentation, and the output is also an image.
The inputs are an image and a few sensor readings, and I have annotated the features in the image.
What model do you think I should use?
As far as I know, autoencoders can't be used for annotated data.
thank you for this 'rich' and amazing video.
Thanks for this great video!
I'm just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is highly correlated with the training images? For instance, CCTV?
Would this be better, or even recommended, instead of using OpenCV's traditional denoising filters for real-time video?
I'd love to learn more from your expertise and advice as I explore this topic further.
Anyway, thank you so much for the insightful explanation and demo! This is undoubtedly one of the most in-depth and easy-to-digest explanations out there. I like your high energy and enthusiasm, and also the fresh and flexible implementation using the cat dataset instead of the usual MNIST dataset. Great work! 💯
Subscribed :)
Thanks for subscribing! Denoising video with CNN in real-time is still a big challenge. I'm not an expert; however, I guess it's better to go for a GAN instead of using an autoencoder for this purpose.
Please do a video on calculating MSE and anomaly detection
Thanks for the suggestion!
Very well explained. Thank you so much.
Very well explained!
Thanks!
Great video and explanation, thank you! :)
Excellent, thanks!
Thanks for the tutorial! You mentioned Conv2DTranspose is the same as Conv2D if the padding is the same. If so, why are you using Conv2DTranspose? And why is the last layer of the CAE a Conv2D and not a Conv2DTranspose?
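Not the video author, but a minimal sketch may help (assuming TensorFlow/Keras, as in the video's notebook; the layer sizes here are illustrative, not the video's exact architecture). The two layers only coincide with stride 1; with `strides=2`, `Conv2DTranspose` doubles the spatial size, which the decoder needs for upsampling. The final layer can be a plain `Conv2D` because no further upsampling remains; it only maps the feature channels back to image channels:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Decoder sketch: Conv2DTranspose with strides=2 upsamples,
# the final Conv2D just maps features -> 3 RGB channels.
decoder = tf.keras.Sequential([
    layers.Input(shape=(8, 8, 64)),  # bottleneck feature map
    layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                           activation="relu"),  # 8x8 -> 16x16
    layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                           activation="relu"),  # 16x16 -> 32x32
    layers.Conv2D(3, 3, padding="same",
                  activation="sigmoid"),  # stays 32x32, outputs RGB
])
print(decoder.output_shape)  # (None, 32, 32, 3)
```

Swapping the last layer for a `Conv2DTranspose` with stride 1 and the same padding would give the same output shape, which is why either works there.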
Sir, is there any way to apply this to moving objects, like 360-degree movement?
How do I access the dataset?
Hi, this is great! Can you also explain variational autoencoders?
Thanks! Suggestion noted.
Great video, thanks!
Well done bro👌
Thanks bro
Thank You So Much!!
Glad you liked that :D
Could you please tell me which models are better than autoencoders for the same tasks?
For example, we can perform noise reduction using GANs instead of autoencoders.
Thanks ❤️
Suppose I have 200 training images; can I still use an autoencoder?
200 is really a low number. You can try data augmentation.
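To illustrate the augmentation idea, here is a minimal, framework-free sketch (names and sizes are my own, not from the video): horizontal flips alone double the effective sample count, and adding shifts or rotations multiplies it further.

```python
import numpy as np

def augment_with_flips(images):
    """images: array of shape (n, height, width, channels)."""
    flipped = images[:, :, ::-1, :]  # mirror each image left-right
    return np.concatenate([images, flipped], axis=0)

# 200 fake 32x32 RGB images -> 400 after flipping.
data = np.random.rand(200, 32, 32, 3)
augmented = augment_with_flips(data)
print(augmented.shape)  # (400, 32, 32, 3)
```

In practice a Keras preprocessing pipeline or `tf.image` ops would do this on the fly during training rather than in memory.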
@@NormalizedNerd thank you for your reply
Is there any other technique for working with a small number of images?
It would have been much better if you'd built the network as you went instead of just showing the finished article. Seeing mistakes is often more valuable.
I want to adapt your code to my 79x79 images
Feel free to use the code. If you are gonna publish/distribute just mention my channel's name :)
@@NormalizedNerd I'll proudly do so, thank you, sir!
But I have a problem dealing with odd numbers!
The easiest way to resolve this is to change the dimensions of your images to an even number greater than 79 (try to pick a number with many factors). Remember, you'll need to set the number and dimensions of the conv layers accordingly.
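A minimal sketch of one way to do this (padding instead of resizing, to avoid distortion; the target size 80 is my suggestion, since 80 = 2^4 x 5 survives several stride-2 down/upsampling steps cleanly):

```python
import numpy as np

def pad_to_even(image, target=80):
    """Pad an HxWxC image up to target x target by repeating edge pixels."""
    h, w = image.shape[:2]
    pad_h, pad_w = target - h, target - w
    return np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")

img = np.random.rand(79, 79, 3)
print(pad_to_even(img).shape)  # (80, 80, 3)
```

Resizing with an image library works too; padding just keeps the original pixels untouched.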
@@NormalizedNerd And that's what I did, down to 72!
Please, one more question:
What does accuracy mean in this case? We're not in a classification problem!
@@belhafsiabdeldjalil5739 We don't use accuracy here. We use loss. It's calculated based on the difference in pixel values between the original image and the generated image.
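Concretely, the standard reconstruction loss for an autoencoder is the mean squared error over pixels. A minimal sketch (assuming pixel values scaled to [0, 1]; the function name is my own):

```python
import numpy as np

def reconstruction_mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    return np.mean((original - reconstructed) ** 2)

# Toy 2x2 "images": a perfect reconstruction gives zero loss.
a = np.array([[0.0, 1.0], [0.5, 0.25]])
print(reconstruction_mse(a, a))      # 0.0
b = a + 0.1                          # every pixel off by 0.1
print(reconstruction_mse(a, b))      # ~0.01 (0.1 squared)
```

This is what Keras computes when the model is compiled with `loss="mse"`.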
Maybe consider a haircut. Anyway, thanks for the video, it is helpful.
Omg, please balance your sound volume! It's blowing out my eardrums.