This video just changed my life! Thanks to you, I have a new path.
Thanks
Best explanation that I found on the topic. The visualizations helped me a lot to understand the algorithm. Keep rocking!
I'm new to this deep learning scene, having only recently finished learning about PyTorch and implementing just normal neural nets for classification and other simple tasks. But you, sir, are an excellent teacher. Not only did you make the theory behind NST crystal clear, the visualizations you used and your choice of words made it really easy for a beginner like me to grasp. Thank you for such amazing learning material. Looking forward to other videos in the series.
Thanks a lot!!!
Thank you for the detailed explanation! Keep sharing content. Regards from Argentina
Thank you, Ignacio! 😄
Hey man, I am loving this series... just wanted to request that you please finish it. I'm also working on the NST project right now, and your videos are like a cheat sheet for me.
I didn't notice much interest in that series, so I stopped. I may finish it one day, haha.
Bro you are amazing. Thanks a lot for the genuine inspiration.
Excellent work and very educational, thanks.
No one gets rich until they enrich others.
Bravo, young man! Onward, onward. (:
Wise words! Thank you!
Great explanation. Many thanks!
Great video! Very excited for what is to come in the future videos.
Appreciate the comment! I think you'll like what's coming if you liked the videos so far, stay tuned! They'll focus more on making stuff and less on theory.
@@TheAIEpiphany That's awesome! I'm doing my MSc in machine learning, and for the final component of my project I will be using a disentangled autoencoder or neural style transfer to do domain normalization or adaptation, so these videos are very helpful.
@@evancampbell6233 Awesome stuff, good luck with your future endeavours! I'll keep them coming!
WOW, you are great. Thanks!
Where's video number 1?! I found number 3, but not the first one.
Very nice video. One small comment: at 7:37 the way you present those images makes it seem like they are just reshaped channels of the hidden representation. But if I understand these types of visualizations correctly, you must be showing a sort of deconvolution or gradient-ascent visualization, right?
Thanks for the great content!
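For anyone wondering what "gradient ascent" means here, a minimal sketch of the idea (the layer index, channel choice, learning rate, and step count are all assumptions for illustration, not taken from the video): start from noise and optimize the input image so that one feature-map channel of a chosen VGG layer activates strongly.

```python
# Minimal gradient-ascent feature visualization sketch (illustrative only):
# maximize the mean activation of one channel of a truncated VGG19.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
# Slice index 10 (up to the second pooling layer) is an arbitrary assumption.
vgg = models.vgg19(pretrained=True).features[:10].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # freeze the network; only the image is optimized

img = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

channel = 42  # which feature map to visualize (arbitrary choice)
for step in range(200):
    optimizer.zero_grad()
    activation = vgg(img)                  # (1, C, H, W) feature maps
    loss = -activation[0, channel].mean()  # negate so "descent" maximizes it
    loss.backward()
    optimizer.step()
```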
Hi sir, can you explain why 5 layers are chosen for the style loss?
Thanks
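For context on the question above: the original Gatys et al. paper takes the style representation from one conv layer in each of VGG19's five blocks (conv1_1 through conv5_1), so the style is matched at several scales at once. A minimal sketch of the resulting Gram-matrix style loss, assuming equal layer weights (the paper makes them configurable):

```python
# Gram-matrix style loss sketch, assuming one feature map per chosen layer.
import torch

def gram_matrix(features):
    # features: (1, C, H, W) activations from one VGG layer
    _, c, h, w = features.shape
    f = features.view(c, h * w)
    return (f @ f.t()) / (c * h * w)  # normalized channel-correlation matrix

def style_loss(generated_feats, style_feats):
    # generated_feats / style_feats: lists of activations, one per style layer
    loss = 0.0
    for g, s in zip(generated_feats, style_feats):
        loss = loss + torch.nn.functional.mse_loss(gram_matrix(g), gram_matrix(s))
    return loss / len(generated_feats)  # equal weights are an assumption
```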
You're a good lad! Keep crushing it :)
Thank you!
Sir, thank you for these materials. But I want to learn: can we use a webcam? If yes, how do we do that?
When you don't know PyTorch, huh... but the explanation is great.
I don't quite understand how the noise becomes the content image; in the animation it literally starts to look like the content image, not like the content image after it was passed through part of the VGG network.
During the optimization of the content loss, we try to find the input image that minimizes this loss (where the starting point is a noisy image). This is done via backpropagation of the gradient. (I recommend looking at the original paper if things remain unclear.)
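A minimal sketch of what that answer describes: the pixels of the noise image are the parameters being optimized, while the network stays frozen. The VGG slice index, the random placeholder content image, and the iteration count below are assumptions for illustration (the paper uses conv4_2 as the content layer).

```python
# Content-loss optimization sketch: backprop into the image, not the network.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
# features[:22] runs up through conv4_2 in VGG19 (index is an assumption).
vgg = models.vgg19(pretrained=True).features[:22].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # the network is frozen; only the image changes

content_img = torch.rand(1, 3, 224, 224, device=device)  # placeholder content image
target = vgg(content_img).detach()                       # fixed content representation

img = torch.rand_like(content_img).requires_grad_(True)  # start from noise
optimizer = torch.optim.LBFGS([img])                     # LBFGS is what Gatys et al. use

def closure():
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(vgg(img), target)  # content loss
    loss.backward()
    return loss

for _ in range(50):
    optimizer.step(closure)
# After optimization, img's pixels reproduce the content representation,
# which is why it comes to look like the content image itself.
```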
Do you know if this can be run easily in a Paperspace Gradient notebook or virtual machine?
I figured it out by working through all of these Jupyter notebooks and changing a few things. Thank you for doing these; these videos are awesome!
Oh no, PyTorch.