Basic Theory | Neural Style Transfer #2

  • Published Dec 26, 2024

COMMENTS • 30

  • @tastelessdegenerate9526
    @tastelessdegenerate9526 2 years ago +2

    This video just changed my life! Thanks to you, I have a new path.
    Thanks

  • @EdgarMarca
    @EdgarMarca 2 months ago +1

    Best explanation that I found on the topic. The visualizations helped me a lot to understand the algorithm. Keep rocking!

  • @atraps7882
    @atraps7882 4 years ago +6

    I'm new to this deep learning scene, having only recently finished learning PyTorch and implementing ordinary neural nets for classification and other simple tasks. But you, sir, are an excellent teacher. Not only did you make the theory behind NST crystal clear, the visualizations and your choice of words made it really easy for a beginner like me to grasp. Thank you for such amazing learning material. Looking forward to the other videos in the series.

  • @g.jignacio
    @g.jignacio 3 years ago +3

    Thank you for the detailed explanation! Keep sharing content. Regards from Argentina

  • @mrinmoysarkar589
    @mrinmoysarkar589 3 years ago +2

    Hey man, I am loving this series... just wanted to request that you please finish it. I'm also working on an NST project right now, and your videos are like a cheat sheet for me.

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +1

      I didn't notice too much interest in that series so I stopped, I may finish it one day haha.

  • @josephtsangko3558
    @josephtsangko3558 1 year ago +1

    Bro you are amazing. Thanks a lot for the genuine inspiration.

  • @munch_muzak
    @munch_muzak 3 years ago +1

    Excellent work and very educational, thanks.

  • @parkourboy012
    @parkourboy012 4 years ago +1

    No one gets rich until they enrich others.
    Bravo, young man! Onward, onward. (:

  • @jaikishank
    @jaikishank 3 years ago

    Great explanation. Many thanks...

  • @evancampbell6233
    @evancampbell6233 4 years ago

    Great video! Very excited for what is to come in the future videos.

    • @TheAIEpiphany
      @TheAIEpiphany  4 years ago

      Appreciate the comment! I think you'll like it if you liked the ones so far, stay tuned! They'll be focusing more on making stuff and not theory.

    • @evancampbell6233
      @evancampbell6233 4 years ago

      @@TheAIEpiphany that's awesome! I'm doing my MSc in machine learning, and for the final component of my project I will be using a disentangled autoencoder or neural style transfer to do domain normalization or adaptation, so these videos are very helpful.

    • @TheAIEpiphany
      @TheAIEpiphany  4 years ago

      @@evancampbell6233 Awesome stuff, good luck with your future endeavours! I'll keep them coming!

  • @daniaste
    @daniaste 2 years ago

    WOW you are great. Thanks

  • @mandragora-13
    @mandragora-13 3 months ago

    Where's video number 1?! I found number 3, but not the first one.

  • @smnt
    @smnt 3 years ago

    Very nice video. One small comment: at 7:37, the way you present those images makes it seem like they are just reshaped channels of the hidden representation. But if I understand these types of visualizations correctly, you must be showing a deconvolution or gradient-ascent sort of thing, right?
    Thanks for the great content!
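The gradient-ascent feature visualization the commenter is guessing at can be sketched in miniature. This is a toy stand-in, not the video's actual method: a fixed random linear map plays the role of a frozen conv layer, `channel` is an arbitrary unit to visualize, and we climb the gradient of that unit's activation with respect to the input, renormalizing to keep the "image" bounded (a common stabilization trick).

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 128))     # stand-in for a frozen layer's weights
channel = 5                            # the unit whose "preferred image" we want

img = rng.standard_normal(128) * 0.01  # start from near-zero noise
lr = 0.1
acts = []
for _ in range(100):
    activation = W[channel] @ img      # this unit's response to the current image
    acts.append(activation)
    # For a linear unit, d(activation)/d(img) is simply W[channel];
    # in a real network this gradient comes from backprop through the layers.
    img += lr * W[channel]
    img /= max(np.linalg.norm(img), 1e-8)  # keep the image on the unit sphere

print(acts[0], acts[-1])  # the activation grows as img aligns with W[channel]
```

The same loop applied to a real VGG layer (with the gradient computed by autograd instead of analytically) produces the kinds of channel-visualization images discussed at 7:37.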

  • @hieugiap
    @hieugiap 1 year ago

    Hi sir, can you explain why you chose 5 layers for the style loss?
    Thanks

  • @ЏонМастерман
    @ЏонМастерман 4 years ago +2

    You're a good lad! Keep crushing it :)

  • @ahmetkoraysonal5841
    @ahmetkoraysonal5841 2 years ago

    Sir, thank you for these materials. But I want to know: can we use a webcam? If yes, how do we do that?

  • @cheapearth6262
    @cheapearth6262 1 year ago

    When you don't know PyTorch, huh... but the explanation is great.

  • @alexijohansen
    @alexijohansen 3 years ago

    I don't quite understand how the noise becomes the content image. In the animation it literally starts to look like the content image itself, not like the content image after it was passed through part of the VGG network.

    • @AurelB96
      @AurelB96 3 years ago

      During the optimization of the content loss, we try to find the input image that minimizes this loss (the starting point is a noisy image). This is done via backpropagation of the gradient. (I recommend looking at the original paper if things remain unclear.)
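The reply above can be made concrete with a minimal sketch. Assumptions: a fixed random linear map `W` stands in for a frozen VGG feature extractor, and a random vector stands in for the flattened content image. The network's weights never change; gradient descent runs on the *input image*, starting from noise, so that its features match the content image's features, which is exactly why the noise drifts toward looking like the content image.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))   # stand-in "network" (frozen, never updated)
content = rng.standard_normal(256)   # flattened content image
target_feats = W @ content           # features of the content image (fixed target)

x = rng.standard_normal(256)         # generated image, initialized as noise
lr = 1e-3
losses = []
for _ in range(200):
    feats = W @ x
    diff = feats - target_feats
    losses.append(0.5 * np.mean(diff ** 2))  # content loss (MSE in feature space)
    grad = W.T @ diff / len(diff)    # d(loss)/d(x); backprop computes this in general
    x -= lr * grad                   # update the image, not the network

print(losses[0], losses[-1])  # the loss shrinks as x drifts toward the content image
```

With a real network the gradient comes from autograd rather than the closed-form `W.T @ diff`, but the loop is the same: the optimizer's only trainable parameter is the image itself.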

  • @makeandbreak127
    @makeandbreak127 4 years ago

    Do you know if this can be run easily in a Paperspace Gradient notebook or virtual machine?

    • @tasteless687
      @tasteless687 3 years ago

      I figured it out by making all of these Jupyter notebooks and changing a few things. Thank you for doing these; these videos are awesome!

  • @iskrabesamrtna
    @iskrabesamrtna 3 years ago

    oh no pytorch