Variational Autoencoders

  • Published 28 Apr 2024
  • In this episode, we dive into Variational Autoencoders, a class of neural networks that can learn to compress data completely unsupervised!
    VAEs are a very hot topic right now in unsupervised modelling of latent variables and provide a unique solution to the curse of dimensionality.
    This video starts with a quick intro to normal autoencoders and then goes into VAEs and disentangled beta-VAEs.
    I also touch on related topics like learning causal latent representations, image segmentation and the reparameterization trick!
    Get ready for a pretty technical episode! (A minimal code sketch of the main ideas follows below.)
    Paper references:
    - Disentangled VAEs (DeepMind 2016): arxiv.org/abs/1606.05579
    - Applying disentangled VAEs to RL: DARLA (DeepMind 2017): arxiv.org/abs/1707.08475
    - Original VAE paper (2013): arxiv.org/abs/1312.6114
    If you want to support this channel, here is my patreon link:
    / arxivinsights --- You are amazing!! ;)
    If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbr...
  • Science & Technology
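
  • A minimal VAE sketch in Python (illustrative only, not the video's code; the PyTorch framing and layer sizes are assumptions): an encoder that outputs a mean and log-variance, the reparameterization trick, a decoder, and a beta-weighted KL term (beta = 1 for a standard VAE, beta > 1 for a disentangled beta-VAE).

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class VAE(nn.Module):
            def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
                super().__init__()
                self.enc = nn.Linear(input_dim, hidden_dim)
                self.mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
                self.logvar = nn.Linear(hidden_dim, latent_dim)   # log-variance of q(z|x)
                self.dec1 = nn.Linear(latent_dim, hidden_dim)
                self.dec2 = nn.Linear(hidden_dim, input_dim)

            def encode(self, x):
                h = F.relu(self.enc(x))
                return self.mu(h), self.logvar(h)

            def reparameterize(self, mu, logvar):
                # z = mu + sigma * eps keeps z differentiable w.r.t. mu and sigma.
                std = torch.exp(0.5 * logvar)
                eps = torch.randn_like(std)
                return mu + std * eps

            def decode(self, z):
                return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

            def forward(self, x):
                mu, logvar = self.encode(x)
                z = self.reparameterize(mu, logvar)
                return self.decode(z), mu, logvar

        def vae_loss(x_hat, x, mu, logvar, beta=1.0):
            # Reconstruction term plus a beta-weighted KL divergence to the N(0, I) prior.
            recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return recon + beta * kld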

COMMENTS • 454

  • @abaybektursun
    @abaybektursun 6 years ago +319

    Variational Autoencoders starts at 5:40

    • @pouyan74
      @pouyan74 4 years ago +5

      You just saved five minutes of my life!

    • @moazalomary8123
      @moazalomary8123 3 years ago +17

      @@pouyan74 no the first part was necessary...

    • @selmanemohamed5146
      @selmanemohamed5146 3 years ago +2

      @@moazalomary8123 Do you think someone would open a video about Variational Autoencoders if they didn't know what autoencoders are?

    • @moazalomary8123
      @moazalomary8123 3 years ago +23

      @@selmanemohamed5146 yeah i did... 😂😂😂 and i was lucky he explained both 😎🙌😅 + the difference between them and that's the important part

    • @moazalomary8123
      @moazalomary8123 3 years ago

      @Otis Rohan Interested

  • @atticusmreynard
    @atticusmreynard 6 years ago +442

    This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.

    • @vindieu
      @vindieu 11 months ago

      Except for "Gaussian", which is weirdly pronounced the Russian way, "khaussian". Wat?

  • @arkaung
    @arkaung 6 years ago +158

    This guy does a real job of explaining things rather than hyping up things like "some other people".

  • @ambujmittal6824
    @ambujmittal6824 4 years ago +6

    Your way of simplifying things is truly amazing! We really need more people like you!

  • @obadajabassini3552
    @obadajabassini3552 6 years ago +9

    A really great talk! I have been reading about VAEs a lot and this video helped me understand them even better.
    Thanks!

  • @jingwangphysics
    @jingwangphysics 2 years ago +9

    The beta-VAE seems to enforce a sparse representation. It magically picks the most relevant latent variables. I am glad that you mentioned ‘causal’, because that’s probably how our brain deals with high-dimensional data. When resources are limited (corresponding to using a large beta), the best representation turns out to be a causal model. Fascinating! Thanks
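
    A rough sketch of what drives that effect, assuming a PyTorch-style setup (illustrative only; the tensors below are placeholders, not the video's code): the KL term decomposes per latent dimension, and scaling it by a large beta pushes uninformative dimensions back to the N(0, 1) prior, so only the most relevant latents stay active.

        import torch

        # Placeholder posterior parameters for a batch of 8 examples with 10 latent dims
        # (in a real model these come from the encoder).
        mu = torch.zeros(8, 10)
        logvar = torch.zeros(8, 10)
        recon_loss = torch.tensor(0.0)
        beta = 4.0  # beta > 1, as in the beta-VAE setting

        # Per-dimension KL between q(z|x) = N(mu, sigma^2) and the prior N(0, 1).
        # Dimensions that do not help reconstruction are driven toward mu -> 0, sigma -> 1
        # (zero KL), i.e. they effectively switch off.
        kld_per_dim = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp())   # shape [8, 10]
        loss = recon_loss + beta * kld_per_dim.sum(dim=1).mean()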

  • @debajyotisg
    @debajyotisg 4 years ago

    I love your channel. A perfect amount of technicality so as not to scare off beginners, while also keeping the intermediates/experts around. Brilliant.

  • @515nathaniel
    @515nathaniel 4 years ago +14

    "You cannot push gradients through a sampling node"
    TensorFlow: *HOLD MY BEER!*

  • @isaiasprestes
    @isaiasprestes 6 years ago +28

    Great! No BS, straight and plain English! That's what I want!! :) Congratulations!

  • @adityakapoor3237
    @adityakapoor3237 2 years ago +4

    This guy was a VAE to the VAE explanation. Really need more of such explanations with the growing literature! Thanks!

  • @paradoxicallyexcellent5138
    @paradoxicallyexcellent5138 5 years ago +3

    I was very interested in this topic, read the paper, watched some videos, read some blogs. This is by far the best explanation I've come across. You add a lot of value here to the original paper's contribution. It could even be said you auto-encoded it for my consumption ;)

  • @antonalexandrov4159
    @antonalexandrov4159 1 year ago

    Just found your channel, and I realize how, with some passion and effort, you explain things better than some of my professors. Of course, you don't go into too much detail, but putting together the big picture comprehensively is valuable and not everyone can do it.

  • @hcgaron
    @hcgaron 5 years ago +1

    I discovered your channel today and I'm hooked! Excellent work. Thank you so much for your hard work

  • @nabeelyoosuf
    @nabeelyoosuf 4 years ago

    Your explanations are quite insightful and flawless. You are a gifted explainer! Thanks for sharing them. Please keep sharing more.

  • @ujjalkrdutta7854
    @ujjalkrdutta7854 5 years ago

    Really liked it. First giving an intuition for the concept and its applications, then moving to the objective function and explaining its individual terms in a way everyone can understand: it was simply professional and elegant. Nice work and thanks!

  • @ashokkannan93
    @ashokkannan93 5 years ago

    I would like to see more videos from you. Clear explanation of concept and gentle presentation of math. Great job!

  • @dimitryversteele2410
    @dimitryversteele2410 6 years ago +3

    Great video! Very clear and understandable explanations of hard-to-understand topics.

  • @rylaczero3740
    @rylaczero3740 5 years ago +18

    Bloody nicely explained, better than the Stanford people. Subscribed to the channel. I remember watching your first video on Alpha but didn't subscribe then. I hope there will be more content on the channel with the same level of quality; otherwise it's hard for people to stick around when the reward is sparse.

  • @ejkitchen
    @ejkitchen 6 years ago +2

    Your videos are quite good. I am sure you will get an audience in no time if you continue. Thank you so much for making these videos. I like the style you use a lot and love the time format (not too short and long enough to do a good overview dive). Well done.

    • @ArxivInsights
      @ArxivInsights 6 years ago

      Thank you very much for supporting me man! A new video is in the making; I expect to upload it sometime next week :)

  • @Zahlenteufel1
    @Zahlenteufel1 1 year ago +3

    Bro this was insanely helpful! I'm writing my thesis and am missing a lot of the basics in a lot of relevant areas. Great summary!

  • @reinerwilhelms-tricarico344
    @reinerwilhelms-tricarico344 4 years ago

    Great! Crisply clear explanations in such a short time.

  • @giorgiozannini5626
    @giorgiozannini5626 3 years ago

    Wait, how did I not know of this channel? Beautiful explanation, perfectly clear. Thanks for the awesome work!

  • @agatinogiulianomirabella6590
    @agatinogiulianomirabella6590 2 years ago

    Best explanation found on the internet so far. Congratulations!

  • @get.ai.enabled
    @get.ai.enabled 6 years ago

    This is a LIT channel for watching alongside papers. Thanks

  • @MonaJalal
    @MonaJalal 3 years ago

    Hands down, this was the best autoencoder and variational autoencoder tutorial I found on the web.

  • @ashokkannan93
    @ashokkannan93 5 years ago +1

    Excellent video!! Probably the best VAE video I saw. Thanks a lot :)

  • @venkatbuvana
    @venkatbuvana 5 years ago

    Thanks a lot for sharing such a succinct summarization of VAEs. Very helpful!

  • @DistortedV12
    @DistortedV12 4 years ago

    This was very lucid. You are gifted at explaining things!

  • @ativjoshi1049
    @ativjoshi1049 6 years ago

    Your explanation is crisp and to the point. Thanks.

  • @davidm.johnston8994
    @davidm.johnston8994 6 years ago

    Great videos man, keep them going, you're gonna find an audience!

  • @sethagastya
    @sethagastya 4 years ago +1

    This was an amazing video! Thanks man. Will stay tuned for more!

  • @DanielWeikert
    @DanielWeikert 5 years ago +1

    Great work. Thanks a lot! Highly appreciate your effort. Creating these videos takes time but I still hope you will continue.

  • @aryanirvaan
    @aryanirvaan 2 years ago

    Dude, what a next-level genius you are!
    You made them so easy to understand, and just look at the quality of the content.
    Damn bro!🎀

  • @TheJysN
    @TheJysN 2 years ago

    I had such a hard time understanding the reparameterization trick; now I finally got it. Thanks for the great explanation. Would love to see more videos from you.

  • @moozzzmann
    @moozzzmann 4 months ago

    Great Video!! I just watched 4 hours worth of lectures, in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work

  • @superaluis
    @superaluis 6 years ago +1

    Great channel! Keep up this awesome project. Already subscribed and going to share this channel with my colleagues.

  • @achakraborti
    @achakraborti 6 years ago

    First video I see from this channel. Immediately subscribed!

  • @shivamutreja6427
    @shivamutreja6427 2 years ago

    Your videos are absolutely cracking for a quick revision before an interview!

  • @MeauxTarabein
    @MeauxTarabein 5 years ago

    Very helpful, Arxiv! Keep the good-quality videos coming.

  • @adityamalte476
    @adityamalte476 6 years ago

    Really appreciate your effort in simplifying research papers for viewers. Keep it up. I want more such videos.

  • @AjithKumar-gk7bf
    @AjithKumar-gk7bf 5 years ago

    Just found this channel... today... one word: Brilliant!!!

  • @lisbeth04
    @lisbeth04 5 years ago

    I love you. I spent so long on this and couldn't understand the intuition behind it; with this video I understood immediately. Thanks!

  • @kalehermit
    @kalehermit 4 years ago

    Thank you very much, this is the first time I've understood the benefit of the reparameterization trick.

  • @yanfengliu
    @yanfengliu 5 years ago

    This is really good. I like the way you explain things. Thank you for sharing!

  • @fktudiablo9579
    @fktudiablo9579 3 years ago

    Always the best place to get a good overview before diving deeper.

  • @nohandlepleasethanks
    @nohandlepleasethanks 6 years ago

    Great explanations. This filled two crucial gaps in my understanding of VAEs, and introduced me to beta-VAEs.

  • @famouspeople3499
    @famouspeople3499 3 years ago

    Great video, better than many tutoring lessons at university; the animations and simple wording make things clear.

  • @liyiyuan45
    @liyiyuan45 3 years ago

    This is sooooooo useful at 2am when you're getting dragged down by all the math in the actual paper. Thanks man for the clear explanation!

  • @TheRohr
    @TheRohr 6 years ago

    Great thanks for the video and the paper explanation! Really, really helpful, keep that paper explanation content!

  • @antoinesueur9289
    @antoinesueur9289 6 years ago +5

    Great content! The format and delivery are perfect; hope to see more of these videos :). Are you planning on doing a video on Capsule Networks in the future?

    • @ArxivInsights
      @ArxivInsights 6 years ago +5

      More videos are definitely coming, the next one will be on novel state-of-the-art methods in Reinforcement Learning! I don't plan on making a video on Capsule Nets since there is an amazingly good video by Aurélien Géron on that topic and there's no way I can explain it any better than he did, no need to reinvent the wheel :p Here is his video: ua-cam.com/video/pPN8d0E3900/v-deo.html

  • @JakubArnold
    @JakubArnold 6 years ago

    Great explanation of why we actually need the reparameterization trick. Everyone just skims over that and explains the part that mu + sigma*N(0,1) gives a sample from N(mu, sigma^2), but ignores why you need it. Good job!
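
    A tiny sketch of that point, assuming PyTorch (illustrative only, not the video's code): sampling z directly gives a value with no differentiable path back to the encoder, whereas writing z = mu + sigma * eps with eps ~ N(0, 1) keeps z a deterministic function of mu and sigma, so gradients flow.

        import torch

        mu = torch.tensor([0.5], requires_grad=True)
        log_sigma = torch.tensor([0.0], requires_grad=True)

        # Reparameterized sample: all randomness lives in eps, so z stays
        # differentiable with respect to mu and log_sigma.
        eps = torch.randn(1)
        z = mu + log_sigma.exp() * eps

        z.sum().backward()
        print(mu.grad, log_sigma.grad)  # both gradients exist, so the encoder can be trained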

  • @ejeinstein
    @ejeinstein 6 years ago

    Really really awesome channel!!! Look forward to watching more of your videos!

  • @maxhorowitz-gelb6092
    @maxhorowitz-gelb6092 6 years ago

    Wow! Great video. Very concise, and it makes something quite complex easy to understand.

  • @kristyleist3318
    @kristyleist3318 4 years ago

    This is great! Keep going, we need you!
    Don't stop making amazing videos like this

  • @sunnybeta_
    @sunnybeta_ 6 years ago

    This video suddenly popped up this morning on my home page. Now I know my Sunday will be great. :D

  • @abhinavshaw9112
    @abhinavshaw9112 5 years ago

    Hi, I am a graduate student at UMass Amherst. I really liked your video; it gave me a lot of ideas. Watching this before reading the paper would really help. Please keep them coming, I'll be waiting for more.

  • @animeshgoyal9583
    @animeshgoyal9583 4 years ago

    Simply Amazing! Thanks for sharing this. Absolutely loved it. Hope to see more videos from you :)

  • @SlavIvanov
    @SlavIvanov 6 years ago

    This is great! Keep going, we need you!

  • @HeduAI
    @HeduAI 5 years ago

    Amazing explanation of a complicated topic! Thank you so much!!!!

  • @ChocolateMilkCultLeader
    @ChocolateMilkCultLeader 2 years ago

    Shared your work with my followers. Keep making amazing content

  • @double_j3867
    @double_j3867 6 years ago

    Subscribed. Very useful -- I'm an applied ML researcher (applying these techniques to real-world problems), so I need a way to quickly "scan" methods and determine what may be useful before diving in-depth. These styles of videos are exactly what I need.

  • @vortexZXR
    @vortexZXR 4 years ago

    So many ideas come to mind after watching this video. Well done!

  • @davidenders9107
    @davidenders9107 5 months ago

    Thank you! This was comprehensive and comprehensible.

  • @hitarthk
    @hitarthk 5 years ago

    Absolutely great stuff Arxiv Insights! Subscribed to your videos for life :)

  • @satishbanka
    @satishbanka 3 years ago

    Very good explanation of Variational Autoencoders! Kudos!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago

    Your explanation is so clear.

  • @emanehab510
    @emanehab510 6 years ago +1

    Don't stop making amazing videos like this

  • @dmitrykalashnikov8637
    @dmitrykalashnikov8637 5 years ago

    Very useful. Great content. Continue what you're doing. Great job.

  • @abylayamanbayev8403
    @abylayamanbayev8403 1 year ago

    Finally I understood the intuition behind sampling from mu and sigma and the reparameterization trick. Thanks!

  • @betterbrained
    @betterbrained 1 year ago

    You helped so much with my exams, thanks man. Subscribed for more high-quality stuff!

  • @joshbrenneman
    @joshbrenneman 5 years ago

    Wow, love your videos. I have not worked with reinforcement learning, but I’d love to hear your analysis of other generative models.

  • @Golgafrincham
    @Golgafrincham 5 years ago

    Awesome explanations and interesting subjects, keep it up!

  • @jfndfiunskj5299
    @jfndfiunskj5299 6 years ago

    Very clearly explained. Good job.

  • @adityasoni6308
    @adityasoni6308 6 years ago

    Amazing description...
    Need more videos on different things

  • @robinranabhat3125
    @robinranabhat3125 6 years ago +230

    Don't you ever stop explaining papers like this. Better than Siraj's video.
    Just explain the code part a bit longer. And your channel is set.

    • @pablonapan4698
      @pablonapan4698 6 years ago +7

      exactly. show some more code please.

    • @shrangisoni8758
      @shrangisoni8758 5 years ago +4

      Yea we can't really do much until we code and see results ourselves.

    • @pixel7038
      @pixel7038 4 years ago +1

      Siraj has improved his videos and provides more content. Don’t be stuck in the past ;)

    • @gagegolish9306
      @gagegolish9306 4 years ago

      @@shrangisoni8758 He's explained the fundamental concepts, you can take those concepts and translate them to code. He shouldn't have to do that for you.

    • @dalchemistt7
      @dalchemistt7 4 years ago +27

      @@pixel7038 Please stop spreading his name. He has faked his way more than enough already. Read more here: twitter.com/AndrewM_Webb/status/1183150368945049605 and here www.reddit.com/r/learnmachinelearning/comments/dheo88/siraj_raval_admits_to_the_plagiarism_claims/
      And what really bugs me is not the plagiarism (that's bad and shameful in itself) but the level of stupidity this guy has shown while plagiarizing: "gates" to "doors" and "complex Hilbert space" to "complicated Hilbert space".

  • @timurbabadjanov9115
    @timurbabadjanov9115 3 years ago

    That was a great explanation! Thank you so much!

  • @mrdbourke
    @mrdbourke 6 years ago

    Epic video Xander! I learned a lot from your explanation. Now to try and implement some code!

  • @ck1847
    @ck1847 2 years ago

    Thanks, this video clarified many things from the original paper.

  • @tamerius1
    @tamerius1 6 years ago

    You're explaining this very well! Finally an explanation on an AI technique that's easy to follow and understand. Thank you.

  • @AhmedKachkach
    @AhmedKachkach 6 years ago

    Immediate subscribe :) Thanks for this in-depth video. Please keep a format like this in the future (relatively in-depth explanation, to build a real intuition about these techniques).

  • @falsiofalsissimo5313
    @falsiofalsissimo5313 5 years ago

    We needed a serious and technical channel about the latest findings in DL. That Siraj crap is useless. Keep going! Awesome

  • @matthewbascom
    @matthewbascom 4 years ago

    I like the subtle distinction you made between the disentangled variational autoencoder and the normal variational autoencoder: changing the first dimension in the latent space of the disentangled version rotates the face while leaving everything else in the image unchanged, but changing the first dimension in the normal version not only rotates the image but changes other features as well. Thank you. Gleaning that distinction myself from the Higgins et al. beta-VAE DeepMind paper would have been unlikely...

  • @kanglemu6801
    @kanglemu6801 3 years ago

    Really love this video! Good job!

  • @hammadshaikhha
    @hammadshaikhha 6 years ago

    I just found this channel and subscribed. Great video, I enjoyed the pacing and technical components. Can you make videos like this for popular ML topics like backpropagation or the EM algorithm?

  • @karrikarthik6936
    @karrikarthik6936 5 years ago

    Big fan. Would love to see videos where you break down some of the applications using deep learning tools.

  • @nildiertjimenez7486
    @nildiertjimenez7486 3 years ago

    One minute of watching this video is enough to become a new subscriber! Awesome

  • @TienTaioan
    @TienTaioan 6 years ago

    Very clear explanation. Thank you!

  • @dippatel1739
    @dippatel1739 6 years ago +15

    Your videos are awesome; don't lose track because of subscriber counts.

  • @md.mottakinchowdhury7898
    @md.mottakinchowdhury7898 5 years ago

    This is just good content. Such in-depth explanations are what we need in the AI community. Great work.

  • @xandermay7705
    @xandermay7705 6 years ago +7

    Holy crap, another Xander interested in machine learning :D

  • @akshayshrivastava97
    @akshayshrivastava97 3 years ago +1

    Finally, someone who cares that their viewers actually get to understand VAEs.

  • @karimabdultankian28
    @karimabdultankian28 6 years ago +1

    So clear, an amazing 15 minutes. Ty!

  • @phattran4858
    @phattran4858 4 years ago

    Thank you very much, I was trying to understand it, and it got much easier when I found this video!

  • @DILLIPKUMARSAHOOIITM
    @DILLIPKUMARSAHOOIITM 6 years ago

    Very good explanation. Subscribed to the channel. Looking for more thoughtful videos on cutting edge ML stuff.

  • @ianprado1488
    @ianprado1488 6 years ago

    Excellent material!!

  • @rileyrfitzpatrick
    @rileyrfitzpatrick 4 years ago

    I'm always intimidated when he says it is going to be technical, but then he explains it so concisely.

  • @lordsherpaman
    @lordsherpaman 4 years ago

    That was a great explanation, thank you!

  • @PierLim
    @PierLim 6 years ago

    Very good explanation! Thank you man!

  • @kiwianaDJ
    @kiwianaDJ 5 years ago

    What a gem of a channel I have found here...

  • @submagr
    @submagr 4 years ago

    Thanks for this video. It gave a nice overall idea about variational auto-encoders.