What are Normalizing Flows?

  • Published 20 Jan 2025

COMMENTS • 89

  • @yassersouri6084
    @yassersouri6084 5 years ago +54

    The best video on the topic I have seen so far. Well done.

  • @TheBlenderer
    @TheBlenderer 5 years ago +27

    Awesome, thanks for the very clear explanation! Each step was quite "differentiable" in my head :)

  • @Terrial-tf7us
    @Terrial-tf7us 9 months ago +1

    you are amazing at explaining this concept in such a simple and understandable manner mate

  • @seank4422
    @seank4422 5 years ago +18

    Incredible video and explanation. Felt like I was watching a 3B1B video. Thank you!

    • @tuber12321
      @tuber12321 4 years ago +1

      Yes, it uses very similar background music!

  • @abdjahdoiahdoai
    @abdjahdoiahdoai 1 year ago

    this is so good, please don’t stop making videos!

  • @yannickpezeu3419
    @yannickpezeu3419 3 years ago

    Wow... I'm speechless.
    Thanks! Amazing quality!

  • @tiejean2851
    @tiejean2851 3 years ago

    Thank you so much for making this video! Best video on this topic I've watched so far

  • @brown_alumni
    @brown_alumni 1 year ago

    This is neat. Awesome graphics.. Many thanks!

  • @dbtmpl1437
    @dbtmpl1437 5 years ago +4

    That's absolutely brilliant. Keep up the good work!

  • @ayankashyap5379
    @ayankashyap5379 5 months ago +1

    Maybe this is a little late, but at 4:24, shouldn't the base distribution of z be parameterized by something other than theta? Usually that is a Gaussian whose MLE estimate can be obtained in closed form.
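
    For reference, the usual convention in this setup is that the base density is a fixed standard Gaussian with no learned parameters of its own, and theta parameterizes only the transformation. A generic form of the resulting log-likelihood (my notation, not necessarily the exact slide at 4:24) is:

        log p_theta(x) = log N(f_theta^(-1)(x); 0, I) + log |det J_{f_theta^(-1)}(x)|

    With a fixed N(0, I) base there is no separate closed-form MLE step for the base; all learning goes through f_theta.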

  • @adamconkey2771
    @adamconkey2771 4 years ago +1

    Thank you for this nice video, I've been struggling through some blog posts and this immediately cleared some things up for me. Great work!

  • @michaelcarlon1831
    @michaelcarlon1831 5 years ago +1

    This kind of video is super useful to the community! Thank you!

  • @romolw6897
    @romolw6897 4 years ago

    This is a great video! Each time I watch it I learn something new.

  • @prithviprakash1110
    @prithviprakash1110 3 years ago

    Great explanation, it all makes sense now. Gonna keep coming back anytime I need to revise.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    great video! This is definitely the best video on this topic.

  • @DavidSimonTetruashvili
    @DavidSimonTetruashvili 3 years ago

    I think there may be a typo at 5:48.
    In the second line, the individual Jacobians are suddenly taken with respect to z_i instead of x_i. That shouldn't be the case, right?
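
    For reference, the standard decomposition for a composed flow f = f_K ∘ ... ∘ f_1, with z_0 = z, z_k = f_k(z_{k-1}), and z_K = x, evaluates each Jacobian at that layer's own input (this is the generic form from the literature, not a transcription of the slide at 5:48):

        log p(x) = log p(z_0) - sum_{k=1}^{K} log |det ∂f_k/∂z_{k-1}|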

  • @benren9004
    @benren9004 4 years ago

    This is just such an elegant explanation.

  • @davidhendriks1395
    @davidhendriks1395 2 years ago

    Great video! Was looking for a clear explanation and this did the trick.

  • @Zokemo
    @Zokemo 4 years ago +1

    This is really beautiful. Keep up the amazing work!

  •  4 years ago

    Great video! Gonna have to watch it again.

  • @ScottLeGrand
    @ScottLeGrand 5 years ago +1

    Short, sweet, and comprehensive...

  • @maximiliann.5410
    @maximiliann.5410 3 years ago

    Thank you for the nice breakdown!

  • @junli9889
    @junli9889 4 years ago

    @8:12 I believe this is glossed over, yet it seems to be the essential part: how do you "make sure the lower right block is triangular"?
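
    For reference, in an additive coupling layer (as in NICE) the triangular structure is built in rather than enforced after the fact: with x_{1:d} = z_{1:d} and x_{d+1:D} = z_{d+1:D} + m(z_{1:d}), the Jacobian has the block form

        dx/dz = [ I               0 ]
                [ ∂m/∂z_{1:d}     I ]

    so the lower-right block is the identity (trivially triangular) and the determinant is 1 by construction.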

  • @poulamisinhamahapatra8104
    @poulamisinhamahapatra8104 4 years ago

    Great visualisation of a complicated concept and lucid explanation. Thanks :)

  • @annasappington5911
    @annasappington5911 3 years ago

    Fantastic video! Thanks for the hard work you put into these.

  • @tabesink
    @tabesink 4 years ago

    Please put out more content! This was an amazing explanation.

  • @kazz811
    @kazz811 4 years ago +4

    This is some pretty high level pedagogy. Superbly done, thanks!

  • @sShivam7
    @sShivam7 5 years ago +2

    Incredible explanation!

  • @matthiasherp9387
    @matthiasherp9387 2 years ago

    Amazing explanations! I'm currently learning about normalizing flows with a focus on the GLOW paper for a presentation, and this video really gives a great overview and helps put different concepts together.

  • @philippmourasrivastava3860
    @philippmourasrivastava3860 6 months ago

    Fantastic video!

  • @lesleyeb
    @lesleyeb 4 years ago

    Awesome video! Thanks for putting it together and sharing

  • @李扬-n7k
    @李扬-n7k 1 year ago

    the clearest one I have seen

  • @hanwei5987
    @hanwei5987 4 years ago

    Amazing explanation & presentation :)

  • @jehillparikh
    @jehillparikh 4 years ago

    Great video and visualisation!

  • @samuelpanzieri7867
    @samuelpanzieri7867 4 years ago

    Great video, made a pretty difficult topic very clear!

  • @michaellaskin3407
    @michaellaskin3407 4 years ago

    Such an excellent video

  • @najinajari3531
    @najinajari3531 4 years ago

    Very clear explanation. Thanks a lot :)

  • @arrow0seb
    @arrow0seb 4 years ago

    Great video. I hope you release more like it! :)

  • @jg9193
    @jg9193 4 years ago +1

    Please make more videos like this

  • @alvinye9900
    @alvinye9900 2 years ago

    Awesome video! Thanks!

  • @superaluis
    @superaluis 4 years ago

    Thanks for the great explanation!

  • @chyldstudios
    @chyldstudios 2 years ago

    Great explanation!

  • @huajieshao5226
    @huajieshao5226 3 years ago

    awesome video! Like it so much!

  • @simonguiroy6636
    @simonguiroy6636 4 years ago

    Great video, well explained!

  • @matakos22
    @matakos22 3 years ago

    Thank you so much for this!

  • @praveen3779
    @praveen3779 5 months ago

    Nice video, thank you. But can you explain how this fits into the overall architecture of a simple generative model, and how it can be implemented in code? Or just point me to a resource where I can find that.
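
    A minimal sketch of the kind of implementation asked about here, assuming PyTorch and my own illustrative names (AdditiveCoupling, Flow); it is not the video's code, just two NICE-style additive coupling layers trained by maximizing the change-of-variables log-likelihood on toy 2-D data:

        import math
        import torch
        import torch.nn as nn

        class AdditiveCoupling(nn.Module):
            """x_passive = z_passive; x_active = z_active + m(z_passive).

            log|det J| = 0, and the inverse is z_active = x_active - m(x_passive).
            """
            def __init__(self, dim=2, hidden=64, flip=False):
                super().__init__()
                self.flip = flip  # alternate which half passes through unchanged
                self.m = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                       nn.Linear(hidden, dim // 2))

            def split(self, v):
                a, b = v.chunk(2, dim=-1)                 # (first half, second half)
                return (b, a) if self.flip else (a, b)    # (passive, active)

            def merge(self, passive, active):
                parts = (active, passive) if self.flip else (passive, active)
                return torch.cat(parts, dim=-1)

            def forward(self, z):                         # z -> x
                z_p, z_a = self.split(z)
                return self.merge(z_p, z_a + self.m(z_p))

            def inverse(self, x):                         # x -> z
                x_p, x_a = self.split(x)
                return self.merge(x_p, x_a - self.m(x_p))

        class Flow(nn.Module):
            def __init__(self):
                super().__init__()
                self.layers = nn.ModuleList([AdditiveCoupling(flip=False),
                                             AdditiveCoupling(flip=True)])

            def log_prob(self, x):
                # Additive couplings are volume-preserving, so the log-det term
                # vanishes and log p(x) is the standard-normal log-density of f^{-1}(x).
                z = x
                for layer in reversed(list(self.layers)):
                    z = layer.inverse(z)
                return (-0.5 * (z ** 2 + math.log(2 * math.pi))).sum(dim=-1)

            def sample(self, n):
                z = torch.randn(n, 2)
                for layer in self.layers:
                    z = layer(z)
                return z

        if __name__ == "__main__":
            torch.manual_seed(0)
            data = torch.randn(2048, 2) + torch.tensor([2.0, -1.0])  # shifted 2-D Gaussian
            flow = Flow()
            opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
            for step in range(2000):
                batch = data[torch.randint(0, len(data), (256,))]
                loss = -flow.log_prob(batch).mean()      # maximize likelihood
                opt.zero_grad()
                loss.backward()
                opt.step()
            print(flow.sample(5))                        # samples should land near (2, -1)

    Swapping the additive couplings for affine ones (RealNVP-style scale and shift) would add a nonzero log-det term to log_prob, which is the main change needed for more expressive models.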

  • @ChocolateMilkCultLeader
    @ChocolateMilkCultLeader 2 years ago +1

    Please keep making videos

  • @zhenyueqin6910
    @zhenyueqin6910 4 years ago +1

    Amazing! Thanks!

  • @p.z.8355
    @p.z.8355 3 years ago

    what is the connection of this to the reparametrization trick?

  • @saharshayegan
    @saharshayegan 1 year ago

    Thank you for the great explanation. What I don't understand here is the reason why we are looking for p_theta(x). Shouldn't it be p_phi(x)? (by phi I mean any other parameter that is not theta) Since we are looking for the probability in the transformed space.

    • @ariseffai
      @ariseffai  1 year ago +1

      Thanks for the question. While using a single symbol for the model's parameters is a standard notation (e.g., see eq. 6 from arxiv.org/abs/1807.03039), I agree that using two distinct symbols would've been a bit clearer and indeed some papers do that instead :)

  • @eyal8615
    @eyal8615 3 years ago

    Well explained!

  • @robmarks6800
    @robmarks6800 3 years ago

    Amazing, Keep at it!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    How do we find such a function f that performs the transformation? Is it the neural network? If so, wouldn’t that just be a decoder?

  • @sherlockcerebro
    @sherlockcerebro 4 years ago

    I looked at the RealNVP paper and I can't seem to find the part where the latent space is smaller than the input space. Where could I find it?

  • @curtisjhu
    @curtisjhu 1 year ago

    amazing, keep it up

  • @ThePritt12
    @ThePritt12 4 years ago +1

    cool video, thanks! What video editing tools do you use for the animations?

    • @ariseffai
      @ariseffai  3 years ago

      This one used a combination of matplotlib, keynote, & FCP. I've also used manim in other videos.

  • @MDNQ-ud1ty
    @MDNQ-ud1ty 11 months ago +3

    I think the way you explained the probability relationships is a bit poor. For example, p_t(x) = p_t(f_t^(-1)(x)) would imply the obvious desire for f_t to be the identity map. If x is a different random variable, there is no reason one would make such a claim. The entire point is that the random variables may have different probabilities due to the map (which may not even be injective), so one has to scale the probabilities, which is where the Jacobian comes in (as would a sum over the different branches).
    It would have been better to start with two different random variables and show how one could transform one into the other and the issues that creep in. That is how one would normally try to solve the problem from first principles.
    The way you set it up leaves a lot to be desired. For example, while two random variables can easily take the same value, they can have totally different probabilities, which is the entire point of comparing them in this way. I don't know who would start off thinking two arbitrary random variables have the same probabilities, and implying that and then saying "oh wait, sike!" isn't really a good way to teach it.
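
    For reference, the change-of-variables identity under discussion, for an invertible f: Z -> X with x = f(z), is

        p_X(x) = p_Z(f^(-1)(x)) * |det J_{f^(-1)}(x)|

    i.e., the two densities agree only after the Jacobian correction, which is the rescaling the comment describes.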

  • @sergicastellasape
    @sergicastellasape 5 years ago +1

    Great explanation!! I hope more videos are coming. I have a question: I don't really understand the benefit of the coupling layer example about "partitioning the variable z into 1:d and d+1:D". As explained in the video, you still need to ensure that the lower right sub-matrix is triangular to make the Jacobian fully triangular. Then, isn't it just more "intuitive" to say: the transformation of each component will "only be able to look at itself and past elements"? Then any x_i will only depend on z_{1:i}, so the derivative for the rest will be zero. You still need to impose this condition on the "lower right sub-Jacobian", so what's the value of the initial partitioning? Thanks!

    • @ariseffai
      @ariseffai  5 years ago +2

      Thank you and great question! The setup you describe is certainly one way of ensuring a fully triangular Jacobian and is the approach taken by autoregressive flows (e.g., arxiv.org/abs/1705.07057). But not only do we want a triangular Jacobian, we need to be able to efficiently compute its diagonal elements as well as the inverse of the overall transformation. The partitioning used by NICE is one way of yielding these two properties while still allowing for a high capacity transformation (as parameterized by m), which I think was underemphasized in the video. In the additive coupling layer, not only is the lower right sub-Jacobian triangular but it’s just the identity, giving us ones along the full diagonal. And the identity implemented by the first transformation (copying over z_{1:d} to x_{1:d}) guarantees g will be trivially invertible wrt 1st arg since the contribution from m can be recovered.
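
      Written out, the additive coupling map and its inverse described in the reply above are

          forward:  x_{1:d} = z_{1:d},   x_{d+1:D} = z_{d+1:D} + m(z_{1:d})
          inverse:  z_{1:d} = x_{1:d},   z_{d+1:D} = x_{d+1:D} - m(x_{1:d})

      so inverting the layer never requires inverting m, and the Jacobian's diagonal is all ones, which is what makes both the inverse and the log-determinant cheap.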

  • @brycejohnson9291
    @brycejohnson9291 4 years ago

    that was a great video!

  • @TyrionLannister-zz7qb
    @TyrionLannister-zz7qb 9 months ago

    Are the animations and soundtrack inspired by the channel 3Blue1Brown?

  • @karanshah1698
    @karanshah1698 2 years ago

    Isn't the Jacobian here acting more like a linear transformation over the 2D example of the unit square? How is it a Jacobian?
    I seem to be confused about the nomenclature here.
    Also, since these are chained invertible transforms with nonzero determinant, can't we just squash them all into one, like a single linear transform?

  • @matthias2261
    @matthias2261 4 years ago

    Nice! This is absolutely breakfast-appropriate.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    Why would adjacent pixels of an image have an autoregressive property?

  • @lucasfijen
    @lucasfijen 5 years ago +2

    Thanks a lot!

  • @motherbear55
    @motherbear55 4 years ago +1

    Thanks for this explanation! Could you recommend an online class or other resource for getting a solid background in probability, in order to better understand the math used to talk about generative models?

    • @sehaba9531
      @sehaba9531 1 year ago

      I am actually looking for the same thing, if you have found something interesting!

  • @antraxuran9
    @antraxuran9 4 years ago

    Great video! I spotted a minor terminology mistake: you are referring to the evidence using the term "likelihood", which might confuse some folks

  • @CosmiaNebula
    @CosmiaNebula 4 years ago +1

    The formula at 1:12 is wrong. The x on the right should be z.
    Similarly for other formulas later.

    • @ariseffai
      @ariseffai  4 years ago +1

      f is defined to be a mapping from Z to X. So f^{-1} takes x as input.

  • @shiva_kondapalli
    @shiva_kondapalli 3 years ago

    Hi! Amazing video and visualization. Curious to know if the software used for the graphics was manim?

    • @ariseffai
      @ariseffai  3 years ago +1

      Not in this particular video, but there are several manim animations in my other videos :)

  • @CristianGutierrez-th1jx
    @CristianGutierrez-th1jx 8 months ago

    Hands down the best intro to gen models one could ever have.

  • @stacksmasherninja7266
    @stacksmasherninja7266 2 years ago

    Great video! Can you also make a video on Gaussian processes and Gaussian copulas?

  • @ayushgarg70
    @ayushgarg70 4 years ago

    amazing

  • @albertlee5312
    @albertlee5312 4 years ago

    So what is a normalizing flow?

  • @朱欣宇-u7q
    @朱欣宇-u7q 4 years ago

    Awesome

  • @ejkmovies594
    @ejkmovies594 10 months ago

    giving me 3blue1brown vibes. Amazing video.

  • @qichaoying4478
    @qichaoying4478 3 years ago

    For Chinese readers, you can also refer to Doctor Li's lecture: ua-cam.com/video/uXY18nzdSsM/v-deo.html

  • @EagleHandsJonny
    @EagleHandsJonny 4 years ago

    Got that 3blue1brown background music

  • @chadgregory9037
    @chadgregory9037 3 years ago

    this totally has something to do with principal fibre bundles, doesn't it..... this is that shit James Simons figured out back in the 70s

  • @wenjieyin7624
    @wenjieyin7624 3 years ago

    one mistake: NF cannot reduce dimensions!

  • @RamNathaniel
    @RamNathaniel 6 months ago

    Thanks for the video, but the background music put me to sleep - please change for next time.

  • @stazizov
    @stazizov 10 months ago

    Hello everyone from 2024, it seems the flow-matching hype has begun

  • @gapsongg
    @gapsongg 1 year ago

    Bro, it is really hard to follow. Nice mic and nice video editing, but the content is way too hard. Really, really hard to follow.

  • @benmiss1767
    @benmiss1767 3 years ago

    Amazing work, thank you very much!