The Kolmogorov-Arnold Theorem

  • Published Jan 26, 2025

COMMENTS • 52

  • @Alteaima · 1 month ago +23

    First, I hope you see this comment: we need a video on graph neural networks, and we can't find anyone else who breaks a topic down to this degree of simplicity. Thanks for your help, we appreciate your efforts 🎉

    • @SerranoAcademy · 1 month ago +6

      Thank you so much! Great suggestion! I'm actually working on an explanation of GNNs, using an example with a group of people who are friends, where some like sports and some like music. Hoping to get it out pretty soon!
      If you have any other suggestions, please feel free to throw them in, I'm always looking for good topics to learn and explain. :)

    • @Alteaima · 1 month ago

      @@SerranoAcademy Thank you again, I hope you're doing great.

    • @revimfadli4666 · 1 month ago

      @@SerranoAcademy Can you please link it to chemistry GNNs and modular agents by Deepak Pathak?

  • @trantandat2699 · 1 month ago +9

    One of the best teachers I have seen so far. You turn something as complicated as the Kolmogorov-Arnold Theorem into a very simple explanation.

    • @SerranoAcademy · 1 month ago

      @@trantandat2699 thank you for your kind words, I’m glad you enjoyed it! :)

  • @Atlas92936 · 15 days ago

    Luis, I have the utmost respect for you. I've been keeping up with your content on various platforms, Coursera, LinkedIn, YouTube, and I really think you're a great human being. I related to your story about starting in mathematics and struggling as a student. Now you are well known in the ML community and make math more accessible for everyone. You are also conscious of social issues, which is an overlooked quality. You're clearly an accomplished, hard worker, yet humble. Thank you for the inspiration, always.

    • @SerranoAcademy · 14 days ago +1

      Thank you for such a kind message. It's a real honor to be part of your learning journey, and to share our desire for a better world. :)

  • @znglelegendaire3005 · 13 days ago

    You are the best professor I know of in the world right now! Thank you very much for the explanations.

  • @jamesmcadory1322 · 1 month ago +2

    This is one of the best educational videos I’ve ever seen. It went at a good pace, had helpful visuals, and I feel like I understand the main idea of this theorem now. Thank you for the video!

  • @frankl1 · 1 month ago +2

    Best explanation of KAT and KAN with intuitive drawings, very much appreciated

  • @shivakumarkannan9526 · 15 days ago

    Such a brilliant theorem and very clear explanation using diagrams.

  • @MoreCompute · 6 days ago

    Luis! What a great video you've made. Thank you for making it.

  • @BananthahallyVijay · 1 month ago

    🎉🎉🎉🎉 The most lucid video I've seen on why, in theory, you only need one hidden layer in a NN. A big thanks to the content creator. ❤

  • @Gamingforfunpeace · 1 month ago +3

    Honestly, this is amazing. Could you please create a 5-part video series of visual explanations of the Langlands proof that just came out (you know which one)?
    You have a gift for mathematical storytelling; I absolutely loved the visualizations. That is what math is about: the elegance of visual storytelling. Would love to see your visualization of that proof.

  • @sahil_shrma · 1 month ago +2

    Wow! The "everything in two layers" idea and the summation part seem fantastic. Thank you, Luis! 💚

    • @SerranoAcademy · 1 month ago

      @@sahil_shrma Thank you so much, I'm glad you liked it! I was pretty amazed too when I first saw that the theorem implies two-layer universality. :)

  • @sohaibahmed9165 · 18 days ago

    Thanks bro! You made it really simple. Highly recommended❤

  • @jasontlho · 1 month ago +1

    beautiful explanation

  • @Sars78 · 1 month ago

    This IS the most important theorem for appreciating the power of DNNs in general.

  • @Harshtherocking · 1 month ago

    I tried reading this paper back in June 2024 and couldn't understand much of it. Thanks, Luis, for the amazing explanation.

  • @RasitEvduzen · 1 month ago

    Thanks for your beautiful explanation. I think the next video should be about Automatic Differentiation.

  • @sunilkumarvengalil2305 · 14 days ago

    Nice explanation! Thank you!

  • @junborao8910 · 16 days ago

    Really helpful video. I really appreciate it.

  • @behrampatel3563 · 1 month ago

    Luis, I wish you health and happiness so you can continue to educate those of us who are way past our academic prime. For many reasons I never had the luxury of studying engineering. Khan Academy, 3Blue1Brown, and you made education accessible and approachable. Thank you, live long and prosper my friend. ❤

  • @djsocialanxiety1664 · 1 month ago +3

    awesome explanation

    • @SerranoAcademy · 1 month ago

      Thank you, I'm glad you like it!

    • @djsocialanxiety1664 · 1 month ago +1

      @@SerranoAcademy Any chance of a video that explains the training of KANs?

    • @SerranoAcademy · 1 month ago

      @@djsocialanxiety1664 this video has the architecture: www.youtube.com/watch?v=myFtp58U
      In there I talk a little bit about the training, which is mostly finding the right coefficients of the B-splines using the usual gradient descent. AFAIK, the training is very analogous to a regular neural network, which is why I only mention it briefly, but if there's more to it, I may make another video. If you know of any nuances in the training that could be explored, please let me know. Thanks!
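
      To make the reply above concrete, here is a minimal, hypothetical sketch of that idea: a single KAN edge function written as a linear combination of fixed degree-1 (hat) B-splines, where only the coefficients are fit with plain gradient descent. The knot grid, toy target, and all names are illustrative, not taken from the video.

        # Hypothetical sketch: one KAN "edge" function phi(x) = sum_i c_i * B_i(x),
        # with fixed degree-1 (hat) B-spline bases B_i and learnable coefficients c_i,
        # trained by plain gradient descent on a toy univariate target.
        import numpy as np

        rng = np.random.default_rng(0)
        knots = np.linspace(-1.0, 1.0, 11)      # uniform knot grid on [-1, 1]
        h = knots[1] - knots[0]                 # knot spacing

        def basis(x):
            # Hat (degree-1 B-spline) basis values, shape (len(x), len(knots)).
            return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

        x = rng.uniform(-1.0, 1.0, size=200)    # toy inputs
        y = np.sin(np.pi * x)                   # toy target the edge should learn

        c = np.zeros(len(knots))                # learnable spline coefficients
        B = basis(x)
        lr = 0.1
        for step in range(2000):
            pred = B @ c                             # phi(x) = sum_i c_i * B_i(x)
            grad = 2.0 * B.T @ (pred - y) / len(x)   # gradient of the MSE w.r.t. c
            c -= lr * grad                           # the usual gradient-descent update

        print("final MSE:", np.mean((B @ c - y) ** 2))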

  • @djsocialanxiety1664 · 5 days ago +1

    Could you maybe explain why it's "not so bad" that (x+y)^2 is still entangled? And if that's not so bad, then what's the whole point of entanglement in the first place?

  • @cathleenparsons3435 · 1 month ago

    This is excellent! Thanks so much, really helpful

  • @neelkamal3357 · 1 month ago

    crystal clear as always

  • @eggs-istangel4232 · 1 month ago +1

    Not that I want to look like the "oh, I think there is a mistake" kid, but at 8:33, shouldn't the first lower phi function (the one applied to x_2) be phi_{1,2}(x_2) instead of phi_{2,1}(x_2)?

    • @SerranoAcademy · 1 month ago +1

      Thank you so much! Yes, you're absolutely right. And I think that in the first term as well, with \Phi_1, it should be \phi_{1,1}(x_1) + \phi_{1,2}(x_2).
      I changed it so many times, and it was so hard to get the indices right, lol...
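
      For reference, the representation being indexed in the video is usually written as below, where the inner function \phi_{q,p} belongs to the q-th outer term and is applied to the variable x_p, which matches the corrected indexing above (so for n = 2 the first term reads \Phi_1(\phi_{1,1}(x_1) + \phi_{1,2}(x_2))):

        % Kolmogorov-Arnold representation of a continuous f on [0,1]^n
        f(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)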

  • @hayksergoyan8914 · 1 month ago

    Nice job, thanks. Have you checked how this works for predicting time-series data, compared to LSTM or ARIMA?

  • @alivaziri7843 · 22 days ago

    Thanks for the video! Are the slides available freely?

    • @SerranoAcademy · 19 days ago

      Thanks! Not yet, but I'll message here when they're out.

  • @akirakato1293 · 1 month ago

    So essentially you can train non-linear regression or decision-boundary models without needing to expand the feature space by, for example, appending an x1*x2 column to the training set before fitting? I can see that it's computationally better for finding an approximate solution and naturally overfits less, but how does the computational complexity behave when the accuracy requirement is extremely high?

  • @SohaKasra · 1 month ago

    That was so fluent, as always ❤

  • @GerardoGutierrez-io7ss · 1 month ago

    Where can I see the proof of this theorem?😮

  • @Pedritox0953 · 1 month ago

    Great video! Peace out

  • @jimcallahan448 · 1 month ago

    What about log(x) + log(y)?
    Of course, because you mentioned Kolmogorov, I assumed you were talking about probabilities.

    • @SerranoAcademy · 1 month ago

      @@jimcallahan448 That's a good example. log(xy) is one that looks entangled, but it can be written as log(x) + log(y), so it's separable (i.e., a one-layer KA network).
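
      Written in the same two-layer form used in the video, this example needs only one outer term, with the identity as the outer function (a sketch of the idea, for x, y > 0):

        % log(xy) in Kolmogorov-Arnold form: identity outer function, log inner functions
        \log(xy) = \Phi\big(\phi_1(x) + \phi_2(y)\big), \qquad
        \Phi(u) = u, \quad \phi_1(x) = \log x, \quad \phi_2(y) = \log y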

  • @csabaczcsomps7655 · 1 month ago

    Amazing.

  • @colonelmoustache · 1 month ago

    This was so good, but I feel like there should be a nice matrix way to write this.
    Time to search deeper, I guess.
    Great topic btw

    • @SerranoAcademy · 1 month ago +1

      Thanks for the suggestion! They do have a matrix with the capital \Phi's, multiplied by another one with the lowercase \phi's, where multiplication is replaced by composition of functions. I was going to add it here, but the video started getting too long, so I had to cut it; most other videos on the topic (plus the paper) have it.
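
      Roughly, that matrix form from the KAN paper looks like the following, where each entry of \Phi_l is a univariate function and "multiplying" a vector means applying each entry and summing, so a whole network is just a composition of such function matrices:

        % One KAN layer as a matrix of univariate functions; a network is their composition
        (\Phi_l\, x)_j = \sum_i \phi_{l,j,i}(x_i), \qquad
        \mathrm{KAN}(x) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_0)(x)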

  • @brandonprescott5525 · 1 month ago

    Reminds me of node-based graphics software like Houdini or TouchDesigner.

  • @tomoki-v6o · 1 month ago

    I have an engineering degree, no PhD, and I am an ML enthusiast. How can I join research in this case? I don't want to work as a data scientist, because I like playing with math.

  • @AI_ML_DL_LLM · 1 month ago

    Great video! You will definitely go to heaven; see you there, but not too soon :)

  • @sufalt123 · 1 month ago

    so coooooool

  • @moonwatcher2001 · 1 month ago

  • @tigu511 · 1 month ago

    Oh god!... Is the Spanish translation from an AI? It's really bad.