What happens *inside* a neural network?

  • Published 7 Sep 2024

COMMENTS • 97

  • @lored6811
    @lored6811 2 years ago +90

    I'm sorry that this video didn't do so well with clicks; don't let that discourage you from making more beautiful explanations :) There will always be people having their aha moments through you.

    • @mihir777
      @mihir777 1 year ago

      Intellectual videos don't really get many views. The cat and dog videos will always satisfy the greater population, providing easy dopamine hits to the reptilian brain.

  • @allantouring
    @allantouring 11 months ago +8

    Where's part 3? I love this series! ❤

  • @NovaWarrior77
    @NovaWarrior77 2 years ago +34

    The prodigal son returns.

  • @DelandaBaudLacanian
    @DelandaBaudLacanian 2 years ago +14

    My mind is blown; this is so simple and elegant. Thank you for taking the time to explain neural networks and linear transformations. This is going to be one of those videos I watch over and over until I really grok it!

  • @elidrissii
    @elidrissii 1 year ago +9

    Thank you for making these videos, absolute gems. People like you make YouTube worth it.

  • @saidelcielo4916
    @saidelcielo4916 1 year ago +7

    Once again WOW this is the best visualization of neural networks I've ever seen, and I've learned tremendously from it. Please make more videos!!

  • @sortsvane
    @sortsvane 2 years ago +11

    Hands down THE most lucid explanation of NN I've seen 💯 Sharing it with my CompSci group.
    Also curious to see how you'll visualise back propagation.

  • @vtrandal
    @vtrandal 2 years ago +3

    This is a rare occasion where I am fortunate to be witnessing excellent progress in technology as it happens. Thank you!

  • @KenanSeyidov
    @KenanSeyidov 1 year ago +2

    Excited for part 3!

  • @BlackM3sh
    @BlackM3sh 1 year ago +2

    I'm happy I managed to find this video again. 😄 I suddenly felt an urge to rewatch it. I really like the clear visuals of the video. It's a shame you have yet to come out with a part 3, though.

  • @IndyRider
    @IndyRider 1 year ago +2

    This video has done such a great job of visually breaking down a complex concept with examples!

  • @JamieSundance
    @JamieSundance 1 year ago +2

    This video series is fantastic; these concepts never land for me until I see visual spatial context. Keep up the great work, you are greatly appreciated!

  • @waynedeng9604
    @waynedeng9604 2 years ago +3

    This is the best video I've ever watched. I'm in tears; you've changed my life with your beautiful animations and soothing voice.

  • @prometheus7387
    @prometheus7387 2 years ago +7

    Grant Junior returns

  • @Odisse0
    @Odisse0 1 year ago +2

    Big up for this outstanding work! As a fellow student of these topics, I want to thank you for the effort put in here. I'm really impressed by both the script and the animations. Much love ❤

  • @TheBookDoctor
    @TheBookDoctor 2 years ago +4

    Wow. I've watched a lot of "how do neural networks work" videos, and this is the first one that has offered me any truly new insight in a long time. Excellent!

    • @vcubingx
      @vcubingx  2 years ago

      Thank you! I appreciate the kind words :)

  • @airatvaliullin8420
    @airatvaliullin8420 2 years ago +1

    What a wonderful explanation! I need to know this for my project, and each time I watch something about NNs I'm sure I'm getting better at understanding what's under the hood. But never have I seen such an elegant way to introduce the topic. Bravo!

  • @arnavvirmani8688
    @arnavvirmani8688 2 years ago +1

    This video makes it easy for non-math folks like me to gain some semblance of an understanding of neural networks. Great job!

  • @aleksszukovskis2074
    @aleksszukovskis2074 2 years ago +15

    Bruce! It's been a whole year. You still owe me 16 contents.

  • @symbolspangaea
    @symbolspangaea 1 year ago

    I saw this video 11 months after it was published, and it came as a gift. Thank you sooooo much!

  • @imranyaaqub1704
    @imranyaaqub1704 2 years ago +2

    Thank you for this informative video. I was one of many waiting for part 2, but didn't get notified as I was only subscribed, and didn't know to also hit the bell notification to get an update on when part 2 was out. I suspect many people will be coming back at odd points into the future to see if part 2 has come out. Hope they enjoy it as much as I have.

  • @asemhusein7575
    @asemhusein7575 2 years ago +3

    Words can't explain how amazing this video is.
    Finally, a video that clears everything up.
    Thank you.

  • @stevenbacon3878
    @stevenbacon3878 2 years ago +2

    Thank you for making this video, it's awesome. I look forward to seeing more of your work!

  • @finkelmann
    @finkelmann 2 years ago +1

    Brilliant stuff. I've watched my share of neural network videos, and this one is truly unique.

  • @my_master55
    @my_master55 2 years ago +1

    ngl, this is what's called "high-quality content". Thank you very much for your efforts 👏😍 🚀

  • @Max-fw3qy
    @Max-fw3qy 1 year ago

    Geez man, your video is very good at visualizing what a NN really does! One piece of advice, if I may: after a complicated or very loaded explanation, as with the output of the NN, which is very complex to understand if you know nothing about it, try to summarize it with a simple sentence, just as you did at 7:40. That was beautifully explained, bravo! 👍🏻👍🏻👍🏻

  • @judo-rob5197
    @judo-rob5197 2 years ago

    Very nice explanations of a complicated topic. The visuals make it more intuitive.

  • @saidelcielo4916
    @saidelcielo4916 1 year ago +2

    Thanks!

  • @soumyasarkar4100
    @soumyasarkar4100 2 years ago +1

    This is some extraordinary explanation.

  • @usama57926
    @usama57926 2 years ago +1

    Oh man! The 2nd part is finally here.....

  • @m4sterbr0s
    @m4sterbr0s 2 years ago +1

    Awesome, a new video!! Really happy to see you making content again!!

  • @mohegyux4072
    @mohegyux4072 1 year ago

    YouTube's algorithm should be ashamed of itself!! How could this video have less than 20k views!!!!!!
    Thanks, I had multiple whoa! moments.

  • @arturpodsiady7978
    @arturpodsiady7978 1 year ago

    Great video, thank you!

  • @ChauNguyen-jy3fk
    @ChauNguyen-jy3fk 2 years ago

    I've been waiting for this video for several months!

  • @polqb3205
    @polqb3205 2 years ago

    Wow, the video is sooo good, the explanations are wonderful and the animations are so beautiful, I just love it 😍😍

  • @hiewhongliang
    @hiewhongliang 2 years ago

    This is awesome!!! Keep posting and keep up the great work.

  • @usama57926
    @usama57926 2 years ago

    What a great explanation. Waiting for part 3

  • @williamharr7338
    @williamharr7338 1 year ago

    Excellent Video!

  • @jacobliu760
    @jacobliu760 2 years ago +1

    I enjoyed this video so much.

    • @vcubingx
      @vcubingx  2 years ago +1

      Thank you Jacob.

  • @adriangabriel3219
    @adriangabriel3219 2 years ago +2

    Really great! Do you have a tutorial on how you created the visualizations of the different layers? Would it be possible to do that in pure Python as well?
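
    (It is possible in pure Python. Below is a minimal sketch, not the video's actual code: the 2x2 weights and the tanh activation are made up for illustration, and matplotlib stands in for manim.)

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical weights and bias for a single 2 -> 2 layer.
      W = np.array([[1.5, 0.5],
                    [-0.5, 1.0]])
      b = np.array([0.2, -0.1])

      def layer(points):
          """Affine map followed by tanh, applied to an (N, 2) array of points."""
          return np.tanh(points @ W.T + b)

      # Push a grid of 2D points through the layer and plot before/after.
      xs, ys = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
      grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
      out = layer(grid)

      fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
      ax1.scatter(grid[:, 0], grid[:, 1], s=4)
      ax1.set_title("input space")
      ax2.scatter(out[:, 0], out[:, 1], s=4)
      ax2.set_title("after one layer")
      plt.show()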

  • @mourirsilfaut6769
    @mourirsilfaut6769 2 years ago

    Thank you for making these videos.

  • @KukaKaz
    @KukaKaz 10 months ago

    Amazing video! Keep it up 👍

  • @hannesstark5024
    @hannesstark5024 2 years ago +1

    Awesome job!

  • @AegeanEge35
    @AegeanEge35 4 months ago

    Thanks!

  • @vincent2154
    @vincent2154 2 years ago

    Really great 👍

  • @laurent-minimalisme
    @laurent-minimalisme 2 years ago

    Man, this video is a masterpiece! Congrats!

  • @LuddeWessen
    @LuddeWessen 2 years ago

    Really nice video. However, I think you should mention that you use a binarized (one-hot) encoding of argmax and not argmax as it is commonly defined, as viewers (like me) could get confused.
    Otherwise an excellent video that conveys the intuition really well! 😀

    • @vcubingx
      @vcubingx  2 years ago +1

      Good point, I'll include the terminology next time
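
      (For concreteness, a small numpy sketch of the distinction being drawn here; the score vector z is made up:)

        import numpy as np

        z = np.array([1.2, 3.4, 0.5])   # hypothetical pre-softmax scores

        idx = np.argmax(z)              # argmax as commonly defined: an index, here 1
        one_hot = np.eye(len(z))[idx]   # the video's binarized form: array([0., 1., 0.])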

  • @jasdeepsinghgrover2470
    @jasdeepsinghgrover2470 2 years ago

    Amazing explanation!!..

  • @woddenhorse
    @woddenhorse 2 years ago

    Simply Awesome 🔥🔥🔥🔥

  • @aaronwtr1150
    @aaronwtr1150 2 years ago

    Thank you for this great video

  • @wise_math
    @wise_math 1 year ago

    Nice video. How do you make the white edge border of a scene? (Like in the Recap Part 1 scene.)

  • @toth1982
    @toth1982 1 month ago

    Is this true?
    In every other resource I have only encountered activation functions that act on a single neuron, i.e. R -> R functions. But in order to calculate softmax, you need the whole vector of values in the output neurons (the output of the last linear step). So it is basically applied to a layer, not just to one value.
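
    (That reading is correct: ReLU-style activations act elementwise, while softmax consumes the whole layer. A minimal numpy sketch of the difference, with a made-up vector z:)

      import numpy as np

      z = np.array([2.0, 1.0, 0.1])   # hypothetical outputs of the last linear step

      relu = np.maximum(z, 0.0)       # elementwise: each entry only sees itself

      def softmax(v):
          e = np.exp(v - v.max())     # shift by the max for numerical stability
          return e / e.sum()          # each entry depends on the whole vector

      p = softmax(z)                  # approx. [0.659, 0.242, 0.099]; sums to 1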

  • @CesarMaglione
    @CesarMaglione 2 years ago +1

    ¡Excellent! Take your like! 👍😉

  • @alexcheng2498
    @alexcheng2498 2 years ago

    I've missed this.

  • @Hopeful-zx9wk
    @Hopeful-zx9wk 2 years ago

    return of the king

    • @vcubingx
      @vcubingx  2 years ago

      But when will hopeful69420 return

  • @TheRmbomo
    @TheRmbomo 2 years ago +2

    5:25 When describing that the array resulting from softmax sums to 1, I think the visual is missing that communication too, such as stacking all of the lines on top of each other up to a value of 1 or 100%. Don't just rely on words.
    Otherwise great video, thank you.
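
    (The property being visualized is an algebraic identity: whatever the inputs z_1, ..., z_n, the softmax entries always sum to one.)

      \sum_{i=1}^{n} \mathrm{softmax}(z)_i
        = \sum_{i=1}^{n} \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}
        = \frac{\sum_{i=1}^{n} e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}
        = 1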

  • @skifast_takechances
    @skifast_takechances 1 year ago

    banger

  • @praveenrajab0622
    @praveenrajab0622 2 years ago

    At 10:49, aren't the x and y coordinates of the plot the output values of the second-to-last layer of the NN?

  • @ko-prometheus
    @ko-prometheus 1 year ago

    Can I use your mathematical apparatus to investigate the physical processes of metaphysics??
    I am looking for a mathematical apparatus capable of working with metaphysical phenomena, i.e. metamathematics!!

  • @dewibatista5752
    @dewibatista5752 4 months ago

    PART 3 PART 3 PART 3

  • @MadlipzMarathi
    @MadlipzMarathi 2 years ago +1

    Finally

  • @jamietea1072
    @jamietea1072 1 year ago

    Intro
    part 1 Funny Galaxy
    part 2 Swastika
    part 3 Ending of Evangelion

  • @anshul.infinity
    @anshul.infinity 2 years ago

    I am trying to visualise how the neural network transformed the input space into a linearly separable space, layer by layer, on a new basic dataset.

  • @pi-meson7677
    @pi-meson7677 2 years ago +4

    When you come back after 2¹⁰ years

  • @Anujkumar-my1wi
    @Anujkumar-my1wi 2 years ago

    I want to ask: since a neural net approximates a function over a particular domain interval, what will happen if it gets an input outside that domain when testing?
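
    (One way to see what can happen: a ReLU network is piecewise linear, so far outside the training range it simply extends its outermost linear piece. A small numpy sketch with made-up weights, not taken from the video:)

      import numpy as np

      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)   # hypothetical hidden layer
      W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # hypothetical output layer

      def mlp(x):
          """A 1 -> 8 -> 1 ReLU network evaluated at a scalar x."""
          h = np.maximum(W1[:, 0] * x + b1, 0.0)
          return float(W2[0] @ h + b2[0])

      # Far beyond the last ReLU kink, every hidden unit is stuck on or off,
      # so the network is exactly linear: these two differences are equal.
      print(mlp(2e3) - mlp(1e3), mlp(3e3) - mlp(2e3))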

  • @ali493beigi5
    @ali493beigi5 2 years ago

    Great! Can you explain to me how you produce these animations? Is there any software you have used?

  • @anwarulbashirshuaib5673
    @anwarulbashirshuaib5673 2 years ago

    holy shit!

  • @dann_y5319
    @dann_y5319 3 months ago

    9:13 grid

  • @RohanDasariMinho
    @RohanDasariMinho 2 years ago

    Goat cubing x

  • @OrenLikes
    @OrenLikes 7 months ago

    w12 reads as the first weight of the second input?
    This is confusing!
    It should be w21 => from input x2, we look at w1 (which, obviously, goes to output 1)!
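
    (For what it's worth, the video's indexing matches the standard matrix convention for y = Wx, where the first subscript is the output row and the second is the input column:)

      y_i = \sum_j w_{ij} x_j,
      \qquad
      \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
      =
      \begin{pmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{pmatrix}
      \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}

    So w12 is the weight connecting input x2 to output y1.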

  • @gdash6925
    @gdash6925 2 years ago

    Where were you at 8:50? In university?

  • @nathannguyen2041
    @nathannguyen2041 2 years ago

    How would a neural network handle categorical variables?

    • @vcubingx
      @vcubingx  2 years ago +1

      As inputs? One way is to have each input be a vector of dimension n, where n is the number of categories. Then, for each input, set the entry at the category's index to 1 and the rest to 0. For example, if my input were a 4-category variable of cat, dog, wolf, or tiger, then the input cat could be {1, 0, 0, 0}. See "one-hot encoding" if you're interested.
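
      (A minimal numpy sketch of that scheme, using the category names from the example above:)

        import numpy as np

        categories = ["cat", "dog", "wolf", "tiger"]

        def one_hot(label):
            """Map a category name to its one-hot input vector."""
            v = np.zeros(len(categories))
            v[categories.index(label)] = 1.0
            return v

        one_hot("cat")    # array([1., 0., 0., 0.])
        one_hot("wolf")   # array([0., 0., 1., 0.])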

    • @vcubingx
      @vcubingx  2 years ago +1

      There are plenty of other ways. In the case of NLP (which is my domain atm), we want to be able to encode tokens (some sequence of characters) into input vectors. An older method for this is word2vec, which converts words to vectors based on context. This allows us to assign each word to some input vector, and we can pass each vector along as an input to an NN. These days though, modern neural language models (GPT-3, etc.) have sophisticated embeddings, and word2vec has largely fallen out of favor.

  • @abrahamgomez653
    @abrahamgomez653 3 months ago

    Chaos happens

  • @enisten
    @enisten 2 years ago

    3:47 Did you mean a range of i̶n̶p̶u̶t̶s̶ outputs?

  • @omridrori3286
    @omridrori3286 1 year ago

    What about part 3!!!

  • @PapaFlammy69
    @PapaFlammy69 2 years ago

    wb :)

  • @nit235
    @nit235 2 years ago

    Very informative video, thank you a lot.
    Do you have any suggestions for me? I want to learn manim and make videos about how ML algorithms work and their pros and cons.
    Or, if you have a manim learners' class, I can directly enroll to learn.
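
    (For anyone in the same spot, the Manim Community documentation is the usual entry point; a minimal scene looks roughly like this, with the file and class names being arbitrary:)

      # save as hello_manim.py, then run: manim -pql hello_manim.py HelloManim
      from manim import Scene, Circle, Create

      class HelloManim(Scene):
          def construct(self):
              circle = Circle()             # a basic mobject
              self.play(Create(circle))     # animate drawing it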

  • @jayantnema9610
    @jayantnema9610 2 years ago

    Hey, don't you think saying "this is what a NN does under the hood" is an overshoot? All the popular literature in textbooks and the ML community claims it does exactly that, but if this were truly the case, if it were behaving that logically, then adversarial attacks would be impossible. Yet we all know that one-pixel attacks and noise-based attacks are quite frequently achievable.

    The interpretation that layers extract features from the input is true provided the features are not human-interpretable shapes or patterns; to call them so leads to an error. One-pixel and noise-based attacks do not affect the feature as such: the horse is still a horse if you change twentyish pixels out of a thousand, but the NN suddenly starts saying it is a dog with 99% confidence. If it were really extracting patterns as humans understand them, it would never make that error. Humans have 100% accuracy and immunity against some twenty pixels changing out of a thousand because we extract patterns; the NN does not, and if it did, it should be immune too.

    This means the popular understanding is still incomplete, and it would be wrong to say anything definitive about how a NN works under the hood. You can find multiple completely different sets of weights that all give excellent classification accuracy, which means the NN is interpreting the spiral in its own way, not as the human-style five zones with a nonlinear boundary; in the human reading there is only one interpretation logically possible, which fails to explain how multiple sets of weights, not at all close or alike, can all give solid accuracy.
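
    (For reference, the noise-based attacks mentioned here are typically gradient-based rather than GAN-generated; the classic example is the fast gradient sign method, which nudges an input x in the direction that most increases the loss L:)

      x_{\text{adv}} = x + \epsilon \cdot \operatorname{sign}\!\left( \nabla_x L(\theta, x, y) \right)

    A small \epsilon is often enough to flip the prediction while leaving the image visually unchanged.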

  • @TimmacTR
    @TimmacTR 2 years ago +1

    What the.....

  • @revimfadli4666
    @revimfadli4666 6 months ago

    But salty redditors say this isn't how the thing works at all
    (they deleted their comments in shame after I asked for elaboration)

    • @vcubingx
      @vcubingx  6 months ago

      Haha, sorry, but what redditors? What post are you talking about? Kinda curious.

  • @Mehrdadkh87
    @Mehrdadkh87 1 year ago

    Hm

  • @jamesjones8487
    @jamesjones8487 1 year ago

    I finally realize that I am a useless stupid fool.

  • @OrenLikes
    @OrenLikes 7 months ago

    You said "softmax is not a version of argmax" and then you say "softmax is a smoother version of argmax" - make up your mind!

  • @usama57926
    @usama57926 2 years ago

    When is the 3rd part coming?

  • @tomoki-v6o
    @tomoki-v6o 2 years ago

    Finally

  • @sythatsokmontrey8879
    @sythatsokmontrey8879 2 years ago

    Finally