The Neural Network, A Visual Introduction

  • Published Nov 28, 2024

COMMENTS • 274

  • @vcubingx · 3 years ago +10

    Part 2 is out! ua-cam.com/video/-at7SLoVK_I/v-deo.html

    • @Tothefutureand · 1 year ago

      We are looking for Part 3. Thx for sharing your experience and knowledge.

  • @unoriginalusernameno999 · 4 years ago +157

    Did you just say you got Yann LeCun to help you!!!! He's got a TURING award boi!

  • @nirbhay.8400k · 3 years ago +6

    This Deep Learning Series will be a life-saver for many!

  • @tonywang4431 · 4 years ago +39

    When you first showed 10:32, I was thinking that ReLUs are very bad because they collapse data too much and make points indistinguishable. However, you later showed the 3D case at 11:49, which was very insightful for me. When data lies on a low-dimensional manifold of a high-dimensional space, the 11:49 picture is probably more accurate. In this case, ReLUs don't actually collapse data in such a bad way.

    • @postvideo97 · 4 years ago +2

      ReLUs don't "need" to collapse data; they only collapse what is necessary. If you think in terms of linear combinations of functions, 2 ReLUs can be combined into an "S"-shaped sigmoid-like function, and 4 ReLUs can be combined to form a "bell curve" function. Both are very crude approximations, but as you increase the number of ReLUs they become smoother. An infinite number of ReLUs (differently scaled and translated) can approximate any function.
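
      To make that combination concrete, here is a minimal numpy sketch of the idea in the comment above (the breakpoints at ±1 and ±2 are illustrative choices, not taken from the video):

          import numpy as np

          def relu(x):
              return np.maximum(0.0, x)

          x = np.linspace(-4, 4, 801)

          # Two ReLUs: a ramp that saturates, i.e. a crude "S" shape.
          s_curve = relu(x + 1) - relu(x - 1)

          # Four ReLUs: ramp up, plateau, ramp down -> a crude "bell" bump.
          bump = relu(x + 2) - relu(x + 1) - relu(x - 1) + relu(x - 2)

      Using more, finer-spaced ReLUs smooths these shapes out, which is the intuition behind approximating arbitrary functions.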

  • @aaronchan6447 · 4 years ago +16

    This is actually so good! You've explained it so very clearly and left no gaps in the logic.
    I have been wanting to get into machine learning, and you have helped immensely.

  • @alexandrepv · 4 years ago +9

    I always tried to visualise the decision hyperplane in the data's domain, but this has been very insightful: visualising the data in the projected, non-linear domain. Brilliant video! :)

  • @charlsssoooo · 4 years ago +5

    I love these videos. All my life I was considered mathematically stupid. I can't read mathematical notation well. I failed pre-calc. But now, as an adult, watching these visual videos has enabled me to understand those concepts that were impenetrable to me when I was younger.

  • @shivChitinous · 4 years ago +35

    This is great! Makes the analogy with biological neurons crystal clear for me for the first time 😄

    • @vcubingx · 4 years ago +1

      Thanks! I'm happy that you understood it!

  • @mansfiem · 4 years ago +11

    Great stuff! I'm familiar enough to understand the basics, but I love that this is visually done.

    • @vcubingx · 4 years ago +1

      Glad you liked it!

  • @saidelcielo4916 · 1 year ago

    WOW. I've been studying neural networks for a bit now, but this made me see them in a new way. PLEASE MAKE MORE VIDEOS!!!!

  • @PowerhouseCell · 4 years ago +21

    Really well done! It's cool to see the differences in the way you covered things compared to 3b1b. Can't wait to see more :D

  • @FedeGianca · 4 years ago +4

    This is great work from you, congratulations!! Also a big thank you to Grant Sanderson, from @3blue1brown, for manim. Both of you make quality education so much more fun, as it should be. So thanks a lot!

  • @suleimansiddiqui2468 · 4 years ago +1

    Eagerly waiting for chapter 2

  • @jamilahmed2926 · 2 years ago

    By far the most essential visualization of a neural net I've seen to date! 🤩

  • @alecunico · 4 years ago +2

    Great man, thank you so much! Can't wait to see the chapter 2!

  • @COOLZZist · 2 years ago

    Wow, an amazing visualization of non-linear functions and of how data is transformed.

  • @aidosmaulsharif9570 · 4 years ago +5

    Man, it is just so high-level. Your explanation, visuals, and the topic itself are great. Subscribed and waiting for the next chapters!!!

    • @vcubingx · 4 years ago +1

      Thank you very much!

  • @anupriyamagesh · 3 years ago

    Neural networks have never looked simpler than these animations made them. Fantastic job!

  • @rohanshetty1016 · 4 years ago +4

    This is awesome! I had half-baked knowledge of all these topics before; after watching this video it's crystal clear! You made it look so simple.
    Thank you!

    • @brendawilliams8062 · 3 years ago

      I liked it too. I couldn't help noticing the accordion action associated with the square roots being used with two lengths to scale them.

  • @ragha1988 · 4 years ago +2

    Amazing visualization. Looking forward to next videos in the series.

  • @morthim · 4 years ago

    one of the better talks on the topic. well done

  • @TheNostradE3 · 4 years ago +4

    Really good content, one of the clearest explanations I've heard about neural networks so far! Keep up the good work, cannot wait for the following videos!

  • @ldx8492 · 4 years ago

    Masterfully done, you managed to explain it in "simple terms, but not simpler"

  • @darmilatron · 1 year ago

    Thank you for your video, this is one of the best videos explaining neural networks that I have seen. Good Work

  • @gregvial · 4 years ago

    Great video! You were right, even as an experienced user of neural networks it helped me see things in a different way

  • @rajeshviky · 4 years ago

    One of the best and most intuitive ways of describing a neural network! You took it to the next level... looking forward to more from you :)

  • @mahdiamrollahi8456 · 3 years ago

    I come from the new course by Alfredo and I dare say this was fantastic. Regards....

  • @electronutlabs · 4 years ago

    Fabulous! Looking forward to the next in the series.

  • @viveksurve5031 · 4 years ago +2

    RT from THE three blue one brown, great work dude!

  • @YitzharVered · 4 years ago +2

    Wow! I've done things with neural networks before without even understanding the actual math behind it! Very enlightening!

  • @gustavoexel5569 · 4 years ago +23

    It would be nice to pay attention to the colors in the plots. I am colorblind, and at 6:52 it's almost impossible to see the boundary between the two colors.

    • @tristunalekzander5608 · 4 years ago +5

      Well let me tell you they were beautiful and vibrant.

    • @deformercr6680 · 4 years ago

      @@tristunalekzander5608 ... Talk about rubbing salt in a wound...

    • @deformercr6680 · 4 years ago

      @ゴゴ Joji Joestar ゴゴ it's not pity, it's being considerate. If you're eating some delicious food right in front of someone who can't eat, and then you start telling the person how tasty the meal is... I would say that's a little inconsiderate.

    • @swoon86 · 3 years ago

      It's pretty much like the flag of Japan, with different colors: the inner circle is composed of blue dots and the rest of red dots.

  • @t.gokalpelacmaz584 · 4 years ago +2

    You have really developed man. Great progress and keep it up.

  • @patrickryckman3867 · 4 years ago

    Absolutely awesome. Very informative and helpful for my visual mind.
    One thing I would love to see alongside your video: at the start you showed 3 neurons with 3 hidden layers. I would love to see a small dataset with perhaps 3 features and follow it through the whole video, using real numbers so we could follow along and even work it out on paper if we wanted to.
    Anyways, thank you so much for your awesome work.
    Subscribed!

  • @doyourealise · 3 years ago

    Who came here from Canziani sir's course? :) Loved the visualization bro

  • @Dwika34 · 1 month ago

    broo this is the best explanation so far thanks

  • @rgoddard · 4 years ago

    Great video. Really looking forward to the series!

  • @nikhilyewale2639 · 3 years ago

    Nice video.. Waiting for next chapter on visualising neural-nets !

    • @vcubingx · 3 years ago

      Working on it! It should be out soon

  • @raresmircea · 4 years ago

    Kids today live at the beginning of the golden age of learning. I have a vague memory of reading a quote from Einstein where he said that art and science will eventually merge to bring a new way of representing reality. If he really said that, he was right! Our visual processing capacity is vast, and if we find sophisticated visual ways of conveying complex relations (through mediums like YouTube, CGI, VR, AR) we will bring about kids who have superhuman abilities of grasping reality.

  • @VizExplains · 4 years ago +15

    Woahh, incredible! Happy to come this early :D

  • @NovaWarrior77 · 4 years ago +1

    Another banger Vivek!

  • @amanasci2481 · 3 years ago +14

    When is next part coming? Any updates?

    • @AnishBhethanabotla · 3 years ago +1

      He's in college now.

    • @vcubingx · 3 years ago +9

      @@AnishBhethanabotla I am yes, but I'm currently working on the next part! I've scripted, recorded and made most of the animations, so I have some editing and reviewing to go but it should be out soon!

  • @AWESOMEEVERYDAY101 · 4 years ago +1

    This is too good. 3B1B vibes man

  • @arazsharma4781 · 4 years ago

    Superb!! Eagerly waiting for the next videos! :D

  • @RobotProctor · 4 years ago +1

    Is this Manim? Nice work!

  • @donbasti · 3 years ago

    Please make more, these are amazing!

  • @lightinrhythm8548 · 5 months ago

    Insane 🎉🎉 More strong visualisation videos, please!

  • @mohsin-ashraf · 4 years ago

    Still waiting for the next precious videos in this series, please update.

  • @sifiso5055 · 4 years ago +1

    Another excellent video🙌

  • @kavinyudhitia · 2 years ago

    OOH MY GOSHHH THIS IS GREAT CONTENT. thanks a lot!!!!

  • @MrDaanjanssen · 4 years ago

    Superb animation, well done

  • @MrDark-fm4gp · 4 years ago

    omg, I am so glad I found this channel

  • @Originalimoc · 4 years ago

    Interesting, looking forward to part 2😉

  • @gabrielguimaraes5628 · 4 years ago

    Awesome video!!!
    Can’t wait for the next ones!

  • @mjf1422 · 4 years ago

    This is amazing stuff. So many things I was able to understand that I couldn't get my head around before. Thank you so much!!! 😊

  • @user-vn9ld2ce1s · 3 years ago

    Sir, you have earned my subscription. Outstanding video.

  • @RedOneM · 4 years ago +2

    Thanks a lot, this will definitely come in handy for my studies in just a bit over a month.

  • @insightfool · 4 years ago +1

    Great work. I'm still left wanting a more coarse overview that explains how AI is not simply a series of input->hidden->output, by way of some narrative discussion and/or metaphor. I'd want that before I go deep into the linear algebra, and could then reference the steps in the matrix math discussion against what was described in the initial narrative.

  • @ShivamVerma-gq2sm · 4 years ago

    Thanks a lot for such a vivid explanation ! Looking forward to more such content

  • @_coderizon · 8 months ago +2

    Thank you! May I use the animation from 9:45 to 11:15 from this video to create my own video on this linear transformation?

  • @govamurali2309 · 3 years ago

    @12:09 - Nice video, one question: why didn't the sigmoid function squish the output into the unit area?

  • @ricardoroxas7690 · 4 years ago +4

    Nice! Looking forward to this series. 😁 Imagine if we could see an animation of an actual handwritten number image transformed into the decision "square"

    • @vcubingx · 4 years ago +2

      Good idea! I believe distill.pub has something like this

    • @lukejagg · 4 years ago

      He’s probably using publicly available data, so I doubt he’ll do an animation like that.

  • @plutophy1242 · 1 year ago

    love your series, it's so great

  • @mohammedbelgoumri · 2 years ago

    Great video! Can you please explain how the Scenes are added to ScreenRectangles in the intro? I checked the code on GitHub and found nothing. Is it just done using video editing software? Thanks in advance.

  • @sitrakaforler8696 · 1 year ago

    Awesome content man. Bravo!

  • @SergioUribe · 4 years ago

    awesome explanation and video, kudos!

    • @vcubingx · 3 years ago

      Glad you liked it!

  • @devsutong · 4 years ago +1

    It would be really great if this guy worked on these videos with 3b1b... 3b1b is a very good educator.

  • @youtubepooppismo5284 · 4 years ago +2

    Handwritten digits are usually grayscale images. Each pixel can be written in the form rgb(a, b, c), but since it's a grayscale image, a = b = c, so I can represent each pixel with only one numerical value. Given the matrix of those values I can then shove it into a vector and give it to the neural network.
    My question is: given a regular image with three (or even four) values per pixel, how do I convert that into a vector? Do I just put the values next to each other, or do I need to convert the rgb values into a single integer? The second one seems more reasonable, since the first would add additional perceptrons to the neural network.

    • @vcubingx · 4 years ago

      One of the beauties of this idea of "learning" is that we don't actually need to worry about how we input the data, it just figures it out! Sometimes we don't even know what the data represents. As long as all the data is converted in the same way, it doesn't really matter! Here's an example: one time I used a neural network to play Super Mario Bros. The input to the network would've been a huge vector (iirc >65,000 inputs), which my laptop couldn't handle. Instead, I used the RAM of the console, which was just 128 inputs (not sure, I don't fully remember the number). However, I had no idea what each number represented! A lot of the time, our inputs are things that humans can't make sense of, but the neural network finds patterns in them.
      In reality, we don't use this "multilayer perceptron" I talked about in the video on colored images. Yann LeCun, the guy I talked about in the end, came up with something called "LeNet", or the convolutional neural network. I plan to make a similar video on the CNN, but it's gonna be a while before that comes out, so I encourage you to look it up! There are some fantastic resources on the web.

    • @vcubingx · 4 years ago

      To answer your question, it depends. The first option of just stretching it out would be ideal, because the second one can be a lossy compression. What I mean by that is that if you add them, an RGB value of (60,0,0) is the same as (0,60,0). (See the sketch after this thread.)

    • @youtubepooppismo5284 · 4 years ago

      @@vcubingx Thanks for the reply and the great advice, I will definitely follow it.
      However, I think you misunderstood my second "guess", because that wouldn't be a lossy compression.
      What I mean is just converting an rgb value to an integer that represents it uniquely:
      rgb(0, 0, 0) -> 0
      rgb(255, 255, 255) -> 16777215 (256^3 - 1)
      which is literally just counting every possible rgb combination. If I remember correctly, there are some pretty easy binary operations to make this conversion.
      Would this also work? Because if so, it'd be a smaller vector. Or it could just mess everything up hehe.
      Anyway, I will begin to seriously study this topic, also because I have a quite strong mathematical background, so it shouldn't be too hard.
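
      A minimal numpy sketch of the three encodings discussed in this thread (the tiny 2x2 image is illustrative):

          import numpy as np

          # A 2x2 "image" with 3 channels (values 0-255).
          img = np.array([[[60, 0, 0], [0, 60, 0]],
                          [[10, 20, 30], [255, 255, 255]]], dtype=np.uint8)

          # Option 1: lay the channel values side by side -> a 12-dim vector.
          flat = img.reshape(-1).astype(np.float32) / 255.0

          # Adding channels is lossy: (60, 0, 0) and (0, 60, 0) both give 60.
          lossy = img.sum(axis=-1)

          # Packing r*256^2 + g*256 + b is unique (no information lost), but
          # nearby colors map to wildly different integers, so the network
          # loses the useful property that similar inputs mean similar colors.
          r = img[..., 0].astype(np.int64)
          g = img[..., 1].astype(np.int64)
          b = img[..., 2].astype(np.int64)
          packed = r * 256**2 + g * 256 + b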

  • @aryanbhatia6992 · 4 years ago +3

    Thank you so much for making videos on deep learning, much needed.

  • @vasicnikola7674 · 4 years ago +2

    Simply beautiful. Thank you

  • @agb2557 · 4 years ago

    Great! Looking forward to the rest

  • @idos5049 · 4 years ago

    Great video, love the animations!

  • @parthpatadiya9197 · 4 years ago +1

    Hi, this explanation & visualization is awesome... Can you please tell which tools you used to create that 3D visualization? I'm desperately looking for that to add to my college presentation and some of my lectures 🙂 Thanks in advance!

  • @AndreiMargeloiu · 3 years ago +1

    Part 2 and 3 please!

  • @vtrandal · 3 years ago

    Excellent. Thank you!

  • @chrisr.3321 · 1 year ago

    this is suuuuuuuuuch a great video! Thanks

  • @victorvilanova3505 · 3 years ago

    Awesome job! I love it!

  • @tiosam1426 · 4 years ago +4

    Thank you, YouTube Algorithm, the Wise.

  • @siddharthsahu1130 · 3 years ago +1

    Thanks a lot!
    Where is the next video?

  • @DasGrosseFressen · 3 years ago

    Nice! Are the other 2 parts uploaded already?

    • @vcubingx · 3 years ago +1

      Unfortunately no, I'm working to release them!

  • @mahdiamrollahi8456 · 3 years ago

    What happens to the samples in the third quadrant? Do all of them aggregate to the central point? How does this aggregation work, given what we lose about these samples?

  • @mahdiamrollahi8456 · 3 years ago

    At 10:30, when you apply ReLU to the graph, how did you calculate which samples should stay and which ones should be cut?
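
    For what it's worth, nothing is decided per sample: the same elementwise max(0, ·) is applied to every point, so a coordinate survives exactly when it is positive. This also answers the third-quadrant question above, since both coordinates are negative there and those samples all collapse to the origin. A small numpy sketch (points chosen for illustration):

        import numpy as np

        def relu(p):
            return np.maximum(0.0, p)

        pts = np.array([[ 2.0,  1.0],   # 1st quadrant: unchanged
                        [-2.0,  1.0],   # 2nd quadrant: x clamped to 0
                        [-2.0, -1.0],   # 3rd quadrant: collapses to origin
                        [ 2.0, -1.0]])  # 4th quadrant: y clamped to 0

        print(relu(pts))
        # [[2. 1.]
        #  [0. 1.]
        #  [0. 0.]
        #  [2. 0.]]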

  • @woowooNeedsFaith · 4 years ago

    11:41 - This "folding" was a bit misleading as a term. To me, folding implies that all the distances and angles between the points on the folded part are preserved (a rotation around the fold line). "Projection" seems a (more) accurate description of this phenomenon.

  • @chaostrottel_hdaufdutube8144 · 4 years ago +1

    Better than the 3b1b nn series

  • @yashkatare3303 · 4 years ago +1

    You explained it really well. Really like the videos. Plus your voice is as soothing as Sal's.

    • @vcubingx · 4 years ago +1

      You think? Haha thank you so much, I don't think many people think that :)

  • @usama57926 · 4 years ago +2

    When will the second video be out?

  • @SamuelJFord · 4 years ago

    Subscribed. Excellent video!

  • @perlindholm4129 · 4 years ago

    Looking at the holes between the lines between the layers, I'm wondering if these are holes in the maximum logic achieved. If you blur the lines, do you get more generalization? Imagine light as light and not as laser pointers; that is, calculate with raytracing and scattering, like a 3D render.

  • @KSK986 · 4 years ago +1

    Thanks for these videos. Visualization provides powerful ways of understanding and these videos are of great help.

  • @rohitranjan965 · 4 years ago

    Amen from the community 🙌🙌

  • @parmarsuraj99 · 4 years ago +2

    Beautiful and intuitive!

  • @lalitvinde1441 · 3 years ago

    Broooo, this is an awesome visualisation video 😍 It makes the foggy image of a neural network fully crystal clear. I'm really waiting for the next chapter bro, when are you gonna upload it?

  • @majstrstych15 · 3 years ago +1

    When is the next chapter gonna come out?

  • @imranyaaqub1704 · 4 years ago

    Great video. When is part 2 coming out?

  • @tobiascornille · 4 years ago +1

    Very cool! But why is the output after the sigmoid function still in 2D? Doesn't each point get only one number (the probability) as output?

    • @vcubingx · 4 years ago +1

      Good question! The output before the activation function is some x and y coordinate, so it's some vector [x, y]. Remember, since the output layer of the neural network has 2 outputs, we have 2 outputs between 0 and 1, essentially forming [sigma(x), sigma(y)]. (See the sketch after this thread.)

    • @tobiascornille · 4 years ago

      @@vcubingx aah that makes sense! Thanks!
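
      A tiny sketch of the elementwise application described in this thread (the values are illustrative):

          import numpy as np

          def sigmoid(v):
              return 1.0 / (1.0 + np.exp(-v))

          out = np.array([2.0, -1.0])   # the 2 pre-activation outputs [x, y]
          print(sigmoid(out))           # [0.88079708 0.26894142]
          # Still a 2D point, now squished into the unit square (0,1) x (0,1).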

  • @felixakwerh5189 · 4 years ago +4

    Since you mentioned sigmoid and ReLU, I was hoping you would mention the softmax activation function and probably draw the graph as well. This is a good video nonetheless.
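
    For anyone curious, softmax has a standard definition; a minimal numpy sketch (not covered in the video):

        import numpy as np

        def softmax(v):
            e = np.exp(v - v.max())   # subtract the max for numerical stability
            return e / e.sum()

        print(softmax(np.array([2.0, 1.0, 0.1])))
        # [0.65900114 0.24243297 0.09856589]  (non-negative, sums to 1)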

  • @luketyler5728 · 4 years ago

    Absolutely fantastic!

  • @mahdiamrollahi8456 · 3 years ago

    Hi, is this true: the W vector is always perpendicular to the separating plane?
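
    A short check of this claim (standard linear algebra, not specific to the video): the separating plane is the set of points x where w·x + b = 0, so for any two points x1, x2 on it,

        w·x1 + b = 0  and  w·x2 + b = 0   =>   w·(x1 - x2) = 0.

    Every direction lying in the plane is therefore orthogonal to w, so yes, w is perpendicular to the separating plane.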

  • @rohitranjan965 · 4 years ago +1

    You just found a complex non-linear boundary in my random head 🤯

  • @lesptitsoiseaux · 4 years ago +1

    Hi! May I ask, my 12-year-old son wants to know (we are watching this together): what software are you using to make those truly wonderful animations? Cheers from Vancouver!

    • @asmwarriorYT · 2 years ago

      I like this video. Thanks for sharing.
      I'd also like to ask the same question: which software or tool do you use to create the animations?

  • @photogyulai · 8 months ago

    Nice video dude! How the hell did you make such complex animations? :-)