I'm sorry this video didn't do so well with clicks; don't let that discourage you from making more beautiful explanations :) There will always be people having their aha moments through you
Intellectual videos don't really get many views. The cat and dog videos will always satisfy the greater population, providing easy dopamine hits to their reptilian brains.
Where's part 3? I love this series! ❤
The prodigal son returns.
my mind is blown, this is so simple and elegant, thank you for taking the time to explain neural networks and linear transformations, this is going to be one of those videos I watch over and over until I really grok it!
Thank you for making these videos, absolute gems. People like you make YouTube worth it.
Once again WOW this is the best visualization of neural networks I've ever seen, and I've learned tremendously from it. Please make more videos!!
Hands down THE most lucid explanation of NN I've seen 💯 Sharing it with my CompSci group.
Also curious to see how you'll visualise back propagation.
This is a rare occasion where I am fortunate to be witnessing excellent progress in technology as it happens. Thank you!
Excited for part 3!
I'm happy I managed to find this video again. 😄 I suddenly felt an urge to rewatch it. I really like the clear visuals of the video. It's a shame you have yet to come out with a part 3, though.
This video has done such a great job of visually breaking down a complex concept with examples!
This video series is fantastic; these concepts never land for me until I see visual spatial context. Keep up the great work, you are greatly appreciated!
this is the best video I’ve ever watched, I’m in tears, you’ve changed my life with your beautiful animations and soothing voice
Grant Junior returns
big up for this outstanding work! as a fellow student of these topics, i want to thank you for the effort put in here. i'm really impressed by both the script and the animations. much love ❤
Wow. I've watched a lot of "how do neural networks work" videos, and this is the first one that has offered me any truly new insight in a long time. Excellent!
Thank you! I appreciate the kind words :)
What a wonderful explanation! I need to know this for my project, and each time I watch something about NNs I'm sure I'm getting better at understanding what's under the cover. But never have I seen such an elegant way to introduce the topic. Bravo!
Thank you!
Video makes it easy for non math folks like me to gain some semblance of an understanding of neural networks. Great job!
Bruce! It's been a whole year. You still owe me 16 pieces of content
I saw this video 11 months after it was published, and it came as a gift. Thank you sooooo much!
Thank you for this informative video. I was one of many waiting for part 2, but didn't get notified, as I was only subscribed and didn't know to also hit the bell to get an update when part 2 was out. I suspect many people will be coming back at odd points in the future to see if part 2 has come out. Hope they enjoy it as much as I have.
Words can't explain how amazing this video is
finally, a video that clears everything up
Thank you
Thank you for making this video, it's awesome. I look forward to seeing more of your work!
Brilliant stuff. I've watched my share of neural network videos, and this one is truly unique
ngl, this is what is called "high-quality content", thank you very much for your efforts 👏😍 🚀
Geez man, your video is very good at visualizing what a NN really does! One piece of advice, if I may... after a complicated or very loaded explanation, as you did with the output of the NN, which is very complex to understand if you know nothing about it, try to summarize it with a simple sentence, just as you did at 7:40. That was beautifully explained, bravo!👍🏻👍🏻👍🏻
Very nice explanations of a complicated topic. The visuals make it more intuitive.
Thanks!
this is some extraordinary explanation
Oh man! Finally 2nd part is here.....
Awesome, a new video!! Really happy to see you making content again!!
YouTube's algorithm should be ashamed of itself!! How could this video have less than 20k views!!!!!!
thanks, had multiple whoa! moments
Great video, thank you!
I've been waiting for this video for several months!
Wow, the video is sooo good, the explanations are wonderful and the animations are so beautiful, I just love it 😍😍
This is awesome!!! Keep posting and keep up the great work.
What a great explanation. Waiting for part 3
Excellent Video!
I enjoyed this video so much.
Thank you Jacob.
Really great! Do you have a tutorial on how you created the visualizations of the different layers? Would it be possible to do that in pure python as well?
Thank you for making these videos.
Amazing video ! Keep it up 👍
Awesome job!
Thanks!
Really great 👍
Man, this video is a masterpiece! congrat!
Really nice video. However, I think you should mention that you use a binarized (one hot) encoding of argmax and not argmax as it is commonly defined, as viewers (like me) could get confused.
Otherwise an excellent video, that conveys the intuition really well! 😀
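For what it's worth, a minimal NumPy sketch of the distinction (illustrative only, not from the video):

```python
import numpy as np

z = np.array([0.2, 1.5, 0.3])

# argmax as commonly defined: the *index* of the largest entry.
print(np.argmax(z))                   # 1

# The binarized (one-hot) version used in the video: a vector
# with a 1 at that index and 0 everywhere else.
print(np.eye(len(z))[np.argmax(z)])   # [0. 1. 0.]
```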
Good point, I'll include the terminology next time
Amazing explanation!!..
Simply Awesome 🔥🔥🔥🔥
Thank you for this great video
Nice video. How do you make the white edge border of a scene? (like in Recap Part 1 scene)
Is this true?:
In every other resource I've come across, the activation function is applied within a single neuron, so it is an R -> R function. But in order to calculate softmax, you need the whole vector of values in the y neurons (the output of the last linear calculation). So it is basically applied to a layer, not just to one value.
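That's right: most activations (ReLU, sigmoid, tanh) act elementwise, while softmax needs the whole layer. A minimal NumPy sketch of the difference (illustrative, not from the video):

```python
import numpy as np

def relu(z):
    # Elementwise activation: R -> R, applied to each neuron independently.
    return np.maximum(0.0, z)

def softmax(z):
    # Layer-wise activation: R^n -> R^n, needs the whole vector at once.
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])   # outputs of the last linear layer
p = softmax(z)
print(p)          # ~[0.659 0.242 0.099]
print(p.sum())    # 1.0 -- a probability distribution over the classes
```

Note that p.sum() is 1, which is also the point raised below about the softmax outputs stacking up to 100%.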
Excellent! Take your like! 👍😉
I've missed this.
return of the king
But when will hopeful69420 return
5:25 When describing that the sum of the array resulting from softmax equals 1, I think the visual should communicate that too, for example by stacking all of the lines on top of each other, up to a value of 1 or 100%. Don't just rely on words.
Otherwise great video, thank you.
banger
At 10:49, aren't the x and y coordinates of the plot the output values of the second-to-last layer of the NN?
Can I use your mathematical apparatus to investigate the physical processes of metaphysics?
I am looking for a mathematical apparatus capable of working with metaphysical phenomena, i.e. metamathematics!!
PART 3 PART 3 PART 3
Finally
Intro
part 1 Funny Galaxy
part 2 Swastika
part 3 Ending of Evangelion
I am trying to visualise how the neural network transforms the input space into a linearly separable space, layer by layer, on a new basic dataset.
When you come back after 2¹⁰ years
I want to ask: since a neural net approximates a function over a particular domain interval, what will happen if it gets an input outside that domain during testing?
Great! Can you explain how you produce these animations? Is there any software you used?
I use manim
holy shit!
9:13 grid
Goat cubing x
w12 reads as the first weight of the second input?
this is confusing!
it should be w21 => from input x2, we look at w1 (which, obviously, goes to output 1)!
where were you at 8:50? in University?
How would a neural network handle categorical variables?
As inputs? One way is to have each input be a vector of dimension n, where n is the number of categories. Then, for each input, set the element at the category's index to 1 and the rest to 0. For example, if my input were a 4-category variable of cat, dog, wolf, or tiger, then the input cat could be {1, 0, 0, 0}. See "one-hot encoding" if you're interested
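A minimal sketch of that encoding in NumPy (the categories are just the example above):

```python
import numpy as np

categories = ["cat", "dog", "wolf", "tiger"]

def one_hot(value, categories):
    # A vector of zeros with a 1 at the category's index.
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

print(one_hot("cat", categories))   # [1. 0. 0. 0.]
print(one_hot("wolf", categories))  # [0. 0. 1. 0.]
```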
There are plenty of other ways. In the case of NLP (which is my domain at the moment), we want to be able to encode tokens (some sequence of characters) into input vectors. An older method for this is word2vec, which converts words to vectors based on context. This allows us to assign each word to some input vector and pass each vector along as input to an NN. These days, though, modern neural language models (GPT3, etc.) have sophisticated embeddings, and word2vec has largely fallen out of favor
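If you want to play with the older approach, here is a toy word2vec sketch using the gensim library (assuming gensim is installed; the corpus and parameters are made up for illustration):

```python
from gensim.models import Word2Vec

# Tiny toy corpus: each "sentence" is a list of tokens.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# Each word gets a dense vector learned from the contexts it appears in.
model = Word2Vec(corpus, vector_size=8, window=2, min_count=1, seed=0)

vec = model.wv["cat"]   # the 8-dimensional input vector for "cat"
print(vec.shape)        # (8,)
```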
Chaos happens
3:47 Did you mean a range of i̶n̶p̶u̶t̶s̶ outputs?
What about part 3!!!
wb :)
Thanks:)
Very informative video
Thank you a lot
Do you have any suggestions for me? I want to learn manim and make videos about how ML algorithms work, and their pros and cons.
Or, if you have a class for manim learners, I can directly enroll to learn.
hey, don't you think saying "this is what a NN does under the hood" is an overshoot? I mean, all the popular literature in textbooks and the whole ML community also claims that it does exactly that, but if this were truly the case, if it were behaving that logically, then adversarial attacks would be impossible. But we all know that one-pixel attacks and noise-based attacks are quite frequently achievable by GANs.

The interpretation that layers extract features from the input is true provided the features are not the human-interpretable shapes or patterns; to call them so leads to an error. One-pixel attacks and noise-based attacks do not affect the feature as such: the horse is still a horse if you change twentyish pixels out of a 1000, but the NN suddenly starts saying it is a dog with 99% confidence. If it were really extracting features, as in patterns as humans understand them, it would never even make that error. Humans have 100% accuracy and immunity against some twenty pixels changing out of a 1000 because we extract patterns. The NN does not; if it did, it should also be immune. But it is not.

This means that the popular understanding is still incomplete, and it would be wrong to say anything about how a NN works under the hood, since you can find multiple completely different sets of weights and still get excellent classification accuracy. This means the NN is interpreting the spiral in its own way, not as the human-style five zones with a nonlinear boundary, because human-style there is only one interpretation logically possible. That fails to explain how we can get multiple sets of weights, not at all close or alike, that still give solid accuracy.
What the.....
But salty redditors say this isn't how the thing works at all
(they deleted their comments in shame after I asked for elaboration)
Haha, sorry, but what redditors? What post are you talking about? Kinda curious
Hm
I finally realize that I am a useless stupid fool.
you said "softmax is not a version of argmax" and then you say "softmax is a smoother version of argmax" - make up your mind!
When is part 3 coming
Finally