Timestamps
00:00:00 Neural Networks I
00:00:39 Neural networks learn a function
00:03:34 Why we need a bias
00:04:49 Why we need a non-linearity
00:05:55 The main building block of neural networks
00:09:17 Combining units into neural networks
00:11:08 Neural networks as matrix operations
00:13:51 Neural network setups and loss functions
00:23:45 Backpropagation: Learning to get better
00:33:45 Neural networks search for transformations
00:34:55 DEMO: Building neural networks from scratch
01:09:29 Neural Networks I recap
This is a fantastic explainer, and man you've got a great set of pipes. I'm bookmarking this video for the next time someone asks me how neural networks work.
The best explanation so far. Awesome slides! Thanks a lot!
omggg, kudos for your efforts!!!!! I really wish you had more subscribers
Wow!! What a great way of explaining. Truly awesome!!
great video so far!
Great explanations!
thx u for your hard work putting out this series of videos
You have taught me more about AI than 2 semesters of AI courses... Simplified a lot
Very helpful set of videos. However, it is unclear how the weights determined for one set of input values X1 and the corresponding expected output value Y1 will hold for any other set of input values X2 and their corresponding output value Y2. In your example, the weights computed for inputs x1=2, x2=3 and expected output y=0 may be different for other inputs and expected outputs.
6:17 or a perceptron? I remember my uni teacher did not like the term "neural networks" because it implies that's how biological brains work, but in reality the two have little to do with each other
Yep, I also like getting away from biology-inspired terminology when I can which is why I prefer "unit" or "node". Regarding "perceptron": at 6:48, I explain the difference between units as we use them today and the classic perceptron from the 1950s (the latter doesn't have a differentiable activation function which is why I didn't include the term).
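That distinction can be sketched in a few lines of code. This is a minimal illustration (not from the video; the example inputs and weights are made up): both compute the same weighted sum plus bias, but the classic perceptron applies a hard step, which has no useful derivative, while a modern unit applies a smooth activation like the sigmoid, so gradients can flow during backpropagation.

```python
import math

# Classic 1950s perceptron: a hard step activation.
# The output jumps from 0 to 1, so it is not differentiable
# and can't be trained with gradient-based backpropagation.
def perceptron(x, w, b):
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if pre >= 0 else 0

# Modern "unit"/"node": same weighted sum, but a smooth,
# differentiable activation (sigmoid here) in place of the step.
def sigmoid_unit(x, w, b):
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-pre))

# Made-up example values for illustration only.
x, w, b = [2.0, 3.0], [0.5, -0.25], 0.1
print(perceptron(x, w, b))    # hard 0/1 decision
print(sigmoid_unit(x, w, b))  # smooth value in (0, 1)
```

The only difference is the activation function; that single change is what makes modern units trainable by gradient descent.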
can you send me the slides?