Neural Networks from Scratch - P.5 Hidden Layer Activation Functions
- Published May 13, 2020
- Neural Networks from Scratch book, access the draft now: nnfs.io
NNFSiX Github: github.com/Sentdex/NNfSiX
Playlist for this series: • Neural Networks from S...
Spiral data function: gist.github.com/Sentdex/454cb...
Python 3 basics: pythonprogramming.net/introdu...
Intermediate Python (w/ OOP): pythonprogramming.net/introdu...
Mug link for fellow mug aficionados: amzn.to/3bvkZ6B
Channel membership: / @sentdex
Discord: / discord
Support the content: pythonprogramming.net/support...
Twitter: / sentdex
Instagram: / sentdex
Facebook: / pythonprogramming.net
Twitch: / sentdex
#nnfs #python #neuralnetworks
Bro the effort he puts in to make us understand this stuff is highly admirable. Thanks for doing this man. Will be waiting for pt. 6
When will pt 6 be released?
@@mayurpanpaliya He is waiting for you to buy the book haha
I am excited for part 6
@@bossragegamer4081 5 months now :(
6 months
Dude, you're a legend. Bought the ebook Pre-Order yesterday, absolutely CANNOT WAIT for full release. My favourite thing about your videos, is your enthusiasm. For example, at 8:38, "What's so cool about ReLU is it's ALMOST linear, it's sooooo close to being linear, but yet that little itty-bitty bit of that rectified clipping at 0, is exactly what makes it powerful; as powerful as a sigmoid activation function, super fast, but this is what makes it work, and it's so cool! So WHY does it work??" Dude, I've never been so PUMPED to learn from someone with such enthusiasm in my LIFE. You take all the time you need to do this man, do it your way, and take your time, and you'll change the world. Thank you so much. Much love from Ireland. edit: spellings
Seriously! PUMPED encompasses all my feels as I follow along.
18:17 Seeing the neurons fire when activated and die when deactivated really helped me see what actually goes on under the hood of a neural network. Thanks for this really helpful animation and the whole nnfs initiative.
Can you please explain how the activation point gets changed by changing the bias? Doesn't that contradict the activation function, which says y = x only when x > 0?
@@Orchishman The bias here is essentially setting the activation point, because y = max(0, max(0, -x + 0.5) + 0.48) gives you 0.48 for any x greater than or equal to 0.5, and that 0.48 serves as the lower bound of the function.
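(A quick numeric sketch of that composed pair of ReLU neurons in plain Python; the weights of -1 and 1 are just my reading of the formula above, so treat them as assumptions:)

def pair_output(x):
    # first neuron: weight -1, bias 0.5, then ReLU
    h = max(0.0, -x + 0.5)
    # second neuron: weight 1, bias 0.48, then ReLU
    return max(0.0, h + 0.48)

for x in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    print(x, pair_output(x))  # output stays pinned at 0.48 once x >= 0.5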
@@Orchishman I created a graph so that you can play with the parameters and see for yourself how this is actually happening. I considered the first two rows and the last row of neurons only to make it simple (so 6 neurons in total in the hidden layer). I have numbered the neurons such that first neuron of first row has subscript 11, second neuron of first row has subscript 12 and so on (so, for example, the second neuron of the 8th row has subscript 82).
Now, to simulate the movement that is happening in the video, adjust the slider for the variable w_22 and see what happens to the plot. You will see where the area of effect of the second neuron comes into play. You can also adjust the other sliders for weights and biases to see their influence on the output.
Here is the link: www.desmos.com/calculator/gruatlyner
When you open the plot, the values of the weights and biases are the same as seen in the video till 17:01.
I hope this helps.
40 mins ! Oh boy this is gonna be good
Yes😍
it IS good
This is by far the best explanation of how neural nets work that I have ever found. This should be its own standalone teaching. The sine wave example with visuals - perfect! Thanks so much.
Hey sentdex, since the other parts are still in the works I’d like to give some feedback. Thanks for doing all this, the graphics help a ton to see how everything works. The only suggestion is to explain why the different concepts even exist, with some real-life examples. This looks like it would be great for someone experienced who has used activation functions and everything else you discuss, and now would like to closely see how it works. For a noob like me, it is not clear why they even exist, and it feels a bit like we are just listing different concepts without a clear picture of why, and what we are trying to achieve with this network. For example, when you were showing how well the ReLU fits the data, it's not clear if that is actually desirable, since it seems to overfit the data.
It is all the result of years of experiments; scientists just try to look for patterns.
Hi, here's a link to my video where I've explained how ReLU helps in fitting lines to the data: ua-cam.com/video/9t4TD5mcWSI/v-deo.html
For anyone that sees this in the future and agrees: this series generally balances practicality with understanding. I would heavily recommend also giving 3Blue1Brown's series on neural networks a look, as that focuses far more on understanding and doesn't really go into code that much.
It's been a month, still waiting for part 6.
Expect something huge.
Waiting also :), it will be worth the wait though for his quality.
Yes, please
I emailed Harrison about it, he says that he is finishing the draft (which is nearly complete) before continuing the series.
@@Nightmare-or2yd yayyyyy
I really struggled with the explanation of feature sets / features / samples / classes; I definitely don't think I fully get it (first time that has happened in this series so far!). The animation you mentioned would for sure help!
For the spiral dataset,
- features are the x-coordinates (x) and y-coordinates (y) of the points
- In the code, there are 300 x and 300 y values associated with the 300 points
- feature sets are the pairs (x, y) that fully define one point in the dataset
- In the code, there are 300 feature sets
- classes are the labels associated with the points
- In the code, there are 3 classes defined by the colors - red, blue, green - and each feature set (x, y) corresponds to one of these 3 classes (with 100 points each)
- samples are the combination of feature sets and classes that form the dataset
- For example: (x = 0.2, y = -0.5, color = red) and (x = -0.5, y = -0.2, color=blue) are samples from the dataset
Edit: Calling the function X, y = spiral_data(100, 3) creates samples belonging to 3 classes with 100 feature sets each.
X (feature sets) is an array of shape (300, 2) and y (classes) is a vector of size 300.
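(To make the shapes concrete, a tiny sketch; this assumes the spiral_data function from the gist in the description, which the pip-installable nnfs package later exposes as nnfs.datasets.spiral_data:)

from nnfs.datasets import spiral_data  # assumption: using the packaged version of the gist function

X, y = spiral_data(100, 3)   # 100 feature sets per class, 3 classes
print(X.shape)               # (300, 2): 300 feature sets, 2 features each (x and y coordinates)
print(y.shape)               # (300,): one class label (0, 1 or 2) per feature set
print(X[0], y[0])            # one sample = one feature set plus its label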
Same here. That's the only thing so far in this series which confused me.
@@nishantsvnit Ahh so a "feature set" is essentially "the set of features which a sample has" but unlabelled?
@@iAmTheSquidThing You are right. But it is better to not call it "unlabeled" because that is a term used for feature sets that have no labels assigned to them (which was not discussed in the video). In the example in the video, all the feature sets have corresponding labels (i.e., the 300 x,y coordinates belong to one of the 3 colors). So to rephrase your sentence, you can say that feature set and label are the two components that make up a sample. If there is no label, the sample (or feature set) is called unlabeled.
For more information on these terminologies, I would encourage you to see this: developers.google.com/machine-learning/crash-course/framing/ml-terminology#examples
@@nishantsvnit thanks
Bro, I just felt obligated to leave a comment for the perfect video you have made. This was literally the best visualization I have ever seen on youtube. This video deserves an oscar.
Dude. Having been here for the last 5ish years, it's awesome to see how far your production level has come. Always good content, now shinier. Would love to see a video on how the videos themselves are made.
This series (and the book) are incredible! Such an amazing teacher - I can't wait for part 6 :)
This is the first time I understand how to build a neural network. I love the work. My impatient side is wishing that all the videos be made available for this series but this will keep me hooked and awaiting your next post. Amazing job!
Man loved the video! So helpful and easy to learn. Need pt. 6 sooner, too eager to learn about back-propagation and weight/bias adjustments!
My guy cannot decide where to put his camera
I took my first machine learning course last semester and unfortunately all of the activities we did looked like those from the CS231 class you mentioned--no explanation, just code snippets and output. They were doable, but considering it was most students' first foray into Python, it was quite a rough time to say the least. However, I am extraordinarily pleased to have found your channel and this series in particular--your instruction has helped more in the last 5 videos than my entire semester at university. Thanks for doing what you do.
This video just blew my mind. I still haven't bought the NNfS book yet. But this doesn't reflect how much I love to watch and re-watch your videos. This series will probably stay state-of-the-art for a long time. Thank you!
It's wonderful what you're doing! I'm just loving the in-depth knowledge of this course. Although I'm in high school, I'm not finding it difficult to catch on!!👍👍
I know I'm late to the party, but the animations are amazing. I watched the double neuron part probably 20 times with the sound off to figure out what was going on. I had a recommendation for the animation and as I was typing it, I realized that I STILL didn't fully understand what was going on. I've got it now - thank you for the animations! This would be MUCH more difficult without them.
Specifically - the input of the second neuron going "backwards" was bending my brain.
I love how passionate he is throughout all these videos; it brings me joy while learning this subject.
Finally, a video giving clear insight into an activation function.
This is by far the best explanation of activation function I've come across. Really appreciate your work behind this series and getting into the crux of these topics.
I just wanted to thank you for all this stuff. I am in the process of getting a PhD in neuroscience, and artificial neural networks seem like a great tool to help with research. You make it really clear, and unlike other tutorials that tend to just show how to use certain libraries, you really get down to how they actually work. As soon as the book is out I am getting a physical copy!!
Man, this is just beautiful. Thank you and Daniel and the whole team responsible for this. You are bringing beauty into the world.
Hey, can you explain to me what an activation function is at all?
I cannot explain how amazing your way of explaining is. I just watched all of your videos in one go and now I am waiting for another one.
Thank you.
This entire series has been amazing. I really appreciate your effort to simplify and get things to a granular level. Kudos to you.
I'm going to be very good someday at building/training neural nets. It's all because of my curiosity that made me stumble on this fantastic playlist..... now I'm reading your book and practicing (coding after reading between the lines and understanding the theory) and consulting this playlist and several other resources in order to gain a deeper understanding. Thank you so much for being really amazing.
Waiting for P.6 eagerly..
same
@@sandeshtulsani1517 Same
You’re the first person I’ve seen who actually explains how something like ReLU is so helpful and powerful. Looking forward to part 6!
Very good content. Really shows the intuition in how a neural network works. Hopefully pt. 6 comes out soon
This is the best explanation video of how activation functions work on the WWW 🚀. And thank you to the one who put his time and effort into creating such beautiful animations for us. Thank you very much ❤
This week was the hardest to pass of this quarantine! Please don't make us wait so long 🙏🏻🙏🏻🥺🥺
idk mate, probably not a good idea to put pressure on him to upload more
@@Hacker097 He is showing appreciation
Amazing videos, I'm learning a lot that I missed the first time around. I was running into problems with my models not working well with my data and knew it was time to get back to basics. Thanks for making these!
Hey, I am a beginner in machine learning and you genuinely have been an inspiration. Thanks for existing!
Well, that's a long wait. Honestly I'll wait forever. 😂😂
But here it is, finally.♥️
Hey man! Thanks for your awesome videos! I'm interested in this topic and you're explaining it pretty well!
I'm waiting for your next video 😉
Greetings from Germany and keep on producing 🚀
Lol, PaderRiders is interested in nerd stuff XD
It's like my favorite TV series uploaded a new episode; this is gonna be wild.
as always thank u & Daniel :)
Bought your book, but still eagerly waiting for the next video. These are very well produced deep dives that are easy to understand.
Is there any paper for this optimizer? I've never heard of one before. How does it work?
Perhaps the author could help us out.... Hey @Daniel Optimizer Kukiela, please tell us about your optimizer!
Hello Daniel nice to see you here😂😂❤️
You are a legend man
How Does This Help Irene To SLAP ME"?" 👠😋😎
When he was saying optimizer, he was saying that the guy literally did it 'by hand'. So there is no optimizer; it was done by a human :P, if I understood correctly.
I never comment on videos, but I've been following sentdex for the last couple of years and this is amazing. Please keep up the good work, thank you for teaching me so so many things.
Thank you! Will do!
Those 3blue1brown api animations are amazing. MASSIVE production value. That really helped me understand this video to another level, thanks.
Fantastic video! I never understood the need for activation functions, now I get it completely. Incredible work thank you!
Me: Sees new video by sentdex about neural networks
Hand: Invents FTL Travel to click the video
I think an animation would be immensely helpful for absorbing the section about features and classes. Got lost for a while between data set and feature set and feature class
Thank you so much for really making me understand the working of the activation function. After seeing this video, my motivation to learn neural networks skyrocketed. And the effort you put into this video is overwhelming and I really appreciate you from the bottom of my heart. Once again , Thank You ❤️❤️❤️❤️
The detailed explanation and animations of fitting the sine wave are awesome!
I want to get the book, but tbh I'm on the fence; my brain doesn't allow me to sit and go through paper. If this series resumes then I will, because it will be good as a complement, but not as a main means of studying in 2020.
Hi, I noticed that you did not paste the code for generating a dataset in the description. Also thanks for the new video!
The link to the code is there in the description under "Spiral data function"
At 29:27 if you freeze the screen you can see it. I copied the text into Jupyter notebook and it worked.
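(For anyone who doesn't want to transcribe from the screen, the generator in the gist is roughly the CS231n-style spiral; this is a sketch from memory, so double-check it against the linked gist:)

import numpy as np

def spiral_data(points, classes):
    # points feature sets per class, each with 2 features (x and y coordinates)
    X = np.zeros((points * classes, 2))
    y = np.zeros(points * classes, dtype='uint8')
    for class_number in range(classes):
        ix = range(points * class_number, points * (class_number + 1))
        r = np.linspace(0.0, 1, points)  # radius grows outward from the origin
        t = np.linspace(class_number * 4, (class_number + 1) * 4, points) + np.random.randn(points) * 0.2  # angle plus a little noise
        X[ix] = np.c_[r * np.sin(t * 2.5), r * np.cos(t * 2.5)]
        y[ix] = class_number  # class label for this arm of the spiral
    return X, y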
Omg, that was impressive as always, sentdex! The visuals are a big help. Thanks for all of your tutorials. Naturally, I will buy the book to show my appreciation and continue on, as this channel has become the edge of what you can do with Python.
Best tutorial visually, verbally, programmatically and conceptually. Thanks for enlightening us, sentdex.
what optimizer do u use?
noob: ADAM
intellectual: Daniel
Why has NNFS stopped?
Not stopped; sentdex is doing the draft first before uploading the new video, I think.
This series is the best one you've ever done, hands down. Easiest to follow, helpfully illuminated by the manim animation (manimations?). 11/10
Next course, deep q-learning from scratch XD
There has to be a place like heaven inside heaven for you.
aren't you the guy that was faster than light?
@@shauryapatel8372 yeah he was
There are very few resources online for learning, in an intuitive way, what neurons and synapses in silico (nodes and activation functions) do. This video is already 3 years old but still holds the crown, sir.
Loving the series so far! All I do is wait for the next video to come out...
When is the next one coming?
When is part 6 going to be released
He said expect it sometime between June 2022 and December 2038
@@OfficialUA-cam3 that's a long wait dude🙂
@@subratkishoredutta4132 Yes, but Sentdex is a busy man... writing books, running a YouTube channel, maintaining a website, did you know he is raising three different families on two different continents? (that last one is a secret)
@@OfficialUA-cam3 yes he is..
@@OfficialUA-cam3 that last one is wrong, he got a neural network to do the other two
This is amazing, I'm coding along with C# and this is the first time I actually understand how Neural Networks work.
you have remarkable skill in explaining things concisely yet understandably, thank you for your videos!
Finally! First view and first comment 😍
Live success! Yay!
Neu - ral - net. It's in the brain.
FINALLY someone explained to me, in an easy, visual way, what impact the layers have. I was always wondering why the heck we need 2 layers, or why 8 neurons in each, and why not 100? Thanks a lot! You are doing great work. I can't wait for the next part!!
This is the most valuable channel to me and hopefully many others on youtube. PLEASE UPLOAD THE NEXT VIDEO SOON
At 25:10: when I code in Python and I'm under 80 characters for my line of code, I rename my variables extra long just to end up at 82 characters to trigger the pep8 lovers. I hate pep8
Why do you hate pep8?
@@kris10an64 It has unreasonable rules that shouldn't always apply, and people are too strict with them. Raymond Hettinger himself agrees. It's supposed to make code more readable, but it's very flawed, especially since Python is already based on indentation. The 80-characters-per-line rule is the worst one. If you have nested if blocks, or if you like to work with lambda functions and iterators, lines easily become long, and wrapping them can make the code blocky and end up hard to read, which is the very thing it was meant to avoid. In many cases following pep8 isn't the best option. Are you one of those pep8 absolutists? It also feels very restricting.
@@kris10an64 Search for a video on YouTube titled "Beyond PEP 8", where Raymond Hettinger talks about his dislikes of pep8 too and how some of its aspects are silly at best.
@@kris10an64 Here is how I like to code. Maybe it comes from me preferring C++, but these 2 functions take 2 strings with hex values and XOR or AND them together:
def stringXOR(a,b): return ('0' * len(a if a > b else b) + '%02X' % (int(a,16) ^ int(b,16)))[-len(a if a > b else b):]
def stringAND(a,b): return ('0' * len(a if a > b else b) + '%02X' % (int(a,16) & int(b,16)))[-len(a if a > b else b):]
Make that pep8 friendly and it looks like hell.
@@kris10an64 Here's another example. This is how I like to reverse a hex string:
def byteFlop(hexstr): return ''.join(reversed([hexstr[y:y+2] for y in range(0, len(hexstr), 2)]))
Show me a pep8 version that is better.
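(For comparison, one possible PEP 8-style layout of that byteFlop one-liner, purely as a sketch; whether it actually reads better is exactly the argument above:)

def byte_flop(hexstr):
    # split the hex string into two-character byte chunks,
    # reverse their order, and glue them back together
    pairs = [hexstr[i:i + 2] for i in range(0, len(hexstr), 2)]
    return ''.join(reversed(pairs))

print(byte_flop('0A1B2C'))  # '2C1B0A'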
Hats off, Sentdex!!! It has been very helpful for me to learn a little bit of Neural Networks. Really appreciate all the effort that has gone into this.
wow, the example of linear vs non-linear activation function is amazing! This series is pure gold
Hi Harrison, just wanted to thank you for this awesome series on NN; it really helps me a lot in understanding things clearly from scratch.
You have built a confidence in me that yes I can also learn this complex topic! Thank you 😊
You're very welcome!
Huge Respect!!!! You are really going far and beyond to make people understand this stuff. You are setting new milestones for educators around the world. ❣️Appreciate your efforts.
Man, this is the best neural network tutorial that I've ever seen. Thank you and keep going!
The animations are so smooth I could just sit there watching a graph all day!
Had been waiting for this for nearly 2 weeks now.... Thank you @sentdex.... ❤️
These keep getting better and better. I'll say again, this is exactly what I was looking for to get into AI as a (currently) non-AI dev. You go deep enough to understand what's going on behind the scenes but stay at high enough of a level that it doesn't feel like an advanced math course. Truly an art. Great work!
That's because he hasn't talked about backpropagation yet, lol. You should watch 1-2 of 3Blue1Brown's videos just to understand how derivatives work, as that will increase your understanding immensely. Also, the math isn't that complicated, as you usually just need to understand it once and then you can apply it globally to other network architectures as well, since they tend to operate on the same underlying principles.
Great series of videos, with clear, thorough explanations. Can't wait for future parts (so much so that I've ordered the e-book :) )
These videos are gold. Seriously I can't thank you enough! I really want to buy the e-book version of this amazing book and also join your channel but unfortunately, I live in a sanctioned country and every international transaction is blocked... Thank you again and keep up the good work!
This is the best video explanation I have ever seen on activation functions. Bravo
Absolutely brilliant explanation as to why a non-linear activation function can lead to good mapping of desired non-linear outputs. This is actually an extremely pertinent topic in my field of study (electrical engineering, power systems, which for three phase AC circuits have non-linear power flow solutions). Seeing "how" these ReLU neurons can model non-linear functions is absolutely mind blowing. Bravo!
Ever since I heard of ReLU, I've always questioned why it's better than sigmoid and the others, even though it looks like 2 linear functions put together. Now I finally understand how it works and why it's so efficient! I also understand linearity and non-linearity much better than before, and my thirst for knowing why and how it all happens is satiated. Thank you for these amazing videos!
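(For anyone coding along, the ReLU forward pass itself is tiny; this is roughly the layer the series builds, sketched with NumPy rather than the book's exact listing:)

import numpy as np

class Activation_ReLU:
    def forward(self, inputs):
        # negatives are clipped to zero; positive values pass through unchanged
        self.output = np.maximum(0, inputs)

layer_outputs = np.array([[-1.5, 0.0, 2.3], [0.7, -0.2, 1.1]])
relu = Activation_ReLU()
relu.forward(layer_outputs)
print(relu.output)  # [[0. 0. 2.3] [0.7 0. 1.1]]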
I'm desperately waiting for the next episode... please don't make us wait longer! You're doing a great job and your effort is really appreciated.
Really thank youuuu for this amazing series of videos! I really enjoy the way you explain complex concepts and the visual demonstration is extremely helpful! Many thanks keep it up!
That was great. Especially the explanation for why you need a non linear activation function.
Excellent work!! Please don't abandon so many of us, and continue with part 6 and beyond... loads of respect for the time you put into making these videos. Thank you from the most hidden layer of my heart for such explanations!!
Working on the book atm. Videos after
@@sentdex Sure, no problem!! I promise I won't learn Neural Nets from any other source till you are back in business!!
Warm wishes for the book!!
This is one of the BEST explanations of why ReLU works. I took 24 screen shots only for this video because of the amount of detail this video has. Eagerly waiting for the book to arrive today!!!
Part 6 can't come quickly enough, loving this series.
Having watched or read dozens of explanations about activation functions, this is the first one that enabled visualisation of what's going on behind the numbers. Congratulations, it's pretty hard to come up with something unique on this topic, especially on YouTube, and this knocked it out of the park. What's being used to create the animated net diagrams?
I cannot thank you enough for that magnificent effort.
I'm waiting for the following parts.
This is amazing, I love your channel. I watch this every day, and I'm going to show my dad this whole playlist. He's going to love it, and then we're going to have some fun trying to do something similar but simpler, I guess xP
Waiting for part 6, as I am doing a project in which I have to learn Neural Networks.
Great work, man.
I feel like, when watching this series, I'm being bestowed sacred knowledge that only the mighty few understand. I am sitting here in awe, absorbing all of this, going from zero to full-blown data scientist in 1 badass series. Loving every moment, I simply can't get enough.
You made a complete noob like me understand.
Really looking forward to the next video. Please keep making videos for this series covering each and every topic in the field of neural networks. I wish there was a certification course from sentdex's side which we could take up, learn from, write an exam for, and get certified.
Great series though! I'm loving it! Looking forward to the next episodes.
Quality content for free. Second time going through the series. You are a good man
Just bought your book! Thank you for your clear explanations👌
Alright I'm glad I decided to actually watch this far, because I never quite understood the point of activation functions before now. That was a really nice explanation.
The sine wave fitting is so informative! Kudos on the animations
This is the most interesting video about neural networks on YouTube, and so well explained. Thanks so much again, sentdex.
Your classes are awesome. Really appreciate it!
I've been waiting for this moment for all my week, oh Lord
Awesome explanation. This should have been one of the first things you explain. "A Neural Network Fits Complex Functions to the Data Given"
Okay, period. The animations and explanation are MIND-BLOWING.
nice series sentdex
The ReLU explanation was great, and I love your new animations!!! Visual examples make it much easier to understand.