Neural Networks from Scratch - P.4 Batches, Layers, and Objects
- Published 15 Jun 2024
- Neural Networks from Scratch book: nnfs.io
NNFSiX Github: github.com/Sentdex/NNfSiX
Playlist for this series: • Neural Networks from S...
Neural Networks IN Scratch (the programming language): • Neural Networks in Scr...
Python 3 basics: pythonprogramming.net/introdu...
Intermediate Python (w/ OOP): pythonprogramming.net/introdu...
Mug link for fellow mug aficionados: amzn.to/2KFwsWn
Channel membership: / @sentdex
Discord: / discord
Support the content: pythonprogramming.net/support...
Twitter: / sentdex
Instagram: / sentdex
Facebook: / pythonprogramming.net
Twitch: / sentdex
#nnfs #python #neuralnetworks
I swear I am addicted to these more than Netflix.
lol you are weird xD
I literally wait for UA-cam’s notification telling me Sentdex dropped a new tutorial 😂
@@usamanadeem7974 me too brother ahahahah
same here
cringe
I don't even look at my calendar anymore. My week ends when sentdex drops a video.
My week starts when he drops video!!
@@akiratoriyama1320 countdown starts today for the next video
Agreed, I'm waiting patiently also :)
i think you might have to make new days cuz it is taking 2 weeks for his next vid
I’m glad I’m living at a time that people like you share their knowledge in such quality for free.
Thank you 🙏🏻
I was going to say the same thing
yes, absolutely. i come from a time when programming meant you bought a book that was actually outdated the moment you bought it.
thank you 🙂
Every time I see a neural network tutorial, it starts with "import tensorflow as tf" without giving a shit about the basics... but this is a very detailed basics-clearing video, truly from scratch... THANK YOU FOR THE GOOD WORK
I agree with you, although you can check out Deeplearning.ai on Coursera. It's pretty good.
@@lucygaming9726 No thanks . Im too poor for that.
@@aleksszukovskis2074 its free and by andrew ng, the legend
@@janzugic6798 thanks
@@janzugic6798 you need a coursera subscription ($49/mo) after 7 day trial period regardless of the course being free
Errata:
16:17: this animation was initially incorrect when I recorded. We fixed the animation, but not the audio, so I read off the incorrect first row of values. We're adding row vectors here, so the animation is correct, the words are not. =]
Please clarify the concept of the Gaussian distribution that you introduced when talking about np.random.randn
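In case a quick sketch helps with that question: np.random.randn samples from the standard normal (Gaussian) distribution, i.e. mean 0 and standard deviation 1, which is why it's a common choice for initializing weights. The seed and sample count below are just illustrative:

```python
import numpy as np

# np.random.randn draws from the standard normal (Gaussian) distribution:
# mean 0, standard deviation 1.
np.random.seed(0)
samples = np.random.randn(100_000)
print(samples.mean())  # close to 0
print(samples.std())   # close to 1
```

With 100,000 draws the sample mean and standard deviation land very close to the theoretical 0 and 1.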
if i draw a neural network of 12 inputs mapping into 3 outputs and
connect each input to each output, there will be 36 lines in total.
that means there have to be about 36 weights, but the weights you took had
only 12 in the array. how is that possible?
@@anjali7778 He has only 4 inputs to the output layer... therefore number of weights = 4*3 = 12
If instead you have 12 inputs, you will get 12*3 = 36 weights
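To put numbers on that reply: the weight matrix has one entry per input-to-neuron connection, so its total size is n_inputs * n_neurons. A minimal sketch (the 0.10 scaling mirrors the small-random-weights initialization used in the series):

```python
import numpy as np

n_inputs, n_neurons = 12, 3
# One weight per connection from each of the 12 inputs to each of the 3 neurons.
weights = 0.10 * np.random.randn(n_inputs, n_neurons)
print(weights.shape)  # (12, 3)
print(weights.size)   # 36 weights in total
```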
Am I wrong, or is there something missing around the 9:08 point?
Question: why did we need to transpose weights2 since they are both 3x3 matrices? index 1 of one would equal index 0 of the other, right?
Just imagine if we have tutorials like these on all the AI and Machine learning topics and also on probability and statistics. .. man, every few minutes in the video I try to scroll the video list up and down with the hope that there will be 700 more videos like these but it shows only 7 videos. Amazing work, I will order your book now. Appreciate your dedication and hard work
Bruh this visualisation... Its unreal🔥
ASMR for eyes, thanks Daniel!
@@Saletroo ikr 😂😂
The thing I love about you is just how beautifully you explain concepts, with immaculate animations and then literally make such complex tasks seem so easy! Gonna make my kids watch your tutorials instead of cartoons one day ♥️😂
U r just ... God for teaching programming... I am glad to have u as a teacher... 💪
Please continue with this playlist This is hands down the best series on youtube right now !!!
No plans to stop any time soon!
This is the online classes we all deserve
better than most ivy league schools
I really appreciate you doing this mate, I really wanted to learn Neural Networks and you are explaining this soo good.
Glad to hear it!
this is actually the first NN tutorial during which I haven't fallen asleep..
ps. thank you for explaining some of the things twice!
I’d kind of given up on understanding ML and NN. Then I saw Neural Networks from scratch and Sentdex CANNOT make this easier. Loving this series.
I banged my head on numerous videos too. They assume a level of knowledge that was hard to piece together. This series is filling lots of gaps for me. The concepts are starting to gel; this whole field is fascinating!! Kind of empowering.
You have created one of the best series on this topic I have found on the internet. Explanations include everything, yet you still proceed at a fast steady pace.
I know I have said this before, but I am going to say it again, and keep on saying it till you continue to make such awesome tutorials. Thank you!
This is awesome! Finally, a series on neural nets I can understand easily.
This channel is so good that you'll never find any negative comments
They are there sometimes :) but yes fairly rare.
-comment
I wish anyone had ever taught me any concept the way you do..
this is better explained and with more quality than any neural network video where the concept is mostly shown just by the code
Still one of the best series on UA-cam to learn the basics of neural networks... fast!
I'm really glad you took the time to break down this concept step by step, will surely reduce the number of headaches in the future!
Thank you for your great content looking forward to the next one. 😄
At about 14:51, where you present the matrix multiplied by the vector, the proper mathematical notation would be to have the vector as a column vector, as well as the output vector being a column vector. This is truly how the matrix multiplication is able to work, because a vector is truly just a matrix where one of the dimensions is equal to 1. Other than that, I have to admit, these are my FAVORITE AI/ML videos yet!!!
I was looking for this comment. Thanks for pointing that out!
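For anyone who wants to see the comment's notation point concretely in NumPy: writing the vector as an explicit column, i.e. a matrix with one dimension equal to 1, makes the matrix-product shapes line up, and the result is itself a column vector. The numbers below are just illustrative:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0, 2.5],
              [0.5, 1.0, 0.0, 2.0],
              [2.0, 0.0, 1.0, 1.0]])       # matrix, shape (3, 4)
v = np.array([[1.0], [2.0], [3.0], [2.5]])  # column vector, shape (4, 1)

out = M @ v        # (3, 4) @ (4, 1) -> (3, 1)
print(out.shape)   # (3, 1): the output is also a column vector
```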
I have never bought a book from UA-cam before but you will be the first. You’ve deserved it. Absolutely love this work. Please keep it up
I've never learned linear algebra and I'm astounded how simple you made matrix multiplication out to be!
Feels like learning all the day, it never felt so simple before...thanks a lot 🙏🏻
This is my first time learning about Neural Networks, and you're doing a great job at explaining things in an easy to understand way.
Back when I was learning the concepts behind building a network, most tutorials went straight into the maths, while that is fine - what I wanted to understand was the different compositions from the input to the output. This video was what I was looking for back then before going deep into the theory and methodology. Great content!
Friend: "So what do you do in your free, unwind, leisure time?"
Me: "Neural Networks From Scratch"
Friend: "..."
Sister: "If that's informative, then what's educational"
Me: "Glad you asked!" *starts to explain neural networks and basic QP*
Sister: "NO! Make it stop!" *Never asks again*
Story of my life 😁
last time I was this early, Corona was just a beer brand...
Just ordered the book - can't wait to dive into it. Thanks you, this is good stuff and a priceless contribution to the evolution of this area of science.
i can't believe you created this course - absolutely fantastic and wonderfully thoughtful in its layout - thanks so much
This is the best series by far I've ever seen. Just what I was looking for. I wonder if you'll get into explaining the why also.
For instance, oftentimes when I'm watching I do wonder "Why do we even have biases? What function do they serve? How do they enhance predictions? What sort of history/science/neuroscience underlies that, and where do AI and neuroscience part ways, if so? Why does all of this work at all?"
Youssef I really hope @sentdex reads this ;)
I think it was explained in a previous video how biases help in making predictions. Check out the last video, guys
He explained that in previous videos, but not all your questions
@@carloslopez7204 I agree it was explained a bit but I really didn't feel the explanation gave me a deep understanding of the why unfortunately, just a very rough surface level and vague hint of what might be going on.
I think that right now some things, like the biases, don't make sense. But when you get into training (the learning process), it all starts to make sense.
The interesting part will be the backpropagation, I'm really looking forward to it
Your explanations are so clear, I really appreciate the hard work you've been through to design this series to make such complex topics so much fun to learn :) . Enjoying a lot
I'm watching this as a refresher since I studied this topic a few years ago, and I find the context you provide really useful. Thanks!
Understood batch size finally
Glad we could help!
I pre-ordered the book because this is interesting and I am eager to learn more
This reminds me of the best TV series ... You finish one episode and look forward to the next ...
Good job!
Never have I been so excited for a new UA-cam video, you have earned my respect
Can you please provide a visual representation of how the batches pass along. I mean by using animation using bubbles and lines like you did in the initial videos.
well finally looks like my linear algebra class was not a waste of time at all
I just want to again say thank you so much for these videos. They are top notch. It truly has helped me get a deep understanding compared to what many other "tutorials" have. Plus all this information being provided free. I feel blessed!
After many searches I found this playlist! Thank you for making this Gold.
For people watching this video... remember this golden rule:
Say we have two matrices A & B. In order to multiply A with B, i.e. A.B,
the number of columns of matrix A must equal the number of rows of matrix B.
That's also why, in general, A.B != B.A
Amazing video 👍!
Thanks a lot !
Keep up the amazing work !
Sentdex: we're arriving at the sexy parts...
Python: Oh, yes I am ;)
x = we're arriving at the sexy parts...
print(x)
I'm very thankful for this series, I've learned so many new things because you're so good at explaining, and there are still 5 more videos to watch!
You are doing a great job explaining these concepts in a way that is easy to understand. I can't wait for the next part so I am ordering the ebook.
Great job.
Never clicked on a video so quickly
Notification => nnfs P4.
Me: clicks on the button faster than the speed of light
Mate, this series is unreal! Love your work
Hey just to let you know, this video 3 years later continues to help and encourage new programmers!
I'm in my freshmen year of highschool doing all gen ed courses, but I started working on this tutorial in my free time and I'm having a blast and actually understanding everything perfectly
Just wanted to say thank you so much for really helping people like me in our learning of Computer Science and machine learning! These are awesome and super enjoyable!
6:09 what's a fitment line? Google isn't helping me.
16:19 you said the other way around by mistake.. shouldn't it be 2.8 +2, 6.8+2, -0.59+2..
Yeah, that really confused me more than it should have
yes. It is called broadcasting.
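In NumPy terms, the row of biases (shape (3,)) is stretched ("broadcast") across every row of the dot-product output, so each sample in the batch gets the same per-neuron biases added. A small sketch with made-up numbers:

```python
import numpy as np

layer_outputs = np.array([[2.8, -1.79, 1.885],
                          [6.9, -4.81, -0.3]])  # batch of 2 samples, 3 neurons
biases = np.array([2.0, 3.0, 0.5])              # one bias per neuron, shape (3,)

# Broadcasting adds the bias row to every row of the (2, 3) matrix.
print(layer_outputs + biases)
```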
One of the best tutorials I have seen on the topic , Saludos de Argentina!
I can’t wait to see the implementation of backpropagation with the chain rule, it’s so simple when you teach it. Tysm
Why does the dot product switch inputs and weights when working with batches? E.g. when the input is a 1D array the calculation in the code is np.dot(weights, inputs), but for a batch it is np.dot(inputs, transposed_weights). Why doesn't it work when we transpose the inputs instead? I'm sure I'm missing something simple. Thanks for the videos, they are amazing!
Because matrix multiplication is not commutative
I think it is the nature of what we are doing - we are taking inputs, applying weights and biases, and delivering outputs; entering and exiting a decision. So we can't use an entrance to a neuron to exit another neuron.
I think the demonstrations by Harrison are to cement the concept and an awareness of the shape ValueError... and he also showed how multiplication works between an array and a vector
I went back to lesson 3 for 2 things.
1. I like inputs being the first entry so 'My doors' are labelled correctly
2. Use the npArray().T in that example
If he had not shown us 3 before 4 - I would have found it harder to appreciate transpose() - I don't think I will ever just reverse the args when I am coding this stuff.
import this
Something that I find interesting, which I think might have to do with this, is that 1-dimensional arrays have a different kind of shape. I am used to thinking of a matrix as rows by columns. For example, [[2,2],[3,3]] is a 2 by 2 matrix: 2 rows and 2 columns. However, take the example [1,2,3,4]: I would have expected the shape of this to be 1 by 4 (1 row and 4 columns), but it is not. The shape of [1,2,3,4] is (4,), a single axis with 4 elements. So the way to think about it is by elements in a list of lists: the first entry x in shape (x, y) is how many lists are in the list, and the second entry y is how many entries are within each inner list. In his example, the first inputs [1,2,3,4] have shape (4,), and when he put [[1,2,3,4],[1,2,3,4],[1,2,3,4]] the shape became (3,4). If you are thinking about the 1-D case purely in rows and columns, that wouldn't be the case. I hope that made some modicum of sense lol
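Those 1-D vs 2-D shapes are quick to check in NumPy; note that a plain list becomes shape (4,), with a single axis, and only nesting lists adds a second axis:

```python
import numpy as np

print(np.array([1, 2, 3, 4]).shape)    # (4,)  -- 1-D, a single axis
print(np.array([[1, 2, 3, 4]]).shape)  # (1, 4) -- one row, four columns
print(np.array([[1, 2, 3, 4],
                [1, 2, 3, 4],
                [1, 2, 3, 4]]).shape)  # (3, 4) -- three rows, four columns
```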
You could perform the same operation by transposing the inputs; however, keep in mind the matrix rule (A.B)' == B'.A', e.g. inputs.(weights.T) == (weights.(inputs.T)).T. In other words, the output of inputs.weights_transposed will equal the transposed output of weights.inputs_transposed. The issue with the values probably comes from adding the biases without first either transposing them or transposing this output matrix back, as they would be added in a completely different order.
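The identity mentioned in that reply, (A.B)' == B'.A', is easy to verify with the batch shapes from the series (the numbers below follow the example data, but any values would do):

```python
import numpy as np

inputs = np.array([[1.0, 2.0, 3.0, 2.5],
                   [2.0, 5.0, -1.0, 2.0]])        # batch of 2 samples, 4 features
weights = np.array([[0.2, 0.8, -0.5, 1.0],
                    [0.5, -0.91, 0.26, -0.5],
                    [-0.26, -0.27, 0.17, 0.87]])  # 3 neurons, 4 weights each

a = np.dot(inputs, weights.T)    # (2, 4) . (4, 3) -> (2, 3)
b = np.dot(weights, inputs.T).T  # (3, 4) . (4, 2) -> (3, 2), transposed back to (2, 3)

print(np.allclose(a, b))  # True: same values, so the bias-addition order is the real issue
```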
Great explanation and animation, but at 14:47, [1,2,3,2.5] is mathematically a column vector, a (4,1) matrix, so when you write it on paper or in the animation you should write it vertically, as a column, not a row, because [1 2 3 2.5] written horizontally in the animation is a (1,4) matrix, not (4,1). So we can say every element in [1,2,3,2.5] is a row: 1 is the 1st row, 2.5 is the 4th row.
Hello! I've been following you for more than 2 years and this is the best course for me! With those explanations of the math it is really cool. Thank you for this work :)
Watching this playlist is awesome, it made my task very easy. Have been stuck with the implementation of the multilayer perceptron for two days. Thanks
0th! Finally!
daniel!
So op
U R AWESOME
You are wayyyyy more buff than it seems by just your face.
I'll keep that in mind
This is the best channel with the best content, with amazing animation. Clear explanation. I'm in love with this man. :)
Incredible as always. This one struck a few lightbulbs. Thanks again, Eagerly anticipating #5, I'll have to work through the draft to prep
import neural_networks_from_scratch as nnfs
from nnfs import moments
best_moments = moments(channel='Sentdex')
print(best_moments[0])
''The SEXY part of deep learning''
Loving the effectiveness! The batch size explanation was amazing!
Glad you liked it!!
High quality animations. Much respect!
Amazing content sentdex, the visualizations are just top notch and aid to a much clearer explanation.
I am on lesson 4 now - you are such a great instructor, I love learning this stuff.
This content is really good. Thanks for making this simple. I have been binge-watching your videos.
I have been looking forward to this all week!
At the point where I had a question, I had not fully watched the video yet, so I commented my question. Literally five seconds later, you answered my question in the video.
I love the series, thanks for doing this!
Doing gods work, ordered the book a while ago and finally have time to actually dive into this now-thank you so much bro
Apart from explaining neural networks, you just explained the matrix dot product in the most intuitive way I have ever seen. I know how the dot product works by now, but I also remember how much work I had to put in to understand the concept from the lectures and texts I had at university. I had to read through some difficult math equations and really think about what the book was trying to tell me, and I also had to go through a lot of exercises to really get a grasp of it and remember it. Then you just explained it in 10 minutes and it makes perfect sense, although I had almost forgotten what it was all about. So easy. I wish my teacher had an animation like the one you show at 9:10. Then I wouldn't have had to struggle through the math classes as much as I did in my education as an electrical engineer.
I think this tutorial series will explode. Atm it's really clear, you're fantastic
I can't be more thankful for anyone than you and Daniel. Thank you so much!
Happy to do it!
I love this series and i always look forward for the next one. Thank you ❤
Great explanation and very clear. I look forward to all videos. What a learning process!!
Hey sentdex, such addictive content in your videos. Couldn't wait for the next release any longer, so I just pre-ordered the e-book.
Woo! Hope you enjoy!
This is the best tutorial I have ever watched.. Kudos 👍🙌🙌🙌
Wooow, this is how all subjects in school should be explained. Amazing visualization, very clear!
I thought I understood all of these concepts until I watched your tutorials. it's amazing!
I'm glad I found your channel man, I swear to god your videos are awesome. I'm only starting to understand ANNs after watching your videos.
God level series on Neural Network. Good job and always proud of you buddy!!
Made the batch learning benefits really clear, thank you
Best python neural networks video, for sure
Very useful video and very well explained through the series. Thanks a lot Harry!
Thank you for the clear explanation! I was completely lost after several videos! you made it so clear!
Thanks a lot
This will be my first object oriented programming.
It was kind of daunting for me, but you made it so simple.
Now that's what I call real teaching: triggering curiosity! Thank you so much, sentdex! Math rules!
Wow, your videos are just amazing; this clarity in explaining complex things is just incredible
Fantastic, I'm really excited about the following videos!
I think the single array of biases at 16:16 gets added to the individual rows of the dot product matrix due to NumPy broadcasting. Thanks a lot for this video series.
This is nuts. Crazy good quality
Hi Sentdex,
Thanks for making my engineering/programming career so much more interesting! You really are the best
Took me five years to find something like your videos, in 2022. I dropped out of college from stress and I can finally sit down and try to understand this math. I hope the video which explains linear regression is as good as these four so far
Great video Sentdex. Looking forward to reading the book when it's out.
Honestly, I had almost given up looking for a beginner's tutorial on building a neural network. Luckily I found your video, thank you very much
This series is totally amazing! Thanks man
Thx so much for this series. You're really helping me understand the basic concepts behind this 👍👍👍