STAY TUNED: Next video will be on "History of RL | How AI Learned to Feel"
SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1
WATCH AI series: ua-cam.com/play/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ.html
So, if neural networks can't reason... why do people call it "artificial intelligence", when intelligence and learning aren't the same thing? For me, neural networks are a good way to store patterns and return the result we want... with brute force
Thank you so much. I had been struggling to understand the concepts behind neural networks. You explained them to us so nicely.
This is maybe the best explanation I have seen of a topic that is rather elusive. I will watch this video again!
@@stevesmith291 so happy to hear it
This was very informative and explained the depth advantage in a really easy to grasp manner. Thank you!
I don't mind that you take your time making these. Your meticulous script preparation & attention to production values allow you to pack massive amounts of information into these videos. You are creating "aha!" moments & rewiring neurons around the world. Bravo!
this one took a long time: ua-cam.com/video/OFS90-FX6pg/v-deo.html
I know a little about computers. It used to be a lot, but then I retired and computers and computing moved on. This was a wonderful explanation. Not too fast, not in the least boring, and I learned some things. Thank you and KUDOS!
so nice to hear
finally done: ua-cam.com/video/OFS90-FX6pg/v-deo.html
Never would have imagined this stuff in this way. The patience and care of thought behind it is just, like, therapeutic to take in. A million thanks, man
beautiful
I think the genius here, honestly, is maintaining the output neuron vector as points in 3D space the whole way through: the way points are divided into groups and combined becomes origami folds for depth. At 12:01 I finally understood that these differing output patterns all fit inside a 3D space, meaning a brain. Like, I can imagine these little lit-up paths in a brain that the data goes through, but instead of something like a radioactive isotope, it's a component of a storm cloud, and it routes down the pathway...
You illustrated the finitude of possible induction in perception space, and then at the end what a limited number of neurons can represent while keeping things distinct and recognizable, fulfilling their purpose. Yet we know there's this infinity of things that can be represented in that process. It's really magical, because we go from finitude to infinity and back -- without stopping, and without doubling back the way we came.
And what just gave me the chills was that I paused just after the 12:00 mark to write these comments, calling what at that moment I thought was magic, and your next line was "and so the magic is..."
Not to get corny about it, but woah, serendipity. Read that as a testament to the editing, I guess. Amazing job on this series. I really did wait this long to watch it all haha
@@hafty9975 we are definitely in sync
Yes! A new Art of the Problem video!
Let's all be real here, that last layer is really just on LSD. That's how it all works. Those were some trippy images.
Joking aside, fantastic video!
Even though I've seen these concepts before this video does a great job of slowly building up the ideas and bringing the viewer along to the next level of understanding.
This was very good. Thank you for taking the time and effort to put this together.
next part: ua-cam.com/video/OFS90-FX6pg/v-deo.html
Sorry for my English
I subscribed to this channel many years ago and have been waiting eagerly for videos.
I prefer your videos over 3b1b. You include a variety of backgrounds/contexts that help me pay more attention (and not get stuck on the monotone black background with animations). Thank you!!!
thank you for feedback, working hard on next video now
hey keep going with the videos. The quality of your vids easily justifies 2M subs -- you’ll blow up eventually
Your videos are a thing of beauty! The attention to detail is fascinating, especially how it clarifies the concepts that are explained. I can only imagine how beautiful the world would be if everything was explained in this manner!
this comment made my day thank you
@@ArtOfTheProblem Yeah, I learnt sin/cos/tan a bit by programming a circle and kinda understood it
finally done ua-cam.com/video/OFS90-FX6pg/v-deo.html
this deserves a lot more views
It was fascinating to see the images when probing the different layers. The paper folding example was great at explaining this at least for me.
3 years later, I finished the next part: ua-cam.com/video/OFS90-FX6pg/v-deo.html
The channel is alive!
Wow...! This was clearly the best explanation of neural networks I've ever seen! For a while I even thought I understood them... ;-) Great vid, thx!
Sometimes I wish YouTube had a super-like button or something to express how much I like this
You, sir, deserve much more attention. Very well illustrated and clearly explained. Thanks.
Thank you for explaining the fundamental building blocks of a neural network in a way that's easy to understand.
appreciate the feedback
@@ArtOfTheProblem You know, these neural networks are just like building or fixing a car. It's a guy thing.
You made amazing videos on Khan Academy years back and I've finally stumbled upon your criminally small channel. Keep up the good work, I hope the algorithm tips in your favor one day.
glad you found me Zion, I hope for the tip one day too. thanks for the support
Watching these videos makes me feel just like how I did as a child watching the National Film Board of Canada videos. You've made the correct patterns, well done.
i grew up watching these
The paper folding part is a genius explanation!
Another incredible explanation. I’ve got to say, the more I learn about filmmaking the more I appreciate your videos - the use of music is truly a step above any other educational content. A million applauses.
wow really nice to hear this. I get many comments about the music "ruining" the video :) but if you see them as short films then it starts to make sense. I refuse to change!
@@ArtOfTheProblem haha yes! 👏👏 of course not simply short films - short cinematic masterpieces!
@@ArtOfTheProblem The music is way too loud. Please -- if it's distracting for some folks, I doubt turning the music down a little is going to ruin it for others.
thanks for the feedback, I'll work on it @@sams64sf
Wow... this presentation is a winner, this is epiphanically good... I just realized that what makes up our 3D mental space is not "an object" but a momentum, a summed illusion effect of all the hard work along the way, rather than a hidden 3D holographic chamber tucked at the deep back end of the brain - much like how we perceive the illusion of time or gravity or consciousness as "one thing" when it is in fact the working dynamics of many factors too complicated to be directly visible to the average person. Thus, our brain sums them up as an object and, worse, gives 'em a name as "one object", because that's what we do (even though the purpose is so that we can easily describe the world, as a useful prediction tool to apply in everyday life). Suffice to say, now I feel a bit mixed up remembering the way Europe's Human Brain Project was presented: showing a faux-colored shadowy figure of a red flower within the jungle of neurons some distance away from the exposed retina... so silly of me to be awed by that, back in the day...
Anyway, love this, thank you so much!
Glad you found this series, curious if it was recommended by the algo or somewhere else?
@@ArtOfTheProblem I fell down the rabbit hole while searching for specific subjects, and as it got deeper your addictively well-presented, thought-provoking series kept coming up; your titles and thumbnails evolved from "Hmmm... interesting" to "If I see you I will click you!"🙂
@@captainjj7184 Love that you have found the series. love this feedback
This is fascinating, and the best explanation I've ever seen for how neural networks actually work. You have earned my sub, and I look forward to more insightful explanations of a topic that boggles my mind!
Beautifully crafted... we can see the hard work you have put into it... subbed
Words can't describe this marvellous explanation!
thrilled to get this feedback
I don't mind watching an hour of this with just you explaining. Thank you for creating this!
happy you found this channel, i've been dormant for a while but working on this next video...
This is genuinely one of the best, if not THE best, videos about neural networks I have ever seen. Never once had I understood the concept clearly before this!
Faaaantastic, so happy youtube is now showing people this video out of the blue. did you search for it or see it as a suggestion?
@@ArtOfTheProblem i was looking at videos related to these and just a LOT of math videos in general, so youtube recommended it to me. im so glad i clicked on this vid :)
The best explanation I've ever heard.
Truly intriguing.
thrilled to hear it, still trying to crack the next video
I'm from the accounting field. I randomly got this video from Reddit. I have to tell you, your explanation and way of presenting is not just good, it's interesting too. Please continue doing what you are doing.
Thanks for making these videos. The paper folding analogy was really GOOD!
It's actually from a paper co-authored by Yoshua Bengio, "On the Number of Linear Regions of Deep Neural Networks".
long time no see: ua-cam.com/video/OFS90-FX6pg/v-deo.html
The video is absolutely awesome; the only thing that seemed missing to me is the difference between neural networks and other well-known mathematical models (relational database design and analytics).
So good. The layering was such an important lesson to learn. With the 3D simulation it looks like a cloudy rainbow Rubik's cube being twisted and turned in our minds.
The ramifications of these learnings are infinite. Imagine what perceptions our minds, as sensory identifiers, are not perceiving yet, and the avenues of worlds this could open up as we simply use more complex sets of neural sensory functions in our bodies and increase our pattern recognition as individuals, and as a social and planetary society.
edit: I am going to have to go to the beginning of the series and count my blessings
let me know what you think after finishing the series, as I'm working on a follow-up
Love your videos. Every time i see them on my page I just have to watch them
Wow, very well done, and informative as usual. Thank you so much for the thoroughness in your explanation. One of the most underrated YouTubers of all time!
very underrated channel... Amazing work !!
thanks for the support, much appreciated
What a straightforward explanation.
Just a remarkable video. The most clear explanation of NNs I’ve ever seen. Really well done.
thank you, glad you found this as it's buried deep in the results!
New video is up on Evolution of Intelligence ua-cam.com/video/5EcQ1IcEMFQ/v-deo.html
You've done well to cover an area of the explanation which is rarely done properly. Most of the time it's all about the application of the maths and such, rather than the intuition side of what the network is trying to do. Nice vid :)
so happy people are finding this, exactly what I wanted to do
Thank you! Extremely helpful visualizations
Awesome presentation! Lots of information in such a short video. Absolutely love it!
awesome thanks for feedback, I have another follow up on the way
Absolutely loved this! You're truly one of the best at teaching visually
I think there is a typo at 5:15. Active and inactive should be flipped for any one line drawn, for consistency. If the circles represent 'active' data points, the active-inactive labels for the slanted line at the right should be flipped.
This is the most helpful explanation of neural networks I have yet found. I now feel much more confident in my understanding of what is happening within such a network and the way in which added layers function. Thank you!
so happy to hear this, this was my goal of the series and so to hear it's connecting means a lot
Wow. This is brilliant. You guys are awesome. Thanks everyone involved in production. 👍🏿
Thanks I really hope to follow this up with another video eventually
Thinking of a NN as partitioning a perception space... just awesome. Thank you so much for this beautiful way of thinking about it.
awesome!!
This is by far the best explanation of neural networks I have ever seen in my life: very simple, but without compromising technicality.
appreciate it!
The way you explain such complex concepts is mind-blowing. Thank you so much for teaching us.
appreciate it! still working on the next video..
Just awesome how you managed to explain a complex topic in a simple way!
thanks so much!
this has done more for me conceptually than actual ai classes i've taken. thank you.
yes! this is what I was hoping for. those classes are brutal and give no intuition
I have been going through the math behind neural networks for some time now. My goal was to form a good intuition for how they work. I understood some of it but still had a lot of doubts. This video cleared all my doubts. It is an absolute gem. I think someone who is already reading about neural networks would get the most out of this video. This is the most intuitive video I have seen on neural networks. Your script and narration were perfect. Great job, and I hope you keep making similar videos
I really appreciate this, you are right and you are the perfect audience. I spent a loooooong time working on this, and I hit a wall after it when trying to do the same with sequential networks, so it's on ice for now...
This is an amazing way of explaining this!
glad you found it! stay tuned
Happy to say I have a new vid out! ua-cam.com/video/5EcQ1IcEMFQ/v-deo.html
this might be the best video i have ever seen on youtube to date. thank you so much, this was so informative and i absolutely loved the models! Thank you so so much, please keep doing these!
really appreciate it, this was the last video I made and since then I've pivoted my efforts, but I really do want to come back and do a follow-up, especially given the explosion in applications lately related to sequential nets (which is where this video ends)
Like Graham Todd said in his comment, your vids always deliver waves of "Aha!" moments that join previously distant or incoherent bits of our minds. I hope these vids reach as many schools as possible, kids would benefit immensely and so the larger society of tomorrow. Thanks 🤘
Wow, you are good at teaching. Making obvious the nonobvious is extraordinarily complex.
thank you, i'm still planning to do a follow up to this on sequential problems
Your intuitive explanation of why we need more than one neuron using graphs and folds blew my mind :D As someone who has worked in this field for over 6 years now, I must say I am super impressed :)
fantastic to hear, it gives me motivation to press on with the next video on transformers :)
@@ArtOfTheProblem Nice! Waiting for it and have subscribed :)
What a wonderful explanation of such a hardcore technical concept. I respect and appreciate the hard work you did to explain this concept easily. That paper analogy was the best!
glad you enjoyed this, thanks :)
Dude, this is excellent work, you have explained the secret of neural networks in a really beautiful way. It takes real understanding to be able to distil the information in such a beautiful way. Thank you so much for this.
appreciate it, i worked really hard on this and hit a wall after it. I'm still planning to follow up with more on sequential networks.
wow, new video, thank you so much!
Your content is brilliant. Thank you!
You are a great thinker and equally good presenter. Thank you for sharing.
Thanks for beings a point in my neural network... I appreciate your genius 🎈❤️
glad you enjoyed this video thanks
Just watched this now, and this explanation is absolutely amazing. Please make more videos about ML :)
thank you for the feedback. At this moment I'm in the rough drafting stage of the video which follows this one. It will probably take me another month or so to write
@@ArtOfTheProblem Thank you so much! I look forward to watching that. I really liked your use of visual analogies, such as paper folding, to better understand what's happening inside of the neural network.
This is the most beautiful, deep presentation on neural networks I have seen. This has given me another depth of understanding. Thank you so much. I would love if you could provide a reading list for this series, to take my studies further.
glad you found this! did you see the entire series? i'll work on a list but I read quite widely and ferociously
I've started watching the series after your latest video. Just brilliant.
thrilled people are finding this finally :)))@@schophi
Very nice. Not so sure about the folding paper, but the visualisations really show how the coordinates are transformed from the complicated manifolds to the relatively simple clusters, and that visualisation can possibly help guide neural network design. Shame you weren't able to answer the final question, ha ha ha!
thanks I got held up working on the last video, I will get to it eventually
I actually came here from Khan Academy; I never thought I could find such an impressive video just because I clicked a link. Fascinating!
cool, how did you find this link?
@@ArtOfTheProblem an introductory course mentioned something about Morse code, then I came here. I appreciate all your videos here and on Khan Academy; they're really thought-provoking and helpful
The beauty of this explanation made me smile
Great video series. I definitely learned some valuable insights, especially about the path that led us to deep learning. You also gave great intuition for the concepts embedded in learning, neural networks, and finally deep learning networks.
However, I have a non-trivial addition to the video (although you kind of mention it in the previous video): a very fundamental building block inside a single neuron is its activation function. The important thing is that this function is non-linear and differentiable. Given that, a deep NN can transform the input space so that the output space can be separated into distinct regions (like we've seen in the example of recognizing hand-written digits). In contrast, a deep NN with only linear activations doesn't stand a chance when the problem gets more complex, as we see in the example where the input space is not separable by simple sections (like lines or hyperplanes); see the sketch below.
Thanks a lot for the time you put into this series. Hope to see a continuation if you see fit ;)
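To make the linear-vs-non-linear point above concrete, here is a minimal sketch (assuming NumPy; the XOR-style toy data and layer sizes are illustrative choices, not anything from the video) showing that stacking purely linear layers collapses to a single linear map, while a non-linearity such as ReLU between the layers does not:

import numpy as np

# Toy XOR-style data: no single line/hyperplane separates the 1-labels from the 0-labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR labels (shown only to state the task)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))  # first-layer weights
W2 = rng.normal(size=(4, 1))  # second-layer weights

# Two stacked *linear* layers: X @ W1 @ W2 equals X @ (W1 @ W2),
# i.e. still one linear map, so extra depth adds no expressive power.
linear_out = X @ W1 @ W2
collapsed = X @ (W1 @ W2)
print(np.allclose(linear_out, collapsed))  # True

# With a non-linear activation between the layers the composition no longer
# collapses, which is what lets training bend the space and separate XOR.
relu = lambda z: np.maximum(z, 0.0)
nonlinear_out = relu(X @ W1) @ W2
print(np.allclose(nonlinear_out, collapsed))  # False in general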
Appreciate this feedback. I'm indeed working on the next video in this series; I'm still stuck on one part, which is illuminating what the 'heads' inside a transformer are learning, and telling the story of recurrence in neural nets.
These videos are so extremely good! Thanks for making these!
Fantastic video.
Holy crap... That was an amazing video!
Absolutely amazing video! Thank you for making it!
thanks Zoe!
This is the best explanation of neural networks so far. I still don't get it, but this video has given me a better grasp of it.
glad to hear it, baby steps. i'm still thinking through the next one
Holy fucking shit the paper fold example should be patented, absolute banger.
thank you! it's so funny you say that, I'm working on a summary of this series (super condensed) and I'm back at that idea again just now... I was on the fence for a moment (more of a 'geek detail'), but yes, I agree, and I'm definitely going to include it at least briefly
Thank you for your work!
Simply wonderful. Worth watching for all: freshers, newcomers, experienced persons, etc. This video reminds me of a quote: "When something can be read without effort, great effort has gone into its writing." The same is true for this video as well. You have lived NNs. Watching for many more such videos, especially on a basic CNN model.
This quote means a lot to me, I definitely put all the effort into this my brain could possibly muster
Great video. Was hoping to see more "maths" in the video though
Mind-blowing explanation! Just too overwhelmed by the amount of imagination and skill you've put behind this gorgeous show! Subscribed right away. Looking forward to more in the days ahead.
thanks so much, i'm hard at work on the next video and pumped to finally get it out. I spent waaaay too long on the research, i hope it shows again :)
This is not just a great explanation and easy to follow, but also just so soothing
Amazing video! Really appreciate this, great work :)
The legend is back!
Edit: You just answered... :)
A question that you might address later on (I've been thinking about it and I'm currently at 4:38) -
What if you want the A/C to start when the temperature is above 28C (cool) or below 19C (heat)? A line won't do.
Or, activate if all inputs are on or all inputs are off? A plane won't do.
I'm guessing more than one neuron would be needed, and in the case of cool/heat/let-it-be - three outputs...
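Picking up the thermostat question above, here is a tiny sketch of that idea (plain Python; the step activation and the exact wiring are illustrative assumptions, only the 19C and 28C thresholds come from the comment): two threshold neurons, one per boundary, combined into the three outputs cool / heat / let-it-be, something no single line over the raw temperature can express:

def step(z):
    # hard threshold activation: fires (returns 1) when the input is non-negative
    return 1 if z >= 0 else 0

def thermostat(temp_c):
    too_hot = step(temp_c - 28)   # neuron 1: active above 28C
    too_cold = step(19 - temp_c)  # neuron 2: active below 19C
    # three mutually exclusive outputs built from the two hidden neurons
    if too_hot:
        return "cool"
    if too_cold:
        return "heat"
    return "let-it-be"

for t in (15, 22, 31):
    print(t, thermostat(t))  # 15 -> heat, 22 -> let-it-be, 31 -> cool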
Amazing video! I love how you explain things intuitively. It would be amazing if you did a video about GPT-4, since it can sort of "reason" and remember the conversation.
still working on this one, so much happening it's hard to keep up. thanks!
Best explanation that I've seen about neural networks
more on the way!
That helped me understand neural networks in a new way I hadn't before. Thanks! I feel like I leveled up my knowledge.
Sweet that was my goal
Wow… this is the best explanation I have seen by far. I’ve made neural nets from scratch and I’ve never understood them as well as I do after watching the video. Thank you so much for all your time and effort in making this video.
Wow this is a huge compliment, you made nets from scratch? That's awesome, you probably have lots of intuition I don't have. I'm still struggling with how to do the final video in this series on sequential nets, I almost feel like I nailed transformers but not sure that's the whole story yet
thank you so much, I finally have a better understanding of how neural networks work
yay! so happy people are finding this
Hi, your pictures and explanations are just so good: clear, coherent, and they made sense. That's how things should be explained. I want to cite your pictures and some of the wording, and I have no problem mentioning a YouTube link instead of a textbook, even though it's not peer reviewed. I was wondering: is it OK if I use the link in my bibliography, or do you have a proper article written on it?
That's so awesome, I'd love it if you used that link. Please share whatever work you are doing too, thanks
That was magnificent, I mean really really really super breathtaking.
thanks Youssef, thrilled YouTube started to surface this video, I worked my ass off on it
This video cemented my understanding of neural nets, Thanks 👍
thrilled to hear this. i'm curious, what key questions do you come out of the video with?
Are you a human or a neural network?
Because this is the BEST explanation of a neural network a human ever did.
Thank you for this video, it opens your mind to a lot of things
I'm leaving this comment here for the YouTube algorithm.
What an excellent video.
One of the best NN explanations
thanks Frank, still working on next one
@@ArtOfTheProblem That's awesome to know, most AI videos feel dated after a month and this series still feels super informative even after 2 years. Hope to hear more from you on this topic!
@@kauezero that means a lot, and yes I'm thankful this video will age well; it was not easy to decide what not to cover. I'm thankful I didn't make the video on sequential nets until after the LLM boom. Now I'm trying to carefully decide what to say now that will still make sense in a few years :)
Amazing video, although the occasional background noise was quite distracting. For example, the one that started at 4:00 was pretty annoying, and I had to rewind the video multiple times to be able to focus on the material. Overall, a great simplification of such a concept.
thanks for the feedback
I absolutely love this content, please make more such videos simplifying complex concepts in ML.
appreciate it, I do have a follow up planned on how nets solve sequential problems but got very busy and hit a few snags. going to take another run at it soon
👌👌👌 awesome explanation and visual representations…
Your explanation of this is truly astounding.
Thank you, I worked very very hard on this, and so happy people are finding it
@@ArtOfTheProblem I work in the AI field and in neural networks. Just because one works in it, you think you understand everything; then I see your instruction... it is amazing to realize how much I had visualized wrong and how I truly didn't understand it the way I thought I did. Again, excellent work.
so happy that I could help @@gadworx
Such a good explanation; I didn't know anything about neural networks, and still I understood it in full!!
thrilled to hear this! did you watch the whole series?
@@ArtOfTheProblem Yeah, I went there to watch the series, but there were only four videos and this one is the last... I thought there were going to be some more videos... I haven't watched them all yet, but I will! I love your explanations, everything is perfect! You're a great teacher!!💜💜💝💝
@@NiteshKumar-ss8zd thanks so much, I'm still going to make a final video to this series when I get the time and feel like I have a strong thesis for the video
Excellent video!! Thank you!!