I been in this field for years and this is by far one of the best explanations I seen. Thank you for this.
I would have to agree
i think you need join field english
Any advice on how to build a career in the field without a degree?
@@ultimateloser3411 Take clinical psych classes, take classes on quantum physics, take classes on anthropology, take classes on game theory, take classes on Python/Lisp/Lua/C++/C#, and learn the Unity and UE engines.
If you had enough data, could an NN learn anything?
This is such a brilliantly constructed explanation of NNs. Props to you man, this must’ve taken a ton of effort to create.
I agree. Nicely compiled points and explanation of information gathered.
I'm glad you brought up the caveats to the universal approximation theorem. Available data, compute, and architecture drastically narrow the scope of problems that we can approximate to any real degree of success. Approximation errors, estimation errors, and optimization errors all compound to drastically limit the expressive power of neural networks.
Tell that to the managers
@@unh0lys0da16 Naw, Elon Musk still wants us to believe neural networks will solve driving. They are very far away from collecting the entire universe of driving data to train on.
my man altman just walks in with a 7 trillion dollar pitch. hahaha
Such a good video mate, Hope your channel grows exponentially.
hehe jk
As technology advances, AI approaches neuroscience, which is what I find most fascinating about AI. Our brains are just sugar-driven neural networks, tuned to the inputs of everyday life on earth!
I hope one day neuroscience is simulated with computer science.
Joining it with biological systems would be the construction of conscious things.
I've recently become interested in neural networks, and AI in general. This video was spot on for providing a fundamental understanding. The piecewise activation function used to trace along the unknown function was eye opening. Really brought that idea home for me finally. Thanks. Your video is very nicely produced. Right at the top by today's YT standards.
The part where you combined a bunch of line equations was interesting, I've always heard that activation functions are important but I never quite understood what their role is.
ReLU specifically constructs a solution as a bunch of piecewise linear functions.
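That piecewise-linear picture is easy to check directly. A minimal NumPy sketch (kink positions and slopes chosen by hand for illustration, not trained):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# One hidden layer of ReLU units with hand-picked (untrained) weights:
# each unit a * relu(x - c) contributes a single "kink" at x = c,
# so the sum is one piecewise linear function.
kinks = np.array([0.0, 1.0, 2.0])    # where each unit switches on
slopes = np.array([1.0, -2.0, 1.0])  # how much each unit bends the line

def net(x):
    return float(sum(a * relu(x - c) for a, c in zip(slopes, kinks)))

# Linear on each segment, with corners exactly at the kinks:
print([net(x) for x in (-0.5, 0.5, 1.5, 2.5)])  # [0.0, 0.5, 0.5, 0.0]
```

Training just moves the kinks and slopes around until the piecewise line hugs the target function.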
@@ScottBrown124 While this may be technically correct, had I just read your statement it would mean almost nothing to me. And I'm pretty sure this will be true for most people who did not pursue a higher education in mathematics or a related field. Visually showing what this means, including how things work without ReLU, is what does the trick for a non-professional audience.
I was always told that you need the combination of linearity and non-linearity to gain complexity. But this video really did it.
They break linearity in efficient ways, and there is nothing much to them (a lie). Even though this explains what they do, the "why" part needs deeper insights.
Without activation (or with linear activation), an NN can only approximate linear functions, in any dimension. However, most subjects in real life are represented by non-linear functions. So if an NN has any chance of learning some meaningful data set, it needs non-linearity at some point in its calculations. You can use any non-linear function, but most wouldn't be computationally efficient. ReLU is simple enough and non-linear, which is why it is widely used as the activation function in many NN applications. If you wish, you can use sin, cos, polynomials, or other functions and compare your results and performance with the widely used activation functions. Who knows, maybe you'll find the next best activation function :)
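The collapse of linear layers is easy to demonstrate with a tiny hand-built example (weights chosen for illustration): without an activation, two layers multiply out to a single matrix, while one ReLU in between already computes |x|, which no linear map can.

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

W1 = np.array([[1.0], [-1.0]])  # hidden layer: 2 units, 1 input
W2 = np.array([[1.0, 1.0]])     # output layer: 1 unit

# Without activation, the two layers collapse into one linear map:
x = np.array([[3.0]])
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# With ReLU in between, this network computes |x| -- which no linear
# map can, since linearity would require f(a + b) == f(a) + f(b):
f = lambda v: (W2 @ relu(W1 @ np.array([[v]]))).item()
print(f(3.0))            # 3.0
print(f(1.0) + f(-1.0))  # 2.0
print(f(1.0 + -1.0))     # 0.0, not 2.0 -> non-linear
```

Stacking more linear layers without activations never escapes that single-matrix collapse, no matter how deep the network gets.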
This was probably just an aside in the video, but you got me from just knowing the basic shape of the ReLU-Function to why one would use it in a NN really damn quickly.
What a marvelous video. I've been watching tutorials and reading books on machine learning for over a year, but I've never really understood the core concepts... Until now. Thank you
Just found this channel.
Quality of your videos is supreme, and you cover really interesting topics, which I've been studying myself for the better part of last year.
Thank you!
I’ve been binging your videos all night but this video has easily earned my subscription. An absolutely fantastic explanation that perfectly captures the essence of neural networks without being bogged down by the maths.
I started writing neural networks for a research project when I was in medical school in the early 1990's. At the time, I struggled with explaining how they worked to my audience of medical professionals. I wish I had had this video to show them. It would have made things much easier. Bravo!
High audio quality, slow and clear language, great presentation and most of all very informative! I'm glad this was recommended to me
You got me when you said "A neural network is basically a function approximator". No one could explain it more simply than that. This is by far the best explanation of neural networks I have ever come across. Kudos mate!
What simplicity! Looking forward to watching more explanations.
Just came across your video from the feed.
Dude, you've done an incredible job on propping up the basics. Huge appreciation. Waiting for your future videos.
Wow, this wasn't anything new to me, but you did an amazing job of explaining the background and building intuition. Truly one of the best YouTube videos I've seen.
One of the best videos I've watched. I knew nothing about neural networks before this video, and now I feel like someone who actually understands the damn thing. Thank you for the beautiful explanation.
This was so well explained. I love how you briefly dipped into the deep math just giving a taste to do our own searching yet still explaining in depth what is going on.
I was recommended this video and wow, great explanation. I hope your channel blows up because this is top quality neural network content.
Absolutely love these 3blue1brown style videos!
Keep it up man your channel is amazing.
I didn't come here expecting to learn anything, but this video was incredible! I learnt that neural networks are universal approximators and Turing complete, and how multiple layers actually work!
A great voice, a great presentation, and a great explanation. You have everything to make it as a successful channel; how this only has 4k views is beyond me.
edit: also great thumbnail and title
This channel is a treasure. A person who knows how to explain things with great knowledge.
I have been in this field for years... One of the best explanations I have seen till now...
This is by far the best explanation of the subject. I remember trying so hard to grasp the concept at the university. Thanks for putting this one out.
Excellent layout of the topic from basic first principles -- well done! Hoping to find more videos like this on your channel :)
Thank you for this. Your video is, by far, one of the best videos you can find to get an introduction into Neural Networks. Awesome!
Three years after getting into ML and two years into deep learning, and this video is still beyond amazing.
The visualizations are brilliant. Easily one of the most clarifying videos I've seen about NNs, and I've seen many.
I think this is one of the most valuable videos i've watched about understanding what neural networks are actually doing. Neural networks are kind of built upon the union of two of the most fundamental ML algorithms- the linear regression draws the main function, and logistic regression bends the function. More layers, more bends, more nuance in the description of relationships- all now automatable through this recursive optimization process. It's quite beautiful really
Watching this video will clear all the doubts!
Some other videos may try to explain the concept visually and with analogies, but that can often lead to over-fitting, i.e. after watching those videos, you will only remember the analogies and not the actual concept. This one nails it!
What you did here is nothing short of magical. Fantastic explanation, awesome visuals 👌
This is literally the best explanation of neural networks on YouTube
This is the introduction to neural networks I was always hoping for. I'm an engineer, but sadly, neural networks weren't part of my curriculum. But I am interested in learning, and this explanation is a perfect description of the concept of NNs; it really clears up my mind about what this is.
So a neural network is a function approximator, and due to the way it is "built", it can approximate _any_ existing function, no matter how complex, as long as we have enough processing power (and memory).
The diagram with the function where we only see a few dots is a great view of what neural networks do. They have a set of data (a bunch of pictures, for example), and when you ask for a result that isn't in its set, it extrapolates it based on what it does know.
EDIT : I was wondering how a neural network could, for example, study a set of images and return a brand new pic corresponding to its database. I have realized that an image is just a sum of pixels, and a pixel is just a sum of several colors (the famous RGB, with an alpha channel for transparency).
So the "function of an image" looks like this :
(Pixel 1, Color 1) + (Pixel 1, Color 2) + (Pixel 1, Color 3) + (Pixel 1, Alpha) + (Pixel 2, Color 1) + (Pixel 2, Color 2) + (Pixel 2, Color 3) + (Pixel 2, Alpha) + .................
So a neural network can be trained to recognize the "sum of pixels and colors" of images, and then return a similar result! And, of course, the various layers help recognize patterns, like a character in a pic, or the hair, or the hands, and so on...
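For what it's worth, that flattening into a "(pixel, channel)" list is exactly how an image is fed into a plain dense network. A small sketch with a hypothetical 2x2 RGBA image (dummy values, just to show the shape):

```python
import numpy as np

# A hypothetical 2x2 RGBA image: height x width x 4 channels,
# filled with dummy values 0..15.
image = np.arange(16, dtype=np.float32).reshape(2, 2, 4)

# A dense network sees it as one long vector of numbers -- exactly the
# "(Pixel 1, Color 1) + (Pixel 1, Color 2) + ..." list described above.
x = image.flatten()
print(x.shape)  # (16,)
```

Convolutional layers are what let later layers pick up on spatial patterns like hair or hands instead of treating the vector as an unstructured list.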
Seen many such videos explaining how neural networks work over the years, but this is the first time I feel I actually understand it.
What an absolutely wonderful, clear explanation. Thanks!
Brilliant job. One of the best videos introducing the basics of neural networks on YouTube.
Sometimes you may want to use a simpler approximation with a neural network instead of the complete known function, since the function is too complex and computational intensive. Like in the case of acoustic simulations.
Fluid dynamics is another example. Mandelbrot set rendering. Cayley graph (in group theory) diameter calculation and pathfinding (optimally solving a Rubik’s cube, for instance, can take several minutes). Markov decision processes. There are so many things that neural networks can help us to solve or understand better.
This is actually a great point as well. There's a small set of research that has recently delved into using neural networks to approximate the solutions for NP-Hard problems. That's probably one of the most intuitive ones to think about; NP-Hard problems are quite literally defined as being complex and computationally intensive :P
Hands down, this is the best video explaining neural networks.
Really interesting and high-quality video. Keep it up!
I was watching videos about Transformers and accidentally clicked on this video. Very concise and enjoyable video on the topic of neural networks.
Great video! I liked how you made sure to talk about the major limitations of Neural Networks.
The main thing holding NNs back is how long it takes to train, and how inefficient their computation is. But the thing is, that just turns it into a problem that can be solved by scaling up hardware, which is not something that you can do with genius programmers. With machine learning, you can potentially solve any problem by just throwing money at it.
One of THE best explanations of neural networks and how they actually work.
The best explanation I've seen
Indeed the best entry level explanation for Neural Networks I have ever seen. Thank you for this amazing video.
Max, any chance you could do one on the Transformer architecture? It's the core of most modern machine learning and I haven't seen an intuitive explanation. It's increasingly clear that it has remained mostly unchanged and undefeated. It would be incredible to hear your breakdown of it!
The visuals you presented are the most eye-opening I've seen in all the ML courses I've taken.
We need more videos like this. Most YT science videos are purely in layman's terms. This is more like learning a real scientific explanation 🧬 🧠
That’s a great way to explain a complex concept👏🏼
I'm gonna use fuzzy logic
Apart from the clear explanation throughout, kudos 👏 on the important remark ( _deep breath_ ) at 8:31 precisely mentioning when a neural network cannot learn the function and when it _should not_ be used to learn the function
Right...!!
It's not learning, it's conforming to the result you want. Billions of small dials, controlled by a logic algorithm. It's no magic, just a lot of numbers!
Sounds the same as billions of small neurons, controlled by a genetic algorithm. Sounds like learning to me.
It's debatable.
So, what is the difference between that and learning?
Subbed. That thumbnail of the TensorFlow Playground spiral is why I watched. I've only dabbled in other tools or unfinished projects, so I've never gotten the thing to cooperate, and I forgot all about it.
great video! looks a lot like 3blue1brown :D
Truly one of the BEST VIDEOS I've watched on Neural Networks....so very well explained!
can they learn how to train themselves and to modify their own code to make themselves better?
Sure, to a degree. Software can't fully optimize itself because a computer is a living ecosystem with dynamic properties. One moment you want to optimize for something, the next for something else. As learning takes time, it will always take *more* time to continuously optimize than to brute force. That is to say, LEARNING only accelerates REPETITIVE PROBLEMS. "Become better all the time" sounds repetitive but isn't a well-defined *problem*. So there's a practical limit to the usefulness of self-optimization.
Dude, I learned so much from this video, I was actually shocked. I've watched 30-minute videos and was still left with so many questions, but this one's amazing ❤
Fun fact: You can't learn everything.
Yes, but you can learn anything.
You can learn everything, it just has to be correct.
Could you make sense of it, though? I'd love feedback on cosmo knowledge yt and the Reddit grand unified theory.
No human, I think, can learn everything (physical phenomena), because of the nature of the human brain. Some neurons are isolated, some aren't; there are mini neural networks inside neural networks inside human brains. Maybe.
I’m glad someone finally said it. You can’t learn everything, there is only a small amount of things actually worth knowing.
I haven't really known what neural networks were until I watched this video. Thanks for furthering my understanding!
Can't learn an XOR 😂😂😂
That’s not true, it’s even feasible to manually construct a neural network with just 3 nodes for such a simple task.
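One well-known construction along those lines (weights written by hand, no training needed): two ReLU hidden units plus one linear output unit suffice for XOR.

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

# Hand-built 3-node network: 2 hidden ReLU units + 1 linear output.
#   h1 = relu(x1 + x2)      -- fires when either input is on
#   h2 = relu(x1 + x2 - 1)  -- fires only when both inputs are on
#   out = h1 - 2*h2         -- cancels the "both on" case
def xor_net(x1, x2):
    h1 = relu(x1 + x2)
    h2 = relu(x1 + x2 - 1)
    return h1 - 2 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # matches a XOR b
```

The famous "perceptrons can't do XOR" result only applies to a single layer; one hidden layer with a non-linearity is already enough.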
I didn't have any prior knowledge of neural networks and their workings, but a high-level overview like this makes it much more sensible how these networks work, without involving too much background information: just weights and biases, with predetermined inputs and outputs, to get the function that describes them, adding more and more neurons if that's what the approximation needs. This is fascinating; I'd love to go deeper. Veritasium talked about this concept, but with a different approach. Really fascinating stuff.
Clear from start to finish.
This is a great way to explain neural networks because many people understand functions
Wow what a channel, just checked the rest of the videos and got excited for the upcoming weekend to sit down and take some notes on all your videos! Get ready for a lot of growth, looks like the algorithm picked you up!
Man, for real, the visual graphical representation of the sum of piecewise functions left me speechless, so freaking good. For someone with a bad foundation in math, that representation was just everything I needed to understand perhaps one of the most pivotal foundations of complex neural networks. I don't consider myself dumb, but I've watched several videos that only explain the concept from an abstract perspective, or else assume that we already know some mathematical concepts. With that visual representation, I can finally tell myself that I have understood how the inner world, the magic box, really works. Thank you
You made the complex (almost) simple. Outstanding job, bravo, and thank you very much 👏🏽
I'd heard of ReLU a lot, but no one provided a good explanation... this was an aha! moment. Thank you for rectifying some linear units in my brain. Now I can dive deep into learning.
This is awesome! The simplest explanation I've ever seen, yet it doesn't lack any detail. Kudos to you.
I knew it would be turing complete. Thanks immensely for this
Throughout the whole video, I thought this guy had at least 100k subs.
Loved every second of this video.
Wow!!!! Out of all the videos I've watched on deep learning, this video simplified it the best. Amazing work on making this idea easy to understand!
WOW!! Truly amazing. I got to understand what I couldn't in many years of reading papers and watching tutorials. Big thanks to you.
This is one of the best explanations of neural networks and machine learning I've ever come across. Especially the part at 1:00–1:30.
For the past days, I've been thinking about how to explain a neural network, and realized that it's basically a really dumb and slow way of learning functions (that is, if you already know the function!) - the fact that this video introduces NNs exactly so is really affirming. Also, I finally understood what activation functions really are - they don't "activate" per se, but rather restrict activation in order to maintain complexity.
So thank you for this video, this explanation of Neural Networks is by far the best I've ever come across.
This is amazing, the best explanation of neural networks I've seen. Saved it to watch again and again; I might need it.
Damn! What an explanation that was. Love you bro, from the bottom of my heart, keep it up. This is by far the best explanation I have seen 🤩🤩😘😘😘😘😍😍😍😍😍😍😍😍
What a coincidence, I was looking for how to approximate functions with ANNs but couldn't find anything about it. Thanks for the video.
This is one of the best and clearest explanations. Thank you.
This is a fantastic explanation of what neural networks really do!!!! It's insane! After years of confusion, knowing the ins and outs and technicalities, I've never understood AI more than with the first 2 minutes of this video!
The easiest-to-grasp explanation I have come across till now, sir. Thank you 🙏
This channel is gonna grow like crazy if the Emergent Garden keeps this up!
I believe I now have some fundamental ideas about how and why neural networks work.
Thanks a lot! A great video.
The exact info I was looking for, finally straightforward! Thank you!
Nice video. May the creator of this video live long.
I wish I could have seen this video when I just started learning Machine learning.
Amazingly explained
The relu part is amazing, renewed my understanding of it, thanks!
I'm glad you stated "learn almost anything"; a machine can never be conscious.
Your explanation is brilliant on so many different levels. Thanks for this.
I subscribed just by the title, thumbnail and channel name. Nice branding
Your editing skills and descriptive ability gained you a subscriber. Very good
Finally I have a compact video that I can share with friends and family so they can more or less understand machine learning.
thank u :)
This channel deserves more viewers. Thx for the clear explanations.
Loving the manim animations!
FUNCTIONS Describe the WORLD! - Thomas Garrity
Indeed the BEST explanation on YouTube (even better than 3Blue1Brown; no disrespect intended, of course). This SINGLE video describes so many concepts that usually require a whole series of videos.
Also, his choice of words is IMPECCABLE!
If it describes everything then it describes nothing.
Please continue making more videos; you have an excellent talent for creating graphics that make the topic understandable.
This is how I was thinking about NNs the whole time: it's all about functions. But I've never seen anything like that in lectures.
This is awesome! Quality of your videos is supreme thanks!
The explanation and video quality is crazy good. Your voice is so nice too!
Amazing explanation! Thanks 👍 Neural networks are indeed incredible!!