But what is a neural network REALLY?

  • Published 16 Jun 2024
  • My submission for 2022 #SoME2. In this video I try to explain what a neural network is in the simplest way possible. That means no linear algebra, no calculus, and definitely no statistics. The aim is to be accessible to absolutely anyone.
    00:00 Intro
    00:47 Gauss & Parametric Regression
    02:59 Fitting a Straight Line
    06:39 Defining a 1-layer Neural Network
    09:29 Defining a 2-layer Neural Network
    Part of the motivation for making this video is to try to dispel some of the misunderstandings around #deeplearning and to highlight 1) just how simple the neural network algorithm actually is and 2) just how NOT like a human brain it is.
    I also haven't seen Gauss's original discovery of parametric regression presented anywhere before, and I think it's a fun story to highlight just how far (and how little) data science has come in 200 years.
    ***************************
    In full disclosure, planets do not orbit in straight lines, and Gauss did not fit a straight line to Ceres' positions, but rather an ellipse (in 3d).

COMMENTS • 229

  • @dsagman • 1 year ago +211

    “Do neural networks work because they reason like a human? No. They work because they fit the data.” You should have added “boom. mic drop.” Excellent video!

    • @LuisPereira-bn8jq • 1 year ago +33

      Can't say I agree. I really liked the video as a whole, but that "drop" was the worst part of the video to me, since it's a bit of a strawman, for at least two reasons:
      - knowing what a complex system does "at a foundational level" is very far from allowing you to understand the system. After all, Biology is "just" applied Chemistry which in turn is "just" applied Physics, but good luck explaining any complex biological system from physical principles alone.
      - much of what humans do doesn't use "reason" at all. A few years back I decided to start learning Japanese. And I recall that for the first few months of listening to random native Japanese speakers I'd have trouble even correctly identifying the syllables of their words. But after some time and more exposure to the sounds, grammar, and speech patterns, that gradually improved. Yet that improvement had little to do with me *reasoning* about the language, and was largely an unconscious process of my brain getting better at pattern recognition in the language.
      At least when it comes to "pattern recognition" I see no compelling reason to declare that humans (and animals, for that matter) are doing anything fundamentally different from neural networks.

    • @algorithmicsimplicity • 1 year ago +45

      My comments about neural networks reasoning were in response to some of the recent discussions about large language models being conscious. My impression is that these discussions give people a wildly inaccurate view of what neural networks actually do. I just wanted to make it clear that all neural networks do is curve fitting.
      Sure you can say "neural networks are a function that map inputs to outputs" and "humans are a function that map inputs to outputs", therefore they are fundamentally doing the same thing. But there are important differences between humans and neural networks. For one thing, in the human's case the function is not learned by curve fitting. It is learned by Bayesian inference. Humans are born with an incredible amount of prior knowledge about the world, including what types of sounds human language can contain. This is why you were able to learn to recognize Japanese sounds in a few months, where it would take a neural network the equivalent of thousands of years worth of examples.
      If you want to say that neural networks are doing the same thing as humans that's fine, but you should equally be comfortable saying that random forests are doing the same thing as humans.

    • @danielguy3581 • 1 year ago +16

      @@algorithmicsimplicity Whatever mechanism underlies human cognition, if it begets the same results as a neural network, then it can be said to also "merely" perform curve fitting. Whether that can also be described in terms of Bayesian inference would not invalidate that. Similarly, it is not helpful to state that there's nothing to understand or use as a model in neurobiology since it is just atoms minimizing energy states.

    • @charletsebeth • 1 year ago +1

      Why ruin a good story with the truth?

    • @revimfadli4666 • 1 year ago +2

      @@LuisPereira-bn8jq aren't you making a strawman yourself?
      Also wouldn't your language example still count as his "learning abstract hierarchies and concepts"?

  • @BurkeMcCabe • 1 year ago +181

    BRUH. This video gave me that amazing feeling when something clicks in your brain and everything all of a sudden makes sense! Thank you I have never seen neural networks explained in this way before.

  • @Weberbros1 • 1 year ago +140

    I was expecting a duplicate of many other neural network videos, but this was a perspective that I have not seen before! Awesome video!

  • @matthewfynn7635 • 3 days ago +1

    I have been working with machine learning models for years, and this is the first time I have truly understood, through visualisation, the use of ReLU activation functions! Great video

  • @sissyrecovery • 11 months ago +15

    DUUUUUUDE. I watched all the people you'd expect, 3Blue1Brown, StatQuest etc. I was so lost. Then I was ranting to a friend about how exasperated I was, and he sent me this video. BAM. Everything clicked. You're the man. Also, love how you ended it with a banger.

  • @AdobadoFantastico • 1 year ago +57

    Getting some bit of math history context makes these extra enjoyable. Great video, explanation, and visualization.

  • @newchaoz • 1 year ago +18

    This is THE best intuition behind neural networks I have ever seen. Thanks for the great video!

  • @orik737 • 1 year ago +11

    Oh my lord. I've been struggling with neural networks for a while, and I've always felt like I had a decent grasp on them, but this video finally brought everything together. Beautiful introduction.

  • @DanaTheLateBloomingFruitLoop • 1 year ago +31

    Simply and elegantly explained. The bit at the end was superb.

    • @zenithparsec • 1 year ago

      Except this just described one activation function, and did not show how it generalizes to all neural networks. Being so accessible means it couldn't explain ReLU in context.
      Don't get me wrong, it's a good explanation of how some variants of the ReLU activation function work, but it doesn't explain what a neural network really is, nor prove that your brain doesn't work by fitting data in a similar way.
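
      To make the bent-lines picture concrete, here is a minimal NumPy sketch (weights and biases are made up for illustration, not taken from the video) of a 1-layer ReLU network in the video's sense: each hidden unit is a line bent at one point, and their weighted sum is a piecewise-linear curve.

          import numpy as np

          def relu(z):
              return np.maximum(0.0, z)

          def one_layer_net(x, w1, b1, w2, b2):
              hidden = relu(np.outer(x, w1) + b1)  # each column: one "bent line" hidden unit
              return hidden @ w2 + b2              # weighted sum of bent lines

          # Made-up parameters, chosen only to show the shape of the computation.
          w1 = np.array([1.0, 1.0, -1.0]); b1 = np.array([0.0, -1.0, 0.5])
          w2 = np.array([0.5, -1.0, 0.8]); b2 = 0.2

          xs = np.linspace(-2.0, 2.0, 9)
          print(one_layer_net(xs, w1, b1, w2, b2))  # piecewise-linear in x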

  • @Stone87148 • 1 year ago +15

    Building an intuitive understanding of the math behind neural networks is so important.
    Understanding the application of a NN gets the job done; understanding the math behind a NN makes the job fun. This video helps with the latter! Nice video!

  • @bogdanmois3998 • 12 days ago +1

    You've put things in a different perspective for me and I loved your explanation! Great job!

  • @gregor-alic • 1 year ago +26

    Great video!
    I think this video finally shows what I was waiting for, namely the intuitive purpose of multiple neurons / layers in a neural network.
    This is the first time I have actually seen it explained clearly, good job!

  • @sharkbaitquinnbarbossa3162 • 1 year ago +11

    This is a really great video!! Love the approach with parametric regression.

  • @stefanzander5956 • 1 year ago +7

    One of the best introductory explanations about the foundational principles of neural networks. Well done and keep up the good work!

  • @PowerhouseCell • 1 year ago +13

    This is such a cool way of thinking about it! You did an amazing job discussing a popular topic in a refreshing way. I can't believe I just found your channel - as a video creator myself, I understand how much time this must have taken. Liked and subscribed 💛

  • @williamwilkinson2748 • 1 year ago +3

    The best video I have seen at giving one an understanding of neural nets. Thank you. Excellent; looking forward to more from you.

  • @aloufin • 21 days ago +2

    Amazing viewpoint for an explanation. Would've loved an additional segment using this viewpoint to do MNIST image recognition.

    • @algorithmicsimplicity • 21 days ago +1

      I explain how this viewpoint applies in the case of image classification in my video on CNNs: ua-cam.com/video/8iIdWHjleIs/v-deo.html

  • @igorbesel4910 • 1 year ago +5

    Made my brain go boom. Seriously, thanks for sharing this perspective!

  • @napomokoetle • 8 months ago

    This is the clearest video I've ever seen on YouTube about what a neural network is. Thank you so much... you are a star. Could I perhaps encourage you to create, for those of us keen on learning neural networks on our own, a video practically illustrating the fundamental difference between supervised, unsupervised, and reinforcement learning?

  • @PolyRocketMatt • 1 year ago +5

    This might actually be the clearest perspective on neural networks I have seen yet!

  • @metrix7513 • 1 year ago +5

    Like someone else said, I expected the video to be similar to all the others, but this one gave me so much more, very nice.

  • @dfparker2002 • 5 months ago +1

    Best explanation of parametric calcs ever! Bias & weights have new meaning.

  • @AyrtonGomesz • 15 days ago +1

    Great work. This video just updated my definition of awesomeness.

  • @jcorey333 • 4 months ago +1

    Your channel is really amazing! Thanks for making videos.

  • @paulhamacher773 • 18 days ago +1

    Brilliant explanation! Very glad I stumbled on your channel!

  • @symbolsforpangaea6951 • 1 year ago +2

    Amazing explanations!! Thank you!!

  • @videos4mydad • 5 months ago +1

    This is the best video I have ever seen on the internet that describes what a neural network actually is.
    The best and most powerful explanations are those that give you the intuitive meaning behind the math, and this video does it perfectly.
    When a video describes a neural network by jumping into matrices and talking about subscripts i and j, it's just covering the mechanics and does absolutely nothing to make you understand what you're reading.
    Unfortunately, this is how most textbooks approach the subject, and it's also how many content creators approach it as well.
    This type of video only comes from someone who understands things so deeply that they're able to explain them in a way that involves almost zero math.
    I consider this video one of the true treasures of YouTube when it comes to artificial intelligence education.

  • @ArghyadebBandyopadhyay • 1 year ago +2

    THIS was the missing piece of the puzzle I was looking for. This video helped me a lot. Thanks.

  • @karkunow • 10 months ago +3

    Thank you! That is a really brilliant video!
    I have been using regressions often, but never knew that a neural network is kinda the same idea.
    Very enlightening!

  • @doublynegative9015 • 1 year ago +12

    Just watched Sebastian Lague's video on Neural Networks the other day, and whilst great as always, it was _such_ a standard method of explaining them, because mostly I just see this explained in the same way each time. This was such a nice change, and really provided me with a different way to look at this. Seeing 'no lin-alg, no calc, no stats' really concerned me, but you did a great job just by explaining different parts. Such a great explanation - would recommend to others.

  • @ChrisJV1883 • 9 months ago

    I've loved all three of your videos, looking forward to more!

  • @jorgesolorio620 • 1 year ago +1

    Where has this video been all my life! Amazing, simply amazing! We need more, please.

  • @srbox2 • 6 months ago

    This is flat out the best video on neural networks on the internet, provided you are not a complete newbie. Never have I had such an "ahaaa" moment. Clear, concise, easy to follow, going from zero to hero effortlessly. Bravo.

  • @Deletaste • 1 year ago +5

    And with this single video, you earned my subscription.

  • @illeto • 29 days ago +1

    Fantastic video! I have been working with econometrics, data science, neural networks, and various kinds of ML for 20 years, but never thought of ReLU neural networks as just a series of linear regressions until now!

  • @some1rational • 1 year ago

    Great video, this is an explanation I have not heard before. Also I don't know if that abrupt ending was purposefully sarcastic, but I thoroughly enjoyed it lol

  • @dineshkumarramasamy9849 • 1 month ago +1

    I always love to get the history lesson first. Excellent.

  • @Muuip • 8 months ago

    Another great concise visual explanation!
    Thank you!👍

  • @DmitryRomanov • 1 year ago +1

    Thank you!
    Really beautiful point about layers and the exponential growth of the number of segments one can make!

  • @ultraFilmwatch • 9 months ago +1

    Thank you thousands of times, you excellent teacher. Finally, I saw a high-quality and clear explanation of neural networks.

  • @bisaster5471 • 1 year ago

    480p in 2022 surely takes me back in time. I love it!!

  • @martinsanchez-hw4fi • 1 year ago +3

    Good one! Nice video. Gauss's regression does not take the perpendicular distances, though. But very cool video!

  • @andanssas • 1 year ago +1

    Great concise explanation, and it does work: it fits at least my brain's data like a glove! Not that I have a head shaped like a hand (or do I?), but you did light up some bulbs in there after watching those line animations fitting better and better.
    However, what happens when the neural network fits too well?
    If you can briefly mention the overfitting problem in one of your next episodes, I'd greatly appreciate it. Looking forward to the CNN and transformer ones! 🦾🤖

  • @Justarandomguyonyoutube12345 • 9 months ago +1

    I wish I could like the video more than once...
    Great job, buddy.

  • @KIRA_VX • 3 months ago

    IMO one of the best explanations of the idea/fundamental concept of a NN. Please make more 🙏

  • @qrubmeeaz • 9 months ago +1

    Careful there! You should explicitly mention that you are taking the absolute values of the errors. (Usually we use squares). Without the squares (or abs), the positive and negative errors will kill each other off, and the simple regression does not have a unique solution. Without the squares (or abs), you can start with any intercept, and find a slope that will give you ZERO total error!!
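
    A quick sketch of that claim, with made-up data (not the video's code): if you sum the signed errors, then for any intercept you can solve for a slope that zeroes the total, so the fit is not unique.

        import numpy as np

        rng = np.random.default_rng(0)
        xs = np.linspace(0.0, 10.0, 20)
        ys = 2.0 * xs + 1.0 + rng.normal(0.0, 1.0, xs.size)  # noisy made-up line

        def total_signed_error(slope, intercept):
            # No abs or squares: positive and negative errors cancel.
            return float(np.sum(ys - (slope * xs + intercept)))

        # Setting sum(ys) - m*sum(xs) - n*c = 0 gives m = (sum(ys) - n*c) / sum(xs),
        # so EVERY intercept c admits a slope with zero total signed error.
        for c in (0.0, 5.0, -100.0):
            m = (ys.sum() - xs.size * c) / xs.sum()
            print(c, round(m, 3), round(total_signed_error(m, c), 9))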

  • @orangemash • 1 year ago

    Excellent! First time I've seen it explained like this.

  • @aravindr7422 • 9 months ago

    Wow, very good. Keep posting great content like this. You have a real knack for reducing complex topics to simpler versions. There are people who post content just for the sake of posting and minting money; we need more people like you.

  • @ibrahimaba8966 • 8 days ago +1

    Thank you for this beautiful work!

  • @tunafllsh • 1 year ago +2

    Wow, this is a really interesting view of neural networks and the role layers play in them.

  • @yonnn7523 • 8 months ago

    Wow, ReLU is an unexpected starting point for explaining NNs, but it nicely demonstrates the flexibility of summing up weighted non-linear functions. Such a refreshing way!

  • @metanick1837 • 10 months ago

    Nicely explained!

  • @MrLegarcia • 1 year ago +2

    This straightforward method of explaining can save thousands of kids from dropping out of school "due to math".

  • @geekinasuit8333 • 8 months ago

    I was wondering myself exactly what a simulated NN is actually doing (not what it is, but what it is doing), and this explanation is the best by far, if not THE answer. One adjustment I will suggest: at the end, explain that a simulated NN is not required at all and that alternative systems can also perform the same function, which raises the question: what exactly are the fundamental requirements for line fitting to occur? Yes, I like to generalize and get to the fundamentals.

  • @gbeziuk • 1 year ago

    Great video. Special thanks for the historical background.

  • @xt3708 • 9 months ago +1

    This makes total sense, thank you. Regarding the last observation of the video, how does that reconcile with statements from the OpenAI team about emergent properties of GPT-4 that they didn't expect or don't comprehend? I might be mixing apples and oranges, but if it's just curve fitting, then why has something substantially changed?

  • @ward_heimdal • 9 months ago

    Hands down the most enlightening ANN series on the net from my perspective, afaik. I'd be happy to pay 5 USD for the next video in the series.

  • @colebrzezinski4059 • 8 months ago

    This is a really good explanation

  • @is_this_youtube • 1 year ago +1

    This is such a good explanation

  • @4.0.4 • 9 months ago

    This is the second video of yours I've watched that gives me a eureka moment. Fantastic content. One thing I don't get is, people used to use the sigmoid function before ReLU, right? Was it just because natural neurons work like that and artificial ones were inspired by them?

    • @algorithmicsimplicity • 9 months ago

      Yes sigmoid was the most common activation function up until around 2010. The very earliest neural networks back in the 1950s all used sigmoid, supposedly to better model real neurons, and nobody questioned this choice for a long time. Interestingly, the very first convolutional neural network paper in 1980 used ReLU, and even though it was already clear that ReLU performed better than sigmoid back then, it still took another 30 years for ReLU to catch on and become the most popular choice.

  • @geekinasuit8333 • 8 months ago

    Another explanation that's needed is the concept of gradient descent (GD), the general method used to figure out the best fit. Lots of systems use GD, including natural evolution; it's basically trial and error with adjustments, although there are various ways to make it work more efficiently, which can become quite complicated. You can even use GD to figure out better forms of the GD algorithm; that is, it can be used recursively on itself.
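
    For what it's worth, a minimal sketch of the idea (made-up data, plain NumPy, not the video's code): gradient descent fits y = m*x + c by repeatedly nudging m and c against the gradient of the mean squared error.

        import numpy as np

        xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        ys = 3.0 * xs - 2.0          # data generated from a known line

        m, c = 0.0, 0.0              # arbitrary starting guess
        lr = 0.02                    # learning rate (step size)

        for _ in range(2000):
            err = (m * xs + c) - ys
            grad_m = 2.0 * np.mean(err * xs)  # d/dm of mean squared error
            grad_c = 2.0 * np.mean(err)       # d/dc of mean squared error
            m -= lr * grad_m
            c -= lr * grad_c

        print(m, c)  # converges toward 3.0 and -2.0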

  • @SiimKoger • 1 year ago

    Might be the best and most rational neural networks video I've seen on YouTube 🤘🤘

  • @lewismassie • 1 year ago +1

    Oh wow. This was so much more than I was expecting. And then it all clicked right in at about 9:45

  • @karlbooklover • 1 year ago

    Most intuitive explanation I've seen.

  • @ButcherTTV • 25 days ago +1

    good video! very easy to follow.

  • @bassemmansour3163 • 1 year ago

    👍 Super demonstration! How did you generate the graphics? Thanks!

  • @Gravitation. • 1 year ago +6

    Beautiful! Could you do this type of video on other machine learning models, such as convolutional networks?

  • @hibamajdy9769 • 8 months ago

    Nice interpretation 😊. Please can you make a video explaining how neural networks are used in, for example, digit recognition?

  • @Scrawlerism • 1 year ago

    Damn, you need and deserve more subscribers!

  • @pimwpefaiwemna • 1 year ago +1

    That's a great explanation

  • @johnchessant3012 • 1 year ago +1

    Great video!

  • @redpanda8961 • 1 year ago +1

    great video!

  • @saysoy1 • 1 year ago +2

    I loved the video; would you please make another one explaining backpropagation?

    • @algorithmicsimplicity • 1 year ago +5

      Hopefully I will get around to making a back propagation video sometime, but my immediate plans are to make videos for CNNs and transformers.

    • @saysoy1 • 1 year ago +1

      @@algorithmicsimplicity just don't stop man!

  • @willturner1105 • 1 year ago +1

    Love this!

  • @LegenDUS2 • 1 year ago

    Really nice video!

  • @garagedoorvideos • 1 year ago +2

    8:47 --> 9:21 is like watching my brain while I predict some trades. 🤣🤣🤣 "The reason why neural networks work... is that they fit the data." Sweet stuff.

  • @StephenGillie • 1 year ago

    Having worked with a simple single-layer 2-synapse neuron in a spreadsheet, I find this video vastly overexplains the topic at a high level, while not going into enough detail. It does, however, go over the linear regression needed for the synapse weight updates. Also it treats the massive regression testing as a benefit instead of a cost.
    One synapse per neuron in the layer above, or per input if the top layer.
    One neuron per output if the bottom layer.
    Middle layers define resolution, from this video at a rate of (neurons per layer)^(layers).
    Fun fact: Neural MAC (multiply-accumulate) chips can perform whole racks' worth of computation. The efficiency gain here isn't so much in speed as in the reduction of power and space, by rearranging the compute units and using analog accumulation. In this way the MAC units more closely resemble our own neurons too.

  • @dasanoneia4730 • 9 months ago

    Thanks, I needed this.

  • @oleksandrkatrusha9882 • 9 months ago

    Amazing!

  • @lollmao249 • 3 months ago +1

    This is EXCELLENT and the best video explaining intuitively what a neural network does. You are seriously brilliant.

  • @DaryonGaming • 1 year ago

    I'm positive I only got this recommended because of Veritasium's FFT video, but thank you, YouTube algorithm, nonetheless. What a brilliant explanation!

  • @Null_Simplex • 4 months ago

    Thank you. This is far more intuitive than the usual interpretation, a graph of nodes and edges with the inputs bouncing back and forth between layers until it finally produces an output.
    What are the advantages and disadvantages of this method of approximating a function compared with polynomial interpolation?

    • @algorithmicsimplicity • 4 months ago +1

      For 1-dimensional inputs and outputs, there isn't much difference between them. For higher-dimensional inputs polynomials become infeasible, since a polynomial would need coefficients for all of the interaction terms between the input variables (of which there are exponentially many). For this reason, neural nets are preferred when the input is high dimensional, as they simply apply a linear function to the input variables, and then an activation function to the result of that.
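
      To put rough numbers on "exponentially many" (a standard counting fact, not from the video): a polynomial of degree at most d in n variables has C(n+d, d) coefficients, one per monomial, which explodes with the input dimension.

          from math import comb

          def num_coefficients(n_vars: int, degree: int) -> int:
              # One coefficient per monomial of total degree <= degree.
              return comb(n_vars + degree, degree)

          print(num_coefficients(1, 5))    # 6: trivial in one dimension
          print(num_coefficients(100, 5))  # 96,560,646: already impractical
          print(num_coefficients(784, 5))  # ~2.5e12: hopeless for MNIST-sized inputs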

  • @stefanrigger7675 • 1 year ago

    Top-notch video; one thing you might have mentioned is that you only deal with the one-dimensional case here.

  • @scarletsence • 1 year ago

    Actually, adding a bit of math to this video wouldn't hurt, as long as you pair it with visual representations of the graphs and formulas. But anyway, one of the most accessible explanations I have ever seen.

  • @kwinvdv • 1 year ago

    Neural network "training" is just model fitting. It is just that the proposed structure of it is just quite versatile.

  • @sciencely8601 • 23 days ago +1

    god bless you for this work

  • @koderksix • 8 months ago

    I like this video so much.
    It really shows that ANNs are, at the end of the day, just glorified multivariate regression models.

  • @abdulhakim4639 • 1 year ago

    Whoa, easy to understand for me.

  • @TheCebulon • 1 year ago

    Do you have a video of how to apply this to train a neural network?
    Would be awesome.

  • @MARTIN-101 • 1 year ago

    phenomenal

  • @borisbadinoff1291 • 6 months ago

    Brilliant! Lots to unpack from the concluding sentence: neural networks work because they fit the data. Sounds like an even deeper issue than misalignment due to proxy-based training.

  • @HitAndMissLab • 9 months ago

    You are a math God! . . . subscribed

  • @andriworld • 1 year ago +3

    I love this revisionist perspective! Let's forget all the decades we spent using activation functions other than ReLU.

    • @xt3708 • 9 months ago +1

      expound plz without sarcasm

    • @andriworld • 9 months ago

      @@xt3708 I didn't mean to be entirely sarcastic, as I truly love this video's perspective. It's my understanding that it wasn't until 2011 that we realized the ReLU activation function works so well, and that it was by experimentation, so I assume we didn't have the concepts in this video for many years prior, and only understand them now (I am not an expert though, so don't quote me please).

  • @theplotproject5911 • 1 year ago

    this is gonna blow up

  • @spadress • 8 months ago

    Very good video

  • @talsheaffer9988 • 1 month ago +1

    Thanks for the vid! At about 10:30 you say a NN with n neurons in each of L layers expresses ~ n^L linear segments. Could this be a mistake? I think it's more like n^2 * L

    • @algorithmicsimplicity • 1 month ago

      The number of different linear segments is definitely at least exponential in the number of layers, e.g. proceedings.neurips.cc/paper_files/paper/2014/file/109d2dd3608f669ca17920c511c2a41e-Paper.pdf
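
      One way to see this empirically, as a sketch (random Gaussian weights and an assumed small architecture, not the paper's construction): count the linear pieces of a ReLU net along a 1-d input by tracking where the hidden units' on/off pattern changes. Note that random weights typically give far fewer pieces than the carefully chosen weights the exponential lower bounds use.

          import numpy as np

          rng = np.random.default_rng(0)

          def count_segments(n_per_layer, n_layers, n_samples=100_000):
              sizes = [1] + [n_per_layer] * n_layers
              weights = [rng.normal(size=(sizes[i], sizes[i + 1])) for i in range(n_layers)]
              biases = [rng.normal(size=sizes[i + 1]) for i in range(n_layers)]

              h = np.linspace(-5.0, 5.0, n_samples).reshape(-1, 1)
              bits = []
              for W, b in zip(weights, biases):
                  pre = h @ W + b
                  bits.append(pre > 0)           # on/off pattern of this layer's units
                  h = np.maximum(0.0, pre)
              pats = np.concatenate(bits, axis=1)

              # Adjacent inputs with different on/off patterns lie on different linear pieces.
              changes = np.any(pats[1:] != pats[:-1], axis=1)
              return int(changes.sum()) + 1

          for depth in (1, 2, 3):
              print(depth, count_segments(n_per_layer=8, n_layers=depth))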

  • @dann_y5319 • 2 months ago +1

    Omg great video

  • @justaname999 • 1 month ago +1

    This is a really cool explanation I haven't seen before.
    But I have two questions:
    Where does overfitting fit in here? Would more neurons mean a higher risk of overfitting? Do layers help, or are they unrelated?
    And where would co-activation of multiple neurons fit in this explanation? E.g., the combination of information from multiple sensory sources?

    • @algorithmicsimplicity • 1 month ago

      My video on CNNs talks about overfitting and how neural networks avoid it (ua-cam.com/video/8iIdWHjleIs/v-deo.html ) . It turns out that actually the more neurons and layers there are, the LESS neural nets overfit, but the reason is pretty unintuitive.
      From the neural nets perspective, there is no such thing as multiple sensory sources. Even if your input to the NN combines images and text, the neural net still just sees a vector as input, and it is still doing curve fitting just in a higher dimensional space (dimensions from image + dimensions from text).
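
      A tiny sketch of the "still just a vector" point (the feature sizes below are made up): whatever the modalities, the network's input is one concatenated vector, one point in a single high-dimensional space.

          import numpy as np

          rng = np.random.default_rng(0)
          image_features = rng.random(512)  # stand-in for image dimensions (assumed size)
          text_features = rng.random(128)   # stand-in for text dimensions (assumed size)

          # The network never sees "image" and "text" as separate senses:
          x = np.concatenate([image_features, text_features])
          print(x.shape)  # (640,) -- curve fitting happens in this 640-d space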

    • @justaname999 • 1 month ago

      @@algorithmicsimplicity Thank you! I had read that more neurons lead to less overfitting and thought it was counterintuitive, but I guess that must have carried over from the regular modeling approach, where variables remain (or should remain) interpretable.
      I'll have a look at the other videos! Thanks
      I guess my confusion stems from what you address at the end. We can fairly simply imitate some things via principles like Hebbian learning, but the fact that in actual brains it involves different interconnected systems makes me stumble. (And it shouldn't, because obviously these models are not actually like real brains.)

  • @promethful • 1 year ago

    Is this piecewise linear approximation of a network a feature of using the ReLU activation function? What if we use a sigmoid activation function instead?
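
    For what it's worth, a small sketch pointing at the answer (illustrative made-up weights, not from the video): the same tiny network is piecewise linear with ReLU, but a smooth sum of S-curves with sigmoid, so the piecewise-linear picture is specific to ReLU-style activations.

        import numpy as np

        def relu(z):
            return np.maximum(0.0, z)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # One hidden layer, three units; same made-up weights, two activations.
        w1 = np.array([2.0, -1.5, 1.0]); b1 = np.array([0.0, 1.0, -0.5])
        w2 = np.array([1.0, 0.8, -1.2]); b2 = 0.1

        xs = np.linspace(-3.0, 3.0, 7)
        for act in (relu, sigmoid):
            ys = act(np.outer(xs, w1) + b1) @ w2 + b2
            print(act.__name__, np.round(ys, 3))  # ReLU: kinked; sigmoid: smooth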

  • @TheEmrobe • 1 year ago

    Brilliant.