Why neural networks aren't neural networks

  • Published 28 Nov 2024

COMMENTS • 479

  • @kaisle8412
    @kaisle8412 3 years ago +763

    "Let's watch that animation again, since it took me so long to make"

    • @phafid
      @phafid 3 years ago +20

      as someone who is struggling with math, this is the equivalent of Picasso

    • @roseproctor3177
      @roseproctor3177 3 years ago +5

      Lol it was a great animation though

    • @tielessin
      @tielessin 3 years ago +8

      Honestly, who can't relate haha

    • @mickolesmana5899
      @mickolesmana5899 3 years ago +1

      fair enough

    • @mxmilkiib
      @mxmilkiib 3 years ago +3

      Yelped, paused, rushed to the comments to thumb up the one about that line.

  • @roygalaasen
    @roygalaasen 3 years ago +361

    First video? Off to a VERY promising start. This is just great! Hope low numbers won’t deter you from making more. (Or the amount of work.) Hope to see more from you.

    • @samsartorial
      @samsartorial  3 years ago +93

      Thanks! I don't think I'm going to do a ton more, since it took me like a week of 14-hour days to make. But I was thinking I might do a video on transformer NNs or something unrelated like reactivity in user interfaces once the semester wraps up. IDK, we'll see.

    • @roygalaasen
      @roygalaasen 3 years ago +20

      @@samsartorial I do understand and appreciate that it is a lot of work. I am glad you took the time for this one video at least! 😃

    • @piter239
      @piter239 3 years ago +10

      Is there a way of increasing the likelihood of future videos of this exquisite quality?

    • @romanemul1
      @romanemul1 3 years ago +3

      @@samsartorial Lemme tell you: no one expects you to make a video in a week or a month. Just make them slowly. Otherwise you'll end up like 3b1b. Looks like he is running out of ideas.

    • @RonWolfHowl
      @RonWolfHowl 3 years ago

      @@samsartorial Those both sound like very exciting topics 😁

  • @CharlesWeill
    @CharlesWeill 3 years ago +56

    Even after doing ML professionally for 5 years, seeing the transformations in this way taught me something new.

    • @revimfadli4666
      @revimfadli4666 1 year ago +2

      You might like reading Chris Olah's blog then

  • @patrickinternational
    @patrickinternational 3 years ago +243

    Oh jeez, this is such a great video. I love how you relate the weighting process in NNs to actual weights... brilliant. At the beginning you actually describe linear discriminant analysis as well. This is great because an NN is really just a series of transforms, and this is the best animation I have ever seen, way better than even Grant Sanderson's video on the topic. I added this to the list of all SoME1 videos that I could find.

    • @patrickinternational
      @patrickinternational 3 years ago +1

      ua-cam.com/video/MsNQtj3zVs8/v-deo.html

    • @Walkofsoul
      @Walkofsoul 3 years ago

      @@patrickinternational Thanks!

    • @vtrandal
      @vtrandal 3 years ago

      Off

    • @alejrandom6592
      @alejrandom6592 3 years ago

      Not better, but they complement each other

    • @Artaxerxes.
      @Artaxerxes. 3 years ago +1

      I don't think there's a better video than 3b1b's linear algebra series for understanding linear transformations. This video makes sense only if you've understood that already. And it's perfect to watch once you've done that. I wish he'd make more, because it was very enjoyable although it lasted only 10 min

  • @Riley.Rumble
    @Riley.Rumble 3 years ago +103

    This is a great video. I work with neural nets daily and intellectually knew everything you said in this video, but your presentation and visualizations have completely reframed the way I think about NNs. Thank you!

  • @edyt4125
    @edyt4125 3 years ago +25

    I have worked extensively with linear and nonlinear transformations of abstract geometries, and this is by far one of the best explanations of their correspondence with “neural networks”!! Great work!

  • @KeirRice
    @KeirRice 3 years ago +5

    I've been reading about neural networks for years with limited understanding. In 10mins you have given me an entirely new and easier to understand perspective. Thank you so much!
    Fantastic work!

  • @PaulScotti
    @PaulScotti 3 years ago +83

    As someone who has used neural networks in a research setting, I never even realized that neural networks are actually just a series of alternating linear and nonlinear transformations. Amazing video, hope you make more :)

    • @DeadtomGCthe2nd
      @DeadtomGCthe2nd 3 years ago +4

      How does that happen? Brilliant teaches this.

    • @Finnnicus
      @Finnnicus 3 years ago +2

      @@fuzzylogicq You're very smart 🌟

  • @kaemmili4590
    @kaemmili4590 3 years ago +37

    hey sam, it's marvelously clear, fluid and relevant, we need more of anything you find interesting
    sincerely
    -everyone

  • @Geosquare8128
    @Geosquare8128 3 years ago +51

    really great! hope you make some more videos :)

  • @paolopiaser_SystemsComposer
    @paolopiaser_SystemsComposer 3 years ago +1

    How is it possible that I watched so many different videos explaining ML without really grasping it, and now this guy makes it so clear in 9 minutes? Thank you.
    PS. Just a clarification: the use of the analogy with neurons was kinda on point at the time it was created, because they were trying to understand how living organisms were self-organising, hence also trying to create a model of the neurons and the brain.

  • @kevinknutson4596
    @kevinknutson4596 3 years ago +9

    Very well done video, definitely feels on par with 3b1b's ability to break down and explain complex phenomena.

  • @flochforster22
    @flochforster22 3 years ago +3

    I understand more about linear and non-linear transformations, why hyperbolic tangent is useful in NNs, and how NNs really work thanks to this video. Awesome work!

  • @piface3016
    @piface3016 3 years ago +6

    Hey, just wanted to let you know this is one of my favorite videos on UA-cam, I absolutely fell in love with it.
    I come from Statistics, so when I was first learning about "Neural Networks" and found out that the process of "learning" is literally just minimizing a cost function, that it has no magic going on, my thought was "So it's just MLE? It's just a Math thing?"
    This video is the best piece I've found so far in demystifying neural networks, plus it gives some insight into what's actually going on, besides the usual neuron-layer analogy. That's all!

    • @WelcomeBub
      @WelcomeBub 2 years ago

      Well, the field of AI has many more characterizing questions when choosing machine learning methods and models for a problem. For example, how do you incorporate new knowledge/data into a trained neural network? NNs aren't really fit for this job at the moment and require relearning everything with the new samples.

  • @taggosaurus
    @taggosaurus 3 years ago +6

    2:07 suggestion - probably more clarification is needed on the logistic regression and linear regression terminologies. The word 'regression' is pretty much always used for prediction, not classification. Logistic regression is a classification method, not prediction, so it's sort of a misnomer due to historical reasons, from what I've heard.
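
    A minimal sketch of that distinction (hypothetical NumPy, made-up weights and fruit features, not from the video): the "regression" part is a linear score, and the classification comes from squashing that score through a sigmoid and thresholding it.

        import numpy as np

        def sigmoid(z):
            # squash any real score into a probability in (0, 1)
            return 1.0 / (1.0 + np.exp(-z))

        # made-up weights and a made-up feature vector, e.g. [length, roundness]
        w = np.array([1.5, -2.0])
        b = 0.3
        x = np.array([0.8, 0.1])

        p = sigmoid(w @ x + b)  # the "regression": a linear score, squashed
        label = int(p > 0.5)    # the classification step: threshold the probability
        print(p, label)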

  • @xxgn
    @xxgn 3 years ago +9

    As someone who took machine learning in college, I have recollections of being surprised when we covered a bunch of techniques and every single technique boiled down to statistics, not only in how they worked but also in proving why they worked (or didn't) for different scenarios.

  • @a_name_a
    @a_name_a 3 years ago +1

    Great video. I would also mention the convex hull: ANNs cannot extrapolate, they can only interpolate within the convex hull of the training set.

    • @samsartorial
      @samsartorial  3 years ago

      I mean, they generally can't extrapolate much without additional inductive biases. But I'm not so sure about the convex hull thing: arxiv.org/abs/2101.09849

  • @janstaudacher6793
    @janstaudacher6793 3 years ago +2

    How you introduced the scale for classifying the fruits by just one number and therefore introduced logistic regression was just pure genius!

  • @rainzhao2000
    @rainzhao2000 3 years ago +4

    I come back to this video often because your animation of an NN transforming the data is just so satisfying.

  • @salmagamal5676
    @salmagamal5676 3 years ago +1

    THISSSSS!! OMG, my mind is blown to pieces. Even though I sort of already knew the information, never have I seen anyone put the pieces together like this. My man, thank you so much.

  • @jgcornell
    @jgcornell 3 years ago +5

    This is frickin amazing. I just completed a postgrad in which we were simply expected to accept that transforming data was viable. Whilst I mathematically understood it, I never intuited why - a few minutes into this video you visually slapped me in the face and showed me how it is obviously the same as transforming your boundary! I feel both foolish not to have seen it before and elated!

  • @houcemfehri155
    @houcemfehri155 3 years ago +2

    I can't believe this is your first video, given how great the quality is. Keep making more, you're amazing!!

  • @mirllewist3086
    @mirllewist3086 2 years ago

    Professional data scientist here - very well done. And a useful clarification: there's too much magical thinking out there regarding AI at the moment. Vids like this are a big help. Thanks

  • @dariuszb.9778
    @dariuszb.9778 3 years ago +1

    That's why in some languages we have a distinction between "neural" networks (built from neurons) and "neuronal" networks (built from simplistic models of neurons).

  • @TheRealJavahead
    @TheRealJavahead 3 years ago +2

    Great video. Keep them coming. This will significantly aid my ongoing “ML is just statistics” campaign. Thanks. Subscribed.

  • @JoshuaCowling
    @JoshuaCowling 3 years ago +15

    I love your explanation here. This kind of simplification and visualisation is an excellent way to break down some of the barriers around machine learning and expose more people to the processes involved, their limitations and applications. Bravo!

  • @cherubin7th
    @cherubin7th 3 years ago +1

    True. If you build neural networks yourself in low-level tools like TensorFlow or PyTorch, you basically just do matrix multiplications and matrix additions, and send the result through some nonlinear function you like.
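
    As a toy sketch of that loop (plain NumPy, made-up layer sizes, not anyone's actual code): multiply, add, squash, repeat.

        import numpy as np

        rng = np.random.default_rng(0)

        # made-up layer sizes: 2 inputs -> 4 hidden units -> 1 output
        W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
        W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

        def forward(x):
            h = np.tanh(W1 @ x + b1)  # matrix multiply, matrix add, nonlinearity
            return W2 @ h + b2        # one more linear transform

        print(forward(np.array([0.5, -1.0])))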

  • @wlmorgan
    @wlmorgan 3 years ago +2

    So uh, we actually have pretty good ideas about how neurons change in response to activity. Signals generate electrical/protein activity in neurons, which can lead to secondary signals that alter protein expression patterns, which in turn alter electrical activity for the same original signal. The trouble, of course, is that each neuron has a different initial state, though they can be classified broadly and interpreted as classes of neurons which behave in particular ways. These alterations modify the weighting of edges in your brain's network the same way true computational neural networks need to modify their edges' weighting. Learning then is simply giving training samples and rewarding good matches, which fits the scaffolding theory of learning of Vygotsky and Bruner quite well.

    • @WelcomeBub
      @WelcomeBub 2 years ago

      But for neural networks we don't really have continuous learning, only hitting the reset button and changing up the samples. Applying an NN model does not also trigger learning in the process; the result of the application is simply used to nudge the model's weightings somewhat.

    • @diadetediotedio6918
      @diadetediotedio6918 1 year ago

      Nah, we really don't have such good ideas of what neurons do or how they change exactly.

  • @Yerocregnes
    @Yerocregnes 3 years ago +1

    I only just recently realized that neural nets are just transformations between dimensions, and so much clicked into place. The idea that there exists an information space that can encode data about whether an animal is a cat or a dog was crazy to me. Keep up the good work

  • @kyguypi
    @kyguypi 3 years ago +1

    This video was great! I hope you take it as a compliment that for a second, I thought I was watching 3blue1brown. I understood activation functions coming in to this, but I still feel that I have even more clarity after your animation. Thanks!

  • @switzerland
    @switzerland 3 years ago +1

    Your video made it click in my brain the way only 3blue1brown could when explaining Fourier transforms. My mind is blown, it makes sense now. Thanks.

  • @sarthakkhanal6882
    @sarthakkhanal6882 3 years ago +2

    As a teaching assistant for a deep-learning course, I am definitely referring the students to this video. It gives a very interesting perspective on what NNs are, and why we use activation functions.

  • @stevenschilizzi4104
    @stevenschilizzi4104 2 years ago

    Brilliant exposition. Will certainly help to blow away the fog of confusion that other sources may have generated (or definitely have). Thanks for your hard work!

  • @JITCompilation
    @JITCompilation 3 years ago +1

    Holy crap. This is just what I needed. Are there any sources you recommend for going deeper into what you covered?

  • @Killadog1980
    @Killadog1980 3 years ago +1

    What a great video! And you only have a single upload! How do you make such an amazing video and explanation without previous uploads?

  • @iestynne
    @iestynne 2 years ago

    What a *fantastic* video! I've watched a large number of videos on artificial neural networks over the past few years... yet I learned such a lot from this one! Such a (shockingly) clean perspective on how these systems work.
    The choice of the examples and the clarity of the writing and animation are just superb.
    If you didn't win, it's a travesty.

  • @EnergyWell
    @EnergyWell 3 years ago +11

    This video is wonderful. I am very glad you took the time to visualize these transforms as animations. You are a great teacher!

  • @RichardAlbertMusic
    @RichardAlbertMusic 3 years ago +1

    Excellent! Especially the part „Let’s play this animation again because it took me so long“ 😂

  • @amitbar2121
    @amitbar2121 3 years ago +2

    Fantastic video. I’m extremely impressed, especially with that visualization you worked hard on. Great job, hope to see more videos from you soon!

  • @lb5928
    @lb5928 3 years ago +1

    The single best video ever made on machine/deep learning. Extremely intuitive and practical explanations and visuals. Well done.

  • @andrewglick6279
    @andrewglick6279 3 years ago +2

    This video is amazing; I'm sharing it with so many people. I've gotten so tired of seeing the nodes/edges diagrams as an explanation of neural networks--I understood that one layer influenced the next one to get to a final result, but it bothered me that I never understood *how* or *why*. Your explanation in this video is exactly what I was looking for. Thank you!

  • @michealhall7776
    @michealhall7776 3 years ago +1

    This is one of the best videos I have seen on the topic, please make more

  • @picumtg5631
    @picumtg5631 3 years ago +3

    Even though I do not know how linear transformations work (but this inspired me to learn soon) and only know the relevant calculus of machine learning, this is really mindblowing. I see how much thought was put in there, and I thank you for your time and wish you a great life

  • @kaiserouo
    @kaiserouo 3 years ago

    I used to think of an NN as a big nonlinear machine with so many parameters that it pumps the VC dimension high enough to seem to do any task well, while being easy enough to optimize the loss for.
    Understanding it as an iterative process of linear & nonlinear transformations tells a better story about NNs. I think it is also like PLAs and feature transformations, but iterating between those 2 a bunch of times. That helps the network reach a much more complex decision boundary, but instead of 1 very complex feature transformation (which may be hard to optimize or even think of), they introduce a much simpler and more general model by combining linear & nonlinear transformations, which can reach any complexity we want by simply adding layers.

  • @finnaginfrost6297
    @finnaginfrost6297 3 years ago +3

    Absolutely incredible. I also can't think of any berries with those growth patterns. I love how obvious it became that projecting into a higher dimension was important.

  • @Bluedragon2513
    @Bluedragon2513 3 years ago +1

    Great visualization; I had to stop what I was doing and watch because I realized it made so much more sense now

  • @iwasjason
    @iwasjason 3 years ago +1

    Fantastic video! I love the 3b1b-esque elegance, emphasis on visual intuition, and the build-up to a worthwhile nugget of insight (neural nets aren't magic, just iterated linear and non-linear transformations). Looking forward to your future videos!

  • @tielessin
    @tielessin 3 years ago +5

    Amazing Video! I started studying AI recently and this has given me a new perspective on the topic.

  • @Artifactorfiction
    @Artifactorfiction 3 years ago +2

    This was superb - at some level this demystified for me how the hidden layers do the work - at least when the dimensions are small - just wonderful clarity in this video

  • @LinesThatConnect
    @LinesThatConnect 3 years ago +8

    Thanks for this, it demystified the concept quite a bit for me!

  • @hcv1648
    @hcv1648 3 years ago +3

    You changed the way I see neural nets. Amazed.. 🙏 I'll be waiting for more videos

  • @multimolti
    @multimolti 3 years ago +1

    This is by far the best and most concise introduction to Machine Learning, Deep Learning and "AI" that I've ever seen. Great job!

  • @andrewfriedrichs9340
    @andrewfriedrichs9340 3 years ago +2

    This example is very cool. With enough transformations you can separate out almost any data set.

  • @EssentialsOfMath
    @EssentialsOfMath 3 years ago

    @3:20 on a very technical/mathematical note, translations are not linear transformations. A linear transformation has to preserve the form ax+y, in the sense that T(ax+y) = aT(x) + T(y). Translations T(x) = x+h fail this because T(ax+y) = ax+y+h while aT(x) + T(y) = ax + y + h + ah.

    • @samsartorial
      @samsartorial  3 years ago +3

      Yeah in retrospect I should have put a little note on screen clarifying that the transformations are actually affine. I don't see the distinction made for neural networks very often, because it is always theoretically possible to emulate affine transformations by linearly transforming through 1 additional dimension (making some assumptions about the input space). Giving your neural network biases to play with in addition to weights just makes training easier.
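
      For the curious, that trick looks something like this toy NumPy sketch (made-up numbers, not from the video): append a constant 1 to the input, and the translation becomes one more column of a purely linear map.

          import numpy as np

          A = np.array([[2.0, 0.0],
                        [0.0, 3.0]])   # a genuinely linear map
          h = np.array([1.0, -1.0])    # a translation (the "bias")

          x = np.array([4.0, 5.0])

          # the affine map applied directly
          affine = A @ x + h

          # the same map as a purely linear one acting on [x; 1]
          M = np.block([[A, h[:, None]],
                        [np.zeros((1, 2)), np.ones((1, 1))]])
          linear = (M @ np.append(x, 1.0))[:2]

          print(affine, linear)  # both print [ 9. 14.]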

  • @jamesdunbar2386
    @jamesdunbar2386 3 years ago +1

    Extremely well done video. I hope you make more! I've taken some classes in Machine Learning and use some simple NNs at work so I have some familiarity with the mechanics, but it's nice to see the concepts so neatly spelled out! Very illuminating.

  • @incredulouschordate
    @incredulouschordate 1 year ago

    This is seriously the best NN video I've seen, and yes that's after watching 3B1B's series

  • @CaptainSpaceCat17
    @CaptainSpaceCat17 3 years ago +2

    I study AI, and no class has ever taught it to me from the same angle. Really insightful video, and I appreciate the visual explanation of the transformations.
    I'm wondering, how are you rendering your animations? They look very slick. I feel like I've seen a similar style before, maybe from 3blue1brown? Are you using some specific piece of software?

    • @samsartorial
      @samsartorial  3 years ago +1

      Glad you like the video! I used a piece of software called manim which 3b1b created.
      You can see the source code for my animations here: gitlab.com/samsartor/nn_vis

  • @heynowyouarearock
    @heynowyouarearock 3 years ago +2

    This is the most intuitive video I have ever seen.

  • @Cybermage10
    @Cybermage10 3 years ago +1

    Hey Sam! Great to see you're doing well, hi from the old Mines LUG crew!

  • @fabianbleile9467
    @fabianbleile9467 3 years ago +2

    hey sam, superb video! I am studying mathematics and I scratch the surface of the neural network thing from time to time. The block from 5:53 to 6:16 made it super clear to me what neural networks actually are, and it's the first real definition I've seen. It's demystifying and clarifying. I am definitely gonna build upon this concept! huge thanks :)

  • @19vangogh94
    @19vangogh94 3 years ago +1

    The video was awesome! Sam, you have a gift for this. I can totally see your videos teaching millions in a few years

  • @silverfishers
    @silverfishers 3 years ago

    Great video man! I like the explanation of the backstory of the name. It helps to give the whole concept some context

  • @yaiirable
    @yaiirable 3 years ago

    That music is really amping up the tension

  • @1996Pinocchio
    @1996Pinocchio 3 years ago +1

    Great video, I especially liked the transitions from one topic to the next, and the animations.

  • @TallSchmuck
    @TallSchmuck 3 years ago +3

    Unbelievably amazing video. I thoroughly enjoyed it and learned to look at some parts of AI in a different way. I seriously hope you continue making videos! Great work!

  • @ming3706
    @ming3706 1 year ago +1

    I can't believe my high school math is this important

    • @dabidmydarling5398
      @dabidmydarling5398 1 year ago

      I thought the same thing as well! I love ML and statistics!! :)

  • @Krasnoye158
    @Krasnoye158 3 years ago +1

    This is really cool content. I wonder which class you took in college, or which book you read, that covered this subject? I'm interested. Please reply.

  • @Sydra.
    @Sydra. 2 years ago +1

    You made the world a better place with this video!

  • @osten222312
    @osten222312 3 years ago +1

    Great work! Hope to read more from you

  • @GoriIIaTactics
    @GoriIIaTactics 3 years ago

    Very cool explanation and animation. I'm also just very alarmed at how many people in the comments have "used" neural networks and don't understand this already

  • @sgaseretto
    @sgaseretto 3 years ago +2

    Damn, I really loved this video! You summarized everything I always say when explaining to people what artificial neural networks actually are, how they are really different from biological neural networks, and why I don't like the metaphor of "neural networks" to describe them - but with great animations! Now I have this reference video to share when someone else brings up the topic. Thanks for this really great video!

  • @jakob3267
    @jakob3267 3 years ago +1

    This is one of the best videos I have seen in a while. I really hope you will make more 🙏

  • @raghavendranimiwal9264
    @raghavendranimiwal9264 3 years ago +1

    Great stuff. More power to you. Looking forward to more such amazing videos.

  • @Speed001
    @Speed001 3 years ago +4

    This is amazing. I don't know anything about neural networks, but these are concepts I can understand, at least partially.

  • @mms7146
    @mms7146 1 year ago

    I loved the video!
    Just one question though: why not simply use a GLM instead of doing all that complicated data transformation?

  • @ZubairKhan-sp8vb
    @ZubairKhan-sp8vb 1 year ago

    Just awesome!!! I have been thinking about neural networks this way, and about how the key word has stuck to this statistical process. You illustrated it beautifully - just amazing!!

  • @Bokbind
    @Bokbind 3 years ago +1

    Wow, this is incredibly well made! I'm going to subscribe in the hopes you make more like it!

  • @jjcadman
    @jjcadman 3 years ago +1

    Fantastic explanations and visualizations.
    Thanks for all the time & effort you put into this.

  • @k2c027
    @k2c027 3 years ago +1

    Such a great video Sam. Thanks for making this 🙏🏻

  • @itsrachelfish
    @itsrachelfish 3 years ago

    How do you only have 494 subscribers!!?! Amazing content

  • @raghebalghezi9532
    @raghebalghezi9532 3 years ago +2

    Nice work! How feasible do you think the same weight visualization would be for an RNN?

    • @samsartorial
      @samsartorial  3 years ago +1

      Sorta. If you had an RNN that transformed data through 3 dimensions only, it would work great! We would see some initial points get shuffled around, check the decision boundary, then shuffle again with the same transforms as before, check the decision boundary, and shuffle again, etc. But it's really hard to come up with any meaningful RNN which only uses 3 dimensions; most use hundreds at least.
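
      A toy version of that "same transforms, over and over" loop (hypothetical NumPy, 3 hidden dimensions, made-up weights):

          import numpy as np

          rng = np.random.default_rng(1)
          W = rng.normal(size=(3, 3))  # the same transform, reused every step
          U = rng.normal(size=(3, 2))  # maps each new input into the 3-dim state
          h = np.zeros(3)

          for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
              h = np.tanh(W @ h + U @ x)  # shuffle, squash, repeat
          print(h)  # the state we'd check a decision boundary against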

  • @HighlyShifty
    @HighlyShifty 3 years ago +1

    Fantastic video, it may have taken you a lot of time but it was absolutely worth it! Subscribed in case you do decide to make more.

  • @MrPolluxxxx
    @MrPolluxxxx 3 years ago +14

    Interestingly enough, biological neurons do somewhat work like that. A neuron performs a weighted sum of incoming signals (linear) and then produces an action potential if the summated signal is bigger than some threshold (nonlinear). So the analogy with neural networks isn't so bad.
    I'm of course simplifying; there are multiple kinds of neurons that all work differently.
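
    That caricature is essentially the classic McCulloch-Pitts / perceptron unit. A toy sketch with made-up numbers:

        import numpy as np

        weights = np.array([0.7, -0.2, 0.4])  # made-up synaptic weights
        threshold = 0.5

        def neuron(inputs):
            # weighted sum of incoming signals (linear),
            # fire only if it exceeds the threshold (nonlinear)
            return 1 if weights @ inputs > threshold else 0

        print(neuron(np.array([1.0, 0.0, 1.0])))  # 0.7 + 0.4 > 0.5, so it fires: 1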

    • @samsartorial
      @samsartorial  3 years ago +15

      I think the best summary of the differences I have found is towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7, but in short: although individual neurons could be simplified to a linear+nonlinear system, systems of neurons put together are used very differently by the brain vs in modern statistical models. There has been a lot of research trying to bridge the gap between the two (e.g. Lillicrap, T.P., Santoro, A., Marris, L. et al. Backpropagation and the brain) but fundamentally:
      - nature likes to make neurons very internally complex because cells are expensive and you want the most computational bang-for-your-buck
      - humans like to make artificial neurons very internally simple because then our computer hardware can churn through enormous numbers of them easily

  • @chadx8269
    @chadx8269 2 years ago +1

    Thanks for debunking the "Neural Network".

  • @gregoryg8902
    @gregoryg8902 3 years ago +1

    Wow. Well done. I'm currently studying machine learning and this is a great introduction to many concepts at a high level.

  • @audunlarssonkleveland4789
    @audunlarssonkleveland4789 3 years ago

    I'm hoping for more videos; the visuals are great and clean.

  • @IllyNexus
    @IllyNexus 3 years ago +1

    Thank you for this amazing piece of knowledge, Sam Sartor. Subscribed - maybe you'll devote some time in the future to other projects of this type. See ya!

  • @marverickbin
    @marverickbin 3 years ago

    You gave me an idea.
    Try to make a machine learning system that takes an image of a plot of points (like your example) and draws a line that separates the 2 groups.
    Yeah, it seems like killing a bee with a bazooka, but the point is to simulate how we do it: we do not have the vector list of points, we process the image in our brains and give a rough estimate. We (usually) do not transform the space or move into higher dimensions; curves seem to come naturally (I think). We come up with the solution using the same model as the generic image recognition/classification we use for everything we see.

  • @nembobuldrini
    @nembobuldrini 2 years ago +1

    Like n. 7001! ;) Brilliant! The best visual explanation of NNs I have encountered so far. The visuals are extremely helpful in getting the gist of what a feed-forward neural network does.
    It's important to point out - and this would have spiced up the ending of the video too ;) - that there are other types of neural networks that are more similar to how the brain works. Hebbian learning and recursion are involved in these other types of neural networks, for which a simplification in the terms used in the video would not be quite so straightforward. It would actually be great to see a follow-up video on these kinds of NNs!

    • @diadetediotedio6918
      @diadetediotedio6918 1 year ago

      They are also extreme simplifications of the processes that occur in a brain, and fundamentally they end up being used as if they were the current "artificial neural networks" when they depend on statistical methods such as backpropagation.

  • @ApplepieFTW
    @ApplepieFTW 3 years ago +1

    Wow, awesome quality and very clearly explained

  • @adityams1659
    @adityams1659 3 years ago

    *I SEE YOU HAVE MADE THESE FOR 3B1B's SUMMER OF MATH*
    I suggest you start making videos!
    This video was out of this world!

  • @code-cave
    @code-cave 3 years ago

    Cool video. I wish the linear transformations were written out, though! Like at 4:28 - this doesn't look like just a rotation about the origin; it's some combination of rotation and shear

  • @tititiwon
    @tititiwon 3 years ago

    Hi, good video, good animations, I learnt something. But I don't think an NN works like that. I don't think the predictions look like that while training. Is this more of a justification of why they can learn to classify or process data?

  • @saminchowdhury7995
    @saminchowdhury7995 3 years ago

    So that's why deeper networks work better for solving more complex problems.
    I always read that deeper layers "help the network extract more features", whatever that means
    Thanks for the video

  • @andreheynes4646
    @andreheynes4646 3 years ago +1

    This is beautiful. I understand these things better now. Thank you.

  • @archenemy49
    @archenemy49 3 years ago +1

    This is the most beautiful video I have seen in several years. Thank you so much for sharing this enormous observation and perception! Thank you so much!!! ❤️

  • @pacukluka
    @pacukluka 1 year ago

    Amazing ! Please keep it up, we have so much yet to learn.

  • @ativjoshi1049
    @ativjoshi1049 3 years ago

    Awesome video. Can you provide links to any references/further reading material?

  • @konstantint1588
    @konstantint1588 3 years ago +1

    That was absolutely fascinating and amazingly well done!
    I'd pay for a course about this!

  • @DoYouHaveAName1
    @DoYouHaveAName1 6 months ago

    AMAZING! The animations and explanation are perfect!
    Thank you for your hard work