The Universal Approximation Theorem for neural networks

  • Published 1 Nov 2017
  • For an introduction to artificial neural networks, see Chapter 1 of my
    free online book: neuralnetworksanddeeplearning....
    A good series of videos on neural networks is by 3Blue1Brown. Start
    here: • But what is a neural n...
    This video just shows the (very simple!) basic idea of the proof. For the full proof of the universal approximation theorem, including caveats that didn't make it into this video, see Chapter 4 of my book:
    neuralnetworksanddeeplearning....
    This video was made as part of a larger project on media for mathematics: / magic_paper
  • Science & Technology

COMMENTS • 75

  • @humzahkhan6299
    @humzahkhan6299 1 year ago +14

    Everyone wants to talk about the expressive power of neural networks, but I want to talk about Michael Nielsen's expressive power to make me finally understand expressive power so powerfully and expressively.

  • @NoahElRhandour
    @NoahElRhandour 1 year ago +8

    Man, I would give an arm and a leg to know what amazing software he uses in this vid... bet he coded it himself, the madman.

    • @NemexiaM
      @NemexiaM 2 months ago +1

      That's amazing! If it's available, please please tell me what it is.

  • @JulietNovember9
    @JulietNovember9 4 years ago +3

    Wow... just, wow! You explained in a little over 6 minutes what I've spent hours trying to understand while going through different textbooks. Thank you!

  • @quintenone9107
    @quintenone9107 6 years ago +1

    Your last video was 3 years ago, and the very moment I checked for a new video so I could update a project file, you uploaded!

  • @medhavimonish41
    @medhavimonish41 3 years ago +2

    Didn't know you had a channel. I started ANNs with books, but your five-chapter online book was extremely useful. Thanks for writing that book 👍

  • @MysuruBharath
    @MysuruBharath 5 years ago +11

    One of the most intuitive explanations of the approximation theorem; visually this makes it much more accessible.
    Can the rectangular blocks be thought of as the blocks in a Riemann summation? The more you increase the blocks, the better the approximation.
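
The Riemann-sum analogy above can be sketched numerically: subtracting two steep sigmoids gives a "tower" (a rectangle-like bump), and a weighted sum of towers approximates the target, just like Riemann rectangles. A minimal illustration (the target function, bin count, and steepness here are arbitrary choices, not from the video):

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings in exp for very steep towers.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def tower(x, left, right, steepness=1000.0):
    # Two steep sigmoids subtracted give a bump of height ~1 on [left, right].
    return sigmoid(steepness * (x - left)) - sigmoid(steepness * (x - right))

def approximate(f, x, n_towers=50):
    # Partition [0, 1] into n_towers bins and weight each tower by f at the
    # bin midpoint -- exactly the rectangles of a midpoint Riemann sum.
    edges = np.linspace(0.0, 1.0, n_towers + 1)
    out = np.zeros_like(x)
    for left, right in zip(edges[:-1], edges[1:]):
        out += f((left + right) / 2) * tower(x, left, right)
    return out

x = np.linspace(0.0, 1.0, 1000)
f = lambda t: np.sin(2 * np.pi * t)
err = np.max(np.abs(approximate(f, x) - f(x)))  # shrinks as n_towers grows
```

Refining the partition (increasing n_towers) shrinks the worst-case error, mirroring how finer Riemann partitions improve the sum.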

  • @adosar7261
    @adosar7261 1 year ago +4

    Very nice explanation! Can you make a video on why increasing the number of hidden layers approximates a function more efficiently than increasing the number of neurons in only one hidden layer?

  • @xxEndermelonxx
    @xxEndermelonxx 5 years ago +2

    one of the best explanations I've found so far!

  • @ahmedelsabagh6990
    @ahmedelsabagh6990 2 years ago +19

    I can't express how much I'm impressed by this short amazing video!
    Could you please tell us which software you used for these graphs/drawings?

  • @anibus1106
    @anibus1106 11 months ago

    Huge thank you for the clear and to the point explanation.

  • @rkgaddipati
    @rkgaddipati 6 months ago

    Thank you for this; this is a beautiful, simple explanation for building intuition for the universal function approximator. Could you please do a follow-on explainer detailing the caveats?

  • @rickmonarch4552
    @rickmonarch4552 5 years ago +6

    OMG THE BEST EXPLANATION EVER THAT MAKES SENSE :O THANK YOU

  • @saminchowdhury7995
    @saminchowdhury7995 4 years ago +6

    Thank you for the great video.
    Can I know what tool you're using to visualize the neural network?

  • @tiangolo
    @tiangolo 6 years ago +3

    I love it! Awesome explanation. The interactive and intuitive magic paper makes a great difference.

  • @JwalinBhatt
    @JwalinBhatt 2 years ago +1

    Very nice explanation!

  • @sushmitajadhav7133
    @sushmitajadhav7133 3 years ago

    AMAZING EXPLANATION! Thank you tons!

  • @kerem3715
    @kerem3715 8 months ago

    Thank you for this beautiful explanation. I realized that I knew nothing about neural network mathematics.

  •  4 years ago +1

    This is great please do more

  • @danielhuynh6907
    @danielhuynh6907 3 years ago

    Great explanation!

  • @raghavendra2096
    @raghavendra2096 3 years ago +1

    Just exactly what I wanted!!!! Thanks so much Michael :)

  • @vivekKumar-qx2tl
    @vivekKumar-qx2tl 3 years ago +1

    Nice explanation 👏👏

  • @swaralipibose9731
    @swaralipibose9731 3 years ago

    Best video on YouTube

  • @mikasomk34
    @mikasomk34 4 years ago +2

    Dear Michael Nielsen, nice video!! I am wondering about the app you were using in the video to make the graphics; would you mind telling us its name?

  • @omar24191
    @omar24191 3 years ago

    Hey Michael ... thanks for the simple explanation! One more thing ... how to use your awesome Magic Paper program?!?! Thanks again

  • @livb4139
    @livb4139 1 month ago

    I remember seeing this 6 years ago and loving your explanation. What are you up to lately if you don't mind me asking?

  • @CriticalPhemomenon
    @CriticalPhemomenon 6 years ago +2

    Good stuff..!

  • @paulcurry8383
    @paulcurry8383 3 years ago

    Great video. Question, though: what is going on with the artificial neuron? In all my research I've seen it use a Heaviside step activation function, but this looks like it is using a smooth sigmoid activation or something?

  • @nathanwailes
    @nathanwailes 6 years ago +2

    Very, very cool.

  • @coolmanlulu
    @coolmanlulu 5 years ago +1

    great job

  • @muyigan1569
    @muyigan1569 2 years ago +1

    Hope my professors can make things as simple to understand as you do!

  • @omerraw
    @omerraw 9 days ago

    Intuitive and simple!

  • @Strausse12
    @Strausse12 3 years ago +1

    Thank you!

  • @matheushernandes4212
    @matheushernandes4212 6 years ago +1

    Does it work for Recurrent Networks?

  • @nano7586
    @nano7586 5 years ago

    Hey there, I just managed to install Chalktalk and I was wondering if you would send me your template? I'm having a presentation about ANNs soon and I would be really thankful to have an illustration like yours for an introduction. I would of course give you credits for it. Best regards!
    (Btw.: my topic is "Radial Basis Activation Functions", so I would make sure to use them instead of the sigmoidal type)

  • @curiositytv7424
    @curiositytv7424 3 years ago +1

    Sir, please start classes teaching more about quantum computing... And also, a source for solutions to the problems in your book 😊

  • @lenag3329
    @lenag3329 2 years ago +1

    What is the software you duplicate neurons in?

  • @hellotoearth
    @hellotoearth 5 years ago +1

    Isn't the proper terminology for a "tower" function that a sigmoid can 'collapse' into a unit step or 'Heaviside' function?
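
The collapse mentioned here can be checked numerically: as the weight w grows, the sigmoid σ(w(x − c)) approaches a unit step (Heaviside function) at c. A quick sketch (the grid, the threshold c, and the weights are arbitrary choices for illustration):

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings in exp for very large weights.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

x = np.linspace(0.0, 1.0, 501)
c = 0.5
step = (x >= c).astype(float)   # Heaviside step at c
mask = np.abs(x - c) > 0.01     # compare away from the jump itself

# Larger weights squeeze the sigmoid into an ever sharper step at x = c.
gaps = [np.max(np.abs(sigmoid(w * (x - c)) - step)[mask])
        for w in (10, 100, 1000)]
```

The gaps shrink monotonically toward zero, matching the w → ∞ limit in which a pair of such steps forms a "tower".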

  • @loveplay1983
    @loveplay1983 1 month ago

    What is the software you were using in the lecture? It seems amazing.

  • @ImaginaryMdA
    @ImaginaryMdA 6 years ago +2

    Thank you, this was very clear!

  • @carolinefbandeira4493
    @carolinefbandeira4493 8 months ago

    slayed, thank you so much!!

  • @Anujkumar-my1wi
    @Anujkumar-my1wi 3 years ago +1

    Wow! Can you tell me one thing: why does increasing the number of neurons increase the accuracy of the approximation?

  • @user-gu2fh4nr7h
    @user-gu2fh4nr7h 2 months ago

    What GUI are you using for the neat squares and circles and stuff? It could be useful, if the code is available, for making ODE compartment models.

  • @lucavisconti1872
    @lucavisconti1872 5 years ago +1

    Clear explanation, thanks.
    I don't know the function that has to be approximated, but I have a data set of input-output pairs, say [x, f(x)]. Using a trained NN I can find the best weights to approximate the unknown f(x), minimizing the sum of squared errors as much as possible... but then, if I need to use the trained NN to find the output for a new input, what should I do? Does a simple numerical example exist showing the full process? Thanks for your clarification

    • @RambutanLaw
      @RambutanLaw 1 year ago

      The end result of the trained NN can be stored as matrices, a Python pickle file, or an R object. When you want to get a prediction for a new input, just pass the data through the NN.
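
A minimal sketch of what that reply describes, assuming a single-hidden-layer network whose trained weights are simply held as NumPy matrices (the names, layer sizes, and random stand-in weights are illustrative, not from any particular library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, W1, b1, W2, b2):
    # Forward pass: the "trained" network is nothing but its stored parameters.
    hidden = sigmoid(W1 @ x + b1)   # hidden layer activations
    return W2 @ hidden + b2         # linear output neuron

# Stand-ins for parameters a training run would have produced (2 inputs,
# 4 hidden neurons, 1 output); in practice you would load these from disk.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

new_input = np.array([0.3, 0.7])
y = predict(new_input, W1, b1, W2, b2)  # prediction for an unseen input
```

Training only determines the weight matrices; prediction on new data is just this forward pass with the weights frozen.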

  • @sansin-dev
    @sansin-dev 3 years ago

    Can anyone tell me what software he is using?

  • @C4rb0neum
    @C4rb0neum 5 years ago +1

    Great ideas about math and how it could be more dynamic. For pedagogic purposes I agree. For "production" and cooperation I do not (yet). As a CS person I think more fields should use Git to get all the benefits that come with it. I don't think good tools exist yet for collaborating on videos (collaborating on code which generates video seems an even greater mental burden than normal math).

  • @thingsfromspace
    @thingsfromspace 6 years ago +36

    Awesome! What program are you using during this?

    • @MichaelNielsen
      @MichaelNielsen  6 years ago +35

      It's described here: cognitivemedium.com/magic_paper/

    • @thingsfromspace
      @thingsfromspace 6 years ago +1

      Michael Nielsen thanks!

    • @Papayalexius
      @Papayalexius 5 years ago

      That Magic Paper is impressive

    •  4 years ago

      @@MichaelNielsen amazing

    • @iriss6143
      @iriss6143 4 years ago

      @@MichaelNielsen Thank you so much for this video, so well explained!
      The app blew my mind as well. Is it available for download at all?

  • @minsookim7402
    @minsookim7402 6 years ago

    I love your voice

  • @usama57926
    @usama57926 2 years ago

    What tool is this.... that is amazing

  • @knowledgeclub4441
    @knowledgeclub4441 7 months ago

    How to implement this in Simulink?

  • @christianorlandosilvaforer3451
    @christianorlandosilvaforer3451 2 years ago +1

    what is a linear neuron?

  • @huseyngorbani6544
    @huseyngorbani6544 1 year ago

    What app are you using for visualisation?

  • @loveandroid62
    @loveandroid62 2 months ago

    What is the software you use to draw called?

  • @TPLCreationLoft
    @TPLCreationLoft 5 months ago

    What's the software/program used for this? Thank you for the great video.

    • @user-bb4cv2ho9i
      @user-bb4cv2ho9i 4 months ago

      I have the same little question. The tool used in this video is absolutely going to change online classes.

  • @timberwolf4242
    @timberwolf4242 26 days ago

    One of the greatest mathematicians and one of the most gifted teachers of modern times! Huge props!

  • @TheSmkngun
    @TheSmkngun 6 years ago +1

    Very cool demonstration.
    But isn't this basically overfitting with N free parameters?
    N is here: en.wikipedia.org/wiki/Universal_approximation_theorem

  • @MikeSieko17
    @MikeSieko17 8 months ago

    what program is that?

  • @usama57926
    @usama57926 2 years ago

    That is crazy.. and beautiful... love you

  • @demetriusdemarcusbartholom8063

    ECE 449

  • @SreeramAjay
    @SreeramAjay 6 years ago

    👏👏👏

  • @leemosbacker276
    @leemosbacker276 3 years ago

    This is backwards. The UAT is a polynomial theorem, and the NN has been shown to be capable of incorporating that theorem.

  • @AleksandrSerov-rn2cn
    @AleksandrSerov-rn2cn 5 years ago +4

    This guy is teaching you with closed eyes

  • @user-bw7cb1bs6q
    @user-bw7cb1bs6q 2 months ago

    Why are his eyes closed?

  • @raihanmomtaz7652
    @raihanmomtaz7652 4 years ago

    cOOOOOOOOOOOOOOOOOOOOOOOOL !!!!!!!!!!!!!!!!!!

  • @bismeetsingh352
    @bismeetsingh352 4 years ago

    Does anyone have a link/reference to a better explanation?

  • @shinn-tyanwu4155
    @shinn-tyanwu4155 10 months ago

    You are genius 😊😊

  • @ste3191
    @ste3191 1 year ago

    It's not a theorem, it's a model.

  • @Nachiketa_TheCutiePie
    @Nachiketa_TheCutiePie 3 years ago

    video is like a sleeping pill to me