Activation Functions - EXPLAINED!

  • Published 3 Oct 2024
  • We start with the whats/whys/hows. Then delve into details (math) with examples.
    Follow me on Medium: towardsdatasci...
    REFERENCES
    [1] Amazing discussion on the "dying relu problem": www.quora.com/...
    [2] Saturating functions that "squeeze" inputs: stats.stackexc...
    [3] Plot math functions beautifully with desmos: www.desmos.com/
    [4] The paper on Exponential Linear units (ELU): arxiv.org/abs/...
    [5] Relatively new activation function (swish): arxiv.org/pdf/...
    [6] Image of activation functions used from Pawan Jain's blog: towardsdatasci...
    [7] Why bias in Neural Networks? stackoverflow....

COMMENTS • 157

  • @UdemmyUdemmy · 1 year ago +73

    The screeching noise is irritating... else, a nice tutorial.

  • @desalefentaw8658 · 4 years ago +40

    Wow, one of the best overviews of activation functions on the internet. Thank you for making this video.

  • @GauravSharma-ui4yd · 4 years ago +29

    Awesome as always. Some points to ponder; correct me if I am wrong:
    1. ReLU is not just an activation but can also be thought of as a self-regularizer, since it switches off all those neurons whose values are negative, so it's a kind of automatic dropout.
    2. A neural net with just an input and output layer, with softmax at the output layer, is logistic regression; but when we add hidden layers to this network with no hidden activations, it's more powerful than plain vanilla logistic regression in that it now takes linear combinations of linear combinations with different weight settings. But it still results in linear boundaries.
    Lastly, your contributions to the community are very valuable and clear up a lot of nitty-gritty details in a short time. Keep going like this :)

    • @generichuman_ · 2 years ago +5

      No, dropout is different. Random sets of neurons are turned off in order to cause the neurons to form redundancies, which can make the model more robust. In the case of dying ReLU, the same neurons are always dead, making them useless. Dropout is desirable and deliberate; dying ReLU is not.
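
      To make the contrast concrete, here is a minimal NumPy sketch (added for illustration, with made-up weights, not from the thread): dropout draws a fresh random mask every forward pass, while a "dead" ReLU unit whose pre-activation is always negative outputs zero for every input and therefore receives no gradient.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=(5, 4))            # small batch: 5 inputs, 4 features

        # Dropout: a different random mask on every forward pass (training only).
        h = np.maximum(0, x)                   # ReLU activations
        mask = rng.random(h.shape) > 0.5       # each unit dropped with p = 0.5
        h_dropout = h * mask / 0.5             # inverted dropout keeps the expected scale
        print(h_dropout[0])                    # some units zeroed this pass, others next pass

        # Dying ReLU: a unit whose pre-activation is always negative is always zero,
        # so its gradient is always zero and its weights never update again.
        w_dead = np.array([-5.0, -5.0, -5.0, -5.0])   # hypothetical weights pushed far negative
        pre = x @ w_dead - 100.0                      # large negative bias
        print(np.maximum(0, pre))                     # [0. 0. 0. 0. 0.] -> no learning signal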

  • @SkittlesWrap · 5 months ago +1

    Straight to the point. Nice and super clean explanation for non-linear activation functions. Thanks!

  • @jhondavidson2049 · 3 years ago +5

    I'm learning deep learning right now, using the deep learning book published by MIT Press. That's kind of complicated for me to understand, especially these parts, because I'm still an undergrad and have zero previous experience with this. Thank you for explaining this so well.

  • @PritishMishra · 3 years ago +7

    The thing I love most about your videos is the fun you add... learning becomes a bit easier.

  • @PrymeOrigin · 9 months ago

    One of the best explanations I've come across.

  • @the-tankeur1982 · 5 months ago +4

    I hate you for making those noises. I want to learn; comedy is something I would pass on.

  • @shivendunsahi · 4 years ago +5

    I discovered your page just yesterday and might I say, YOU'RE AWESOME! Thanks for such good content bro.

    • @CodeEmporium · 4 years ago +3

      Thanks homie! Will dish out more soon!

  • @fahadmehfooz6970 · 2 years ago +1

    Amazing! Finally I am able to visualise vanishing gradients and the dying ReLU.

  • @otabeknajimov9697 · 1 year ago

    The best explanation of activation functions I have ever seen.

  • @rishabhmishra279 · 2 years ago +2

    Great explanation! And the animations with the math formulas and visualizations are awesome!! Many thanks!

  • @linuxbrad · 1 year ago

    7:48 "once it hits zero the neuron becomes useless and there is no learning" this explains so much, thank you!

  • @deepakkota6672 · 4 years ago +8

    Wooo, did I just notice the complex explained simply? Thanks! Looking forward to more videos.

  • @oheldad · 4 years ago +3

    Great video! And what is even better are the useful references you add in the description. (For me, [1] + [7] answered the questions I asked myself at the end of your video, so it was on point!) Thank you!

    • @CodeEmporium · 4 years ago

      Haha. Glad the references are useful! :)

  • @deepaksingh9318 · 4 years ago

    Wow... the perfect and easiest way to explain it.
    Everyone talks about what activations do, but nobody shows how it actually looks behind the algorithms.
    And you explain things in the easiest way, so they are simple to understand and remember.
    So a big like for all your videos.
    Could you make more and more on DL? 😄

    • @CodeEmporium · 3 years ago

      Thank you. I'm always thinking of more content :)

  • @kanehooper00 · 5 months ago

    Excellent job. There is way too much "mysticism" around neural networks. This shows clearly that for a classification problem all the neural net is doing is creating a boundary function. Of course it gets complicated in multiple dimensions. But your explanations and use of graphs are excellent.

  • @adrianharo6586 · 3 years ago +6

    Great video!
    The disappointed gestures were a bit too much x'D
    A question I did have as a beginner: what does it mean for a sigmoid gradient to "squeeze" values, as in they become smaller and smaller as they backpropagate?

    • @AnkityadavGrowConscious · 3 years ago

      It means that sigmoid function will always output a value between 0 and 1 regardless of any real number input. Notice the mathematical formula and graph of a sigmoid function for better clarity. Any real number will be converted to a number between 0 and 1. Hence sigmoid is said to "squeeze" values.
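
      A small sketch (my own illustration, not from the video) showing both halves of that "squeeze": the output is pinned to (0, 1), and the local gradient sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)) is at most 0.25, so backprop multiplies one such factor per layer and the gradient shrinks toward zero.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        z = np.array([-100.0, -5.0, 0.0, 5.0, 100.0])
        print(sigmoid(z))                     # every output lands strictly between 0 and 1

        grad = sigmoid(z) * (1 - sigmoid(z))  # local gradient of the sigmoid
        print(grad.max())                     # 0.25 -- the largest it can ever be (at z = 0)

        # With, say, 10 stacked sigmoid layers, the gradient reaching the first layer
        # is at best 0.25**10 ~ 1e-6: the "vanishing gradient".
        print(0.25 ** 10)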

  • @mikewang8368 · 4 years ago

    Better than most professors, thanks for the great video.

  • @SeloniSinha · 3 months ago

    wonderful explanation!!!

  • @dazzykin · 4 years ago +4

    Can you cover tanh activation? (Thanks for making this one so good!)

    • @CodeEmporium · 4 years ago +5

      I wonder if there is enough support that warrants a video on just tanh. Will look into it though! And thanks for the compliments :)

  • @malekaburaddaha5910 · 3 years ago +1

    Thank you very much for the great and smooth explanation. This was really perfect.

    • @CodeEmporium · 3 years ago

      Much appreciated Malek! Thanks for watching!

  • @rasikannanl3476 · 4 months ago

    great .. so many thanks ... need more explanation

  • @nguyenngocly1484 · 3 years ago

    With ReLU, f(x) = x is "connect" and f(x) = 0 is "disconnect". A ReLU net is a switched system of dot products, if that means anything to you.
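
    That "switched system" view can be checked directly: once the on/off pattern of the ReLUs is fixed, the whole net collapses to a single affine map. A toy sketch with made-up weights (not from the video):

      import numpy as np

      rng = np.random.default_rng(1)
      W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)   # hypothetical 2-in, 3-hidden net
      W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)   # single output unit

      x = np.array([0.7, -1.2])
      h_pre = W1 @ x + b1
      s = (h_pre > 0).astype(float)          # the on/off "switch" pattern chosen by this input
      y = W2 @ (s * h_pre) + b2              # ReLU forward pass written via the switches

      # For every input with the same switch pattern, the net is this single affine map:
      W_eff = W2 @ (np.diag(s) @ W1)         # effective weights
      b_eff = W2 @ (s * b1) + b2             # effective bias
      print(y, W_eff @ x + b_eff)            # identical (up to float rounding)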

  • @alifia276 · 3 years ago

    Thank you for sharing! This video cleared my doubts and gave me a good introduction for learning further.

  • @eeera-op8vw · 4 months ago

    good explanation for a beginner

  • @cheseremtitus1501 · 4 years ago

    Amazing presentation, easy and captivating to grasp.

  • @sgrimm7346 · 11 months ago

    Excellent explanation. Thank you.

  • @meghnasingh9941 · 4 years ago +4

    wow, that was really helpful, thanks a ton!!!!

    • @CodeEmporium · 4 years ago +1

      Glad to hear that. Thanks for watching!

  • @younus6133 · 4 years ago +1

    Oh man, amazing explanation. Thanks!

  • @wagsman9999 · 1 year ago

    Beautiful explanation!

  • @RJYL · 1 year ago

    Great explanation of activation functions, I like it so much.

  • @myrondcunha5670 · 3 years ago

    THIS HELPED SO MUCH! THANK YOU!

  • @epiccabbage6530 · 1 year ago +1

    What are the axes on these graphs? Are they the inputs, or input*weights + bias for the linear case?

    • @NITHIN-tu7qo · 7 months ago

      Did you get an answer for it?

  • @MrGarg10may · 1 year ago +1

    Then why isn't leaky ReLU/ELU used everywhere in LSTMs, GRUs, Transformers ..? Why is ReLU used everywhere?

  • @shrikanthnc3664 · 3 years ago

    Great explanation! Had to switch to earphones though :P

  • @alonsomartinez9588 · 2 years ago

    Awesome vid! Small suggestion: I might check the volume levels; the screaming at 0:56 was a bit painful to my ear and possibly sounded like audio clipping.

  • @yachen6562 · 3 years ago +1

    Really awesome video!

  • @tarkatirtha · 2 years ago

    Lovely intro! I am learning at the age of 58!

  • @youssofhammoud6335 · 4 years ago

    What I was looking for. Thanks!

  • @superghettoindian01 · 1 year ago

    Another great video 🎉🎉🎉!

  • @kellaerictech · 1 year ago

    That's a great explanation.

  • @ankitganeshpurkar · 3 years ago

    Nicely explained

  • @DrparadoxDrparadox · 2 years ago +1

    Great video. Could you explain what U and V are equal to in the equation o = Ux + V? And how did you come up with the decision boundary equation, and how did you determine the values of w1 and w2?
    Thanks in advance

  • @linuxbrad · 1 year ago

    9:03 what do you mean "most neurons are off during the forward step"?

  • @simranjoharle4220 · 1 year ago

    This was really helpful! Thanks!

  • @vasudhatapriya6315 · 1 year ago

    How is softmax a linear function here? Shouldn't it be non-linear?

  • @pouyan74 · 2 years ago +1

    I've read at least three books on ANNs so far, but it's only now, after watching this video, that I have an intuition of what exactly is going on and how activation functions break linearity!

  • @mohammadsaqibshah9252 · 1 year ago

    This was an amazing video!!! Keep up the good work!

  • @ShivamPanchbhai · 3 years ago

    This guy is a genius.

  • @programmer4047 · 4 years ago +1

    So, we should always use leaky ReLU?

  • @prashantk3088 · 4 years ago +1

    Really helpful.. thanks!

  • @LifeKiT-i · 1 year ago

    With the graphing calculator, your explanation is insanely clear!! Thank you!!

    • @CodeEmporium · 1 year ago

      Thanks so much for the kind comment! Glad the strategy of explaining is useful :)

  • @mangaenfrancais934 · 4 years ago +1

    Great video, keep going !

  • @AymaneArfaoui · 3 months ago

    What do x and y represent in the graph you use to show the cat and dog points?

  • @Mohammed-rx6ok · 2 years ago

    Amazing explanation and also funny 😅👏👏👏

  • @kphk3428 · 3 years ago +1

    1:16 I couldn't see that there were different colors, so I was confused.
    Also, I found the voicing of the training neural net annoying. But some people may like what other people dislike, so it's up to you whether to keep voicing them.

    • @gabe8168 · 3 years ago +1

      The dude is making these videos alone; if you don't like his voice, that's on you, but he can't just change his voice.

  • @bartekdurczak4085 · 3 months ago

    Good explanation, but the noises are a little bit annoying. Thank you though, bro.

  • @phucphan4195 · 2 years ago

    thank you very much, this is really helpful

  • @fredrikt6980 · 3 years ago

    Great explanation. Just add more contrast to your color selection.

    • @CodeEmporium · 3 years ago

      My palette is rather bland, I admit.

  • @AmirhosseinKhademi-in6gs · 1 year ago

    But we cannot use ReLU for regression of functions with high-order derivatives!
    In that case, we should still go with infinitely differentiable activation functions like "tanh", right?

  • @jigarshah1883 · 4 years ago

    Awesome video man !

  • @aaryamansharma6805 · 4 years ago +1

    awesome video

  • @x_ma_ryu_x · 2 years ago +1

    Thanks for the tutorial. I found the noises very cringe.

  • @Nathouuuutheone · 2 years ago

    What decides the shape of the boundary?

  • @jaheerkalanthar816 · 2 years ago

    Thanks mate

  • @wucga9335 · 10 months ago

    So how do we know when to use ReLU or leaky ReLU? Do we just use leaky ReLU altogether in all cases?

  • @francycharuto · 3 years ago

    gold, gold, gold.

  • @tahirali959 · 4 years ago +1

    Good work bro, keep it up.

  • @uzairkhan7430 · 2 years ago +1

    awesome

  • @najinajari3531 · 4 years ago

    Great video and great page :) Which software do you use to make these videos?

    • @CodeEmporium · 4 years ago +1

      Thanks! I use Camtasia Studio for the editing; Photoshop and draw.io for the images.

  • @igorpostoev2077 · 3 years ago

    Thanks man)

  • @ronin6158 · 4 years ago

    It should be possible to let (part of) the net optimize its own activation function, no?

  • @patite3103 · 3 years ago

    Amazing!

  • @Edu888777 · 3 years ago

    I still don't understand what an activation function is.

  • @masthanjinostra2981 · 3 years ago

    Benefited a lot

  • @ehsankhorasani_ · 3 years ago

    good job thank you

  • @TheAscent_ · 4 years ago

    @6:24 How does passing what is a straight line into the softmax function also give us a straight line? Isn't the output, and consequently the decision boundary, a sigmoid?
    Or is it the output before passing it into the activation function that counts as the decision boundary?

    • @CodeEmporium · 3 years ago

      6:45 - The line corresponds to those points in the feature space (the 2 feature values) where the sigmoid's height is 0.5.
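
      Put differently, the boundary is where the model is exactly undecided: sigmoid(w1*x1 + w2*x2 + b) = 0.5 precisely when w1*x1 + w2*x2 + b = 0, which is a straight line in the two-feature plane even though the output itself is S-shaped. A tiny sketch with hypothetical weights:

        import numpy as np

        sigmoid = lambda z: 1 / (1 + np.exp(-z))
        w1, w2, b = 2.0, -1.0, 0.5              # made-up parameters of the output neuron

        x1 = np.linspace(-3, 3, 5)
        x2 = -(w1 * x1 + b) / w2                # points exactly on the line w1*x1 + w2*x2 + b = 0
        print(sigmoid(w1 * x1 + w2 * x2 + b))   # [0.5 0.5 0.5 0.5 0.5] -> the decision boundary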

  • @prakharrai1090 · 2 years ago

    Can we use a linear activation with hinge loss for a linear SVM for binary classification?

  • @jamesdunbar2386 · 3 years ago

    Quality video!

  • @harishp6611 · 4 years ago

    yes! I liked it. Keep it up.

  • @undisclosedmusic4969 · 4 years ago +3

    Swish: activation function. Swift: programming language. More homework, less sound effects 😀

  • @farhanfadhilah5247 · 4 years ago

    this is helpful, thanks :)

  • @Acampandoconfrikis · 3 years ago

    thanks brah

  • @VinVin21969 · 3 years ago

    Plot twist: it's not that the boundary no longer changes; the vanishing gradient causes the gradient to be so small that we can assume it is negligible.
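
    Right: the update never literally stops, it just becomes too small to matter. A one-step gradient-descent sketch with illustrative numbers:

      w = 0.8             # some parameter that shapes the boundary
      lr = 0.01           # learning rate
      grad = 1e-9         # a "vanished" gradient flowing back through saturated sigmoids
      w_new = w - lr * grad
      print(w_new - w)    # about -1e-11: the boundary still moves, just imperceptibly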

  • @히안-p3j · 1 month ago

    I don't understand..

  • @ShinsekaiAcademy · 3 years ago

    thanks my man.

  • @abd0ulz942 · 1 year ago

    Learn activation functions with Dora...
    but honestly, it is good.

  • @ExMuslimProphetMuhammad · 3 years ago +1

    Bro, the video is probably good, but a lot of people might not click just because of your pic on the thumbnail. I'm here just to let you know this: avoid putting your face on the thumbnail or in the video, as no one is interested in seeing the educator while watching technical videos.

    • @CodeEmporium · 3 years ago +1

      You clicked. That's all I care about ;)

  • @jhondavidson2049 · 3 years ago

    Amazing!!!!!!!!!!!!!!!!!

  • @zaidalattar2483 · 3 years ago

    Perfect explanation!... Thanks

  • @abdussametturker · 3 years ago

    thx. subscribed

  • @keanuhero303 · 4 years ago

    What's the +1 node on each layer?

  • @harshmankodiya9397 · 3 years ago

    Great explanation.

  • @t.lnnnnx · 3 years ago

    followeeeed

  • @splytrz · 4 years ago

    I've been trying to make a convolutional autoencoder for MNIST. At first I used sigmoid activations on the convolutional part, and it couldn't make anything better than just a black screen at the output, but when I removed all activation functions it worked well. Does anyone have any idea why that happened?

    • @fatgnome · 4 years ago

      Are the outputs properly scaled back to pixel values after being squeezed by sigmoid?

    • @splytrz · 4 years ago

      @fatgnome Yes. Otherwise the output wouldn't match the images. Also, I checked model.summary() every time I made changes to the model.
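
      For anyone hitting something similar, one common range gotcha (a guess at a general cause, not a diagnosis of this particular model): a sigmoid layer can only emit values in (0, 1), so reconstruction targets left on a 0-255 scale can never be matched, and the output looks black when displayed. A small NumPy illustration:

        import numpy as np

        sigmoid = lambda z: 1 / (1 + np.exp(-z))

        target = np.array([0.0, 64.0, 128.0, 255.0])        # raw pixel values, not rescaled
        output = sigmoid(np.array([-2.0, 0.0, 2.0, 6.0]))   # anything a sigmoid layer can emit
        print(output)           # all within (0, 1) -- essentially black on a 0-255 display
        print(target / 255.0)   # rescaled targets the sigmoid can actually reach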

  • @frankerz8339 · 3 years ago

    nice

  • @DataScoutt · 3 years ago

    Explained the Activation Function ua-cam.com/video/sar9xi-ah4M/v-deo.html

  • @wadyn95 · 4 years ago

    Wtf, what's with the sound effects on the pictures...

  • @hoomanrs3804 · 5 months ago

    👏👏👏❤️