What Is an Activation Function in a Neural Network? Types of Activation Functions in Neural Networks

  • Published 4 Feb 2025

COMMENTS • 60

  • @MachineLearningWithJay
    @MachineLearningWithJay  3 years ago +10

    If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.

    • @lamis_18
      @lamis_18 3 years ago +1

      Please indicate your variables, or use pictures to explain what Z, A, etc. are.

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      @@lamis_18 okay

  • @masplacasmaschicas6155
    @masplacasmaschicas6155 1 year ago +5

    You explain these concepts more completely and simply than any other video I’ve seen. Thank you

  • @tiyyob
    @tiyyob 2 years ago +10

    Just started this playlist and found it very well explained. Thank you for the great work.

  • @PrithaMajumder
    @PrithaMajumder 6 months ago +2

    Thanks a lot for this amazing introductory lecture 😀
    Lecture 3 completed from this neural network playlist.

  • @sanjibdutta9688
    @sanjibdutta9688 1 month ago +1

    Thank you so much, you've made it so easy to understand. I have exams tomorrow. You saved me, God bless!

    • @MachineLearningWithJay
      @MachineLearningWithJay  1 month ago

      You're welcome! I am glad that the video helped you. Hope your exam went well!

  • @G83X
    @G83X 1 year ago

    Damn, this is lowkey a really good and insightful way of explaining this. I'll be sharing it with my students. Exceptional tutorial!

  • @blackswann9555
    @blackswann9555 2 months ago +1

    The softmax function now makes sense. Thanks!

  • @brindhasenthilkumar7871
    @brindhasenthilkumar7871 3 years ago +2

    A clear and brief explanation of the activation functions, Sir Patel. Wonderful, I am acquiring new knowledge from every one of your videos. Great going!

  • @s.rt_
    @s.rt_ 2 months ago +1

    Explained very crisply, very helpful! Thanks.

  • @petcumircea8137
    @petcumircea8137 1 year ago +3

    Very underrated channel. Great explanation!

  • @rutvipatel6896
    @rutvipatel6896 2 years ago +1

    You are saving me rn from my midterm tomorrow. Thank you!!!

  • @aienImchen-hs6fp
    @aienImchen-hs6fp 11 months ago +2

    Will this explanation be enough for a beginner in ML? I understood what you have explained. I am learning from you. Thank you.

  • @chillax1629
    @chillax1629 2 years ago

    Thanks a lot for sharing! It really helped me understand why and when to use which activation function. Very good!

  • @maheshyezarla5294
    @maheshyezarla5294 1 year ago +1

    very very very useful for me. Thank you

  • @priyanshupatelhawk
    @priyanshupatelhawk 2 years ago

    Amazing explanation! Just one mistake at 10:16 to 10:24: that should be "Sigmoid and TanH", not "ReLU and TanH"...

  • @Hesham.Alshafie
    @Hesham.Alshafie 1 year ago

    Perfect explanation. Thank you, keep going!

  • @pankajmourya4583
    @pankajmourya4583 1 year ago +1

    Great work bro 👍

  • @user-ul2mw6fu2e
    @user-ul2mw6fu2e 2 years ago +4

    First of all, thank you very much for these videos. I have a question about cross-entropy. I understand how cross-entropy works; I don't understand why it works. I would appreciate it if you made videos about these topics.
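
For readers with the same question about cross-entropy, the usual intuition is this: with a one-hot target, the loss reduces to -log(p) for the probability the model assigns to the true class, so confident wrong predictions are penalized far more heavily than confident right ones. A minimal sketch, using made-up probability vectors:

```python
import numpy as np

# Cross-entropy for one sample: loss = -sum(y_true * log(y_pred)).
# With a one-hot target, only the probability given to the true class
# matters, so the loss collapses to -log(p_true_class).

def cross_entropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0.0, 1.0, 0.0])              # true class is index 1
confident_right = np.array([0.05, 0.90, 0.05])  # made-up predictions
confident_wrong = np.array([0.90, 0.05, 0.05])

print(cross_entropy(y_true, confident_right))   # ~0.105 (small penalty)
print(cross_entropy(y_true, confident_wrong))   # ~3.0   (large penalty)
```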

  • @teozhisen4496
    @teozhisen4496 1 year ago

    very well explained, thanks so much for the video

  • @TechWorld-ec2ec
    @TechWorld-ec2ec 5 months ago +1

    Amazing explanation

  • @Satvikshukla0007
    @Satvikshukla0007 6 months ago +1

    Very well explained

  • @manikantaperumalla2197
    @manikantaperumalla2197 8 months ago +1

    Well explained, brother. Keep it up!

  • @Ivaan_reminiscence
    @Ivaan_reminiscence 8 months ago +1

    @10:18 Wouldn't it be "both the tanh and sigmoid functions (and not 'ReLU') had this disadvantage of the vanishing gradient problem"? ReLU is its solution, right?
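
This comment and the earlier one at 10:16 point at the same slip: it is sigmoid and tanh whose gradients vanish, because their derivatives shrink toward zero as |z| grows. A minimal sketch of that saturation, using a few illustrative inputs:

```python
import numpy as np

# sigmoid'(z) = s(z) * (1 - s(z)) peaks at 0.25; tanh'(z) = 1 - tanh(z)^2
# peaks at 1.0. Both collapse toward 0 as |z| grows, which is what
# starves deep networks of gradient during backpropagation.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [0.0, 2.0, 5.0, 10.0]:
    d_sigmoid = sigmoid(z) * (1.0 - sigmoid(z))
    d_tanh = 1.0 - np.tanh(z) ** 2
    print(f"z = {z:5.1f}   sigmoid' = {d_sigmoid:.6f}   tanh' = {d_tanh:.9f}")

# At z = 10 the derivatives are ~0.000045 and ~0.000000008: chain a few
# saturated layers together and the gradient effectively vanishes.
```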

  • @Luca_040
    @Luca_040 1 year ago

    Good summary, thank you

  • @RamaKrishna-fp7yd
    @RamaKrishna-fp7yd 2 years ago +1

    Keep it up bro, nice explanation ✅

  • @MeshachJones-d5s
    @MeshachJones-d5s 9 months ago

    soooooo grateful for you

  • @Sameera_005
    @Sameera_005 4 months ago +2

    PERFECT

  • @nemeziz_prime
    @nemeziz_prime 2 years ago

    Great explanation 🔥👏🏻

  • @Ivaan_reminiscence
    @Ivaan_reminiscence 8 months ago

    Does ReLU make f(x) = 0 even if x is very small but > 0? Because with tanh/sigmoid the gradient becomes very small but is still > 0, whereas with ReLU, f(x) seems to be 0 only when x ≤ 0.
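
To the question above: ReLU is just max(0, x), so any positive input, however tiny, passes through unchanged; the output (and gradient) is zero only for x ≤ 0. A quick check in code:

```python
import numpy as np

# ReLU(x) = max(0, x): any positive input passes through unchanged.
relu = lambda x: np.maximum(0.0, x)

print(relu(1e-9))   # 1e-09 -> tiny but still positive, not squashed to 0
print(relu(0.0))    # 0.0
print(relu(-3.0))   # 0.0   -> output is exactly 0 only for x <= 0
```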

  • @ArchitStark
    @ArchitStark 6 months ago

    Is it possible for you to add/share further reading documents?

  • @algorithmo134
    @algorithmo134 8 months ago

    How does ReLU solve the vanishing gradient problem, since part of the gradient is zero for x < 0?
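
The usual answer, sketched below with illustrative inputs: for x > 0 the ReLU derivative is exactly 1, so gradients do not shrink as they pass back through many layers; the zero gradient for x < 0 is a separate drawback (the "dying ReLU" problem), which variants such as Leaky ReLU mitigate.

```python
import numpy as np

def relu_grad(x):
    return np.where(x > 0, 1.0, 0.0)    # exactly 1 for x > 0, never shrinks

def leaky_relu_grad(x, alpha=0.01):
    return np.where(x > 0, 1.0, alpha)  # small slope keeps negative units alive

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(relu_grad(x))        # [0. 0. 1. 1.] -> no saturation for x > 0,
                           # but "dead" units wherever x < 0
print(leaky_relu_grad(x))  # [0.01 0.01 1. 1.]
```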

  • @susw3602
    @susw3602 2 years ago +1

    Which one is a non-symmetric activation function?

  • @surajJoshiFilms
    @surajJoshiFilms 3 years ago +1

    Hey bro,
    I am a beginner learning deep learning. Can you suggest any materials to learn deep learning from scratch?

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago +1

      Hi Suraj, I would highly recommend taking the Coursera course from Andrew Ng for Deep Learning.
      Here's the link: www.coursera.org/specializations/deep-learning
      This course is for absolute beginners, and you will develop a better understanding of deep learning.
      Also, if you feel like you can't afford it, there are ways on Coursera to take courses for free, like auditing or applying for financial aid.
      I hope you find Deep Learning interesting!

    • @surajJoshiFilms
      @surajJoshiFilms 3 years ago +1

      @@MachineLearningWithJay Thank you brother 😄 for your suggestion!

  • @pratiknale6993
    @pratiknale6993 3 years ago +1

    💐💐💐

  • @saumyaagrawal7781
    @saumyaagrawal7781 5 months ago +1

    I think I’m in love with you

  • @mrunalwaghmare
    @mrunalwaghmare 6 months ago +1

    Brother, why does nobody explain this in Hindi 🤢🤢🤢🤢🤢 🤮🤮🤮

    • @MachineLearningWithJay
      @MachineLearningWithJay  6 months ago

      Try the Code Basics Hindi channel. Maybe you will like it.

    • @Unacquaintedhere
      @Unacquaintedhere 3 months ago

      @@MachineLearningWithJay Brother, what is that e in the softmax function, how do you compute the values of e, and how does adding them up give 0.9 something?
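
For readers with the same question: the e in the softmax formula is Euler's number (≈ 2.718); each input z_i is exponentiated and divided by the sum of all the exponentials, so every output is positive and the outputs together sum to 1 (a value like 0.9 is just one class's share). A minimal sketch with made-up logits:

```python
import numpy as np

# softmax(z)_i = e^(z_i) / sum_j e^(z_j), where e ≈ 2.71828 (Euler's number).
def softmax(z):
    exps = np.exp(z - np.max(z))  # subtracting the max avoids overflow
    return exps / np.sum(exps)

z = np.array([2.0, 1.0, 0.1])     # made-up logits
p = softmax(z)
print(p)          # approximately [0.659 0.242 0.099]
print(p.sum())    # 1.0 -> the outputs sum to 1 (up to rounding)
```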

  • @harishkumar0064
    @harishkumar0064 2 years ago +2

    Try to talk in your normal accent.

  • @ankur_ac7
    @ankur_ac7 1 month ago +1

    Great explanation 🔥👏🏻