What is an Activation Function in a Neural Network? Types of Activation Functions in Neural Networks

  • Published 22 Aug 2024

COMMENTS • 44

  • @CodingLane
    @CodingLane  3 years ago +8

    If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.

    • @lamis_18
      @lamis_18 2 years ago +1

      Please label your variables, or use pictures, to explain what Z and A are, etc. (a sketch follows this thread)

    • @CodingLane
      @CodingLane  2 years ago

      @@lamis_18 okay
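
On the question about Z and A above: a minimal sketch of the usual convention, assuming (not taken from the video) that Z denotes a layer's linear output and A the result of applying the activation function; the variable names and values here are illustrative.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # X: input column vector; W: weights; b: biases (all values illustrative)
    X = np.array([[0.5], [1.0]])
    W = np.array([[0.2, -0.4], [0.7, 0.1]])
    b = np.array([[0.0], [0.1]])

    Z = W @ X + b     # Z: the linear (pre-activation) output, W.X + b
    A = sigmoid(Z)    # A: the activation, i.e. Z passed through the activation function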

  • @masplacasmaschicas6155
    @masplacasmaschicas6155 8 months ago +4

    You explain these concepts more completely and simply than any other video I’ve seen. Thank you

  • @tiyyob
    @tiyyob 2 years ago +10

    Just started this playlist and found it very well explained. Thank you for the great work.

  • @PrithaMajumder
    @PrithaMajumder 27 days ago +2

    Thanks a lot for this amazing introductory lecture 😀
    Lecture 3 completed from this Neural Network playlist

  • @petcumircea8137
    @petcumircea8137 1 year ago +3

    Very underrated channel. Great explanation!

  • @brindhasenthilkumar7871
    @brindhasenthilkumar7871 3 years ago +2

    A clear and brief explanation of the activation functions, Sir Patel. Wonderful. I am acquiring new knowledge from your every video. Great going!

    • @CodingLane
      @CodingLane  3 years ago

      Thank you so much! I am glad you found my videos helpful.

  • @G83X
    @G83X 7 months ago

    Damn, this is lowkey a really good and insightful way of explaining this. I'll be sharing it with my students. Exceptional tutorial!

  • @rutvipatel6896
    @rutvipatel6896 1 year ago +1

    You are saving me rn from my midterm tomorrow. Thank you!!!

    • @CodingLane
      @CodingLane  1 year ago

      Really happy to hear this. Glad the videos helped you! 🙂

  • @Satvikshukla0007
    @Satvikshukla0007 18 days ago +1

    Very well explained

    • @CodingLane
      @CodingLane  16 days ago

      Thank you! Glad it was helpful!

  • @aienImchen-hs6fp
    @aienImchen-hs6fp 5 months ago +1

    Will this explanation be enough for a beginner in ML? I understood what you explained. I am learning from you. Thank you.

  • @pankajmourya4583
    @pankajmourya4583 8 months ago +1

    Great work bro 👍

  • @manikantaperumalla2197
    @manikantaperumalla2197 2 months ago +1

    Well explained, brother. Keep it up.

  • @chillax1629
    @chillax1629 1 year ago

    Thanks a lot for sharing! It really helped me understand why and when to use which activation function. Very good!

  • @user-ul2mw6fu2e
    @user-ul2mw6fu2e 1 year ago +4

    First of all, thank you very much for these videos. I have a question about cross-entropy. I understand how cross-entropy works; I don't understand why it works. I would appreciate it if you could make videos about these topics. (a sketch follows this thread)

    • @CodingLane
      @CodingLane  1 year ago

      Thanks for the suggestion. Will try to cover this topic.
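
On the cross-entropy question above: a minimal sketch (not from the video) of one intuition for why it works. Binary cross-entropy is the negative log-likelihood of the true label, so confident wrong predictions are penalized far more heavily than confident correct ones are rewarded.

    import numpy as np

    def binary_cross_entropy(y, y_hat):
        # Negative log-likelihood of the true label y under predicted probability y_hat
        return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

    # True label is 1: a confident correct prediction costs almost nothing,
    # while a confident wrong one is penalized heavily (log blows up near 0)
    print(binary_cross_entropy(1, 0.99))  # ~0.01
    print(binary_cross_entropy(1, 0.01))  # ~4.61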

  • @maheshyezarla5294
    @maheshyezarla5294 1 year ago +1

    Very very very useful for me. Thank you!

  • @priyanshupatelhawk
    @priyanshupatelhawk 1 year ago

    Amazing explanation! Just one mistake from 10:16 to 10:24: that should be "Sigmoid and TanH", not "ReLU and TanH"... (a numeric check follows below)
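
The correction is right: it is sigmoid and tanh that saturate. A quick numeric check, assuming only the standard definitions of the two functions, of how their derivatives shrink as |z| grows:

    import numpy as np

    z = np.array([0.0, 2.5, 5.0, 10.0])
    s = 1 / (1 + np.exp(-z))
    d_sigmoid = s * (1 - s)          # peaks at 0.25 and shrinks toward 0 as |z| grows
    d_tanh = 1 - np.tanh(z) ** 2     # peaks at 1.0 and shrinks toward 0 as |z| grows

    print(d_sigmoid)  # [0.25  0.0701  0.00665  0.0000454]
    print(d_tanh)     # [1.0   0.0266  0.000182  0.00000000825]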

  • @user-dg1mn3ze7x
    @user-dg1mn3ze7x 6 months ago

    Perfect explanation, thank you. Keep going!

  • @teozhisen4496
    @teozhisen4496 11 months ago

    Very well explained, thanks so much for the video.

  • @RamaKrishna-fp7yd
    @RamaKrishna-fp7yd 2 years ago +1

    Keep it up bro, nice explanation ✅

  • @user-py3rl8jt9s
    @user-py3rl8jt9s 4 months ago

    soooooo grateful for you

  • @ArchitStark
    @ArchitStark 1 month ago

    Is it possible for you to add/share further reading documents?

  • @algorithmo134
    @algorithmo134 2 months ago

    How does ReLU solve the vanishing gradient problem, since part of the gradient is zero for x < 0? (a sketch follows this comment)
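
A sketch of the usual answer, assuming the standard definitions: ReLU's derivative is exactly 1 for x > 0, so gradients flowing through active units never shrink as layers stack (unlike sigmoid/tanh). The zero gradient for x < 0 is a separate problem ("dying ReLU"), which variants such as Leaky ReLU address by keeping a small slope there.

    import numpy as np

    def relu_grad(x):
        return (x > 0).astype(float)        # exactly 1 for x > 0: no shrinking as layers stack

    def leaky_relu_grad(x, alpha=0.01):
        return np.where(x > 0, 1.0, alpha)  # keeps a small slope for x <= 0

    x = np.array([-3.0, -0.1, 0.5, 4.0])
    print(relu_grad(x))        # [0.   0.   1.   1.]
    print(leaky_relu_grad(x))  # [0.01 0.01 1.   1.]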

  • @Ivaan_reminiscence
    @Ivaan_reminiscence 3 months ago

    @10:18 Wouldn't it be "both the tanh and sigmoid function (and not 'ReLU') had this disadvantage of the vanishing gradient prob..."? ReLU is its solution, right?

  • @Ivaan_reminiscence
    @Ivaan_reminiscence 2 months ago

    Does ReLU make f(x) = 0 even if x is very small but > 0? Because with tanh/sigmoid the rate of change of the gradient becomes very small but still > 0, whereas with ReLU, f(x) seems to be 0 only when x
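
On the question above, assuming the standard definition f(x) = max(0, x): ReLU is the identity for any x > 0, however small, so the output is exactly 0 only for x <= 0.

    import numpy as np

    def relu(x):
        return np.maximum(0, x)

    print(relu(1e-6))   # 1e-06 -- any positive input, however small, passes through as-is
    print(relu(-1e-6))  # 0.0   -- the output is exactly 0 only for x <= 0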

  • @Luca_040
    @Luca_040 8 months ago

    Good summary, thank you

  • @nemeziz_prime
    @nemeziz_prime 1 year ago

    Great explanation 🔥👏🏻

  • @susw3602
    @susw3602 2 years ago +1

    Which one is a non-symmetric activation function? (a sketch follows this comment)
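
One common reading of this question (an assumption here, not from the video): tanh is symmetric (odd) about the origin, since tanh(-x) = -tanh(x), while sigmoid and ReLU are not. A quick check:

    import numpy as np

    def sigmoid(v):
        return 1 / (1 + np.exp(-v))

    x = 1.5
    print(np.tanh(-x), -np.tanh(x))              # equal: tanh is odd (symmetric about the origin)
    print(sigmoid(-x), -sigmoid(x))              # not equal: sigmoid is non-symmetric
    print(np.maximum(0, -x), -np.maximum(0, x))  # not equal: ReLU is non-symmetric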

  • @surajjoshi3433
    @surajjoshi3433 3 years ago +1

    Hey bro,
    I am a beginner learning deep learning. Can you suggest any materials to learn deep learning from scratch?

    • @CodingLane
      @CodingLane  3 years ago +1

      Hi Suraj, I would highly recommend taking the Coursera Deep Learning course from Andrew Ng.
      Here’s its link: www.coursera.org/specializations/deep-learning
      The course is for absolute beginners, and you will develop a better understanding of deep learning.
      Also, if you feel like you can’t afford it, there are ways on Coursera to take courses for free, like auditing or applying for financial aid.
      I hope you find deep learning interesting!

    • @surajjoshi3433
      @surajjoshi3433 3 years ago +1

      @@CodingLane Thank you, brother 😄, for your suggestion!

  • @pratiknale6993
    @pratiknale6993 2 years ago +1

    💐💐💐

  • @mrunalwaghmare
    @mrunalwaghmare 19 days ago +1

    Brother, why does nobody explain this in Hindi 🤢🤢🤢🤢🤢 🤮🤮🤮

    • @CodingLane
      @CodingLane  16 days ago

      Try the Code Basics Hindi channel. Maybe you'll like it.

  • @harishkumar0064
    @harishkumar0064 1 year ago +2

    Try to talk in your normal accent.