Vanishing and exploding gradients | Deep Learning Tutorial 35 (Tensorflow, Keras & Python)

  • Published 4 Dec 2024

COMMENTS • 58

  • @codebasics
    @codebasics  2 years ago +3

    Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced

  • @samarsinhsalunkhe7529
    @samarsinhsalunkhe7529  1 year ago +3

    best Deep Learning playlist to date

  • @eitanamos5867
    @eitanamos5867  3 years ago +9

    Hi Sir, I appreciate your videos. They're really useful. Can you please make videos that show examples of RNN and LSTM, as well as videos on deep reinforcement learning?

  • @meilinlyu3572
    @meilinlyu3572  2 years ago +1

    Amazing explanations. Thank you very much!

  • @hardikvegad3508
    @hardikvegad3508  3 years ago +5

    AMAZING EXPLANATION SIR....
    Please make a video on how you understand and explain such complex topics so easily; that will help us educate ourselves🙌🏻🙌🏻🙌🏻

    • @codebasics
      @codebasics  3 years ago +11

      Good point. I will note it down.

  • @n.ilayarajahicetstaffit3709
    @n.ilayarajahicetstaffit3709  2 years ago +1

    THE EXPLANATION, VIDEO, AND AUDIO QUALITY ARE GREAT. PLEASE GUIDE US ON WHAT SOFTWARE YOU USED FOR RECORDING THE VIDEO.

    • @codebasics
      @codebasics  2 years ago

      Camtasia Studio. Blue Yeti mic.

  • @saifsd8267
    @saifsd8267  3 years ago +2

    Sir, can you please make a video on generative adversarial networks and a simple example project that implements a GAN?

  • @ah-rdk
    @ah-rdk  7 months ago

    Thank you very much, sir. Crystal clear explanation!

  • @suryanshpatel4750
    @suryanshpatel4750  8 months ago

    The video-by-video explanation series is awesome :)

  • @anonymousAI-pr2wq
    @anonymousAI-pr2wq  2 years ago

    Thank you for the great video. Clear and easy to understand.

  • @mandarchincholkar5955
    @mandarchincholkar5955  3 years ago +2

    Please release all videos as soon as possible. 🙏🏻

    • @codebasics
      @codebasics  3 years ago +3

      I am trying, Mandar. It takes time to produce these videos.

    • @muhammedrajab2301
      @muhammedrajab2301  3 years ago

      @@codebasics I agree.

  • @walidmaly3
    @walidmaly3  3 years ago +1

    Thanks a lot. I think there is a typo in the slides, as a3 is missing: you have a2 followed by a4.

  • @akhileshkarra384
    @akhileshkarra384  2 years ago

    Very good explanation

  • @Shannxy
    @Shannxy  2 years ago

    4:35 This felt personal

  • @Acampandoconfrikis
    @Acampandoconfrikis  3 years ago +1

    4:36 is literally me, lol
    amazing explanation tho, thanks so much!

  • @vetrijayakumaralumni376
    @vetrijayakumaralumni376  3 years ago +1

    Need survival analysis! Please do it

  • @tahahusain8577
    @tahahusain8577  3 years ago +3

    Hi Dhaval, Great content! Really learning a lot from your videos. Do you upload your slides as well? Would be really helpful if I could go through slides when required. Thank you.

  • @jongcheulkim7284
    @jongcheulkim7284  3 years ago

    Thank you.

  • @md.alamintalukder3261
    @md.alamintalukder3261  1 year ago

    Thanks a lot

  • @piyalikarmakar5979
    @piyalikarmakar5979  3 years ago

    Sir, how can GRU and LSTM solve the vanishing gradient problem? Is there any video on that? Kindly let me know.

  • @jaysoni7812
    @jaysoni7812  3 years ago +1

    Make a video on optimisers

    • @codebasics
      @codebasics  3 years ago +2

      Point noted.

    • @jaysoni7812
      @jaysoni7812  3 years ago

      @@codebasics 😂 Thank you, sir 🙏 Last time I requested vanishing gradients and you made a video on it. Thanks again.

    • @jaysoni7812
      @jaysoni7812  3 years ago +1

      @@codebasics I hope you will cover all optimisers like GD, SGD, mini-batch SGD, SGD with momentum, Adagrad, Adadelta, RMSprop, and Adam if possible.
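
      A quick sketch of how the optimisers in that list appear in tf.keras, for anyone who wants to try them before a dedicated video; plain GD and mini-batch SGD are just SGD run with the batch size set to the full dataset or a mini-batch. Learning rates shown are illustrative, not recommendations.

      import tensorflow as tf

      # Each of these classes exists in tf.keras.optimizers.
      sgd          = tf.keras.optimizers.SGD(learning_rate=0.01)
      sgd_momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
      adagrad      = tf.keras.optimizers.Adagrad(learning_rate=0.001)
      adadelta     = tf.keras.optimizers.Adadelta(learning_rate=0.001)
      rmsprop      = tf.keras.optimizers.RMSprop(learning_rate=0.001)
      adam         = tf.keras.optimizers.Adam(learning_rate=0.001)

      # Usage: model.compile(optimizer=adam, loss="binary_crossentropy")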

  • @porrasbrand
    @porrasbrand  2 years ago

    As the number of hidden layers grows, the gradient becomes very small and the weights will hardly change.
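
    A minimal sketch of that effect, assuming an arbitrary toy 10-layer sigmoid network on random data (not code from the video): gradient norms shrink toward the early layers.

    import tensorflow as tf

    # Toy deep network: many sigmoid layers make early-layer gradients shrink.
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(16, activation="sigmoid") for _ in range(10)]
        + [tf.keras.layers.Dense(1, activation="sigmoid")]
    )
    x = tf.random.normal((32, 16))
    y = tf.random.uniform((32, 1))

    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y, model(x)))
    grads = tape.gradient(loss, model.trainable_variables)

    # Kernel gradient norms get smaller toward the first layers, so those
    # weights "hardly change", exactly as the comment above describes.
    for var, g in zip(model.trainable_variables, grads):
        if "kernel" in var.name:
            print(var.name, float(tf.norm(g)))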

  • @joyanbhathena7251
    @joyanbhathena7251  3 years ago +1

    Missing the exercise questions

  • @haneulkim4902
    @haneulkim4902  2 years ago

    While training a deep neural network with 2 units in the final layer with sigmoid activation for binary classification, both weights of the final layer become 0, leading to the same score for all inputs since the sigmoid then uses only the bias. What are some reasons for this?

  • @ChessLynx
    @ChessLynx  2 years ago

    3:33 "Bigger small number" lol

  • @yourentertainer19
    @yourentertainer19  1 year ago +1

    Hi everyone, I have a doubt: as said in the video, many times we take the derivative of the loss with respect to the weights, but the loss is a constant value and the derivative of a constant is zero, so how are the weights updated? I know it's a silly question, but can anyone please answer? It would be very helpful.
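
    One way to see the answer: the loss value at the current weights is a number, but the loss itself is a function of the weights, and the derivative is taken of that function, then evaluated at the current weights. A toy example with a hypothetical one-weight model:

    import tensorflow as tf

    w = tf.Variable(2.0)   # a single toy weight
    x, y = 3.0, 9.0        # one training example

    with tf.GradientTape() as tape:
        loss = (w * x - y) ** 2   # loss depends on w through the prediction

    # dL/dw = 2*(w*x - y)*x = 2*(6 - 9)*3 = -18.0, so the weight does update.
    print(tape.gradient(loss, w).numpy())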

  • @sahith2547
    @sahith2547  3 years ago

    Great explanation, sir 🔥🔥🔥 ...I wonder why you haven't reached M subscribers...!!!!

  • @kishanikandasamy
    @kishanikandasamy  3 years ago

    Perfect Explanation! Thank You

  • @jojushaji3010
    @jojushaji3010  2 years ago

    Where can I get the presentation you're using?

  • @manojsamal7248
    @manojsamal7248  3 years ago

    If the weights of this single layer are the same in an RNN, then why backpropagate all the way to the last step? Why not use only the last word and get the weight?

  • @harshalbhoir8986
    @harshalbhoir8986  1 year ago

    great!!

  • @emmanuelmoupojou1505
    @emmanuelmoupojou1505  2 years ago

    Great!

  • @rohankushwah5192
    @rohankushwah5192  3 years ago

    Sir, how many tutorials are still remaining to complete this deep learning playlist?
    Or how much of the playlist have we covered so far, in terms of percentage?

    • @codebasics
      @codebasics  3 years ago

      We have covered around 90% of the tutorials. I will publish more videos on RNN, and then we will start deep learning projects.

    • @rohankushwah5192
      @rohankushwah5192  3 years ago

      @@codebasics eagerly waiting for DL projects 😋

  • @taabarrimahaganacsigaiyoti6356
    @taabarrimahaganacsigaiyoti6356  3 years ago

    I have recently started your data science tutorials; in particular, I have been learning Python and statistics. I have no fear of programming concepts, but the problem comes with machine learning, which brings me back to my school days of algebra, matrices, and calculus. Is there a short path that can help me cover those areas? Can I be a data scientist while being just average at math?

    • @codebasics
      @codebasics  3 years ago +1

      I would say, as and when you encounter a math topic, just try to get that topic clarified. I am in fact going to make a full tutorial series on "math for ML". Stay tuned!

    • @shanglee643
      @shanglee643  3 years ago

      @@codebasics Holy moly! I want to hug you, teacher.

  • @shimulbhattacharjee9560
    @shimulbhattacharjee9560  3 months ago

    There is no a3 after a2, and after the "...", how do a5 and a6 come?

  • @haneulkim4902
    @haneulkim4902  2 years ago

    Hi, while training on a highly imbalanced dataset for binary classification, the weights of the final layer keep going to zero, leading to y_pred = 0 for all X. What are some reasons for this?
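
    Not a definitive diagnosis, but one common mitigation to try for this symptom is class weighting, sketched below under the assumption of a hypothetical ~1% positive rate:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    x = tf.random.normal((1000, 8))
    y = tf.cast(tf.random.uniform((1000, 1)) < 0.01, tf.float32)  # ~1% positives

    # class_weight scales each sample's loss so the rare class's gradient
    # signal is not swamped by the majority class.
    model.fit(x, y, epochs=1, class_weight={0: 1.0, 1: 99.0})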

  • @ronyjoseph7868
    @ronyjoseph7868  3 years ago

    Sir, in CNN, features are automatically extracted, but my project coordinator asked me what features are automatically extracted by the CNN, and I am stuck on this question. Please help me with what I should answer. I always say "we don't need to hand-engineer any features; the CNN extracts them in the conv layers", but I think he wasn't satisfied with this answer.
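
    One concrete way to show what "features" means is to read out a conv layer's feature maps: the learned filter responses, which in early layers typically pick up edges, corners, and textures. A sketch with an arbitrary toy model:

    import tensorflow as tf

    base = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # A second model that exposes the conv layer's output: the responses of
    # its 8 learned filters are the "automatically extracted" features.
    feature_extractor = tf.keras.Model(base.inputs, base.layers[0].output)
    maps = feature_extractor(tf.random.normal((1, 28, 28, 1)))
    print(maps.shape)  # (1, 26, 26, 8): one 26x26 feature map per filter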

  • @r21061991
    @r21061991  3 years ago

    Sir, please include coding along with the videos.

  • @nisargbhatt4967
    @nisargbhatt4967  1 year ago

    The title says Tensorflow, Keras and Python, but there is no tutorial in the last three videos.. not enough for me to get started

  • @anandailyasa2530
    @anandailyasa2530  2 years ago

    🔥🔥🔥🔥👍👍

  • @richasharmav
    @richasharmav  3 years ago

    👍🏻👍🏻

  • @raviagrawal5656
    @raviagrawal5656  3 months ago +1

    Stop using "more smaller"

  • @somdc6095
    @somdc6095  2 years ago +1

    "The vanishing gradient is like a dumb student in a class who is hardly learning anything" - I think this example doesn't suit you.

  • @shashankjaiswal1298
    @shashankjaiswal1298  3 years ago

    I protest on behalf of dumb students.. strong condemnation (kadi ninda) from my side.