Session On Different Types Of Loss Function In Deep Learning

  • Published 25 Aug 2024
  • Join Ineuron Full Stack Data Science Course with Placement Guarantee
    ineuron1.viewp...
    ⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I've been using Kite for a few months and I love it! www.kite.com/g...
    All Playlist In My channel
    Interview Playlist: • Machine Learning Inter...
    Complete DL Playlist: • Complete Road Map To P...
    Julia Playlist: • Tutorial 1- Introducti...
    Complete ML Playlist : • Complete Machine Learn...
    Complete NLP Playlist: • Natural Language Proce...
    Docker End To End Implementation: • Docker End to End Impl...
    Live stream Playlist: • Pytorch
    Machine Learning Pipelines: • Docker End to End Impl...
    Pytorch Playlist: • Pytorch
    Feature Engineering : • Feature Engineering
    Live Projects : • Live Projects
    Kaggle competition : • Kaggle Competitions
    Mongodb with Python : • MongoDb with Python
    MySQL With Python : • MYSQL Database With Py...
    Deployment Architectures: • Deployment Architectur...
    Amazon sagemaker : • Amazon SageMaker
    Please donate if you want to support the channel through the GPay UPI ID,
    Gpay: krishnaik06@okicici
    Telegram link: t.me/joinchat/...
    Please join as a member of my channel to get additional benefits like Data Science materials, live streams for members, and many more
    / @krishnaik06
    Please do subscribe to my other channel too
    / @krishnaikhindi
    Connect with me here:
    Twitter: / krishnaik06
    Facebook: / krishnaik06
    Instagram: / krishnaik06

COMMENTS • 66

  • @ameermohamedr4982
    @ameermohamedr4982 3 years ago +162

    Rest of the World: we have Andrew Ng to teach AI.
    India: we have Krish Naik 🔥

    • @arjyabasu1311
      @arjyabasu1311 3 years ago +1

      100% right

    • @thepresistence5935
      @thepresistence5935 3 years ago +3

      It's true, he is one of the boons for our nation.

    • @RTC1655
      @RTC1655 3 years ago +5

      As an educator Krish Naik >> Andrew Ng

    • @thepresistence5935
      @thepresistence5935 3 years ago +4

      @@RTC1655 Haa dude, both are good, please don't compare. But I love Krish.

    • @gamer_acjarafat6422
      @gamer_acjarafat6422 1 year ago +1

      @@thepresistence5935 But honestly, as a beginner it's really hard to understand from Andrew Ng; I can grasp topics from Krish very easily (I'm in 9th class).

  • @maheshmec1
    @maheshmec1 10 months ago +2

    Krish, there is a correction @15:49: you referred to squared error loss as mean squared error loss (it becomes MSE only when we divide by the number of samples). You corrected it at 27:44, thanks!

  • @daniellabrito4325
    @daniellabrito4325 3 years ago +2

    You explained the differences between TensorFlow versions so nicely. Thanks. They ask this in interviews too.

  • @raginibhayana8305
    @raginibhayana8305 1 year ago +1

    For some reason I have started to binge-watch these.

  • @spacecoreexplorer320
    @spacecoreexplorer320 3 months ago

    One of the best teachers in India. He makes the concepts simple and clear. Great teaching skills.

  • @IdealVijay-
    @IdealVijay- 3 years ago +10

    With the 1/m factor it should be mean squared error; m would be the batch size, wouldn't it?

  • @praveenkuthuru7439
    @praveenkuthuru7439 27 days ago

    Amazing tutorial... excellent work... thanks for the content.

  • @rushikeshpalnitkar4019
    @rushikeshpalnitkar4019 1 year ago +2

    @18:57 Sir, I think the error is squared not only to penalize high errors but also because, without squaring, a positive error on one record and a negative error on another would cancel out, making the total error look smaller than it really is.

  • @mandarchincholkar5955
    @mandarchincholkar5955 3 years ago +3

    Sir, we want a video on accuracy metrics.

  • @thepresistence5935
    @thepresistence5935 3 years ago +4

    We enjoyed the paid class for free, thank you Krish sir. Please create a playlist and upload the paid class videos; they are very useful for us 😂😂😂😂, and it's easy to understand during online classes. (I got a deeper understanding of optimization and loss functions, thanks!)

    • @moindalvs
      @moindalvs 2 years ago +5

      Please don't have such a mindset. This is called community service: if someone in the community is blessed, he should contribute and make it available for the less blessed. So simply put, if you are blessed, contribute to the channel. Thanks to Krish and everyone who contributes to the community. Take care, brother, and don't get me wrong; if I have said anything incorrect, I am open to all opinions and corrections. I am just another man.

    • @thepresistence5935
      @thepresistence5935 1 year ago +1

      @@moindalvs It's ok, no worries. :) I thought it was a paid class for some students.

  • @ATHARVA89
    @ATHARVA89 3 years ago +1

    I got superb clarity.

  • @ArunKumar-sg6jf
    @ArunKumar-sg6jf 3 years ago +2

    Sir, please make the next tutorial on object detection with TensorFlow.

  • @aakash2402
    @aakash2402 3 years ago +2

    When are we getting the next part of this? The development part?

  • @joeljoseph26
    @joeljoseph26 8 months ago

    One small correction: MSE is (1/n) Σ (y − ŷ)².
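
To make the correction concrete: the sum of squared errors (SSE) and the mean squared error (MSE) differ only by the 1/n factor. A quick numpy sketch (my own example, not from the video):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

errors = y_true - y_pred
sse = np.sum(errors ** 2)   # sum of squared errors
mse = np.mean(errors ** 2)  # divide by n -> mean squared error

print(sse)  # 3.5
print(mse)  # 0.875
```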

  • @koustavdutta5317
    @koustavdutta5317 3 years ago +2

    Krish, how did scientists come up with the cross-entropy losses, and why are they named so?

  • @aakash2402
    @aakash2402 3 years ago

    This is what I was waiting for!

  • @ritikkohad5045
    @ritikkohad5045 1 year ago +1

    At 15:09, instead of MSE it's actually SSE.

  • @shohelhossain1873
    @shohelhossain1873 3 years ago

    Please make a tutorial on the difference between model error and model risk.

  • @mdmynuddin1888
    @mdmynuddin1888 2 years ago +1

    If there are more than 40 layers, we use the Swish activation function.

  • @sahirshaikh6344
    @sahirshaikh6344 1 year ago

    Krish, can you please upload a video on how to create an environment?

  • @shilashm5691
    @shilashm5691 2 years ago

    A quadratic equation is non-linear algebra.

  • @shamshersingh9680
    @shamshersingh9680 3 years ago +1

    How does squaring the loss function penalise the model? I think squaring is performed to ensure that negative values of the loss do not cancel out positive values, giving a false indication that our loss is very small.

    • @AshishKumar-ox2ng
      @AshishKumar-ox2ng 3 years ago +3

      If the error between y and ŷ is 4, the loss will be 4 squared, so by squaring it gets penalised more; and if the error is less than 1, e.g. 0.4, its square is 0.16, so it is penalised less.
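
The reply's point can be checked directly: squaring amplifies errors larger than 1 and shrinks errors smaller than 1 (a toy illustration, not from the video):

```python
# The same absolute errors, before and after squaring:
small, large = 0.4, 4.0
print(small ** 2)  # ~0.16 -> penalised less than the raw error of 0.4
print(large ** 2)  # 16.0  -> penalised far more than the raw error of 4
```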

  • @CRTagadiya
    @CRTagadiya 2 years ago

    Could you please cover deep learning loss functions for class imbalance (focal loss, weighted cross-entropy)?

  • @utkarshshastri2216
    @utkarshshastri2216 3 years ago +1

    How is the MSE loss penalty different from regularisation?

  • @Joe-tk8cx
    @Joe-tk8cx 1 year ago +1

    Hi Krish, your videos are awesome. I am almost 1/4 done and have learned so much. I had one question: is DSA a prerequisite for this?

    • @sportsoctane
      @sportsoctane 1 year ago

      Yes, you need it while implementing algorithms, though the basics are enough.

  • @raghavagrawal6263
    @raghavagrawal6263 3 years ago

    Thank you

  • @anilbhallavi2942
    @anilbhallavi2942 5 months ago

    Can't we use numbers directly for the target, like 1, 2, 3, instead of one-hot encoding?
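
For what it's worth, most frameworks do accept integer targets directly; Keras, for example, provides `SparseCategoricalCrossentropy` alongside the one-hot `CategoricalCrossentropy`, and the two compute the same value. A minimal numpy sketch of that equivalence (my example, with made-up probabilities):

```python
import numpy as np

probs = np.array([0.1, 0.7, 0.2])  # softmax output for one sample
label = 1                          # integer class label
one_hot = np.eye(3)[label]         # [0., 1., 0.]

# Cross-entropy with a one-hot target...
loss_one_hot = -np.sum(one_hot * np.log(probs))
# ...equals simply -log of the labelled class's probability.
loss_sparse = -np.log(probs[label])

print(loss_one_hot, loss_sparse)
```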

  • @vinitamishra4701
    @vinitamishra4701 3 years ago

    Krish... can you explain probability as well?

  • @SuperRia33
    @SuperRia33 1 year ago

    I was asked in an interview why squared error and not mean cubed error, and despite having watched this video I couldn't recall it during the interview 😢😢
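
One way to remember the answer: an odd power keeps the sign of each error, so positive and negative errors cancel in the mean, while squaring does not (a toy check, not from the video):

```python
import numpy as np

errors = np.array([2.0, -2.0])  # two records, equal and opposite errors
print(np.mean(errors ** 3))     # 0.0 -> cubed errors cancel; loss looks perfect
print(np.mean(errors ** 2))     # 4.0 -> squared errors do not cancel
```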

  • @Rahul_Singh_Rajput_04
    @Rahul_Singh_Rajput_04 2 years ago

    Thank you, sir.

  • @dikshitlenka
    @dikshitlenka 3 years ago +1

    Why do we divide by 2 while calculating the loss function?

    • @sanjeevdutt5867
      @sanjeevdutt5867 3 years ago

      It's basically 1/n, where n is the number of samples in a batch.

  • @jitendrakumarsah8683
    @jitendrakumarsah8683 3 years ago

    Ok both👍

  • @shreyasb.s3819
    @shreyasb.s3819 3 years ago

    Good topic

  • @jitendrakumarsah8683
    @jitendrakumarsah8683 3 years ago

    Good one sir

  • @kishnakumar8602
    @kishnakumar8602 3 years ago

    Yes

  • @rodriguekalach5502
    @rodriguekalach5502 2 years ago

    Mr. Krish, where can I find the notes for this lecture?

  • @sonalijain3497
    @sonalijain3497 3 years ago

    Great explanation, but for me it's very confusing.

  • @IrfanKhan-oh7kb
    @IrfanKhan-oh7kb 2 years ago +2

    Dear Krish, can we define a loss function with constraints, as in optimization algorithms?

  • @adarshjamwal3448
    @adarshjamwal3448 3 years ago

    Great

  • @_ARCATEC_
    @_ARCATEC_ 3 years ago +1

    Hello 👋🤓💓

  • @andrewbeef8758
    @andrewbeef8758 3 years ago

    Sir, I have a doubt: I think we should select the correct class's probability rather than the highest probability after softmax in ŷ. Please correct me if I am wrong.
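
The commenter is right: categorical cross-entropy is computed from the predicted probability of the true class, −log(p_true), not from the highest softmax output. A small numpy illustration with made-up numbers:

```python
import numpy as np

probs = np.array([0.6, 0.3, 0.1])  # softmax output; the argmax is class 0
true_class = 1                     # but the true label is class 1

# The loss uses the TRUE class's probability (0.3), not max(probs) (0.6).
loss = -np.log(probs[true_class])
print(loss)  # ~1.204
```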

  • @jitendrakumarsah8683
    @jitendrakumarsah8683 3 years ago

    Yes👍

  • @satviktejas2995
    @satviktejas2995 1 year ago

    How can we also get into the meeting?

  • @kashishrajput4934
    @kashishrajput4934 2 years ago

    You have Steam; do you play CS:GO or anything else? 🔥🔥

  • @sowmyakavali2670
    @sowmyakavali2670 3 years ago

    Why is the 1/2 in front of the loss function?

    • @sowmyakavali2670
      @sowmyakavali2670 3 years ago

      Answer I found, correct me if wrong:
      it makes the math easier to handle. Including the half or not doesn't actually matter, since minimization is unaffected by constant factors.
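
Concretely, the ½ just cancels the 2 that the power rule brings down, and scaling a loss by a positive constant changes neither the minimizer nor the gradient's direction (a quick sketch with made-up values):

```python
# d/dŷ (y - ŷ)^2   = -2(y - ŷ)
# d/dŷ ½(y - ŷ)^2  = -(y - ŷ)   <- the ½ cancels the 2 from the power rule
y, y_hat = 5.0, 3.0

grad_without_half = 2 * (y_hat - y)
grad_with_half = y_hat - y

print(grad_without_half, grad_with_half)  # -4.0 -2.0: same direction, scaled
```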

  • @subhadipghosh8194
    @subhadipghosh8194 3 years ago

    Very informative lesson, Krish!! But at 48:20, shouldn't there be weights connected to the output layer before applying the softmax function?

  • @cynthiac2174
    @cynthiac2174 3 years ago

    Krish, I wanted to buy your materials but couldn't with my debit card. Kindly let me know an alternative.

    • @krishnaik06
      @krishnaik06  3 years ago

      Try to join the membership using GPay; search on YouTube for how to do it.

    • @cynthiac2174
      @cynthiac2174 3 years ago

      Thank you, let me try the same.

  • @ravitanwar9537
    @ravitanwar9537 3 years ago

    wow

  • @vatsalshingala3225
    @vatsalshingala3225 1 year ago

    ❤❤❤❤❤❤❤❤❤❤