Session On Different Types Of Loss Functions In Deep Learning

  • Published 10 Dec 2024

COMMENTS • 66

  • @ameermohamedr4982
    @ameermohamedr4982 3 years ago +164

    Rest of the world: We have Andrew Ng to teach AI
    India: We have Krish Naik 🔥

    • @arjyabasu1311
      @arjyabasu1311 3 years ago +1

      100% right

    • @thepresistence5935
      @thepresistence5935 3 years ago +3

      It's true, he is one of the boons for our nation.

    • @RTC1655
      @RTC1655 3 years ago +5

      As an educator Krish Naik >> Andrew Ng

    • @thepresistence5935
      @thepresistence5935 3 years ago +4

      @@RTC1655 Haha dude, both are good, please don't compare. But I love Krish.

    • @gamer_acjarafat6422
      @gamer_acjarafat6422 1 year ago +1

      @@thepresistence5935 But honestly, as a beginner it's really hard to understand Andrew Ng; I can grasp topics from Krish very easily (I'm in 9th class).

  • @maheshmec1
    @maheshmec1 1 year ago +3

    Krish, there is a correction @15:49: you referred to the squared error loss as mean squared error loss (only when we divide by the number of samples does it become MSE). You have corrected it at 27:44, thanks!
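
    For readers: a minimal NumPy sketch of the distinction this comment points out (the values are illustrative, not from the video):

        import numpy as np

        y_true = np.array([3.0, 5.0, 2.0])
        y_pred = np.array([2.5, 5.5, 1.0])

        errors = y_true - y_pred
        sse = np.sum(errors ** 2)    # squared error loss (sum): 1.5
        mse = np.mean(errors ** 2)   # divide by the sample count n: 0.5

        print(sse, mse)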

  • @spacecoreexplorer320
    @spacecoreexplorer320 6 months ago

    One of the best teachers in India. He makes the concepts simple and clear. Great teaching skills.

  • @raginibhayana8305
    @raginibhayana8305 2 years ago +1

    For some reason I have started to binge-watch these.

  • @rushikeshpalnitkar4019
    @rushikeshpalnitkar4019 1 year ago +2

    @18:57 Sir, I think the error is squared not only to penalise large errors but also to account for the fact that, without squaring, a positive error on one record and a negative error on another would cancel when added, making the total error appear smaller than it actually is.
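
    For readers: a small sketch of the cancellation this comment describes (illustrative numbers, not from the video):

        import numpy as np

        errors = np.array([2.0, -2.0])   # one positive, one negative error

        raw_sum = np.sum(errors)         # 0.0 -- falsely suggests no error
        squared = np.sum(errors ** 2)    # 8.0 -- squaring keeps both visible

        print(raw_sum, squared)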

  • @IdealVijay-
    @IdealVijay- 3 years ago +10

    With the 1/m factor it should be mean squared error, where m would be the batch size. Isn't it?

  • @daniellabrito4325
    @daniellabrito4325 3 years ago +2

    You explained the differences between TensorFlow versions so nicely. Thanks. They ask this in interviews too.

  • @praveenkuthuru7439
    @praveenkuthuru7439 4 months ago

    Amazing tutorial....excellent work....thanks for the content

  • @ATHARVA89
    @ATHARVA89 3 years ago +1

    Superb clarity I got.

  • @thepresistence5935
    @thepresistence5935 3 years ago +4

    We enjoyed the paid class for free, thank you Krish sir. Please create a playlist and upload the paid class videos; it's very useful for us 😂😂😂😂 and it's easy to understand during online classes. (I got a deeper understanding of optimisation and loss functions, thanks!)

    • @moindalvs
      @moindalvs 2 years ago +5

      Please don't have such a mindset. This is called community service: if someone in the community is blessed, he should contribute and make it available for less blessed people. So, simply put, if you are blessed, contribute to the channel. Thanks to Krish and everyone who contributes for the community. Take care, brother, and don't get me wrong; if I have said anything incorrect, I am open to all opinions and corrections. I am just another man.

    • @thepresistence5935
      @thepresistence5935 1 year ago +1

      @@moindalvs It's ok, no worries. :) I thought it was a paid class for some students.

  • @ritikkohad5045
    @ritikkohad5045 1 year ago +1

    At 15:09, instead of MSE it's actually SSE.

  • @mandarchincholkar5955
    @mandarchincholkar5955 3 years ago +3

    Sir, we want a video on accuracy metrics.

  • @aakash2402
    @aakash2402 3 years ago +2

    When are we getting the next part of this? The development part?

  • @joeljoseph26
    @joeljoseph26 11 months ago

    One small correction: MSE = (1/n) Σ (y − ŷ)².

  • @koustavdutta5317
    @koustavdutta5317 3 years ago +2

    Krish, how did scientists come up with cross-entropy losses, and why are they named so?
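
    For reference, a minimal sketch of categorical cross-entropy as it is usually defined (values are illustrative):

        import numpy as np

        y_true = np.array([0.0, 1.0, 0.0])   # one-hot target, class 1
        y_pred = np.array([0.2, 0.7, 0.1])   # softmax output

        # CE = -sum(y_true * log(y_pred)); only the true class term survives
        ce = -np.sum(y_true * np.log(y_pred))   # -log(0.7) ~ 0.357

        print(ce)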

  • @ArunKumar-sg6jf
    @ArunKumar-sg6jf 3 years ago +2

    Sir, please make the next tutorial on object detection with TensorFlow.

  • @mdmynuddin1888
    @mdmynuddin1888 2 years ago +1

    If there are more than 40 layers, we use the swish activation function.
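
    For readers: swish, as usually defined, is x * sigmoid(x); a one-line sketch (the 40-layer threshold is the commenter's claim, not verified here):

        import numpy as np

        def swish(x):
            # swish(x) = x * sigmoid(x) = x / (1 + exp(-x))
            return x / (1.0 + np.exp(-x))

        print(swish(np.array([-2.0, 0.0, 2.0])))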

  • @shamshersingh9680
    @shamshersingh9680 3 years ago +1

    How does squaring the loss function penalise the model? I think squaring is performed to ensure that negative values of the error do not cancel out positive values, which would give a false indication that our loss is very small.

    • @AshishKumar-ox2ng
      @AshishKumar-ox2ng 3 years ago +3

      If the error between y and ŷ is 4, then the loss will be the square of 4, so by squaring it gets penalised more; if the error is less than 1, e.g. 0.4, its square is 0.16, so it is penalised less.
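
      A quick sketch of the quadratic penalty described above (illustrative numbers, not from the video):

          import numpy as np

          errors = np.array([0.4, 1.0, 4.0])
          losses = errors ** 2               # [0.16, 1.0, 16.0]

          # squaring amplifies errors above 1 and shrinks errors below 1
          print(losses)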

  • @aakash2402
    @aakash2402 3 years ago

    This is what I was waiting for!

  • @utkarshshastri2216
    @utkarshshastri2216 3 years ago +1

    How is the MSE loss penalty different from regularisation?
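
    For readers: a minimal Keras sketch of the distinction (the layer sizes and the 0.01 factor are illustrative assumptions): the MSE loss penalises prediction errors, while a regulariser adds a separate penalty on the weights themselves.

        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(
                16, activation="relu",
                # L2 regularisation: adds 0.01 * sum(w**2) to the loss
                kernel_regularizer=tf.keras.regularizers.l2(0.01)),
            tf.keras.layers.Dense(1),
        ])

        # MSE penalises errors; the regulariser penalises large weights
        model.compile(optimizer="adam", loss="mse")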

  • @anilbhallavi2942
    @anilbhallavi2942 8 months ago

    Can't we use numbers directly for the target, like 1, 2, 3, instead of one-hot encoding?
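
    For readers: in Keras, integer labels are handled by sparse categorical cross-entropy, so one-hot encoding is not strictly required (labels are 0-indexed; the model shape is an illustrative assumption):

        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
        ])

        # integer class labels (0, 1, 2) instead of one-hot vectors
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])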

  • @sahirshaikh6344
    @sahirshaikh6344 1 year ago

    Krish, can you please upload a video on how to create an environment?

  • @shohelhossain1873
    @shohelhossain1873 3 years ago

    Please make a tutorial on the difference between model error and model risk.

  • @CRTagadiya
    @CRTagadiya 2 years ago

    Could you please cover deep learning loss functions for class imbalance (focal loss, weighted cross-entropy)?
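
    For readers: one common way to weight the cross-entropy for imbalance in Keras is class weights at fit time; a minimal sketch (the weights, shapes, and data are illustrative assumptions):

        import numpy as np
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")

        X = np.random.rand(100, 4)
        y = (np.random.rand(100) < 0.1).astype("float32")  # rare positive class

        # errors on the rare class count 9x more in the loss
        model.fit(X, y, class_weight={0: 1.0, 1: 9.0}, epochs=1, verbose=0)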

  • @rodriguekalach5502
    @rodriguekalach5502 2 years ago

    Mr. Krish, where can I find the notes for this lecture?

  • @Joe-tk8cx
    @Joe-tk8cx 1 year ago +1

    Hi Krish, your videos are awesome; I am almost 1/4 done and have learned so much. I had one question: is DSA a prerequisite for this?

    • @sportsoctane
      @sportsoctane 1 year ago

      Yes, you need it while implementing algos, but the basics are enough.

  • @shilashm5691
    @shilashm5691 2 years ago

    A quadratic equation is non-linear.

  • @satviktejas2995
    @satviktejas2995 2 years ago

    How can we also get into the meeting?

  • @dikshitlenka
    @dikshitlenka 3 years ago +1

    Why do we divide by 2 while calculating the loss function?

    • @sanjeevdutt5867
      @sanjeevdutt5867 3 years ago

      It's basically 1/n, where n is the number of samples in a batch.

  • @vinitamishra4701
    @vinitamishra4701 3 years ago

    Krish... can you explain probability as well?

  • @SuperRia33
    @SuperRia33 1 year ago +1

    I was asked in an interview why squared error and not mean cubed error, and despite watching this video I couldn't recollect it during the interview 😢😢

  • @IrfanKhan-oh7kb
    @IrfanKhan-oh7kb 3 years ago +2

    Dear Krish, can we define a loss function with constraints, like in optimisation algorithms?

  • @raghavagrawal6263
    @raghavagrawal6263 3 years ago

    Thank you

  • @andrewbeef8758
    @andrewbeef8758 3 years ago

    Sir, I have a doubt: I think we should select the correct class's probability rather than the highest probability after softmax in ŷ... please correct me if I am wrong.
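
    For readers: standard cross-entropy does use the true class's probability, as the comment suggests; a tiny sketch (values are illustrative):

        import numpy as np

        probs = np.array([0.5, 0.3, 0.2])   # softmax output
        true_class = 1                      # correct label, not the argmax

        # the loss reads the TRUE class's probability, not the maximum one
        loss = -np.log(probs[true_class])   # -log(0.3) ~ 1.204

        print(loss)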

  • @jitendrakumarsah8683
    @jitendrakumarsah8683 3 years ago

    Good one sir

  • @sowmyakavali2670
    @sowmyakavali2670 3 years ago

    Why is a 1/2 put in front of the loss function?

    • @sowmyakavali2670
      @sowmyakavali2670 3 years ago

      The answer I found, correct me if wrong:
      it makes the math easier to handle. Multiplying by a half or not doesn't actually matter, since minimisation is unaffected by constant factors.
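
      For readers: a short worked step showing why the 1/2 is convenient (standard calculus, not from the video):

          L = \frac{1}{2} (y - \hat{y})^2
          \frac{\partial L}{\partial \hat{y}} = -(y - \hat{y})
          % the 2 from the power rule cancels the 1/2, leaving a clean gradient;
          % scaling the loss by a positive constant does not move its minimum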

  • @_ARCATEC_
    @_ARCATEC_ 3 years ago +1

    Hello 👋🤓💓

  • @Rahul_Singh_Rajput_04
    @Rahul_Singh_Rajput_04 3 years ago

    Thank you sir.....

  • @kashishrajput4934
    @kashishrajput4934 3 years ago

    You have Steam; do you play CS:GO or anything else?? 🔥🔥

  • @jitendrakumarsah8683
    @jitendrakumarsah8683 3 years ago

    Ok both👍

  • @sonalijain3497
    @sonalijain3497 3 years ago

    Great explanation, but for me it's very confusing.

  • @cynthiac2174
    @cynthiac2174 3 years ago

    Krish, I wanted to buy your materials... I couldn't with my debit card; kindly let me know an alternative.

    • @krishnaik06
      @krishnaik06  3 years ago

      Try to join the membership using GPay... search on YouTube how to do it.

    • @cynthiac2174
      @cynthiac2174 3 years ago

      Thank you, let me try the same.

  • @shreyasb.s3819
    @shreyasb.s3819 3 years ago

    Good topic

  • @subhadipghosh8194
    @subhadipghosh8194 3 years ago

    Very informative lesson, Krish!! But at 48:20 shouldn't there be weights connected to the output layer before applying the softmax function?

  • @jitendrakumarsah8683
    @jitendrakumarsah8683 3 years ago

    Yes👍

  • @kishnakumar8602
    @kishnakumar8602 3 years ago

    Yes

  • @adarshjamwal3448
    @adarshjamwal3448 3 years ago

    Great

  • @vatsalshingala3225
    @vatsalshingala3225 1 year ago

    ❤❤❤❤❤❤❤❤❤❤

  • @ravitanwar9537
    @ravitanwar9537 3 years ago

    wow