Intuitively Understanding the Cross Entropy Loss

  • Published 3 Jul 2021
  • This video discusses the Cross Entropy Loss and provides an intuitive interpretation of the loss function through a simple classification setup. The video will draw the connections between the KL divergence and the cross entropy loss, and touch on some practical considerations. (A brief code sketch of these quantities follows below.)
    Twitter: / adianliusie
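
    A minimal Python sketch of the two quantities the video connects (the distributions p_true and q_pred below are made-up examples, not taken from the video):

        import math

        def cross_entropy(p_true, q_pred):
            # H(p, q) = -sum_i p_i * log(q_i); terms with p_i = 0 contribute nothing
            return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

        def entropy(p_true):
            # H(p) = -sum_i p_i * log(p_i)
            return -sum(p * math.log(p) for p in p_true if p > 0)

        def kl_divergence(p_true, q_pred):
            # KL(p || q) = H(p, q) - H(p)
            return cross_entropy(p_true, q_pred) - entropy(p_true)

        p_true = [1.0, 0.0, 0.0]   # one-hot label for a 3-class problem
        q_pred = [0.7, 0.2, 0.1]   # model's softmax output
        print(cross_entropy(p_true, q_pred))   # ~0.357, i.e. -log(0.7)
        print(kl_divergence(p_true, q_pred))   # identical here, since H(p) = 0 for a one-hot target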

COMMENTS • 61

  • @leoxu9673
    @leoxu9673 2 years ago +23

    This is the only video that's made the connection between KL Divergence and Cross Entropy Loss intuitive for me. Thank you so much!

  • @jasonpittman7853
    @jasonpittman7853 1 year ago +9

    This subject has confused me greatly for nearly a year now, your video and the kl-divergence video made it clear as day. You taught it so well I feel like a toddler could understand this subject.

  • @nirmalyamisra4317
    @nirmalyamisra4317 3 years ago +13

    Great video. It is always good to dive into the math to understand why we use what we use. Loved it!

  • @bo3053
    @bo3053 1 year ago

    Super useful and insightful video which easily connects KL-divergence and Cross Entropy Loss. Brilliant! Thank you!

  • @TheVDicer
    @TheVDicer 2 years ago

    Fantastic video and explanation. I just learned about the KL divergence and the cross entropy loss finally makes sense to me.

  • @alirezamogharabi8733
    @alirezamogharabi8733 1 year ago

    The best explanation I have ever seen about Cross Entropy Loss. Thank you so much 💖

  • @ananthakrishnank3208
    @ananthakrishnank3208 5 months ago

    Excellent expositions on KL divergence and Cross Entropy loss within 15 mins! Really intuitive. Thanks for sharing.

  • @hansenmarc
    @hansenmarc 2 years ago

    Great explanation! I’m enjoying all of your “intuitively understanding” videos.

  • @kvnptl4400
    @kvnptl4400 9 months ago +1

    This one I would say is a very nice explanation of Cross Entropy Loss.

  • @shubhamomprakashpatil1939
    @shubhamomprakashpatil1939 1 year ago

    This is an amazing explanatory video on Cross-Entropy loss. Thank you

  • @viktorhansen3331
    @viktorhansen3331 1 year ago

    I have no background in ML, and this plus your other video completely explained everything I needed to know. Thanks!

  • @matiassandacz9145
    @matiassandacz9145 3 years ago +2

    This video was amazing. Very clear! Please post more on ML / Probability topics. :D Cheers from Argentina.

  • @whowto6136
    @whowto6136 2 years ago

    Thanks a lot! Really helps me understand Cross Entropy, Softmax and the relation between them.

  • @hasankaynak2253
    @hasankaynak2253 2 years ago

    The clearest explanation. Thank you.

  • @yfd487
    @yfd487 1 year ago +2

    I love this video!! So clear and informative!

  • @Micha-ku2hu
    @Micha-ku2hu 1 month ago

    What a great and simple explanation of the topic! Great work 👏

  • @allanchan339
    @allanchan339 2 years ago

    It is an excellent explanation that makes use of the previous video on KL divergence.

  • @francoruggeri5850
    @francoruggeri5850 1 year ago +1

    Great and clear explanation!

  • @LiHongxuan-ee7qs
    @LiHongxuan-ee7qs 3 months ago

    Such a clear explanation! Thanks!

  • @ssshukla26
    @ssshukla26 2 years ago +6

    And no one told me that (minimizing KL is almost equivalent to minimizing the cross-entropy loss) in 2 years of studying at a university... Oh man... thank you so much...

    • @DHAiRYA2801
      @DHAiRYA2801 1 year ago +3

      KL = Cross Entropy - Entropy.
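
      A quick numeric check of that identity (the two distributions are arbitrary, illustrative values):

          import math

          p = [0.6, 0.3, 0.1]   # "true" distribution (illustrative)
          q = [0.4, 0.4, 0.2]   # predicted distribution (illustrative)

          ce  = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
          h_p = -sum(pi * math.log(pi) for pi in p)
          kl  =  sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

          print(math.isclose(kl, ce - h_p))   # True: KL(p || q) = H(p, q) - H(p)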

  • @lebronjames193
    @lebronjames193 2 years ago

    Really superb video, you should record more!

  • @kevon217
    @kevon217 2 years ago

    Simple and helpful!

  • @yassine20909
    @yassine20909 1 year ago

    Nice explanation, thank you.

  • @dirtyharry7280
    @dirtyharry7280 1 year ago

    This is so good, thx so much

  • @mikejason3822
    @mikejason3822 2 years ago

    Great video!

  • @blakeedwards3582
    @blakeedwards3582 2 years ago

    Thank you. You should have more subscribers.

  • @shchen16
    @shchen16 1 year ago

    Thanks for this video

  • @HaykTarkhanyan
    @HaykTarkhanyan 8 days ago

    great video, thank you!

  • @mixuaquela123
    @mixuaquela123 1 year ago +3

    Might be a stupid question but where do we get the "true" class distribution?

    • @patrickadu-amankwah1660
      @patrickadu-amankwah1660 1 year ago

      Real world data bro, from annotated samples.

    • @user-gk3ue1he4d
      @user-gk3ue1he4d 1 year ago

      Humans are the criterion for everything in so-called AI.

    • @AnonymousIguana
      @AnonymousIguana 2 months ago

      In a classification task, the true distribution has a value of 1 for the correct class and a value of 0 for the other classes. So that's it, that's the true distribution. And we know it, if the data is labelled correctly. The distribution in a classification task is called a probability mass function, btw.
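
      To make that concrete, a small sketch (the class index and probabilities are invented for illustration) showing that with a one-hot target the full cross-entropy sum collapses to minus the log-probability of the correct class:

          import math

          q_pred = [0.1, 0.7, 0.2]   # illustrative softmax output over 3 classes
          p_true = [0.0, 1.0, 0.0]   # one-hot target: class 1 is the correct class

          full     = -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)
          shortcut = -math.log(q_pred[1])   # only the correct-class term survives

          print(full, shortcut)   # both ~0.357: the loss only "sees" the probability of the correct class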

  • @omkarghadge8432
    @omkarghadge8432 3 years ago +1

    Great! keep it up.

  • @sushilkhadka8069
    @sushilkhadka8069 1 year ago

    This is so neat.

  • @Darkev77
    @Darkev77 2 years ago +3

    Brilliant and simple! Could you make a video about soft/smooth labels instead of hard ones and how that makes it better (math behind it)?

    • @SA-by2xg
      @SA-by2xg 1 year ago

      Intuitively, information is lost whenever a continuous variable is discretized. Said another way, a class probability of 0.51 is very different from one of 0.99, but a hard label treats them the same. Downstream, soft targets allow for more precise gradient updates.
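
      A hedged sketch of one common way to build such soft targets, label smoothing (the smoothing value eps=0.1 and the probabilities are illustrative choices, not from the video):

          import math

          def smooth_labels(correct_class, num_classes, eps=0.1):
              # Move eps of the probability mass off the correct class and
              # spread it uniformly over all classes.
              soft = [eps / num_classes] * num_classes
              soft[correct_class] += 1.0 - eps
              return soft

          def cross_entropy(p_true, q_pred):
              return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

          q_pred = [0.05, 0.85, 0.10]   # illustrative prediction
          hard   = [0.0, 1.0, 0.0]
          soft   = smooth_labels(correct_class=1, num_classes=3)

          print(cross_entropy(hard, q_pred))   # ~0.163: only the correct class matters
          print(cross_entropy(soft, q_pred))   # ~0.328: the wrong-class probabilities now matter too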

  • @kutilkol
    @kutilkol 2 years ago

    superb!

  • @shahulrahman2516
    @shahulrahman2516 1 year ago

    Thank you

  • @starriet
    @starriet 2 years ago

    essence, short, great.

  • @yegounkim1840
    @yegounkim1840 1 year ago

    You the best!

  • @vandana2410
    @vandana2410 2 years ago

    Thanks for the great video. 1 question though. What happens if we swap the true and predicted probabilities in the formula?

  • @jiwoni523
    @jiwoni523 5 months ago

    Make more videos please, you are awesome.

  • @user-bi2jm1cn1h
    @user-bi2jm1cn1h 19 hours ago

    How does the use of soft label distributions, instead of one-hot encoding hard labels, impact the choice of loss function in training models? Specifically, can cross-entropy loss still be effectively utilized, or should Kullback-Leibler (KL) divergence be preferred?
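
    One way to see the relationship the question asks about, as an illustrative sketch (the soft target and prediction below are made up): cross entropy and KL divergence differ only by the entropy of the target, which does not depend on the model's parameters, so minimizing either objective pushes the prediction toward the same place.

        import math

        p_soft = [0.05, 0.90, 0.05]   # illustrative soft target
        q_pred = [0.10, 0.80, 0.10]   # illustrative model prediction

        ce  = -sum(p * math.log(q) for p, q in zip(p_soft, q_pred))
        kl  =  sum(p * math.log(p / q) for p, q in zip(p_soft, q_pred))
        h_p = -sum(p * math.log(p) for p in p_soft)

        # H(p, q) = KL(p || q) + H(p), with H(p) constant w.r.t. the model
        print(math.isclose(ce, kl + h_p))   # True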

  • @thinkbigwithai
    @thinkbigwithai 9 months ago

    At 3:25,
    why don't we model it as argmax Σ P* log(P) (without the minus sign)?
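
    A small sketch of why the sign is only a convention (the candidate predictions are invented): maximizing the sum picks the same model as minimizing its negation; the minus sign just turns the objective into a loss to be minimized.

        import math

        p_true = [1.0, 0.0, 0.0]   # illustrative one-hot target
        candidates = {             # a few hypothetical model outputs
            "a": [0.5, 0.3, 0.2],
            "b": [0.8, 0.1, 0.1],
            "c": [0.2, 0.5, 0.3],
        }

        def objective(q):
            # sum_i P*_i * log(P_i)
            return sum(p * math.log(qi) for p, qi in zip(p_true, q) if p > 0)

        best_by_max = max(candidates, key=lambda k: objective(candidates[k]))
        best_by_min = min(candidates, key=lambda k: -objective(candidates[k]))
        print(best_by_max, best_by_min)   # "b" both times: argmax of the sum == argmin of its negation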

  • @MrPejotah
    @MrPejotah 1 year ago

    Great video, but only really clear if you know what the KL divergence is. I'd hammer that point to the viewer.

  • @quantumjun
    @quantumjun 2 years ago

    Will the quantity at 4:12 be negative if you use information entropy or KL divergence? Are they both > 0?

    • @yassine20909
      @yassine20909 1 year ago

      As explained in the video, the KL divergence is a measure of "distance", so it has to be > 0. There are other prerequisites for a function to be a measure of distance, like symmetry, and a couple of other things I forget about.
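
      A quick check of those properties (the two distributions are arbitrary examples): KL divergence is always non-negative, but it is not symmetric, which is why it is called a divergence rather than a true distance.

          import math

          def kl(p, q):
              return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

          p = [0.9, 0.1]   # illustrative distributions
          q = [0.5, 0.5]

          print(kl(p, q) >= 0, kl(q, p) >= 0)   # True True: non-negative in both directions
          print(kl(p, q), kl(q, p))             # ~0.37 vs ~0.51: not symmetric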

  • @sukursukur3617
    @sukursukur3617 2 years ago

    Why don't we just use the mean of (p-q)^2 instead of p*log(p/q) to measure the dissimilarity of pdfs?
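
    One way to see the difference, as an illustrative sketch (the distributions are made up): the squared-difference measure stays bounded no matter how badly the model underestimates an event that actually happens, while the log ratio in the KL divergence blows up, which is usually the behaviour we want from a classification loss.

        import math

        def mse(p, q):
            return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) / len(p)

        def kl(p, q):
            return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

        p = [0.5, 0.5]

        q_mild   = [0.4, 0.6]       # slightly off
        q_severe = [0.999, 0.001]   # puts almost no mass on an event that happens half the time

        print(mse(p, q_mild),   kl(p, q_mild))     # both small (~0.01 and ~0.02)
        print(mse(p, q_severe), kl(p, q_severe))   # MSE stays ~0.25, KL jumps to ~2.8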

  • @genkidama7385
    @genkidama7385 1 month ago

    distribution

  • @madarahuchiha1133
    @madarahuchiha1133 3 months ago

    What is the true class distribution?

    • @elenagolovach384
      @elenagolovach384 1 month ago

      The frequency of occurrence of a particular class depends on the characteristics of the objects.

  • @tanvirtanvir6435
    @tanvirtanvir6435 1 year ago

    0:08
    3:30
    P* is the true probability

  • @pradiptahafid
    @pradiptahafid 2 years ago

    3:24. The aha moment when you realize what the purpose of the negative sign in cross entropy is.

    • @pradiptahafid
      @pradiptahafid 2 years ago

      4:24. Do you know how golden that statement is?

  • @zhaobryan4441
    @zhaobryan4441 2 years ago

    Hello, handsome, could you share the clear slides?

  • @zingg7203
    @zingg7203 2 years ago

    Volume is low.

  • @ajitzote6103
    @ajitzote6103 4 months ago +1

    Not really a great explanation; so many terms were thrown in. That's not a good way to explain something.

  • @commonsense126
    @commonsense126 1 year ago +1

    Speak slower please

    • @Oliver-2103
      @Oliver-2103 9 months ago +1

      Your name is commonsense and you still don't use your common sense lol. In every YouTube application, there is an option to slow a video down to 75%, 50%, or even 25% speed. If you have trouble understanding his language, you should just select the 0.75 speed option.

    • @commonsense126
      @commonsense126 9 months ago +1

      @@Oliver-2103 Visually impaired people have problems seeing some of the adjustments one can make on a phone, even when they know that they exist.