The Softmax: Data Science Basics

  • Published 19 Jun 2024
  • All about the SOFTMAX function in machine learning!

COMMENTS • 104

  • @birajkoirala5383 · 4 years ago +36

    tutorials with boards now... nice one dude... underrated channel, I must say!

  • @wennie2939 · 3 years ago +31

    I really love how you progress step by step instead of directly throwing out the formulas! The best video on YouTube on the Softmax! +1

  • @DFCinBE · 7 months ago +2

    For a non-mathematician like myself, this was crystal clear, thanks very much!

  • @marcusakiti7608 · 1 year ago +1

    Awesome stuff. Searched this video because I was trying to figure out why the scores/sum scores approach wouldn't work and you addressed it first thing. Great job.

  • @karimamakhlouf2411 · 1 year ago

    An excellent and straightforward way of explaining. So helpful! Thanks a lot :)

  • @ManpreetKaur-ve5gw · 3 years ago

    The only video I needed to understand the SOFTMAX function. Kudos to you!!

  • @iraklisalia9102 · 3 years ago +3

    What a great explanation! Thank you very much.
    The "why do we choose this formula versus that formula" explanation truly makes everything clear. Thank you once again :)

  • @YAlsadah · 2 years ago

    What an amazing, simple explanation. Thank you!

  • @zvithaler9443 · 2 years ago +1

    Great explanations, your addition of the story to the objects really helps in understanding the material

  • @debapriyabanerjee8486 · 3 years ago +9

    This is excellent! I saw your video on the sigmoid function and both of these explain the why behind their usage.

  • @omniscienceisdead8837 · 2 years ago

    The person who is going to be responsible for me kick-starting my ML journey with a good head on my shoulders: thank you ritvik, very enlightening

  • @grzegorzchodak · 11 months ago

    Great explanation! Easy and helpful!

  • @MORE2Clay · 2 years ago

    The introduction to softmax, which explains why softmax exists, helped me a lot in understanding it

  • @serdarufukkara7109 · 3 years ago

    thank you very much, you are very good at teaching, very well prepared!

  • @zafarnasim9267 · 2 years ago

    Woooow, really liked your teaching approach, awesome!

  • @masster_yoda · 3 months ago

    Great explanation, thank you!

  • @debaratiray2482 · 2 years ago

    Awesome explanation.... thanks !!!

  • @shiyuyuan7958 · 2 years ago

    Very clearly explained, thank you, subscribed

  • @Nova-Rift · 3 years ago

    You're amazing. Great teacher

  • @nehathakur8221 · 3 years ago

    Thanks for such an intuitive explanation, Sir :)

  • @fatemehsefishahpar3626 · 3 years ago

    How great was this video! Thank you

  • @kausshikmanojkumar2855 · 9 months ago

    Beautiful!

  • @ridhampatoliya4680 · 3 years ago

    Very clearly explained!

  • @diegosantosuosso806 · 8 months ago

    Thanks Professor!

  • @kausshikmanojkumar2855 · 9 months ago

    Absolutely beautiful.

  • @eliaslara6964 · 3 years ago +1

    Dude! I really love you.

  • @somteezle1348 · 3 years ago +1

    Wow...teaching from first principles...I love that!

  • @kavitmehta9143 · 3 years ago

    Awesome Brother!

  • @ekaterinakorneeva4792 · 8 months ago

    Thank you!!! This is so much clearer and more direct than two 20-minute videos on Softmax from "Machine Learning with Python: From Linear Models to Deep Learning" from MIT! To be fair, the latter explains multiple perspectives and is also good in its own way. But you deliver just the most important first bit: what softmax is and what all these terms are about.

  • @rizkysyahputra98 · 3 years ago +1

    Clearest explanation about softmax.. thank you

  • @zacharydan7236 · 3 years ago

    Solid video, subscribed!

  • @jackshaak · 3 years ago +1

    Just great! Thanks, man.

  • @user-mf3sm2ds7j · 3 years ago

    Thank you so much! You made it very clear :)

  • @dragolov · 2 years ago

    Bravo! + Thank you very much!

  • @okeuwechue9238 · 3 months ago

    Thanks. Very clear explanation of the rationale for employing exponential functions instead of linear functions

  • @vamshi755 · 3 years ago

    Now I know why a lot of your videos answer the WHY question. You give importance to the application, not the theory alone. The concept is very clear. Thanks

  • @peterniederl3662 · 3 years ago

    Very helpful!!! Thx!

  • @wduandy · 3 years ago

    Amazing!

  • @cobertizo · 3 years ago

    I came for the good-looking teacher but stayed for the really clear and good explanation.

  • @michael88704 · 1 year ago

    I like the hierarchy implied by the indices on the S vector ;)

  • @aFancyFatFish · 3 years ago

    Thank you very much, clear and helpful to me as a beginner 😗

  • @MTech-DataScience · 1 year ago

    Thank you so much. I now understand why exp is used instead of a simple calculation. 😊

  • @karimomrane7556 · 1 year ago

    I wish you were my teacher haha great explanation :D Thank you so much ♥

  • @salmans1224 · 3 years ago +1

    Awesome man... your videos make me less anxious about math.

  • @zahra_az · 2 years ago

    that was so sweet and inspiring

  • @brendanamuh5683 · 1 year ago

    thank you so much !!

  • @oligneflix6798 · 2 years ago

    bro you're a legend

  • @hezhu482 · 4 years ago

    thank you!

  • @igoroliveira5463 · 3 years ago

    Could you do a video about the maxout unit? I read about it in Goodfellow's Deep Learning book, but I did not grasp the intuition behind it clearly.

  • @dikshanegi1028 · 7 months ago

    Keep going buddy

  • @ZimoNitrome · 3 years ago

    good video

  • @azinkatiraee6684 · 1 year ago

    a clear explanation!

  • @anishbabus576 · 3 years ago

    Thank you

  • @shreyasshetty6850 · 3 years ago

    Holy shit! That makes so much sense

  • @markomarkus8560 · 3 years ago

    Nice video

  • @ayeddie6788 · 2 years ago

    PRETTY GOOD

  • @yingchen8028 · 3 years ago

    more people should watch this

  • @tsibulsky4900 · 1 year ago

    Thanks 👍

  • @yuchenzhao6411 · 4 years ago

    Very good video

  • @sukursukur3617 · 3 years ago +1

    3:18 very good teacher

  • @korwi7373 · 2 years ago

    thanks

  • @johnginos6520 · 4 years ago

    Do you do one-on-one tutoring?

  • @evagao9701 · 4 years ago

    hi there, what is the meaning of the square summation?

  • @MLDawn · 3 years ago +1

    please note that the outputs of Softmax are NOT probabilities but are interpreted as probabilities. This is an important distinction! The same goes for the Sigmoid function. Thanks
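
    A quick numeric check of that point (my sketch, not from the video): the outputs are nonnegative and sum to 1, which is exactly what lets us read them as a probability distribution, whether or not the model is calibrated.

        import numpy as np

        def softmax(scores):
            # Exponentiate, then normalize so the outputs sum to 1.
            e = np.exp(scores)
            return e / e.sum()

        p = softmax(np.array([1.0, 2.0, 3.0]))
        print(p)        # [0.09003057 0.24472847 0.66524096]
        print(p.sum())  # 1.0 -- a valid distribution, calibrated or not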

  • @jeeezsh4704 · 2 years ago

    You teach better than my grad school professor 😂

  • @Fat_Cat_Fly · 3 years ago +1

    👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻

  • @anandiyer5361 · 2 years ago

    @ritwikmath I want to understand why you chose the subscript N to describe the features; shouldn't they be S_1..S_M?

  • @suyashdixit682 · 1 year ago +1

    Yet again an Indian dude is saving me!

  • @bryany7344 · 3 years ago +2

    1:14, how is it single-dimensional for sigmoid? Shouldn't it be two dimensions?

    • @vahegizhlaryan5052 · 2 years ago

      Well, after applying sigmoid you get only one probability p (the other one you can calculate as 1 - p), so in the sigmoid case you actually only need one number
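
    One way to verify the reply above (a sketch, assuming the standard definitions): a sigmoid over a single score s is exactly a two-class softmax over the scores [s, 0], so one number really is enough in the binary case.

        import numpy as np

        def sigmoid(s):
            return 1.0 / (1.0 + np.exp(-s))

        def softmax(scores):
            e = np.exp(scores)
            return e / e.sum()

        s = 1.7
        p = softmax(np.array([s, 0.0]))  # [p, 1 - p]
        print(p[0], sigmoid(s))          # both ~0.845535, so p alone suffices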

  • @seojun2599 · 8 months ago

    How do you deal with high Xi values? I got 788 and 732 as Xi values, and exp(788) gives an error because the result is near infinity
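
    The standard workaround (a sketch, not covered in the video): subtract the max score before exponentiating. Adding a constant to every score leaves softmax unchanged, and after the shift the largest exponent is exp(0) = 1, so nothing overflows.

        import numpy as np

        def stable_softmax(scores):
            shifted = scores - np.max(scores)  # largest entry becomes 0
            e = np.exp(shifted)                # all values in (0, 1], no overflow
            return e / e.sum()

        print(stable_softmax(np.array([788.0, 732.0])))  # [1.0, ~4.8e-25], no error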

  • @evgenyv5687 · 3 years ago +1

    Hey, thank you for a great video! I have a question: in your example, you said that the probabilities for scores 0, 1, and 2 should not be different from those for 100, 101, and 102. But in the real world, the scale used to assess students makes a difference and affects probabilities. The difference between 101 and 102 is actually smaller than between 1 and 2, because in the first case (0, 1, 2) the scale is probably much smaller, so the difference between scores is more significant. So wouldn't a model need to predict different probabilities depending on the assessment scale?

    • @EW-mb1ih · 2 years ago

      same question!

    • @imingtso6598 · 2 years ago

      My point of view is that the softmax scenario is different from the sigmoid scenario. In the sigmoid case, we need to capture changes in relative scale, because subtle changes around the 1/2 probability point result in significant probability changes (they turn the whole thing around: drop out or not); whereas in the softmax case there are more outputs, and our goal is to select the case which is most likely to happen, so we are talking about an absolute amount rather than a relative amount (a final judge). I guess that's why ritvik said "a change by a constant shouldn't change our model".
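
    The shift invariance being discussed is easy to check numerically (a sketch using the video's setup): exp(x + c) = exp(x) * exp(c), and the exp(c) factor cancels between numerator and denominator.

        import numpy as np

        def softmax(scores):
            e = np.exp(scores - np.max(scores))  # stable form
            return e / e.sum()

        a = np.array([0.0, 1.0, 2.0])
        print(softmax(a))          # [0.09003057 0.24472847 0.66524096]
        print(softmax(a + 100.0))  # identical: adding a constant changes nothing

    Rescaling the scores, by contrast, does change the output (dividing them by a "temperature" is the usual knob), which is one way to frame the grading-scale question above.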

  • @d_b_ · 1 year ago

    Maybe this was explained in a past video, but why is "e" chosen over any other base (like 2 or 3 or pi)...
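
    One hedged answer (standard reasoning, not from a past video): any other base only rescales the scores, since for a base $b > 0$,

        \frac{b^{S_i}}{\sum_k b^{S_k}} = \frac{e^{S_i \ln b}}{\sum_k e^{S_k \ln b}},

    which is the ordinary softmax applied to the rescaled scores $S_i \ln b$. So base 2 or pi gives the same family of functions up to a constant scaling of the inputs; $e$ is the convention because it keeps the calculus clean ($\frac{d}{dx} e^x = e^x$).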

  • @tm0209 · 5 months ago

    What does dP_i/dS_j = -P_i * P_j mean and how did you get it? I understand dP_i/dS_i because S_i is a single variable. But dP_i/dS_j seems to involve a whole set of variables (the sum S_1 + S_2 + ... + S_n) rather than a single one. How are you taking a derivative with respect to that?
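
    A short derivation (standard calculus, using the video's notation as I understand it): for $j \neq i$, $S_j$ is one single variable, and it appears only in the denominator of $P_i = \frac{e^{S_i}}{\sum_k e^{S_k}}$. Differentiating with respect to that one variable,

        \frac{\partial P_i}{\partial S_j}
          = e^{S_i} \cdot \left( -\frac{e^{S_j}}{\left( \sum_k e^{S_k} \right)^2} \right)
          = -\frac{e^{S_i}}{\sum_k e^{S_k}} \cdot \frac{e^{S_j}}{\sum_k e^{S_k}}
          = -P_i P_j.

    So there is one such partial derivative per pair $(i, j)$, not a derivative with respect to the whole sum; stacking them all gives the Jacobian matrix of the softmax, with $P_i (1 - P_i)$ on the diagonal.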

  • @joelpaddock5199 · 5 months ago

    Hello Boltzmann distribution, we meet again. Cool nickname

  • @mrahsanahmad · 3 years ago

    I am new to Data Science. However, why would a model output 100, 101 and 102 as three outputs unless the input had similarity to all three classes? Even in our daily lives, we would ignore a 2-dollar variance on a $100 item but complain if something which was originally free now costs 2 dollars. The question is, why would we give up the usual practice and use some fancy transformation function here?

  • @jasonokoro8400 · 1 year ago

    I don't understand *why* it's weird that 0 maps to 0 or why we need the probability to be the same for a constant shift...

  • @matgg8207 · 2 years ago

    what a shame that this dude is not a professor!!!!!!!!

  • @ltang · 3 years ago

    Oh... softmax is for multiple classes and sigmoid is for two classes.
    I get that your i here is the class. In the post below, though, is their i the observations and k the classes?
    stats.stackexchange.com/questions/233658/softmax-vs-sigmoid-function-in-logistic-classifier

  • @srl2017 · 2 years ago +1

    god

  • @gestucvolonor5069 · 3 years ago +1

    I knew things were about to go down when he flipped the pen.

    • @mrahsanahmad · 3 years ago

      Are you crazy? The moment he did that, I knew it would be fun listening to him. He was focused. Like he said, theory is relevant only in the context of practicality.

  • @jkhhahahhdkakkdh · 3 years ago

    Very different from how *cough* Siraj *cough* explained this lol

  • @mmm777ization · 3 years ago

    4:00 I think you have expressed it the wrong way: you wanted to say that we need to go into depth and not just focus on the application, which is the façade; the depth here is deriving the formula

  • @QiyuanSong · 1 year ago

    Why do I need to go to school?

  • @suryatejakothakota7742 · 3 years ago

    Binod stop ads

  • @fintech1378 · 1 year ago

    At minutes 11 to 12:30 you are not very clear and are going too fast

    • @ritvikmath · 1 year ago

      hey thanks for the feedback, will work on it