Cost Function in Neural Network | Types of Cost function we use in different applications

  • Published 4 Feb 2025

COMMENTS • 31

  • @MachineLearningWithJay
    @MachineLearningWithJay  3 years ago +3

    If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.

  • @naveenkv7693
    @naveenkv7693 2 years ago +11

    I can't help but think that in a few years your videos on machine/deep learning concepts are going to be recommended by many teachers for basic understanding. Nicely explained.

  • @PrithaMajumder
    @PrithaMajumder 6 months ago +3

    Thanks a lot for This Amazing Introductory Lecture 😁
    Lecture - 4 Completed from This Neural Network Playlist

  • @williammartin4416
    @williammartin4416 11 months ago +1

    Thanks!

  • @adnanhowlader143
    @adnanhowlader143 3 years ago +5

    Thanks for making this... I feel like there is not enough explanation of this topic on YouTube.

  • @S.aliakbar.h7
    @S.aliakbar.h7 8 months ago

    You are a good teacher. Congratulations ❤

  • @tiyyob
    @tiyyob 2 years ago +1

    Splendid work. The way you explained it with example is a tremendous help.

  • @ISMAIL-dl3gw
    @ISMAIL-dl3gw 2 years ago

    Thank you for your excellent explanation

  • @bonpagnakann5470
    @bonpagnakann5470 2 years ago

    Thank you very much for this video! I appreciate your effort in simplifying this. I would like to check with you about the multi-class classification cost at 9:00: shouldn't there be a (1/m) in front of the sum to average over the training observations? I saw you include (1/m) for the average in binary classification, but I did not see you include it for multi-class classification. Thank you in advance for your response. ^^
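
The averaging question above is really about convention: dividing by m rescales the summed loss by a constant, so the location of the minimum is unchanged either way. A minimal sketch of both costs, with the 1/m factor made explicit (function names are illustrative, assuming NumPy):

```python
import numpy as np

def binary_cross_entropy(y, a, m_average=True):
    """Binary cross-entropy; optionally averaged over the m examples."""
    loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))
    return loss.mean() if m_average else loss.sum()

def categorical_cross_entropy(Y, A, m_average=True):
    """Multi-class cross-entropy over one-hot labels Y and softmax outputs A."""
    per_example = -(Y * np.log(A)).sum(axis=1)   # sum over the classes
    return per_example.mean() if m_average else per_example.sum()

y = np.array([1.0, 0.0, 1.0])   # m = 3 labels
a = np.array([0.9, 0.2, 0.8])   # predicted probabilities
# Averaging just rescales the total by 1/m; the minimizer is unchanged.
assert np.isclose(binary_cross_entropy(y, a),
                  binary_cross_entropy(y, a, m_average=False) / 3)
```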

  • @NathanLuMax
    @NathanLuMax 3 years ago +2

    Seriously underrated

  • @manikantaperumalla2197
    @manikantaperumalla2197 8 months ago +1

    The cost function for binary classification that you mentioned is also called entropy.
    Am I right?

  • @Ivaan_reminiscence
    @Ivaan_reminiscence 8 months ago

    When you summarized the formula @9:00, won't there be a 1/m up front? That's what you explained in the handwritten part at the top.

  • @shahfahad7248
    @shahfahad7248 1 year ago

    Can you suggest a book which you follow for classification and its cost functions?

  • @indrayudhmondal412
    @indrayudhmondal412 2 years ago +1

    This guy teaches better than my professor.

    • @MachineLearningWithJay
      @MachineLearningWithJay  2 years ago

      Thank you very much… this comment means a lot to me 😇

    • @indrayudhmondal412
      @indrayudhmondal412 2 years ago

      @@MachineLearningWithJay Of course. What I have realized from my years in different universities is that it is very easy to lose your audience with long, drawn-out presentations. No one likes to sit through a long, grueling session. Try to keep your future videos short as well. :)

  • @ahmadjohara7824
    @ahmadjohara7824 2 years ago

    Thanks a lot, keep it up!

  • @learnhome9659
    @learnhome9659 3 years ago +1

    Great chapter, really appreciate your efforts... You could become another Sal Khan. One suggestion: the animated pointer is distracting to the eyes; you may want to use a normal one instead.

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      Hi... Thank you for the compliment and your suggestion! Also, the other videos do not have animated pointers.

  • @ramincybran
    @ramincybran 9 months ago

    thx

  • @harshitjuneja9462
    @harshitjuneja9462 3 years ago +1

    for multi-class classification, do we not add a (1-y)log(1-a) term?

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago +1

      No Harshit... we don't need to add that term.
      You can think of binary classification as multi-class classification with 2 categories. If you do so, the cost function for multi-class classification (taking 2 categories) becomes the cost function for binary classification, because for binary classification, if y is our first category, then (1-y) is the other category.

  • @md.enamulatiq9262
    @md.enamulatiq9262 1 year ago

    Can you please provide the slides you used for these videos?

  • @PavanKumar-hp1el
    @PavanKumar-hp1el 2 years ago +2

    My doubt is: can't we get y = [1,0,1,0] if we have (predicted output) a = [0.5,0.2,0.8,0.3]? What is the guarantee that only one element of a will be >= 0.5?

    • @MachineLearningWithJay
      @MachineLearningWithJay  2 years ago

      Are you asking about the softmax function? If so, the array [0.5,0.2,0.8,0.3] is not possible, as the sum of the probabilities should be 1.
      If you are talking about sigmoid, then we won't get an array like that; we will only get a single number between 0 and 1.

  • @AnbuArasu-fg4ss
    @AnbuArasu-fg4ss 1 year ago

    What if the actual value is 0 and the predicted value is 1?
    Will the error be infinite?