Logistic Regression Part 4 | Loss Function | Maximum Likelihood | Binary Cross Entropy

  • Published 14 Oct 2024

COMMENTS • 64

  • @harkiratmakkar9202
    @harkiratmakkar9202 6 months ago +5

    Absolutely love your content, sir. Just one correction: we minimise cross entropy because we add an extra minus sign, so to maximise the likelihood, we minimise the cross entropy. The claim that the log of a smaller number has a higher value is wrong: the logarithm is a continuously increasing function, so to maximise f(x) you can maximise log(f(x)), which is equivalent to minimising -log(f(x)).
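
    (A quick numeric check of the correction above, assuming NumPy; the probabilities are made-up illustrative values, not from the video.)

    import numpy as np

    # Predicted probabilities of the true class under two candidate models
    p_a = np.array([0.9, 0.8, 0.7])  # better model
    p_b = np.array([0.6, 0.5, 0.4])  # worse model

    likelihood_a = np.prod(p_a)  # 0.504
    likelihood_b = np.prod(p_b)  # 0.120

    nll_a = -np.sum(np.log(p_a))  # ~0.685
    nll_b = -np.sum(np.log(p_b))  # ~2.120

    # The model with the higher likelihood has the lower negative
    # log-likelihood, so maximising the one is minimising the other.
    assert (likelihood_a > likelihood_b) == (nll_a < nll_b)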

  • @waseemrandhawa5658
    @waseemrandhawa5658 2 years ago +32

    First person on YouTube whose teaching style is impressive. You are totally awesome. 😍
    Sir, I must have liked maybe 10 to 20 videos on YouTube, but your teaching style compels me to like the video, and the feeling comes straight from the heart: "you are truly great" 😇

  • @saiprashanth1587
    @saiprashanth1587 2 years ago +2

    This is the best video on logistic regression and the best ML playlist on YouTube. Thank you so much, sir.

  • @binilrajadhikari2643
    @binilrajadhikari2643 2 years ago +2

    This is one of the best tutorials on Logistic Regression on YouTube.

  • @arslanahmed1311
    @arslanahmed1311 1 year ago +4

    We really appreciate the amount of effort you put into making this playlist

  • @sanaayakurup5453
    @sanaayakurup5453 1 year ago +3

    I have never seen such a concise playlist on Logit. Thank you sir, all my doubts are solved!

  • @priyanshutanwar
    @priyanshutanwar 3 years ago +3

    The best explanation of cross entropy I've come across!

  • @life3.088
    @life3.088 2 years ago +1

    I have all your videos downloaded due to internet issues, but when I have internet I prefer to watch online. This is a small gift from my side.

  • @singnsoul6443
    @singnsoul6443 11 months ago

    I was looking for content to study machine learning for GATE DA. Thank you so much for, hands down, THE BEST content.

  • @purubhatnagar483
    @purubhatnagar483 8 months ago

    You deserve appreciation. Best content I have come across so far. You have my blessing. Will buy your mentorship program too.

  • @dakshbhatnagar
    @dakshbhatnagar 2 years ago +2

    Awesome explanation, brother!! It's interesting to see how the math is worked out.

  • @aiml66_bangerashashankchan81

    The most goated Logistic regression explanation ever

  • @anuragpandey5748
    @anuragpandey5748 1 month ago

    Best in-depth explanation on the internet 👍

  • @akhilp6263
    @akhilp6263 1 year ago +2

    This brought meaning to my life ❤🥂

  • @YashJaiswal-lr7vi
    @YashJaiswal-lr7vi 4 months ago

    Great explanation.
    I want to add one correction: -1 < -0.04

  • @ParthivShah
    @ParthivShah 7 months ago +1

    Thank You Sir.

  • @manojssonawane
    @manojssonawane 1 year ago

    Salute to you, sir... The depth at which you deliver machine learning knowledge is admirable... Wishing you great fortune, sir...

  • @nitinchityal583
    @nitinchityal583 1 year ago +1

    Sir, you have made my machine-learning journey like a roller coaster... Thoroughly enjoyed

  • @tanujrana1212
    @tanujrana1212 1 year ago

    You are delivering much better content compared to some self-proclaimed "leading provider of ML content".. Kudos..✌️✌️

  • @hasanrants
    @hasanrants 27 days ago

    thank you Sir, completed on 17th September 2024 @10:25PM

  • @pritamrajbhar9504
    @pritamrajbhar9504 4 months ago

    The explanation was simple and on point.

  • @pranalipatle2215
    @pranalipatle2215 1 year ago

    What an explanation... felt clear and awesome.

  • @CANADAANDUSABEAUTY
    @CANADAANDUSABEAUTY 10 months ago

    Great work. Nice explanation!!! God bless.

  • @chetanshetty8368
    @chetanshetty8368 2 years ago +1

    Hi Sir, awesome videos. Can you please help me understand: when we are in a multi-dimensional space (say 10 independent variables and a binary dependent variable), how can we determine whether the data is linearly separable? Thanks in advance.
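
    (Not an answer from the video; a minimal sketch of one common heuristic, assuming scikit-learn is available: data is linearly separable exactly when a linear classifier can reach 100% training accuracy, and a LinearSVC with a very large C approximates a hard-margin one. This works in any number of dimensions.)

    import numpy as np
    from sklearn.svm import LinearSVC

    def looks_linearly_separable(X, y):
        # A very large C makes misclassifications prohibitively
        # expensive, approximating a hard-margin linear SVM.
        clf = LinearSVC(C=1e6, max_iter=100_000)
        clf.fit(X, y)
        # Separable iff the linear boundary gets every training
        # point right.
        return clf.score(X, y) == 1.0

    # Example with 10 features, separable by construction.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    print(looks_linearly_separable(X, y))  # True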

  • @HA-bj5ck
    @HA-bj5ck 8 months ago

    Give this man a Medal!!!

  • @anilkathayat1247
    @anilkathayat1247 5 months ago

    Best Explanation Sir❤

  • @kalluriramakrishna5732
    @kalluriramakrishna5732 27 days ago

    Another level 💯

  • @rockykumarverma980
    @rockykumarverma980 15 days ago

    Thank you so much sir🙏🙏🙏

  • @GauravKumarGupta-fn8pw
    @GauravKumarGupta-fn8pw 1 year ago

    Best explanation. Thanks, sir.

  • @mohinigoyal3063
    @mohinigoyal3063 1 year ago

    Sir, one more question.
    You said at the very start that the data should be linearly separable when building a logistic regression.
    The question is: without applying the regression, how do we check whether the data is linearly separable? Is there code for it,
    or how else can we tell whether it is linearly separable or not?

  • @MehulKumar-xw9rx
    @MehulKumar-xw9rx 7 days ago

    Sir, please explain the mathematics behind the maximum likelihood formula in a new video.

  • @Ravi-sl5ms
    @Ravi-sl5ms 3 years ago

    Thank you. Wonderfully explained..

  • @justcodeitbro1312
    @justcodeitbro1312 1 year ago

    Wow, thanks for this great explanation.

  • @revathivamshi7621
    @revathivamshi7621 3 months ago

    The sum of the negative logs of the likelihoods is called cross entropy.
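
    (A minimal sketch of that definition, assuming NumPy; the labels and probabilities are made-up illustrative values.)

    import numpy as np

    def binary_cross_entropy(y_true, y_pred):
        # Mean of -[y*log(p) + (1-y)*log(1-p)] over all points,
        # i.e. the average negative log of the probability the
        # model assigned to the true class.
        eps = 1e-15  # guard against log(0)
        p = np.clip(y_pred, eps, 1 - eps)
        return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

    y_true = np.array([1, 0, 1, 0])
    y_pred = np.array([0.9, 0.2, 0.8, 0.3])
    print(binary_cross_entropy(y_true, y_pred))  # ~0.227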

  • @skiritijayadev2932
    @skiritijayadev2932 4 months ago

    Hail, nithish sir.
    GOAT

  • @radhikawadhawan4235
    @radhikawadhawan4235 7 months ago

    Why did we consider the product of probabilities as the loss function? In MSE, minimising y - y_hat is logical, but I didn't get the logic for the same here.

  • @arslanahmed1311
    @arslanahmed1311 1 year ago

    Understood 100%.

  • @sujithsaikalakonda4863
    @sujithsaikalakonda4863 2 years ago

    Great Explanation sir.

  • @vrushabhjain1542
    @vrushabhjain1542 2 years ago

    Awesome explanation

  • @preetisrivastava1624
    @preetisrivastava1624 1 year ago

    Can we say that uncertainty is inversely proportional to probability,
    i.e. entropy is inversely proportional to probability (or to log p),
    or that entropy is directly proportional to -log p,
    and that the solution then lies in minimizing the loss, i.e. the entropy or uncertainty?
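
    (A quick table in support of that intuition, assuming NumPy: -log p grows as p shrinks, so a confident correct prediction costs little and a confident wrong one costs a lot.)

    import numpy as np

    for p in [0.99, 0.9, 0.5, 0.1, 0.01]:
        print(f"p = {p:<5}  -log(p) = {-np.log(p):.3f}")
    # p = 0.99   -log(p) = 0.010
    # p = 0.9    -log(p) = 0.105
    # p = 0.5    -log(p) = 0.693
    # p = 0.1    -log(p) = 2.303
    # p = 0.01   -log(p) = 4.605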

  • @unkownboy9962
    @unkownboy9962 11 months ago

    Why is this channel so underrated?

  • @AnjaliSharma-lq8ut
    @AnjaliSharma-lq8ut 6 months ago

    I wish I could give more than 1 like.

  • @mohinigoyal3063
    @mohinigoyal3063 1 year ago

    Sir, please explain why we use maximum likelihood in logistic regression.
    In linear regression we compute the sum of squared errors and minimise the error, so why in logistic regression do we maximise instead, via maximum likelihood?

  • @sahilvimal8795
    @sahilvimal8795 4 months ago

    00:02 The perceptron algorithm failed to produce the optimal solution in logistic regression.
    02:33 In logistic regression, we need to find a loss function that minimizes the mistakes made by the model.
    04:51 The loss function helps determine the quality of a model and choose the best one.
    07:37 Calculating prediction probabilities and understanding the logistic regression model
    10:20 The Maximum Likelihood method calculates the product of probabilities for each data point to determine the better model.
    13:01 Replacing product with sum using logarithm
    15:44 Cross entropy is a key concept in logistic regression.
    18:45 Maximising the likelihood is recast as minimising the binary cross entropy loss function.
    21:14 The formula used in logistic regression for binary cross entropy may not work for all cases
    24:12 Introduction to loss function and maximum likelihood
    27:07 Logistic Regression Loss Function and Maximum Likelihood
    Crafted by Merlin AI.

  • @adityabhatt04
    @adityabhatt04 3 years ago

    Great video.

  • @rambaldotra2221
    @rambaldotra2221 3 years ago

    Loved it. Mind-blowing ✨

  • @dikshabatra8657
    @dikshabatra8657 1 year ago

    Thank you so much 😄😄

  • @rafibasha4145
    @rafibasha4145 2 years ago +1

    How are the maximum likelihood and log loss cost functions related?

  • @COMEDY_007..
    @COMEDY_007.. 1 year ago +1

    Sir, why are we implementing the perceptron at all when we already have a predefined model in sklearn? 😢😢

  • @janauppara
    @janauppara 10 months ago

    How do we get a copy of the OneNote notes he has written? Did he upload them anywhere?

  • @o2protectordm909
    @o2protectordm909 1 year ago +1

    Exercise book @CampusX

  • @shravanshukla5352
    @shravanshukla5352 2 years ago +1

    Please upload a real-application video on interpreting logistic regression.
    Suppose we analyze insurance data and only 9% are predicted to be eligible for a personal loan.

  • @astikpatel6522
    @astikpatel6522 3 years ago

    thank you sir

  • @ThanosAtWork
    @ThanosAtWork 11 months ago

    Has anyone made notes for this deep learning playlist?

  • @rajeshkr0021
    @rajeshkr0021 1 year ago +1

    Sir, the code and datasets aren't downloading from GitHub. How do I download them?

  • @ABHISHEKPRASADPRAJAPATI-jq3cg
    @ABHISHEKPRASADPRAJAPATI-jq3cg 4 months ago

    A teacher should also have teaching quality, and your effort demonstrates that...

  • @jroamindia1754
    @jroamindia1754 1 year ago

    How are y2 = 0 and y4 = 0? Why did you take the value y2 = 0?

  • @subhanjalpant8824
    @subhanjalpant8824 18 days ago

    22:00

  • @ProgramerSalar
    @ProgramerSalar 1 year ago +1

    Amazing 😀😃😄😄😄😄😄😄😄😄😄😄😄😄😄

  • @kindaeasy9797
    @kindaeasy9797 5 months ago

    Enjoyed it ++

  • @RohanOxob
    @RohanOxob 1 year ago

    9:00

  • @ajayraho
    @ajayraho 1 month ago

    Andrew Ng who? 😂