Absolutely love your content, sir. Just one correction: we minimise cross-entropy because we add an extra minus sign, so to maximise likelihood we minimise the cross-entropy. What you said, that the log of a smaller number has a higher value, is wrong: the logarithm is a continuously increasing function, so to maximise f(x) you can maximise log(f(x)), which is equivalent to minimising -log(f(x)).
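A worked version of that point (standard notation, assumed here rather than taken from the video):

```latex
\max_{w} \prod_{i=1}^{n} P(y_i \mid x_i; w)
\iff \max_{w} \sum_{i=1}^{n} \log P(y_i \mid x_i; w)
\iff \min_{w} \; -\sum_{i=1}^{n} \log P(y_i \mid x_i; w)
```

The first equivalence holds because log is strictly increasing; the second because negating an objective turns maximisation into minimisation. The rightmost expression is the negative log-likelihood, i.e. the cross-entropy loss.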
First person on YouTube whose teaching style is this impressive. You are totally awesome. 😍
Sir, I have probably liked 10 to 20 videos in my whole time on YouTube, but your teaching style compels me to like the video, and the feeling comes straight from the heart: "Brother, you are great!" 😇
same here
This is the best video on logistic regression and the best ML playlist on YouTube. Thank you so much, sir.
This is one of the best tutorials about Logistic Regression on YouTube.
We really appreciate the amount of effort you put into making this playlist
I have never seen such a concise playlist on Logit. Thank you sir, all my doubts are solved!
The best explanation of cross entropy I've come across!
I have all your videos downloaded due to internet issues, but when I have internet I prefer to watch online. This is a small gift from my side.
I was looking for content to study machine learning for GATE DA. Thank you so much for, hands down, THE BEST content.
You deserve appreciation. Best content I have come across so far. You have my blessing; will buy your mentorship program too.
Terrific explanation, brother!! It's interesting to see how the math is worked out.
The most goated logistic regression explanation ever.
best in depth explanation on internet 👍
This brought meaning to my life ❤🥂
Great explanation.
I want to add one correction: -1 < -0.04
Thank You Sir.
Salute to you, sir... the depth at which you are delivering machine learning knowledge is truly admirable... wishing you great fortune, sir.
Sir, you have made my machine-learning journey like a roller coaster... Thoroughly enjoyed
You are delivering much better content than some self-proclaimed "leading providers of ML content". Kudos.. ✌️✌️
thank you Sir, completed on 17th September 2024 @10:25PM
explanation was simple and on point.
What an explanation... felt clear and awesome.
Great work. Nice explanation! God bless.
Hi Sir, awesome videos. Can you please help me understand: when we are in a multi-dimensional space (say, 10 independent variables and one binary dependent variable), how can we determine whether the data is linearly separable? Thanks in advance.
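One practical way to probe this in any number of dimensions is to fit a near-hard-margin linear classifier and check whether it separates the training data perfectly. A minimal sketch, assuming scikit-learn; the synthetic data and variable names are illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy stand-in for 10 features and a binary target; replace with your data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X @ rng.normal(size=10) > 0).astype(int)  # separable by construction

# A very large C approximates a hard margin: any misclassified
# training point is penalised heavily.
clf = LinearSVC(C=1e6, max_iter=100_000).fit(X, y)

# 100% training accuracy suggests the data is linearly separable.
# In practice this is a strong heuristic, not a formal proof.
print("training accuracy:", clf.score(X, y))
```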
Give this man a Medal!!!
Best Explanation Sir❤
Another level 💯
Thank you so much sir🙏🙏🙏
Best explanation. thanks Sir
Sir, one more question. You said right at the start that the data should be linearly separable when building a logistic regression model. My question is: without applying the regression, how do we check whether the data is linearly separable or not? Is there code for this, or how else can we tell whether it is linearly separable?
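On the "is there code for this" part: besides the heuristic above, there is an exact test, since linear separability is a linear-programming feasibility problem. A sketch using NumPy and SciPy; the function name and data are illustrative, not from the video:

```python
import numpy as np
from scipy.optimize import linprog

def is_linearly_separable(X, y):
    """Exact LP feasibility test: does some (w, b) satisfy
    s_i * (w @ x_i + b) >= 1 for every point i?"""
    s = np.where(y == 1, 1.0, -1.0)  # map labels {0, 1} -> {-1, +1}
    n, d = X.shape
    # Rewrite each constraint as -s_i * (w @ x_i + b) <= -1,
    # with decision variables [w, b].
    A_ub = -s[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0  # 0 = a feasible separator exists

# XOR is the classic non-separable example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
print(is_linearly_separable(X, y))  # False
```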
Sir, please explain the mathematics behind the maximum likelihood formula in a new video.
Thank you. Wonderfully explained..
wow thanks for this great explanation.
The sum of the negative logs of the likelihoods is called cross-entropy.
Hail Nithish sir!
GOAT
Why did we consider the product of probabilities as the loss function? In MSE, minimising y - y_hat is logical, but I didn't get the logic for the same here.
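One way to fill in that step (a standard derivation, not quoted from the video): assuming the data points are independent, the probability the model assigns to the entire observed dataset is the product of the per-point probabilities. We want that product as large as possible, and the quantity actually minimised is its negative log:

```latex
L(w) = \prod_{i=1}^{n} \hat{p}_i^{\,y_i}\,(1-\hat{p}_i)^{1-y_i},
\qquad
-\log L(w) = -\sum_{i=1}^{n}\left[y_i\log\hat{p}_i + (1-y_i)\log(1-\hat{p}_i)\right]
```

where p̂_i is the predicted probability of class 1. So the product itself is maximised, not minimised; minimising its negative log (binary cross-entropy) is what plays the role that MSE plays in linear regression.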
Understood 100%!
Great Explanation sir.
Awesome explanation.
Can we say that uncertainty is inversely related to probability, i.e., entropy is inversely related to probability (or to log p), or that entropy is directly related to -log p? And then the solution lies in minimising the loss, or entropy, or uncertainty?
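Almost: the quantity that is exactly -log p is the self-information (surprise) of an outcome, which strictly decreases as probability increases; entropy is its expected value. "Inversely proportional" is directionally right, though -log p is not literally 1/p:

```latex
I(p) = -\log p \quad (\text{strictly decreasing in } p),
\qquad
H = -\sum_{k} p_k \log p_k = \mathbb{E}\left[I(p_k)\right]
```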
Why is this channel so underrated?
I wish I could give more than 1 like.
Sir, please explain why we find the maximum likelihood in logistic regression. In linear regression we compute the sum of squared errors, where our error is minimised, so why in logistic regression do we go for a maximum, via maximum likelihood?
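A standard fact that resolves this (not from the video): least squares is itself maximum likelihood for linear regression with Gaussian noise, so both methods end up minimising a negative log-likelihood; logistic regression is not "finding maximum error":

```latex
y_i = w^{\top}x_i + \varepsilon_i,\quad \varepsilon_i \sim \mathcal{N}(0,\sigma^2)
\;\Rightarrow\;
-\log L(w) = \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - w^{\top}x_i\right)^2 + \text{const}
```

Maximising the likelihood here is the same as minimising the sum of squared errors, and in logistic regression the same move turns maximum likelihood into minimising binary cross-entropy.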
00:02 The perceptron algorithm failed to produce the optimal solution in logistic regression.
02:33 In logistic regression, we need to find a loss function that minimizes the mistakes made by the model.
04:51 The loss function helps determine the quality of a model and choose the best one.
07:37 Calculating prediction probabilities and understanding the logistic regression model
10:20 The Maximum Likelihood method calculates the product of probabilities for each data point to determine the better model.
13:01 Replacing product with sum using logarithm
15:44 Cross entropy is a key concept in logistic regression.
18:45 The goal is to maximize the likelihood, which is equivalent to minimizing the binary cross entropy loss function.
21:14 The formula used in logistic regression for binary cross entropy may not work for all cases
24:12 Introduction to loss function and maximum likelihood
27:07 Logistic Regression Loss Function and Maximum Likelihood
Crafted by Merlin AI.
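A tiny numeric sketch of the model comparison described around 10:20-18:45; all numbers are made up for illustration. The better model assigns a larger product of probabilities to the observed labels, which is the same as having a smaller binary cross-entropy:

```python
import numpy as np

# Hypothetical labels and the probabilities two candidate models
# assign to class 1 for the same four points.
y   = np.array([1, 0, 1, 0])
p_a = np.array([0.9, 0.1, 0.8, 0.3])  # model A
p_b = np.array([0.6, 0.4, 0.7, 0.5])  # model B

def likelihood(y, p):
    # Product of the probabilities assigned to the true labels
    return np.prod(np.where(y == 1, p, 1 - p))

def bce(y, p):
    # Binary cross-entropy: mean negative log-likelihood
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print(likelihood(y, p_a), bce(y, p_a))  # A: higher likelihood, lower BCE
print(likelihood(y, p_b), bce(y, p_b))  # B: lower likelihood, higher BCE
```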
Great video.
Loved it Mind Blowing ✨
Thank you so much 😄😄
How are the maximum likelihood and log loss cost functions related?
Sir, why are we building the perceptron at all when we already have a pre-defined model in sklearn? 😢😢
How can I get a copy of the OneNote notes he has written? Did he upload them anywhere?
Exercise book @CampusX
Please upload a video on a real application and interpretation of logistic regression. Suppose we analyse insurance data where only 9% of customers are predicted to be eligible for a personal loan.
thank you sir
Has anyone made notes for this deep learning playlist?
Sir, the code and datasets are not downloading from GitHub. How do I download them?
A teacher should also have teaching quality, and your effort demonstrates that...
How are y2 = 0 and y4 = 0? Why did you take the value y2 = 0?
22:00
Amazing! 😄😄😄
Really enjoyed it ++
9:00
Andrew Ng who? 😂