Don't forget to subscribe to the channel for more such videos :)
Hello Sir, I wanted some help on ONA. Where, or in which course, can I learn ONA?
Sir, please complete the statistics playlist.
Sir, I want to join the PW 3500-rupee data science course, but this course is not visible on the PW app.
Sir, please complete the stats playlist 😕
How can a(i)/b(i) be small when a(i) is big? Please don't give a wrong explanation; it misleads and creates a lot of confusion. I would request you to please go through it and confirm, as the understanding gets completely twisted.
The K-means++ technique is used to avoid the random initialization trap. In other words, it initializes the centroids very far apart from each other so that all the data are grouped properly and the clustering becomes much more accurate.
Correction: 1:08:06
It is a good cluster only if b(i) > a(i).
You are right.
Yes, I got confused because of this line 😅
Yeah bro, good point. I got confused as well.
Fantastic explanation. No deviated talks, just the crux. Love the way you explain. Great going !!!
Hello Mr Krish,
I believe you made a mistake in the silhouette score section. If a(i) >>> b(i), this would mean that the intra-cluster distance is greater than the inter-cluster distance, which would further imply that the clustering is poor. In the video you have taken that as a case of good clustering. The same is true for a(i).
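For reference, the silhouette coefficient for a point i is s(i) = (b(i) - a(i)) / max(a(i), b(i)), so b(i) > a(i) gives a value near +1 (good) and a(i) > b(i) gives a value near -1 (poor). A tiny sketch with made-up distances (my own illustration, not from the video):

# Silhouette coefficient for a single point i (illustrative values only)
def silhouette_point(a_i, b_i):
    # a_i: mean intra-cluster distance, b_i: mean distance to the nearest other cluster
    return (b_i - a_i) / max(a_i, b_i)

print(silhouette_point(a_i=0.5, b_i=2.0))   # b(i) >> a(i) -> +0.75, good clustering
print(silhouette_point(a_i=2.0, b_i=0.5))   # a(i) >> b(i) -> -0.75, poor clustering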
As always, super crisp and dope explanations!
Sir, at 38:23 you said that agglomerative is top-down, but it is bottom-up; similarly, divisive is top-down.
Sir thank you for the informative sessions.
Time --> 1:08: if b(i) > a(i), then our cluster is good.
This playlist is very helpful...
Thanks Sir ❤❤❤❤❤❤❤
Exactly, bro... thank you.
Simple and easy: when the value is close to 1 it is the good case, but when the value is around 0 or -1 it is a bad case.
Time: 1:16 ---> when our centroids are located very near each other, our model can't make good clusters. To overcome this problem we use K-means++ initialization --> after applying it, our centroids are located FAR from each other.
Thanks Sir 💓💓💓
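A minimal sketch of that switch (my own example using scikit-learn's KMeans on a make_blobs toy dataset, not code from the video):

# Compare random initialization with k-means++ on hypothetical toy data
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

# init='random' can fall into the random initialization trap
km_random = KMeans(n_clusters=4, init='random', n_init=1, random_state=0).fit(X)

# init='k-means++' spreads the starting centroids far apart (the default in scikit-learn)
km_pp = KMeans(n_clusters=4, init='k-means++', n_init=1, random_state=0).fit(X)

print(km_random.inertia_, km_pp.inertia_)   # lower inertia generally means a better start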
Sir, your teaching style is really good; my doubts get cleared only through your videos. Where can I get the notes for these lectures, sir?
Amazing video. Thank you for teaching us with dedication.
Thank you very much, sir. I am from a non-IT background but I am getting all the concepts little by little; the way you are teaching is superb.
Thank you for the awesome free content on clustering.
Very Nice explanation. Really helpful.
Hi Krish. Please share more transition stories 🥲
Also emphasize the hiring process more. Many companies probably have live coding rounds. Please throw more light on this 🙏🏻
Sir, please make a complete ML playlist for college exams, covering ARM and all the other topics. Your way of teaching is excellent!!
Thank you so much, sir, for this lecture.
Mistake: a(i) < b(i) means a good cluster, but the opposite is said in the video at timestamp 1:05:00.
Sir, you're amazing..!!!!
It is very helpful to understand unsupervised learning methods
By using k-means++ we can initialise centroids that are far from each other, so that we can overcome the random initialisation trap.
Dendrograms: A dendrogram is a tree-like diagram used to visualize the relationships among clusters.
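A minimal sketch of plotting one (my own example with SciPy, Matplotlib, and random 2-D points, not the data from the video):

# Plot a dendrogram from agglomerative (bottom-up) linkage
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.random.RandomState(0).rand(20, 2)   # 20 random 2-D points (hypothetical)
Z = linkage(X, method='ward')              # merge history of the hierarchical clustering
dendrogram(Z)                              # tree-like diagram showing how clusters merge
plt.xlabel('Sample index')
plt.ylabel('Merge distance')
plt.show()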
Thank you so much sir.
Sir, please complete the statistics playlist in Hindi.
Hello Krish, if possible, can you record some sessions on Hidden Markov Models and the Expectation-Maximization algorithm?
Sir, please share the board PDF as you shared in the ML playlist; it will be very helpful for us.
Great work you are doing, Krish. Please tell me about the whiteboard app that you use; it will be a great help.
Sir, kindly provide a lecture on ANN also.
K-means for large datasets and hierarchical for small datasets.
Sir, when will the end-to-end data science projects with DVC and MLOps included come out? I really need it, sir. If possible, please bring it as soon as possible.
Hi, I am unable to get the PPT or notes of the classes.
How do you define a small and a large dataset?
@krishnaikhindi please help me understand
For K=3, how will perpendicular lines divide the clusters? There will be 6 clusters, right?
No bro, when dividing it will be like a one-vs-all method, i.e. consider the 1st k value: one side will belong to the cluster of this point and the other side will belong to the other groups.
Seriously, bhai!!!!!! When someone like this teaches, studying becomes fun... the government college teachers just waste their time chatting with the female faculty...
score = silhouette_score(x_train, labels=kmeans.labels_)
This line throws an error. Please guide.
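In case it helps, here is a self-contained version that should run (with my own make_blobs toy data standing in for the video's x_train). As far as I know the labels= keyword itself is valid in recent scikit-learn, so the error is more likely a missing import, an unfitted model, or a non-numeric x_train:

# Self-contained silhouette_score sketch (hypothetical toy data, not the video's dataset)
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

x_train, _ = make_blobs(n_samples=300, centers=3, random_state=42)
kmeans = KMeans(n_clusters=3, random_state=0).fit(x_train)   # fit before reading kmeans.labels_
score = silhouette_score(x_train, labels=kmeans.labels_)     # needs 2-D numeric x_train and >= 2 clusters
print(score)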
The notes for the last few videos are missing.
Can you please share the whiteboard app you use?
Have these slides been uploaded to GitHub?
For a huge dataset, K-means.
Please use yellow color instead of red.
It's hard.
Stay hard
But how do we find the min pts value..?
Can I get this notepad PDF???
Sir, please make a playlist on Django.