Something went wrong while using pd.crosstab! So the updated confusion matrices are as follows -
At 7:50
The correct confusion matrix is
92303 14
1535 135
At 10:30
The correct confusion matrix is
93798 41
40 108
Sorry for the mistake :)
Why are we using "random_state=12"?
@sahubiswajit1996 It's just his preference; fixing the seed makes the randomness reproducible, so you get the same result every run.
When we apply SMOTE, the number of samples doesn't change. But as you explained, if we are adding synthetic samples, the number of training examples should also increase, right?
@sahubiswajit1996 you can use any number.
Hi Bhavesh,
Very good explanation. I was particularly confused about implementing SMOTE on the main data. But I guess you're correct that we must implement SMOTE on training data.
Thank You
Thank you Bhavesh ❣️❣️. You taught without making it boring 👏🏻👏🏻👏🏻 Excellent!
I started watching the undersampling video for a problem and ended up watching the full series because of how well explained they are. Glad I discovered your channel! Wish I had sooner xD
Glad it was helpful!
Not only did you explain it really well, the illustrations were perfect for a beginner to understand what oversampling means. Thank you :)
Glad it was helpful!
You have no idea how helpful that was
Thank you so much :)
Most helpful and professional video I found on SMOTE. Thanks a lot!
I'm glad you like it
I'll come back to this video. Seems helpful!
Thank you sir for giving a wonderful lecture. Can you tell me how to set a sampling ratio of my choice instead of 1:1 using SMOTE?
Hi, you used only two target classes, 0 and 1. How do we do this with more than two? Suppose target 1 has around 2000 samples, target 2 around 200, target 3 around 11, and so on.
Thank you for this video. Understood SMOTE very well. Please make videos more often. How do you explain things so effortlessly with such clarity? Where does this clarity come from? Great job!
Thank you! Will do!
Your handwriting is pretty. Thanks for the explanation once again. Cheers!
Even I have this doubt:
Hi, you used only two target classes, 0 and 1. How do we do this with more than two? Suppose target 1 has around 2000 samples, target 2 around 200, target 3 around 11, and so on.
arxiv.org/pdf/1106.1813.pdf - check out the algorithm; the number of neighbours does matter.
Very well explained Thank you. Especially appreciated the explanation of nearest neighbor
This is very well done :) Nothing overly flashy and yet very clear.
Glad you enjoyed it
Very good explanation. But can we use this method for a multiclass problem? Also, does SMOTE lead to overfitting issues?
Thank you ! Simple and clear explanation
Glad it was helpful!
I have a categorical dependent variable with 3400 records in which the distribution of 0s and 1s is 2677 and 723 respectively. Will this be considered an imbalanced dataset? Or would it only be considered imbalanced if the 1s were less than 5% of the total records? Kindly clarify the doubt.
When I tried to set the SMOTE ratio, I got "invalid ratio parameter for SMOTE". Can you help?
What if there are more than 2 classes? In your video, Sir, there are only 2 classes. For example, I want to use 3 classes. How can I implement 3 classes in Python using SMOTE? Thank you, Sir.
Hi Bhavesh, very nicely explained. Can you please point me to the literature behind these examples? Thanks.
Thanks, Bhavesh!
Glad you enjoyed it
If we want to normalize the data as well, should we do it before applying SMOTE?
Excellent explanation!
I'm glad you liked it
Thank you so much for the great explanation!
Glad it was helpful!
When the final ratio came out to be 0.005, doesn't it imply that we are going to generate a very small number (0.005 * majority) of samples for the minority class? How will the number of minority-class samples ever be equal to that of the majority class?
Lovely Explanation! Thank you!
Kindly tell me: I have an imbalanced dataset with 5 classes. Will SMOTE work for a multi-class dataset?
6:20 Which library did you import before declaring the SMOTE() class?
I don't understand how we draw inferences from the ROC AUC. What are we seeing there, and what values are plotted?
While fitting the training dataset after tuning hyperparameters with GridSearchCV, why did you use the X_train and y_train datasets and not X_train_res and y_train_res?
How can we overcome the problem of class overlap when using SMOTE?
Looks like class weights also don't work with SMOTE. Is there an alternative way to test different weights?
So the idea of treating SMOTE's ratio parameter as a hyperparameter is to ensure we get better results, is that correct? In general, is it a good option to make SMOTE's ratio a hyperparameter rather than fixing it to 1?
Cello Pointec pen - brought back childhood memories :)
Can you please tell how SMOTE can be applied to streaming data, in a test-then-train framework?
Thank you for this video! 2 thumbs up! Question - at 4:06 you selected KNN = 3 but I didn't see you applying that concept in the code section. Can you please elaborate on where you set KNN as 3 in the code section? Did I misunderstand something?
When the number of neighbours is not specified, the default is 5.
In your crosstab function you have y_test[target]. What is that? Why is target used to index the y_test object?
Great Explanation....👏
The final ratio for the final model after GridSearchCV was SMOTE = 0.005. Does that imply the ratio (minority class / majority class) = 0.005? Then how is the minority class getting oversampled to an equal proportion with the majority class?
Can I apply sampling to the test set too, because it's also very imbalanced? Please reply.
I have a sample of only 28. Unfortunately I don't have more samples. Will SMOTE work? Secondly, which logistic regression should be used: sklearn or statsmodels? Both give different results. Please help.
Thanks for explaining with notes; it helped me a lot.
Hello Sir !
Could you please describe how the SMOTE technique can be used to balance image data?
very informative video, simple and to the point keep it up
Glad you liked it!
Getting errors such as:
__init__() got an unexpected keyword argument 'ratio'
AttributeError: 'SMOTE' object has no attribute 'fit_sample'
Can SMOTE be used for Multi label classification dataset ?
Thank you
Hi Bhavesh, could you please confirm: to ensure the oversampling method doesn't reduce the accuracy of the model, should we always use hyperparameter tuning, or is there some other method to undo the damage of oversampling in logistic regression for attrition prediction?
Nice content! I would like to compare some oversampling techniques. Can you please help me get a from-scratch implementation of SMOTE, not the packaged one? Thanks.
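A minimal from-scratch sketch of the core SMOTE idea (this is an illustration, not the imbalanced-learn implementation; `smote_sketch` is a made-up helper name): for each synthetic point, pick a minority sample, pick one of its k nearest minority neighbours, and interpolate between the two.

```python
# Bare-bones SMOTE: random interpolation between minority neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sketch(X_min, n_synthetic, k=5, seed=0):
    """Generate n_synthetic samples from minority-class points X_min."""
    rng = np.random.RandomState(seed)
    # k + 1 because each point's nearest neighbour is itself
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    samples = []
    for _ in range(n_synthetic):
        i = rng.randint(len(X_min))        # a random minority point
        j = idx[i][rng.randint(1, k + 1)]  # one of its k neighbours
        gap = rng.rand()                   # interpolation factor in [0, 1)
        samples.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(samples)

X_min = np.random.RandomState(0).rand(10, 2)
synth = smote_sketch(X_min, n_synthetic=40)
print(synth.shape)  # (40, 2)
```

Every synthetic point lies on a segment between two real minority points, which is why SMOTE never extrapolates outside the minority region.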
With SMOTE, can we achieve a higher F1 in practice? I saw that the F1 was around 0.72.
After generating the synthetic data, in which situations is this data useful? Are there any limitations of this type of data?
True positives are 0 in the confusion matrix (by the formula, precision and recall should both be zero). So how did you get those great numbers (over 70%)?
Please read the pinned comment!
@@bhattbhavesh91 I like your videos. :)))
You are DOPE, brother, and by that I mean you're really good! You explained the important stuff, like applying SMOTE only on the train set, beautifully! Really great!
Nice explanation
Can you tell me if I should do scaling before or after SMOTE?
Quite interesting! Thanks for the lesson.
Glad you liked it!
Do you need to remove outliers from the dataset before applying SMOTE?
can u elaborate with a random forest algorithm in google colab?
Thanks a lot. You make it so simple :) Liked and subscribed, bro.
Thanks and welcome
Hi, what do we do if we have a balanced dataset but still want to increase the number of rows?
How do I split my data into training and testing if my data is imbalanced?
Please start a playlist for beginners to learn AI ,ML please
Sure!
Thanks for teaching new stuff.☺
Hey, when I try using make_pipeline(SMOTE(), SVC()) it gives me this error:
All intermediate steps should be transformers and implement fit and transform or be the string 'passthrough' 'SMOTE(k_neighbors=5, kind='deprecated', m_neighbors='deprecated', n_jobs=1, out_step='deprecated', random_state=None, ratio=None, sampling_strategy='auto', svm_estimator='deprecated')' (type ) doesn't
What's going wrong here?
The SMOTE function has changed since I created this video! Please refer to the documentation!
thank you so much - very informative video
Glad it was helpful!
Thank you so much Sir
Most welcome
Thank you sir !
Most welcome!
Shouldn't it be generate_auc_roc_curve(pipe, X_test)? If not, Bhavesh bhai, can you or anyone else please explain?
Good work man! Thanks
Glad it helped!
Well explained
Thank you!
Very well explained sir!!!
Hi, can you please tell us how to use SMOTE on time series and sequential data?
you are a google search away for an answer!
Realy thanks♥️
You're welcome 😊
Can we apply SMOTE to the target column in a dataset?
At the end of the video, how did all 4 metrics score above 70% if the model did not correctly predict any of the samples classified as 1? There were 0 true positives and 63 false negatives!
You are great bro
Sir, could you please make a video on outlier detection?
I have already created a video on outlier detection.
Link - ua-cam.com/video/2Qrost474lQ/v-deo.html
If we use SMOTE in the pipeline, does it only upsample during training, or also on the test data when we call predict? Thanks
Can SMOTE only be used with logistic regression, or with any classification model?
any classification algorithm!
Can you please share the notebook with us using google colab?
Nice explanation
how does smote work with categorical data?
Really helpful
I got this error when trying to run SMOTE:
__init__() got an unexpected keyword argument 'ratio'
Any clues?
Thank you
You must have figured it out by now. I'm only a student. The ratio parameter has been deprecated, as the video is 1 year old.
Try using this: sm = SMOTE(random_state=42, sampling_strategy='minority')
Thanks Gurunath for sharing this!
The SMOTE ratio parameter is deprecated, and my imbalanced dataset's sklearn classification_report still shows an imbalance in the support column even after applying SMOTE.
The SMOTE function has changed since I created this video! Please refer to the official documentation!
Hi Bhavesh, I used your SMOTE code but I'm getting a ratio error, i.e. "invalid parameter ratio for estimator SMOTE". How do I resolve this?
I guess the function has changed! Do have a look at the documentation to learn more about it!
Is the ROC AUC curve used again here?
Good work bro.. thank you
very well explained sir thank you
You are welcome
Thanks 👍
Welcome 👍
Hi~can you share the data set
How to handle extremely imbalanced data for a regression problem?
Smote__ratio is not a parameter of SMOTE. Help me out please!
The SMOTE function has changed since I created this video! Please refer to the official documentation!
Does smote guarantee to improve classifier performance ?
Nope! It doesn't; it only upsamples your data by generating artificial samples! How well the model performs depends on how well separated your classes are!
Getting an error: ValueError: Unknown label type: 'continuous-multioutput'
you are a google search away for an answer!
@@bhattbhavesh91 lol that's right 😂
Nice
Perfection
What is the use of defining random_state?
ua-cam.com/video/c249O4giblM/v-deo.html
Lovelyyyyyyy
Thanks