Something went wrong while using pd.crosstab! So the updated confusion matrices are as follows -
At 7:50
The correct confusion matrix is
92303 14
1535 135
At 10:30
The correct confusion matrix is
93798 41
40 108
Sorry for the mistake :)
Why are we using "random_state=12"?
@@sahubiswajit1996 It's just his preference; fixing it makes the random results reproducible.
When we apply SMOTE, the number of samples doesn't change. But as you explained, if we are adding synthetic samples, the number of training examples should also increase, right?
@@sahubiswajit1996 you can take any number
Hi Bhavesh,
Very good explanation. I was particularly confused about implementing SMOTE on the main data. But I guess you're correct that we must implement SMOTE on training data.
Thank You
You have no idea how helpful that was
Thank you so much :)
Thank you Bhavesh ❣️❣️. You taught without ever being boring 👏🏻👏🏻👏🏻 Excellent!
Not only did you explain really well, the illustrations were perfect for a beginner to understand what oversampling means. Thank you :)
Glad it was helpful!
Most helpful and professional video I found on SMOTE. Thanks a lot!
I'm glad you like it
I started watching the undersampling video for a problem and ended up watching the full series because of how well explained they are. Glad I discovered your channel! Wish I had sooner xD
Glad it was helpful!
I'll come back to this video. Seems helpful!
Your handwriting is pretty. Thanks for the explanation once again. Cheers!
This is very well done :) Nothing overly flashy and yet very clear.
Glad you enjoyed it
Hi, you used only two targets, 0 and 1. How do we handle more than two? Suppose target 1 has around 2000 samples, target 2 around 200, target 3 around 11, and so on.
Thank you sir for giving a wonderful lecture. Can you tell me how I can set a sampling ratio of my choice instead of 1:1 using SMOTE?
6:20 Which library did you import before declaring the SMOTE() class?
I have the same doubt:
Hi, you used only two targets, 0 and 1. How do we handle more than two? Suppose target 1 has around 2000 samples, target 2 around 200, target 3 around 11, and so on.
arxiv.org/pdf/1106.1813.pdf - check out the algorithm; the neighbours do matter.
Thanks, Bhavesh!
Glad you enjoyed it
Very well explained, thank you. I especially appreciated the explanation of the nearest neighbours.
Thank you for this video. I understood SMOTE very well. Please make videos more often. How do you explain things so effortlessly with such clarity? Where does this clarity come from? Great job.
Thank you! Will do!
Lovely Explanation! Thank you!
Thank you ! Simple and clear explanation
Glad it was helpful!
Hi Bhavesh, very nicely explained. Can you please point me to the literature for these examples? Thanks.
Quite interesting! Thanks for the lesson.
Glad you liked it!
Thank you for this video! 2 thumbs up! Question - at 4:06 you selected KNN = 3 but I didn't see you applying that concept in the code section. Can you please elaborate on where you set KNN as 3 in the code section? Did I misunderstand something?
When KNN is not stated, the default is 5.
Excellent explanation!
I'm glad you liked it
Great Explanation....👏
Here, while fitting the training dataset after tuning hyperparameters using GridSearchCV, why have you used X_train and y_train and not the X_train_res and y_train_res datasets?
When I tried to set the SMOTE ratio, I got "invalid ratio parameter for SMOTE". Can you help?
Thank you so much for the great explanation!
Glad it was helpful!
If we want to normalize the data as well, should we do it before applying SMOTE?
I have a categorical dependent variable with 3400 records, in which the distribution of 0s and 1s is 2677 and 723 respectively. Will this be considered an imbalanced dataset? Or would it only be considered imbalanced if the 1s were less than 5% of the total records? Kindly clarify this doubt.
The Cello Pointec pen brought back childhood memories :)
Very informative video, simple and to the point. Keep it up!
Glad you liked it!
Nice explanation
Looks like weights are also not working with SMOTE. Is there an alternative way to test different weights?
Thanks for explaining with notes; it helped me a lot.
Thanks for teaching new stuff.☺
Hello Sir !
Could you please describe how the SMOTE technique can be used to balance image data?
Very good explanation. But can we use this method for a multiclass problem? Also, does SMOTE lead to overfitting issues?
How can we overcome the problem of class overlap when using SMOTE?
Very well explained sir!!!
Kindly tell me: I have an imbalanced dataset with 5 classes. Will SMOTE work for a multi-class dataset?
You're really good, brother! You explained the important stuff, like applying SMOTE only on the train set, beautifully! Really great!
Can you please tell how SMOTE can be applied to streaming data, in a test-then-train framework?
Nice content! I would like to compare some oversampling techniques. Can you please help me get a from-scratch implementation of SMOTE, not the packaged one? Thanks.
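For comparison purposes, here is a minimal from-scratch sketch of the core SMOTE idea (interpolating between a minority sample and one of its k nearest minority neighbours). It is a simplification for illustration, not the packaged imbalanced-learn implementation, and the function name is made up.

```python
import numpy as np


def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    # Pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point from its own neighbours
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k nearest neighbours
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))       # pick a random minority sample
        j = nn[i, rng.integers(k)]         # pick one of its k neighbours
        gap = rng.random()                 # random point along the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)


# Tiny demo on random 2-D minority data
X_min = np.random.default_rng(0).normal(size=(20, 2))
new = smote_oversample(X_min, n_new=30, k=3, rng=1)
print(new.shape)  # (30, 2)
```

Every synthetic point lies on a line segment between two real minority points, which is the defining property of SMOTE.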
When the final ratio came out to be 0.005, doesn't it imply that we are going to generate a very small number (0.005 * majority) of samples for the minority class? How will the number of minority-class samples ever equal that of the majority class?
Thanks a lot. You make it so simple :) Liked and subscribed, bro.
Thanks and welcome
Good work man! Thanks
Glad it helped!
In your crosstab function you have y_test[target]. What is that? Why is target used to index the y_test object?
So the idea of treating the ratio parameter in SMOTE as a hyperparameter is to ensure we get better results, is that correct? In general, is it a good option to make the SMOTE ratio a hyperparameter rather than fixing it to 1?
thank you so much - very informative video
Glad it was helpful!
What if there are more than 2 classes? In your video, Sir, there are only 2 classes. For example, I have 3 classes. How can I apply SMOTE to 3 classes in Python? Thank you, Sir.
Do you need to remove outliers from the dataset if you use SMOTE?
Hi, what do we do if we have a balanced dataset but still want to increase the number of rows
I don't understand how we draw inferences from the ROC AUC curve. What are we seeing there, and which values are plotted?
I have a sample of only 28. Unfortunately I don't have more samples. Will SMOTE work? Secondly, which logistic regression should be used, sklearn or statsmodels? Both give different results. Please help.
Hi Bhavesh, could you please confirm: in order to ensure the oversampling method doesn't reduce the accuracy of the model, should we always use hyperparameter tuning, or is there some other method to undo the damage of oversampling in logistic regression for attrition prediction?
Well explained
Thank you!
After generating the synthetic data, in which kinds of situations can this data be useful? Are there any limitations to this type of data?
How do I split my data into training and testing if my data is imbalanced?
Can you tell me whether I should do scaling before or after SMOTE?
Can I apply sampling to the test set too, because it's also very unbalanced? Please reply.
Hey, when I try using make_pipeline(SMOTE(), SVC())
it gives me an error :
All intermediate steps should be transformers and implement fit and transform or be the string 'passthrough' 'SMOTE(k_neighbors=5, kind='deprecated', m_neighbors='deprecated', n_jobs=1,
out_step='deprecated', random_state=None, ratio=None,
sampling_strategy='auto', svm_estimator='deprecated')' (type ) doesn't
What's going wrong here?
The SMOTE function has changed after I created this video! Please refer to the documentation!
Can SMOTE be used for Multi label classification dataset ?
Thank you
Good work bro.. thank you
Realy thanks♥️
You're welcome 😊
You are great bro
Thank you sir !
Most welcome!
Thank you so much Sir
Most welcome
The final ratio for the final model after GridSearchCV was SMOTE = 0.0005. Does that imply that the ratio (minority class / majority class) = 0.005? Then how is the minority class getting oversampled to an equal proportion with the majority class?
Hi, can you please tell me how to use SMOTE on time series and sequential data?
You're a Google search away from an answer!
With SMOTE, can we achieve a higher F1 in practice? I saw that F1 was around 0.72.
Can we apply SMOTE to the target column in a dataset?
Sir, could you please make a video on outlier detection?
I have already created a video on outlier detection.
Link - ua-cam.com/video/2Qrost474lQ/v-deo.html
The true positives are 0 in the confusion matrix (by the formulas, precision and recall should both be zero). So how did you get that great number (over 70%)?
Please read the pinned comment!
@@bhattbhavesh91 I like your videos. :)))
Nice explanation
Really helpful
how does smote work with categorical data?
Getting errors like:
__init__() got an unexpected keyword argument 'ratio'
AttributeError: 'SMOTE' object has no attribute 'fit_sample'
Can SMOTE only be used with logistic regression, or with any classification model?
any classification algorithm!
Shouldn't it be generate_auc_roc_curve(pipe, X_test)? If not, Bhaveshbhai, can you or anyone else explain please?
Can you elaborate with a random forest algorithm in Google Colab?
Is the ROC AUC curve used again here?
The SMOTE ratio parameter is deprecated; my imbalanced dataset's sklearn classification_report still shows an imbalanced support column even after applying SMOTE.
The SMOTE function has changed after I created this video! Please refer to the official documentation!
if we use smote in the pipeline, is it only upsampling on training or also on testing when we call predict? Thanks
Please start a playlist for beginners to learn AI ,ML please
Sure!
Thanks 👍
Welcome 👍
I got this error when trying to run SMOTE:
__init__() got an unexpected keyword argument 'ratio'
Any clues?
Thank you
You must have figured it out by now; I'm only a student. It was deprecated, as the video is a year old.
Try using this: sm = SMOTE(random_state=42, sampling_strategy='minority')
Thanks Gurunath for sharing this!
very well explained sir thank you
You are welcome
Hi Bhavesh, I used your SMOTE code but I'm getting an error about ratio, i.e. "invalid parameter ratio for estimator SMOTE". How do I resolve this?
I guess the function has changed! Do have a look at the documentation to learn more about it!
Can you please share the notebook with us using google colab?
Hi~can you share the data set
Nice
Does smote guarantee to improve classifier performance ?
Nope! It doesn't, it only upsamples your data by generating artificial samples! How good the model performs depends on how well your classes are apart!
Perfection
Getting an error: ValueError: Unknown label type: 'continuous-multioutput'
You're a Google search away from an answer!
@@bhattbhavesh91 lol that's right 😂
At the end of the video, how did all 4 metrics score above 70% if the model didn't correctly predict any of the samples classified as 1? There were 0 true positives and 63 false negatives!
Thanks
What is the use of defining random_state?
ua-cam.com/video/c249O4giblM/v-deo.html
smote__ratio is not a parameter of SMOTE; please help me out.
The SMOTE function has changed after I created this video! Please refer to the official documentation!
How do you handle extremely imbalanced data for a regression problem?
Lovelyyyyyyy