How do we tackle data imbalance in a multi-level classification problem? Any links describing this in R would be a great help! For example, if the dataset is skewed (target variable: class1 ~ 100 samples, class2 ~ 1,000 samples, class3 ~ 10,000 samples, class4 ~ 20,000 samples).
Hello sir, I got stuck at the random forest step. It shows me an error of this kind: "Error in randomForest.default(m, y, ...): NA/NaN/Inf in foreign function call (arg 1). In addition: Warning message: In data.matrix(X): NAs introduced by coercion."
Hello sir, how is this method different from using cross-validation in the caret package? Do I still need to do this if I intend to improve generalization using cross-validation?
Cross-validation helps with generalization. Addressing class imbalance helps give proper weight to each class of the categorical dependent variable. So they serve different purposes.
Sir, should our accuracy be higher or lower? If accuracy is higher, does it again affect sensitivity or specificity? Q2: we do oversampling or undersampling to make our "0" or "1" predictions more accurate, but then it impacts the other class. Is that not an issue?
For accuracy, higher is always better. But there may be situations where sensitivity or specificity is more important, and in those cases trying to improve them may lead to lower overall accuracy.
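The trade-off in this reply is easy to see numerically. Below is an illustrative Python snippet; the confusion-matrix counts are invented for the example, and class "1" is taken as the positive level, as in the video:

```python
# Illustrative only: metrics from a 2x2 confusion matrix,
# with class "1" treated as the positive level.
# The counts below are made up for the example.
tp, fn = 20, 40   # actual 1: predicted 1 / predicted 0
tn, fp = 50, 10   # actual 0: predicted 0 / predicted 1

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # overall correct rate
sensitivity = tp / (tp + fn)                    # recall on class 1
specificity = tn / (tn + fp)                    # recall on class 0

print(accuracy, sensitivity, specificity)
```

With these counts, accuracy looks respectable (about 0.58) while sensitivity is poor (about 0.33), which is exactly the situation rebalancing is meant to address.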
Hi Sir, what if we simply sample a number of observations from the majority class equal to the minority class, without disturbing the minority class and without using the ROSE package?
@@bkrai But I think it isn't really a big deal, because it prevents data leakage. I think time series has this problem rather than classification. And there are even papers where the authors have used undersampling before splitting?
Thank you so much for this tutorial. I followed these steps on my dataset, but at the end I got the same confusion matrix for the train, under, over, and both data, and my accuracy, kappa, sensitivity, specificity, etc. are all 1, while the McNemar's test p-value is NA. Sir, could you please help me correct this?
Thanks for this video sir! In your video "Logistic Regression with R: Categorical Response Variable at Two Levels (2018)" we converted rank into a factor as well. After doing so, my accuracy comes out to be 1 in all cases of under-, over-, both, and random sampling. Kindly clear my doubt: why didn't we convert rank into a factor in this video, and why does just converting it into a factor give an accuracy of 1?
Sir, thank you so much for the video and for sharing your expertise. One question I have: should SMOTE be performed before feature selection on imbalanced data? Please answer.
Sir, I have a random forest model. The dataset is in a CSV file. If I want to make a web interface and deploy it, what should I do? Is storing the dataset in a database like MySQL mandatory? I want users to enter values through four text/input boxes and get the predicted result as text.
Well, thank you for the explanation. It's the first time I have used this package, and I don't know the difference between using rose() to balance the data and using ovun.sample(). What I'm looking for is to balance my data using ROSE from Menardi and Torelli.
@@bkrai Thank you for your response, sir. I had made one mistake, which was the reason I was getting that error. I watched your video carefully and resolved it. Thank you, sir.
Can we use ROSE when we have 100 classes? I think ROSE is only for 2 classes, in your case 0 and 1. How do we do oversampling when we have many imbalanced classes? Many thanks, Kevin
Thank you, Prof., for this video. I am trying to adapt this approach to my dataset but I have been getting an incorrect data type error: "Error in terms.formula(formula, data = frml.env) : 'data' argument is of the wrong type" when I run this: ovrf
Sir, I am using network attack data where I have 3 levels in the response variable, so while using the ovun.sample function I get the error that the response must have 2 levels. Please help me with this.
@@bkrai I do have one question. You apply the sampling techniques (over, under, both, ROSE) only on the train data, build the model, and validate on the test data. Why are you not applying a sampling technique to the test data? Is there no need to balance the test data as well before validating the model on it?
@@bkrai Thanks for your reply. I have read that SMOTE is also used to handle imbalanced data. I have a few questions, and I would be thankful if you reply: 1. Do ROSE and SMOTE work similarly (I mean the internal calculation)? If not, which one is better? 2. Which of ROSE and SMOTE would you prefer? 3. Do you have any video on SMOTE?
They are slightly different. ROSE uses smoothed bootstrapping to draw artificial samples from the feature-space neighbourhood around the minority class. SMOTE, on the other hand, draws artificial samples by choosing points that lie on the line connecting a rare observation to one of its nearest neighbours in the feature space.
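The geometric idea behind SMOTE described in this reply can be shown with a toy sketch. This is a language-neutral illustration in Python, not the actual ROSE or SMOTE implementation, and the feature vectors are made up:

```python
import random

# Toy sketch of the SMOTE idea: a synthetic point is placed on the
# segment between a minority observation and its nearest minority
# neighbour. Not the actual ROSE/SMOTE implementation.
random.seed(1)

minority = [(2.0, 3.0), (2.5, 3.5), (3.0, 2.8)]  # made-up feature vectors

def smote_point(points):
    a = random.choice(points)
    # nearest other minority point to a (squared Euclidean distance)
    b = min((p for p in points if p != a),
            key=lambda p: (p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2)
    t = random.random()  # random position along the segment
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

new = smote_point(minority)
```

Because the synthetic point is a convex combination of two existing minority points, it always stays inside the region the minority class already occupies; ROSE's smoothed bootstrap instead perturbs a minority point with random noise, so it can fall slightly outside that region.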
Hi sir, great videos; the various R-related ones are helping me a lot. I need help with finance and fraud analytics. Could you please post some finance-domain courses?
I have data where I have to classify gender based on websites visited, but some websites are visited by both males and females, meaning both 0 and 1.
I like the way your lectures are so crisp. They give a first-hand experience to those looking to learn these techniques hands-on.
Thanks for the feedback!
Too Good a Lecture. Thank You Dr. Rai
You're most welcome!
Thanks a lot! You helped me in econometrics class. From Brasil.
You're welcome 😊
Sir, you have so many great videos; they are increasing my knowledge every day. Thank you so much. You are the best professor of statistics I have ever come across.
Thanks!
Towards the end, I paused to see where the music was coming from... it started way too early.
This video was the answer to all my questions!! explained so well.. Thank you
+Sargam Gupta 🙂
I love your videos, thanks. Just one point: ROSE and over/under-sampling are two different approaches. The former is based on bootstrapping; the latter are more traditional. You used the traditional approaches here. Besides, a 30% success rate is not a "rare event". It would be better to use a dataset with a 5% or lower success rate.
Thanks for the feedback!
Can we say that we have imbalanced data when the success event occurs at a 5% or lower rate?
Great explanation sir. Explaining it down to the minute details with very simple examples is an awesome feature you have. I would appreciate it if you could continue this journey with more important topics.
Thanks for your feedback! You can find some useful playlists on the channel. Here is one example:
ua-cam.com/play/PL34t5iLfZddu8M0jd7pjSVUjvjBOBdYZ1.html
Thanks, sir, for your valuable lectures. You indeed teach with practical work in R. May you always be happy and live long.
You are most welcome!
Thank you Dr. Rai for sharing this video.
You are welcome!
Excellent explanations!!!! plz make more videos on machine learning
Classification and Prediction with R: you can find some machine learning lecture videos in the Statistical & Machine Learning Methodologies playlist:
ua-cam.com/play/PL34t5iLfZddu8M0jd7pjSVUjvjBOBdYZ1.html
Hello Dr. Bharatendra, first, thank you for the explanation; your English is easy to understand. In this case, I don't know why the ROSE function doesn't work for me: when I run the line, for example, to oversample the train set, the variable 'over' is NULL (empty). But I can solve this with the caret package.
If caret works, that's fine too.
Hello Sir, thank you so much for such a nice video. I just wanted to know: the step you used for synthetic data, that process is SMOTE, right?
ROSE and SMOTE work slightly differently. But both help to address class imbalance problem.
Thanks Sir for your prompt action.
Thanks for this awesome video sir. I have a few doubts:
1. How can we set some attributes to keep rank and gpa within their possible ranges in the synthetic data? How do we write that condition?
2. What's the difference between doing over- and under-sampling together and the synthetic data we prepared at the end?
3. Why have we used positive = '1' in the rf formula? I haven't seen that in your previous videos.
1. If it is not in the algorithm, you can do it manually before developing the model.
2. Together it does oversampling where the class has fewer cases and undersampling where the class has more cases.
3. That indicates which level of the response we are more interested in.
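As a rough illustration of what the combined method in point 2 does, the sketch below samples a made-up minority class up with replacement and the majority class down, aiming for N total cases with share p of class 1. The names N and p mirror the ovun.sample arguments; everything else is invented, and this is not the package's actual code:

```python
import random

# Rough sketch of the "both" method: oversample the minority class
# (with replacement) and undersample the majority class, so the
# combined set has N cases with roughly share p of class 1.
random.seed(7)

zeros = list(range(70))        # 70 majority cases (class 0)
ones  = list(range(70, 100))   # 30 minority cases (class 1)
N, p = 100, 0.5                # target size and share of class 1

n_ones  = round(N * p)
n_zeros = N - n_ones
both = (random.choices(ones, k=n_ones) +   # oversample minority
        random.sample(zeros, k=n_zeros))   # undersample majority
```

The result here has 50 cases of each class, even though the original data had a 70/30 split.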
Thank-you for this video! I've watched a number of your videos and they make things so straightforward and easy to pick up. Is there any way to tweak this method for dealing with a factor with more than two levels - I'm looking at 9 different levels and keep on getting errors with the function shown in this video.
You can take subsets with 2 levels at a time where class imbalance is present and apply this method. And finally you can combine your data.
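The "two levels at a time" workaround in this reply can be sketched as follows. This is a toy Python illustration; balance_pair is a hypothetical stand-in for ovun.sample or ROSE, and the data are made up:

```python
import random

# Sketch of the pairwise workaround: pull out the imbalanced pair of
# classes, balance just that subset, then recombine with the rest.
# balance_pair is a stand-in for ovun.sample / ROSE, not their real code.
random.seed(3)

data = ([("a", i) for i in range(50)] +   # 50 cases of class a
        [("b", i) for i in range(10)] +   # 10 cases of class b (minority)
        [("c", i) for i in range(48)])    # 48 cases of class c

def balance_pair(rows, minority_label):
    minority = [r for r in rows if r[0] == minority_label]
    majority = [r for r in rows if r[0] != minority_label]
    # oversample the minority up to the majority count
    return majority + random.choices(minority, k=len(majority))

pair = [r for r in data if r[0] in ("a", "b")]   # the imbalanced pair
rest = [r for r in data if r[0] == "c"]
balanced = balance_pair(pair, "b") + rest
```

After recombining, classes a and b each have 50 cases while class c is untouched.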
Could you please share the code if you sorted out this problem? I am looking at 5 different levels and I am stuck continuing the project.
avinaykumar03@gmail.com
Thanks in Advance!
@@VinayKumar-jf7pr did you get an ans for this?
Thank you 1 million times...learnt a lot in 30 mins
Thanks for comments!
What if I have more than two categorical variables? I am getting this error when performing undersampling:
"Error in (function (formula, data, method, subset, na.action, N, p = 0.5, : The response variable must have 2 levels"
Please help me out. TIA
Sorry seeing this now. I hope you already figured out.
I believe this library can be used with most classifiers, such as logistic regression and SVM, and is not just limited to random forest?
That's correct!
Thanks!
Excellent video Sir... keep them coming... can you do videos with examples of various functions in Caret especially with large datasets and prediction with xgboost, e1071 packages...thanks
Thanks for the suggestions!
Hello Sir, very well explained. I just wanted to know: with all the sampling methods we got accuracy of no more than 60%. Will there be any problem with our model if we apply it to future data from the same dataset?
It can only help improve overall accuracy to some extent. Doing oversampling or undersampling when there is significant class imbalance does not guarantee very high accuracy, because that depends entirely on the data you are using.
This is very useful and very well explained 👍
Thanks for comments!
Quality videos! I appreciated! Very educative!
Thanks for comments!
Thank you so much Professor. Very lucidly explained, and you have kept the data and code available, which is very useful. I wanted to ask whether there are other ways of handling imbalanced classes, like cost-sensitive classifiers, etc.
Sorry seeing this now. I hope you already figured out.
Awesome video!
If you could let me know how to implement the same when the prediction model is a neural network, that would be great. Thank you.
For neural networks, you can use this link:
ua-cam.com/video/-Vs9Vae2KI0/v-deo.html
Again a beautiful explanation. Sir, I wish to ask: what if there is class imbalance in the validation dataset but not the training set? For example, suppose we want to evaluate a model developed using local responses to see whether it performs well globally or in another province, but we find the dataset's distribution to be highly skewed, giving rise to a class imbalance problem, and when we apply our model it gives lower kappa values. What is the best way out? I was reading the caret package details, where it is advised not to use up-/down-sampling on the validation dataset (but we have to show the model works).
It is only used for training data. Validation data represents unseen data that the model has to deal with. So validation data should be kept as it is.
Thank you for another great video sir!
Also, you mentioned under synthetic data that we can use 'attributes' to make sure that we don't go outside the range (for GPA and rank). Can you please touch upon these attributes?
Looking forward to hearing from you!
Thank you!
Sorry seeing this now. I hope you already figured out.
Sir, nicely explained. I have a doubt: if we have more than two classes, as in multinomial regression, the ROSE algorithm does not work. How do we rectify that error?
With more than 2 classes, choose 2 of them that need improvement and apply the method.
Hi, I want to ask you a question: can I use these methods only for constructing regression models and evaluating all the explanatory variables? Thanks
yes
Thanks for the nice video, sir.
Can we use this for a logit model as well, or only for random forest?
So we can improve accuracy by doing ovun.sample. One small doubt: in this dataset we have 70% of the data with "0" and 30% with "1", so we did ovun.sample and gained an increase in accuracy. What is the best proportion of 0s to 1s in order to get high accuracy? I mean, is there any benchmark, like 50-50 or 40-60 for 0s and 1s (or yes/no)?
Is it OK if overall accuracy is reduced in order to increase sensitivity? I used N=400 instead of 376, which gave 0=188, 1=212, so sensitivity=0.63. Can I do it like this?
Sorry seeing this now. I hope you already figured out.
What if there are more than 2 levels in the dependent variable? How do we deal with class imbalance there?
You can take subsets with 2 levels at a time where class imbalance is present and apply this method. And finally you can combine your data.
Sir, this ROSE package is only valid when you have 2 outcomes (0, 1). What if I am facing a multi-class imbalance problem where there are 4 outcomes (0, 1, 2, 3)? How do we handle such imbalance? Can you share with us?
Hi Mohit. Did you find a way to handle data imbalance in multi level classification problem?
Sorry seeing this now. I hope you already figured out.
The RF model is built using the balanced train dataset, but the prediction uses the unbalanced test dataset. Should we balance the test dataset as well?
No we should not balance test data as the model is already built.
Thank you sir for one more valuable lecture. Sir, can we do random oversampling (1:2), randomly selecting minority samples with replacement and adding them to the training dataset with a bootstrap?
Sir, I have a small doubt. What if we have a multinomial logit model? How do we partition the data then?
You can do two at a time.
Sir, can we use the ROSE/SMOTE methods for a target variable with more than 2 classes? If yes, could you please suggest what other parameters we should use? I tried the parameters mentioned in this video but get an error saying the class count is more than 2.
You can do it by doing 2 at a time.
@@bkrai Could you please explain it with a sample code to explain. We are predicting severity with levels 1
Prof Rai, your videos have been the best! Could you please do a video on XGBoost?
You can access it from this link:
ua-cam.com/play/PL34t5iLfZddu8M0jd7pjSVUjvjBOBdYZ1.html
Thank you for the explanation, sir. In this data we have factor levels 0 and 1. How do we handle the imbalance if we have more than 2 factor levels in the dataset?
You can do it two at a time and repeat.
Thanks for this video. How can we solve the class imbalance problem if we have a response variable with 3 classes? Thanks very much.
You can do it 2 at a time and select those 2 that have major imbalance problem.
Dr. Bharatendra Rai, but my mission is to classify 3 classes. For example, in the CTG dataset, all 3 classes have the same major imbalance. So how do I balance them?
You can make 2 classes that have lower frequency to match with class-1.
@@bkrai Yes, I get you now, Sir. I can label classes 2 and 3 (with lower frequencies) as 1 and Normal as -1, so I will end up with a 2-class variable. Thanks again, Dr.
Thanks for the update!
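The collapsing step agreed on in this thread can be sketched in a line (Python illustration with toy labels; the class names "rare" and "normal" are invented for the example):

```python
# Sketch of the collapsing step: the two low-frequency classes (2 and 3)
# are relabelled as one level so a two-class method can be applied.
labels = [1, 1, 1, 2, 3, 1, 2, 1, 1, 3]   # toy 3-class response

collapsed = ["rare" if y in (2, 3) else "normal" for y in labels]
```

In R the same idea is usually done by recoding factor levels; after collapsing, the over/under-sampling methods from the video apply directly.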
Thanks for the large font!
You are welcome!
Thank you for this video, brother. You use oversampling on the training data to bring both classes to 180, but this was only done on the training set, so what do you do to the test set?
I realise that you partition the data before employing the oversampling method, and you apply the oversampling to the training set, but what happens to the test set? I think it will also be imbalanced. Please, I need an explanation here. Thank you.
It should only be done for training data because the prediction model is based on that. The model is not based on test data. We use test data only for assessing the model. When a final model is deployed in practice, it is likely to come across data similar to test data.
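The order of operations in this reply can be sketched like this (Python illustration with made-up data): partition first, then balance only the training portion, leaving the test portion untouched:

```python
import random

# Order matters: partition first, then balance only the training part.
# The test split stays untouched so evaluation reflects real class ratios.
random.seed(11)

data = [(i, 0) for i in range(70)] + [(i, 1) for i in range(30)]
random.shuffle(data)
train, test = data[:80], data[80:]          # e.g. an 80/20 split

minority = [r for r in train if r[1] == 1]
majority = [r for r in train if r[1] == 0]
train_bal = majority + random.choices(minority, k=len(majority))
# test is deliberately left as-is
```

Oversampling before the split would let copies of the same observation land in both train and test, which leaks information and inflates the test metrics.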
@@bkrai Thanks so much for that great insight. I was thinking that because there was high class imbalance in the dataset, after partitioning the imbalance would very likely appear in both training and testing. So if the train data is oversampled to avoid the imbalance, the test data remains untouched, meaning it is still imbalanced, so the predictions will be high in one class and low in the other. I will implement this and see how it goes.
Thumbs up, bro. Your vids are inspiring.
I just did what you explained above, but my sensitivity was 96% and my specificity was just 20%. I realise that although the training set was oversampled, the test set still had class imbalance, because oversampling was done after partitioning. So prediction on my unseen data, the test set, was really biased: more patients were predicted as having the disease, but specificity was extremely poor. So I really don't understand how to avoid this scenario on the test data. Thanks again.
@@bkrai Thanks again. Yeah, we use test data to assess the model, but what if the test set is also highly imbalanced? Will this not affect our recall, specificity, etc.?
Assessment has to be with actual data even if there is high imbalance.
Let us suppose we have 205 instances of class 0 and we want to use the oversampling method, so the resulting oversampled data has 410 points across both classes. Is that acceptable, since the original data has only 400 instances? Thanks.
If you have 400 observations and 205 are class-0, you don't really have class imbalance problem.
Hello Sir, thank you for the video. It is really helpful! However, I still have a question. I tried to do undersampling and got an error that the response variable must have 2 levels:
"Error in (function (formula, data, method, subset, na.action, N, p = 0.5, : The response variable must have 2 levels."
So does it only work for 2 response levels, or might I be doing something wrong?
I have the same code as you.
under
If you have more than 2 levels, you can try 2 at a time.
@@bkrai Thank you for the feedback, sir! However, I still do not understand what you mean by trying 2 at a time. Could you be more specific? Thank you!
Sir, one more doubt: why have we made admit a factor, and not rank? And when do we apply normalization and standardization?
Which ML models do not require standardization and normalization? Kindly tell.
A student getting admission or not getting admission is not really a rank. It is a factor type of variable. Regarding normalization and standardization, you will see that they are addressed in each video where appropriate. You can refer to top 10 here:
ua-cam.com/play/PL34t5iLfZddsQ0NzMFszGduj3jE8UFm4O.html
Namaste!
According to you, which method is the best?
It depends on which method gives the best results.
So we increase the model's accuracy in predicting "1", as that was the question's interest. What about the predictors which have a larger influence on "admit"? How do we know which predictors are significant? Should we use logit regression for that?
That's correct. For statistical significance of predictors, you can rely on the logistic regression model.
Great video, sir. Is there any video regarding multi-label classification with a validation dataset, building the model on the training dataset and applying it only to the test dataset? Thank you, sir.
Try this:
ua-cam.com/play/PL34t5iLfZddvv-L5iFFpd_P1jy_7ElWMG.html
Hi Sir,
Wanted to check: if in a logistic regression problem both the dependent variable and an independent variable are dichotomous in nature, and there is imbalanced data in both cases, then what is the best way to treat the imbalance present in both the IV and the DV?
DV may have imbalance because of IV. If you focus on IV, that should be enough.
So, the class imbalance problem should be treated only when we want to predict a class with fewer instances against a class with more instances? Whereas when we predict the class with more instances, we do not have a class imbalance problem and should continue with our prediction. Am I right?
No. It is still needed in the case you described.
@@bkrai But why do I get lower accuracy and lower sensitivity when I apply over-, under-, and ROSE sampling? There is no improvement.
Hey,
thanks for your video. You explained things very well! As I am doing regression on a multiclass factor (starting point: a polr model), my question somewhat differs.
My outcome is a 10-class (Likert scale) survey question, from 1 ("something is never justified") to 10 ("... is always justified"), so ordered factor levels (1:10).
The density plot shows that levels >5 have near-zero density. The prior literature (looking at the same survey question) recoded it to a 4-class factor (also ordered) by "collapsing" the levels.
Look at number of data points at each level and if you find some classes have very few data points, then it may be a good idea to group some categories.
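In base R, assigning duplicate level names merges levels, which is one way to do that grouping (the 10-point Likert factor below is simulated):

```r
set.seed(1)
# Simulated 10-point Likert response with few observations above 5
x <- factor(sample(1:10, 200, replace = TRUE,
                   prob = c(rep(0.18, 5), rep(0.02, 5))),
            levels = 1:10, ordered = TRUE)

# Collapse levels 6..10 into a single "6-10" category;
# duplicate names in the levels assignment merge the old levels
levels(x) <- c("1", "2", "3", "4", "5", rep("6-10", 5))
nlevels(x)  # now 6 levels instead of 10
```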
@@bkrai
Thanks a lot. I decided to recode the factor by collapsing the lowest categories (option 1) and collapsing all the categories except the highest value (option 2).
But one further question:
My predictors are also very imbalanced. For example, I have:
- marital status (8 classes), where single or married are prevalent (~50%)
- Likert-scale questions with 4 classes and with 10 classes, where the "middle categories" are relatively few.
My idea was to code alternative factors for (single / married / living together); and for the Likert scale:
- Option 1: collapse them into balanced factors, or
- Option 2: take them as numerical predictors.
The next step would be to run models with the alternative options. Would you agree with such an approach, or am I "cheating the data" if I recode my predictors in that way? Also, I am not sure if numerical predictors are useful in ordered logistic/probit regression.
Thanks in advance! And I now watched the majority of your videos. They are very helpful :-)
How do we tackle data imbalance in a multi-level classification problem? Any links describing this in R would be of great help. For example, if the dataset is varied (target variable: class1 ~ 100 samples, class2 ~ 1,000 samples, class3 ~ 10,000 samples, class4 ~ 20,000 samples).
You can do two at a time.
Sir, when we fit any model to the training data, does that already tune the model parameters, or do we have to tune them manually before testing?
It depends on the model you are using. Random forest model doesn't need much tuning.
Thank you Sir
welcome!
Thank you very much for your amazing videos!!!
Thanks for comments!
Hello sir, I got stuck at the random forest step; it is showing me an error of this kind: "Error in randomForest.default(m, y, ...) :
NA/NaN/Inf in foreign function call (arg 1)
In addition: Warning message:
In data.matrix(x) : NAs introduced by coercion"
Make sure you do not have missing values
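A minimal missing-value check before calling randomForest() might look like this (df is a made-up data frame standing in for the actual data):

```r
# Hypothetical data frame with some missing values
df <- data.frame(gre   = c(380, NA, 800, 640),
                 gpa   = c(3.61, 3.67, NA, 3.19),
                 admit = factor(c(0, 1, 1, 0)))

colSums(is.na(df))       # how many NAs each column has
df_clean <- na.omit(df)  # one simple option: drop incomplete rows
nrow(df_clean)           # 2 complete rows remain
```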
And one more doubt: how does the p value affect the output in both sampling methods?
Would you please do a video on how to do SMOTE using R
Thanks, I've added it to my list.
Hello sir, how is this method different from using cross validation in caret package??
Do i still need to do this if i have intention of generalizing my predictive power using cross validation??
Cross validation helps with generalization. Addressing class imbalance helps with giving proper weight to each class of the categorical dependent variable. So they serve different purpose.
Sir, should our accuracy be more or less? If our accuracy is higher, does it again affect sensitivity or specificity?
Q2: We do oversampling or undersampling to make our "0" or "1" predict more accurately, but then it will impact the other. Is that not an issue?
For accuracy, the higher the better. But there may be situations where sensitivity or specificity is more important, and in those cases trying to improve them may lead to lower overall accuracy.
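The trade-off can be seen with made-up confusion-matrix counts, where overall accuracy looks decent even though sensitivity for class "1" is poor:

```r
# Made-up counts, with "1" as the positive class
tp <- 25; fn <- 25   # actual 1s: half are missed
tn <- 90; fp <- 10   # actual 0s: mostly correct

accuracy    <- (tp + tn) / (tp + tn + fp + fn)  # ~0.77, looks fine
sensitivity <- tp / (tp + fn)                   # 0.50, poor for class "1"
specificity <- tn / (tn + fp)                   # 0.90
```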
Thank you sir
Hi Sir, what if we simply sample a number of observations from the majority class equal to the minority class, without disturbing the minority class and without using the ROSE package?
Yes that’s fine too.
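That manual undersampling approach can be sketched in base R without ROSE (the data frame and class sizes here are invented):

```r
set.seed(123)
# Invented data: 300 majority (0) vs 100 minority (1) cases
d <- data.frame(y = factor(c(rep(0, 300), rep(1, 100))), x = rnorm(400))

minority <- d[d$y == 1, ]
majority <- d[d$y == 0, ]

# Draw as many majority rows as there are minority rows, without replacement
majority_down <- majority[sample(nrow(majority), nrow(minority)), ]

balanced <- rbind(majority_down, minority)
table(balanced$y)  # 100 of each class
```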
Hi, can I do a chi-square test on the binary responses to see whether the two classes are uniformly distributed or skewed (imbalanced)? Thanks.
You may try this:
ua-cam.com/video/1RecjImtImY/v-deo.html
Thank you so much, bro. My question is: can I use undersampling techniques before splitting the dataset into training and testing?
Testing data should be similar to the data expected when using the model. That's why we do it after splitting.
@@bkrai But I think it isn't really a big deal, because it prevents data leakage. I think time series has this problem, not classification. And there are even papers where the authors have used undersampling before splitting.
If that works for your data, then should be fine.
Thank you so much for this tutorial.
I followed these steps on my dataset,
but at the end I got the same confusion matrix for the train, under, over, and both data, and my accuracy, kappa, sensitivity, specificity, etc. are all 1, while the McNemar's test p-value is NA.
Sir, could you please help me correct this?
Sir, what is the reason behind taking train and test samples?
You may refer to following:
ua-cam.com/video/EV5N-pIdvJo/v-deo.html
Yes sir, I have watched it, but one more question: should we apply SMOTE to the data using the 70% (training) samples?
Another awesome video. Can you please share the data file?
email id?
I've now added link below the video itself for downloading the file.
Thank you. Appreciate your help
Sir,
How can we use the technique for multi class classification ? Example : NSP data.
You can do it two at a time.
Thanks for this video sir!
In your video "Logistic Regression with R: Categorical Response Variable at Two Levels (2018)" we converted rank also into a factor. After doing so, my accuracy comes out to be 1 in all cases of under-, over-, both, and random sampling. Kindly clear my doubt: why didn't we convert rank into a factor in this video, and why did converting it into a factor give an accuracy of 1?
Make sure you check accuracy based on test data. That is unlikely to be 1.
Sir, thank you so much for the video and for sharing your expertise. One question I have is: should SMOTE be performed before feature selection on the imbalanced data? Please answer.
I would say yes.
@@bkrai thank you Sir
Sir, I have a random forest model. The dataset is in a CSV file. If I want to make a web interface and deploy it, what should I do? Is storing the dataset in a database like MySQL mandatory? I want users to give values through four text boxes/input boxes and receive the predicted result as text.
Hi sir, thanks. Really well explained. Are there any formal courses on analytics for finance professionals that you would recommend?
You can try this:
ua-cam.com/play/PL34t5iLfZdduGEuSXYrleeBdvfQcak0Ov.html
Amazing Video!!!! Thanks sir really! It helped me a loooootttt
Thanks for feedback!
Well, thank you for the explanation.
It's the first time I've used this package, and I don't know the difference between using ROSE() to balance the data
and using ovun.sample().
What I'm looking for is to balance my data using ROSE from Menardi and Torelli.
Error in as.data.frame.default(data) :
cannot coerce class '"ovun.sample"' to a data.frame
I got this error; how do I solve it?
Difficult to say much without looking at the code. Check your code.
@@bkrai Thank you for your response, sir. I had made one mistake, which was the reason I was getting that error. I watched your video carefully and resolved it. Thank you, sir.
Thanks for the update!
Can ROSE only be used if the classification is binary?
For more than 2, you can do two at a time.
@@bkrai okay.. Thankyou
You are welcome!
Can we use ROSE when we have 100 classes? I think ROSE is only for 2 classes, in your case 0 and 1. How do we do oversampling when we have many imbalanced classes? Many thanks, Kevin.
You can take subsets with 2 levels at a time where class imbalance is present and apply this method. And finally you can combine your data.
Many thanks for your reply. Could you mention code/an example/a link showing how to do that? (Kevin.maz155@gmail.com)
@@kevinm8607 Hi Kevin, did you ever figure out how the R code to use with the oversampling method when you have multiple imbalanced classes?
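One way to sketch the subset-and-combine approach described above is shown below; the three-class data is invented, and plain sampling with replacement stands in for what ovun.sample() from ROSE would do per pair of classes:

```r
set.seed(1)
# Invented 3-class response with heavy imbalance
d <- data.frame(y = factor(c(rep("A", 500), rep("B", 100), rep("C", 40))),
                x = rnorm(640))

# Oversample one class up to a target size (sampling with replacement
# plays the role that ovun.sample() would play for each two-class subset)
oversample_class <- function(data, cls, n) {
  rows <- which(data$y == cls)
  data[sample(rows, n, replace = TRUE), ]
}

# Balance A vs B, then A vs C, and combine the results
balanced <- rbind(d[d$y == "A", ],
                  oversample_class(d, "B", 500),
                  oversample_class(d, "C", 500))
table(balanced$y)  # 500 per class
```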
Can this model be used to build a recommendation system (collaborative filtering)? Many thanks in advance for your reply.
Sorry seeing this now. I hope you already figured out.
Thanks a lot for giving me MOOC knowledge..
Dear sir,
I really love your way of teaching.
Could you please share the link to the music that you used in your video?
Here is the link:
drive.google.com/open?id=1wOOjoEr3Y8QyoWS7V5X_9KQ2rrtezpDZ
Sir please make a video on multinomial Mixed effects regression. I heartily request as I find no literature to my suitability on this. 🙏
Thanks, I've added it to my list of future videos.
@@bkrai thank you so much sir. Heartily awaiting it🙏
You are welcome!
Thanks for the great video. How do I calculate F1 from those results?
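For reference, F1 can be computed directly from the confusion-matrix counts (the numbers below are invented):

```r
# Invented counts, with "1" as the positive class
tp <- 45; fp <- 15; fn <- 5

precision <- tp / (tp + fp)   # 0.75
recall    <- tp / (tp + fn)   # 0.90 (same as sensitivity)
f1 <- 2 * precision * recall / (precision + recall)
round(f1, 3)  # 0.818
```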
Thank you, Prof., for this video. I am trying to adapt this approach to my dataset, but I have been getting an incorrect data type error as follows:
" Error in terms.formula(formula, data = frml.env) :
'data' argument is of the wrong type "
when I run this:
ovrf
What is t in data = t? There is probably an error there.
Bharatendra Rai
The t is my training data.
Bharatendra Rai
I have tried it repeatedly but nothing has changed. Thank you for helping.
I'm seeing this now. I think you need to use the data after the $ sign.
Why have we chosen N=500 in the ROSE() function?
It is artificially created data, and I chose a round figure of 500.
@@bkrai Can we choose it according to our number of rows?
This command rose
Sir, I am using network attack data where I have 3 levels in the response variable, so when using the function ovun.sample I am getting the error that the response must have 2 levels. Please help me with this.
Have you solved it for your dataset? What did you use to address the imbalance in a multiclass classification dataset?
Sorry seeing this now. I hope you already figured out.
Thanks, it is very helpful.
Thanks for comments!
Thank you, sir, it is very helpful. I need a binary dataset for practice; could you please upload one?
Data file: goo.gl/D2Asm7
Awesome Explanation
Thanks for comments!
@@bkrai I do have one question. You are applying the sampling techniques (over, under, both, ROSE) only on the train data, building the model, and validating on the test data. Why are you not applying a sampling technique to the test data? Is there no need to balance the test data as well before validating the model on it?
Test data is like any new data that will be used for prediction. New data points are not likely to come balanced.
@@bkrai Thanks for your reply. I have read that SMOTE is also used to handle imbalanced data.
I do have below questions, I would be thankful if you will reply
1. Do ROSE and SMOTE work similarly (I mean the internal calculation)? If not, which one is better?
2. Which one among ROSE and SMOTE would you prefer ?
3. Do you have any video on SMOTE ?
Sir, is this one called SMOTE?
They are slightly different. ROSE uses smoothed bootstrapping to draw artificial samples from the feature-space neighbourhood around the minority class. On the other hand, SMOTE draws artificial samples by choosing points that lie on the line connecting the rare observation to one of its nearest neighbours in the feature space.
Good video. Is it necessary to calibrate the probabilities? How can I do this? Thanks.
It's not necessary.
Hi sir, great videos; all the R-related ones are helping me a lot. I need help on finance and fraud analytics. Could you please post some finance domain related courses?
Thanks for the suggestion, I've added it to my list.
What if there is a case with both 0 and 1?
In the example provided it will not be feasible to have a student admitted as well as not admitted.
I have data where I have to classify gender based on websites visited, but there are websites visited by both males and females, i.e., both 0 and 1.
If each row represents an instance of a website visited, then it can only be visited by one.
Nice, but the video is not properly visible initially.
I just checked it, and everything looks fine. I think it may have something to do with internet speed at your end.
I mean it is blurred initially. Yes, it may be internet issues.
Thanks for letting me know.
Best Video
Thanks for comments!
THANK YOU!
You're welcome!
Awesome
Thanks!
Hello sir, does the oversampling method in this video use the SMOTE algorithm? If not, what is the difference between the two?
What if the data has 4 classes and is imbalanced?
You can try two at a time.