The course is very in-depth. It helps me a lot.
Thank You for your all efforts
Great explanation 👌
Thanks a lot, beautifully explained
Why did you take combinations like 1, 5, 10, 20...? How do you choose the values?
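One common way to pick such candidates is to spread them roughly logarithmically so a few values cover a wide range, then refine around the best one. A small illustrative sketch, assuming scikit-learn and using an SVC's C parameter purely as an example (the values are not taken from the video):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Values like 1, 5, 10, 20 are spread out roughly on a log scale, so a
# handful of candidates covers several orders of magnitude
param_grid = {"C": [1, 5, 10, 20, 50, 100]}

# The same idea expressed with an explicit logarithmic grid
log_grid = {"C": np.logspace(0, 2, 6)}  # 1.0 ... 100.0

grid = GridSearchCV(SVC(), param_grid, cv=5)
```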
Hi Siddha, you have a great way of explaining things, thanks! Just a question on the steps to follow.
Are the steps the following:
1. Run train_test_split and display scores for a number of different models (e.g. random forest, decision tree, SVC…) >>> this is from video 8.2
2. Validate the score performance seen in step 1 via cross-validation and pick the best-performing model (thinking: a model that is best with train_test_split may not be the best model when running cross-validation) >>> this is from video 8.2
3. Take the best model (let's say it's Random Forest) and perform grid search for hyperparameter tuning as you explained in this video. In this case, I don't have to run GridSearchCV on all models, as I already determined in steps 1 and 2 which one is best and which one I am going to use.
4. Once the best model has been validated, fit the model on the entire dataset, meaning on X (instead of X_train, X_test), as we don't need to test anything anymore (we already know the best model and hyperparameters); see the sketch after this question.
Are these steps correct or am I confusing something?
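A minimal sketch of the workflow described in steps 1–4, assuming scikit-learn, a generic dataset X, y, and an illustrative parameter grid (the specific models and values are assumptions, not taken from the video):

```python
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Step 1: quick train/test scores for several candidate models
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)
models = {"random_forest": RandomForestClassifier(),
          "decision_tree": DecisionTreeClassifier(),
          "svc": SVC()}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))

# Step 2: validate with k-fold cross-validation on the full data
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())

# Step 3: grid search only on the chosen model (say, Random Forest)
param_grid = {"n_estimators": [10, 50, 100], "max_depth": [None, 5, 10]}
grid = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# Step 4: fit the best configuration on the entire dataset
final_model = RandomForestClassifier(**grid.best_params_)
final_model.fit(X, y)
```

Note that with the default refit=True, grid.best_estimator_ is already refit on all of X, so step 4 here is effectively the same model that GridSearchCV hands back.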
How long will it take you to upload all the remaining videos for this ML course?
I am not sure, to be honest... My schedule is packed as of now...
This guy is underrated
Bestttttt. Thank you so much.
Thank you 💟
Thank you so much
keep going
Great video.
For regression, will the parameters for tuning remain the same or change?
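The grid depends on the chosen estimator rather than on classification vs. regression as such, though the scoring metric usually changes for regressors. A hedged sketch, assuming scikit-learn and an illustrative RandomForestRegressor grid:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Parameter names come from the chosen regressor; scoring switches to a
# regression metric such as R^2 (the default) or neg_mean_squared_error
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
grid = GridSearchCV(RandomForestRegressor(), param_grid,
                    cv=5, scoring="neg_mean_squared_error")
grid.fit(X, y)  # X, y here would be a regression dataset
print(grid.best_params_)
```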
Hi, thank you so much, it was really helpful. I have a question: when we use GridSearchCV with cv=5, does that mean cross-validation runs and the data is split into cv folds, just like when using cross_val_score alone?
So, what I am basically asking is: when do we use grid search? Before cross-validation or after cross-validation?
You can perform cross-validation and grid search (GridSearchCV) at the same time.
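Yes: with cv=5, GridSearchCV internally performs the same kind of 5-fold splitting that cross_val_score does, once per parameter combination, and reports the mean score for each. A rough sketch, assuming scikit-learn, a dataset X, y, and an illustrative SVC grid:

```python
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# cross_val_score alone: 5-fold cross-validation for one fixed configuration
print(cross_val_score(SVC(C=1), X, y, cv=5).mean())

# GridSearchCV with cv=5: the same 5-fold splitting is repeated internally
# for every combination in param_grid, so no separate cross_val_score is needed
param_grid = {"C": [1, 5, 10, 20], "kernel": ["linear", "rbf"]}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)  # best mean cross-validated score
```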
Sir, could you please make videos on real industry data science projects?
Please update your curriculum.
The topics may change... That's why I gave a tentative curriculum... I'll definitely update it after some time.
@Siddhardhan Please try as fast as possible.
Sure
@Siddhardhan Thank you, sir.