*Are you new to Machine Learning?* Watch my video series, "Introduction to Machine Learning in Python with scikit-learn": ua-cam.com/play/PL5-da3qGB5ICeMbQuqbbCOQWcS6OYBr5A.html
Sir, what about the dummy variable trap when we use ColumnTransformer?
Great question! See this video: ua-cam.com/video/NYtwyvyvDEk/v-deo.html
The Legendary Data Science guy is back!
Thank you for the warm welcome! 😄
For beginners:
When I tried to complete an ML project, say a simple model based on logistic or linear regression, it used to take me about a month. As I was a beginner in Python, Pandas, SQL, and the rest of it, I thought this would take me a long time to master and that maybe I was a latecomer to the field.
But a year on, thanks to Data School, Sentdex, Krish Naik, StatQuest, Thinkful webinars, and more, I am surprised that all I need is a day or less to complete these projects.
Because of the meticulous analysis on Data School, whenever I need a deeper understanding, that's where my GPS leads me.
Thank you Data School.
You are so very welcome!
Your guide doesn't just cover basic code; it covers very practical and useful functions. I want to sincerely thank you for your effort!
Thanks very much for your kind words!
OMG!!! I've just started ML on Kaggle in the past few weeks. There's a lot of information to absorb, but you teach us in the most understandable and up-to-date way, including why we should use scikit-learn instead of get_dummies. This video is extremely helpful and informative. Thank you a lot!!! Guess I'm gonna spend the rest of the day watching all of your videos
Awesome! Glad to hear this was helpful to you 👍
I feel fortunate that I stumbled across this video. Very well articulated. You slow down the pace so that folks can hear, understand, and digest. Most videos I come across seem to rush through the content before one can digest it. Thanks for taking the time to share your knowledge.
Thanks very much for your kind words! 🙏
THANK YOU for this tutorial! I was wandering around the web trying to solve unexpected errors that came from following apparently outdated tutorials. If I had landed on this tutorial the first time, it would have saved me around 4 hours of useless surfing. Thanks again
That's awesome to hear... glad I could be of help! By the way, I'll be launching a full course covering these topics (and more)... sign up here to get notified when it launches: scikit-learn.tips
I was looking for clear explanation of Pipeline for a long time. You nailed it. Crystal clear explanation and understood by watching one time. Thank you.
You're so very welcome! 🙏
This is an excellent and simple explanation of this topic. I must say that you are very talented in the way you teach! You choose your words in a way that emphasizes only the important and relevant stuff. Thanks!!!
Wow, thank you!
Thanks, this helps a lot. I was scratching my head over Pipeline and ColumnTransformer before this video.
Also, you've got a very soothing voice, and it helps me relax and really enjoy the learning.
Great to hear!
Fantastic tutorial! Great teacher, the best Machine Learning teacher on YouTube! Thank you!
Thanks so much!
There is something about your explanations that makes me just get it instantly. You deserve an award.
You are too kind, thank you!
Yes, that is the role of the OneHotEncoder.
Absolutely perfect and useful lessons! Thinking of becoming a patron member as I get a little more confident with ML
That would be awesome, thank you so much! You can join here: www.patreon.com/dataschool
Thank you for this tutorial. I was working with logistic regression this week and was trying to figure out how to one-hot encode a categorical variable with hundreds of categories. I was getting 100% accuracy and precision, so something wasn't right. I'm going to try the steps that you outlined in this tutorial. Thanks.
Good luck!
Thank you so much!!!!! It was very helpful. Yours is the only channel I come running to for help whenever I'm stuck somewhere. Rich content!! Keep sharing these wonderful things.
Thank you so much!
Nice to have you back, sir. This session was so fruitful. Thanks a ton. Keep it up!
That's awesome to hear!
Thank you very much, and welcome back after a long time. You are as good as it gets when it comes to Machine Learning. You have made me learn a lot. I can't wait for videos on deep learning; I hope you'll cover it soon. Thanks again.
Thanks very much for your kind words, and for your suggestion as well!
Always love your step by step, clear lessons. Keep it coming.
Thank you!
thank you Kevin, very thorough explanation. I'm glad I found your channel. I like the way you teach.
Thank you so much! 🙏 That's great to hear!
Perfect timing, was just searching on pipelines the other day.
Would be great to follow-up by tacking on Gridsearch in this context.
That's awesome to hear! I will definitely cover grid search of a pipeline at some point - thanks for the suggestion!
That was really something amazingly explained, I was looking for all these topics to understand. I got it in one go.
Thanks a ton.
You're very welcome!
Thx kevin, one of best & simplest explanations of pipeline
Glad it was helpful!
yo! Mind blown with the amount of things i learnt from this. Please keep at it!
Thank you! You might like my scikit-learn tips: github.com/justmarkham/scikit-learn-tips
You are a high quality TEACHER , thank you very much.
You are very welcome! 😄
you are the best tutor i have ever met , keep up the good work. Thank you
Wow, thanks!
God damn this video is good. I was struggling with column_transformer and pipelines till late last night. The options you suggest here are so much better and easier to understand for me. I am totally going through your "Introduction to Machine Learning in Python with scikit-learn" playlist soon. Thanks for putting this out!
You're very welcome! If you want to go deeper into this topic, you may want to check out my course: courses.dataschool.io/building-an-effective-machine-learning-workflow-with-scikit-learn
Oh my god! After so much exhausting waiting, another video came, which is far more useful than the others for me! I just love your videos; the content is really useful in my real life. Most YouTube channels just cover the ideal cases, which I might never encounter in my whole life! Please make these videos regularly!
That is awesome to hear, thanks so much for your kind words! 🙏
Actually, I publish a new Q&A video every month for Data School Insiders at the $5 level: www.patreon.com/dataschool
Dayyyyuuummmm.......why did I not stumble upon ur videos earlier ????!!!!!!
😄
Man I love you. I just love you. I love your videos. I love the way you explain things. I love the pace of you videos. I love everything. Thank you.
Thank you so much, Harshita! 🙏
00:58
1) It allows you to properly cross-validate a process rather than just a model. In other words, when you are doing cross-validation like cross_val_score, normally you just pass a model to it. Well, there are cases when that is not going to give you accurate results because you're doing the preprocessing outside of the cross-validation.
So a pipeline, generally speaking, is useful because you can cross-validate a process that includes
(a) *preprocessing* as well as
(b) *model building*.
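The summary above can be sketched in a few lines of code. This is a minimal illustration with invented data (the column names mirror the Titanic dataset used in the video, but the values are made up), not the video's actual notebook:

```python
# Cross-validate a whole process (preprocessing + model), not just a model
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    'Embarked': ['S', 'C', 'Q', 'S', 'C', 'Q', 'S', 'C', 'Q', 'S'],
    'Fare':     [7.2, 71.3, 8.0, 53.1, 8.1, 12.5, 30.0, 16.7, 26.6, 13.0],
    'Survived': [0, 1, 0, 1, 0, 0, 1, 1, 0, 1],
})
X = df[['Embarked', 'Fare']]  # features (2-dimensional)
y = df['Survived']            # target (1-dimensional)

# Encode the categorical column, pass the numeric column through untouched
ct = make_column_transformer(
    (OneHotEncoder(), ['Embarked']),
    remainder='passthrough',
)

# Chaining preprocessing and model into one estimator means
# cross_val_score re-fits the encoder inside every fold
pipe = make_pipeline(ct, LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=2, scoring='accuracy')
print(scores.mean())
```

Because the pipeline is passed to `cross_val_score` as a single estimator, both (a) preprocessing and (b) model building are evaluated together.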
Thank you for explaining the pipeline approach so well!
You're very welcome!
Preprocessing with pipeline was complex topic to understand for me before watching this video. Thanks a lot for the video.
You're very welcome! Glad it helped 👍
You are a great teacher. Please make tutorials or a series on Data Visualization, In-Depth Data Analysis and Cleaning, Project Deployment, etc. After learning Python, its libraries, and ML, these are the next steps.
I have many more tutorials! Many of them are listed here: www.dataschool.io/launch-your-data-science-career-with-python/
i just discovered your channel and i gotta tell you , you got a permanent subscriber here!!! LOVE YOUR TEACHING STYLE!!!!!!!!!!!!!!!
Thank you! 🙏
Excellent! I was using pandas get_dummies, and your explanation of why Pipeline and OneHotEncoder are a better solution solves all the problems. Thanks again.
Glad it helped!
Kevin, it's 5:20am Winston-Salem time and I am digging this. I was very confused. Thank you so much.
Excellent!
Impressive explanation, and logical approach to material presentation. You just got a new sub.
Welcome aboard!
My god I love your detailed solution. Even my 5yo sibling can understand it. Wonderful. Definitely worth a subscribe.
Awesome! 🙌
After searching a lot, I found this channel, and I feel it's the best for me :)
Happy to hear that!
Really great that you did a video like this.
It helped me a lot, and I am really thankful for it, brother. Keep going.
Thanks!
Very clearly explained and helpful video - Thank you!
Glad it was helpful!
Thanks, for this clear and well paced tutorial.
Glad it was helpful!
I just want to say thank you. I am a beginner, and you teach much better than my professor.
Glad to hear I have been helpful! 🙏
Absolute goat bruh, really thankful for your content
Thank you!
Sir, just five minutes ago I visited your channel to ask you the same question, as it was difficult for me to encode multiple variables in Kaggle's house price prediction (advanced regression) dataset. Fortunately and surprisingly, you posted the same. Thank you so much.
That's amazing! 🙌 I hope this video is helpful to you, and let me know if you have any questions!
@@dataschool I have a problem with functions: I can't write custom functions in Python, which is very important. What should I do, sir?
@@JainmiahSk You can definitely write custom functions in Python!
Thank you, Data School. It was not only helpful; it was great, enlightening, and awesome.
What a nice thing to say, thank you so much! 🙏
You are by far the best data science teacher on YouTube.
Can you make a video on creating your own custom transformers using it to modify your data, then using that custom transformer in a ColumnTransformer and a Pipeline?
Thanks for your suggestion! I'm working on a course that will likely cover that topic. Sign up here to get notified when it launches: scikit-learn.tips
Great content; I am learning this in my college data science class. You did better than my professor!
Are you undergrad or grad?
Thank you! 🙏
Thank you so much! You're the best! Please go over scaling when you have a chance :)
Question: Is it okay to leave in all of the one-hot encoded columns with this pipeline approach? I believe you previously mentioned that it's best to drop one of the columns to prevent multicollinearity. Is there any way to do this within the pipeline?
You are so kind, thank you! 😊
Yes, I plan to cover StandardScaler at some point.
Yes, it is okay to leave in all of the one-hot encoded columns. However, the "drop" parameter for OneHotEncoder (new in scikit-learn 0.21) does allow you to drop one feature per category. Hope that helps!
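As a minimal sketch of that "drop" parameter (the data here is invented for illustration):

```python
# Compare OneHotEncoder with and without drop='first'
# (drop was added in scikit-learn 0.21)
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([['male'], ['female'], ['female'], ['male']])

# Default: one output column per category level
keep_all = OneHotEncoder().fit_transform(X).toarray()

# drop='first': the first level of each feature is dropped,
# which avoids the redundant column for binary features
drop_one = OneHotEncoder(drop='first').fit_transform(X).toarray()

print(keep_all.shape)  # (4, 2)
print(drop_one.shape)  # (4, 1)
```

Because the encoder lives inside the pipeline, setting `drop='first'` on it is all that's needed; nothing else in the pipeline changes.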
Even I had the same doubt... Thank you for clarifying 😊
You just gained another subscriber...this was super useful
Great to hear!
Finally someone properly explained to me what ColumnTransformer is and why we use Pipeline. I would like you to put your course on Udemy; then I'll buy it 100%. Maybe on average you would sell each course for a lower price, but trust me, you explain this so well that you could sell tens of thousands of courses in a few months... or, if you already have this on Udemy, please provide me with the link!
Thanks for your kind words and your suggestion! I know that many students like Udemy courses, but my values as a course creator don't align with their business model, and so I'm not currently interested in publishing a course there. I prefer to offer courses directly to interested students. Thanks for understanding!
Thank you good sir, this tutorial was better than many paid tutorials on Udemy. Blessed!
Glad it was helpful! 🙌
Amazing video. You are an excellent instructor. Got yourself a new subscriber :)
Thank you so much!
Your explanation is very clear, thank you very much
You're welcome!
I just found solution to my problem after watching your video. Thanks a lot.
You're welcome!
Thanks for such a detailed tutorial. I am working on a similar problem where I have multiple categorical features. In my dataset, the categorical variables have more than 90 possible values; as a result, I get an additional 121 columns when I use get_dummies, but I actually want just four levels.
Please kindly advise me.
Amazing explanation, as always!
Thank you!
Thanks a lot for this tutorial, Kevin. It really saved me 😅
Glad to hear that!
Your videos are amazing and are really helping with the last module of my MSc. I know there is no need to encode Pclass, since it is an ordinal variable that is already ordered, and you explained that really clearly. I notice you also explain the use cases for OneHotEncoder vs OrdinalEncoder well in other videos. For best academic practice in my module, would you recommend creating a OneHotEncoder for my nominal data and an OrdinalEncoder for my ordered data, then piping both through make_pipeline? Thank you in advance :-)
Thank you for this amazing video. Please do some videos on feature selection and scaling techniques in python!
I'm hoping to cover feature scaling in a future video, but I do have a video about feature selection: ua-cam.com/video/YaKMeAlHgqQ/v-deo.html
Hope that helps!
Thank you sir! You've really saved my life...
🙌
This is a great video! Thank you. Will you be showing how to do parameter tuning with pipeline?
Yes, I actually cover that in one of my courses: courses.dataschool.io/building-an-effective-machine-learning-workflow-with-scikit-learn
awesome explanation!! Thanks a lot
You're very welcome!
Thanks a lot for helping everyone out.
I was just wondering if you will be uploading more videos in the future?
Yes! I just started posting again last week. Thanks for watching!
Hi, this will be very helpful.. Thank you for making this video!!
You are very welcome! 🙌
MIND BLOWN!!!! CV FOR A PROCESS!!! NOICE ONE!!
🤯
You are 100x better than my ML course teacher at uni. GG bro.
Thank you! 😄
Hi! First, I would like to thank you for this awesome video! Super well explained, super clear, and veeeery useful. Thanks a lot!
I have a question: does it make sense to encode the dataset n times (cv=n in your cross-val)? I mean, using a pipeline is great for test purposes as you explained, but I am not sure it's necessary to use the entire pipeline (including encoding) for the cross-val... but maybe I am missing something.
Could you clarify this point? Thanks in advance for your comments!
Great question! Yes, it is critical that you cross-validate the entire pipeline (rather than just the model) so that the data preprocessing occurs within each fold of cross-validation. Doing the preprocessing prior to cross-validation can lead to data leakage, which means that your evaluation scores will be less reliable. This is a complex topic, but I hope that helps a bit!
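The contrast can be sketched as follows. This is a hedged illustration with synthetic data (the scores themselves are meaningless; only the pattern matters), using StandardScaler as a stand-in for any preprocessing step:

```python
# Preprocessing outside vs inside cross-validation
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = np.array([0, 1] * 15)

# Leaky (don't do this): the scaler is fit on ALL rows first,
# so statistics from future validation folds leak into training
X_scaled = StandardScaler().fit_transform(X)
leaky = cross_val_score(LogisticRegression(), X_scaled, y, cv=5)

# Safe: the scaler is re-fit on only the training portion of each fold
pipe = make_pipeline(StandardScaler(), LogisticRegression())
safe = cross_val_score(pipe, X, y, cv=5)
print(safe.mean())
```

On a tiny random dataset the two numbers may be close, but only the second procedure gives an honest estimate of out-of-sample performance.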
Extremely helpful, thank you so much !!!
Glad it helped!
thank you very much, very clear video
You're very welcome! 😄
Hi Kevin, thank you so much for the wonderful explanation, could you also explain how to use GridSearch or RandomizedSearch along with Pipelines?
Great suggestion! I'm working on a tutorial that will be published on YouTube in late April. It will include that topic. Stay tuned!
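For readers who want a preview, grid searching a pipeline uses keys of the form "step name, two underscores, parameter name". A minimal sketch with synthetic data (the step names here are illustrative):

```python
# Tune a hyperparameter of a pipeline step with GridSearchCV
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(1).normal(size=(40, 4))
y = np.array([0, 1] * 20)

pipe = Pipeline([('scaler', StandardScaler()),
                 ('clf', LogisticRegression())])

# 'clf__C' reaches into the step named 'clf' and tunes its C parameter
grid = GridSearchCV(pipe, param_grid={'clf__C': [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```

Because the whole pipeline is the estimator, every candidate value of C is evaluated with the preprocessing re-fit inside each fold.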
this is so helpful that I have to comment. great job. thanks a lot
Glad it was helpful!
I love it! Amazing tips!
Thank you!
Thank you very much; it's very interesting, and by the way, it is exactly what I need in my current ML project.
That's great to hear! Good luck with your project 🙌
@@dataschool thanks 👍
Thank you for speaking slowly. It's nice for a non-native English speaker to listen to.
You're very welcome! :)
Great! Great! Great tutorial... many thanks, Kevin!
You're very welcome!
amazing information. wow! thank you so much man.
You're very welcome!
Great example, educational.
Thank you!
I am enriched by this teaching.
Great to hear!
Just another amazing video. 😄
Thank you so much for your kind words! 😊
This video is excellent.
Thank you!
Thank you for your videos, I simply love them!!! :) I have one question: don't we need to drop one column after doing one-hot encoding, since there would be a dummy variable trap (e.g. if there are only two categories, both columns provide the same information)? How can we drop that?
Thanks for your kind words, and great question! You don't have to drop the first level of each categorical feature (since it's unlikely to impact model performance), but if you'd like to do it, you can set drop='first' for OneHotEncoder to accomplish this. Hope that helps!
A very nice video that saved my life. I can see it is well explained. Keep uploading!
Thanks!
Simply the best!!
Thank you!
mkayyyyy, awesome tutorial!!!
Thank you!!
Awesome video, and thank you for this explanation!!! I have one request: could you please make a video on PCA?
Thanks for your suggestion!
Just excellent. Thanks! I am very new to data science, so please bear with me. Question: for a dataset that has several categorical features, each with a lot of different values (say each categorical column has 100 different values, as opposed to just 2 for gender: male or female), after using OneHotEncoder to convert them to unordered numerical values, the number of table columns increases astronomically. Then you run the model, and say one or more of the categorical features are among the most useful. How do you reverse or convert back these encoded features to know which categorical feature each represents?
I'm not sure off-hand, sorry!
It's a good tutorial for some reasons that you will explain later.:D
Your tutorial is informative as always. Could you prepare a tutorial on how to interpret models, like "black box" interpretation in random forests? Thank you.
Thanks for your suggestion! I'll consider it for the future!
Thank you for your very helpful videos. I have a question about this video: why is the way you assigned data to X different from the way used for y?
X needs to be a 2-dimensional object, and y needs to be a 1-dimensional object. Does that help?
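The shape convention can be shown in a tiny sketch: double brackets select a DataFrame (2-dimensional), single brackets select a Series (1-dimensional). The values below are invented:

```python
# The X/y shape convention in scikit-learn
import pandas as pd

df = pd.DataFrame({'Fare': [7.25, 71.28, 7.92],
                   'Survived': [0, 1, 1]})

X = df[['Fare']]    # double brackets -> DataFrame, 2-dimensional
y = df['Survived']  # single brackets -> Series, 1-dimensional

print(X.shape, y.shape)  # (3, 1) (3,)
```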
Thank you so much.😍😍🙏🙏👍👍 It helped me a lot.
Great to hear!
Awesome job!
Thanks!
Thanks, Kevin. Do you have any video example that shows how to incorporate a self-defined function in a pandas pipeline?
Thank you! Waiting for more tutorials :3
You're very welcome! I will do my best to publish more!
Thank you so much for this video. It really helped me a lot. I do have a question about this process. In my case, one of the columns of my out-of-sample data has more categories than my in-sample data (basically, I have the opposite scenario from the one you mentioned at 26:19). Would this process work in my case?
Yes, you just have to modify the default parameters of OneHotEncoder to handle unknown categories. See this video for details: ua-cam.com/video/bA6mYC1a_Eg/v-deo.html
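The relevant parameter is `handle_unknown`. A minimal sketch with invented category values: with `handle_unknown='ignore'`, a category never seen during fitting is encoded as all zeros instead of raising an error.

```python
# Handle categories that appear only in new (out-of-sample) data
import numpy as np
from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder(handle_unknown='ignore')
ohe.fit(np.array([['S'], ['C'], ['Q']]))       # training categories

new = ohe.transform(np.array([['S'], ['X']]))  # 'X' was never seen
print(new.toarray())
# [[0. 0. 1.]
#  [0. 0. 0.]]  <- the unseen 'X' becomes all zeros, no error
```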
Simply the best.
Thank you!
Hello, in the last example. How is the NAN values handled. Are they removed by one of the methods or do you have to remove them by yourself?
Thanks for such a detailed tutorial. I am working on a similar problem where I have multiple categorical features. In my dataset, some of the categorical variables have more than 10 possible values; as a result, my 13 features are getting converted into 74. I am fairly new to this, and it's a bit confusing for me because 74 features seems like too much. Could you please share your expertise? Is what I am doing right, or do I need to look for another way to encode the features?
74 features is not necessarily too much! You can have thousands of features (or more) and still have an effective model!
This video helps a lot👍👍👍
Great!
Very good, it cleared up many of my doubts.
Great to hear!