Please make sure all cells are visible on screen. Sometimes I'm not able to view the end of the cell content.
In case we have multiple non-ordinal variables, do we use the OneHotEncoder on all of them at once by adding them to the list initially, or do we do this one by one?
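For anyone with the same question: you can hand all of the non-ordinal columns to a single OneHotEncoder at once; it encodes each column independently. A minimal sketch with made-up column names (not from the video):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy data with two non-ordinal columns (the names are hypothetical).
df = pd.DataFrame({
    "color": ["red", "blue", "green"],
    "city":  ["NYC", "LA", "NYC"],
})

enc = OneHotEncoder(sparse_output=False)             # sparse_output needs scikit-learn >= 1.2
encoded = enc.fit_transform(df[["color", "city"]])   # both columns in one call
print(enc.get_feature_names_out())                   # one output column per category of each input column
```

Doing them one by one also works, but a single encoder keeps all the output columns and their order in one place.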
thanks a lot dude! really helped me grasp the basics!
No problem
Thanks a lot Ryan! This has to be one of the best videos out here dealing with encoders. If only others were this easy!
Thanks again.
Also, do I have to fit and transform all my sets? Or only the training set? Do I have to fit the test set? Thanks again!
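In case it helps, the usual pattern is to fit the encoder on the training set only and reuse that fitted encoder to transform the test set; fitting on the test set would leak its categories. A minimal sketch with toy data (not the video's dataset):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

X_train = pd.DataFrame({"color": ["red", "blue", "green"]})
X_test  = pd.DataFrame({"color": ["blue", "purple"]})   # "purple" is never seen during fit

enc = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
X_train_enc = enc.fit_transform(X_train)   # fit + transform on the training set
X_test_enc  = enc.transform(X_test)        # only transform on the test set, never fit
```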
Hey guys I hope you enjoyed the video! If you did please subscribe to the channel!
If you want to watch a full course on Machine Learning check out Datacamp: datacamp.pxf.io/XYD7Qg
Want to solve Python data interview questions: stratascratch.com/?via=ryan
I'm also open to freelance data projects. Hit me up at ryannolandata@gmail.com
*Both Datacamp and Stratascratch are affiliate links.
This video was so helpful, thank you. Think you could also make one on frequency encoding and the other types of encoding?
I can add them to my backlog after my stats series
Very good tutorial, but what about the "dummy variable" trap? I think you should drop one of these new variables.
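If collinearity is a concern (e.g. for a plain linear model), OneHotEncoder can do that dropping for you. A quick sketch, not from the video:

```python
from sklearn.preprocessing import OneHotEncoder

# drop="first" removes one category per feature so the dummy columns
# are no longer perfectly collinear (the "dummy variable trap").
enc = OneHotEncoder(drop="first", sparse_output=False)
encoded = enc.fit_transform([["red"], ["blue"], ["green"]])
print(encoded.shape)  # 3 rows, 2 columns instead of 3
```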
Nice tutorial, clean and direct!
Thank you
This is a great video. Explained in a manner that a newbie like myself can understand. Thank you.
A question: What if the dataset contains multiple categorical variables (as well as numerical ones), and they are all required as input to make a prediction? How can one go about it?
Thank you! There are multiple ways to one hot encode the categorical variables. Check out my Titanic video and/or the house predictions one. I show a few different processes.
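One common way (a sketch with made-up columns, not necessarily how it's done in those videos) is a ColumnTransformer that one hot encodes the categorical columns and passes the numeric ones through:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame({
    "color": ["red", "blue", "green", "red"],   # categorical
    "city":  ["NYC", "LA", "LA", "NYC"],        # categorical
    "age":   [23, 35, 41, 29],                  # numerical
})

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["color", "city"])],
    remainder="passthrough",   # numeric columns are kept as-is
)
X_encoded = pre.fit_transform(X)
print(pre.get_feature_names_out())
```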
Hi... I have an error like OneHotEncoder.__init__() got an unexpected keyword argument 'sparse'... I've already imported the necessary libraries... please tell me what I should do 😢
Join our discord and post your notebook
@@RyanAndMattDataScience okay
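For anyone hitting the same error: newer scikit-learn releases renamed the argument, which is why the old `sparse` keyword raises exactly that TypeError. Something like this should work on scikit-learn 1.2+:

```python
from sklearn.preprocessing import OneHotEncoder

# sparse_output replaced the old sparse keyword (deprecated in 1.2, removed later),
# which is why OneHotEncoder.__init__() rejects sparse on recent versions.
enc = OneHotEncoder(sparse_output=False)
```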
Perfect explanation! very helpful :)
Thank you
Trying your code I get this error: 'AttributeError: 'OneHotEncoder' object has no attribute 'set_output''. Any idea why this is?
Nvm just needed to update scikit-learn
Ok great. Everything else working properly?
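For anyone else who sees that AttributeError before updating: set_output was only added in scikit-learn 1.2, where it makes the encoder return a pandas DataFrame, e.g.:

```python
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(sparse_output=False)
enc.set_output(transform="pandas")   # requires scikit-learn >= 1.2; older versions raise AttributeError
```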
Dude, how about if I have two different datasets whose categorical values are different? How can I do one hot encoding?
The first one has 9349 rows × 17 columns and the second one has 365 rows × 17 columns.
If I one hot encode them separately, the first one ends up with 611 encoded columns and the second one with only 20.
Please help me, how can I do this? Note: the two datasets have origin and destination city names.
You can merge them first, encode them, then split them again.
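Merging works; another option (sketched below with toy city names, not the real data) is to fit the encoder once, e.g. on the larger dataset, and reuse that same fitted encoder on the other one so both end up with identical columns. handle_unknown="ignore" just zeros out cities that weren't seen during fitting.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Stand-ins for the two datasets (the real ones have 17 columns each).
df_big   = pd.DataFrame({"origin": ["NYC", "LA", "Chicago"], "destination": ["LA", "NYC", "NYC"]})
df_small = pd.DataFrame({"origin": ["LA", "Boston"],         "destination": ["NYC", "LA"]})

enc = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
big_enc   = enc.fit_transform(df_big[["origin", "destination"]])   # fit once
small_enc = enc.transform(df_small[["origin", "destination"]])     # reuse, don't refit
print(big_enc.shape[1] == small_enc.shape[1])                      # True: same number of columns
```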
protect this man
haha I appreciate it
Have a need for a data project? Email me or fill out the form on my website.
Looking for the code? Check out the article: ryannolandata.com/one-hot-encoder/
Thanks a lot, it was a great help :) hope you have a good day
Thanks for checking this out
thank you very much 💕
Great explanation, thanks
Thank you
Thank you!
No problem
Thank you ❤
No problem
thanks dude
Great video!
Thanks!
Thank you so much for this video !!!!
Thanks for checking it out
Thanks buddy
thanks buddy, it helped me! :)
Awesome glad you liked it
learnt a lot! thanks!!
Awesome! That’s the goal
Stopped a bit short. Need to go through how to use the encoder for predicting, not just setting it up for training, e.g. enc.transform() on the features you need to run the prediction on. It has been a bit of a pain with the datatype.
I don't know if I understand your comment, but you can use make_pipeline to build all the preprocessing steps: use a ColumnTransformer to select the columns to one hot encode and apply the OneHotEncoder. You can cross-validate, fit, and predict using the pipeline instead of building a model again.
I have some projects that do. I may remake this video in the future.
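A rough sketch of the pipeline idea above (toy data and column names are made up): the ColumnTransformer and model live in one pipeline, so new data is encoded and predicted in a single call.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame({
    "color": ["red", "blue", "green", "red", "blue", "green"],
    "age":   [23, 35, 41, 29, 52, 37],
})
y = [0, 1, 1, 0, 1, 0]

pipe = make_pipeline(
    ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["color"])],
        remainder="passthrough",
    ),
    LogisticRegression(),
)

print(cross_val_score(pipe, X, y, cv=2))  # encoder is refit inside each fold
pipe.fit(X, y)
print(pipe.predict(X.head(2)))            # transform + predict handled in one call
```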
skibi learn 😝😝😝
please go a little slower, it's hard to understand
I'll have an article on this soon you can also check out
@@RyanAndMattDataScience thank you
Thanks buddy
Np