Take my courses at mlnow.ai/!
Run a heat map for all columns when viewing correlations before running PCA, there are way more opportunities for dimensionality reduction.
Oh interesting I should look into this
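A minimal sketch of the suggestion above, on made-up toy data: compute the correlation matrix first, and near-duplicate columns stand out as dimensionality-reduction opportunities before you ever run PCA (seaborn's heatmap is one assumed way to visualize it; the column names here are illustrative only):

```python
import numpy as np
import pandas as pd

# Toy numeric data; in practice this would be the full feature set.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["a", "b", "c"])
df["d"] = df["a"] * 2 + rng.normal(scale=0.1, size=100)  # nearly a copy of 'a'

corr = df.corr()

# To visualize as a heat map you could use, e.g.:
#   import seaborn as sns; sns.heatmap(corr, annot=True)
print(corr.loc["a", "d"] > 0.9)  # True: 'a' and 'd' are near-duplicates
```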
For info: if you delete the island column, then you should delete the rows containing the value 1 as well, or those rows will have all the other encoded columns equal to zero.
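A small illustration of the point above, assuming an island column one-hot encoded with pandas (the island values here are just examples): once you drop one of the encoded columns, the rows where it was 1 read as all zeros across the remaining encoded columns.

```python
import pandas as pd

df = pd.DataFrame({"island": ["Biscoe", "Dream", "Torgersen", "Biscoe"]})

# One-hot encode: one 0/1 column per island value.
encoded = pd.get_dummies(df["island"])

# Drop one encoded column, e.g. 'Biscoe'. The rows where it was 1
# now sum to 0 across the remaining columns -- exactly the situation
# the comment describes.
dropped = encoded.drop(columns=["Biscoe"])
print(dropped.sum(axis=1).tolist())  # [0, 1, 1, 0]
```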
I've been lost in the feature engineering chapter of the book I'm currently reading for machine learning, and then I found your video. In the whole 47 minutes I learned 2-3 things from you, and I understand the whole process a lot better now, all thanks to you Greg! Keep making these types of videos because WE NEED YOU!!
Aww this is so nice. So glad to have brought a bit of value. Thank you so much for the encouragement and support, it means a lot. Happy learning 😄
Been waiting for this one!! Amazing video! Thanks Greg
Glad to hear it! No problem 👍😊👍
I've waited for this video! Many Thanks! :)
Yup, I know you have been. No problem Mike and thank YOU 😄
Greg, your videos are absolutely lovely and reinforce everything I’m learning in my classes, thank you so much
Oh I'm so glad to hear that! Thank you for the support and for the kind words, I really appreciate it :)
You explain everything in such an easy-to-follow way! Thanks for the amazing video!
Glad to hear it! Thank you so much, and you're very welcome!
This kind of approach is what we need outside the uni classroom. Enough PCA theory in uni, let's code!
Really great, comprehensive video. It would be great if you did one on how to select features to get the best results for this problem itself.
Buddy, I have subscribed to you. Please keep uploading more videos, they help a lot.
Thanks so much, and I absolutely will!
Amazing video as always! Super helpful
Glad you thought so!
Best of the best!
Thank you greg, you are the man!
Thanks so much! I really appreciate that.
You're gold!! Keep up the good work.
Thanks so much, really appreciate that
Thank you a lot for these helpful experiments... they gave me a lot of ideas for data preprocessing!
Super glad to hear it!
Thank you so much. Your efforts are really appreciated.
Thanks a lot Greg, you have helped me a lot through these videos.
Wouldn't this be called Transformation techniques of preprocessing instead? I thought Dimensionality Reduction would be separate from Feature Engineering, with Feature Scaling making up the 3rd subtopic. So something like:
Dimensionality Reduction (removing features)
  - PCA
  - Clustering
Feature Engineering (creating/transforming features)
  - One-Hot
  - Binning
Feature Scaling (normalisation/standardisation of features)
  - Your scaling
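As a sketch of the feature-scaling bucket in this taxonomy, here is plain-numpy standardisation (rescaling each column to mean 0, std 1); the exact scaling used in the video may differ:

```python
import numpy as np

# Two columns on very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# Standardisation: subtract the column mean, divide by the column std.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.allclose(X_scaled.mean(axis=0), 0))  # True
print(np.allclose(X_scaled.std(axis=0), 1))   # True
```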
Thank you for posting this, i like all of your videos :)
Great to hear!!
At 4:30: after looking it up, "frac=1" sets the fraction of the dataset to sample (i.e. shuffle) to 100% of the total dataset, instead of, say, 25%. That wasn't really explained properly.
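For reference, this is how frac=1 behaves in pandas' DataFrame.sample, which is what the comment is describing (the toy DataFrame is just an example):

```python
import pandas as pd

df = pd.DataFrame({"x": range(5)})

# frac=1 means "sample 100% of the rows" (without replacement),
# which is the idiomatic pandas way to shuffle a DataFrame;
# frac=0.25 would instead return a random 25% of the rows.
shuffled = df.sample(frac=1, random_state=42).reset_index(drop=True)

print(len(shuffled))  # 5: every row is kept, only the order changes
print(sorted(shuffled["x"]) == [0, 1, 2, 3, 4])  # True
```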
Well explained, but why do you keep turning everything into numpy arrays? I do not think it is necessary.
I guess it's just a habit... Probably not necessary yeah
Great 🙌🏻, very helpful! Keep making more videos like this.
Thanks so much, will do!
many thanks
Many welcomes!
Do you have a tutorial on CatBoostEncoder?
This is very informative....
Thanks! Very useful!
great job thank you
You're very welcome!
Wish I could give more likes!
One will do ☺️
amazing tutorial
Thank you. I wish I was as smart as you, ughhh, but at least I learned something from this.
Amazing ❤
well explained thanks
You're very welcome!
Great video. Please talk slower for the beginners.
Thank you!
tnx
Superb
thank you bro
Thanks man
thank you so much
You are very welcome!!
thanks!
Love it
Great
Great vid, but slow down your speaking so less experienced people can follow along!
Thank you. If it is too fast, there is a slowdown option :)
@@GregHogg Yes, I can absolutely slow down the speed of the tutorial. Have a great day
@@beauclark2199 great! You too!!
The techniques you showed were very important, but I don't know why you were ruining it by training illogical models. I understand it was just for test purposes, but you could do better by training some logical models.
Too long and boring; this guy is not straightforward.
Ikr Screw this guy