26:38 -- I just wrote simple code instead of using a ColumnTransformer, and applied the FunctionTransformer on the Fare column.
Accuracy improved, as you said.
Thanks
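For anyone trying the same thing, here is a minimal sketch of applying a FunctionTransformer to just the Fare column without a ColumnTransformer. The tiny DataFrame is a made-up stand-in for the Titanic data used in the video:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import FunctionTransformer

# Toy stand-in for the Titanic frame from the video (values assumed)
df = pd.DataFrame({"Age": [22.0, 38.0, 26.0], "Fare": [7.25, 71.28, 7.92]})

# Apply log1p to Fare only, leaving Age untouched -- no ColumnTransformer needed
log_tf = FunctionTransformer(np.log1p)
df["Fare"] = np.asarray(log_tf.fit_transform(df[["Fare"]])).ravel()
```

The same effect could be had with `np.log1p` directly; wrapping it in a FunctionTransformer is useful when you later want to put it inside a Pipeline.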
I think there is a mistake in applying cross-validation on the entire dataset, because ideally it should be applied to the training data only. This prevents information leakage from the test set into the training process. The purpose of the test set is to simulate unseen data and evaluate the final model's performance. If you perform cross-validation before the train-test split, you may inadvertently use information from the test set, leading to an overly optimistic assessment of the model's performance.
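A short sketch of the order being suggested here, on a synthetic dataset: split first, then cross-validate on the training portion only, keeping the test set untouched for the final evaluation.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=200, random_state=42)

# Split first, so the test set never influences model selection
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Cross-validate on the training data only
scores = cross_val_score(LogisticRegression(max_iter=1000), X_train, y_train, cv=5)
print(scores.mean())
```

The held-out `X_test`/`y_test` would then be scored exactly once, with the final fitted model.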
Marvelous knowledge
Thanks sir for your efforts😀
This was mindblowingly awesome! Thanks Nitish
Sir, you are the best.. you know everything..
I want to ask: is cross-validation used on the training data only, or can it be applied over the whole X and y?
One correction: I think the square root transformation works well on left-skewed data, not the square transformation.
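Rather than arguing from memory, one can check empirically how each transform affects skewness on synthetic data (exponential samples as a stand-in for skewed features; `scipy.stats.skew` is positive for right skew, negative for left skew):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Right-skewed data (positive skew) and left-skewed data (negative skew)
right = rng.exponential(scale=2.0, size=5000)
left = 10.0 - rng.exponential(scale=0.5, size=5000)

# Square root pulls in the long right tail; squaring stretches the high end
print(skew(right), skew(np.sqrt(right)))  # skew moves toward 0
print(skew(left), skew(left ** 2))        # skew moves toward 0
```

On samples like these, the square root reduces right skew and squaring reduces left skew; which one helps a given real column is best confirmed by measuring skew before and after.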
Should we remove outliers first, or can we remove outliers after applying the transformer?
So does handling skewed or non-linear data mean scaling the data and feeding it to the model directly? Also, is it used in EDA or feature engineering? And can we use StandardScaler or any other scaler for handling skewed or non-linear data?
Hi Sir, I started watching your videos, very informative, but I got some errors while using some other similar data. Can you help explain and correct my doubts? I would really appreciate it. Thanks
finished watching and coding
When we apply the FunctionTransformer to the Age column of the training data, it becomes left-skewed and not normally distributed. Is it the same for anyone else?
@Campusx Sir, if the data is skewed or not normally distributed, does that mean we have outliers? For removing outliers we use trimming or capping. And if we use a log transform, will it adjust the outliers and convert the data into a normal distribution?
I don't think non-normal data means the presence of outliers. The binomial and chi-square distributions are not normal; does that mean they have outliers?
Even after converting to a normal distribution, you can still get rid of the outliers by removing values beyond the 3rd or 4th standard deviation.
Thanks man
#campusx @18:08, why did we use the mean value? It might be affected by outliers. I think the median would be better. What's your opinion?
Thank you sir
I think the Age column was more right-skewed than the Fare column.
Can you provide us the OneNote notes for the 100 Days of ML series?
amazing
If my dataset has 10 columns and a few of them are left-skewed, a few are right-skewed, and some are normally distributed, how do we handle those? Do we apply different transformations on a column-by-column basis?
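One common way to do exactly that is a ColumnTransformer that routes each group of columns to its own transformer and passes the already-normal ones through unchanged. A sketch with hypothetical column names:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import FunctionTransformer

# Hypothetical frame: one right-skewed, one left-skewed, one roughly normal column
df = pd.DataFrame({
    "right_skewed": [1.0, 2.0, 50.0],
    "left_skewed": [0.1, 0.8, 0.9],
    "normal": [-1.0, 0.0, 1.0],
})

ct = ColumnTransformer(
    transformers=[
        ("log", FunctionTransformer(np.log1p), ["right_skewed"]),     # shrink right tail
        ("square", FunctionTransformer(np.square), ["left_skewed"]),  # stretch high end
    ],
    remainder="passthrough",  # leave already-normal columns as-is
)
out = ct.fit_transform(df)
```

The output columns come in transformer order first (log, then square), with the passthrough columns appended at the end.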
Why do we do a log transformation when we have standardization?
thanks sir !!
Does this transform control the outliers, sir?
Sir, do log, standardization, and normalization all have the same use? In all three concepts we bring the data into a normal range. Am I right?
If there is a fillna function, then why do we use the SimpleImputer on the train and test data?
even i want to know this
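One likely reason: SimpleImputer learns its fill value (e.g. the mean) from the training data alone and reuses it on the test data, whereas a naive `fillna(X_test.mean())` would leak the test set's own statistics. A small sketch with made-up Age values:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical train/test frames with missing Age values
X_train = pd.DataFrame({"Age": [20.0, 30.0, np.nan, 40.0]})
X_test = pd.DataFrame({"Age": [np.nan, 25.0]})

imputer = SimpleImputer(strategy="mean")
X_train_f = imputer.fit_transform(X_train)  # mean learned from train only: 30.0
X_test_f = imputer.transform(X_test)        # same train mean reused on test
```

The missing test value is filled with 30.0 (the training mean), not with a statistic computed from the test set. The imputer also slots cleanly into a Pipeline, which plain fillna does not.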
Sir, if we can use the central limit theorem, then why do we use such transformations to convert a PDF to the normal distribution?
Day - 30
Thank You.
Where did the improvement happen after cross-validation, Sir? I didn't understand.
Doubt: Sir, LR assumes the residuals to be normally distributed, not the data. So why do we suddenly start making the features normally distributed? If the residuals of LR are not normally distributed, that means the relationship is not linear, and we try to make those columns normally distributed which are not linear with the target variable. This is my understanding. Please explain.
Hi Yash... nice question... I had the same question. Yes, we know there is an assumption that the residuals should be normally distributed, but also, when your data is normally distributed, in certain algorithms like LR and LogR the model performance gets better, as we can reduce heteroscedasticity in the model. When your data is normally distributed, the statistical power of the model increases. Hope you got your answer; if you know anything more, please feel free to share.
Why is the output without the ML pipeline [survived = arr[1]] and the output with the ML pipeline [not survived = arr[0]] not the same?
Sir, why did you apply cross-validation on the entire data? That way the before-and-after results will not come out right... :)
Which part of feature engineering does the function transformer come under?
To make features normally distributed
@@beautyisinmind2163 Is it a method of handling outliers, or should outliers be handled differently?
@@parthraghuwanshi2929 For handling outliers you can use other methods; transformation is especially for making features normal for linear models like LR, LogR, NB, etc.
I'm not getting the coding part. Can anyone please suggest something?
Sir, doesn't normalization also do the same thing?
Normalization does not guarantee a normal distribution.
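This is easy to verify: min-max normalization is a positive linear map, so it rescales the data but leaves the shape of the distribution (and hence the skewness) unchanged. A quick check on synthetic right-skewed data:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
x = rng.exponential(size=5000)  # right-skewed sample

# Min-max normalization: shifts and rescales, but does not change the shape
x_norm = (x - x.min()) / (x.max() - x.min())
print(skew(x), skew(x_norm))  # the two skew values match
```

A log or power transform changes the shape; normalization only changes the range.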
I badly need this answer; I have looked in a few places but I'm not getting the correct answer! Is variable transformation (function and power transforms) considered a feature transformation technique, or is it another part of feature engineering that is not included in feature transformation?
done
Hello Sir,
After applying the FunctionTransformer, my X_train shape is changing from (719, 2) to (80, 7). Can you help me understand why this is happening? My X_test shape is intact. I am following your GitHub but still facing issues. Please help
It is resolved
I have a doubt in data preprocessing.
First we remove outliers --> feature scaling --> Gaussian transformation,
or
remove outliers --> Gaussian transformation --> feature scaling? Kindly help
Second one
Feature scaling should be applied last.
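That ordering (distribution transform first, scaling last) can be expressed directly as a Pipeline, so the steps always run in the right sequence. A minimal sketch:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

# Transform toward normality first, then scale the transformed values
pipe = Pipeline(steps=[
    ("log", FunctionTransformer(np.log1p)),
    ("scale", StandardScaler()),
])

X = np.array([[1.0], [10.0], [100.0], [1000.0]])
X_out = pipe.fit_transform(X)
```

The output of the log step is what the scaler sees, so the scaled values end up with mean 0 and unit variance on the log scale.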
This project is difficult to understand.