Tutorial 29: R square and Adjusted R square Clearly Explained | Machine Learning

  • Published 7 Nov 2024

COMMENTS • 144

  • @Emotekofficial 5 years ago +68

    The sum of squared residuals, also called the sum of squared errors (SSE), and the sum of squares due to regression (SSR) are easy to mix up, so new students should be careful.
    Y = individual data points, Yreg = predicted regression points, Ymean = average of the individual data points
    SSE = Σ(Y - Yreg)^2
    SSR = Σ(Yreg - Ymean)^2
    so,
    SST = SSE + SSR = Σ(Y - Ymean)^2
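
    The decomposition above can be checked numerically. A minimal pure-Python sketch, with made-up data and a plain least-squares line (every name here is illustrative):

```python
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
x_mean = sum(x) / n
y_mean = sum(y) / n

# Closed-form simple linear regression: slope b and intercept a
b = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) \
    / sum((xi - x_mean) ** 2 for xi in x)
a = y_mean - b * x_mean
y_reg = [a + b * xi for xi in x]

sse = sum((yi - yr) ** 2 for yi, yr in zip(y, y_reg))  # error (residual) sum
ssr = sum((yr - y_mean) ** 2 for yr in y_reg)          # regression sum
sst = sum((yi - y_mean) ** 2 for yi in y)              # total sum

print(abs((sse + ssr) - sst) < 1e-9)            # True: SST = SSE + SSR
print(abs((1 - sse / sst) - ssr / sst) < 1e-9)  # True: both R^2 forms agree
```

    Note that the identity SST = SSE + SSR holds exactly for least-squares fits with an intercept; for other fits the two forms of R^2 can disagree.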

    • @porselvans6172 3 years ago

      Thank you, now understood well

    • @porselvans6172 3 years ago

      @Ahmed Kellen didn't they ask money

    • @ShashwatAgarwal007 3 years ago

      Hey, can you help me with the 'N' here: is it the total number of features or the total number of data points?

    • @GamerBoy-ii4jc 3 years ago +1

      @@ShashwatAgarwal007 Big N is the size of the population and small n is the number of samples we take from the population.

  • @blindprogrammer 2 years ago +3

    Initially:
    N=1000
    R^2=0.85
    p=5 (initially)
    adjusted R_Squared = 1 - ((1-0.85)(1000-1)/(1000-5-1)) = 0.84925
    1. suppose a new non-correlated variable is added; R^2 still creeps up, but only barely:
    N=1000
    R^2=0.8501 (suppose new R^2)
    p=6 (new)
    adjusted R_Squared = 1 - ((1-0.8501)(1000-1)/(1000-6-1)) = 0.84919
    2. suppose a new correlated variable is added:
    N=1000
    R^2=0.92 (suppose new R^2)
    p=6 (new)
    adjusted R_Squared = 1 - ((1-0.92)(1000-1)/(1000-6-1)) = 0.91952
    As we can notice, adding a non-correlated predictor barely moves R^2, so the (N-1)/(N-p-1) penalty wins and the adjusted R_squared decreases, while it increases on adding a correlated predictor. Hope it helps!
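
    The arithmetic in examples like this one is easy to sandbox in code. A small sketch of the adjusted-R^2 formula; the R^2 inputs are assumed values for illustration:

```python
def adjusted_r2(r2, n, p):
    # adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# n = 1000 samples, p = 5 predictors, assumed R^2 = 0.85
base = adjusted_r2(0.85, 1000, 5)
# a useless predictor nudges R^2 up only slightly: the penalty wins
useless = adjusted_r2(0.8501, 1000, 6)
# a predictive feature lifts R^2 substantially: adjusted R^2 rises
useful = adjusted_r2(0.92, 1000, 6)

print(round(base, 5), round(useless, 5), round(useful, 5))
# 0.84925 0.84919 0.91952
```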

    • @shivampal9282 2 months ago

      But it decreased from the initial adjusted R^2, so how do we find out that the new feature is correlated?

  • @MrPrashanth55 5 years ago +10

    SSR means Sum of the Squares of the Residuals
    SST - Sum of the Squares of the Total....

  • @aryanudainiya9486 2 years ago

    best teacher of ML on the youtube

  • @tanvipunjani7096 3 years ago +2

    I am glad I came across this tutorial. Very well explained !

  • @NeerjaChawla 11 months ago +1

    very informative and useful content, lucid explanation

  • @sakshirikhe2869 4 years ago +2

    It's very excellent and detailed explanation for a beginner!!!

  • @kavururajesh1760 4 years ago +1

    Explained in detailed manner keep doing

  • @kinnaryraval 3 years ago +4

    Hi Krish, nicely explained, but I have a query. R-square will always increase, whether it is calculated against a significant or an insignificant feature. So it's not that R-sq will be less for non-correlated features and more for correlated ones; it will increase blindly. So how can you say that adjusted R-sq will decrease when the added attributes are non-correlated, when R-sq will still increase, making adjusted R-sq = 1 - smaller_number? I hope my question is a bit clear. Thanks and respect, sir!! (v).

  • @mohammad.anas7777 2 years ago

    Naik sir,
    Is p the total number of independent features, or only those independent features which we added later?
    Also, can we say that N is the total number of columns in the dataset?
    If so, should we count those columns which have irrelevant data, like the ticket serial number or passenger name in the Titanic dataset?

  • @anishchhabra6085 2 years ago +1

    Can you please explain how the SSres will decrease as we try to add a new independent variable?

  • @bobbypathak123 3 years ago

    Wow.. thanks so much Krish. This was the best explanation i found

  • @akshaymote3430 9 months ago +1

    I didn't get one thing: even in adjusted R2, whether there's correlation or not is not taken into consideration. So, by just considering the number of variables, how does the correlation issue get addressed?

  • @anuradhadevi1414 2 years ago

    You explained it very well, sir, thank you sir

  • @balaramg89 2 years ago

    N - total sample size, indicates no of rows in the model?

  • @ayushmaheshwari5805 4 years ago +2

    please tell why SS res decrease as we increase the feature
    please explain ?

  • @praneethcj6544 4 years ago

    Very intuitive explanation..!!! You have been such an inspirational instructor ..!!!!

  • @durgakorde3589 2 years ago

    Thanks a lot Krish 🙂its really helpful

  • @sushilpoudel8091 2 months ago

    very helpful video, thank you sir

  • @independent7212 3 years ago

    Thank you so much sir for your great support by making such videos.

  • @nilupulperera 4 years ago +1

    Very interesting Krish. As always you stimulate us to think and learn.

  • @kalyanreddy6260 2 years ago

    R-square means SSR/SST only, right? Why the '1 -' before that? Just asking because some Excel videos show only SSR/SST.

  • @reddy764 5 years ago +1

    Can you suggest good book for Machine Learning ?

  • @ankursingh5969 2 years ago

    Krish, R-square will increase in both cases, whether the variable is correlated with the dependent variable or not; hence it results in a decrease in Adj R-square in both cases. However, the magnitude will be different.

  • @Priyadarshan123 4 years ago +3

    Hello sir, I am making a project on income and health expenses, my r-squared value comes out less than 1%. What should i interpret from this? Should i change my linear model or try other? What should i do?

    • @kitagrawal3211 2 years ago

      you should add another feature which is correlated to the target variable. Low R-squared means that your independent feature and target variable are not correlated. You can confirm this by computing the correlation between them

  • @shubhamprasad6910 3 years ago +3

    Which variable in the adjusted R^2 equation is related to correlation? It is not R^2, and all the other variables have nothing to do with correlation. Is it the ratio (n-1)/(n-p-1)?

    • @akshaymote3430 9 months ago

      Even I have same question. There should be something more in the formula of R2 adjusted which will take correlation into account.

  • @burhanuddinraja7209 3 years ago

    Sir, but if p increases, won't N also increase, since both involve the independent variables? So the denominator will always be zero.

    • @kitagrawal3211 2 years ago

      N is the number of samples, not the number of predictors. For a dataframe of shape (m, n), the number of samples is m and the number of predictors is n.

  • @anubhavgupta8146 4 years ago +1

    Brother, how can anyone teach this simply? 👍

  • @rajeshdhyani3114 3 years ago

    Well Explained

  • @akshaykrishnan7985 5 years ago +5

    Good morning sir. Please do upload a video with explanation of what exactly is p-value. Getting confused with it. I hope atleast your explanation would give more clarity.

  • @abhinavjain5561 3 years ago

    In adjusted R2 there is R2.
    But whether the feature is correlated or not, the R2 value will increase, so how are we able to say anything about adjusted R2?

  • @adylmanulat2465 2 years ago

    Good day sir, I just wanted to ask: if an independent variable is not significant and has no explanatory power in the model, but removing it lowers the adjusted R-square, what does this imply? So far the only reason I know of is that its t-statistic is greater than one. With this information, what can we infer?

  • @woblogs2941 4 years ago

    Thank you sir u made the things veery easy

  • @voramb123 4 years ago

    Very interesting and excellent but requested to give examples to evaluate situations

  • @hakkamadan9941 3 years ago

    beautiful explanation sirji

  • @srinagabtechabs 3 years ago

    Excellent explanation.. thank u very much

  • @hemachand5617 4 years ago

    Let's say I have 10 features and some R-square value is calculated. Later it is found that 4 of the features are uncorrelated with the target. Now the 1-R2 value is not going to change, and so neither does the adjusted R2 value. Can you correct me if I'm analyzing it wrong? I'm assuming a simple linear regression model, not lasso.

  • @hanman5195 5 years ago +7

    Never found this kind of explanation anywhere, ever.
    I will not follow any hollow heroes except Sadhguru and you.

  • @SandeepKumar-ie1ni 4 years ago

    Sir, as you said, in order to avoid negative values in the residuals we square the terms in SSres and SStot. But sir, if we apply the modulus (absolute value) to both instead of squaring, what would change in the R value? On squaring, the R value gets larger, reaching towards 1 more easily, which depicts our model as fitting well. Please answer, sir.

  • @firta_banjara 4 years ago

    hi krish,
    if we add features with high error then the SSres increases , but if we add features with low error then SSres decreases

  • @varunkukade7971 4 years ago +3

    You said, using the 1st formula, that even if an independent feature is not related, the R^2 value increases; that was the drawback. But at 14:18 in the video you say that if the feature is not related then we would get a smaller R^2 value from the 1st formula. I got confused here. Please clear up my confusion; I will be glad. Please 🙌🙌🙏

    • @ayushmishra-sw4po 4 years ago

      No. Even if the feature is not correlated with the output variable, the value of R-square will increase; that's why we use the adjusted R-square. If the feature is not correlated, the adjusted value will decrease...
      Maybe he said that by mistake.

    • @kitagrawal3211 2 years ago

      he meant that for the same features, if they are correlated with the target variable, you will get a higher R2 value and a smaller value if they are uncorrelated.

  • @Kmrabhinav569 2 years ago

    Well done

  • @pranjalgupta9427 4 years ago

    Awesome video and explaination

  • @mawais2560 4 years ago +1

    what are possible interpretations and justifications for low r square values in management science?

  • @kumarvaibhav5325 3 years ago

    Sir, it would be great if you could complement this with an example.

  • @abhi9029 3 years ago

    Hi Krish, At the end of your each sentence while explanation please make the same rhythm of the speech. What happen here is at the end of your sentence you make your voice very low so this creates confusion while listening.

  • @alishaparveen5226 2 years ago

    Could you please explain with any example from scratch with multi output in regression?? I want to predict 2 output (distance travelled and velocity) from the dataset.

  • @kishanpandey4798 5 years ago

    If I have 10 features and I need to know which features affect the output y and which do not, do I need to find the correlation between y and each feature separately? If yes, how? If not, what should I do? Krish, please reply. Thanks.

    • @deepakgehani 5 years ago +1

      You can do EDA: do a pairplot, check correlations, put them on a heatmap, and later you can apply a machine learning algorithm.

    • @kishanpandey4798 5 years ago

      @@deepakgehani thanks a lot. I will apply this and revert back to you in case I face any other issue. Thanks again

    • @praneethcj6544 4 years ago

      You need to perform a chi-square test if both the input & output variables are categorical, ANOVA for a categorical-continuous pair, and finally Pearson correlation when both are continuous...!!!

    • @praneethcj6544 4 years ago

      You write in a loop all the variables and check correlation.

    • @mranaljadhav8259 4 years ago

      You have many ways to find this: firstly, you can find the correlation between them using a heatmap or the corr method; secondly, you can find the VIF value of the features; lastly, you can check the standard error using the OLS method.
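
      The first suggestion (plain correlation of each feature with the target) can be sketched in pure Python. The data and feature names are made up; in practice pandas' df.corr() or a heatmap does the same in one call:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient, written out by hand."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

features = {
    "x1": [1, 2, 3, 4, 5],   # moves exactly with the target
    "x2": [5, 3, 4, 1, 2],   # moves against the target
}
target = [2, 4, 6, 8, 10]

for name, col in features.items():
    print(name, round(pearson(col, target), 3))
# x1 1.0
# x2 -0.8
```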

  • @sagarpandya7865 3 years ago

    Great explanation Thank you

  • @hemantdas9546 4 years ago +1

    What does this mean that R square will always increase when feature is added. This means when features are increased predictions are better. Is it so?

    • @kulpreetsingh9064 4 years ago

      No bro, That will depend whether the features getting added are correlated or not. If the features getting added are not correlated with the target variable then the adjusted R square will decrease, however if they are correlated then naturally adjusted R square will also increase.

    • @ayushmishra-sw4po 4 years ago +1

      Adding features will automatically increase R-square, since more features decrease the value of SSres, even if the feature is not related to the output variable. A model with many features can perform better in-sample than out-of-sample, so in such cases adjusted R-square works better.

  • @sahilbhatia2671 3 years ago

    very well explained

  • @biswajitnayaak 2 years ago

    i am not 100% sure if this is correct when you say it needs to be squared (Actual - Predicted) because of negative value but i suspect its for the outliers

  • @richasharma5949 4 years ago

    Good explanation, but it would be better to add an example. That way it will become more clear :)

    • @deepknowledge2505 4 years ago

      Please see if this could help you
      ua-cam.com/video/3SoK930HWL0/v-deo.html

  • @utkarshsalaria3952 3 years ago +1

    Sir, at the end of the video you said that R^2 never decreases when independent features are added, even if the feature is not correlated. Then how can you say that adjusted R^2 will decrease when R^2 is less (at 14:16)? That can never be true if R^2 is always increasing. It has actually confused me; please help if anyone knows.

    • @rohandogra5421 3 years ago +1

      Yup I also have the same problem

    • @tiverekarrahul 2 years ago +1

      1) If the added features are correlated with the target, R2 grows much faster than the penalty term containing the number of features (p). Hence Adj. R2 also increases.
      2) If the added features are not correlated (or only weakly correlated) with the target, R2 grows more slowly than the penalty term, so Adj. R2 barely rises and can even decrease. That is what is meant by penalized: it is not allowed to grow at the same rate as in the correlated-features case.

  • @anubhasinha2557 4 years ago +2

    Nicely explained... Can you help me with difference between Sum of Residual and Cost function? Looks like both have same formula.

    • @ayushmishra-sw4po 4 years ago +2

      Actually both are essentially the same: the sum of squared residuals is the sum of the squared differences between the predicted and actual data points, and the least-squares cost function is built from that same sum.

    • @anubhasinha2557 4 years ago

      @@ayushmishra-sw4po Thanks Ayush!!!

  • @ayantikabhowmik1261 4 years ago

    Great explanation Sir!

  • @harishgoud6772 5 years ago

    Sir SSR means sum of squares of residuals.

  • @mahalerahulm 4 years ago

    Wonderful Explanation !!

  • @gauravjoshi9764 3 years ago

    i just wanna know this total sample size is total number of columns or total number of rows

    • @kitagrawal3211 2 years ago

      sample size is total number of rows. predictors are total number of columns

  • @ruchiyadav1334 3 years ago +5

    I didn't understand anything

  • @keerthanpu808 8 months ago

    How did you take the average line in the graph (on what basis)?

    • @AkashRusiya 5 months ago +1

      It's simply the arithmetic mean of target variable's "actual" values.

  • @bhavanasree7573 4 years ago

    What do we do next if we find that R-square is small? It says the model isn't a good fit, but is there any way to improve the model after learning that R-squared is low, or do we use some other method for this model?

  • @seemaarya598 4 years ago

    How we can say adj r square is significant or not

  • @sangitakhade1730 3 years ago

    what is the meaning of penalize

  • @tannurohela6192 2 years ago

    Hey, I didn't get the term Penalizing. In the video just before explaining Adjusted R square, it was said that "it is not Penalizing the new added features". Can someone please elaborate.

  • @rachanagovekar1683 3 years ago +2

    What are these 33 dislikes for ? Is your language different :-D, Awesome explanation Krish, hats off

    • @adityasagarr 3 years ago

      maybe in search of hindi content

  • @tonnysaha7676 3 years ago

    Thank you sir🙏

  • @gopakumar138 4 years ago

    very useful video

  • @amitanand8485 5 years ago

    Thanks .. Explained beautifully

  • @adipurnomo5683 3 years ago

    Fantastic course!. I hope you doing well sir .

  • @ParallelUniverse550 3 years ago

    Can R square be considered as training accuracy?

    • @kitagrawal3211 2 years ago

      Yes, it is a performance metric; in practice, adjusted R-squared is used more often.

  • @kewalagrawal6539 4 years ago +2

    This is the problem with our education system...everything is just formula based...you started off with the formula without even giving any intuition about what actually R2 and adjusted R2 mean...what does a 50% R2 tell you...formula and maths always come last...you should first make your students visualize what these terms mean without using any maths at all...once they are good with it...then you bring the formula

  • @shaz-z506 5 years ago +1

    Thank you Krish that's the good explanation.

  • @datascience6718 4 years ago +1

    Sir, what is the meaning of penalize in terms of machine learning?

    • @ayushmishra-sw4po 4 years ago +4

      Here 'penalize' means we are adding an extra predictor which is of no use, so it will decrease the value of adjusted R-sq.

    • @datascience6718 4 years ago

      @@ayushmishra-sw4po thank you so much

  • @emilyme9478 3 years ago

    Awesome

  • @ravitadiboina6065 4 years ago

    Why does the R2 value not decrease when features are added? Is there any theory behind it?

    • @kitagrawal3211 2 years ago

      Yes. With least squares, adding a feature can never increase the residual sum of squares (setting the new coefficient to 0 recovers the old fit), so R-squared will either remain the same or increase.

  • @manzarabbas6312 4 years ago

    Amazing !!!!!

  • @vjukulkarni6057 4 years ago

    Hi krish can u please suggest how to explain the algorithm in interview

  • @subhamsaha2235 3 years ago

    Still not clear to me; can anyone help me out?
    In the case of an un-correlated or correlated variable, if p increases then N will also increase, and R2 obviously increases, so how is it penalizing?

    • @kitagrawal3211 2 years ago

      N is constant here because it's the number of samples, whereas p is the number of predictors.

  • @saifsalim6084 5 years ago

    In which condition, SSR will be greter than SST?

    • @ayushmishra-sw4po 4 years ago +1

      As we increase the number of independent feature the value of SSR will increase

    • @nilaykushawaha2666 3 years ago

      If the model prediction is worst than the average prediction we have assumed in SST

  • @sardarsahib3993 5 years ago

    superb

  • @mayurisagiraju7928 5 years ago

    thank you so much...It helped

  • @snigdharay8847 5 years ago

    If these two are different, then why does everyone say that R-square and adjusted R-square are the same, and why do we always look at the adjusted R-square in the output?

    • @generationwolves 5 years ago +9

      R-Squared and Adj R-Squared are NOT the same.
      For Simple Linear Regression, the R-Squared and Adj. R-Squared values will almost be similar. You can just check the R-Squared value to evaluate your model's goodness of fit.
      For multiple Linear Regression, you will find that no matter what, the R-Squared value will keep increasing as you add new features (even if the new feature is not correlated to the dependent variable). This leads you to believe that the new feature (independent variable) you've added is contributing to building a better model, which is not the case. The adjusted R-Squared function provides a penalty mechanism that reduces the overall value if the new feature is not contributing to the model. This metric is usually considered to evaluate the goodness of fit (in the case of Multiple Linear Regression), especially when you're using a Feature Selection method like Step-Wise Regression.
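
      The penalty mechanism described above can be demonstrated numerically. This sketch uses made-up data and solves OLS via the normal equations; it adds a pure-noise column and shows that plain R-squared never drops, while adjusted R-squared is free to fall:

```python
import random

def ols_r2(X, y):
    """R^2 of an ordinary least-squares fit (normal equations + Gaussian elimination)."""
    rows = [[1.0] + list(r) for r in X]                 # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):                                  # forward elimination
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):                        # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    pred = [sum(w * v for w, v in zip(beta, r)) for r in rows]
    y_mean = sum(y) / len(y)
    sse = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    sst = sum((yi - y_mean) ** 2 for yi in y)
    return 1 - sse / sst

def adjusted(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

random.seed(0)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]          # feature unrelated to y
y = [2 * v + random.gauss(0, 1) for v in x1]

r2_base = ols_r2([[v] for v in x1], y)
r2_more = ols_r2([[v, w] for v, w in zip(x1, noise)], y)

print(r2_more >= r2_base - 1e-9)   # True: plain R^2 never drops when a feature is added
print(adjusted(r2_base, n, 1), adjusted(r2_more, n, 2))  # adjusted R^2 may drop
```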

  • @ganesanr2307 4 years ago

    Since R-square is the squared value of r, how can it get a negative value?
    R-square is always 0 to 1; it will never ever be a negative number.

    • @linuxrhel6107 4 years ago +1

      There is no such value of R, only R Square is the terminology used for this formula. Check out the formula for R Square.

    • @ganesanr2307 4 years ago

      R is the Correlation Coefficient

    • @meetmeraj2000 4 years ago +1

      R-squared can be a negative value if the model fits worse than the horizontal mean line.

  • @nileshsuryavanshi8792 4 years ago +1

    very well explained, thank you sir.

  • @Dyslexic_Neuron 1 year ago +1

    not a satisfactory explanation as to how R adjusted takes care of non correlated value, just hacking the formula doesnt make it very clear. The intuition and the reason for adding sample size is not explained properly.
    Overall not a good explanation

  • @shubhamkundu2228 3 years ago

    A little confusing regarding the use of adjusted R-square! So when we add more independent variables to the model, R-square will always increase; then adjusted R-square checks whether an independent variable is not correlated with the target variable and lowers the value.
    Does that mean that during feature selection we should keep those independent features that are correlated with the target/output variable and drop the others?
    Aren't we supposed to take independent variables that are not correlated with each other? So why penalize the ones that are not correlated? For independent variables that are correlated with each other, we could drop them!

  • @tejas4054 1 year ago

    When will you stop saying "particular"?

  • @machinelearningchefs3525 4 years ago

    Correct yourself: R-squared = SumSquareRegression/SumSquareTotal, and this quantity cannot be negative.
    SST = SSR + SSE.
    So SST > SSE; there is no chance of R-squared being negative. This is what happens when you teach without a good understanding of the concepts behind it. You have more than 150K subscribers; do not mislead them.
    From a mathematical standpoint, R-square is the ratio of the variation explained by the model to the variation in the data.

    • @jagannathgirisaballa 4 years ago +2

      R2 compares the fit of the chosen model with that of a horizontal straight line (the null hypothesis). If the chosen model fits worse than a horizontal line, then R2 is negative. Note that R2 is not always the square of anything, so it can have a negative value without violating any rules of math. R2 is negative only when the chosen model does not follow the trend of the data, so it fits worse than a horizontal line.
      Example: fit data to a linear regression model constrained so that the Y intercept must equal 1500.
      i.stack.imgur.com/CHpzE.png
      The model makes no sense at all given these data. It is clearly the wrong model, perhaps chosen by accident.
      The fit of the model (a straight line constrained to go through the point (0,1500)) is worse than the fit of a horizontal line. Thus the sum-of-squares from the model (SSreg) is larger than the sum-of-squares from the horizontal line (SStot). R2 is computed as 1 - SSreg/SStot. When SSreg is greater than SStot, that equation computes a negative value for R2.
      With linear regression with no constraints, R2 must be positive (or zero) and equals the square of the correlation coefficient, r. A negative R2 is only possible with linear regression when either the intercept or the slope is constrained so that the "best-fit" line (given the constraint) fits worse than a horizontal line. With nonlinear regression, R2 can be negative whenever the best-fit model (given the chosen equation, and its constraints, if any) fits the data worse than a horizontal line.
      Bottom line: a negative R2 is not a mathematical impossibility or the sign of a computer bug. It simply means that the chosen model (with its constraints) fits the data really poorly.
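
      A tiny numeric illustration of this point (the values are invented): with R^2 defined as 1 - SSres/SStot, any model that predicts worse than the mean gives a negative value:

```python
y_true = [10.0, 12.0, 11.0, 13.0]   # actual values, mean = 11.5
y_pred = [20.0, 21.0, 19.0, 22.0]   # a clearly wrong model's predictions

y_mean = sum(y_true) / len(y_true)
ss_model = sum((a - p) ** 2 for a, p in zip(y_true, y_pred))  # model sum-of-squares
ss_total = sum((a - y_mean) ** 2 for a in y_true)             # horizontal-line sum-of-squares
r2 = 1 - ss_model / ss_total
print(r2 < 0)  # True: the model fits worse than the mean line
```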

    • @jagannathgirisaballa 4 years ago +3

      This person has put in a great degree of time and effort which is an indication of his passion. The reason he has 150K subscribers is that the followers are able to make sense of what he is saying. And dude, logically what will he gain by misleading them. Is he preaching some religion???? I checked your UA-cam channel...surprised that you are commenting without having uploaded a single video?? I recommend that first of all we learn to appreciate the person and even if there is a mistake in something he is saying(to err is human!), lets show some humility in pointing it out.

    • @machinelearningchefs3525 4 years ago

      @@jagannathgirisaballa Hi, I understand that you have no idea about ML or stats. I don't need to have uploaded videos to comment on others' videos. Anyway, I have a PhD in ML/Computer Vision. I don't want to get into a fight with you. Chill and follow his videos.

    • @krishnaik06 4 years ago +2

      Buddy, chill. Whatever I explain is based on practical experience, which means I have proof of everything I do. Anyhow, you are highly qualified; I think you should share your knowledge with everyone, and I would also love to see some implementations from your end. And no, I do not mislead anyone. You can check my LinkedIn profile; these videos have helped people clear interviews. So, in conclusion, "misleading" is a very wrong term to use here, and for a highly qualified person like you it doesn't suit you at all. Cheers, stay safe and healthy. I would also suggest you go through this link:
      stats.stackexchange.com/questions/12900/when-is-r-squared-negative

    • @jagannathgirisaballa 4 years ago +1

      @@machinelearningchefs3525 bro, I will be the first person to accept that I have no idea of ML or stats. And that's my excuse of being here and watching the video. So, bro with a PhD, whats your excuse of being here and watching the video? Checking out the opposition? :-) anyways, peace brother. I am here for learning and would love to learn from anyone..apologies if my comment hurt your feelings. not intentional.

  • @harisjoseph117 3 years ago

    Thank you Krish. Nice explanation.

  • @prateeksachdeva1611 2 years ago

    Very well explained

  • @SACHINKUMAR-px8kq 2 years ago

    Thankyou so much sir

  • @pratiknabriya5506 5 years ago

    Thanks...very well explained.