Curse of Dimensionality Easily Explained | Machine Learning

  • Published 3 Jan 2025

COMMENTS • 38

  • @manujkumarjoshi9342
    @manujkumarjoshi9342 3 years ago +31

    I have a perfect example... my instructor gave me a 2.5-hour lecture on this and I was still confused, but in a 7-minute video you made it clear. It's a curse of dimensionality... great.

  • @masalaaa3
    @masalaaa3 2 years ago +3

    One concern: when you use the word "exponentially", you are using it incorrectly. The exponential increase required for reliable modeling is in the sample size, not in the number of features. Thank you for the good illustration though. Time: 5:55
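    A minimal sketch of the point above, assuming a simple grid-coverage argument (illustrative only, not from the video): to keep the same sampling density along each feature axis, the required sample size grows exponentially with the number of features d.

        points_per_axis = 10          # keep ~10 sample points per feature axis
        for d in (1, 2, 3, 5, 10):
            print(f"{d} feature(s) -> ~{points_per_axis ** d:,} samples needed")
        # 1 feature(s) -> ~10 samples needed ... 10 feature(s) -> ~10,000,000,000 samples needed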

  • @lewiduressa
    @lewiduressa 2 months ago

    Great video!

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago +1

    Watched it a second time for better understanding and coding practice. Thanks

  • @discoverdevops5368
    @discoverdevops5368 4 years ago +2

    In very simple terms: if you are working with a large number of dimensions, discovering patterns becomes challenging. That is the curse of dimensionality.

  • @ga43ga54
    @ga43ga54 5 years ago +5

    Please make a video on the math behind t-SNE.... Great video!! Thank you

  • @cmbharathi2064
    @cmbharathi2064 4 years ago

    simply explained.. thank you Krish

  • @salihsarii
    @salihsarii 1 year ago

    Thank you for this simple explanation :)

  • @skyman7290
    @skyman7290 5 years ago +8

    Thanks for the effort. But what you explained is not the curse of dimensionality; it is simply increasing the number of model parameters, which leads to overfitting. The curse of dimensionality is about high-dimensional data whose pairwise distance distribution becomes almost independent of the data (see the sketch after this thread).

    • @krishnaik06
      @krishnaik06  5 years ago +1

      Hello my dear friend. I think you are confusing overfitting, underfitting and the curse of dimensionality. Overfitting and underfitting usually happen when you don't select the right hyperparameters for the machine learning algorithm you are using. Here we are discussing attributes/features. Today I will also be uploading a video on overfitting and underfitting.
      Thanks
      Krish

    • @skyman7290
      @skyman7290 5 years ago +3

      @@krishnaik06 No, I am not confused. When you have more features, your model needs more parameters, so you increase its complexity, and that leads to overfitting when you have little training data. The solution is either regularizing the parameters or reducing the dimension (of course, increasing the training data also helps). The curse of dimensionality is a different subject; please at least see the Wikipedia page: en.wikipedia.org/wiki/Curse_of_dimensionality

    • @adityachandra2462
      @adityachandra2462 4 years ago +2

      @@skyman7290 "The predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but then decreases, which is known as the Hughes phenomenon or peaking phenomenon." That is what I found at the link you gave; kindly check the article before commenting on Krish's video. Peace!!

    • @subho2859
      @subho2859 4 years ago

      @@krishnaik06 Is it only the choice of hyperparameters that is responsible for underfitting and overfitting, or is not selecting the right features also responsible?

    • @majortom-ey7yj
      @majortom-ey7yj 4 months ago

      @@krishnaik06 Surely it happens because the number of samples needed increases exponentially as the number of dimensions increases, even when the dimensions themselves increase only linearly?
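    A minimal sketch of the distance-concentration effect raised in this thread, assuming NumPy and SciPy and uniformly distributed points (illustrative only): as the number of dimensions grows, the spread of pairwise distances shrinks relative to their mean, so the nearest and farthest neighbours become almost equally far away.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        for d in (2, 10, 100, 1000):
            x = rng.random((500, d))   # 500 random points in the d-dimensional unit cube
            dists = pdist(x)           # all pairwise Euclidean distances
            # relative spread of distances shrinks as d grows: distances "concentrate"
            print(f"d={d:4d}  std/mean of pairwise distances = {dists.std() / dists.mean():.3f}")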

  • @akashgayakwad9550
    @akashgayakwad9550 5 years ago +5

    How do we know the threshold value for feature selection?

  • @louerleseigneur4532
    @louerleseigneur4532 3 years ago

    Thanks Krish

  • @vijaynale7893
    @vijaynale7893 5 years ago

    Thank you so much bro.. waiting for your next video

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago +1

    Great explanation Krish. No need to make notes. Just understand. Thanks

  • @redreaper8652
    @redreaper8652 1 year ago

    Basically diminishing returns?

  • @subho2859
    @subho2859 4 years ago +1

    Does the curse of dimensionality only happen when the number of features increases exponentially?

  • @HARSHRAJ-2023
    @HARSHRAJ-2023 5 years ago

    Eagerly waiting for your next video. Please upload soon.

  • @Albertrose.24
    @Albertrose.24 3 years ago

    Thanks for explaining clearly sir..

  • @eneskosar.r
    @eneskosar.r 5 years ago

    Very well explained. Easy to understand.

  • @chaitanyamallepudi3531
    @chaitanyamallepudi3531 4 years ago +3

    Could we select features using the L2 regularization coefficients, keeping the features whose coefficients are not shrunk? Would that reduce the curse of dimensionality?
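    A minimal sketch of regularization-based feature selection, assuming scikit-learn and its synthetic make_regression data (illustrative only; note that an L1/Lasso penalty, rather than L2, is what actually drives uninformative coefficients to exactly zero):

        from sklearn.datasets import make_regression
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import Lasso

        # 100 features, only 10 of which carry any signal
        X, y = make_regression(n_samples=500, n_features=100, n_informative=10, random_state=0)

        # The L1 penalty shrinks uninformative coefficients to exactly zero,
        # so SelectFromModel keeps only the surviving features.
        selector = SelectFromModel(Lasso(alpha=1.0)).fit(X, y)
        X_reduced = selector.transform(X)
        print(X_reduced.shape)   # roughly (500, 10): far fewer dimensions than before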

  • @senadredzic8835
    @senadredzic8835 4 years ago

    Well explained! Thanks

  • @jayasimhayenumaladoddi1602
    @jayasimhayenumaladoddi1602 2 years ago

    Can you please make a video on OLPP?

  • @dholearihant6011
    @dholearihant6011 4 years ago

    Sir, thanks for the video. I am new to machine learning and this helped me.

  • @MarcelloNesca
    @MarcelloNesca 5 years ago

    This was a great video explained very easily!

  • @jamalnuman
    @jamalnuman 9 months ago

    Great

  • @manjunath.c2944
    @manjunath.c2944 5 years ago

    Kindly do a video on the chunking and lazy learners methods

  • @hsin-yuku4086
    @hsin-yuku4086 4 years ago

    What does accuracy mean here? Does it refer to the model's ability to predict?

  • @anithaani4672
    @anithaani4672 2 years ago

    Thank you bro

  • @somubd
    @somubd 5 years ago

    Thank You

  • @trantrungnghia9642
    @trantrungnghia9642 4 months ago

    TALK ENGLISH
