ML Teach by Doing Lecture 13: Magic of features

  • Published 15 Nov 2024

COMMENTS • 16

  • @ARUNSARKAR-wl2dp 6 months ago +1

    Brilliant, how you make the problem so easy to understand.

  • @abishekdhakal5786 6 months ago +2

    Lecture 13 done. Q: Do we have to increase the number of features as we move up to higher dimensions?

  • @neetpreparation1874 7 months ago +1

    And was this the real MIT course that you took during your PhD, sir? And were you taught the same way? Love your teaching by giving examples.

    • @vizuara 7 months ago

      Yes, this was the course I took during my PhD. Thanks for your kind words and support!

  • @EliAhm-p2z 3 months ago

    So, first: how can we find out whether a (big) dataset is linearly separable or not? And at the end of the transformation, can we say for sure that we will be able to separate the data linearly, even if overfitting happens?
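
    One practical way to probe the first part of this question is a heuristic check, sketched below: fit a (nearly) hard-margin linear classifier and see whether it fits the training set perfectly. LinearSVC and the large-C trick are illustrative choices, not something from the lecture; an exact answer would need a linear-programming feasibility test. (For the second part: any finite set of distinct points can be made linearly separable with a rich enough feature map, which is exactly the overfitting risk the question hints at.)

    ```python
    from sklearn.datasets import make_blobs
    from sklearn.svm import LinearSVC

    def looks_linearly_separable(X, y):
        """Heuristic: a (nearly) hard-margin linear SVM that reaches 100%
        training accuracy strongly suggests the classes are linearly separable."""
        clf = LinearSVC(C=1e6, max_iter=100_000)  # large C approximates a hard margin
        clf.fit(X, y)
        return clf.score(X, y) == 1.0

    # Two well-separated Gaussian blobs: should report True
    X, y = make_blobs(n_samples=200, centers=[[-5, -5], [5, 5]], random_state=0)
    print(looks_linearly_separable(X, y))
    ```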

  • @prathameshj994 4 months ago +1

    Sir, can you assign some Python exercises? Because I am not able to translate my thinking into code.

  • @neetpreparation1874 7 months ago +1

    Can we make more features on the basis of the separability of the dataset, or does it depend on the number of dimensions we choose for separating the data? And is the perceptron model used nowadays in the real world?

    • @vizuara 7 months ago

      The perceptron is not used much anymore, but it is one of the oldest "real" ML algorithms. All modern ML algorithms are formulated as optimization problems, but not the perceptron. It's the pure intuition of the person who came up with the update rule for theta and theta0 in the perceptron!
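
      For readers who want to see that update rule concretely, here is a minimal sketch of the classic perceptron, assuming labels in {-1, +1}; the names theta and theta0 follow the reply's notation.

      ```python
      import numpy as np

      def perceptron(X, y, n_epochs=100):
          """Classic perceptron. X: (n, d) data matrix, y: labels in {-1, +1}."""
          n, d = X.shape
          theta = np.zeros(d)   # weight vector
          theta0 = 0.0          # offset
          for _ in range(n_epochs):
              mistakes = 0
              for i in range(n):
                  # a point is misclassified when y_i * (theta . x_i + theta0) <= 0
                  if y[i] * (X[i] @ theta + theta0) <= 0:
                      theta = theta + y[i] * X[i]   # the intuitive update for theta
                      theta0 = theta0 + y[i]        # ... and for theta0
                      mistakes += 1
              if mistakes == 0:  # converged: every point is classified correctly
                  break
          return theta, theta0

      # toy usage on a separable dataset; y must be in {-1, +1}
      X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
      y = np.array([1, 1, -1, -1])
      theta, theta0 = perceptron(X, y)
      ```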

  • @rajubalasubramaniyam9002 7 months ago +1

    How do we decide what dimensions to opt for? And what is the application of going into these higher dimensions, or any use case for this, if you could clarify!

    • @vizuara 7 months ago +1

      We try out higher and higher dimensions. For a complex dataset, we can sequentially try 1, 2, 3, ... 15, etc. dimensions for the feature representation. It's a hyperparameter which we have to fine-tune before the training of the ML algorithm begins.
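
      A minimal sketch of this "try higher and higher dimensions" idea, assuming scikit-learn and a toy two-circles dataset; the pipeline, degree range, and CV setup are illustrative choices, not the lecture's code.

      ```python
      from sklearn.datasets import make_circles
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures, StandardScaler
      from sklearn.linear_model import Perceptron
      from sklearn.model_selection import cross_val_score

      # Two concentric circles: not linearly separable in the original 2D space
      X, y = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)

      # Treat the degree of the feature map as a hyperparameter and try it sequentially
      for degree in range(1, 6):
          model = make_pipeline(
              PolynomialFeatures(degree),  # lift inputs into a higher-dimensional feature space
              StandardScaler(),
              Perceptron(max_iter=1000),
          )
          score = cross_val_score(model, X, y, cv=5).mean()
          print(f"degree={degree}: mean CV accuracy = {score:.3f}")
      ```

      Degree 1 (the raw features) should hover near chance on this dataset, while degree 2 and above can separate the circles, which is the "magic of features" the lecture's title refers to.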

  • @praveenchaturvedi2761 6 months ago +3

    13th lecture done.

    • @vizuara 6 months ago

      Great! Thanks for following along

  • @helrod6131 1 month ago

    thank you!

  • @meoe00n 1 month ago

    God's work here

  • @prathameshj994 4 months ago

    Done sir, thanks!