So, first: how can we find out whether a dataset (a big dataset) is linearly separable or not? And can we say for sure that at the end of the transformation, even if overfitting happens, we will be able to separate the data linearly?
Can we create more features based on the separability of the dataset, or does it depend on the number of dimensions we use to separate the data? And is the perceptron model used in the real world nowadays?
The perceptron is not used much anymore, but it is one of the oldest "real" ML algorithms. All modern ML algorithms are formulated as optimization problems, but not the perceptron. It's the pure intuition of the person who came up with the update rule for theta and theta0 in the perceptron!
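To make that update rule concrete, here is a minimal sketch of the classic perceptron in NumPy (the toy dataset and function names are my own, not from the lecture): on every misclassified point, theta is nudged by y_i * x_i and theta0 by y_i, with no loss function or optimizer in sight.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classic perceptron for labels y in {-1, +1}.
    On each mistake: theta += y_i * x_i, theta0 += y_i."""
    n, d = X.shape
    theta = np.zeros(d)
    theta0 = 0.0
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            # A point is misclassified when y_i * (theta . x_i + theta0) <= 0
            if y[i] * (X[i] @ theta + theta0) <= 0:
                theta += y[i] * X[i]
                theta0 += y[i]
                mistakes += 1
        if mistakes == 0:  # converged (guaranteed only if data is separable)
            break
    return theta, theta0

# Tiny linearly separable toy example (illustrative only)
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
theta, theta0 = perceptron(X, y)
```

On separable data the loop eventually stops making mistakes; on non-separable data it never settles, which is one practical way to see why the optimization-based formulations took over.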
How do we decide what dimensions to opt for? And what is the application of going into these higher dimensions, or any use case for this, if you could clarify!
We try out higher and higher dimensions. For a complex dataset, we can sequentially try 1, 2, 3, ..., 15, etc. dimensions for the feature representation. It's a hyperparameter which we have to tune before the training of the ML algorithm begins.
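That tuning loop can be sketched in a few lines of NumPy. This is my own toy example, not from the lecture: the "dimension" being tried is the degree of a polynomial feature map, and each candidate is scored on a held-out validation split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 1-D dataset: quadratic signal plus noise
x = rng.uniform(-2, 2, 60)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.3, 60)

# Train/validation split: the validation set picks the feature dimension
x_tr, y_tr = x[:40], y[:40]
x_va, y_va = x[40:], y[40:]

def features(x, d):
    """Map scalar x to [1, x, x^2, ..., x^d] -- the dimension we tune."""
    return np.vander(x, d + 1, increasing=True)

val_err = {}
for d in range(1, 8):  # sequentially try dimensions 1, 2, 3, ...
    w, *_ = np.linalg.lstsq(features(x_tr, d), y_tr, rcond=None)
    val_err[d] = np.mean((features(x_va, d) @ w - y_va) ** 2)

best_d = min(val_err, key=val_err.get)
```

Because the choice is made on validation error rather than training error, dimensions high enough to overfit get penalized automatically, which is exactly why the dimension is treated as a hyperparameter rather than something learned during training.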
Brilliant! You make the problem so easy to understand.
Thanks!
Lecture 13 done. Q: Do we have to increase the number of features as we go up in dimensions?
And was this the real MIT course that you took during your PhD, sir? And were you taught the same way? I love your teaching through examples.
Yes, this was the course I took during my PhD. Thanks for your kind words and support!
Sir, can you assign some Python exercises? I'm not able to translate my thinking into code.
13th lecture done!
Great! Thanks for following along
thank you!
God's work here.
Done, sir. Thanks!