Thank you for these ML videos! I will buy your book to support your work
Thanks a lot @كاي بيدرام
Thank you for the clarifications; I just finished this part in your book. And thank you as well for the extra lecture on stacking.
Simple and complete explanation, thanks prof!
Thanks for these super helpful and amazing tutorials!
can't wait for the rest of the course ♥!
Hi Sebastian,
I liked the videos and the level of detail. One thing I noticed about the function used for calculating alpha: it gives more importance (in absolute magnitude) to a weak learner with a higher error rate as well. The only thing that happens is that a negative sign gets attached to the output, leading to choosing the other class (assuming binary classification with classes {-1, 1}). In the video, at 27:08, you mentioned that a classifier with high error is not important for prediction. Let me know if I am missing something.
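To make the commenter's point concrete, here is a minimal sketch, assuming the standard discrete AdaBoost weighting alpha = 0.5 * ln((1 - error) / error) that the lecture uses; the function name is just illustrative:

import math

def adaboost_alpha(error):
    # Standard (discrete) AdaBoost classifier weight:
    # alpha = 0.5 * ln((1 - error) / error)
    return 0.5 * math.log((1.0 - error) / error)

# error < 0.5: positive alpha (the learner's vote counts as-is)
# error = 0.5: alpha = 0 (random guessing, no influence)
# error > 0.5: negative alpha of equal magnitude (the vote is flipped)
for eps in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"error={eps:.1f} -> alpha={adaboost_alpha(eps):+.3f}")

So a learner with error 0.9 gets the same magnitude of influence as one with error 0.1, just with its vote inverted, which is exactly the observation above.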
At 26:17, "Misclassified instances gain higher weights, so the next classifier is more likely to classify them correctly": I think what you said was the opposite of this, which doesn't seem correct.
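For context, this is the reweighting step being discussed, as a small sketch assuming the common multiplicative update with renormalization; the variable names are illustrative, not taken from the lecture code:

import numpy as np

def update_weights(w, alpha, misclassified):
    # Misclassified samples are multiplied by exp(+alpha) (weight grows),
    # correctly classified ones by exp(-alpha) (weight shrinks),
    # and the weights are renormalized to sum to 1.
    w = w * np.exp(np.where(misclassified, alpha, -alpha))
    return w / w.sum()

w = np.full(5, 0.2)                                 # uniform starting weights
miss = np.array([True, False, False, True, False])  # samples the stump got wrong
print(update_weights(w, 0.5, miss))                 # misclassified samples now weigh more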
Thank you for the details.
You haven't used the validation set.
This looks like a great method. However, the iterative "weighting" process applies a lot of corrections. Wouldn't that result in an overfitting issue?
In practice, I would say it does not overfit more than other algorithms, necessarily. It's actually better than most non-ensemble classifiers, but that might be something to look into on a selection of datasets. I think the reason it doesn't suffer from overfitting that much is that (a) the decision trees are still just one level deep, and (b) you consider the ensemble of decision trees from the different rounds instead of just the last tree.
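One way to sanity-check this empirically is the sketch below, which compares train and test accuracy of AdaBoost over one-level stumps on a synthetic dataset (it assumes a recent scikit-learn where the keyword is "estimator"; older releases call it "base_estimator"):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# AdaBoost over 1-level decision stumps, as described above
model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=100,
    random_state=1,
)
model.fit(X_train, y_train)
print("train acc:", model.score(X_train, y_train))
print("test acc: ", model.score(X_test, y_test))

If the gap between train and test accuracy stays small as n_estimators grows, that supports the observation that boosting shallow stumps tends not to overfit badly.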
@SebastianRaschka It makes sense, thank you very much for the explanation!
Thanks Man. It’s really a good tutorial ❤.
Sir, could you please post here the link to the lecture PDFs that you mentioned?
I need to add all the links to the PDFs to the video descriptions sometime. For now, all the lecture slides can be found here: sebastianraschka.com/pdf/lecture-notes/stat451fs20/
What is meant by a model being "expensive"?
Good question. Here, I meant that it is computationally expensive, i.e., it takes a long time to run and/or requires more computational resources than other simpler models.
Sebastian, I have a question. I am following this course while reading your book (Machine Learning with PyTorch ...). In your book, you code a perceptron model from scratch in Python. Do we need to know the code behind these algorithms, like the ID3 tree or the AdaBoost code? Do we need to go into the anaconda3 libraries, search for the algorithms, and actually understand the code behind them? Or is it sufficient to only know how to call them from the scikit-learn library? I am asking because I assume that to become a machine learning engineer you have to know the code behind the algorithms and actually be able to code them yourself. Or am I completely wrong?
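To make the contrast in the question concrete, here is a minimal from-scratch perceptron sketch in the spirit of the book's chapter (a simplified illustration, not the book's exact code), next to the corresponding scikit-learn one-liner:

import numpy as np

class Perceptron:
    """Minimal Rosenblatt perceptron for binary labels in {0, 1}."""

    def __init__(self, lr=0.1, n_epochs=10):
        self.lr = lr
        self.n_epochs = n_epochs

    def fit(self, X, y):
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        for _ in range(self.n_epochs):
            for xi, yi in zip(X, y):
                # Update the weights only when the prediction is wrong
                error = yi - self.predict(xi)
                self.w += self.lr * error * xi
                self.b += self.lr * error
        return self

    def predict(self, X):
        # Threshold the net input at zero
        return (np.dot(X, self.w) + self.b >= 0.0).astype(int)

# The scikit-learn equivalent is a single import and call:
# from sklearn.linear_model import Perceptron
# clf = Perceptron().fit(X_train, y_train)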