Statistical Learning and Data Science
Germany
Joined 26 Feb 2019
I2ML - Random Forest - Bagging Ensembles
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
Views: 59
Videos
I2ML - Random Forests - Out-of-Bag Error Estimate
38 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Proximities
46 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Feature Importance
59 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Basics
88 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Tuning - In a Nutshell
33 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Non-Linear Models and Structural Risk Minimization
24 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Geometry of L2 Regularization
26 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Weight Decay and L2
10 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Geometry of L1 Regularization
20 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Bayesian Priors
21 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Early Stopping
7 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Other Regularizers
13 views · a month ago
SL - Regularization - Lasso Regression
15 views · a month ago
SL - Regularization - Ridge Regression
33 views · a month ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Introduction
35 views · a month ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Elastic Net and regularized GLMs
13 views · a month ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Information Theory - Information Theory for Machine Learning
187 views · 5 months ago
SL - Information Theory - Joint Entropy and Mutual Information II
58 views · 5 months ago
SL - Information Theory - Joint Entropy and Mutual Information I
106 views · 5 months ago
SL - Information Theory - KL Divergence
133 views · 5 months ago
SL - Information Theory - Cross Entropy and KL
70 views · 5 months ago
SL - Information Theory - Differential Entropy
375 views · 5 months ago
SL - Advanced Risk Minimization - Properties of Loss Functions
131 views · 8 months ago
SL - Advanced Risk Minimization - Bias-Variance Decomposition
112 views · 8 months ago
SL - Advanced Risk Minimization - MLE vs ERM II
83 views · 8 months ago
great video thanks, very clear
thank you
how do we get phi values in step 4?
LET'S GOOOOOOOO 💫💫 THANK YOU FOR TAKING YOUR TIME TO MAKE THESE VIDEOS💯💯💯💯❤❤
Is this algorithm inspired by k-means clustering?
Thank you! You explained things very clearly.
Very helpful video. The visualization in the OOB is very easy to understand. Thank you!
Great lecture! What if some of the variables to be optimized are limited to a certain range? Using a multivariate normal distribution to generate offspring might exceed the range limit?
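A common way to handle this concern (not specific to this lecture) is to reject offspring that leave the feasible box and, as a fallback, clip them to the bounds. A minimal numpy sketch; the function name and the bounds are hypothetical, chosen only for illustration:

```python
import numpy as np

def sample_offspring(mean, cov, lower, upper, n, rng, max_tries=100):
    """Draw n offspring from N(mean, cov); resample any draw that falls
    outside the box [lower, upper], clipping as a last resort."""
    out = []
    for _ in range(n):
        for _ in range(max_tries):
            x = rng.multivariate_normal(mean, cov)
            if np.all(x >= lower) and np.all(x <= upper):
                break
        out.append(np.clip(x, lower, upper))  # fallback if all tries failed
    return np.array(out)

rng = np.random.default_rng(0)
kids = sample_offspring(np.zeros(2), np.eye(2), -1.0, 1.0, n=5, rng=rng)
```

Clipping can pile mass on the boundary, so rejection is tried first; transformations to an unconstrained space (e.g. a logit reparameterization) are another option.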
Excuse me, but wouldn't z1+z2+z3+...+zT be (-1)^T/2 instead of (-1/2)^T? Anyway, you did a great job. Congratulations!!
first :)
It's so hard to hear what you're saying, please amplify the audio in post-processing on your future uploads. Excellent presentation nonetheless, you explained it so simply and clearly. <3
Thank you. We are still not "pros" with regards to all technical aspects of recording. Will try to be better in the future.
at 4:00 isn't the square supposed to be inside the square brackets?
Hi, great course! A small note, however: at 12:20 I think the function on the left might not be convex, as the tangent plane in the "light blue area" lies above the function rather than below it, which violates the definition of convexity (AFAIK such functions are called quasiconvex).
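For reference, the first-order characterization the comment appeals to: a differentiable function is convex exactly when every tangent plane lies below the graph, so a tangent plane lying above the graph anywhere rules out convexity (the function may still be quasiconvex):

```latex
f \text{ convex} \iff f(y) \ge f(x) + \nabla f(x)^{\top}(y - x)
\quad \text{for all } x, y.
```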
finally found out clear logic behind the weights. Thank you so much🎉
Nice video. I have a question: how do you calculate the Shapley value for a classification problem?
To not violate the axioms, do it in logit space.
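As a toy illustration of the reply: probabilities are bounded in [0, 1], so attributions are often computed on the log-odds scale instead, where additivity (efficiency) is unproblematic. Below is an exact Shapley computation for a hypothetical three-feature classifier; the toy model `v`, which values a coalition by its log-odds shift over the baseline, is purely an assumption for illustration:

```python
import math
from itertools import combinations

def logit(p):
    return math.log(p / (1 - p))

def shapley(value, n):
    """Exact Shapley values of the set function `value` over n players."""
    phi = [0.0] * n
    for j in range(n):
        others = [k for k in range(n) if k != j]
        for size in range(n):
            for S in combinations(others, size):
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                phi[j] += w * (value(set(S) | {j}) - value(set(S)))
    return phi

def v(S):
    # Toy classifier: each present feature raises the probability by 0.1
    # from a 0.5 baseline; coalition value is the log-odds shift.
    return logit(0.5 + 0.1 * len(S)) - logit(0.5)

phi = shapley(v, 3)
```

By the efficiency axiom the three values sum to the full model's log-odds shift, logit(0.8) = ln 4; since the toy features are symmetric, each gets one third of it.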
Thank you for doing this video in Numerator layout. It seems many videos on machine learning use Denominator layout but I definitely prefer Numerator layout! Is it possible you could do a follow-up video where you talk about partial derivative of scalar function with respect to MATRIX ? Most documents I've looked at seem to use Denominator layout for this type of derivative (some even use Numerator layout with respect to VECTOR, and then switch to Denominator layout with respect to MATRIX). I assume it's because Denominator layout preserves the dimension of the matrix, making it more convenient for gradient descent etc. What would you recommend I should do?
So, the permutation order was only to define which feature gets the random value, not to create a whole new instance with the feature order of the permutation? (The algorithm shows S, j, S-, but your example shows S, S-, j.)
Thank you!
Nice lecture. I also recommend Dr. Ahmad Bazzis convex optimization series.
Great work
Hi, sadly your github link doesnt work for me. Thanks for the video.
Can you explain more about the omitted variable bias in M-plots? My teacher told me that you can use a linear transformation to explain the green graph by transforming x1 and x2 into two independent random variables x1 and U. Is that true?
Thank you so much for this video. Consider the example at 7:20 -- doesn't it look like feature permutation? Shouldn't I use expected values for the other variables (x2, x3)? Thanks in advance.
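On the question above: the standard partial-dependence estimate does not plug in the expected values of the other features; it averages the model's predictions over the observed values of x2, x3, ..., which is a Monte Carlo estimate of the expectation over their joint distribution. A minimal numpy sketch with a hypothetical fitted model:

```python
import numpy as np

def partial_dependence(predict, X, j, grid):
    """PD of feature j: for each grid value, overwrite column j in ALL
    rows and average the predictions over the dataset."""
    pdp = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        pdp.append(predict(Xv).mean())
    return np.array(pdp)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
predict = lambda X: X[:, 0] ** 2 + X[:, 1]   # stand-in for a fitted model
grid = np.linspace(-2.0, 2.0, 5)
pdp = partial_dependence(predict, X, j=0, grid=grid)
```

For this additive toy model the PD curve of x1 recovers v² up to the constant mean of x2, which is exactly why averaging (rather than plugging in E[x2]) matters when features interact.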
Thanks
the material is not accessible right now, can someone reupload it ?
What do you mean by "pessimistic bias"?
If I understand correctly, it is pessimistic because you use, say, 90% of your available data as the training set and 10% as the test set. The model you evaluate is only trained on 90% of your data, but the final model that you use/publish will be trained on 100% of it. That final model will probably perform better than the 90% model, but you can't validate it because you have no test data left. So in the end you evaluate a model trained on 90% of the data, which is probably slightly worse than the model trained on 100%.
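The effect described above can be made concrete with a simulation: fit the same model on 90% of the data and on all of it, then compare both on a large fresh sample standing in for the true generalization error. A numpy sketch under toy assumptions (linear model, Gaussian noise); on average, the 90% model's error is slightly higher, which is the pessimistic bias of the holdout estimate:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit(X, y):
    # Ordinary least squares via lstsq
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

n, d = 100, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(size=n)

w_90 = fit(X[:90], y[:90])    # model the holdout estimate evaluates
w_100 = fit(X, y)             # model you would actually publish

# Large fresh sample approximates the true generalization error
X_new = rng.normal(size=(100_000, d))
y_new = X_new @ w_true + rng.normal(size=100_000)
err_90, err_100 = mse(w_90, X_new, y_new), mse(w_100, X_new, y_new)
```

Both errors sit just above the irreducible noise variance of 1; averaged over many repetitions, `err_90` exceeds `err_100`, though a single seed can go either way.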
good explanation madam
Good Explanation Madam
Hello, I'm a student at TU Dortmund; our lecture also uses your resources, but the link in the description is not working now. How can we get access to the resources?
Great Explanation!! Thank You
Excellent. Thanks
Finally some serious content
Thanks, Sir. Excuse me (zero statistics & mathematics background), but what does this video suggest about using the Brier score for measuring forecasts?
Nice explanation! You are using the same set of HP configurations λi, with i = 1, ..., N, throughout the fourfold CV (in the inner loop). But what happens if I want to use Bayesian hyperparameter search to sample the parameter values? For example, for each outer CV fold with its corresponding inner CV, could I use a Bayesian hyperparameter search? Then the set of HP configurations wouldn't be the same in each inner CV, so the question is: can the set of HP configurations differ in each inner CV, and is the nested cross-validation method still valid?
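Structurally, nothing in nested CV requires the inner search to evaluate the same candidates in every outer fold; the outer estimate only requires that each outer test fold never influences tuning. The sketch below uses a fixed grid as a stand-in for any inner search (Bayesian or otherwise) and may pick a different winner per outer fold; the ridge fit and scoring function are illustrative assumptions, not the lecture's code:

```python
import numpy as np

def nested_cv(X, y, candidates, fit, score, outer_k=5, inner_k=4, seed=0):
    """Nested CV: tune on inner folds of the outer-training data only,
    then score the retrained model on the untouched outer test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    outer = np.array_split(idx, outer_k)
    outer_scores = []
    for i in range(outer_k):
        test = outer[i]
        train = np.concatenate([outer[j] for j in range(outer_k) if j != i])
        inner = np.array_split(train, inner_k)

        def inner_score(lam):
            s = []
            for v in range(inner_k):
                val = inner[v]
                tr = np.concatenate([inner[u] for u in range(inner_k) if u != v])
                s.append(score(fit(X[tr], y[tr], lam), X[val], y[val]))
            return np.mean(s)

        best = max(candidates, key=inner_score)  # may differ per outer fold
        outer_scores.append(score(fit(X[train], y[train], best), X[test], y[test]))
    return float(np.mean(outer_scores))

def ridge_fit(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def neg_mse(w, X, y):  # higher is better
    return -np.mean((X @ w - y) ** 2)

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 5))
y = X @ rng.normal(size=5) + rng.normal(size=120)
est = nested_cv(X, y, candidates=[0.01, 0.1, 1.0, 10.0],
                fit=ridge_fit, score=neg_mse)
```

Replacing the grid with a per-fold Bayesian search changes only how `best` is found, not the validity of the outer estimate.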
awesome!! thanks so much!
thank you, well explained.
Here's a free resource on one of the most important academic concepts of the modern age: 800 views. GG humanity. GG
thank you for the explanation
Finally! a good explanation of this. Nice work
Do you have feature interaction explanation of random forest? I mean feature interaction using permutation.
Well explained, thanks!
Seeing how you can build any pipeline you can think of, I was thinking that building an AutoML system like TPOT would actually now be possible in the R world. Probably even something better! The idea is to optimize the pipeline with a genetic algorithm, like the ones the GA package provides. So I guess the GA would choose between different pipelines and then also, inside these pipelines, between different hyperparameters.
mlr3 is great! However, the lecturer really sounds like he has explained it too often. What would really be awesome would be a video to code along with. I may make a video like that in the future, but if the makers of the package would do it, the content would clearly be superior.
I am a Data Science professional and these videos are the best you can find on YouTube. Thank you and keep up the good work!
Excellent.
Good content, I am trying to wrap my head around 17:25 expectation -> Covariance, I will study to understand that part.
First
Hello, I wanted to ask whether there is a way to get in contact with you. I have questions about a random forest model in R.
Very impressed with the quality of your content thank you