Statistical Learning and Data Science
I2ML - Random Forest - Bagging Ensembles
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
Views: 59

Videos

I2ML - Random Forests - Out-of-Bag Error Estimate
38 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Proximities
46 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Feature Importance
59 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Basics
88 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Tuning - In a Nutshell
33 views · 21 days ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Non-Linear Models and Structural Risk Minimization
24 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Geometry of L2 Regularization
26 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Weight Decay and L2
10 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Geometry of L1 Regularization
20 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Bayesian Priors
21 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Early Stopping
7 views · 21 days ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Other Regularizers
13 views · a month ago
SL - Regularization - Lasso Regression
15 views · a month ago
SL - Regularization - Lasso vs. Ridge
11 views · a month ago
SL - Regularization - Ridge Regression
33 views · a month ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Introduction
35 views · a month ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Elastic Net and regularized GLMs
13 views · a month ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Information Theory - Information Theory for Machine Learning
187 views · 5 months ago
SL - Information Theory - Joint Entropy and Mutual Information II
58 views · 5 months ago
SL - Information Theory - Joint Entropy and Mutual Information I
106 views · 5 months ago
SL - Information Theory - KL Divergence
133 views · 5 months ago
SL - Information Theory - Cross Entropy and KL
70 views · 5 months ago
SL - Information Theory - Differential Entropy
375 views · 5 months ago
SL - Information Theory - Entropy II
71 views · 5 months ago
SL - Information Theory - Entropy I
155 views · 5 months ago
I2ML - Evaluation - In a Nutshell
122 views · 7 months ago
SL - Advanced Risk Minimization - Properties of Loss Functions
131 views · 8 months ago
SL - Advanced Risk Minimization - Bias-Variance Decomposition
112 views · 8 months ago
SL - Advanced Risk Minimization - MLE vs ERM II
83 views · 8 months ago

COMMENTS

  • @virgenalosveinte5915
    @virgenalosveinte5915 15 days ago

    great video thanks, very clear

  • @fayezalhussein7115
    @fayezalhussein7115 24 days ago

    thank you

  • @rebeenali4317
    @rebeenali4317 27 days ago

    how do we get phi values in step 4?

  • @gamuchiraindawana2827
    @gamuchiraindawana2827 a month ago

    LET'S GOOOOOOOO 💫💫 THANK YOU FOR TAKING YOUR TIME TO MAKE THESE VIDEOS💯💯💯💯❤❤

  • @holthuizenoemoet591
    @holthuizenoemoet591 2 months ago

    Is this algorithm inspired by k-means clustering?

  • @errrrrrr-
    @errrrrrr- 2 months ago

    Thank you! You explained things very clearly.

  • @jackychang6197
    @jackychang6197 2 months ago

    Very helpful video. The visualization in the OOB is very easy to understand. Thank you!

  • @convel
    @convel 3 months ago

    Great lecture! What if some of the variables to be optimized are limited to a certain range? Using a multivariate normal distribution to generate offspring might exceed the range limits?

  • @MarceloSilva-cm5mg
    @MarceloSilva-cm5mg 4 months ago

    Excuse me, but wouldn't z1 + z2 + z3 + ... + zT be (-1)^T/2 instead of (-1/2)^T? Anyway, you did a great job. Congratulations!!

  • @fiNitEarth
    @fiNitEarth 5 months ago

    first :)

  • @gamuchiraindawana2827
    @gamuchiraindawana2827 5 months ago

    It's so hard to hear what you're saying; please amplify the audio in post-processing on your future uploads. Excellent presentation nonetheless, you explained it so simply and clearly. <3

    • @berndbischl
      @berndbischl 5 months ago

      Thank you. We are still not "pros" with regards to all technical aspects of recording. Will try to be better in the future.

  • @bertobertoberto242
    @bertobertoberto242 6 months ago

    at 4:00 isn't the square supposed to be inside the square brackets?

  • @bertobertoberto242
    @bertobertoberto242 7 months ago

    Hi, great course! However, a small note: at 12:20, I think the function on the left might not be convex, as the tangent plane in the "light blue area" is on top of the function, not below it, which violates the definition of convexity (AFAIK such functions are called quasiconvex)...
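
    For reference, the first-order condition the comment appeals to: a differentiable function $f$ is convex iff, for all $x, y$ in its domain,

      $f(y) \ge f(x) + \nabla f(x)^\top (y - x)$,

    i.e. the tangent plane at any point lies on or below the graph. A function whose tangent plane lies above the graph somewhere fails this condition, although it can still be quasiconvex (all sublevel sets convex).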

  • @rohi9594
    @rohi9594 7 months ago

    Finally found the clear logic behind the weights. Thank you so much 🎉

  • @weii321
    @weii321 8 months ago

    Nice video. I have a question: how do you calculate Shapley values for a classification problem?

    • @zxynj
      @zxynj 7 months ago

      To avoid violating the axioms, do it in logit space.

  • @twist777hz
    @twist777hz 8 months ago

    Thank you for doing this video in Numerator layout. It seems many videos on machine learning use Denominator layout, but I definitely prefer Numerator layout! Is it possible you could do a follow-up video where you talk about the partial derivative of a scalar function with respect to a MATRIX? Most documents I've looked at seem to use Denominator layout for this type of derivative (some even use Numerator layout with respect to a VECTOR and then switch to Denominator layout with respect to a MATRIX). I assume it's because Denominator layout preserves the dimensions of the matrix, making it more convenient for gradient descent etc. What would you recommend I do?
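
    For reference, the layout conventions the comment contrasts (a standard summary, not from the video): for a scalar $f$ and a matrix $X \in \mathbb{R}^{m \times n}$, denominator layout defines $\left(\frac{\partial f}{\partial X}\right)_{ij} = \frac{\partial f}{\partial X_{ij}}$, so the derivative has the same $m \times n$ shape as $X$ and an update like $X \leftarrow X - \eta\,\frac{\partial f}{\partial X}$ type-checks directly; numerator layout gives the $n \times m$ transpose. For example, with $f(X) = \operatorname{tr}(AX)$ and $A \in \mathbb{R}^{n \times m}$, the denominator-layout derivative is $A^\top$, while the numerator-layout derivative is $A$.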

  • @mxzeromxzero8912
    @mxzeromxzero8912 10 months ago

    So, was the permutation order only used to define which feature gets the random value, not to create a whole new instance with the feature order the same as the permutation order? (The algorithm shows S, j, S-, but your example shows S, S-, j.)

  • @mxzeromxzero8912
    @mxzeromxzero8912 10 months ago

    Thank you!

  • @fanhbz1018
    @fanhbz1018 11 months ago

    Nice lecture. I also recommend Dr. Ahmad Bazzi's convex optimization series.

  • @shubhibans
    @shubhibans a year ago

    Great work

  • @maxgh8534
    @maxgh8534 a year ago

    Hi, sadly your GitHub link doesn't work for me. Thanks for the video.

  • @jengoesnuts
    @jengoesnuts a year ago

    Can you explain more about the omitted variable bias in M-plots? My teacher told me that you can use a linear transformation to explain the green graph by transforming x1 and x2 into two independent random variables x1 and U. Is that true?

  • @ocamlmail
    @ocamlmail a year ago

    Thank you so much for this video. Consider the example at around 7:20 -- doesn't it look like feature permutation? Shouldn't I use expected values for the other variables (x2, x3)? Thanks in advance.

  • @hkrish26
    @hkrish26 a year ago

    Thanks

  • @appliedstatistics2043
    @appliedstatistics2043 a year ago

    The material is not accessible right now; can someone reupload it?

  • @yt-1161
    @yt-1161 a year ago

    What do you mean by "pessimistic bias"?

    • @sogari2187
      @sogari2187 a year ago

      If I understand correctly, it is pessimistic because you use, let's say, 90% of your available data as the training set and 10% as the test set. So the model you test is only trained on 90% of your data, but the final model that you use/publish will be trained on 100% of the data. That final model will probably perform a bit better than the one trained on 90%, but you can't validate it because you have no test data left. In the end, you evaluate a model trained on 90% of the data, which is probably slightly worse than the model trained on 100% of the data.
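
      A toy sketch of that point in Python (assuming scikit-learn; the dataset and model are illustrative only, not from the lecture):

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, random_state=0)

        # Hold back a large "fresh" set only so the comparison is visible in this toy example.
        X_avail, X_fresh, y_avail, y_fresh = train_test_split(X, y, test_size=0.5, random_state=0)

        # 90/10 holdout on the available data: the estimate you can actually compute.
        X_tr, X_te, y_tr, y_te = train_test_split(X_avail, y_avail, test_size=0.1, random_state=0)
        holdout_estimate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

        # Final model refit on 100% of the available data, checked on the fresh set.
        final_performance = LogisticRegression(max_iter=1000).fit(X_avail, y_avail).score(X_fresh, y_fresh)

        # The holdout estimate tends to sit slightly below the final model's performance.
        print(holdout_estimate, final_performance)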

  • @kcd19923
    @kcd19923 a year ago

    good explanation madam

  • @kcd19923
    @kcd19923 a year ago

    Good Explanation Madam

  • @appliedstatistics2043
    @appliedstatistics2043 a year ago

    Hello, I'm a student at TU Dortmund. Our lecture also uses your resources, but the link in the description is not working now. How can we get access to the resources?

  • @namrathasrimateti9119
    @namrathasrimateti9119 a year ago

    Great Explanation!! Thank You

  • @Parthsarthi41
    @Parthsarthi41 a year ago

    Excellent. Thanks

  • @vaibhav_uk
    @vaibhav_uk a year ago

    Finally some serious content

  • @Rainstorm121
    @Rainstorm121 2 years ago

    Thanks, Sir, but excuse me (zero statistics & mathematics background): what does this video suggest about using the Brier score for measuring forecasts?
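
    For reference (a standard definition, not specific to this video): for binary forecasts, the Brier score is the mean squared difference between the predicted probability and the observed outcome,

      $\mathrm{BS} = \frac{1}{n} \sum_{i=1}^{n} (\hat{p}_i - y_i)^2, \quad y_i \in \{0, 1\},$

    so lower is better; 0 means perfect forecasts, and a constant forecast of 0.5 scores 0.25.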

  • @guillermotorres4988
    @guillermotorres4988 2 years ago

    Nice explanation! You are using the same set of HP configurations λi, with i = 1, ..., N, throughout the fourfold CV (in the inner loop). But what happens if I want to use Bayesian hyperparameter optimization to sample the parameter values? For example, for each outer CV fold with its corresponding inner CV, could I use a Bayesian hyperparameter search? Then the set of HP configurations wouldn't be the same in each inner CV, so the question is: can the set of HP configurations differ in each inner CV, and is this nested cross-validation method still valid?
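
    A minimal sketch of the idea in Python with scikit-learn (a randomized search stands in here for the Bayesian optimizer; the data and parameter ranges are illustrative only):

      from scipy.stats import loguniform
      from sklearn.datasets import make_classification
      from sklearn.model_selection import RandomizedSearchCV, cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=500, random_state=0)

      # Inner loop: a search that samples its own candidate configurations each time it runs.
      inner_search = RandomizedSearchCV(SVC(), {"C": loguniform(1e-2, 1e2)}, n_iter=10, cv=4)

      # Outer loop: 3-fold CV around the whole tuning procedure. The candidate sets may
      # differ across outer folds; the outer score still estimates the tuned procedure.
      outer_scores = cross_val_score(inner_search, X, y, cv=3)
      print(outer_scores.mean())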

  • @dsbio4671
    @dsbio4671 2 years ago

    awesome!! thanks so much!

  • @oulahbibidriss7172
    @oulahbibidriss7172 2 years ago

    thank you, well explained.

  • @canceledlogic7656
    @canceledlogic7656 2 years ago

    Here's a free resource on one of the most important academic concepts of the modern age: 800 views. GG humanity. GG

  • @manullangjihan2100
    @manullangjihan2100 2 years ago

    thank you for the explanation

  • @joshuaharrison6708
    @joshuaharrison6708 2 years ago

    Finally! A good explanation of this. Nice work.

  • @user-ff5bd8lm7r
    @user-ff5bd8lm7r 3 years ago

    Do you have an explanation of feature interactions for random forests? I mean feature interaction using permutation.

  • @BlueQualityRhythm
    @BlueQualityRhythm 3 years ago

    Well explained, thanks!

  • @BlueQualityRhythm
    @BlueQualityRhythm 3 years ago

    Seeing how you can build any pipeline you can think of, I was thinking that building an AutoML system like TPOT would actually now be possible in the R world. Probably even something better! The idea is to optimize the pipeline with a genetic algorithm, like the ones the GA package provides. So I guess the GA would choose between different pipelines and then also, inside those pipelines, between different hyperparameters.
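
    For reference, the tool the comment mentions works roughly like this in Python (a minimal sketch assuming the tpot package; an mlr3/GA analogue would follow the same pattern):

      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from tpot import TPOTClassifier

      X, y = load_digits(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # A genetic algorithm evolves whole sklearn pipelines, choosing both the
      # operators and their hyperparameters.
      tpot = TPOTClassifier(generations=5, population_size=20, random_state=0, verbosity=2)
      tpot.fit(X_train, y_train)
      print(tpot.score(X_test, y_test))
      tpot.export("best_pipeline.py")  # writes the winning pipeline out as Python code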

  • @BlueQualityRhythm
    @BlueQualityRhythm 3 years ago

    mlr3 is great! However, the lecturer really seems like he has explained it too often. What would really be awesome is a video to code along with. I may make a video like that in the future, but if the makers of the package did it, the content would clearly be superior.

  • @vladimiriurcovschi1657
    @vladimiriurcovschi1657 3 years ago

    I am a Data Science professional, and these videos are the best you can find on YouTube. Thank you and keep up the good work!

  • @OppaMack
    @OppaMack 3 years ago

    Excellent.

  • @OppaMack
    @OppaMack 3 years ago

    Good content. I am trying to wrap my head around the expectation -> covariance step at 17:25; I will study to understand that part.
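
    The step the comment asks about is presumably the standard identity (stated here for reference, not taken from the video): $\operatorname{Cov}(X, Y) = \mathbb{E}[XY] - \mathbb{E}[X]\,\mathbb{E}[Y]$, so a term $\mathbb{E}[XY]$ can be rewritten as $\operatorname{Cov}(X, Y) + \mathbb{E}[X]\,\mathbb{E}[Y]$; in the bias-variance decomposition such cross terms vanish when the two factors are independent, since their covariance is then zero.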

  • @fahaddeshmukh3444
    @fahaddeshmukh3444 3 years ago

    First

  • @dustinrosenfeld9459
    @dustinrosenfeld9459 3 years ago

    Hello, I wanted to ask whether there is a way to get in touch with you. I would have questions about a random forest model in R.

  • @fuckooo
    @fuckooo 3 years ago

    Very impressed with the quality of your content, thank you.