Stacking Ensemble Learning | Stacking and Blending in Ensemble Machine Learning

  • Published 1 Nov 2024

COMMENTS • 127

  • @atomicbreath4360
    @atomicbreath4360 3 years ago +5

    First of all, thanks for the video.
    Bagging: we take different models and train them in parallel, each on a subset of the total data; each base model has high variance and low bias.
    Boosting: similar, but instead of training in parallel, the models are trained sequentially, with each model correcting the errors of the previous one; each base model should have high bias and low variance.
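
    A minimal sketch of the contrast, assuming sklearn's BaggingClassifier and AdaBoostClassifier (the model choices here are illustrative, not from the video):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    # Bagging: deep (high-variance, low-bias) trees trained in parallel on bootstrap subsets
    bag = BaggingClassifier(DecisionTreeClassifier(max_depth=None), n_estimators=50, random_state=0)
    # Boosting: shallow (high-bias, low-variance) trees trained sequentially, each correcting the last
    boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50, random_state=0)

    print(cross_val_score(bag, X, y, cv=5).mean())
    print(cross_val_score(boost, X, y, cv=5).mean())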

  • @crazycurlzXD
    @crazycurlzXD 4 years ago +2

    I've been struggling to understand this for quite a few hours now. Finally, got it. Thank you so much!

  • @TheMuktesh89
    @TheMuktesh89 1 year ago +1

    That is very nicely explained. Thank you, Sir.

  • @sharatainapur
    @sharatainapur 2 years ago +7

    Hello Aman Sir, thank you for the great video and the simple explanation.
    Could you please elaborate on how the meta-model is built and used for the testing / real test set?
    Like here, the meta-model uses Logistic Regression, right? How does the logistic regression work to stack the results from the base models?
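
    One way to see it, in a minimal sketch assuming sklearn's StackingClassifier (the base models are illustrative): the base models' out-of-fold predictions become the input features of the logistic regression, which learns how much to trust each base model.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier()), ("svm", SVC())],
        final_estimator=LogisticRegression(max_iter=1000),  # the meta-model
        cv=5,  # out-of-fold base predictions are used to fit the meta-model
    )
    stack.fit(X_train, y_train)
    print(stack.score(X_test, y_test))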

  • @hetal1926
    @hetal1926 3 years ago

    I am new in this field and was trying to understand this concept; I referred to many webpages and watched many videos. You explain very nicely. I got the concept.

  • @ranajaydas8906
    @ranajaydas8906 3 years ago +1

    Thanks a lot... Was struggling with this Stacking approach.... Now it's clear!

  • @davidgao5351
    @davidgao5351 8 months ago

    Great explanation of the concept. Thank you for also showing the Python samples to really bring it home.

  • @sridattu4467
    @sridattu4467 3 years ago

    I beg your pardon... I was struggling with this technique.
    Now it's very clearly understood, and the code got executed!!
    Thanks a lot

  • @tosinlitics949
    @tosinlitics949 1 year ago

    I love that you thoroughly explained the theory before you dove into the code. Great job!

  • @arrafihriday1333
    @arrafihriday1333 2 years ago

    Laudable teaching. Learnt a lot.

  • @reflex2627
    @reflex2627 3 years ago

    Absolutely very good explanation, better than my professor's.

  • @nan8922
    @nan8922 3 years ago

    Wow, fast and clear, thanks.

  • @SandeepSSMishra
    @SandeepSSMishra 8 months ago

    Can you make a separate video on blending, with a detailed example and an implementation without the libraries?

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago

    finished watching

  • @sonalkapuriya4944
    @sonalkapuriya4944 1 year ago

    crystal clear explanation

  • @FarhanAhmed-xq3zx
    @FarhanAhmed-xq3zx 3 years ago +1

    Greatly explained💥👌

  • @bharatbajoria
    @bharatbajoria 3 years ago +1

    Bagging is Bootstrap Aggregation, used primarily to reduce variance; it relies on averaging many learners (in the spirit of the CLT) to do so. Boosting improves on the base learners by learning from the mistakes of the previous model, using homogeneous weak learners; it helps in reducing bias.

  • @harshakrishna8259
    @harshakrishna8259 3 years ago +1

    Thanks a lot bro! ... Helped a lot for one of my projects!!

  • @akshayshenoy7417
    @akshayshenoy7417 3 years ago +1

    Beautiful, man. Got my concepts cleared; you deserve more reach.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      Thanks a lot Akshay. Kindly share in the data science groups you are part of :)

  • @preranatiwary7690
    @preranatiwary7690 4 years ago +1

    Good content

  • @SandeepSSMishra
    @SandeepSSMishra 1 year ago

    Sir, Namaskar. Is the code you did in Python for stacking or for blending? Kindly say.

  • @Mars7822
    @Mars7822 2 years ago

    Nice class

  • @mikohotori4276
    @mikohotori4276 3 years ago

    Thanks for sharing.

  • @pranitflora9482
    @pranitflora9482 1 year ago

    Very well explained. Can you also explain k-fold cross-validation and go in depth on the meta-model?

  • @ujjwala7286
    @ujjwala7286 2 years ago

    Thank you for explaining. Can you suggest which ensemble technique is suitable for a deep learning model on a video classification task?

  • @hansmeiser6078
    @hansmeiser6078 3 years ago

    Simply wonderful!

  • @magesh10mano
    @magesh10mano 3 years ago

    Good Explanation... Thank you

  • @seema5579
    @seema5579 2 years ago

    Thanks for the video, sir. Can I perform stacking between different CNN models, with feature fusion between these models?

  • @samuelpradhan1899
    @samuelpradhan1899 2 months ago

    How do I use stacking regressor models from sklearn and Keras?

  • @saurabhdeokar3791
    @saurabhdeokar3791 3 years ago

    In bagging, we make different subsets of the dataset using row sampling with replacement; we pass each subset to a different model to make predictions, and at the end we combine or aggregate all the models' predictions.
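
    A bare-bones sketch of that recipe with numpy and sklearn (the decision-tree base learner is an illustrative choice):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    rng = np.random.default_rng(0)

    # Train each model on its own bootstrap subset (row sampling with replacement)
    models = []
    for _ in range(25):
        idx = rng.integers(0, len(X), size=len(X))
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

    # Aggregate: majority vote over all models' predictions
    votes = np.stack([m.predict(X) for m in models])
    majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
    print((majority == y).mean())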

  • @ling6701
    @ling6701 1 year ago

    Thank you.

  • @phanikumar3136
    @phanikumar3136 4 years ago

    Bagging is a parallel-computing process: we can choose rows and columns with replacement, and an example is random forest. Boosting is sequential computing, and an example is XGBoost.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Correct. Thank you.

    • @pankajnegi9278
      @pankajnegi9278 4 years ago

      In bagging we take base learner models with high variance and low bias.
      E.g., in random forest we typically take decision trees (fully grown to their max depth, with max_depth=None), as such decision trees are high-variance models.
      The main aim of bagging is to reduce the high variance of the overall/final model.
      Bagging's bootstrap (row sampling and column sampling) and aggregation steps help achieve a low-variance final model.
      Also, every base learner is trained on a sample of the dataset, not the whole dataset, so every base learner learns something unique, different from the other base learners.

  • @vinushan24
    @vinushan24 1 month ago

    Thanks!

  • @tusharsalunkhe7916
    @tusharsalunkhe7916 3 years ago +1

    Thank you, sir, for this lecture.
    I want to know one thing... The data that goes to the meta-model consists of the independent variables and the actual output value (target variable Y), along with the predictions from weak learners like LR, SVM, NN... so how does the meta-model use the predictions from the weak learners to produce the final prediction?
    Does the meta-model treat the predictions from the weak learners as additional independent variables (along with the existing independent variables) and the target variable as the dependent variable, and give the final prediction?
    Please help.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Good question, Tushar.
      The meta-model will take the predictions from the weak learners as features (no original features).

    • @tusharsalunkhe7916
      @tusharsalunkhe7916 3 years ago

      @@UnfoldDataScience Thanks for the reply. So the predictions from the weak learners are taken as the independent variables and the original target variable as the dependent variable... right?
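
      A minimal sketch of exactly that construction, using sklearn's cross_val_predict (the base models are illustrative):

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.model_selection import cross_val_predict
      from sklearn.svm import SVC
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      X, y = load_iris(return_X_y=True)
      base_models = [SVC(probability=True), RandomForestClassifier(random_state=0)]

      # Out-of-fold predictions from each weak learner become the independent
      # variables of the meta-model; y remains the dependent variable.
      meta_features = np.hstack([
          cross_val_predict(m, X, y, cv=5, method="predict_proba") for m in base_models
      ])
      meta_model = LogisticRegression(max_iter=1000).fit(meta_features, y)
      print(meta_model.score(meta_features, y))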

  • @sangrammishra4396
    @sangrammishra4396 3 years ago

    Love your teaching, sir..

  • @vaddadisairahul2956
    @vaddadisairahul2956 3 years ago

    Bagging helps reduce the variance caused by overfitting in decision trees; to further reduce bias, boosting is used. Hence, ultimately we achieve a model with low bias and low variance.

  • @anuragchandnani8037
    @anuragchandnani8037 4 years ago +1

    thank you

  • @Neerajkumar-xl9kx
    @Neerajkumar-xl9kx 3 years ago

    Thanks a lot, I am a beginner.

  • @Kumarsashi-qy8xh
    @Kumarsashi-qy8xh 4 years ago

    Nice subject

  • @chitramdasgupta3122
    @chitramdasgupta3122 4 years ago

    Thank you! Keep making these videos.

  • @christian.belmar
    @christian.belmar 1 year ago

    excellent

  • @orchidchetiaphukan4658
    @orchidchetiaphukan4658 4 years ago

    Clearly Explained.

  • @MegaBoss1980
    @MegaBoss1980 3 years ago

    Can we do a level-2 meta-model? Also, can we insert new training features into the meta-model?
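
    For the second question, sklearn's StackingClassifier has a passthrough flag that feeds the original features to the meta-model alongside the base predictions, and a level-2 stack can be built by using one StackingClassifier as the final_estimator of another. A short sketch of the passthrough idea (model choices are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier()), ("svm", SVC())],
        final_estimator=LogisticRegression(max_iter=1000),
        passthrough=True,  # meta-model sees base predictions AND the original features
    )
    print(stack.fit(X, y).score(X, y))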

  • @eduardocasanova7301
    @eduardocasanova7301 2 years ago +1

    Hi Aman, thanks for your explanation! I have a question though - are regularization and ensembling the same? In the decision tree case we use the same techniques of bagging and boosting, so if I'm regularizing, am I implicitly ensembling, and vice versa?
    Thank you!

  • @tolifeandlearning3919
    @tolifeandlearning3919 4 years ago

    Good Explanation.

  • @sonambarsainya3747
    @sonambarsainya3747 2 years ago +1

    Hi Aman, thanks for your explanation. I have a question regarding deep learning models: can we stack YOLOv4 (I converted the .weight file into a .h5 file) and other CNN models like InceptionResnetV2+LSTM into one ensemble model for different classification tasks with different data?

  • @talaasoudalial-bimany6605
    @talaasoudalial-bimany6605 3 years ago

    Thank you very much.
    I just want to understand: how many approaches are there when implementing stacked ensemble learning?
    I mean, when we combine the base learners with the meta-learner.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      There can be many ways of implementing it, depending on how you write the code; the internal logic, however, remains the same.

  • @jyotireddy341
    @jyotireddy341 3 years ago

    Hi. Thank you so much for the video. Can you please guide me on how to merge 2 BERT models together? Thanks for the help!

  • @sabeenak7159
    @sabeenak7159 3 years ago

    Thank you sir

  • @manjunambiar4954
    @manjunambiar4954 1 year ago

    While executing the for loop, there is an error message: "TypeError: KNN not iterable". How do I solve this?

  • @svltechnologiespvtltd9181
    @svltechnologiespvtltd9181 3 years ago

    Nice explanation. How can we do testing with the test dataset?

  • @sadhnarai8757
    @sadhnarai8757 4 years ago

    Very good Aman

  • @studywithme4275
    @studywithme4275 2 years ago

    thanks

  • @sagaradoshi
    @sagaradoshi 2 years ago

    Hi Aman, thanks for the video. I have one question:
    When we have finished training the stacked model and we now have a test sample, will the test sample go through all the learners + the meta-model (i.e., SVM, random forest, Gaussian, and the logistic regression meta-model), or will we feed the test sample only to the meta-model (i.e., logistic regression in our case)?

    • @stemicalengineer
      @stemicalengineer 1 year ago

      Also have this question in mind. Why is no one answering this?
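
      For anyone else wondering: the test sample must first go through all the base learners, because their predictions are the meta-model's input. A sketch of that equivalence, assuming sklearn's StackingClassifier (model choices are illustrative):

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.ensemble import StackingClassifier, RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      X, y = load_iris(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      stack = StackingClassifier(
          estimators=[("svm", SVC(probability=True)), ("rf", RandomForestClassifier(random_state=0))],
          final_estimator=LogisticRegression(max_iter=1000),
      ).fit(X_tr, y_tr)

      # stack.predict first runs X_te through every base learner...
      base_preds = stack.transform(X_te)
      # ...and only then hands those predictions to the meta-model
      manual = stack.final_estimator_.predict(base_preds)
      print(np.array_equal(manual, stack.predict(X_te)))  # True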

  • @MrXRes
    @MrXRes 4 years ago

    Thank you for the video.
    Can this approach be useful for semantic segmentation purposes?
    For example, the base models are UNet, Deeplab and FCN,
    and the meta-classifier is an FCN.
    Is it going to get a better result?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Yes, we can try that, I am not 100% sure it will work though.

  • @raphaeldayan
    @raphaeldayan 4 years ago

    Thanks a lot!

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago +1

    I am unable to locate this .ipynb file in your Google Drive. Please guide me.

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      I think this file is missing. I will try to find it and upload it; however, it may be on my old laptop and difficult to recover.

  • @MAK335
    @MAK335 4 years ago

    Sir, which ensemble technique should we choose, and when should we choose it? How do we decide?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Bagging and boosting are good. The decision will depend on available resources, data size, etc.

    • @MAK335
      @MAK335 4 years ago

      @@UnfoldDataScience Nobody has made a single video on this on YouTube. You should definitely make a video on this topic!!!

  • @RinP3103
    @RinP3103 4 years ago

    Hi.. I am very interested in ML concepts and am trying to build a career in this, but there are lots of mathematical derivations when learning any new concept, and also so many libraries; it's quite difficult to get acquainted with all of these. Can you please guide me on how to actually learn all this so that it can be understood well?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Hi Rinky,
      Please watch my machine learning playlist once. Tell me if it boosts your confidence:
      ua-cam.com/video/8PFt4Jin7B0/v-deo.html

  • @nasali5116
    @nasali5116 3 years ago

    Is it possible that a blending model sometimes has lower accuracy than the initial models?

  • @jagannathdas3991
    @jagannathdas3991 3 years ago +1

    Sir, please explain stacking a bit more clearly... The blending part was good..🙏🙏

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago +1

    finished coding

  • @FlorieAnnRibucan
    @FlorieAnnRibucan 3 years ago

    Good day! May I request a link for a copy of your code, sir? Thank you

  • @ASNPersonal
    @ASNPersonal 3 years ago +2

    Getting an error with this code:
    # creating stacking classifier with above models
    stackingclf = StackingClassifier(classifiers=[myclf1, myclf2, myclf3], meta_classifier=mylr)
    The code below runs without the error:
    # creating stacking classifier with above models
    stackingclf = StackingClassifier(estimators=[myclf1, myclf2, myclf3], final_estimator=mylr)

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +2

      Maybe that argument ("meta_classifier") is not accepted due to a version issue.
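
      For context: the classifiers/meta_classifier signature matches mlxtend's StackingClassifier, while estimators/final_estimator belongs to sklearn's. Note that sklearn additionally expects (name, estimator) tuples, so a fuller sklearn call would look like the sketch below (the base models are illustrative stand-ins for the video's myclf1, myclf2, myclf3):

      from sklearn.ensemble import StackingClassifier, RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.linear_model import LogisticRegression

      # illustrative stand-ins for the video's base models
      myclf1, myclf2, myclf3 = KNeighborsClassifier(), RandomForestClassifier(), GaussianNB()
      mylr = LogisticRegression()

      # sklearn wants (name, estimator) tuples, unlike mlxtend's bare list
      stackingclf = StackingClassifier(
          estimators=[("knn", myclf1), ("rf", myclf2), ("gnb", myclf3)],
          final_estimator=mylr,
      )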

    • @gauravkanu4486
      @gauravkanu4486 3 years ago +1

      Thanks a lot @anugati, saved my time!

  • @Monty-hl1rb
    @Monty-hl1rb 1 year ago

    Sir.. please explain briefly... Not understanding 😟😟

  • @shivanshsingh5555
    @shivanshsingh5555 4 years ago

    @5:48 you said "and this training data goes to another model, called meta model."
    I could not follow what you were pointing at and what was meant there. These details are very important to me, so I go into the depth of each and every word along with the action. Could you kindly sort out my query?
    Also, what is the training data here after dividing the 75 records 80-20%?
    If it is the 80% (as I understand), then why didn't you mention it... I'm confused.

    • @mohammadmoslemuddin7274
      @mohammadmoslemuddin7274 3 years ago +3

      I can share my understanding. First, we divide the 100 examples into 75 training and 25 test examples. Then we divide the 75 training examples into 80% training and 20% validation, i.e., 60 training and 15 validation examples. After that, we train the different base models on these 60 training examples and make predictions on the 15 validation examples. The predictions on those 15 examples become the input to our meta-model. Now we train the meta-model and test our accuracy on the initial 25 test examples. In short, this is blending (see the sketch below). When we instead follow the K-fold approach on the 75 training examples, rotating which part serves as the validation fold, it is called stacking. Hope it helps. Happy learning.
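
      A minimal sketch of that blending recipe (split sizes follow the example above; the model choices are illustrative):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=100, random_state=0)

      # 100 -> 75 train / 25 test, then 75 -> 60 train / 15 validation (80/20)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=25, random_state=0)
      X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.2, random_state=0)

      base = [SVC(probability=True), RandomForestClassifier(random_state=0)]
      for m in base:
          m.fit(X_fit, y_fit)  # base models see only the 60 training examples

      # predictions on the 15 validation examples train the meta-model
      meta_X = np.hstack([m.predict_proba(X_val) for m in base])
      meta = LogisticRegression().fit(meta_X, y_val)

      # at test time, the 25 held-out examples flow through the base models, then the meta-model
      test_X = np.hstack([m.predict_proba(X_te) for m in base])
      print(meta.score(test_X, y_te))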

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +2

      Thanks, Mohammad and Shivansh, for the discussion.

  • @riyosantoyosep1749
    @riyosantoyosep1749 3 years ago +1

    Please add subtitles.

  • @MAK335
    @MAK335 4 years ago

    Iris is not binary classification; it has more than 2 classes in the target variable.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Yes, correct, it has three categories. Did I say 2? Thanks for pointing that out.

    • @MAK335
      @MAK335 4 years ago

      @@UnfoldDataScience You didn't say 2, but you said Iris is a binary classification dataset.....

  • @zainabfatima9932
    @zainabfatima9932 2 years ago +1

    Code for this one?

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      I'm searching for it; however, it may be on my old laptop. I will try to find it and upload it.

  • @SESHUNITR
    @SESHUNITR 4 years ago

    Good job