Machine Learning Mastery
Implementing Bayesian Optimization - Step by Step Coding - Part 2
This is a continuation of the Part 1 video. I take you through all the building blocks for implementing Bayesian Optimization step by step, including the key coding blocks.
Views: 273

Videos

Implementing Bayesian Optimization - Step by Step Coding - Part 1
Views: 183 · 2 months ago
In this video, I take you through all the building blocks for implementing Bayesian Optimization step by step, including the key coding blocks.
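For readers following along, here is a minimal sketch of the kind of building blocks such an implementation uses: a Gaussian-process surrogate plus an Upper Confidence Bound acquisition driving the sampling loop. The toy objective, kernel choice, and kappa value are illustrative assumptions, not the exact code from the video.

```python
# Minimal sketch of a Bayesian Optimization loop: GP surrogate + UCB acquisition.
# The toy objective and all constants are illustrative, not from the video.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy 1-D objective we pretend is expensive to evaluate
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

def ucb(mu, sigma, kappa=2.0):
    # Upper Confidence Bound: exploit (mu) + explore (kappa * sigma)
    return mu + kappa * sigma

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))            # a few initial random samples
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)  # candidate points to score

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):                            # BO iterations
    gp.fit(X, y)                               # refit the surrogate
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(ucb(mu, sigma))]   # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmax(y)], "best f(x):", y.max())
```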
Kalman Filter Simplified - Algorithm explained with Examples
Views: 128 · 2 months ago
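As a small companion to the explanation, here is a minimal 1-D Kalman filter sketch (predict and update steps for a constant signal). The noise variances and the simulated measurements are made up for illustration and are not taken from the video.

```python
# Minimal 1-D Kalman filter (constant state, noisy measurements); values illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_value = 5.0
measurements = true_value + rng.normal(0, 1.0, size=50)  # noisy sensor readings

x_est, p_est = 0.0, 1.0   # initial state estimate and its variance
q, r = 1e-4, 1.0          # process noise and measurement noise variances

for z in measurements:
    # Predict: constant model, so only the uncertainty grows
    x_pred, p_pred = x_est, p_est + q
    # Update: blend prediction and measurement using the Kalman gain
    k = p_pred / (p_pred + r)
    x_est = x_pred + k * (z - x_pred)
    p_est = (1 - k) * p_pred

print("final estimate:", round(x_est, 3), "true value:", true_value)
```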
How to Manage Train vs Test Divergences
Views: 29 · 2 months ago
In practical applications, divergences between the training and test sets are inevitable. In this video, I explain how to manage such divergences. You can score them using the methods explained in my other videos.
Fixing Model Probability - Why this matters? How to do it?
Views: 116 · 2 months ago
Many models, including Support Vector Machines, Decision Trees, Random Forests, XGBoost, CatBoost, and LightGBM, provide a "predicted probability" that is usually INCORRECT (poorly calibrated). There is a statistical reason why, and I explain a sound approach to correcting it.
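One standard way to apply such a correction is probability calibration, for example with scikit-learn's CalibratedClassifierCV (Platt scaling or isotonic regression). The sketch below uses a synthetic dataset and isotonic calibration as illustrative assumptions; it shows the mechanics rather than the exact recipe from the video.

```python
# Calibrating predicted probabilities with scikit-learn (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Uncalibrated model
raw = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Wrap the model and learn a mapping from raw scores to calibrated probabilities
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=200, random_state=0),
    method="isotonic", cv=5,
).fit(X_train, y_train)

# Lower Brier score = better-calibrated probabilities
print("raw       :", brier_score_loss(y_test, raw.predict_proba(X_test)[:, 1]))
print("calibrated:", brier_score_loss(y_test, calibrated.predict_proba(X_test)[:, 1]))
```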
Nested Cross Validation - Algorithm Explained
Views: 142 · 2 months ago
Learn when to use Nested Cross-Validation and how it adds value in understanding the bias-variance trade-off.
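A minimal sketch of nested cross-validation, assuming scikit-learn's GridSearchCV for the inner (tuning) loop and cross_val_score for the outer (evaluation) loop; the dataset and parameter grid are illustrative.

```python
# Nested cross-validation sketch: the inner loop tunes hyperparameters,
# the outer loop estimates how well the whole tuning procedure generalizes.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)

search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid={"max_depth": [2, 4, 6, None]},
                      cv=inner_cv)

scores = cross_val_score(search, X, y, cv=outer_cv)
print("outer-fold scores:", scores.round(3))
print("mean +/- std:", scores.mean().round(3), "+/-", scores.std().round(3))
```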
What is KFold Cross Validation? When NOT to use it? How to use it with modifications for your data
Views: 450 · 10 months ago
KFold cross-validation plays a very important role in understanding the variance in your model. Most people take it for granted and don't use its full potential. I explain how to use it right, how to read the variance across folds, and when NOT to use vanilla KFold but rather its extensions as implemented in SKLearn. My AI and Generative AI Courses are detailed here: ai.generat...
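A small sketch of reading the fold-to-fold variance with plain KFold versus StratifiedKFold (one of the scikit-learn extensions mentioned above); the synthetic imbalanced dataset and the model are assumptions for illustration.

```python
# Sketch: compare fold-to-fold score spread for vanilla KFold vs StratifiedKFold.
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

# Imbalanced target, where plain KFold can produce uneven folds
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
model = LogisticRegression(max_iter=1000)

for cv in (KFold(n_splits=5, shuffle=True, random_state=0),
           StratifiedKFold(n_splits=5, shuffle=True, random_state=0)):
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    # A large spread across folds is the "variance play" to watch for
    spread = (scores.max() - scores.min()).round(3)
    print(type(cv).__name__, scores.round(3), "spread:", spread)
```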
How to really find if my Test Data is diverging from my Training dataset? This WORKS!
Views: 539 · 10 months ago
Adversarial Validation is a practical method for finding out whether the test set (seen in production) has started to diverge from the training set. We detail the scoring function and how you can implement this. Very effective for mixed tabular data use cases. My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FREE invite to our classes, fill in the link below: invite.generativeminds.co
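A minimal sketch of adversarial validation under its usual formulation: label training rows 0 and test rows 1, train a classifier to separate them, and read the cross-validated AUC as the divergence score (near 0.5 means similar distributions, near 1.0 means drift). The synthetic data and the random-forest choice are illustrative assumptions.

```python
# Adversarial validation sketch: how well can a classifier tell train from test?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_auc(X_train, X_test):
    X = np.vstack([X_train, X_test])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 5))
X_test_same = rng.normal(0.0, 1.0, size=(500, 5))    # same distribution
X_test_drift = rng.normal(0.5, 1.3, size=(500, 5))   # shifted distribution

print("no drift   AUC:", round(adversarial_auc(X_train, X_test_same), 3))
print("with drift AUC:", round(adversarial_auc(X_train, X_test_drift), 3))
```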
Use the Central Limit Theorem to turn any distribution Normal? Really?
Views: 225 · 11 months ago
The Central Limit Theorem describes how the distribution of sample means approaches a Normal distribution for large samples. We list exactly what the theorem states and how empirically non-Gaussian distributions can be handled using it in our applications. My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FREE invite to our classes, fill below link: invite.generativeminds.co
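A quick empirical check of the theorem, assuming nothing beyond its standard statement: means of samples drawn from a heavily skewed distribution become increasingly Gaussian as the sample size grows. The population, sample sizes, and skewness check are illustrative.

```python
# Empirical CLT check: sample means of a skewed (exponential) population
# look increasingly Gaussian as the per-sample size n grows.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=1_000_000)   # clearly non-Gaussian

for n in (2, 30, 500):
    sample_means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    # Skewness shrinks toward 0 (the Gaussian value) as n grows
    skew = ((sample_means - sample_means.mean()) ** 3).mean() / sample_means.std() ** 3
    print(f"n={n:4d}  mean={sample_means.mean():.3f}  skewness={skew:.3f}")
```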
How Bootstrapping helps with scoring your Train Test Divergences?
Views: 206 · 11 months ago
How do you score train/test divergences? Bootstrapping is one simple approach to help you get a grip on this topic. Relying on random sampling, it is statistically valid and practically a good reference point to use alongside adversarial scoring techniques. My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FREE invite to our classes, fill below link: inv...
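A minimal bootstrap sketch under a pooled "no divergence" assumption: resample the combined data and ask whether the observed train/test difference in a summary statistic is larger than resampling noise alone would explain. The feature, the injected shift, and the number of resamples are illustrative.

```python
# Bootstrap sketch: is the observed train-vs-test difference in a statistic
# (here, the mean of one feature) larger than resampling noise would explain?
import numpy as np

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=2000)
test_feature = rng.normal(0.2, 1.0, size=500)      # slightly shifted on purpose

observed_diff = test_feature.mean() - train_feature.mean()

pooled = np.concatenate([train_feature, test_feature])
boot_diffs = []
for _ in range(5000):
    # Resample under the "both sets come from one pool" assumption
    sample = rng.choice(pooled, size=pooled.size, replace=True)
    boot_diffs.append(sample[:test_feature.size].mean() -
                      sample[test_feature.size:].mean())

p_value = np.mean(np.abs(boot_diffs) >= abs(observed_diff))
print(f"observed diff={observed_diff:.3f}, bootstrap p-value={p_value:.3f}")
```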
How I built Generative AI for Retail in 60 Days
Views: 528 · 1 year ago
Below is the link to a FREE interactive video where I explain the step-by-step path to building your own Generative AI for your business within 60 days. Just follow the steps and you will get RESULTS!! WATCH it FREE here: how-to-llm.generativeminds.co/ My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FREE invite to our classes, fill below link: invite.genera...
Bayesian Optimization - Math and Algorithm Explained
Views: 54K · 3 years ago
Learn the algorithm behind Bayesian optimization, the Surrogate Function calculations, and the Acquisition Function (Upper Confidence Bound). Visualize a from-scratch implementation of how the approximation works iteratively. Finally, understand how to use the scikit-optimize package to do hyperparameter tuning with Bayesian optimization. My AI and Generative AI Courses are detailed here: ai.generativeminds.co ...
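A minimal hyperparameter-tuning sketch with scikit-optimize's gp_minimize; the model, search space, and number of calls are illustrative assumptions rather than the exact setup used in the video.

```python
# Hyperparameter tuning with scikit-optimize's gp_minimize (illustrative setup).
from skopt import gp_minimize
from skopt.space import Integer, Real
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
space = [Integer(1, 5, name="max_depth"),
         Real(0.01, 0.3, prior="log-uniform", name="learning_rate")]

def objective(params):
    max_depth, learning_rate = params
    model = GradientBoostingClassifier(max_depth=max_depth,
                                       learning_rate=learning_rate,
                                       random_state=0)
    # gp_minimize minimizes, so return the negative CV accuracy
    return -cross_val_score(model, X, y, cv=3).mean()

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best params:", result.x, "best CV accuracy:", -result.fun)
```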
Decision Tree Hyperparam Tuning
Views: 3.8K · 3 years ago
Learn how to use a Training and a Validation dataset to find the optimum values for the hyperparameters of your Decision Tree. Demonstrated for the Max Tree Depth and Min Samples per Leaf hyperparameters. My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FREE invite to our classes, fill below link: invite.generativeminds.co
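A minimal sketch of the idea, assuming a scikit-learn decision tree and a single held-out validation set; the grid of max_depth / min_samples_leaf values and the dataset are illustrative.

```python
# Tuning max_depth / min_samples_leaf against a held-out validation set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best = None
for max_depth in (2, 3, 5, 8, None):
    for min_samples_leaf in (1, 5, 20, 50):
        tree = DecisionTreeClassifier(max_depth=max_depth,
                                      min_samples_leaf=min_samples_leaf,
                                      random_state=0).fit(X_train, y_train)
        score = tree.score(X_val, y_val)   # validation accuracy
        if best is None or score > best[0]:
            best = (score, max_depth, min_samples_leaf)

print("best validation accuracy %.3f with max_depth=%s, min_samples_leaf=%s" % best)
```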
Decision Tree Cost Pruning - Hands On
Views: 2.4K · 3 years ago
In this hands-on video you will learn how to find the right cost-pruning alpha parameter for your decision tree. My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FREE invite to our classes, fill below link: invite.generativeminds.co
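A minimal sketch using scikit-learn's cost_complexity_pruning_path to generate candidate alphas and a held-out validation set to pick one; the dataset is illustrative.

```python
# Finding a reasonable ccp_alpha from the cost-complexity pruning path.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Candidate alphas come from the pruning path of an unpruned tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

best_alpha, best_score = 0.0, 0.0
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = tree.score(X_val, y_val)
    if score > best_score:
        best_alpha, best_score = alpha, score

print(f"best ccp_alpha={best_alpha:.5f}, validation accuracy={best_score:.3f}")
```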
Gradient Boosting Hands-On Step by Step from Scratch
Views: 2.8K · 3 years ago
Learn how to write the gradient boosting tree algorithm from scratch. Learn how the loss function is derived and applied in Python code as part of your boosting iterations. Learn a trick to present your charts with interpretable categorical values rather than encoded numerical values. (This is done a lot in practice.) My AI and Generative AI Courses are detailed here: ai.generativeminds.co To get a FR...
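A minimal from-scratch boosting loop for squared loss, where each small tree fits the residuals (the negative gradient) of the current prediction; the dataset, learning rate, and tree depth are illustrative assumptions.

```python
# Gradient boosting from scratch for squared loss: each tree fits the
# negative gradient (the residuals) of the current ensemble prediction.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

learning_rate, n_rounds = 0.1, 100
prediction = np.full_like(y, y.mean(), dtype=float)   # start from the mean
trees = []

for _ in range(n_rounds):
    residuals = y - prediction                         # negative gradient of squared loss
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)      # take a small step
    trees.append(tree)

print("RMSE after boosting:", round(np.sqrt(np.mean((y - prediction) ** 2)), 3))
```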
Hyperparameters - Introduction & Search
Views: 5K · 3 years ago
Feature Importance Formulation of Decision Trees
Views: 6K · 3 years ago
How to Regularize with Dropouts | Deep Learning Hands On
Views: 639 · 3 years ago
How to Regularize with Weight & Activation Regularization | Deep Learning
Views: 598 · 3 years ago
How to Fix Vanishing & Exploding Gradient Problems | Deep Learning
Views: 3K · 3 years ago
How to Accelerate training with Batch Normalization? | Deep Learning
Views: 798 · 4 years ago
What is a Perceptron Learning Algorithm - Step By Step Clearly Explained using Python
Views: 22K · 4 years ago
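A minimal from-scratch sketch of the perceptron learning rule on linearly separable toy data; the data, learning rate, and epoch cap are illustrative.

```python
# Perceptron learning rule from scratch: update weights only on misclassified points.
import numpy as np

rng = np.random.default_rng(0)
# Linearly separable toy data, labels in {-1, +1}
X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:      # misclassified (or on the boundary)
            w += lr * yi * xi           # nudge the separating line toward xi
            b += lr * yi
            errors += 1
    if errors == 0:                     # converged: every point classified correctly
        break

print(f"stopped after {epoch + 1} epochs, weights={w.round(3)}, bias={round(b, 3)}")
```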
How to Tune Learning Rate for your Architecture? | Deep Learning
Views: 1.6K · 4 years ago
How to Find the Right number of Layers/Neurons for your Neural Network?
Views: 13K · 4 years ago
How to Configure and Tune Batch Size for your Neural Network?
Views: 2.8K · 4 years ago
Back Propagation Math Step By Step Detailed with an Example | Deep Learning
Views: 2.7K · 4 years ago
Back Propagation Concept Math Step By Step for a Two Layer Feed Forward Network
Views: 467 · 4 years ago
How Gradient Descent finds the weights? Gradient Descent Math Step By Step with Example | Neural Net
Views: 12K · 4 years ago
How to use Gaussian Mixture Models, EM algorithm for Clustering? | Machine Learning Step By Step
Views: 19K · 4 years ago
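A minimal clustering sketch using scikit-learn's GaussianMixture, which runs the EM algorithm internally and exposes the soft assignments (responsibilities); the blob data and the number of components are illustrative.

```python
# GMM clustering sketch: GaussianMixture fits via EM and gives soft assignments.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=600, centers=3, cluster_std=[1.0, 2.0, 0.5], random_state=0)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
hard_labels = gmm.predict(X)        # most likely component per point
soft_probs = gmm.predict_proba(X)   # EM "responsibilities" per component

print("component means:\n", gmm.means_.round(2))
print("first point responsibilities:", soft_probs[0].round(3))
```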
Principal Component Analysis (PCA) Maths Explained with Implementation from Scratch
Views: 674 · 4 years ago

COMMENTS

  • @ositaonyejekwe · 1 month ago

    The image for undersampling and oversampling makes no sense. Can you explain what the orange part and the blue part are??? So confusing...

  • @taiwoiromini6016 · 2 months ago

    Where or how do you get the initial 50 data points?

  • @tairjan601 · 2 months ago

    For those who want to understand more about how we get the Lagrangian and the Dual Formulation: Appendix C, "SVM Dual Problem", of the "Hands-On Machine Learning" book might be helpful. It explains the Lagrangian and some of the maths.

  • @sebastianrodriguez672 · 2 months ago

    So much value. Thank you

  • @ivandanendra9999 · 3 months ago

    "Would you be able to provide the references for this material, please? Thank you very much!"

  • @eduardocesargarridomerchan5326 · 3 months ago

    A tutorial on Bayesian optimization in Spanish, in case anyone is interested: ua-cam.com/video/nNRGOfneMdA/v-deo.html

  • @backbench3rs659 · 3 months ago

    Excellent way to teach❤

  • @tairjan601 · 3 months ago

    man, your videos are amazing. i literally just remembered how great this video is but couldn't find your channel for almost an hour. glad i did. honestly one of my favourites about KFold

  • @kailashmehta7568 · 3 months ago

    Best explanation on GMM algorithm, thanks !

  • @AbhinavKumar-ip5fb · 3 months ago

    Awesome and well explained

  • @sumangorkhali5748 · 3 months ago

    Best explained... millions of thanks

  • @mshika2150 · 4 months ago

    can i get the code ?

  • @khemchand494 · 5 months ago

    Very well explained. I got the complete intuition of GMMs in a go.

    • @machinelearningmastery · 3 months ago

      Nice! Have you thought about how the equations change when it is no longer a normal distribution but a t-distribution? (The t-distribution is practically more robust.)

  • @vrhstpso · 5 months ago

    😀

  • @sm-pz8er · 7 months ago

    Very well simplified explanation. Thank you

  • @prabhjot-ud6ru · 7 months ago

    best ever explanation for GMM. Thanks a lot for such a helpful video.

  • @benheller472 · 8 months ago

    Hello, I’ve been watching your videos. Thank you! They are great. Is there a way to contact you directly?

  • @9951468414 · 8 months ago

    Which reference book do you use?

  • @9951468414 · 8 months ago

    Hello there, can you share the material notes?

  • @VIVEK_InLoop · 9 months ago

    Nice sir

  • @siliconvalley_mr_li · 10 months ago

    well done in <15 min!!! highly recommended

  • @DM-py7pj · 11 months ago

    is the end of the video missing?

  • @nashtashasaint-pier7404 · 11 months ago

    This seems to be correct if and only if you assume that your three models are independent. This is fine, but I think it does not say much in practical cases, as it is very unlikely that you will have 3 base learners that are not correlated. In general, it seems pretty complicated to come up with a "comprehensive" formula that takes into account the respective covariances of these three models with each other and expresses the probabilistic advantage ensembling has.

    • @machinelearningmastery · 11 months ago

      The formulation is the premise for why variance theoretically reduces when ensembling is in place, compared to independent models. From a practical standpoint, it works well, which is why random forest is such a star, with so many hyperparameters to ensure you get trees that are as different as possible across the hundreds of features faced in real applications.

  • @VictorTimely-9 · 11 months ago

    More on Statistics.

  • @wenkuchen · 11 months ago

    Very clear explanation of decision tree feature importance, thanks

  • @meha1233 · 11 months ago

    You should mention the normalization method. I killed myself trying to find out how to normalize those numbers

    • @machinelearningmastery · 11 months ago

      Which normalization would you like to see? The weight computation in each iteration is normalized. Could you clarify?

  • @countrylifevlog524 · 11 months ago

    can you provide these slides

  • @tomryan7679 · 1 year ago

    @machinelearningmaster Great video, thanks! Could you please share the dataset used so that we can replicate this?

  • @namanjha4964 · 1 year ago

    Thanks a lot for the video

  • @saleemun8842 · 1 year ago

    by far the clearest explanation of bayesian optimization, great work, thanks man!

  • @Xavier-Ma · 1 year ago

    Wonderful explanation! Thanks, professor.

  • @YuekselG · 1 year ago

    Is there a mistake at 9:10? There is one f(x) too many, I think. It should be N(f(x_1), ..., f(x_n) | 0, C*) / N(f(x_1), ..., f(x_n) | 0, C). Can anyone confirm this? ty

  • @syedtalhaabidalishah961 · 1 year ago

    what a video!!! simple and straight forward

  • @Goop3 · 1 year ago

    Very intuitive explanation!! Thank you so much! I found this gem of a channel today!

  • @hosseindahaee2886 · 1 year ago

    Thanks, but there is a typo: for y = -1 it should be w^T x + b <= -1, not w^T x + b <= 1

  • @gvdkamdar · 1 year ago

    This entire series is one of the most comprehensive explanations I have found for SVMs. Extremely grateful for it

  • @agc444 · 1 year ago

    Wonderful video, many thanks. Perhaps it would be nice if you made the code available for us learners to play with. Thanks.

  • @saremish · 1 year ago

    Very clear and informative. Thanks!

  • @hatemmohamed8387 · 1 year ago

    Is there any repo containing the code for the entire playlist?

  • @mahdiyehbasereh · 1 year ago

    Why don't we inherit from the keras.Model class? Thanks a lot for your tutorials

    • @machinelearningmastery · 11 months ago

      Yes, you can do that and make it easier to use in multiple places.

  • @ywbc1217 · 1 year ago

    extremely not good explanations

  • @dhanushka5 · 1 year ago

    Thanks

  • @Ruhgtfo · 1 year ago

    Best explanation find the most, thank-you

  • @DilipKumar-dc2rx · 1 year ago

    You taught better than my instructor 🙂

  • @farhaddotita8855 · 1 year ago

    Thanks so much, the best explanation of XGBoost I've seen so far; most people don't care about the math intuition!

  • @JLBorloo · 1 year ago

    Good stuff but consider sharing the Notebooks in the future

  • @chinmayb172 · 1 year ago

    Can you please tell me if I have 10 classes of training data, what number of epochs should I use?

    • @machinelearningmastery · 1 year ago

      In general, I recommend setting epochs to a very large value, say 50,000, and then setting up early-exit logic as part of training. This works best in most cases, since the training fit will automatically stop once convergence has happened. Hope that helps. (A minimal sketch of this setup follows below.)
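A minimal sketch of the setup described in this reply, assuming a Keras model; the architecture, synthetic data, and patience value are illustrative assumptions, not code from the video.

```python
# "Large epochs + early exit" sketch: early stopping ends training long before
# the nominal 50,000 epochs. Model, data, and patience are illustrative.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop as soon as the validation loss stops improving
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                           restore_best_weights=True)
history = model.fit(X, y, validation_split=0.2, epochs=50_000,
                    batch_size=32, callbacks=[early_stop], verbose=0)
print("stopped after", len(history.history["loss"]), "epochs")
```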

  • @fardian6818 · 1 year ago

    I am a silent internet user; what I usually do when I like some content is just press the like button and save the link in a txt file, but this time is an exception: your content is very simple and exactly what I'm looking for. I'm writing you a comment as the first commenter on this video 😀 You have a new subscriber now. Keep up the good work.

  • @isultan · 1 year ago

    Wow!!! Excellent lecture!!

  • @mikehawk4583 · 1 year ago

    Why do you add the mean of the predicted points back to the predicted points?

    • @machinelearningmastery · 1 year ago

      Let's see if we can relate it to how humans learn. Let's say we are in a forest, searching for trails of human footprints to get out of it. Every time we find a footprint, we validate and learn about the surroundings, vegetation, terrain, etc. Over a period of time we learn what leads to the exit and what doesn't. That is precisely the idea here. Hope that helps.

    • @mikehawk4583 · 1 year ago

      @@machinelearningmastery I'm sorry, but I still don't get it. Could you explain it with more math? What I don't get is: after predicting a mean (mu), why do we need to add omega? What does omega do here?