Gaussian Mixture Models - The Math of Intelligence (Week 7)

  • Published 28 Sep 2024

COMMENTS • 242

  • @jericklee8071 · 5 years ago · +4

    From a muddy blur to crystal clear in 30 min, thank you very much for this video Siraj

  • @alinazari6563 · 4 years ago · +1

    I love how passionate you are about this

  • @jinitgandhi1197 · 4 years ago · +3

    Suggestion at 6:45: the y values aren't the probabilities of the x values; intuitively, the probability of any single point under the Gaussian is 0.

  • @tomhas4442 · 3 years ago · +23

    3:44 Intro, Gaussian Distribution, Probability Density Function (PDF)
    7:38 GMM Intro
    9:08 Covariance matrix
    10:15 GMM Definition, K Gaussians
    11:30 How to apply GMM for classification
    12:30 Problem statement, Fitting a GMM model, Maximum Likelihood Estimate (MLE)
    13:58 Similarity to Kmeans clustering algorithm
    16:13 Expectation maximization (EM) algorithm and difference to Gradient Descent
    18:15 When to apply GMM, anomaly detection, clustering, object tracking
    19:30 Coding example with Python
    25:10 EM algorithm workflow in practice, Log Likelihood
    27:54 EM algorithm visual / walkthrough
    36:30 Summary
    great video, many thanks :)

  • @pandawendao · 7 years ago · +10

    The iterate function is empty, which makes the current code completely random; it should have "mix.Mstep(mix.Estep())" inside that function.

    • @Aureole62 · 5 years ago · +4

      Like he understands that
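
    For anyone patching the notebook, here is a self-contained sketch of the structure being discussed, assuming the class exposes Estep/Mstep/iterate and a loglike attribute as the comments describe (names mirror the comments, not a verified copy of the repo):

        from math import sqrt, log, pi, exp
        import random

        class Gaussian:
            """Univariate normal with a pdf method."""
            def __init__(self, mu, sigma):
                self.mu, self.sigma = mu, sigma
            def pdf(self, x):
                z = (x - self.mu) / self.sigma
                return exp(-0.5 * z * z) / (abs(self.sigma) * sqrt(2 * pi))

        class GaussianMixture:
            """Two-component 1-D mixture fitted by EM."""
            def __init__(self, data):
                self.data = data
                self.one = Gaussian(random.choice(data), 1.0)
                self.two = Gaussian(random.choice(data), 1.0)
                self.mix = 0.5                 # mixing weight pi_1 (pi_2 = 1 - mix)
                self.loglike = float('-inf')   # exists before the first iterate()

            def Estep(self):
                # yields responsibilities; loglike sums the log of the
                # UNNORMALISED total, i.e. log p(x) -- not log(1) = 0
                self.loglike = 0.0
                for x in self.data:
                    wp1 = self.mix * self.one.pdf(x)
                    wp2 = (1.0 - self.mix) * self.two.pdf(x)
                    den = wp1 + wp2
                    self.loglike += log(den)
                    yield (wp1 / den, wp2 / den)

            def Mstep(self, weights):
                # re-estimate means, sigmas and mixing weight from responsibilities
                left, right = zip(*weights)
                n1, n2 = sum(left), sum(right)
                self.one.mu = sum(w * x for w, x in zip(left, self.data)) / n1
                self.two.mu = sum(w * x for w, x in zip(right, self.data)) / n2
                self.one.sigma = sqrt(sum(w * (x - self.one.mu) ** 2
                                          for w, x in zip(left, self.data)) / n1)
                self.two.sigma = sqrt(sum(w * (x - self.two.mu) ** 2
                                          for w, x in zip(right, self.data)) / n2)
                self.mix = n1 / len(self.data)

            def iterate(self, verbose=False):
                self.Mstep(self.Estep())   # the fix suggested above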

  • @browsertab · 5 years ago · +37

    The butt kissing ends at 3:40

  • @ngplradhika · 6 years ago · +7

    Your accent reminds me of Mitchell from Modern Family (fav character) :')
    Also great video thanks!!

  • @alessandrorosati969 · 1 year ago

    I have a problem with Gaussian mixture models: I don't know how to generate outliers uniformly in the p-parallelotope defined by the
    coordinate-wise maxima and minima of the 'regular' observations in R.

  • @moorsyjam · 7 years ago · +4

    I got pretty confused around 33:33 with the E step. You've computed wp1 and wp2, which is cool, and then normalised them so their sum is 1 [wp1/(wp1+wp2) + wp2/(wp1+wp2) = (wp1+wp2)/(wp1+wp2) = 1], which makes sense. You then add the log of this sum to self.loglike. But the log of 1 is 0... Which is where you lost me.

    • @emrahyigit · 7 years ago · +1

      You are right! Siraj should check and fix that with YouTube annotations.

    • @茱莉-x2o · 2 years ago

      Agree
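
    The resolution, for anyone else stuck here: the log is taken of the unnormalised sum, before dividing through. Unnormalised, wp1 + wp2 is the mixture density of the data point, not 1. In LaTeX, with \pi_k the mixing weights:

        \mathrm{wp}_1 + \mathrm{wp}_2
            = \pi_1 \mathcal{N}(x \mid \mu_1, \sigma_1^2)
            + \pi_2 \mathcal{N}(x \mid \mu_2, \sigma_2^2)
            = p(x),
        \qquad
        \text{loglike} = \sum_{i=1}^{N} \log p(x_i)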

  • @BahriddinAbdiev · 6 years ago

    I have some questions:
    1. In the end, what have we achieved: a probability distribution over whether people keep playing the game?
    2. Can it cause overfitting if we set too many Gaussian distributions?
    Regards.

  • @ethereumnews3873 · 6 years ago

    You are the best source of ML... thanks for your attention and love for AI!!!!!

  • @CarlosCosta-gs8rb · 7 years ago

    Hi. Great again, Siraj. You're the best at this online, apparently. Could we have a video about non-parametric estimation or higher-order statistics, perhaps ICA?

  • @getinenglish3472 · 4 years ago

    I keep getting this error:
    AttributeError                            Traceback (most recent call last)
    in <module>()
         10     try:
         11         mix.iterate()
    ---> 12         if mix.loglike > best_loglike:
         13             best_loglike = mix.loglike
         14             best_mix = mix
    AttributeError: 'GaussianMixture' object has no attribute 'loglike'
    I am not sure what to do in this case. Any ideas?
    Thank you
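
    A likely cause, judging by the traceback: iterate() raises on a bad random start (e.g. a zero density), the surrounding try/except swallows it, but loglike was never assigned, so the comparison hits a missing attribute. A minimal guard, assuming the notebook's loop shape (variable names taken from the traceback; n_restarts is hypothetical):

        best_loglike = float('-inf')
        best_mix = None
        for _ in range(n_restarts):
            mix = GaussianMixture(data)
            mix.loglike = float('-inf')   # ensure the attribute always exists
            try:
                mix.iterate()
            except (ZeroDivisionError, ValueError):
                continue                  # bad initialisation; try again
            if mix.loglike > best_loglike:
                best_loglike, best_mix = mix.loglike, mix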

  • @esakkiponraj.e5224 · 5 years ago · +2

    Doesn't wp1 + wp2 = 1 always, so self.loglike += log(wp1 + wp2) will be zero?
    Is that true, or is my assumption wrong?
    Kindly explain...

    • @ACLNM · 4 years ago · +1

      He makes mistakes... If only that were the only one... referring to variance as "variation"... doesn't know how a standard deviation is calculated... omg.

  • @mikkelgedehansen9479 · 3 years ago

    Would be nice to have timestamps, since it is quite impossible to find the bit of information about Gaussian mixture models that I was actually looking for...

  • @tensorhack5271 · 7 years ago

    Hi, I have been following this channel for a while now and love that you create different series. Can you make a small series of basic examples next, so it's easier to learn and get started? With one of your first videos I created an sklearn program that had 50 examples of fruit and car names, and with KNN I got pretty good results, but they are not perfect. Now I want to use deep learning for that, and would love to see a series where you give different simple examples like this to compare and get started with the different libraries and algorithms. And yes, you have created some beautiful similar content before, but it's not exactly that. Best wishes

  • @vijaysingh-zf2ox · 3 years ago

    The code doesn't work; there is a problem in the GaussianMixture class.

  • @PabloMartinez-ut8on · 7 years ago

    You can visit us in Uruguay! Everyone is welcome in Uruguay, especially people who motivate the world to be better, like you @siraj!

  • @MsSmartFox · 4 years ago

    @Siraj, why do you change the formula at 29:54? Instead of sigma^2 you are using abs(sigma).
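
    For reference, the density being coded there is the univariate normal; since the standard deviation must be positive, dividing by abs(sigma) is equivalent for valid sigma and merely guards against a sign flip during estimation, while the exponent still uses sigma^2. In LaTeX:

        \mathcal{N}(x \mid \mu, \sigma^2)
            = \frac{1}{|\sigma|\sqrt{2\pi}}
              \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)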

  • @hemilysantos600 · 6 years ago

    Hi, how do you change the variance and mean of a Gaussian function in MATLAB? Can you show an example of what the code looks like?

  • @茱莉-x2o · 2 years ago

    The pdf does not give the probability. It gives the probability density at that x.
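
    Right; probabilities come from integrating the density over an interval, which is also why the curve's height can exceed 1 for a narrow Gaussian. In LaTeX:

        P(a \le X \le b) = \int_a^b f(x)\,dx, \qquad P(X = x_0) = 0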

  • @JayanthBagare · 7 years ago

    Hey @siraj, where are you going to be in India? Would love to catch up.

  • @julioargumedo6722 · 7 years ago

    Hey Siraj thank you. If you ever come to México, you'll have a room, a meal, a beer and a friend :)

  • @eliza337 · 5 years ago

    I am trying to use your notebook and getting this error -- any ideas?
    I am getting an error for # checking the fitting process:
    AttributeError: 'GaussianMixture' object has no attribute 'loglike'

  • @jcxmej · 7 years ago

    Siraj, I have a question/problem. I have two data inputs which are to be comparatively trained by a learning model. It's not multiple sets of data but only one: a set of pairs of inputs. I have been reading about pairwise SVMs. How do I do that? Is there a better model?

  • @redafekry3303 · 4 years ago

    Could you please show an example on 3D data (XYZ points)?

  • @onefulltimeequivalent1230 · 7 years ago

    Siraj, any plans on coming to Germany in the future?

  • @siddharthkotwal8823 · 7 years ago

    Hey Siraj! Come down to Mumbai for some beers and nerding out!

  • @gabrielcustodiodasilva · 7 years ago

    You are amazing, Siraj!

  • @valentinocostabile9314 · 7 years ago

    Great! You smartly solved my doubts... thanks man =)

  • @AmCanTech · 7 years ago

    Is there a guide on how to set up Jupyter Notebook?

    • @antonylawler3423 · 7 years ago · +1

      If you aren't already using Python, use the Anaconda distribution. www.continuum.io. It will also include the most useful libraries.

  • @resitk7272 · 5 years ago

    What if we had more than one column to train on? Anyone?
    Should we do the same processing twice, or combine the dataset into one row?
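
    Neither, as it happens: a GMM handles several columns natively, with each component becoming a multivariate Gaussian with a mean vector and covariance matrix. A minimal sketch with scikit-learn (file and column names are hypothetical):

        import pandas as pd
        from sklearn.mixture import GaussianMixture

        # fit both feature columns jointly; 'full' lets each component
        # learn its own covariance matrix
        X = pd.read_csv('data.csv')[['col_a', 'col_b']].values
        gmm = GaussianMixture(n_components=2, covariance_type='full').fit(X)
        print(gmm.means_)        # one mean vector per component
        print(gmm.covariances_)  # one 2x2 covariance matrix per component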

  • @dan92white · 6 years ago · +1

    Great presentation and really well explained! Are you using AWS Sagemaker for this?

  • @nehadureja · 4 years ago

    Thank you. Very helpful video. :)

  • @teodorstoev2610 · 6 years ago

    I still can't understand why the bell curve turns into a circle. Could someone provide me with an explanation, please?

    • @prateek6502-y4p · 5 years ago · +1

      It's the top view of the 3D hill-shaped bell curve, which looks like a circle with its centre at the peak of the curve.
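
    A quick way to see this for yourself, as a sketch: plot the level sets of a 2-D Gaussian with identity covariance and view it from above.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.stats import multivariate_normal

        # evaluate a standard 2-D normal on a grid
        xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
        z = multivariate_normal([0, 0], [[1, 0], [0, 1]]).pdf(np.dstack((xs, ys)))
        plt.contour(xs, ys, z)         # concentric circles: the "top view"
        plt.gca().set_aspect('equal')  # equal axes so circles look circular
        plt.show()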

  • @AmandeepSingh-cv5qz · 6 years ago

    Why did you not explain the code?

  • @Iceport · 4 years ago

    6:45 y is not the probability; y is the "likelihood", because the function plotted is a pdf.

  • @LearnToMaster · 6 years ago

    Another great tutorial, Siraj. By the way, if anyone gets a seaborn deprecation error and is unable to plot histograms, I resolved it by going into the third-party lib seaborn/distributions.py and changing "normed" to "density".
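
    A less invasive fix than editing the installed library, assuming the failure is in a histogram call: newer matplotlib removed the `normed` keyword in favour of `density`, so upgrading seaborn, or plotting the histogram directly, avoids patching distributions.py:

        import matplotlib.pyplot as plt
        # data: a 1-D array of observations
        plt.hist(data, bins=20, density=True)  # normalised histogram
        plt.show()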

  • @amazingindia7897 · 6 years ago

    Hi Siraj, I am working on a helmet detection model, can you help me out!

  • @kjkunaljindal24 · 6 years ago

    I believe the objective is to maximize the likelihood of the observed data, not of the observed data and the hidden variables.

  • @hacademicabel · 5 years ago

    That was an amazing intro! Great videos man!

  • @AishwaryaRadhakrishnan34 · 4 years ago

    Awesome !!!!!!

  • @flydragoon88 · 7 years ago

    you are awesome!

  • @MahdiZouch · 6 years ago

    you are amazing (y)

  • @Alisha-vg8bw · 6 years ago

    Please explain Expectation Maximization... I am not understanding it and crying, please explain.

  • @danieldavieau1517 · 5 years ago

    Holy shit you're fkkin awesome

  • @quebono100 · 7 years ago

    That reminds me of the firefly algorithm.

  • @smilebig3884 · 4 years ago · +1

    Made no sense. Better to try coming up with more and more examples rather than just reading slides.

  • @mktsp2 · 3 years ago

    First 3 mins is him sucking up to the subscribers

  • @kryptoshi4706 · 7 years ago · +17

    Siraj for president

  • @AmCanTech · 7 years ago

    And then ppl like you and me...

  • @awaisahmadsiddiqi6505 · 6 years ago

    Come to China

  • @vasileioskosmidis1421 · 3 years ago

    Ridiculous

  • @babumoshay8117 · 6 years ago · +5

    Brother, this is too much... he shows off so much style... and on top of that he talks so much nonsense...

  • @ShivaniSingh-ed1ib · 6 years ago

    Are you gay? Because you are reacting like that... Please stop overacting and try to speak normally, like a civilian, not like Karan Johar.

  • @idiocracy10 · 6 years ago · +15

    Warning: when he finger-styles his hair, get ready for a hardcore info dump.
    PS: 3Blue1Brown's series on linear algebra has THE BEST video on eigenvector/eigenvalue pairs, no joking.

  • @jayce8978 · 7 years ago · +32

    In case you get bad results using Gaussian mixtures, keep in mind the EM optimization only has local convergence properties, just like gradient descent: it can get stuck. Restarting the density estimation with other initial parameters might solve it! :)
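
    For reference, scikit-learn builds these restarts in: n_init runs EM from several random initialisations and keeps the fit with the best lower bound (a sketch; X is an (n_samples, n_features) array):

        from sklearn.mixture import GaussianMixture
        gmm = GaussianMixture(n_components=2, n_init=10,
                              init_params='random').fit(X)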

  • @hammadshaikhha · 7 years ago · +10

    Siraj, I think it would have been helpful if you had shown the resulting clusters that you get from the Gaussian mixture model approach on your data. You showed how to model the data using the Gaussian mixture, but I am unclear on how we get the specific clusters (say 2 clusters) from that.
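
    For anyone with the same question: the clusters come from the responsibilities -- each point goes to the component with the highest posterior probability. A sketch with scikit-learn (X is the data matrix):

        from sklearn.mixture import GaussianMixture
        gmm = GaussianMixture(n_components=2).fit(X)
        labels = gmm.predict(X)      # hard clusters: argmax of responsibilities
        resp = gmm.predict_proba(X)  # soft clusters: per-component posteriors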

  • @ivanpastukhov8664 · 5 years ago · +1

    Hi all! I've fixed some bugs from the original .ipynb so you are welcome to try my one: github.com/ivanpastukhov/Gaussian_Mixture_Models/blob/master/intro_to_gmm_%26_em.ipynb
    Siraj, thanks a lot for your brilliant videos!

    • @imrohu · 3 years ago

      Thank you so much Ivan!

  • @simonmandlik910 · 7 years ago · +7

    Where do I get the dataset? It is not mentioned anywhere and it is not in the GitHub repository either.

    • @imtryinghere1 · 5 years ago · +3

      Dataset can be found at: raw.githubusercontent.com/brianspiering/gaussian_mixture_models/master/bimodal_example.csv

  • @dhrumilbarot1431 · 7 years ago · +2

    Check out Siraj's India visit info >> pydata.org/delhi2017/

  • @sandeepozarde2820 · 4 years ago · +1

    Can you please control your moving-hand data points? Too much distraction.

  • @brunoribeiro512 · 6 years ago · +2

    Great video. I tried running your code in my terminal and it's giving the error that 'GaussianMixture' object has no attribute 'loglike'. Would you happen to know why an error like that would occur, or anyone for that matter? Thank you so much.

  • @RoxanaNoe · 6 years ago · +2

    I watch 4-5 videos of yours per day. I'm learning generative models for drug design, Siraj. Watching your videos not only motivates me, it also makes my life & study fun and cool.

  • @ACLNM · 4 years ago · +1

    So... 38 minutes to predict something and he just forgets about the prediction part?
    I'm sorry, but the justification at 36:02 is not enough for my satisfaction.

  • @qin-yu · 4 years ago · +1

    1:30 to 3:40 - trash talk

  • @CrazySkillz15 · 6 years ago · +2

    Thank you! Your videos helped me a lot... I was so lost and confused about this topic that I was on the verge of giving up. I checked out your tutorials, which gave a lot of useful information and insights. Thanks a tonne! :) :D Keep up the good stuff.

  • @DosellIo1 · 7 years ago · +2

    Great series!!!! It even helps me in my AI learning curve at Udacity. Thanks for it. Regards, Tibor

  • @yangchitika412 · 4 years ago

    I keep getting the following error though:
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    in <module>()
          8     # train!
          9     mix.iterate(verbose=True)
    ---> 10     if mix.loglike > best_loglike:
         11         best_loglike = mix.loglike
         12         best_mix = mix
    AttributeError: 'GaussianMixture' object has no attribute 'loglike'

  • @shashankesh · 6 years ago · +3

    25:22 EM model

  • @boscojay1381 · 4 years ago · +1

    Hi Siraj, I appreciate your videos and I love your content. I am working on a project on cross-matching using active learning; what advice would you have for me? I am trying to build something scalable but not so computationally intense.

  • @kakolelouch5261 · 7 years ago · +1

    Hi Siraj, wonderful video! I am wondering, what is the difference between a Gaussian mixture model and the least squares method from a data-fitting point of view?

  • @kshiteejsheth9416 · 7 years ago · +1

    Hey Siraj! EM is a heuristic with no guarantees of global convergence. There have been recent algorithms based on the method of moments, random projections, etc., which provably recover the GMM under some assumptions.

  • @GugaOliveira70 · 6 years ago · +1

    Thank you very much! Your explanation is very good and educational! I'm recommending your channel to my friends too.

  • @bitvox · 7 years ago · +2

    Hi, your videos are great! Please cover VGG, AlexNet, and others sometime.

  • @getinenglish3472 · 4 years ago · +1

    Wow! Finally I got my head around this subject. Well done and amazing teaching skills 👏🏻
    Andre

  • @zaphbeeblebrox5333 · 2 years ago

    6:45 "y values are the probabilities for the x values." NO!! y values are decidedly NOT probabilities.

  • @zaphbeeblebrox5333 · 2 years ago

    7:39 "Sometimes the data has multiple distributions". WTF?! This video is deeply misleading. Please take it down.

  • @prayanshsrivastava1810 · 6 years ago · +2

    33:30
    wp1/(wp1+wp2) + wp2/(wp1+wp2) = 1
    log(wp1 + wp2) = log(1) = 0
    How is his model being trained?

    • @prizmaweb · 5 years ago

      You guess a theta (model params), then that gives you a probability distribution over the hidden variables. With that known, you maximize the joint probability distribution of X and the hidden variables. That gives you a new theta. Repeat the 2 steps above, using the new theta model params instead of your guess.

    • @muhammadshahzaib9122 · 5 years ago

      We actually try to get the value of log(wp1 + wp2) =1 not (wp1 + wp2) to be 1.
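
    To settle the thread with made-up numbers: before normalisation, wp1 + wp2 is the mixture density of the point, not 1, so its log is a genuine (non-zero) contribution to the log-likelihood; only the normalised responsibilities sum to 1.

        from math import log
        wp1, wp2 = 0.12, 0.03          # pi_k * N(x | mu_k, sigma_k): densities
        print(wp1 + wp2)               # 0.15 -> the mixture density p(x), not 1
        print(log(wp1 + wp2))          # non-zero term added to loglike
        print(wp1/(wp1+wp2) + wp2/(wp1+wp2))  # 1.0 -> responsibilities sum to 1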

  • @chitralalawat8106 · 5 years ago · +1

    Here, are x1, x2, ... the vectors, or are they the data points of a vector x?

  • @heathicusmaximus8170 · 2 years ago

    Apple sends their hinge prototypes to this guy for testing. If this guy won't wear out hinges, who will?

  • @tarekamineafir714 · 6 years ago · +1

    Really, thanks man, your video helped me a lot in my hyperspectral image classification projects.

  • @farzadfayyaz6953 · 2 years ago

    When I run your code, I get a 'could not find data file' error. Why? How can I find it?

  • @zaphbeeblebrox5333 · 2 years ago

    8:30 "x is the number of data points"? What are you talking about?!

  • @adarshsrivastava1074 · 5 years ago · +1

    Great video! Really helpful for data science students.

  • @maxemc2716 · 5 years ago

    The hand gestures, as always, are a little bit distracting to me, by the way.

  • @spiderman7616 · 6 years ago · +1

    Hey Siraj!
    Just found your channel and it doesn't cease to amaze. I am learning a lot about AI and ML through your vibrant and enthusiastic delivery. My 2 cents would be to talk a tiny bit slower, but it is up to you. Congrats and keep up the good work!

  • @gunay-turan · 3 years ago

    Where is the mathematical explanation?

  • @thepresistence5935 · 2 years ago

    Where can I get the blog he is following?

  • @jamesgohigh26 · 2 years ago

    Handsome. I like your teaching.

  • @harkus8831 · 4 years ago

    Does anyone know what the mix value is?

  • @dhaksheshkolli3369 · 4 years ago

    Thanks for the vid. Owe ya one big, bro.

  • @Sha0w_ · 4 years ago

    OK OK, I will subscribe to you ;>

  • @mauropappaterra · 5 years ago · +1

    We love you Siraj

  • @Vivekagrawal5800 · 2 years ago

    Video starts at 03:40

  • @mayurkulkarni755 · 6 years ago · +1

    Super tutorial! Thank you so much!

  • @nomercysar · 5 years ago · +1

    Thanks for reading theory to me. Couldn't do that by myself

    • @Arik1989 · 5 years ago

      I know you're being sarcastic, but honestly, I'm looking for people to do just that for me, I HATE reading technical material.

  • @pierre-louistermidor7118 · 2 years ago

    good job!

  • @antonylawler3423 · 7 years ago · +1

    Siraj. The depth and range of your knowledge continue to amaze me.

  • @ctf5086 · 4 years ago

    Okay