Sampling from Bayes' Nets

  • Published Oct 16, 2024

COMMENTS • 62

  • @curdyco
    @curdyco 9 months ago +5

    Even after 11 years, it's one of the best videos. Thank you

  • @sofiayfantidou6457
    @sofiayfantidou6457 9 years ago +43

    Best and simplest thing I've found on the web so far about sampling on Bayes' nets!

  • @narjissyy5865
    @narjissyy5865 2 months ago

    Excellent explanation. I appreciate it so much. I finally understand this after struggling so much with other resources. Thank you!

  • @adarbayan719
    @adarbayan719 2 years ago +6

    Dear Mr. Abbeel, thank you for your explanation! It has been 10 years and this tutorial still rocks :)

  • @VonGazza82
    @VonGazza82 9 years ago +26

    At 24:00, you added the last sample that had +d even though you were conditioning on -d. Just a small error.

  • @nikolatesla9598
    @nikolatesla9598 4 months ago

    Greatest explanation with an example. Thank you, Professor Pieter!

  • @kshitijtakarkhede7833
    @kshitijtakarkhede7833 6 months ago +1

    Thank you sir!!!

  • @dadamkd
    @dadamkd 4 years ago +4

    Good video, it's actually very simple when you see it in practice and not just as an abstract set of equations.

  • @aaronmailhot9370
    @aaronmailhot9370 3 years ago +5

    Solid tutorial, thanks so much! My one suggestion for making it even better would be to tweak the details of the Likelihood Weighting example a bit (starts at 17:50). I was confused for a while about how one could get different weight values from the samples, because in this example the samples will always return the same results, since they are controlled only by evidence variables. I think changing the example to 'find P(-d | -b, -c)' would work (swap out a for b, to allow a to be randomized to change which area P(b | a) and P(c | a) pull from).
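[Editor's note: the behavior this comment describes can be reproduced in a short simulation. The network structure (a → b, a → c, (b, c) → d) and every CPT number below are assumptions for illustration, not the video's actual figures; the point is only that sampling the non-evidence parent a makes the evidence weights vary across samples.]

```python
import random

# Hypothetical network a -> b, a -> c, (b, c) -> d with made-up CPTs.
P_A = 0.6                                  # P(+a)
P_B = {True: 0.7, False: 0.2}              # P(+b | a)
P_C = {True: 0.9, False: 0.3}              # P(+c | a)
P_D = {(True, True): 0.8, (True, False): 0.5,
       (False, True): 0.4, (False, False): 0.1}  # P(+d | b, c)

def likelihood_weighted_sample(evidence):
    """Draw one sample: fix evidence variables, weight by their likelihood."""
    weight = 1.0
    a = random.random() < P_A              # a is not evidence, so it is sampled
    if "b" in evidence:
        b = evidence["b"]
        weight *= P_B[a] if b else 1 - P_B[a]   # weight by P(b | a)
    else:
        b = random.random() < P_B[a]
    if "c" in evidence:
        c = evidence["c"]
        weight *= P_C[a] if c else 1 - P_C[a]   # weight by P(c | a)
    else:
        c = random.random() < P_C[a]
    d = random.random() < P_D[(b, c)]      # the query variable is sampled
    return d, weight

def estimate(n=50_000):
    """Estimate P(-d | -b, -c); weights vary because a is still randomized."""
    num = den = 0.0
    for _ in range(n):
        d, w = likelihood_weighted_sample({"b": False, "c": False})
        den += w
        if not d:
            num += w
    return num / den

print(round(estimate(), 2))  # close to 0.9, since P(-d | -b, -c) = 0.9 here
```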

  • @cangozpinar
    @cangozpinar 3 years ago

    Waaay better than anything I've seen so far. Much respect!

  • @ahmedlakhel5383
    @ahmedlakhel5383 1 year ago

    Thank you so much you saved my life

  • @aplicano0921
    @aplicano0921 5 years ago

    This is the best video and the best teacher!

  • @bonkers33331
    @bonkers33331 3 years ago +1

    Excellent description of Bayesian network sampling!

  • @hypebeastuchiha9229
    @hypebeastuchiha9229 2 years ago +1

    Thank you so so so much
    My professor could learn a thing from you

  • @sumowll8903
    @sumowll8903 2 years ago

    This video is so good and clear!

  • @NazerkeSafina
    @NazerkeSafina 4 years ago

    Thank you very much for this! I hope you get everything you want in life.

  • @mohammadabbasi5952
    @mohammadabbasi5952 1 year ago

    Great tutorial, Mr. Abbeel :)❤

  • @AshutoshSahuMRM
    @AshutoshSahuMRM 10 months ago

    Great explanation!!!

  • @jazibjamil9883
    @jazibjamil9883 5 years ago +6

    In the last part you fixed B = -b and A = -a, but the samples you showed did not reflect this. That part is extremely wrong and needs rectification.

  • @jeffkral5016
    @jeffkral5016 3 years ago +1

    At 9:04, does it really map to +c? 0.04 seems like it would map to -c.
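[Editor's note: whether a draw of 0.04 lands on +c or -c depends on how the sampler lays out the cumulative intervals. A minimal sketch of mapping a uniform draw to a discrete value; the labels and the 0.9/0.1 split are made up for illustration, not the video's numbers.]

```python
import random

def sample_discrete(values, probs, u=None):
    """Map a uniform draw u in [0, 1) to a value via cumulative intervals."""
    if u is None:
        u = random.random()
    cumulative = 0.0
    for value, p in zip(values, probs):
        cumulative += p
        if u < cumulative:       # u falls inside this value's interval
            return value
    return values[-1]            # guard against floating-point round-off

# With +c first (interval [0, 0.9)), the draw u = 0.04 lands on +c ...
print(sample_discrete(["+c", "-c"], [0.9, 0.1], u=0.04))   # +c
# ... but with -c first (interval [0, 0.1)), the same draw lands on -c.
print(sample_discrete(["-c", "+c"], [0.1, 0.9], u=0.04))   # -c
```

Both orderings are valid samplers; only the convention for laying out the intervals differs.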

  • @mustafaghaleb
    @mustafaghaleb 7 years ago

    The best tutorial about sampling. Thanks :)

  • @sparshjain6077
    @sparshjain6077 5 years ago

    Very clear explanation. Awesome video. Thanks a lot!

  • @nealpobrien
    @nealpobrien 5 months ago

    Excellent video

  • @bhaitato
    @bhaitato 6 years ago +1

    Thank you for this simple lesson!

  • @kudamushaike
    @kudamushaike 6 years ago

    Thank you sooo much. You just saved my exam

  • @aram4165
    @aram4165 11 years ago

    Thank you, professor, I have really become a fan of your teaching style. One question: is likelihood weighting the same as Gibbs sampling?

  • @PieterAbbeel
    @PieterAbbeel  11 years ago +2

    Hi Aram, yes, likelihood weighting is an instantiation of importance sampling. In general, in importance sampling you are interested in getting samples from a distribution Q1, but unfortunately you don't know how to sample efficiently from Q1, so you instead sample from another distribution Q2 that is easier to sample from --- and then you reweight your samples by the ratio Q1/Q2. In likelihood weighting: Q1 = P(unobserved variables | observed variables), Q2 = defined by the sampling process.
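[Editor's note: the reweighting Prof. Abbeel describes can be sketched in a few lines. The distributions below are toy stand-ins for Q1 and Q2 chosen for illustration, not anything from the lecture.]

```python
import random

def importance_sampling_mean(q1_pdf, q2_pdf, q2_sampler, f, n=100_000):
    """Estimate E_{Q1}[f(x)] from Q2 samples, reweighted by Q1/Q2."""
    total_wf = total_w = 0.0
    for _ in range(n):
        x = q2_sampler()
        w = q1_pdf(x) / q2_pdf(x)    # importance weight Q1/Q2
        total_wf += w * f(x)
        total_w += w
    return total_wf / total_w        # self-normalized estimate

# Toy example: Q1 puts 0.8 mass on 1 and 0.2 on 0; Q2 is uniform on {0, 1}.
q1 = {0: 0.2, 1: 0.8}
estimate = importance_sampling_mean(
    q1_pdf=lambda x: q1[x],
    q2_pdf=lambda x: 0.5,
    q2_sampler=lambda: random.randint(0, 1),
    f=lambda x: x,
)
print(round(estimate, 1))  # close to E_{Q1}[x] = 0.8
```

Likelihood weighting is this scheme with Q1 the posterior over the unobserved variables and Q2 the distribution induced by sampling the non-evidence variables topologically.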

  • @prayaglehana7187
    @prayaglehana7187 5 years ago

    Best explanation of all!

  • @muhammedyusufsener1622
    @muhammedyusufsener1622 5 years ago

    Great explanation sir, thank you very much!

  • @shixinli2818
    @shixinli2818 4 years ago

    Very helpful! Thank you very much

  • @AishwaryaRadhakrishnan34
    @AishwaryaRadhakrishnan34 5 years ago

    Best explanation!

  • @zhuoerlyu4705
    @zhuoerlyu4705 5 years ago

    Still the best video as of 2019

  • @Dr.hayder374
    @Dr.hayder374 2 years ago

    Dear Dr., at 10:06, is answering probabilistic queries used for creating the conditional probabilities in the Genie software?

  • @dr.p.m.ashokkumar5344
    @dr.p.m.ashokkumar5344 4 years ago

    Excellent... But how do you do sampling when the variables are continuous?

  • @preetivyas_
    @preetivyas_ 7 years ago

    While doing likelihood weighting for the last question, how would you proceed while collecting samples, given there are multiple conditional queries?

    • @ManishPrajapatchamp
      @ManishPrajapatchamp 6 years ago +1

      You can divide the sum of the weights of samples matching both the query and the evidence by the sum of the weights over all samples consistent with the evidence.
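[Editor's note: that ratio can be written out as a tiny helper; the sample tuples below are hypothetical weighted samples, already consistent with the evidence by construction.]

```python
def weighted_query_estimate(samples):
    """Estimate P(query | evidence) from likelihood-weighted samples.

    Each sample is (query_holds: bool, weight: float); every sample is
    already consistent with the evidence by construction.
    """
    numer = sum(w for query_holds, w in samples if query_holds)
    denom = sum(w for _, w in samples)
    return numer / denom

# Hypothetical weighted samples (the bool records whether the query value was drawn).
samples = [(True, 0.5), (False, 0.2), (True, 0.3), (False, 0.5)]
print(weighted_query_estimate(samples))  # 0.8 / 1.5 ≈ 0.533...
```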

  • @nckporter
    @nckporter 7 years ago

    Hi, where did you get the examples?

  • @lydiama7803
    @lydiama7803 7 years ago

    Very clear. Thank you.

  • @breakinggood-r2v
    @breakinggood-r2v 10 months ago

    Excellent

  • @aram4165
    @aram4165 11 years ago

    Excuse me, I meant importance sampling. Is likelihood weighting the same as importance sampling?

  • @jarlaxleow
    @jarlaxleow 2 years ago

    Bless

  • @zhuoerlyu4705
    @zhuoerlyu4705 5 years ago

    Best video

  • @salimakraa5027
    @salimakraa5027 6 years ago +6

    How did you find the weights of each sample at 20:58? Thanks in advance

    • @sathblr
      @sathblr 6 years ago +3

      I have the same question. Given the evidence (-a, -b), we don't sample 'a' and 'b' from the distribution; we sample only c and d. But there are still samples with the +a,+b / +a,-b / -a,+b combinations at 20:58!! How are the combined weights calculated? Thanks in advance, and thanks for the clear and simple explanation.

    • @gabrielemazzola9652
      @gabrielemazzola9652 6 years ago

      ​@@sathblr In general, when you do Likelihood weighting you have to weigh each sample by the likelihood of that generated sample with the evidence variables ("how much that sample makes sense, given the evidence of your query")
      This means the total weight is the product, over the evidence variables, of the conditional probability of each evidence variable given its parents (remember: evidence variables are always fixed by the query).
      In this particular case, though, the evidence variables are 'a' and 'b':
      - 'a' doesn't have parents.
      - 'b' has only 'a' as a parent.
      For this reason, the weight of each sample is given by the following: P(A) * P(B|A), where B and A are actual settings of the variables for the current sample.
      Example:
      In the last sample (21:01) we have "+a , +b , -c , -d" --> weight = P(+a) * P(+b | +a) = 0.8 * 0.8 = 0.64
      For this reason, I believe the weights provided by the Teacher are just for the sake of explaining the concepts.
      Please correct me if I'm wrong. I hope this helped.

  • @tega2754
    @tega2754 7 years ago

    Nice explanation

  • @gourangpathak4443
    @gourangpathak4443 1 year ago

    Good explanation

  • @vikankshnath8068
    @vikankshnath8068 4 years ago

    Please make more short AI videos on different topics with good example questions.

  • @hardikchawla4966
    @hardikchawla4966 5 years ago

    I should be paying my college fees to this guy.

  • @DanielVazquez
    @DanielVazquez 4 years ago

    Go bears!

  • @Wodro
    @Wodro 6 years ago

    0:00 earrape alert

  • @Workshirt
    @Workshirt 7 years ago +1

    19:46 0.2 * 0.5 =/= 0.1

  • @manas_singh
    @manas_singh 3 years ago

    From Rejection Sampling onwards, the explanation is bad. It is too fast.

  • @milos-simic
    @milos-simic 7 years ago +1

    This is a good video, but you're talking too fast.

    • @buddhabrot
      @buddhabrot 5 years ago +1

      This video is understandable even without audio

  • @sheheryar89
    @sheheryar89 5 years ago

    Good, but you were too fast, and the words were hard too.

  • @trollerxoxox
    @trollerxoxox 9 years ago +1

    Can ya talk any fucken faster, who ya tryin to impress, a machinegun?

  • @peterpfankuchen
    @peterpfankuchen 6 years ago +1

    Dude, get some space between your mouth and mic, and talk a bit slower...