Graphical Models 1 - Christopher Bishop - MLSS 2013 Tübingen

  • Published 10 Dec 2024

COMMENTS • 20

  • @ching-chenghsu1423 8 years ago +5

    As a physicist, I really like his approach to machine learning. Very impressive slides, and the Feynman diagram was a good analogue. Thanks to MPI for IS for sharing this series of videos; I was a PhD student at the MPI for Physics.

  • @timelyrain 4 years ago

    Great lecture, many thanks for making this publicly available!

  • @bikrammajhi3020 1 year ago

    Wow, one of the best introductory lectures on graphical models. Thank you.

  • @Dr2quan 8 years ago +26

    The probability part begins at 36:00.

  • @martinjensen3626 6 years ago +2

    I love how Mr. Bishop always uses “Pretty Woman” in the movie recommendation example. He must really love that movie. :-)
    Great content - thanks for sharing!

  • @usptact 7 years ago +2

    This is the kind of content that will still be relevant when deep learning is superseded by the next big thing.

  • @redjammie8342 6 years ago +3

    Bold prediction about the future of machine learning, especially given that deep learning had already become a thing at the time of this lecture.

  • @darshanime 7 years ago +4

    How does the left graph at 1:14:48 represent PCA/ICA/linear regression?

    • @AhmedIsam 5 years ago +1

      It is simple. The node at the top is the latent variable (the reduced dimension, in PCA terms) and the node at the bottom is the observed variable (the original data). In regression, the node at the top is the weight you're looking for, and the one at the bottom is the data you observe. That said, this is very rough and lacks many PGM details, like plates and shaded nodes.
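
A minimal sketch of the two-node model the reply above describes, taking probabilistic PCA as the concrete case: a latent z at the top with p(z) = N(0, I) and an observed x at the bottom with p(x | z) = N(W z + mu, sigma^2 I). The dimensions and the values of W, mu, and sigma below are illustrative assumptions, not taken from the lecture.

# Sketch: probabilistic PCA as a two-node directed graph  z -> x.
# The joint factorizes as p(x, z) = p(z) * p(x | z), matching the graph.
import numpy as np

rng = np.random.default_rng(0)

latent_dim, data_dim = 2, 5                      # illustrative sizes
W = rng.normal(size=(data_dim, latent_dim))      # loading matrix: how x depends on z
mu = np.zeros(data_dim)                          # observation mean
sigma = 0.1                                      # observation noise standard deviation

# Ancestral sampling follows the arrow of the graph: sample the parent, then the child.
z = rng.normal(size=latent_dim)                      # top node:    z ~ N(0, I)
x = W @ z + mu + sigma * rng.normal(size=data_dim)   # bottom node: x | z ~ N(W z + mu, sigma^2 I)

print("latent z:", z)
print("observed x:", x)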

  • @sagaradoshi 2 years ago

    Thanks for the wonderful video. One quick question: why did we start with the joint probability in graphs? In the lecture we saw the product rule, sum rule, conditional probability, etc., but then for graphs we started by factorizing the joint probability. Why? What is the idea behind finding the joint probability of x1..xp? I have already read online that the joint probability indicates which variables depend on which others, but how is that related to learning with graphs? Many thanks in advance.
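
A short note on the question above, restating what the lecture builds on: the joint distribution is the starting point because every other quantity (marginals, conditionals, predictions) can be recovered from it via the sum and product rules, and the graph's job is to state how that joint factorizes. Repeated application of the product rule gives, for any ordering of the variables,

    p(x_1, \dots, x_K) = p(x_1)\, p(x_2 \mid x_1) \cdots p(x_K \mid x_1, \dots, x_{K-1}),

and a directed graph records which conditioning variables can be dropped, leaving the compact factorization

    p(x_1, \dots, x_K) = \prod_{k=1}^{K} p(x_k \mid \mathrm{pa}_k),

where pa_k denotes the parents of x_k in the graph. The missing edges are exactly the conditional-independence assumptions the model makes.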

  • @edgetrading2 10 years ago +6

    Thanks for the great lecture! I like your book.

  • @bingeltube 6 years ago +1

    Recommendable, but a very gentle introduction

  • @yuwuxiong1165 6 years ago

    Great lecture... just want to mention that at @1:01:59, regarding "the graphs in the essence are adding nothing to the equations...", I guess Judea Pearl may have a different opinion: from causality's point of view, graphs do add something explicit to the algebraic equations.

  • @potobill 2 years ago

    Is the product rule backwards in the butler/cook example?
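
For reference, the product rule is symmetric, so either ordering of the two variables is valid as long as it is applied consistently:

    p(B, C) = p(B)\, p(C \mid B) = p(C)\, p(B \mid C),

where B and C stand for the butler and cook variables (shorthand here, not the lecture's exact notation). Which factorization appears in the example just reflects the direction in which the arrow is drawn in the graph.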

  • @Nestorghh 9 years ago +4

    world-class

  • @heitorrapela 5 years ago

    Awesome lecture! :)

  • @sarthakchecker8212 1 year ago

    Nice!💯