Keynote Talk: Model Based Machine Learning

  • Published 10 Dec 2024

COMMENTS • 10

  • @imrannaseem6595 5 years ago +3

    At 9:08 he compares a simple problem (predicting electrical current given a certain voltage) with the much more complicated problem of image recognition. IMHO, to differentiate between computationally and statistically big data, the target problem should be the same. Take the plane recognition problem, for instance: thousands of pictures of the same type of aeroplane are computationally large but statistically insufficient, whereas a smaller number of pictures of various models may suffice statistically.

  • @jacobdavidcunningham1440 4 years ago +7

    1:30 If anybody wants this super long list lol
    generative adversarial network
    Markov random field
    K-means clustering
    Radial basis functions
    decision trees
    logistic regression
    Kalman filter
    kernel PCA
    random forest
    deep networks
    principal components
    Hidden Markov model
    convolutional networks
    support vector machines
    Gaussian mixture
    linear regression
    independent component analysis
    Gaussian process
    factor analysis
    Boltzmann machines

  • @AhmedIsam 5 years ago +4

    32:33 He said it the wrong way around; actually the red is better than the green, and that's why it is amazing.

  • @alexanderkarpenko8776 2 years ago +1

    It was unexpected to hear SVM called a "funny approach to machine learning" (14:03). Vladimir Vapnik made an exceptional contribution to the development of statistical learning theory, and I believe Professor Bishop did not mean what he said.

    • 1 year ago +1

      He deliberately used that expression because Western people can't handle the truth that, while they were still playing with the McCulloch-Pitts neuron in the 70s, Vapnik had already discovered support vector machines more than 10 years earlier.

  • @nextwave319 2 years ago

    Excellent!

  • @KristoferPettersson 6 years ago

    You say that the model of a person restricts the degrees of freedom, so that less data is needed to conclude that some data represents a person - fine. But then you say that a convolutional layer represents a model of a person, when it clearly is not. Rather, the convolutional layer has parameters which change iteratively according to an algorithm so as to converge towards a structured representation of a person. Clearly the product is the model we're after, and the convolutional layer is more like a 'meta-model'. It seems reasonable that the more degrees of freedom this meta-model allows, the more kinds of models it can derive, right?
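    A minimal sketch (assuming PyTorch; the architecture and the random stand-in data below are hypothetical) of the distinction drawn above: the layer definitions play the role of the 'meta-model', fixing the available degrees of freedom, while the iteratively updated weights are the model that training actually produces.

    ```python
    import torch
    import torch.nn as nn

    # "Meta-model": the architecture fixes which functions are representable.
    detector = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),  # 3-channel input, 8 filters
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 1),                            # scalar "person?" score
    )

    # Hypothetical stand-in data: random tensors in place of labelled images.
    images = torch.randn(16, 3, 32, 32)
    labels = torch.randint(0, 2, (16, 1)).float()

    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.SGD(detector.parameters(), lr=0.1)

    # The iterative parameter updates the comment describes: after training,
    # the learned weights, not the layer types, encode whatever notion of
    # "person" the data supports.
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(detector(images), labels)
        loss.backward()
        opt.step()
    ```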

  • @bvdtrading 5 years ago +1

    Why is he holding water for the whole talk? lol

    • @JohnWick-el6ck 4 years ago

      His concentration on the topic made him forget about the bottle.

    • @diarandor 4 years ago +1

      Because he's exploring uncharted waters. XD