Geoff Hinton - Recent Developments in Deep Learning

  • Published 27 Dec 2024

COMMENTS • 18

  • @fernandodoria8717
    @fernandodoria8717 2 months ago +1

    Nobel Prize in Physics in 2024, congratulations Dr. Hinton!

    • @oimrqs1691
      @oimrqs1691 5 days ago

      And he ends this talking about language models. Legendary.

  • @Brguggyu240
    @Brguggyu240 11 years ago

    Sounds like he's either referring to the initialization of the weights before training (which he goes into further depth in his Coursera neural nets course), or layer-wise pretraining as described/summarized in "Exploring Strategies for Training Deep Neural Networks" (Larochelle, Bengio, Louradour, Lamblin 2009), for the purpose of avoiding local optima and improving efficiency.
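
    A minimal numpy sketch of what that greedy layer-wise pretraining looks like, in the spirit of the Larochelle et al. paper cited above. The layer sizes, learning rate, and synthetic data are illustrative assumptions, not details from the talk: each tied-weight autoencoder learns to reconstruct the codes of the layer below, and its weights then initialize one layer of the deep net.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def pretrain_layer(X, n_hidden, lr=0.5, epochs=100):
        """Train one tied-weight autoencoder on X; return encoder weights."""
        n, n_visible = X.shape
        W = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # small random init
        b_h, b_v = np.zeros(n_hidden), np.zeros(n_visible)
        for _ in range(epochs):
            H = sigmoid(X @ W + b_h)           # encode
            R = sigmoid(H @ W.T + b_v)         # decode with the same (tied) W
            dZv = (R - X) * R * (1 - R)        # grad at decoder pre-activation
            dZh = (dZv @ W) * H * (1 - H)      # grad at encoder pre-activation
            W -= lr * (X.T @ dZh + dZv.T @ H) / n  # both uses of the tied W
            b_h -= lr * dZh.mean(axis=0)
            b_v -= lr * dZv.mean(axis=0)
        return W, b_h

    # Stack layers greedily: each autoencoder trains on the codes of the one
    # below; the learned weights initialize the deep net before fine-tuning.
    X = rng.random((256, 64))                  # stand-in for real training data
    inp, weights = X, []
    for size in (32, 16):                      # illustrative layer sizes
        W, b = pretrain_layer(inp, size)
        weights.append((W, b))
        inp = sigmoid(inp @ W + b)             # codes become next layer's input
    ```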

  • @roelbakker9238
    @roelbakker9238 11 years ago +1

    This is a fascinating presentation! It is not only interesting from a technical (machine learning) point of view, but it also shows a glimpse of future intelligent computer systems....

    • @fernandodoria8717
      @fernandodoria8717 2 months ago

      Nobel Prize in Physics in 2024, congratulations Prof. Hinton!!

  • @ThaFacka
    @ThaFacka 10 years ago +1

    56:30: the answer is already here. Cortical processing is done the right way, the way NuPic does it, and in terms of natural language processing, cortical.io.

  • @anthonyproschka2047
    @anthonyproschka2047 10 years ago

    Does anybody have an idea of how to interpret the features the networks learn? For example, can you interpret them as logical expressions (negations, conjunctions, disjunctions of basic input features), or as something else?

    • @MobyMotion
      @MobyMotion 9 years ago +1

      +Anthony Proschka You might find this interesting: if you scroll down to "Visualizing the predictions and the 'neuron' firings in the RNN", he talks about neurons he found in one of his nets that fire at specific times, like at the end of sentences. Very fascinating:
      karpathy.github.io/2015/05/21/rnn-effectiveness/
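
      For anyone curious, here is a toy version of the kind of inspection that post describes: step a character-level RNN over text and watch a single hidden unit's activation at each character. The weights here are random placeholders (in the post the net is trained first, which is what makes individual units interpretable), and the watched unit index is arbitrary.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      text = "The net fires here. And resets here."
      chars = sorted(set(text))
      idx = {c: i for i, c in enumerate(chars)}

      n_hidden = 16
      Wxh = rng.normal(0, 0.1, (n_hidden, len(chars)))  # input-to-hidden
      Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))    # hidden-to-hidden
      bh = np.zeros(n_hidden)

      h = np.zeros(n_hidden)
      unit = 3                                 # arbitrary unit to watch
      for c in text:
          x = np.zeros(len(chars))
          x[idx[c]] = 1.0                      # one-hot character input
          h = np.tanh(Wxh @ x + Whh @ h + bh)  # vanilla RNN step
          # In a trained net, some units track things like "inside quotes"
          # or "just past a period"; here we only show the mechanics.
          print(f"{c!r}: {h[unit]:+.2f}")
      ```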

    • @anthonyproschka2047
      @anthonyproschka2047 9 years ago

      +Moby Motion Hi Moby, thanks for the link. It is indeed very interesting how certain neurons seem to handle certain structures in the text! I still find it noteworthy that neural nets are, in a sense, black boxes not entirely comprehensible to humans. It is very hard to decipher the regularities they find in data.
      Some further thoughts:
      - I have always been fascinated by the idea of building a machine that does (empirical) scientific research on its own. If you think about it, all major scientific insights (i.e., about causalities in the real world) fundamentally depended on some intuitive, if not random, brainwave by a researcher (e.g., as a consequence of an apple hitting Newton's head). In my opinion, scientific progress doesn't need to depend on this randomness. With today's and the near future's data and computing power, we should be able to build machines that do nothing but search for new insights (= regularities in the data). Of course, we need a very good representation of the real world in our data, AND we need algorithms that are able to create their own abstract features. That's why I was asking whether we can understand the process of abstraction that happens as you go up the layers of an artificial neural network.
      - AI achieved through deep learning (= deep neural networks) will not be comprehensible to humans, even though we built it ourselves. It's just not feasible for us to quickly understand what happens within a neural network. The moment we let artificial neural networks make decisions that have real-world impact, we can no longer fully understand those decisions.
      - Another question about the type of regularities ANNs find: is there any guarantee that NNs will find the simplest explanation of the data? This has to do with overfitting and Occam's razor... Also, how can we achieve the inductive one-shot learning that humans manage (where they induce regularities from just one observation)?
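
      On the original question of interpreting learned features, one common technique (offered here as an illustration, not something from the thread) is activation maximization: gradient ascent on the input to find the pattern that most excites a chosen hidden unit. The weights and sizes below are made up for the sketch.

      ```python
      import numpy as np

      rng = np.random.default_rng(2)
      W = rng.normal(0, 0.1, (8, 25))   # pretend first-layer weights: 8 units, 5x5 inputs
      b = np.zeros(8)

      x = rng.normal(0, 0.1, 25)        # start from a small random input
      u = 0                             # the unit whose feature we want to see
      for _ in range(200):
          a = np.tanh(W @ x + b)
          grad = (1 - a[u] ** 2) * W[u]   # d(activation of unit u) / d(input)
          x += 0.1 * grad                 # gradient ascent on the input
          x /= max(np.linalg.norm(x), 1.0)  # keep the input bounded

      # For a first-layer unit this recovers (roughly) its weight vector,
      # i.e. its preferred stimulus; deeper units need the same idea applied
      # through more layers.
      print(x.reshape(5, 5).round(2))
      ```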

  • @SomeInfo-ib3wz
    @SomeInfo-ib3wz 11 years ago

    Some of it may be over my head, but very interesting stuff here.

  • @tusharsharma8023
    @tusharsharma8023 10 years ago

    Can someone please post the papers he refers to in the talk?

    • @fernandodoria8717
      @fernandodoria8717 2 months ago

      Nobel Prize in Physics in 2024, congratulations Prof. Hinton!!

  • @bipolar3372
    @bipolar3372 10 years ago

    "If you don't speculate, you can't accumulate": proverbial saying, mid-20th century, meaning that outlay (and some degree of risk) is necessary if real gain is to be achieved.

  • @SweetHyunho
    @SweetHyunho 10 years ago

    Amazing ending. I think it has a point because the net result of all human activity is the persistence of the human race. Not that the millions of model neurons reasoned that out.

  • @kkochubey
    @kkochubey 10 years ago

    I wish I knew all this stuff.

  • @KaplaBen
    @KaplaBen 11 years ago

    Hinton starts talking at 0:41.

  • @NielsStender
    @NielsStender 9 years ago

    Random forest comment at 32:20.

    • @GmailUnited
      @GmailUnited 9 years ago

      +Niels Stender Excuse me? How dare you bother those.