Recurrent Neural Networks (RNNs), Graph Neural Networks (GNNs), Long Short-Term Memory (LSTMs)

  • Published 7 Feb 2025

COMMENTS • 8

  • @donaghegan5170 · 3 years ago · +1

    At 53:55 there is a lot of information on the slide, and without a pointer, it is difficult to grasp what you are saying. I think less information-dense slides would help.

  • @miguelamaral9642 · 3 years ago

    Anybody else have the same YouTube recommendations as Manolis? 47:25 Must be a good sign.

  • @nunocalaim3184 · 3 years ago · +1

    I have no idea why Manolis keeps saying random numbers every time he asks whether the audience is able to follow...
    I guess that to infer what these hidden variables are, I need to build a hidden Markov model where the observables are the numbers he says and the input is the last slide of the presentation.

    • @ManolisKellis1 · 3 years ago · +4

      It's the number of 5, 4, 3, 2, 1 answers ;-)

  • @theworldsonfire.4091 · 3 years ago

    Where is lecture three?

    • @theworldsonfire.4091 · 3 years ago

      Dave, I’m losing my mind.

    • @ManolisKellis1 · 3 years ago · +3

      @@theworldsonfire.4091 Daisy, Daisy, give me your answer do. It's posted now, after lectures one and two.

    • @theworldsonfire.4091 · 3 years ago

      @@ManolisKellis1 🙂 Perfect, I'll look again!
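
The hidden Markov model joked about in @nunocalaim3184's comment can actually be sketched. Below is a toy Viterbi decoder in Python; the hidden states ("following"/"lost"), the binned observations ("low"/"high" counts of raised hands), and every probability are invented purely for illustration and are not taken from the lecture:

```python
import math

# Hypothetical hidden states: is the audience following the lecture or lost?
states = ["following", "lost"]

# Made-up model parameters (all values are illustrative assumptions).
start_p = {"following": 0.6, "lost": 0.4}
trans_p = {"following": {"following": 0.7, "lost": 0.3},
           "lost":      {"following": 0.4, "lost": 0.6}}
# Observables: the numbers the lecturer calls out, binned into "high"
# (mostly 5s and 4s) vs "low" (mostly 2s and 1s).
emit_p = {"following": {"low": 0.2, "high": 0.8},
          "lost":      {"low": 0.9, "high": 0.1}}

def viterbi(obs):
    """Return the most likely hidden-state path for an observation sequence."""
    # Log-probabilities of the best path ending in each state at t=0.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []  # back-pointers for path reconstruction
    for o in obs[1:]:
        prev, col, ptr = V[-1], {}, {}
        for s in states:
            # Best predecessor state for landing in s at this step.
            best = max(states, key=lambda p: prev[p] + math.log(trans_p[p][s]))
            ptr[s] = best
            col[s] = (prev[best] + math.log(trans_p[best][s])
                      + math.log(emit_p[s][o]))
        V.append(col)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["high", "high", "low", "low"]))
# -> ['following', 'following', 'lost', 'lost']
```

With these (invented) parameters, a run of "high" answers followed by a run of "low" answers decodes to the audience starting out following and then getting lost, which is the inference the comment is teasing about.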