Geoffrey Hinton and Yann LeCun, 2018 ACM A.M. Turing Award Lecture "The Deep Learning Revolution"

  • Published Jun 22, 2019
  • We are pleased to announce that Geoffrey Hinton and Yann LeCun will deliver the Turing Lecture at FCRC. Hinton's talk, entitled "The Deep Learning Revolution," and LeCun's talk, entitled "The Deep Learning Revolution: The Sequel," will be presented June 23 from 5:15 to 6:30pm in Symphony Hall.
  • Science & Technology

COMMENTS • 49

  • @muckvix 4 years ago +18

    Hinton's talk started at 10:00

  • @VietVuHunzter 5 years ago +14

    Video started at 6:26
    Geoffrey Hinton started at 16:00
    Yann LeCun started at 48:50

  • @ssureshpdx 5 years ago +4

    Inspiring lectures, interlaced with good humor. Glad to have been in the auditorium and watched it live.

  • @a.pourihosseini 4 years ago +3

    I love Geoffrey Hinton's sense of humor :DD every three minutes of the talk contain at least one joke.
    But, more to the point, I loved how they painted a very accurate "big picture" of where AI was at the time, and of its future.

  • @impolitevegan3179 5 years ago +1

    I found the most interesting part to be the Q&A; however, it was really short. Thanks for publishing this.

  • @BiancaAguglia 4 years ago +6

    1:02:45 "The brain has about 10^14 synapses and we only live for about 10^9 seconds." 😊 Of course I felt compelled to convert 10^9 seconds to years. It turns out that's about 31.7 years. Most of us live slightly longer than that. (I know Geoff and Yann were referring to the order of magnitude though. 😊)

    • @nigelwan2841 4 years ago +1

      They were comparing in terms of order of magnitude.
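
Bianca's conversion above checks out; as a quick editorial sketch (plain Python, not from the talk), the order-of-magnitude point also follows directly:

```python
# Quick check of the numbers quoted at 1:02:45:
# ~10^14 synapses, ~10^9 seconds of life.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, ~3.156e7 s

lifetime_seconds = 1e9
lifetime_years = lifetime_seconds / SECONDS_PER_YEAR
print(f"10^9 seconds is about {lifetime_years:.1f} years")  # ~31.7 years

# Order-of-magnitude ratio: synapses available per second of life
synapses = 1e14
print(f"{synapses / lifetime_seconds:.0f} synapses per second of life")  # 100000
```

The point Hinton is making is the ratio, not the lifespan: with roughly 10^5 synapses per second of experience, the brain has far more parameters than data, which is the opposite of the typical deep-learning regime.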

  • @syko1430 5 years ago +3

    1:21:24 Q&A begins

  • @vineetgundecha7872 5 years ago +11

    I guess Hinton has been using the same slides since the '90s.

    • @wentianzhao64 5 years ago +2

      Because PowerPoint wasn't invented until the '90s.

    • @zhongzhongclock 4 years ago +3

      He wants to show his points have held up since then; since the points haven't changed, why change the slides?

  • @bobertgumball1584 4 years ago

    I missed the livestream, but I'm here now!

  • @ryanmckenzie1990 4 years ago +8

    The term "exponentially" is used quadratically too often; I almost spat out my coffee.

  • @driziiD 2 years ago

    a lifetime of work, truly inspiring

  • @ContinualAI 4 years ago

    Amazing lecture!

  • @ThichMauXanh 4 years ago +2

    What did Yann say? "What's 10 to the 6 between friends?" I didn't get this joke.

    • @tedfujimoto4299 4 years ago

      He meant 10^6 (which means 1,000,000).

    • @ThichMauXanh 4 years ago

      @@tedfujimoto4299 Yeah, but what does "What's 10^6 between friends?" mean? I thought it was supposed to be a joke, and I didn't get it.

    • @tedfujimoto4299 4 years ago +13

      @@ThichMauXanh It's an instance of the saying "What's (insert item here) between friends? (idioms.thefreedictionary.com/what%27s...+between+friends%3F)". For example, someone would say "No, let me pay for the meal. What's a couple of dollars between friends?" The joke is that the item being discussed usually refers to a small quantity but Yann is asking the audience to forgive his "off-by-a-factor-of-one-million" mistake.

    • @rodrigueswilder 4 years ago +2

      He said that autonomous vehicles could drive well for 30 minutes, but they were not at human level yet because humans have 1 accident every 100 miles... then he went "oops," laughed, and said 1 million miles. So he made a mistake, and after that came the joke: what's 10 to the 6 between friends? Brilliant! hahaha

  • @impolitevegan3179 5 years ago +3

    They both look like high-level characters from The Godfather, probably because they are godfathers.

  • @xianstudio9086 5 years ago +4

    It seems that all great scholars tell great jokes...

    • @deeplemming3746 4 years ago

      This non-great scholar is the implicit joke itself!

  • @pascalbercker7487 2 years ago

    If you watch nothing else of this fantastic lecture, at least don't miss the baby orangutan's amazement at a magic trick, which shows how much they know and expect about object permanence! His reaction is absolutely priceless! (around minute 58:00)

  • @AnimeshSharma1977 4 years ago

    "There is a wonderful reductio-ad-absurdum of reinforcement learning called Deep Mind" @17:25 ;) Not sure why these guys are so skeptical of reinforcement learning given that the Synapse they mention :P

  • @daverostron8089 2 years ago +1

    Absolutely love the classic English humour lol

  • @billykotsos4642 5 years ago

    Love Monty Python !!!

  • @driziiD 2 years ago

    From this talk, AGI seems inevitable.

  • @billykotsos4642 5 years ago +4

    Seems kind of harsh to compare these NNs and their training time to that of humans. Humans have undergone many, many years of evolution to get where we are now.

    • @cajun70122 4 years ago

      It's not all that harsh, because we can observe the end product of all that evolution and try to build a thing that models what we see. Therefore the NNs do not need to struggle through the many years of evolution. Which reminds me of what LeCun said about reinforcement learning: it is slow because the learner needs to make many, many mistakes to learn the best way to act. Evolution is like reinforcement learning in that way: millions of years of trial and error to achieve what we see today. But the researchers have the benefit of seeing the result of all that trial and error, so they can skip many of the trials and aim directly at what we now see as successful.

    • @CandidDate 2 years ago

      @@cajun70122 You're in fantasyland. By that logic, a computer running one million times faster than humans, who evolved over one million years, should take exactly one year to complete its training. And yet here we are, learning Java and trying to make a computer think like a human.

  • @CandidDate 2 years ago

    Regarding balancing a pencil on its point, letting go, and which way it will fall: if the robot is sensitive enough, it could move its hand just so, thus predicting which way it will fall. The point is control: who has it, and why.

  • @PeriDidaskalou 4 years ago +1

    Reductionism is idiocy!

    • @danielalorbi 3 years ago +1

      Science, mathematics, and engineering (the most successful formal reductionist frameworks) have been very successful so far. We have good reason to believe they aren't going to suddenly fail.
      If by "idiocy" you mean it isn't the ultimate form of knowledge acquisition, then by all means propose something more effective.