SLAC Dataset From MIT and Facebook | Two Minute Papers

  • Published 19 Nov 2024

COMMENTS • 26

  • @TheReficul 6 years ago +2

    Holy moly the scale of this data set...

  • @deep.space.12 6 years ago +1

    For a moment I thought this was about Stanford Linear Accelerator Center... Who does naming like that!? The acronym should have been SLAD because the full name is *A Sparsely Labeled ACtions Dataset.*

  • @MikeTrieu 6 years ago +2

    Lucas truly is living up to his given name meaning "bright" 😁

    • @ikerclon 6 years ago +2

      I know one day he will have to explain all the new papers to his father (me). Until then, I'll do my best!

  • @killroy42 6 years ago +2

    When will data set creation become a primary application for NNWs?

  • @anastasiadunbar5246 6 years ago +1

    Could be useful for YouTube to skip misleading parts that have nothing to do with the title of the video.

  • @Kram1032 6 years ago +4

    Nice!
    So wait, are you saying the dataset is useful for past tasks it wasn't designed for? Or rather that this is a new dataset with far higher quality resulting in these kinds of performance boosts?

    • @Kram1032 6 years ago

      Ah, that makes sense

    • @AySz88 6 years ago +1

      It sounds like ALSO, when you do transfer learning from networks that had better (corner-case) data, those networks are better at the new task. (2:55)

    • @ahmedmazari9279 6 years ago

      Kram1032, what do you mean by "corner case"?

    • @Kram1032 6 years ago

      Julien mentioned corner cases, not me, but I think they meant like, people doing some task that you might not expect based on the surroundings. Some examples of that were mentioned in the video.

  • @Ronnypetson 6 years ago

    Great!

  • @johnhall5427 6 years ago

    You mention they discarded >100K samples due to duplicative video content with different voice annotations. Did they do anything with those voice annotations? Seems like a cheap superproject for someone to take the results of the video fidgeting and add metadata harvesting of other types to the new neural network.

  • @oncedidactic 6 years ago

    So the answer to one shot learning is using super curated data compiled by ever more clever layers of classifiers.
    (jk)
    Cool stuff!

  • @andrestifyable 6 years ago

    Cool

  • @ExhaustedPenguin 6 years ago +10

    Karoshow Knife-a-hair

    • @TwoMinutePapers 6 years ago +10

      A formidable attempt. :)

    • @robertvralph 6 years ago

      I've seriously had to rewind videos while reading your name to try to piece out what the heck it is :D

    • @HiAdrian 6 years ago

      -Károly Zsolnai-Fehér-
      *Karoly Yon-Haifa* here...

    • @romsthe 6 years ago

      car or joy knife a hair

  • @frankx8739 6 years ago

    So, are these "annotations" helping algorithms classify images? If so, that's cheating!

    • @russellcox3699 6 years ago +1

      frank x Typically, models learn from a training set and are evaluated on a test/validation set. Labels are only given to the model during training.

  • @ToriKo_ 6 years ago

    Early again
