Holy moly the scale of this data set...
For a moment I thought this was about Stanford Linear Accelerator Center... Who does naming like that!? The acronym should have been SLAD because the full name is *A Sparsely Labeled ACtions Dataset.*
Lucas truly is living up to his given name meaning "bright" 😁
I know one day he will have to explain all the new papers to his father (me). Until then, I'll do my best!
When will dataset creation become a primary application for NNs?
inception :D
Could be useful for YouTube to skip misleading parts that have nothing to do with the title of the video.
Nice!
So wait, are you saying the dataset is useful for past tasks it wasn't designed for? Or rather that this is a new dataset with far higher quality resulting in these kinds of performance boosts?
Ah, that makes sense
It ALSO sounds like when you do transfer learning from networks that had better (corner-case) data, those networks are better at the new task. (2:55)
Kram1032 what do you mean by corner case?
Julien mentioned corner cases, not me, but I think they meant like, people doing some task that you might not expect based on the surroundings. Some examples of that were mentioned in the video.
Great!
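The transfer-learning point a few comments up can be sketched in code. This is a minimal numpy toy, not the paper's actual setup: the logistic-regression model, the synthetic "source"/"target" tasks, and all names here are made up for illustration. The idea is just that warm-starting from weights pretrained on a related, data-rich task is often a better starting point than random initialization when the new task has few labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, steps=200, lr=0.5):
    """Gradient-descent logistic regression; w is an optional warm start."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on the log-loss
    return w

# "Source" task: plenty of labeled data.
X_src = rng.normal(size=(500, 10))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(float)
w_src = train_logreg(X_src, y_src)

# "Target" task: related labels, but only a few examples (sparse labels).
X_tgt = rng.normal(size=(20, 10))
y_tgt = (X_tgt[:, 0] + 0.8 * X_tgt[:, 1] > 0).astype(float)

# Transfer: warm-start from the source weights instead of training from scratch.
w_transfer = train_logreg(X_tgt, y_tgt, w=w_src.copy(), steps=20)
w_scratch = train_logreg(X_tgt, y_tgt, steps=20)
```

With only 20 target examples and 20 steps, the warm-started model starts near a good solution, which is the intuition behind pretraining on a better-curated dataset helping downstream tasks.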
You mention they discarded >100K samples due to duplicative video content with different voice annotations. Did they do anything with those voice annotations? Seems like a cheap superproject for someone to take the results of the video filtering and add metadata harvesting of other types to the new neural network.
So the answer to one shot learning is using super curated data compiled by ever more clever layers of classifiers.
(jk)
Cool stuff!
Cool
Karoshow Knife-a-hair
A formidable attempt. :)
I've seriously had to rewind videos while reading your name to try to piece out what the heck it is :D
-Károly Zsolnai-Fehér-
*Karoly Yon-Haifa* here...
car or joy knife a hair
So, are these "annotations" helping algorithms classify images? If so, that's cheating!
frank x typically models learn from a training set and are evaluated on a test/validation set. Labels are only given to the model during training.
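That split is easy to show concretely. Below is a minimal sketch of the protocol using a toy nearest-centroid classifier on made-up data (the model and data are illustrative, not anything from the video): training labels fit the model, while test labels are used only after prediction, to score it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated Gaussian blobs.
X = np.concatenate([rng.normal(-2, 1, size=(100, 2)),
                    rng.normal(+2, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Random train/test split.
idx = rng.permutation(200)
train, test = idx[:150], idx[150:]

# Training: labels ARE used here, to fit the model (class centroids).
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Evaluation: the model never sees test labels; it only predicts from inputs.
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

# Test labels enter only afterwards, to score the predictions.
accuracy = (pred == y[test]).mean()
```

So the "annotations" aren't cheating: they define the task during training, and at evaluation time they are only the answer key, not an input.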
Early again
Great!