This is a really high quality video, on par with 2 minute papers but with a more detail oriented approach. Also you have a lovable vibe king, keep it up
I used to love 2 Minute Papers. But it's become very repetitive now, and just too fluffy. Probably I'm not in the target audience anymore.
I absolutely hate 2 minute papers. It's all hype and no substance. I physically cringe every time I hear the guy say "now hold onto your papers everybody! this is gonna be crazy!" and then he tells you the most boring anti-climactic shit possible.
Yeah, but how come your stinky doo doo though…
@@herpderp728 Yeah, but how come your stinky doo doo though…
Edan bro makes my dopamine policy gradients high every time. Fingers crossed we get open RL foundation models.
Just give this environment to speed runners, watch the true potential of what humans can do with games.
Thanks for the video!
Another way to frame the problem of neural network representations becoming “too specific” to learn new tasks at 25:59 is to consider exactly how the gradient of weights is computed.
It’s the matrix multiplication between the directional error after a layer and the directional values before the layer. When the values become totally orthogonal to the error (they contain no information relative to the error), then it’s impossible to reduce the error by changing the weights in that layer.
The reason weight randomization helps with this problem is that it introduces new values after the randomized layer. However, a much more efficient way to do this is to instead compress the existing weights in a layer with linear regression over a representative sample of data, to “pack” the good information into fewer existing neurons. Then you’re free to randomly initialize the remaining neurons, or better yet, to initialize weights that produce values already aligned with the directional error! I’ve got some ongoing research in this area if anyone is interested in collaborating. 🤓
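The orthogonality argument above can be sketched numerically. A toy example, assuming a plain linear layer whose weight gradient is the post-layer error matrix-multiplied with the pre-layer values, summed over a batch:

```python
import numpy as np

# Toy batch: 2 samples, a layer with 3 inputs and 2 outputs.
# The weight gradient is (error after layer)^T @ (values before layer),
# summed over the batch: grad_W[j, k] = sum_b delta[b, j] * x[b, k].
x = np.array([[ 1.0,  2.0, -1.0],    # pre-layer values, sample 1
              [ 1.0,  2.0, -1.0]])   # sample 2 (same values)
delta = np.array([[ 0.5, -0.3],      # post-layer error, sample 1
                  [-0.5,  0.3]])     # sample 2 (exactly opposite error)

grad_W = delta.T @ x   # shape (2, 3)
print(grad_W)          # all zeros: along the batch axis every error
                       # column is orthogonal to every value column,
                       # so no change to W can reduce the error
```

Here the values carry no information relative to the error, so the gradient vanishes even though the error itself is nonzero, which is the stuck state described above.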
sounds pretty badass. might be easier to do a backward pass through lin-reg as well
I'd be interested! How do I get in contact?
@@jadenlorenc2577 my UA-cam profile has links to different places, whatever is easiest for you!
amazing breakdown, thank you for making this paper accessible to me!
At 7:10, the first pronunciation of Muesli is right. It's German Müsli; "Muesli" may be the Swiss German spelling.
Thanks for your videos, but at 7:44: EfficientZero and MuZero do not reconstruct the raw observation/image. MuZero learns its latent representation based on value equivalence only, while EfficientZero also cares about temporal consistency, so it takes the next observation to supervise the representation and dynamics parts of the model in a self-supervised manner (SimSiam).
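The temporal-consistency idea can be sketched with toy vectors (this is an illustrative sketch of a SimSiam-style loss, not the actual EfficientZero code):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy latents: the dynamics model predicts the next latent state,
# and a target encoder embeds the actual next observation.
predicted_next_latent = np.array([0.9, 0.1, 0.0])
target_next_latent = np.array([1.0, 0.0, 0.0])  # treated as a constant
                                                # (stop-gradient in SimSiam)

# Temporal-consistency loss: negative cosine similarity, so minimizing
# it pulls the predicted latent toward the true next observation's latent
loss = -cosine_sim(predicted_next_latent, target_next_latent)
print(loss)  # near -1.0 when the prediction matches the next latent
```

The key point is that the supervision signal comes from the next observation itself, with no reward or value label needed.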
Sounds like RL is progressing? Maybe I should jump back in!
Since wandb doesn’t work for me, I will actually try ClearML thanks to you.
Been thinking about this for some time
My 2nd petition on this matter. Please make a video of how you read and implement papers. Thank you **kiss**
Still considering. Part of the issue is that every paper is so different when it comes to this, and a lot of the background is going to depend on the paper. Still might try, as I guess I can maybe extract some general guidelines from my process.
@@EdanMeyer Where to start would be a pretty good help
Do you think the approaches here could be applied to Dreamer V3?
Coffee is culture too!
I wonder if there is any benefit to be had at all from, across multiple full training iterations, distilling a large model into a smaller one and then distilling the small one back into a larger one (vs. *just* repeatedly distilling a large model into a model of the same size).
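The large → small → large cycle can be sketched as a toy, using least-squares linear "models" in place of neural nets (all names here are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8)  # synthetic targets

def fit(features, targets):
    """Least-squares 'model': just a weight vector."""
    w, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return w

# 'Large' teacher trained on all 8 features
teacher = fit(X, y)

# Distill large -> small: the student sees only 4 features and is
# trained on the teacher's outputs rather than the raw labels
small = fit(X[:, :4], X @ teacher)

# Distill small -> large: a fresh full-capacity model trained on
# the small student's outputs, completing one cycle
large_again = fit(X, X[:, :4] @ small)
print(large_again.shape)  # (8,)
```

Whether the bottleneck pass acts as useful regularization, versus just losing information relative to same-size distillation, is exactly the open question in the comment.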
22:55 uhh 5 x 300 isn't 1800 lmao
I really liked vscode theme on the clear ml section. Can you share it?
Community Material Theme ocean high contrast
7:10 Myu-slee. It's a quick, easy and tasty breakfast so that you too, can be reinforced!
Lmao I don’t think I could have been any further from the mark
@@EdanMeyer no worries -- it was incredibly entertaining XD
Really love it !
I wonder if you could train a model that could beat a human at Rock Paper Scissors, but with retained memory over a best of 7 or so. That would only require training on episodes of human behavior, which would be hard to acquire. But if this were possible with synthetic games, it would be the best party trick ever.
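A classic non-neural baseline for exactly this idea is a Markov predictor that remembers the opponent's move transitions across rounds and plays the counter to the predicted move. A minimal sketch (hypothetical class, not from the video):

```python
import random
from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class MarkovRPS:
    """Predict the opponent's next move from their previous move,
    using transition counts remembered across rounds."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None  # opponent's last observed move

    def play(self):
        if self.prev is None or not self.counts[self.prev]:
            return random.choice(list(BEATS))  # no memory yet
        # Most likely next move given the opponent's last move
        predicted = max(self.counts[self.prev], key=self.counts[self.prev].get)
        return BEATS[predicted]

    def observe(self, opponent_move):
        """Update transition memory after each round."""
        if self.prev is not None:
            self.counts[self.prev][opponent_move] += 1
        self.prev = opponent_move
```

Humans are reliably non-random, so even this first-order memory tends to edge ahead over a best of 7; a learned model would just be a stronger version of the same trick.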
Why did they have to choose the same name as the Ada programming language ._.
They did the same thing with MLKit, which was a suite of tools for the ML (Meta Language) programming language, which Google decided should instead be a machine learning kit
I’m pretty sure every short name in ML papers shares a name with something else at this point lol
If you're ever interested in collaborations, let me know. I'd love to have you on my newsletter to cover some of your most interesting ideas.
Good stuff
Muesli is pronounced "MEW-zlee" HTH
The hell, we have the same name!
"ADA" and "Muesli"
Thought this was about the cardano ecosystem. lol
An army of GPUs? Time to break open the piggy bank.
Wow x)
I worry that independent agents will make mistakes faster than we can realign their goals.
20:25 I laughed
I cried
I sobbed
I can't even train CIFAR-10 in 15 mins
Like for a cultured matcha enjoyer
first
AGI is easy. Just build a neural network that takes in input, and puts out an output.
Yup, it's just a bunch of keystrokes in the right order. So ez