Much more interesting listening to Zuck talk about tech and not marketing bs.
Right, it's amazing to hear him talk about tech.
He's very, very intelligent, and I'd much rather hear him talk about tech, since he understands what's going on in the market today and the future of AI.
He's obviously good with marketing too, but that's not what got him here. At the root he is a programmer!
Sounds a bit like marketing to me. The empirical evidence doesn't quite match what they make it out to be, IMHO.
It seems like this interviewer is asking better questions than some of the biggest tech podcasters out there. I'll have to watch more.
Dwarkesh Patel is great. I can't watch Lex Fridman anymore.
@@bobbyc1120 What is wrong with Lex Fridman?
@@dumdum407 Lex Fridman lies about the MIT scientist thing and advertises himself as an intellectual, which is far from his actual accomplishments.
@@dumdum407 When it comes to AI he actually has a very surface-level understanding; let me know if you want exact examples. But good on him for leveraging one guest into the next all the way up the chain - can't hate on that hustle.
@@world-top0 Meanwhile we are watching some random guy interview Zuck. Ok, mate.
I am super sure that 99% of big tech CEOs can't discuss this in this much detail. Only very, very few CEOs know the tech.
Using synthetic data can be interpreted as a form of model smoothing. Based on a shallow analysis of the currently available information, it might help to stabilize the gradients during training.
very nicely put
I'm glad they tuned up Zuck's algo. He is definitely climbing out of the Uncanny Valley.
😂
Wouldn't training on too much synthetic data produce an effect similar to overfitting the dataset?
If you don't mix with real data, you get model collapse. Some new research explores these real-to-synthetic ratios.
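For intuition on that real-to-synthetic ratio point, here is a minimal toy sketch (my own illustration, not taken from any particular paper). The "model" simply memorizes its training set and generates by resampling it with replacement; the count of distinct values stands in for the long tail of diversity that collapse erodes. The mix_ratio knob is hypothetical.

# Toy model-collapse illustration: iterating "train on your own samples"
# loses diversity; mixing real data back in each generation slows the loss.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(size=5_000)            # stand-in for the real data distribution

def distinct_after(mix_ratio, generations=30):
    data = real.copy()
    n = len(data)
    for _ in range(generations):
        synthetic = rng.choice(data, size=n, replace=True)       # "generate" from the current model
        k = int(mix_ratio * n)                                   # real examples mixed into the next round
        data = np.concatenate([rng.choice(real, size=k, replace=False),
                               synthetic[:n - k]])
    return len(np.unique(data))                                  # surviving diversity

print("distinct values, 0% real :", distinct_after(0.0))   # shrinks sharply as generations accumulate
print("distinct values, 25% real:", distinct_after(0.25))  # retains far more distinct values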
I don't think so. It would introduce bias towards thinking like previous language models, but it wouldn't be overfitting. Overfitting is when you fit too much to a "quirky" dataset, drawing conclusions from tiny amounts of data. In this context, overfitting would be if we went over the same dataset dozens of times (dozens of epochs), until the quality of the model's outputs started to decline.
@@alexanderbrown-dg3sy But won't this depend on what kind of synthetic data it is? Isn't there an infinite number of possible kinds of synthetic data, depending on the process used to create it?
@@GlennGaasland I can't remember the name of the paper off the top of my head, but the research showed that synthetic data doesn't retain the long tail of the data distribution (its entropy). You're right, though, that the type of synthetic data matters. The only way I see pre-training models on entirely synthetic datasets is once we have models that can produce outputs beyond human capability, because then they could produce higher-quality data than humans can. I always thought, and still believe, that a hybrid is the optimal solution for now. Conversely, we know that with GPT-4-level models, and now Llama-3-70B models 😂, we can exploit extended test-time compute to emulate models with much larger capacity and produce expert-level output, so that could work in a generation scheme, potentially. But you're talking about potentially hundreds of thousands or millions of outputs for every piece of kept data. If we had more efficient models (FFF networks, structured sparsity, etc.) and more efficient inference, this could possibly work at scale.
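The "generation scheme" being described is roughly best-of-N sampling: spend extra inference compute to draw many candidates and keep only the top-scoring ones as synthetic training data. A minimal sketch of the shape of it, where generate() and score() are hypothetical stand-ins for a base model and a quality/reward model, not real APIs:

# Best-of-N style synthetic data generation: oversample, then filter by score.
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 64,
              keep: int = 1) -> List[str]:
    """Draw n candidate completions and keep the top-`keep` by score."""
    candidates = [generate(prompt) for _ in range(n)]            # the expensive inference step
    ranked = sorted(candidates, key=lambda c: score(prompt, c), reverse=True)
    return ranked[:keep]

# Usage sketch: this is where the "hundreds of thousands of outputs per kept
# example" cost comes from -- n generations for every retained training pair.
# synthetic_pairs = [(p, best_of_n(p, generate, score, n=256)[0]) for p in prompts]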
The synthetic data will still be useful data.
Like, some writers complain about their books being fed to these models.
But you can instead have 50 intelligent analyses of that book fed in, without ever feeding in the actual book.
You could sharpen learning on topics before feeding it in. You can restructure the data so that questions better mirror more intelligent responses.
The basic data on the internet is not organized around what LLMs need to do... So, synthetic all the way!
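A rough sketch of what "feeding analyses instead of the book" could look like in practice. Here llm() is a placeholder for any text-generation call, and the prompt templates are purely illustrative assumptions:

# Restructure raw source text into synthetic study material instead of
# training on the source verbatim. llm() is a hypothetical stand-in.
from typing import Callable, Dict, List

ANALYSIS_PROMPTS = [
    "Summarize the main argument of this passage:\n{chunk}",
    "Write three question/answer pairs a careful reader could answer from this passage:\n{chunk}",
    "Explain the key ideas in this passage to a beginner, in your own words:\n{chunk}",
]

def restructure(chunks: List[str], llm: Callable[[str], str]) -> List[Dict[str, str]]:
    """Turn raw text chunks into (instruction, response) style records."""
    records = []
    for chunk in chunks:
        for template in ANALYSIS_PROMPTS:
            prompt = template.format(chunk=chunk)
            records.append({"instruction": prompt, "response": llm(prompt)})
    return records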
That’s the worry but from what I understand OpenAI did some tests last year and determined simulated data can take us a lot further than we thought.
1:02 "Inference generating synthetic data to then go feed into that model"..... The simplest and most efficient way of doing this is through simulation.
Therefore, we're already in one of these "inference generating" simulations.
oh no
“Artificial Intelligence” eating “Synthetic Data”…. Interesting times.
If you've trained models, you can tell this doesn't really work as well as you'd think. In fact, it ends up introducing a lot of hallucinations when a model is fed data that was generated by itself or by another AI. We're not anywhere close to this really working, or at least working in a way where we as a species won't be able to tell the difference.
We don't have enough intelligence, and our data is also a privacy issue.
@@rodrigobarraza That's only if the data fed to the AI is not filtered. High-quality data is high-quality data.
@@GodbornNoven Nope, I'm saying all generated data is bad. Anything under 99.9% accuracy will fall off so quickly, and current models aren't even above 90% accuracy.
@@rodrigobarraza Exactly. It causes chaos in control systems and blind spots in test scripts, so what makes you think AI won't have the same problem? I like the term "hallucinations" because it describes it well.
Zuck is getting better and better in speaking the human language. Impressive!
Zuckerborg's hair plugs are coming in nicely.
The answer is no it can't, at least not in any meaningful way. You can't make a leap from inductive reasoning to abductive reasoning with better inductive reasoning.
But the main problem is generating *meaningful* synthetic data. Because bad data may lead the model to collapse.
Synthetic data won’t be seen as that impressive in 10 years time. New architectures and perhaps things like agency in the world will be what makes a bigger difference
Inference energy cost isn't the problem; it's training. Inference has been solved. Look up LPUs and Groq. And no, not X's Grok, but Groq.
Wow! Great stuff! Keep in mind that synthetic data from RLAI is what's most important. LLMs have no trouble creating training data from reasoning about chat history.
Is the whole interview available somewhere?
I would assume so, since the data they train on is curated, and that's the same idea as (though not synonymous with) reorganized data.
What's possible is to fine-tune using synthetic data. At some point your accuracy stops improving, at which point you need more real data.
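A minimal sketch of that stopping rule, under loose assumptions: train_on(), evaluate(), and next_synthetic_batch() are hypothetical stand-ins for your fine-tuning step, held-out eval, and synthetic-data generator, and the threshold/patience numbers are arbitrary:

# Keep fine-tuning on synthetic batches only while held-out accuracy improves;
# once it plateaus, stop and go collect real data instead.
def finetune_until_plateau(model, next_synthetic_batch, train_on, evaluate,
                           min_gain=0.002, patience=3):
    best_acc = evaluate(model)
    stale = 0
    while stale < patience:
        train_on(model, next_synthetic_batch())
        acc = evaluate(model)
        if acc > best_acc + min_gain:
            best_acc, stale = acc, 0       # still gaining from synthetic data
        else:
            stale += 1                     # no meaningful improvement this round
    return best_acc                        # plateau reached: time for real data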
Though model collapse can happen too, with too much synthetic data in the training loop.
One of the guys in the last episode claimed that models were under-parameterized...
If you’re a smart CEO you’ll tell people synthetic data is the way to go, simply because it definitely isn’t
Yeah that was a surprising creative leap
Sam Altman next?
Nice closing thought Dwarkesh... We are definitely in uncharted territory in terms of geopolitics and AI.
You know you're a tool when you make Zuck seem human and reasonable
„These models… They just wanna learn“ - Ilya Sutskever
Feel the AGI…
Synthetic worlds next, 3D simulations of life like the Sims
Just like humans need both imagination and real-world experience to thrive, the ideal scenario for LLM development likely involves a combination of both synthetic and real-world data
You know where there's plenty of data? The actual universe. POV multimodal data from smart glasses would be useful, I would think. Domestic humanoid robots, eventually. Associate morphemes with bundles of sensory data at a deep level.
At the expense of users, of course.
Fearing China seeing the models assumes that they don't have more AI researchers than we do, and that they aren't training more.
The thing with Chinese government is that at least for things that are obviously important, they are perfectly capable of dedicating oceans of resources on their own.
The working assumption should be that they surpass us in AI, no matter what we do.
And that it then becomes a question of "Do we have access to their models?"
Which, in an unfriendly environment, we might not.
China is always imitating, not innovating.
The key factor here is compute, which China simply doesn't have access to (due to embargoes, etc.). If they can crack that, then the US is in for a big shock. Until then, the US will have the lead.
bruh, if you think China's models will ever surpass America's you're out of the loop.
That is absolute rubbish. @@mahavakyas002
@@greenbeans7573really? 😂😂😂
I enjoyed this. This self-directed evolution must be baked into future models to perform a kind of artificial selection, which can lead into very open waters. Again, I'm amped up about seed-improver architectures, but maybe it's the Brooklynite in me: I do not want RSI to reach parity. I don't. Not trying to be cynical here. Love tech. Just have concerns here.
You can take a million monkeys and try all kinds of methods to teach them, but they still won't have the brain of a human. Human brains aren't substantially larger. The gap in sophistication between human and primate brains can't really be described by the difference in stimuli alone. It's a difference in the structure of the brains that allows us to more effectively chunk our understanding of stimuli and self-correct. It's a change in architecture that's going to bridge the gap with AI. More efficient ways to model the brain as opposed to more efficient GPU architectures.
You're just engineering the outcome you want at that point. I'm not sure how this wouldn't introduce bias.
THATS MY WORK I WILL GO NUTS ZUCK
training on synthetic data will have the same results that inbreeding does
😂
Science, done in exactly the wrong way.
So basically he doesn’t know.
Do you??
Can there be any more ads in this ? The guy is a complete shill
Whatever Mark says....do the opposite.
Love how Zuckerberg still finds ways to "meta" everything: words, ideas, nonsense.
I get the feeling, but I didn't find it to be out of context at all.
He runs a shareholder company, so everything he says about Meta has to be carefully worded, otherwise he'd get in trouble with the SEC.
for the algo
?
The guy was describing "distillation" when he talked about a model outputting data and using it for training.
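For context, in distillation a student model is trained to match a teacher's output distribution rather than raw labels. A minimal PyTorch-style sketch of the standard soft-target loss, using tiny linear models and random inputs purely so it runs; nothing here is specific to what was said in the interview:

# Minimal knowledge-distillation step: the student learns from the teacher's
# softened output distribution.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
teacher = torch.nn.Linear(16, 10)   # stand-in for a large trained model
student = torch.nn.Linear(16, 10)   # smaller/newer model being trained
opt = torch.optim.SGD(student.parameters(), lr=0.1)
T = 2.0                             # temperature softens the teacher's distribution

for _ in range(100):
    x = torch.randn(32, 16)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_logp = F.log_softmax(student(x) / T, dim=-1)
    # KL(teacher || student), scaled by T^2 as in Hinton et al.'s formulation
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()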
Training a model on the synthetic data it generates does not change the distribution. This field is full of midwits using investor money.