He is wearing an OpenAI T-shirt; I presume this is an old video, because he left OpenAI.
“ELON MUSK”. That is the name of the OpenAI co-founder that escaped you… not Greg Brockman, ELON. He’s the one who funded the founding of the company. Without ELON there would be no OpenAI.
There will be consequences to using an ever greater quantity of data without sensible curation; quality does matter, and that is going to require a generalist to rectify.
'like what you see in the movies'
uh... which movies... that is important
well, science fiction has predicted so many features of modern life
This must be an older interview.
Reasoning is taking large steps right now.
It's been said that right now, reasoning is at a sub-PhD level, but there are models out there that are above PhD-level reasoning and will soon be able to do research unaided.
When an LLM is able to do real research (not web-surfing-type research but real, scientific research) we will accelerate everything.
LLMs still don't really reason; they just replicate patterns found in the training data. Yes, you can include scientific papers in the training data.
@@drwhitewash Do humans reason any differently? We look at past experience and use that to "reason".
@@jackflash6377 we also do abstract thinking and a lot of other stuff LLMs don't, by the very way they work.
The point is, even we don't exactly know how our brains work.
@@drwhitewash I hear ya. I also think they don't know exactly how AI works either.
@@jackflash6377 but we do know how LLMs work.
Nope. I'll humbly ask AI to help humans, then AI will learn the greatest threat to humans is humans, so AI will end humans. All gonna be fine.
Your Muse says good morning ☀️
Is it possible to use the data stored in the oceans? I recently read that there are approximately 1.5 million species undiscovered; that seems like a lot of data to be used to train models.
Great idea, AI should be able to overcome the current limitations humans are subject to when attempting to explore what’s down there in the deep.
No, `data` is information that can be fed into a computer, so an e-book is `data` because it can be read by a computer; a fish, for instance, is not data.
But surely the AI would collect data, that would be the whole point of the AI being there. It would collect data about the genome, transcriptome, proteome etc. of the organisms….
@@SuperStargazer666 That's not how a Neural Net learns. The data you talk about could be used and eventually understood by A.I, but that's different from that data being used to actually train the model itself; the base information effectively gets boiled away, leaving the understanding of the ideas, the `connections`.
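A tiny sketch of what that comment is getting at, using a toy linear model in NumPy (the numbers and setup are illustrative, not from the thread): after training, the model holds only a handful of learned parameters, not the thousand raw data points it was fit on.

```python
import numpy as np

# 1,000 noisy training points generated from the underlying rule y = 3x + 2.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 1000)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, 1000)

# Least-squares fit: the raw data is "boiled away" into just two numbers
# (slope and intercept) -- the learned "connections".
slope, intercept = np.polyfit(x, y, 1)

print(round(slope, 1), round(intercept, 1))
```

The fitted model recovers roughly slope 3 and intercept 2 but keeps none of the original samples, which is the distinction being drawn between data an AI could analyze and data that was used to train its weights.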
This is useless data. A few hundred gigabytes of the most optimized code, that's what's worth paying attention to.