First. I got here due to my training being 10,000 times faster...
Respect
Damn, my training’s only 33,333.333* X faster.
I am human. They used my data to train you guys. That's why you guys are not very smart .
What took you so long. 2 papers down the road they were here before this video.
I used the forbidden technique to duplicate myself a hundred times.
The elders said it could not be done.
I would show them.
I feel like I've seen this topic 5 times already
tbf, he's highlighting the most recent development in a series of incremental improvements.
It is a little confusing that he intersperses video clips from previous Nvidia research, but I'd say it's directly relevant and tracks their progress from GFX rendering to their current and more broad applications for GPUs in highly parallelized algorithms (Ai) and simulation
If you think about it, we kind of already invented time machines, since we can speed up time in computer simulations; all we need is processing power/energy.
If you think about it, we are in a time machine.
Massive objects warp time.
For that you would need all the data from the planet at least, every single detail, then recreate it and control it. But you can't do that in reality, only in a virtual world.
@@shirowolff9147 For sure, hence I started with "kind of." I am familiar with the concept of Laplace's Demon.
You know nothing about physics or computer science. This is far off from time travel, because the time in the computer sim isn't actually time, much like how physical objects in a game aren't physical objects in real life. I almost can't believe this thread didn't know that.
Before anyone says "time isn't physically real": time is physically real.
@@jryde421 Thanks for the good laugh 😂 Since you know so much about time, why don't you tell me what time is? 🤡
It's crazy to think that 50-80% of your neurons are mainly used for movement (they sit in a small walnut-shaped structure at the rear and base of your brain called the cerebellum).
It also manages all the internal organs, though (heart, lungs, liver, stomach, etc.), and about half our reflexes.
@@sammonius1819 I think you're thinking of the brain stem instead.
It's not nearly 80%, and its functions are not limited to movement.
@@djayjp Well yeah, but I thought we were counting the whole central nervous system.
@@sammonius1819 The original post refers only to the cerebellum.
Wasn't this already an existing thing???
Love how these breakthroughs in robotics could finally give me some help with folding my laundry! Can't wait to see what other mundane tasks get automated.
We need someone to make a game environment that lets people use their webcam and GPU to train game AI on simulated tasks of increasing difficulty. I'm thinking something like a Minecraft-style open world where users train AIs to work together building, maintaining, and supplying traps and defenses to protect against zombie hordes while making vast wonders, leveraging the huge amount of dedication, passion, and ingenuity of people at play. It'd create huge amounts of useful training data and find optimized solutions to all sorts of problems that researchers simply don't have time to think about.
Create what you wish to see in this world
Robo hyperbolic time chamber
the captions spelling your name correctly is on another level
A wise man once said: "Great power comes with a great electricity bill."
So what is the new idea?
There isn't one, at least not in the video. They optimized the training process by letting AI generate examples from a few human-curated ones. Now they have to check those generated examples to filter out the junk; as long as the junk-filtering process offers better yield, it will work.
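The generate-then-filter loop described above can be sketched in a few lines. This is a toy illustration, not NVIDIA's actual pipeline; `perturb` and `is_valid` are hypothetical stand-ins for the demo generator and the junk filter:

```python
import random

def generate_and_filter(seed_demos, perturb, is_valid, n_target, max_tries=10_000):
    """Grow a few human-curated demos into a larger synthetic set:
    perturb a random seed demo, keep it only if it passes the junk filter."""
    synthetic = []
    tries = 0
    while len(synthetic) < n_target and tries < max_tries:
        tries += 1
        candidate = perturb(random.choice(seed_demos))
        if is_valid(candidate):  # reject junk; overall yield depends on this filter
            synthetic.append(candidate)
    return synthetic
```

The economics the comment mentions are visible here: if `is_valid` rejects most candidates, you burn many tries per kept demo, so the approach only wins when generation is much cheaper than human demonstration.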
3:55 That vector is looking a little funky right there..
imposter
Love the idea of robots learning from synthetic demos! Can't wait to see how this tech evolves, maybe one day I'll have a robot folding my laundry too
3:00 Bros in a hyperbolic time chamber
Love the idea of robots learning from synthetic demos and sped-up simulations! Folding laundry while I read papers sounds like a dream come true. Can't wait to see these helpful little robots in action!
I'd love to use this for QA to test video games for indie developers
So many jobs to be replaced, what a time to be unemployed!
Yeah, but governments will create a system of financial help for everybody, based on your country's natural resources. You'll be unemployed but able to do anything you want with your time without worrying about money. But that's only about three or a little more years from now.
@@shirowolff9147 Exactly. I hate the doomers who don't actually think about what solutions are in place; they're quick on the trigger for the worst outcome, yet can't conceive that humanity has a plan for that.
@shirowolff9147 the drugs you are taking are too strong, only thing Western Governments are doing is replacing Western people.
Braindead take. Read about the Luddites. Automating jobs is good for humanity.
@@shirowolff9147 bruh
Thanks to this new paper, I was able to watch this video in, get ready, hold on to your papers!! ......-23ms! Wow!
Hahahaha
"Time dilation" will become an issue to resolve.
Dojo: using simulation, the AI car can "drive" millions of virtual miles and encounter situations that might be rare in reality. This accelerates learning and validation without any physical harm. Only after the AI consistently performs well in these virtual worlds do engineers consider enabling it more broadly in the real one, with careful monitoring and gradual rollouts.
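That gating logic, simulate at scale and only then enable, can be sketched as follows. This is a toy illustration with a made-up `simulate_episode`; a real validation pipeline like Tesla's is of course far more involved:

```python
import random

def simulate_episode(policy, rng):
    """Stand-in for one simulated drive; returns True on success.
    (A real simulator would roll out physics; here we flip a biased coin.)"""
    return rng.random() < policy["success_rate"]

def ready_for_rollout(policy, n_episodes=10_000, threshold=0.999, seed=42):
    """Gate deployment: recommend enabling the policy in the real world only
    if it succeeds in at least `threshold` of simulated episodes."""
    rng = random.Random(seed)
    successes = sum(simulate_episode(policy, rng) for _ in range(n_episodes))
    return successes / n_episodes >= threshold
```

The point of the sketch is the structure: cheap, repeatable virtual miles accumulate evidence, and the real-world switch stays off until the measured success rate clears a high bar.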
Can't wait for 4-hour internet guides on "how to walk on two legs" with examples and neat tricks, and a $2-a-month subscription series about running, with new trends, findings, and discoveries from the runner community daily.
But how do they synthesize the variants?
I was dissatisfied with the video, but I wasn't sure why. This is the reason.
People working on training robots should provide augmented reality glasses to companies performing the tasks they want robots to learn. The company employees get free tech, and you get valuable training data. Funny enough, this idea is straight out of The Simpsons. We all know how eerily accurate that show is at predicting the future!
This reminds me of the SMOTE method which does something similar for more simple machine learning tasks like regression and classification.
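For reference, the core of SMOTE is just interpolation between a minority-class sample and one of its nearest neighbours. A minimal sketch of that idea (not the original imbalanced-learn implementation):

```python
import numpy as np

def smote_like_oversample(X_minority, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating between a random
    sample and one of its k nearest neighbours (SMOTE-style)."""
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # distances from sample i to every sample (including itself)
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip index 0: the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)
```

The analogy to the robot setting is loose but real: both take a handful of genuine examples and manufacture plausible variants in the space between them.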
Love this! SkillGen and Hover are total game-changers for robotics. Can't wait to see the breakthroughs that come from this tech!
So if this technology/research truly works, then the humanoids can literally do anything by next year, and we should see them in homes commercially by 2026. They should be able to build others/themselves and increase exponentially as long as they have the parts to build with. I don't see why it would take several years if they can get thousands of years of training within a year now. Not hours; if those are really years, then that's insane!
Do they implement a cerebellum and primary motor cortex-like hierarchy over the network architecture, or is there no need, as that is just an evolutionary kludge?
I'm loving the potential for robots to learn from few human demonstrations! SkillGen and Hover are game-changers. Can't wait to see what the future holds for robotics and AI applications.
Robots as slaves to human laziness is the worst idea possible, as owning slaves corrupts human beings.
Such machines should always be reserved for human needs, and not human wants.
It is a tool, just like our phones or computers. I don't think the word "slave" is correct in that scenario.
PS: Inspiration taken from Naruto's Multi Shadow Clone Jutsu; Kakashi must have been placed into NVIDIA.
05:27 What are they doing in the soccer field?🤣🤣
Learning to spazz out in fake pain / injury like real players do.
It seems to be trained on real data watching football players being injured all the time. Looks very realistic to me.
AI is evolving so fast that I can't keep up. I'm just 26, but I feel like an old person who doesn't know what a computer is.
They should make an algo that overlays an exoskeleton on millions of hours of YouTube video, then use that exoskeleton as the training data for the robots.
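A hedged sketch of that idea, where `estimate_pose` is a hypothetical stand-in for any off-the-shelf pose estimator (the real difficulty, mapping human skeletons onto robot joints, is not shown):

```python
def skeletons_from_video(video_frames, estimate_pose):
    """Hypothetical pipeline: run a pose estimator over video frames and keep
    the extracted skeleton (the 'exoskeleton' overlay) as robot training data."""
    dataset = []
    for frame in video_frames:
        skeleton = estimate_pose(frame)  # e.g., a list of joint (x, y) positions
        if skeleton is not None:         # skip frames with no person detected
            dataset.append(skeleton)
    return dataset
```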
This needs to be open source. The AI has to be uniform and modifiable by the user.
The algorithm, not the robot. The robot can be manufactured separately from the code.
People should get free demonstrations of the technology their algorithms are working with before the final product and payment.
And they can adjust the code to suit the customer's needs, and the price wouldn't change because of the algorithm, as the robot is demonstrating what is in the code.
It's like a virtual drive-in: they present your order before you are given the food, and if your order is wrong, it can be replaced with the item you asked for.
But of course it comes with caveats.
You don't know how many stupid people are out there. Give them this technology and they will harm you in ways you couldn't comprehend, for no apparent reason. I was always advocating for the opposite: a single ASI that would serve every human and every nation equally. Two or more competing ASIs trying to promote opposing world views, or imposing the power of their creators, will definitely end disastrously for humanity.
Would it be fair to condense this video into the sentence: "robots can also be trained using synthetic data"?
(limitations and compromises were not mentioned)
I am really looking forward to video games using these advanced AI techniques for NPCs.
4:22 - this is what you get from the US educational system too
The robot needs to have a self-image, a 3D model of itself, so it can use its third-eye perspective to see its hand moving and, alongside tiny sensors, correlate that the index finger of the robot hand matches the index finger of the 3D model of that hand.
We are getting really close to simulating a reality like ours. I know there are still some fellows who don't believe this, but don't worry; time will always prove one's point.
I'm still waiting on the NVIDIA MMO. 'Till then, I'm holding onto my papers!
Károly... 2 more papers down the line, they will read the papers and you'll keep folding the laundry full time. :D
10x is from the past, 10kx is the new norm
I am ready to serve my future AI overlords.
Simulated training + robotic = 101% of all jobs. Can’t wait to finally live without work 😢🎉🎉🎉😊
What a time to be alive!
How long did it take Mother Nature to learn walking on two legs? A long-ass time.
We should invest in NVIDIA.
Interesting way around one-shot learning: generate many fake examples from the single example to train on. However, it seems more like sophisticated mimicry than intelligence.
Brb teaching my robots to hold onto my papers for me
What a time to be alive. Nvidia is winning.
If anybody out there needs someone to do a thousand demonstrations of something hit me up
This will lead to Terminator hands down
I need 10,000x more time too. So I could procrastinate all I want😅😂
Wouldn't NVIDIA's Omniverse solve this problem, though?
Damn, now I am afraid of robots that spent thousands of years in combat training 😭
I find it funny when people say there is not enough data to train the machines... I get it, the algorithms we use still aren't that good. But really, insects, animals, and humans all train on the data available in the world and get by with substantially less training data. Machine learning experts should likely go back to more of the neuroscience. I bet we neuroscientists could create great simulations with the equivalent amount of compute that some of these large models learn with.
Not sure about humans, but I bet insects know how to move right when they're born. So they don't "learn". It's different. Some animals, too.
We've all seen this before😃
Gaining the training experience of 1000 copies of oneself at the same time......
Naruto🍥 did it to learn the Rasenshuriken in days, what would have taken him years to learn!
NVIDIA Omniverse factories will speed up now.
What can AMDumb do? Nothing
HOLD ON TO YOUR PAPERS
This comment doesn't apply to this video. But please stop covering unreal engine, blender, and generative AI releases.
You are great at reporting on scientific papers. You are not great at reporting on business publications.
This video is about a scientific paper. It covers information that the public wouldn't be learning about otherwise.
But your videos about unreal engine, for example, contain no information that cannot be found elsewhere. Also you end up ignoring the issues when you cover business publications. It makes you seem incredibly disingenuous in those videos.
I understand you are genuinely excited and interested. But please focus on what you are good at doing. That thing is explaining science papers in an easy to understand manner, so that people can get an understanding of the developments that are happening.
Your videos on ReSTIR and VICMA are great examples of important developments that would otherwise go uncovered. Your videos on AlphaFold are great examples of videos about topics that might have gotten mainstream attention, but for which your videos provide a better understanding of why it is important and what was done.
You are going to keep covering LLM developments, so please cover papers like Mixture of a Million Experts, or Anthropic's research into mechanistic interpretability. Those are things where your videos would be both helpful and enjoyable.
I think every youtuber should do whatever they want. But at the same time, I have a feeling that TwoMinutePapers might be just doing what gives more views right now. So I agree with you.
What a confusing video. Sadly, I think your channel is going downhill. It used to be great, but your overuse of that affected way of speaking, and your inability to clearly state what is going on in the video snippets you present, are creating a not-very-helpful mush.
I got it 🤷♂️ some clips are just B roll you know
I did not get it. Can that learned skill be copied?
let's goooooooooooo UBI + FDVR all day erry day
The Terminators are soon ready for deployment 🎉
🤯🔥
"10k times compare to our own" that's exactly what the folks in the world above ours have said.
Faster and better than humans!!!!
Oh dear!
Very nice, great success 🙌
The dishes
Haha we're fucked😅
Americans don't play soccer. We are safe.
So a very small model can run a robot? On a phone? Or a game virtual character? It's just software, and if the models are that small, there could be a small model for male movement, a small model for female movement; say there's one small model for movement, another for talking and actions, another to create sounds, another for etc. There should be a way to have it all in one piece of software: an extremely virtuosic program that optimizes everything for maximum efficiency and low resources, instead of focusing on bigger. Like a model doing several things at the same time, trained simultaneously to run a robot and hold conversations. It isn't AGI, it's fake, it's just a script, JUST A GOOD SCRIPT; it's not real consciousness, but it's enough. Or multimodal small models trained at the same time to work together like the brain does: a big model, with small models doing other tasks, but all trained at the same time. Like a human brain, it's so possible, but the main model would need spatial awareness, like a 3D world representation of itself, imagination, time to think out its answer, the ability to do several things at once (thinking while doing the dishes at the same time), coherence, being good and of service, following characters, acting, etc. You don't need a big model, just a group of models, and their training data would be a SET OF RULES, or a script to follow. So you could have the software on your phone to run a robot? You could buy a robot and run it from your phone over fast Wi-Fi? That is crazy; the cost of running a robot would crash: instead of high-cost computers, a phone running the robots.
Data for robots is why you should do your research and invest in Tesla! :)
8/10 is still very bad
Robot chef. It's the main thing I want from AI before I die.
Deere felou scalars
I love your videos, but I find you are tending to talk faster than ever, over video clips that themselves contain lots of information, often annotations. To watch your videos I am forever pausing and rewinding, as I can't listen to what you say and also process the video; in particular, I can't listen to what you say and read the text on the screen at the same time. The two streams of language clash! So please slow down your narration somewhat and leave gaps for us to understand the video clip you're talking over. My estimate is that after each of your sentences over a fresh clip, you need to leave the same length of time with no speech for the viewer to understand the clip. I am a native English speaker, so it must be worse for people who aren't!
I, for one, welcome our 10,000x accelerated, robotic overlords!
This cancer needs to be banned. Either you raise the population's fluid intelligence, or we're all going jobless.
UARATAITOBALAII !!!
GO ROBOTICS 🎉
Repost
Nice try diddy
NVIDIA really Kage Bunshin no Jutsu'd AI training.
I didn't even need to play the video or access the links to know this is Omniverse. Omniverse is fantastic!
NVIDIA could incentivize the collection of tracking and motion data by creating a line of wearable apparel embedded with sensors. Users who wear this apparel, allowing their data to be recorded and analyzed, would be compensated with access to NVIDIA's cloud-based GPU compute resources. This would function as a trade, effectively exchanging data for processing time.
It's over 9000!
Humans receive feedback from their muscles to their brains, so they can adapt to new situations; robots rely on pre-programmed paths, so if a single thing is out of place, the whole thing falls flat.
What is the tech that makes deep learning take only half a billion parameters? And why use this annoying voice?
We still need to figure out how to tell the robot what "hot" actually means, not give it 10 billion hot items to "learn" what hot actually means. And IMO "learn" is a misused word for this thing, similar to "AI"; the truth is these tools are pattern-recognition machines, and they actually can't learn.
idk
Why. Do you. Talk. Like this? I. Cannot. Listen. To this.