How has this got under 20,000 views and barely 10 comments?? This guy is literally one of the leaders in the industry, he's right at the bleeding edge. That's utterly diabolical
Compare with Ben Goertzel's discussions with Charles Hoskinson, where they're literally openly sharing their ideas and thoughts around the bleeding edge in decentralized tech and decentralized AI applications with the entire world. Also getting low five-digit view counts.
Oh well. Just means people like you and I are ahead of the curve!
Only a small number can be at the bleeding edge by definition!
@@paulcassidy4559 agreed :) most of us are here due to a shared sense of foresight. good luck to all of you on your projects :)
agree. I am seeing the light today
This should be seen by a lot more people!
I did a labeling job in the past and this guy described it perfectly. I feel better now for sucking at labeling :)
6:45 This is a nice proof of concept to me and explains why they could make progress much faster. The first is an example of a hybrid heuristics approach (used by Waymo and others), the second of a pure deep learning approach (used by Tesla). The second one is superior at recognizing parked cars but requires massive amounts of real-world data.
Only Tesla has that real-world data, because its system is installed in about half a million cars by now. All the other "experts" thought it was too dangerous to test self-driving software in cars driven by consumers. They also thought it was necessary to have lidar, which is still too expensive to apply on a massive scale.
Actually, Tesla does use heuristics for certain things; not everything is done through deep learning / machine learning approaches. Elon Musk said this himself during Autonomy Day: sometimes it's quite stupid to have a neural network for simple things, you might as well just use heuristics.
@@SyeamTechDemon makes you wonder about this NN for running the wipers then... maybe he thought it would help the NN see through rain if it could recognize it
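The heuristics-vs-network split discussed in this thread can be sketched in a few lines. This is a hypothetical toy, not Tesla's actual code: a plain rule handles a sub-problem with an obvious threshold, while a learned model (stubbed here as `StubModel`) handles a perception task with no crisp rule.

```python
# Hypothetical sketch of the hybrid split: heuristics where the rule is
# obvious, a learned model where it isn't. StubModel stands in for a real
# trained classifier, which would load weights instead of averaging features.

class StubModel:
    """Stand-in for a trained classifier."""
    def predict(self, features):
        # Pretend confidence score: fraction of 'car-like' features present.
        return sum(features) / len(features)

model = StubModel()

def headlights_on(ambient_lux: float) -> bool:
    # Clear physical threshold -> a heuristic is simpler and more auditable
    # than training a network for it.
    return ambient_lux < 50.0

def is_parked_car(features) -> bool:
    # Perception task with no crisp rule -> delegate to the learned model.
    return model.predict(features) > 0.5
```

The point of the split is that each side keeps the property you care about: the heuristic stays inspectable, the model stays retrainable.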
"Elon was like, vision can see raindrops...And now it's my problem" I feel like that's a fair snapshot at life in the FSD team lol
Ahead of his time. Very thought provoking.
Karpathy, you will always be a legend in my book.
This was very insightful, definitely more people need to see this!
This whole idea of Software 2.0, where we humans define the scope/region/boundary of the problem and let the program/machine/AI find optimizations for it, reminds me of how we approach PDE problems.
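The analogy can be made concrete with a toy sketch (mine, not from the talk): in Software 2.0 you specify *what* counts as a solution, as a loss over a weight space, and let an optimizer search for it, much like posing a PDE via its boundary conditions and letting a solver find the field. The loss and target value below are arbitrary illustrations.

```python
# Toy Software 2.0 loop: the human writes only the problem specification
# (the loss); gradient descent does the search through weight space.

def loss(w):
    # "Specification": we want a w whose squared error to the desired
    # behaviour (here, the value 3.0) is zero.
    return (w - 3.0) ** 2

def grad(w, eps=1e-6):
    # Numerical gradient; a real framework would use autodiff instead.
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

w = 0.0                     # arbitrary starting point in weight space
for _ in range(200):
    w -= 0.1 * grad(w)      # gradient descent: the "search"

print(round(w, 3))          # converges near 3.0
```

Nobody wrote the answer `3.0` into the program body; it was found by optimizing against the specification, which is the whole trick.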
Andrej Legend Karpathy
There is a book named "Working Effectively with Legacy Code" for procedural/algorithmic/classical (Software 1.0) code. It would be nice to see a book like this for statistics-based/deep-learning (Software 2.0) code. I guess since all of this is relatively new there is no deep learning legacy code yet (although companies like Google might have a bunch of it already). But it would be nice to see how maintenance and refactoring would be done, not only for legacy NN code, but also for legacy network parameters that have been calibrated with potentially years of training (if that exists). I am not sure if this really matters for Software 2.0.
Wouldn't the "maintenance part" be on the data, the quality of the tagging, and the process of continuously improving the model based on new data?
@@wimveninga1714 Makes sense. The maintenance is on the data used to train the model. But it seems to me that the weights of the network are unmaintainable. The only things a DL engineer modifies are the model and the training data; hence, those are the only things to maintain. Maintaining weights is like maintaining binaries: it does not make sense.
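The "weights are binaries" point can be illustrated with a toy trainer (my sketch, not a real framework): you don't patch the artifact, you regenerate it from source, meaning model code + data + seed. With all three fixed, two "builds" yield identical weights, exactly like a reproducible compile.

```python
# Toy deterministic training run: same (code, data, seed) -> same weights,
# so the weights are a build artifact, not a thing you edit by hand.

import random

def train(data, seed):
    rng = random.Random(seed)
    w = rng.random()                 # random init, but seeded
    for x, y in data:
        w -= 0.05 * (w * x - y) * x  # one SGD step per example
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w1 = train(data, seed=42)
w2 = train(data, seed=42)
assert w1 == w2   # same "source" -> bit-identical artifact
```

Change the seed or the data and you get a different artifact, which is why version control in Software 2.0 tracks data and training code rather than the weight file itself.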
who is this guy man
Great talk!
Whole issue is: the presented notion of Software 2.0 collapses on itself.
Suddenly the data is the entire program?
No. The data is still the data.
Software 2.0 is essentially Data 2.0.
Software 2.0 would incorporate Data 2.0 and Networks 2.0 and so on.
A reactive, layered programming style with a functional category approach probably makes the most sense for the Programming 2.0 of Software 2.0.
functional programming bruh
Why don't you just use the user's wiper input to train? If there is ketchup on the windscreen, the driver will use the wiper stalk to wipe. So you can detect this as an "automated system failure" and retrain on it.
End to end. Isn't that pretty straightforward? Even for false positives: if it gets too excited in a tunnel, the driver should be able to turn it down or off. So just use that as an intervention. You do this with Autopilot. Why not with the wipers?
Well, they probably have it a little harder than just throwing a bunch of tunnel rear-view pictures at the trainer
it doesn't solve the problem with tunnels
@@Splish_Splash of course. The user disables the wiper and this data is collected. This way tunnel data gets sent to Tesla in order to train on it... automatically
@@Trashbag-Sounds but you will need a huge amount of time to collect this data with tunnels and wipers, and they don't push a new release of Autopilot if it doesn't work properly
@@Splish_Splash well I assumed they already had a version on the road that wasn’t working properly
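The intervention-based labeling idea debated in this thread boils down to a simple filter. This is a rough hypothetical sketch (made-up field names, not Tesla's actual telemetry): whenever the driver's manual wiper input disagrees with the model's output, that frame is logged as a correction to retrain on.

```python
# Hypothetical intervention collector: driver overrides become free labels.

def collect_corrections(frames):
    """Each frame: (camera_snapshot, model_said_wipe, driver_wiped)."""
    corrections = []
    for snapshot, model_said_wipe, driver_wiped in frames:
        if model_said_wipe != driver_wiped:
            # Disagreement = intervention: the driver's action is treated
            # as ground truth for this snapshot.
            corrections.append((snapshot, driver_wiped))
    return corrections

frames = [
    ("img_rain", False, True),    # missed rain/ketchup -> false negative
    ("img_tunnel", True, False),  # tunnel false positive, driver turns it off
    ("img_clear", False, False),  # agreement: nothing to learn here
]
print(collect_corrections(frames))
# -> [('img_rain', True), ('img_tunnel', False)]
```

The catch raised in the replies still applies: the filter only gathers rare cases as fast as drivers happen to hit them, so coverage of situations like tunnels accumulates slowly.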
Could AI recode the net for slower rural users?
Could a company have a lite site maker?
If I wanted to get rich I would make an eternal search engine that was always learning about what it understood least, to auto-generate labeled datasets.
Tools for Datasets writing software.
I wish I could work there. But I don't even have a first degree
hmm. sounds like "we have no idea why it works or when it will not, but we sell it anyway"... Even a wiper can become safety relevant if it starts wiping like mad at the wrong moment, distracting the driver (as long as there is one).
there's a thing called "testing" and "shadow mode"
Are people who annotate your data paid decently?
hahahaha
08:42 - You know this is literally the secret to how God runs the Universe, right? lots of data, few rules. the Universe figures it out.
Also, they shouldn't dub it "2.0" if "1.0" is not replaced...
🎯
After Jensen, I came here to see what 2.0 actually means.
Muaadh Rilwan
Good!!
I want to understand what he means by Programming 2.0, so I'm here.
very good video
A problem does not exist
Andrej karpathy
To solve a problem you must see no problem
A problem does not exist
wrong!
a program space consisting of problems is a waste of space
Why not use AI to label data? Won't that be more accurate?
you train the AI to label the data by having humans do it first. AIs have no idea what they're looking at; they're (mostly) just smarter data-processing engines built on Big Data human input
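The bootstrapping loop in this reply is the usual pseudo-labeling pattern, sketched here with a toy nearest-centroid "model" (my illustration, not any real labeling pipeline): humans label a seed set, the model trains on it, and only its *confident* predictions get promoted to labels, while ambiguous cases go back to humans.

```python
# Toy pseudo-labeling: human seed labels -> centroid model -> confident
# auto-labels only; ambiguous inputs are routed back to human labelers.

def centroid(points):
    return sum(points) / len(points)

# 1. Human-labeled seed set (1-D "wetness" features for the sketch).
human = {"rain": [0.9, 0.8, 0.85], "dry": [0.1, 0.2, 0.15]}
c_rain, c_dry = centroid(human["rain"]), centroid(human["dry"])

def auto_label(x, margin=0.2):
    # 2. Auto-label only when x is clearly nearer one class centroid.
    d_rain, d_dry = abs(x - c_rain), abs(x - c_dry)
    if abs(d_rain - d_dry) < margin:
        return None            # too ambiguous: send to a human labeler
    return "rain" if d_rain < d_dry else "dry"

print(auto_label(0.95))   # 'rain'
print(auto_label(0.5))    # None -> human review
```

The `margin` threshold is the design choice: loosen it and the model labels more data with more mistakes, tighten it and humans stay in the loop longer.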
he doesn't say anything, actually. just like a stupid AI
this is all wasted time thinking about
Intelligence isn't about struggling to differentiate between the droplets a few cm away and the sun a few light minutes away. The end doesn't justify the means. Your company's entropy reduction methods suck. Big time.
This dude be like: do not try to explain it, it works... most of the time. What a naive problem solver.