33:04 “Luckily we are in position with these neural networks that, as long as the dataset is improving, there’s no real upper bound on the performance of the network.”
Full Self Driving looking good!!
Always inspiring to see Karpathy!
I like this kind of podcast. I'm enthusiastic about AI and machine learning. Society would benefit from you creating a course on how to train deep neural networks the way you all do. Maybe show how to do it in multiple ways.
38:01 Here he basically says that self-driving cars are guaranteed to happen. It's only a matter of data collection, a training cluster (Dojo), and time.
Great podcast. When I was at Tesla we had to evaluate the first version of Autopilot before the rollout. It was a mind-blowing experience enabling AP1 on Highway 880 and letting go of the steering wheel. That version was a bit sketchy, with its abrupt changes of direction at highway speed, and it's this experience that led me to satisfy my curiosity and understand this process more. It led me to UC Berkeley's course, where I met you, Pieter.
23:52 Karpathy talks about Elon Musk
You should mention in the video description that this episode was first released in March 2021.
This is my 3rd time revisiting this interview. Can’t get enough.
Got a question for anyone who may have the machine learning expertise. I'm an FSD Beta tester. Regarding the neural nets running in the Tesla FSD system: is active or real-time learning or improvement possible? Or does it require a subsequent update to realize improvements from the data? The latter makes more sense, but I've noticed improvements, or apparent improvements, on several obvious occasions before an update actually came to me.
@@Doug97803 I appreciate your input. I do think that's right. I've been learning and familiarizing myself with some of the intricacies, and now I think the apparent improvement is more likely due to image memory data used in generating the vector space. It does persist, I'm just not sure for how long. Given that I'm on the road pretty much all day every day, I hypothesize that it's maintaining a higher-persistence memory cache. This is all me speculating lol.
Glad to hear it's been useful to you!
All the compute, or close to all of it, on the FSD computer is used for running the AI, taking in data from the sensors, and driving the car. There is no room for anything else useful. This was done on purpose; it's safer and makes more sense to do training in a datacenter, and to build a cost-effective, powerful computer to run the NNs trained in those datacenters. They do use dedicated "shelved/racked" FSD computers for certain things in a datacenter.
Why is the screen frame so small? :) Thanks for uploading the video.
i think it looks pretty good actually
Great video! Really interesting insights into computer vision, deep learning and Tesla strategy!
Glad you enjoyed it!
Could you have dual networks which learned separately and see whether they agree on the model constructed? Can we measure how confident we are of the model constructed?
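This is actually a standard trick: train an ensemble of models independently and treat their agreement as a confidence score. A minimal sketch of the idea, using a hypothetical toy 2D dataset and plain logistic regression rather than deep nets (everything here is a stand-in for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D dataset: two Gaussian blobs as a stand-in for real features.
n = 200
X = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(+1, 1, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

def train_logreg(X, y, steps=500, lr=0.1):
    """Plain gradient-descent logistic regression."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# Ensemble: each member trains on a different bootstrap resample,
# so the members learn separately, as the comment suggests.
members = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    members.append(train_logreg(X[idx], y[idx]))

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1 / (1 + np.exp(-Xb @ w)) > 0.5).astype(int)

# Agreement across members is a cheap confidence signal:
# unanimous votes -> trust the prediction; splits -> flag for review.
test = np.array([[-1.5, -1.5], [0.0, 0.0], [1.5, 1.5]])
votes = np.stack([predict(w, test) for w in members])  # (members, points)
agreement = np.abs(votes.mean(axis=0) - 0.5) * 2       # 1.0 = unanimous
print(agreement)
```

The easy points far from the decision boundary should come out near-unanimous, while the ambiguous middle point can split the ensemble; that disagreement is exactly the confidence measure the question asks about.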
The deep learning mentioned from U of T sounds like "feature detection", as done in the brain?
Our model of our environment seems to be a collection of objects, each of which is a collection of features?
We can also approach FSD from an infrastructure angle. With all this money flowing into roadways, it would be beneficial to reduce edge cases by improving and standardizing road markings and road rules nationally, starting in the US. 🤔
It's time to bring this back again in a few months.
As he speaks, he visualizes. That's expertise. You can see it in his gestures.
Great podcast, thanks a lot 🙏
QB
I love how he refers to the MNIST set as a toy problem when that's the basic dataset they teach first-timers with these days lol! This was the first time I heard your podcast, and this was an awesome conversation; I subscribed! One thing I wish he had talked a bit more about is the synthesizing of training data they do these days. I thought they teamed up with Nvidia, and that computer graphics engines are now good enough to synthesize whole drive scenes in CGI, easily good enough to fool the Tesla cameras into thinking it's real, and to train on that.
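The appeal of synthetic data is that when you render the scene yourself, you get pixel-perfect labels for free, with no human annotation. A toy illustration of that idea (hypothetical, NumPy only, not a real graphics engine): draw random "objects" into an image and record their bounding boxes as ground truth.

```python
import numpy as np

rng = np.random.default_rng(1)

def render_scene(h=64, w=64, n_objects=3):
    """Render a toy scene of bright rectangles on background noise.
    Because we place each object ourselves, labels come for free."""
    img = rng.normal(0.2, 0.05, (h, w))           # background noise
    boxes = []
    for _ in range(n_objects):
        bh, bw = rng.integers(5, 15, 2)
        y0 = int(rng.integers(0, h - bh))
        x0 = int(rng.integers(0, w - bw))
        img[y0:y0 + bh, x0:x0 + bw] = 1.0         # "object" pixels
        boxes.append((y0, x0, int(bh), int(bw)))  # exact ground-truth box
    return img, boxes

# A synthetic "dataset": images plus perfect labels, no annotators needed.
dataset = [render_scene() for _ in range(10)]
img, boxes = dataset[0]
print(img.shape, len(boxes))  # (64, 64) 3
```

Real pipelines swap the rectangles for photorealistic rendering, but the principle is the same: the renderer already knows where every object is, so rare edge cases can be manufactured on demand instead of waiting for the fleet to encounter them.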
i love your avatar
The CS231n class is also popular in Korea :)
How do you build confidence in the system? Not exactly the problem; the problem is making it work. If it works, we'll be confident in no time.
Great stuff
It's one thing for an AI system to recognize a fire hydrant, but it will take a whole other level to see the shadow of a fire hydrant and not only know that it is the shadow of a fire hydrant, but also why a fire hydrant would have a shadow in the first place. Not to mention knowing what a fire hydrant is used for, that water comes out of it, or why it would be painted red or yellow. The list of things humans would infer just from seeing the shadow of a fire hydrant goes on and on.
Is the number of fire hydrant examples required related to the fact that humans can re-use features and don't build fire hydrants from the ground up?
1:18:50 AI technology is not upper bounded in any real way… What a profound statement.
Good stuff!
Glad you enjoyed it
Karpathy website link please?
karpathy.ai/
Video of Andrej Karpathy talking to his AI bot
Why is Andrej speaking slowly compared to his younger years??
A self-driving car is an impressive problem to solve, but it is hard, and without human-level reasoning it would not be accepted. People do not like to trust a black box. Human reasoning is required to make human-level decisions. It would be better to solve human reasoning first, so a deep network can talk to people and do less critical/dangerous tasks like programming. Once this is solved, and a DN really can write code better than humans and explain/debate it in natural language, then you can really start working on the self-driving problem. Humans are social animals, and even animals are much better than a black box; animals at least have some predictable intuition, like the horse, which served as a self-driving vehicle for a while in human history. Humans should be able to ask their car why it turned here and not there, and if there is no good answer, it will not be trusted for a while.
Andrej Karpathy's Antipathy
Andrej looks sad and probably tired of this work at Tesla. Musk is a genius, but top-down management kills your motivation pretty fast. :(
I don't understand the language, but I respect Elon Musk; I think he's a wonderful person. I hope the AI that Elon is building creates a society without conflict.
By all means, self-driving should never harm a person when one is present; I'm grateful for an AI that wouldn't kill people, even if it could wreck a building.
He left the bullshit behind...
Maybe learn to pronounce Tesla correctly; I don't see any "Z" in there.
Hater
I lasted 21 minutes; as much as I love hearing what Karpathy has to say, I just can't stand the next "Uhum" from the interviewer.