Not bad for a "car" company. I bet GM, Ford and VW will be announcing their own AI chips in a few weeks and CNBS will carry hour long stories on how Tesla is loosing geound to them. LMAO.
This has become one of my favorite channels, so grateful for the education. I was a music major in college 20 years ago, but my favorite classes were in renewable energy physics. I feel like I’m back at University geeking out in my favorite classes. Thanks professor. Hope you get a great vacation in soon.
Exact same feeling here Mike. He is a wonderful orator, easy to listen to but with a deep well of knowledge. Each upload is fast becoming UNMISSABLE !!
Hi@@DrKnowitallKnows, Thank you for this series. At least I'm starting to feel what all the excitment & hype around AI day is about. What is the absolute smallest FAB machine that is used to run tests on? Back in my day we had small machines to run tests on various materials/processes etc, that we sometimes used when our customers wanted such a small batch of a new product. It was much more economical to do (total cost of batch), than to set up on a machine that would be able to produce at a much lower cost/item on a decent run. I wonder if you're right & Tesla are producing their own chips, as they don't need to be competative/chip. Also if you look at the chip wafer (5 x 5 layout), it has curved edges, indicating the diameter of the chip is small. Also looks like they didn't go for max number of nodes (?) per wafer/chip, but spaced them out so they didn't have to stick them down. I'm not sure if I'm using the right terminology above, as all this is still way above my understanding. But I do understand manufacturing and hope I've made what I'm observing clear enough
I'm almost 50.. when I was 7.. computer's came to many homes.. C=64 :). Then.. at age 11 I read a book about the law of accelerated returns. That curve can be felt since about 1995... And it feels like a tsunami since 2016.. I guess from 2025 tot 2030 is where miracles will seemingly happen.. I learned my lesson about exponentials only by living through them.
The Matrix is totally what I thought of when at AI day they showed video of the real world transformed into a virtual simulation world for training purposes. Great thumbnail :)
Thank you 3000!!! I needed this fantastic explanation. Professor you have done us layman a massive favor. Love the channel congrats on 25k I hope it grows exponentially and that you get that job at Tesla.
So exciting! This English major is now very close to understanding this amazing achievement, thanks to your interspersing explanations with the Tesla engineer's comments accompanied by visual scripts. Love your enthusiasm for the mind-blowing achievements of Tesla teams. Please keep these coming to educate those of us who find this kind of thinking so difficult.
I think you forget that most Tesla are not made in the US and the US Government has nothing to say about them. These vehicles are generating knowledge all over the world and all mankind, certainly all Tesla owners deserve the benefit. Tesla Self driving will be available in cars and robots made by many companies and in every auto producing country. Tesla Self driving will be bigger than Tesla Auto and Tesla Power combined.
It is easy to think Tesla is all about the USA. Tesla US should catch up with Tesla China after the Freemont shutdown and rework but once the model 2 line is up in China the US may remain number 2.
This channel is a youtube gem. That said, this video is among my top 3 picks for super interesting and informative videos. F***ing brilliant. Since this subject matter goes completely over my head, I appreciate Dr. Know It All breaking it down by stating along the lines of "this is what it means"...I need that. I need the translation into 10th grade English to allow this technology to penetrate my understanding. The "what's happening next" gave me chills. Tesla is building their own Dojo chips? OK...but how could Tesla have already bought and operated a Fab? Super expensive. Never mind that the expense line-items are not listed in the quarterly reports....there is little evidence within their financials of any such massive use of funds into a Fab investment .
Training Computer EXACTLY...Sooo Interesting...have to plan in ADVANCE to make sure we have it by the time we might Need This Never Cease to Amaze... it...Amazing...
I think a big piece of why they want so much training compute is reinforcement learning for the planner. That’s a huge compute sink, but likely critical to getting something to drive really well. That;s how they’re going to get to the end-to-end NN they want.
Especially because of how they use their planner, not just for the Tesla, but for every other vehicle in the scene. Which means the planner not only has to be able to predict the correct choice for its car, based on present and past data, but also the behavior of other cars around it entirely based on what the Tesla itself can see, likely driven by humans which ARE highly unpredictable and will often chose to adopt a solution that isn't optimal. I was amazed when i saw this in action in a wham bam vid, where the Tesla anticipated that a speeding car behind it on another lane would be forced to fork into the Tesla's lane due to a truck in front and pre-emptively slowed down way before the pilot even realized a danger was approaching, this to me can only be guessed if the predictor was ran on the speeding car and correctly guessed that the most likely course of action for the speeder was an aggressive lane change that would cut off the Tesla
Soon, Tesla will work simulation of the design of dojo chip in the current dojo training tiles and find out the most efficient design from the simulation and develop the next generation Dojo chip.
@@gregbailey45 If you can build better than others , why rent one. Tesla dojo chip has better off chip bandwidth than the state of the art competitors. Tesla offers 36TB/s off tile bandwidth so they can seamlessly interconnect as much tesla tile as they want.. “This was entirely designed by Tesla team internally, all the way from the architecture to the package. This chip is like GPU-level compute with a CPU level flexibility and twice the network chip-level I/O bandwidth.” This is only the beginning they will reach 10x development of the tile soon.
Definitely an in-house fab plant would be awesome, but don’t think they have it yet or we would know about it. BTW 7nano fab is seriously tricky, needing serious kit not to mention operators. Somewhere I recall AMD being mentioned as the supplier but don’t quote me on that.
Bandwidth is the width of the hose but Latency is more like the time it takes for the flow to get through the hose (the inverse of speed of data through the hose) so shorter hoses are better. Waiting around for other processes to complete is about maximum concurrency and how well synchonised the components are, which has ultimate scalability limits but depends on how well written the software is.
Nice. One point about the Floating Point Precision. For neural networks higher floating point values like 32 are generally a waste of time in the trade of training time versus results. GPUs are more like 32 for the sake of colour, but Google’s TPU started the process of halving the precision to double the throughput. CFP8 is likely more like 8 that can be reconfigured to 16 when needed. That’s my guess anyway. Neuron activation functions just don’t care about high precision basically
I appreciate your simplified version of this, as always it is educational and entertaining and above my reading level! Listening to you go over AI day with a reasonable understanding of what it is that your discussing, reminds me of a fourth grader trying to read a science book that is at a 12th grade reading level.. just keep flipping through the pages taking note of the words you don’t know to look up definitions for later and eventually only one or two words per page will need to be looked up ⬆️ and then to the next level
Incredible that a car company is doing this. I have an IT background (long retired) and have followed supercomputer performance over the last 50+ years. When I saw exoflop speed I about had a heart attack, that’s a quintillion floating point operations per second, unbelievable, yet true. Car company? Tesla Killers are you watching?
It’s becoming apparent that the AI day was so informationally dense that it will require a 20-40m video for every 2m of the presentation in order to explore/explain the full implications of what was presented. (For us idiots anyway).
I think you are right about the hardware 2 chips are do chips. In the presentation they said those are the verified good chips. What about the ones that fail verification? They will undoubtedly be failed dojo cops.
"World is becoming Teslas playground" I don't know men. I am a Tesla fan too, but I start to wonder is it good for the future that a single man, and after he dies a single company will control so many aspects of our lives? There were powerful men and companies in the history and they never ended caring for the good of all mankind, whatever it was their initial purpose. I hope competition will catch up eventually.
That's likely because the morality of human society back then (dutch east india company times) permitted a company to become so corrupt. Do you think Tesla will have an actual army of 250,000 soldiers? No. Arguably, that's what made the Dutch East India Company such an oppressive force. Tesla will be different, trust me.
@Giantdad Elon won't have an army of 250,000 human soldiers. He'll have an army of 100 million robots and take over the world! BWAAAHAAAHAAHAAAAAAAAA!!!! ;)
...and after generating self awareness in our Tesla bot 2.0 using just %1 of the computing resources in the second revision of the DOJO architecture did we stop there? No! By rerouting the thermal waste heat from our giga-exapod DOJO omni compute cabinet heatsinks directly into the still incomplete ITER test fusion reactor, we easily generated break even nuclear fusion capable of supplying 1.4 million Tesla Starlink Terminator model cars with enough energy to each drive 111 times around the world, at a market equivalent of around $0.001 per kilometre....
@Piññed by Morning Invest Yeah, My first investment with Mrs Joan Kathy earned me profit of over $25,530 US dollars, and ever since then she has been delivering
They were so busy figuring out if they could build it, they forgot to ask if they should. Sounds like this AI could become self aware, and we know how that turns out.
Regarding the speculation, If what we heard at the time was correct Tesla was suggesting they wanted to invest in US chip manufacturing capacity but wanted an exemption for importing the hw3 computers because the manufacturing didn’t exist in the US and it was invest OR pay the tariffs. They were denied the exemption. Doesn’t mean they didn’t go ahead overseas but 1) Elon has been asked directly about that and denied it and said that would take years 2) any chance THAT was kept secret? That would be tough. Possible, but I’m skeptical. :)
In the Q&A that followed, it was pointed out that Tesla hasn't built/coded the compiler yet, and that there are some fundamental science yet to be discovered to do so. I was hoping you would address that problem.
the idea of compute plane is something I've (baselessly) believed to be the future of computing for a long time. It just makes so much more sense... - One day, probably even 3D and in "diamond" lattice using photons instead of electrons and whatnot.. - But it's amazing, that my "crackpot" dreams of supercomputers of the future... are approached _today_ in reality, by Tesla :D
In April 2019 Tesla "penned a deal" with Samsung. Samsung agreed to build a Chip Foundry in Austin, Texas (with Tesla promising NOT to build their own Foundry for a specified period of time). Plus Samsung will build a 4680 battery "line" near Tesla's Nevada battery factory. In conjunction with these agreements, Tesla also agreed to purchase IN advice numerous Samsung products in YEARLY increments - i.e. pay for a year's worth of batteries in January, etc. Reportedly Samsung has already produced "chips" for Tesla in quantity.
A few years back, there was a topic about which microchip company will make the first exaflop or exascale computer. Some people say it's intel, some say it's Nvidia and some people say it's AMD. Never have they imagined in their wildest dream that it will be Tesla. And yet still, some people consider Tesla as "just an automobile company like the others" and nothing more. They didn't have any idea what's coming.
First off the thumbnail is excellent. Best scene of Neo with Elon’s head is genius. I do like your tshirt too the don’t mess with Tesla over the Texas state is great. Too bad Texas won’t allow Tesla’s to be sold there. With all the money and future money and jobs they are creating and maintaining you’d think Texas would be kissing their rear on everything.
Please help me understand how Dojo and the on-board cpu are connected. Granted, Dojo is very powerful but that power is not always connected to the car to help it drive, is it? Second question is about the Teslabot. I understand that the 2 million cars are piling up huge amounts of data to enable Dojo to understand the world of driving, but what is going to accumulate all the data of a general nature that will feed Dojo?
Loved the video. :) Thanks for the thoughts. Regarding the fabbing partner, your right that it would be an awesome opportunity for Tesla to build their own fab, but it would be hard to hide. Also we do know that TSMC has been working on Fan-out Wafer level packaging. So even though there was no announcement, it would make sense if it TSMC is the foundry of choice As to using a D1 chip for self driving, the D1 was made for training, would it also be good as an inference chip? I've heard that they are two fairly different processes. Could you explain a little more as to why/how this would work? :) Thanks again!
I was thinking that too. ASML is the premier supplier of lithography systems to produce nanometer technology chips. Not many of these systems are sold and people with knowledge of the industry could easily find out who ASML is selling too.
Work has to be done also there is a price to be paid for that work also a desired time for completion combined with the cost, dojo is a tool employed to satisfy the work to be done and the cost of meeting the timeline.
Great Job. Problem: You didn't use the terms "training node", "CPU" and "D1 chip" in a consistent fashion. There were a couple of instances that I didn't understand which term was correct. Q's: Is the fan-out wafer a passive structure? How are the "known good" D1s attached to the fan-out wafer? How does this compare to Cerebras' wafer scale chip approach?
It’s funny. With Tesla building major portions of their vehicles in house. Being able to see the battery crisis 5 years ago... I said a month and a half ago, how long before they start making their own chips?! 😏 They’re always ahead of the game!
Dojo is amazing, and deploying a production system is going to take the team probably a year. At the moment the highest priority is to pump new versions of FSD as often as practical. I bet Samsung is going to build the chips. Your speculations are are 3 to 5 years ahead.
Seriously doubt Tesla is investing in their own chip fabrication. There would be some benefits.. but the costs and complexities and delay to even get the fab going and staffed.. would be insane
The key is "give them the money" to develop Dojo. In EU you do not get it, you have to prefinance everything yourself and then hope you will get it afterwards back by way of reward or subsidy. It is the very opposite of how venture capital should work. Each year, Europe is lagging two years more behind and this has been going on since March 2000 when corruption divulged the €72 billion allocated for projects like this. NONE of it was spent on anything of value but many friends of government officials had some good years.
It is not really a direct question of "do I have enough compute to train this", you configure the system in a size that you can handle. If you can wait for a week, configure it for so it takes a week. There may be a minimum value of compute for an AI Bandwidth and latency are completely independent. Driving down the highway in a truck full of hard drives has a high latency, but the bandwidth is extremely high. When you walk through a door with a hard drive, a hard drive full of data was transferred through the door in less than a second. For low latency, use optical fibres to transfer a bit to the target, but you have to transfer one at a time. The truck could be faster.
You need to get all the energy out as heat, computing does not use up heat energy. You can put in 1 MW for very short time but over some seconds no more than the 15 kW you can get out.
This is a big stride towards GAI. Send GIA into space, not people. GAI can far better visualize quanta spacetime than we ever can. We have changed our environment to such a degree that we will be dependent on GIA to survive and or to be supplanted.
@15:35 There are very few FABS that can produce these chips. Maybe three of four on the *_planet._* The lithography and the rest of the system required to achieve results like these needs *E-UV* (Extreme Ultra-Violet.)
Not bad for a "car" company. I bet GM, Ford and VW will be announcing their own AI chips in a few weeks and CNBS will carry hour long stories on how Tesla is loosing geound to them. LMAO.
Haha... Do they develop their own software for their vehicles or do they still outsource it? Lol.
Based on Raspberry Pi. 🤣😂🤪
@@graememudie7921 doubt most execs in legacy autos even know what that is haha.
@@graememudie7921 More like a D1 Mini!!!
Anyone remember Ford Aerospace? Concentrating on short short profits is in retrospect a poor strategy.
This has become one of my favorite channels, so grateful for the education. I was a music major in college 20 years ago, but my favorite classes were in renewable energy physics. I feel like I’m back at University geeking out in my favorite classes. Thanks professor. Hope you get a great vacation in soon.
Thank you for watching! And I was a music minor in college, so I'm right there with you: physics and music are a good pairing :)
Exact same feeling here Mike. He is a wonderful orator, easy to listen to but with a deep well of knowledge. Each upload is fast becoming UNMISSABLE !!
Hi @DrKnowitallKnows, thank you for this series. At least I'm starting to feel what all the excitement & hype around AI Day is about.
What is the absolute smallest FAB machine that is used to run tests on?
Back in my day we had small machines to run tests on various materials/processes etc., which we sometimes used when a customer wanted a very small batch of a new product. It was much more economical (in total batch cost) than setting up a machine that could produce at a much lower cost per item on a decent run.
I wonder if you're right & Tesla are producing their own chips, as they don't need to be competitive per chip. Also, if you look at the chip wafer (5 x 5 layout), it has curved edges, indicating the diameter of the wafer is small. It also looks like they didn't go for the max number of nodes (?) per wafer/chip, but spaced them out so they didn't have to stick them down.
I'm not sure if I'm using the right terminology above, as all this is still way above my understanding. But I do understand manufacturing and hope I've made what I'm observing clear enough
@@DrKnowitallKnows Hark! Music IS sheer physics. So physics sounds like music to me. 🤗
I heard Ford and GM are working together to develop their first transistor. They hope to have a working NAND gate by 2025.
And they're building the gates out of old diesel engines. Genius move!
What a burn👏
@@DrKnowitallKnows LOL, good one! Seems like it would be a good time to go short on GM and Ford
I don’t think they’ll have the NAND gate until they can get their AND gate working more reliably 🧐
@@Kenlwallace Or if not?
I'm almost 50.. when I was 7, computers came to many homes.. C=64 :). Then, at age 11, I read a book about the law of accelerating returns. That curve can be felt since about 1995... and it feels like a tsunami since 2016.. I guess 2025 to 2030 is where miracles will seemingly happen.. I learned my lesson about exponentials only by living through them.
The Matrix is totally what I thought of when at AI day they showed video of the real world transformed into a virtual simulation world for training purposes. Great thumbnail :)
The speculation part was super interesting! 👍
Thank you 3000!!!
I needed this fantastic explanation. Professor you have done us layman a massive favor. Love the channel congrats on 25k I hope it grows exponentially and that you get that job at Tesla.
So exciting! This English major is now very close to understanding this amazing achievement, thanks to your interspersing explanations with the Tesla engineer's comments accompanied by visual scripts. Love your enthusiasm for the mind-blowing achievements of Tesla teams. Please keep these coming to educate those of us who find this kind of thinking so difficult.
I think you forget that most Teslas are not made in the US and the US Government has nothing to say about them. These vehicles are generating knowledge all over the world, and all mankind, certainly all Tesla owners, deserve the benefit. Tesla self-driving will be available in cars and robots made by many companies and in every auto-producing country. Tesla self-driving will be bigger than Tesla Auto and Tesla Power combined.
It is easy to think Tesla is all about the USA. Tesla US should catch up with Tesla China after the Fremont shutdown and rework, but once the Model 2 line is up in China, the US may remain number 2.
@@collegiateindependentstudy6437 We will always remember that Tesla was started by a South African in that country... wherever. I can't remember now.
@@edocioprovost1896 America.
@@earthengineer8344 No, that's not it... I will think of it.
FYI: It's 9 TB/s per edge, so 36 TB/s per tile.
This channel is a youtube gem. That said, this video is among my top 3 picks for super interesting and informative videos. F***ing brilliant. Since this subject matter goes completely over my head, I appreciate Dr. Know It All breaking it down by stating along the lines of "this is what it means"...I need that. I need the translation into 10th grade English to allow this technology to penetrate my understanding.
The "what's happening next" gave me chills. Tesla is building their own Dojo chips? OK...but how could Tesla have already bought and operated a Fab? Super expensive. Never mind that the expense line-items are not listed in the quarterly reports....there is little evidence within their financials of any such massive use of funds into a Fab investment .
Training computer. EXACTLY... Sooo interesting... You have to plan in ADVANCE to make sure you have it by the time you might need it. They never cease to amaze... Amazing...
Really appreciate the easy to understand explanations and your knowledge on the subject matter! Keep up the good work!
I think a big piece of why they want so much training compute is reinforcement learning for the planner. That’s a huge compute sink, but likely critical to getting something to drive really well. That’s how they’re going to get to the end-to-end NN they want.
Especially because of how they use their planner, not just for the Tesla, but for every other vehicle in the scene. That means the planner not only has to predict the correct choice for its own car, based on present and past data, but also the behavior of the other cars around it, entirely from what the Tesla itself can see. Those cars are likely driven by humans, who ARE highly unpredictable and will often choose a solution that isn't optimal.
I was amazed when I saw this in action in a Wham Bam video, where the Tesla anticipated that a speeding car behind it in another lane would be forced to merge into the Tesla's lane due to a truck in front, and pre-emptively slowed down well before the driver even realized a danger was approaching. To me this can only happen if the predictor was run on the speeding car and correctly guessed that its most likely course of action was an aggressive lane change that would cut off the Tesla.
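For anyone curious what "running the planner on the other cars" could look like mechanically, here is a minimal toy sketch, purely my own illustration and not Tesla's actual planner: candidate ego maneuvers are scored against a predicted trajectory of a car that is expected to cut in, and the cheapest one wins. Every name and number below is made up.

```python
# Toy illustration only -- not Tesla's planner. A 1-D "lane" example where the
# planner scores candidate ego acceleration profiles against the *predicted*
# trajectory of a speeding car that is expected to cut into our lane.
import numpy as np

DT, STEPS = 0.5, 10  # 0.5 s per step, 5 s horizon

def rollout(v0, accel):
    """Distance travelled at each step for a constant-acceleration profile."""
    t = np.arange(1, STEPS + 1) * DT
    return v0 * t + 0.5 * accel * t**2

def cost(ego_pos, other_pos, ego_accel):
    """Penalize shrinking headway to the predicted cut-in car, plus harsh maneuvers."""
    gap = other_pos - ego_pos
    proximity = np.sum(np.exp(-np.maximum(gap, 0.0)))  # grows sharply as the gap closes
    return 10.0 * proximity + abs(ego_accel)

# Predicted behavior of the other car: cuts in 15 m ahead at 22 m/s, then brakes hard.
other_pos = 15.0 + rollout(v0=22.0, accel=-4.0)

# Ego is doing 20 m/s; try a handful of candidate accelerations and keep the cheapest.
candidates = np.linspace(-3.0, 1.0, 9)
best = min(candidates, key=lambda a: cost(rollout(20.0, a), other_pos, a))
print(f"chosen ego acceleration: {best:+.1f} m/s^2")  # decelerates before the cut-in happens
```

With these made-up numbers the cheapest candidate is a pre-emptive deceleration, which is the behavior described in that Wham Bam clip.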
These AI day vids are great. Thanks so much for breaking it down for us.
Explaining AI Day is like unpacking a "zip" file.
Soon, Tesla will run simulations of the next Dojo chip design on the current Dojo training tiles, find the most efficient design from those simulations, and develop the next-generation Dojo chip.
Why build the 'machine to build the machine' when you can rent it?
@@gregbailey45 If you can build better than others, why rent? The Tesla Dojo chip has better off-chip bandwidth than the state-of-the-art competitors. Tesla offers 36 TB/s of off-tile bandwidth, so they can seamlessly interconnect as many tiles as they want. “This was entirely designed by Tesla team internally, all the way from the architecture to the package. This chip is like GPU-level compute with a CPU level flexibility and twice the network chip-level I/O bandwidth.” This is only the beginning; they will reach a 10x improvement of the tile soon.
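A quick sanity check on those figures: only the 9 TB/s per edge and 36 TB/s per tile numbers come from AI Day, while the 400 Gbit/s comparison link is my own assumption for scale.

```python
# Back-of-envelope check of the quoted Dojo tile bandwidth (illustrative only).
edge_bw_tbps = 9                       # TB/s per tile edge, as quoted at AI Day
edges_per_tile = 4                     # a square training tile exposes four edges
off_tile_bw = edge_bw_tbps * edges_per_tile
print(f"off-tile bandwidth: {off_tile_bw} TB/s")        # -> 36 TB/s

# For scale: a 400 Gbit/s datacenter link moves about 0.05 TB/s,
# so a single tile edge is on the order of ~180 such links.
link_tbps = 400 / 8 / 1000
print(f"one edge is about {edge_bw_tbps / link_tbps:.0f}x a 400 Gbit/s link")
```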
Fabulous explication, DKIA!
Cool thumb! And congratulations on 25k:))
Outstanding as always, Doc! I LOVE that T-shirt too!
Thank you
Best video icon of the month. And another well done distillation of AI Day.
Very informative and entertaining too. Love your blog professor...
An in-house fab plant would definitely be awesome, but I don’t think they have one yet or we would know about it. BTW, a 7 nm fab is seriously tricky, needing serious kit, not to mention operators. Somewhere I recall AMD being mentioned as the supplier, but don’t quote me on that.
Bandwidth is the width of the hose, but latency is more like the time it takes for the flow to get through the hose (the inverse of the speed of data through the hose), so shorter hoses are better. Waiting around for other processes to complete is about maximum concurrency and how well synchronised the components are, which has ultimate scalability limits but depends on how well written the software is.
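One common way to put both into a single number is a simple transfer-time model; this is just a toy sketch with made-up figures, nothing from the presentation:

```python
# Toy version of the hose analogy: time to move data ~= latency + size / bandwidth.
def transfer_time(size_bytes, bandwidth_bytes_per_s, latency_s):
    return latency_s + size_bytes / bandwidth_bytes_per_s

GB = 1e9
# Tiny message: the latency term dominates, so a "shorter hose" wins.
print(transfer_time(1e3, 10 * GB, 1e-6))       # ~1.1e-6 s, almost all latency
# Huge bulk transfer: the bandwidth term dominates, latency is nearly irrelevant.
print(transfer_time(100 * GB, 10 * GB, 1e-6))  # ~10 s, almost all bandwidth
```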
Love this series Dr. Know It All! Can't wait for the next parts.
Nice. One point about the floating-point precision: for neural networks, higher-precision floats like 32-bit are generally a waste in the trade-off of training time versus results. GPUs are more like 32-bit for the sake of colour, but Google’s TPU started the process of halving the precision to double the throughput. CFP8 is likely an 8-bit format that can be reconfigured to 16-bit when needed. That’s my guess anyway. Neuron activation functions just don’t care about high precision, basically.
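To make that concrete, here's a tiny numpy demo of how little typical activation values care about halving the precision; it uses plain float16, not Tesla's configurable CFP8/16 format, and the numbers are only illustrative:

```python
# Illustrative only: halving precision barely perturbs typical activation values.
import numpy as np

x = np.random.randn(1_000_000).astype(np.float32)      # pretend pre-activations
act32 = np.maximum(x, 0.0)                              # ReLU computed in float32
act16 = np.maximum(x.astype(np.float16), np.float16(0)).astype(np.float32)

rel_err = np.abs(act32 - act16).max() / np.abs(act32).max()
print(f"worst-case relative error from float16: {rel_err:.1e}")   # a few times 1e-4 here
# That error is noise next to SGD's own randomness, while the bits moved per value
# are halved -- which is exactly the throughput trade described above.
```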
I appreciate your simplified version of this; as always it is educational and entertaining, and above my reading level! Listening to you go over AI Day with a reasonable understanding of what it is you’re discussing reminds me of a fourth grader trying to read a science book written at a 12th-grade reading level: just keep flipping through the pages, taking note of the words you don’t know to look up definitions for later, and eventually only one or two words per page will need to be looked up ⬆️ and then on to the next level.
DOJO is huge. Will be making a video on this soon as well, and will reference this video!
One of your finest episodes. And that's saying something! I learned a boatload.
Best thumbnail ever
Really interesting thought. Only now do I understand the breakthrough with Dojo. Exciting future.
Good point about the bandwidth - hope you get to take that vacation soon!
Wow great job of explaining. I think I will hold on to my stock forever
Incredible that a car company is doing this. I have an IT background (long retired) and have followed supercomputer performance over the last 50+ years. When I saw exaflop speed I about had a heart attack; that’s a quintillion floating-point operations per second, unbelievable, yet true. Car company? Tesla killers, are you watching?
"Singularity" Here we come!
Thanks a lot for unpacking all of this from your perspective!!
It’s becoming apparent that the AI day was so informationally dense that it will require a 20-40m video for every 2m of the presentation in order to explore/explain the full implications of what was presented.
(For us idiots anyway).
keep in mind, watching videos and trying to learn is not a trait of an idiot
Hahaha! That was why I was crying the night of the presentation. Talk about a fire hose of information!
There is no way they bought/built their own fab. Those are way too expensive.
I think you are right about the hardware — the two chips are Dojo chips.
In the presentation they said those are the verified good chips. What about the ones that fail verification? They will undoubtedly be the failed Dojo chips.
Rooting for you...
Good summary and the speculation part nails it!
"World is becoming Teslas playground"
I don't know men. I am a Tesla fan too, but I start to wonder is it good for the future that a single man, and after he dies a single company will control so many aspects of our lives?
There were powerful men and companies in the history and they never ended caring for the good of all mankind, whatever it was their initial purpose.
I hope competition will catch up eventually.
That's likely because the morality of human society back then (Dutch East India Company times) permitted a company to become so corrupt. Do you think Tesla will have an actual army of 250,000 soldiers? No. Arguably, that's what made the Dutch East India Company such an oppressive force. Tesla will be different, trust me.
@Giantdad Elon won't have an army of 250,000 human soldiers. He'll have an army of 100 million robots and take over the world! BWAAAHAAAHAAHAAAAAAAAA!!!! ;)
Great video once again. One of my favorites.
@22:43 Ooo... A Cray-1 computer. (I even recognize the pattern on the carpet. [Acrylic. How _Seventies..._ 😉 ])
Tesla is “Stark Industries”
...and after generating self-awareness in our Tesla Bot 2.0 using just 1% of the computing resources in the second revision of the DOJO architecture, did we stop there? No! By rerouting the thermal waste heat from our giga-exapod DOJO omni-compute cabinet heatsinks directly into the still-incomplete ITER test fusion reactor, we easily generated break-even nuclear fusion capable of supplying 1.4 million Tesla Starlink Terminator model cars with enough energy to each drive 111 times around the world, at a market equivalent of around $0.001 per kilometre....
They were so busy figuring out if they could build it, they forgot to ask if they should. Sounds like this AI could become self aware, and we know how that turns out.
Just saw your video on your climbing of the Matterhorn, great video!!
Regarding the speculation: if what we heard at the time was correct, Tesla was suggesting they wanted to invest in US chip manufacturing capacity but wanted an exemption for importing the HW3 computers because the manufacturing didn’t exist in the US, and it was invest OR pay the tariffs. They were denied the exemption. That doesn’t mean they didn’t go ahead overseas, but 1) Elon has been asked directly about that, denied it, and said that would take years; 2) any chance THAT was kept secret? That would be tough. Possible, but I’m skeptical. :)
How many ads can be in 1 video, sheesh. Used to feel bad about using ad block but didn’t realize how much it saved me.
Haha, I like it when you nerd out on things :-) Thanks for the video!
Thank you!
Good content!
But wait, there’s more …
Love it!
In the Q&A that followed, it was pointed out that Tesla hasn't built/coded the compiler yet, and that there are some fundamental science yet to be discovered to do so. I was hoping you would address that problem.
The idea of a compute plane is something I've (baselessly) believed to be the future of computing for a long time. It just makes so much more sense...
- One day, probably even 3D and in a "diamond" lattice, using photons instead of electrons and whatnot..
- But it's amazing that my "crackpot" dreams of the supercomputers of the future... are being approached _today_, in reality, by Tesla :D
The Perfect Thumbnail
James Cameron called it. Cyberdyne = Tesla. Seriously.
I’m not sure Cameron has had an original thought in his entire life.
The computer to design the computer. It will be like the day they turned Skynet on.
In April 2019 Tesla "penned a deal" with Samsung. Samsung agreed to build a chip foundry in Austin, Texas (with Tesla promising NOT to build their own foundry for a specified period of time). Plus, Samsung will build a 4680 battery "line" near Tesla's Nevada battery factory. In conjunction with these agreements, Tesla also agreed to purchase numerous Samsung products in advance, in YEARLY increments - i.e. pay for a year's worth of batteries in January, etc. Reportedly Samsung has already produced "chips" for Tesla in quantity.
A few years back, there was a debate about which microchip company would make the first exaflop or exascale computer. Some people said Intel, some said Nvidia, and some said AMD. Never did they imagine in their wildest dreams that it would be Tesla. And yet, some people still consider Tesla "just an automobile company like the others" and nothing more. They have no idea what's coming.
What about the comment that Musk said about how, "we can't just start up a chip factory, that takes years.." paraphrasing...
Live in Canada but family lives in Texas. Love the shirt.
First off the thumbnail is excellent. Best scene of Neo with Elon’s head is genius.
I do like your t-shirt too; the "Don't Mess With Tesla" over the state of Texas is great.
Too bad Texas won’t allow Teslas to be sold there.
With all the money and future money and jobs they are creating and maintaining you’d think Texas would be kissing their rear on everything.
Your shirt could read "Don't Mess With Texla" ;-)
Please help me understand how Dojo and the on-board CPU are connected. Granted, Dojo is very powerful, but that power is not always connected to the car to help it drive, is it?
Second question is about the Teslabot. I understand that the 2 million cars are piling up huge amounts of data to enable Dojo to understand the world of driving, but what is going to accumulate all the data of a general nature that will feed Dojo?
You missed the use of the computer for Starlink...a massive undertaking as well.
Could you discuss the compiler for Dojo?
Hey, I'm a bartender at a Caribbean bar, I take offense at that, sir. Lol. Just kidding, being a bartender on a Caribbean island is my retirement plan lol
Loved the video. :) Thanks for the thoughts.
Regarding the fabbing partner, you're right that it would be an awesome opportunity for Tesla to build their own fab, but it would be hard to hide. Also, we do know that TSMC has been working on fan-out wafer-level packaging. So even though there was no announcement, it would make sense if TSMC is the foundry of choice.
As to using a D1 chip for self-driving: the D1 was made for training, so would it also be good as an inference chip? I've heard that they are two fairly different processes. Could you explain a little more as to why/how this would work? :)
Thanks again!
I was thinking that too. ASML is the premier supplier of lithography systems for producing nanometer-class chips. Not many of these systems are sold, and people with knowledge of the industry could easily find out who ASML is selling to.
Work has to be done; there is a price to be paid for that work, and also a desired time for completion, combined with the cost. Dojo is a tool employed to satisfy the work to be done and the cost of meeting the timeline.
Great Job. Problem: You didn't use the terms "training node", "CPU" and "D1 chip" in a consistent fashion. There were a couple of instances that I didn't understand which term was correct. Q's: Is the fan-out wafer a passive structure? How are the "known good" D1s attached to the fan-out wafer? How does this compare to Cerebras' wafer scale chip approach?
It’s funny. With Tesla building major portions of their vehicles in house, and being able to see the battery crisis coming 5 years ago... I asked a month and a half ago, how long before they start making their own chips?! 😏 They’re always ahead of the game!
Love the series.
I think that a Dojo computer will also end up going to Mars.
Wow! Awesome episode. Any thoughts on how they increase the compute power 10x in the future?
Don’t forget Moore’s Law. In about 6 months all this State of the Art will simply be obsolete.
Dojo is amazing, and deploying a production system is going to take the team probably a year. At the moment the highest priority is to pump out new versions of FSD as often as practical. I bet Samsung is going to build the chips. Your speculations are 3 to 5 years ahead.
I seriously doubt Tesla is investing in their own chip fabrication. There would be some benefits, but the costs, complexities and delay to even get the fab going and staffed would be insane.
The key is "give them the money" to develop Dojo. In EU you do not get it, you have to prefinance everything yourself and then hope you will get it afterwards back by way of reward or subsidy. It is the very opposite of how venture capital should work. Each year, Europe is lagging two years more behind and this has been going on since March 2000 when corruption divulged the €72 billion allocated for projects like this. NONE of it was spent on anything of value but many friends of government officials had some good years.
Luv the matrix graphic meme!
Don't mess with Tesla! Yeah! 💪💪💪
Great thumbnail
Awesome!
They could definitely fab in the future. I think they're limited by EUV machines. Intel, Samsung and TSMC have bought them out for the next few years.
thanks for this, good stuff
GOSH I SURE WISHED I WOULD HAVE FINISHED GRADE 8!!!! I COULD HAVE APPLIED FOR A JOB WITH THESE GUYS AND GET OUT OF MY MIN. WAGE JOB
The first thing one learns in 9th grade is to turn off the Caps lock key.
@@brianbeasley7270 see you are borderline GENIUS just with 1 moe grade of education!!GOOD FOR YOU
This seems important to National Security, especially with incorporation into Drones and similar.
Can any spare cycles on Dojo be used in the future for mining crypto?
Did you watch the Hyperchange interview on Dojo?
I wonder if Gordon Johnson has seen this vid? Might be over his head though.
Even today, I can't find even a rumor of who built the chips on that Dojo tile.
I'm really wondering if they're doing it on their own.
Carnival has some great deals my friend
It is not really a direct question of "do I have enough compute to train this"; you configure the system in a size that you can handle. If you can wait for a week, configure it so it takes a week.
There may be a minimum amount of compute for an AI, though.
Bandwidth and latency are completely independent. Driving down the highway in a truck full of hard drives has a high latency, but the bandwidth is extremely high. When you walk through a door with a hard drive, a hard drive full of data was transferred through the door in less than a second.
For low latency, use optical fibres to transfer a bit to the target, but you have to transfer the bits one at a time. The truck could still be faster overall.
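Rough numbers for the truck comparison, just to make the point concrete; every figure below is my own assumption, not from the video or this comment:

```python
# Back-of-envelope version of the truck-vs-fibre point (all numbers assumed).
drives = 10_000                  # hard drives loaded into the truck
tb_per_drive = 18                # TB of data on each drive
drive_hours = 5                  # time spent on the highway
latency_s = drive_hours * 3600

payload_tb = drives * tb_per_drive           # 180,000 TB in one trip
truck_bw_tbps = payload_tb / latency_s       # ~10 TB/s averaged over the trip
fibre_bw_tbps = 0.0125                       # a 100 Gbit/s fibre link in TB/s

print(f"truck: ~{truck_bw_tbps:.0f} TB/s average, but {latency_s} s before the first byte arrives")
print(f"fibre: {fibre_bw_tbps} TB/s, but the first bit arrives in microseconds")
```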
Just Wow.
You need to get all the energy out as heat — computing does not use the energy up, it all turns into heat. You can put in 1 MW for a very short time, but over more than a few seconds, no more than the 15 kW you can get back out.
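The reasoning is just an energy balance; here is a sketch with an assumed thermal mass — only the 1 MW and 15 kW figures come from the comment above, the rest is made up:

```python
# Energy-balance sketch: burst power above the cooling limit has to heat something up.
p_in = 1_000_000.0         # W, hypothetical burst draw (the "1 MW" above)
p_cool = 15_000.0          # W, sustained heat removal (the "15 kW you can get out")
heat_capacity = 200_000.0  # J/K, assumed thermal mass of tile, cold plate and coolant
max_rise = 30.0            # K, assumed allowed temperature rise before throttling

burst_seconds = heat_capacity * max_rise / (p_in - p_cool)
print(f"a 1 MW burst is sustainable for roughly {burst_seconds:.0f} s")   # ~6 s with these assumptions
# After that, average power has to drop back toward the 15 kW the cooling can remove.
```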
This is a big stride towards GAI. Send GAI into space, not people. GAI can far better visualize quantum spacetime than we ever can. We have changed our environment to such a degree that we will be dependent on GAI to survive, and/or be supplanted.
You said 9 terabytes per second. It is actually 36 terabytes per second on the training tile.
@15:35 There are very few fabs that can produce these chips. Maybe three or four on the *_planet._* The lithography and the rest of the system required to achieve results like these needs *EUV* (extreme ultraviolet).
I/O bandwidth is more important, and harder to achieve, than computation power.