The Real Reason Tesla Built The DOJO Supercomputer!

  • Published Nov 6, 2021
  • The first 100 people to go to www.blinkist.com/theteslaspace are going to get unlimited
    access for 1 week to try it out. You'll also get 25% off if you want the full membership!
    The Real Reason Tesla Built The DOJO Supercomputer! Detailing Tesla's Supercomputer project Dojo, how it works, chip specs, and why Tesla is investing so much into AI.
    Last video: The 2022 Tesla Battery Update Is Here
    • The 2022 Tesla Battery...
    ► Subscribe to our sister channel, The Space Race: / @thespaceraceyt
    ► Subscribe to The Tesla Space newsletter: www.theteslaspace.com
    ► Get up to $250 in Digital Currency With BlockFi: blockfi.com/theteslaspace
    ►You can use my referral link to get 1,500 free Supercharger km on a new Tesla:
    ts.la/trevor61038
    Subscribe: / @theteslaspace
    🚘 Tesla Videos: • Why Tesla Will Destroy...
    🚀 SpaceX Videos: • SpaceX Videos
    👽 Elon Musk Videos: • Elon Musk Developing C...
    🚘 Tesla 🚀 SpaceX 👽 Elon Musk
    Welcome to the Tesla Space, where we share the latest news, rumors, and insights into all things Tesla, SpaceX, Elon Musk, and the future! We'll be showing you all of the new details around the 2021 Tesla Model 3 and Model Y, along with the Tesla Cybertruck when it finally arrives (ours is already ordered!).
    Instagram: / theteslaspace
    Twitter: / theteslaspace
    Business Email: tesla@ellifyagency.com
    #Tesla #TheTeslaSpace #dojo
  • Science & Technology

COMMENTS • 491

  • @TheTeslaSpace
    @TheTeslaSpace  2 years ago +26

    The first 100 people to go to www.blinkist.com/theteslaspace are going to get unlimited
    access for 1 week to try it out. You'll also get 25% off if you want the full membership!

    • @ryvyr
      @ryvyr 2 years ago +1

      I removed this comment from the main body since it did not seem relevant to the subject material, and am relegating it here. Why do you use the seamless mid-video sponsorship method rather than announcing it at the beginning, at the very least, if you insist on a mid-video reel? It really kills the rest of the video, and at times I just click off at that point. Is there an ethical/moral consideration, or not?
      Per your recent video with the CT photoshopped to be black, along with the title, and the note about being "self aware" per the clickbait - was that a sort of hand-wave relying on enough of us not to care?
      I do enjoy your content, though I am disheartened when people seem misleading or irreverent with seamless mid-video sponsorship reels, which feel like a betrayal of trust.

    • @texasblaze1016
      @texasblaze1016 2 years ago

      Where is the DOJO supercomputer being built?

    • @nathanthomas8184
      @nathanthomas8184 2 years ago

      Is it plugged into the black ooze?

    • @glidercoach
      @glidercoach 2 years ago

      Not sure if using climate change models as an example was a good idea, seeing as all models have failed miserably.
      As they say, _"Garbage in, garbage out."_

    • @martinheath5947
      @martinheath5947 2 years ago

      While these computers and AI breakthroughs may in themselves be pure, scientifically and mathematically speaking, the potential for malevolent use is enormous: e.g., 24/7, real-time, comprehensive monitoring and tracking surveillance of entire populations for an all-pervasive, totalitarian social credit system. Recent events around the world relating to pandemic "control measures" suggest our leaders do not have our best interests at the forefront of their concerns. Control is the goal, and I foresee a very dangerous coalescence of supranational elite power once this technology is pressed into service for *their* benefit.

  • @nujuat
    @nujuat 2 years ago +88

    I'm an experimental physics PhD student, and I've written a quantum mechanics simulator that runs on graphics cards. When I was writing it, the top priority was to retain the highest accuracy possible with 64-bit floating point numbers (since we want to know exactly what's going to happen when we test the experiment out in the lab). I think most supercomputers are built to do things like that. However, that accuracy is unnecessary for things like graphics and machine learning, so it makes perfect sense that Tesla would cut it down when designing a supercomputer solely for machine learning purposes. I don't think you got anything wrong.
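
    A minimal numpy sketch of that tradeoff (illustrative only, nobody's production code): the same value of pi stored at three precisions, trading digits for bytes.

        import numpy as np

        # Pi at three floating point precisions: fewer bytes, fewer digits.
        for dtype in (np.float64, np.float32, np.float16):
            v = dtype(np.pi)
            print(v.nbytes, "bytes ->", float(v))
        # 8 bytes -> 3.141592653589793
        # 4 bytes -> 3.1415927410125732
        # 2 bytes -> 3.140625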

    • @superchickensoup
      @superchickensoup 2 years ago +8

      I once used a search bar on a computer

    • @karlos6918
      @karlos6918 2 years ago +1

      The Chern Simons number has a modulo 64 factorization heavenly equation representation which can map onto a binary cellular automaton with states.

    • @muhorozibb2777
      @muhorozibb2777 2 years ago +3

      @@karlos6918 In human words that means😳😳😳?

    • @BezzantSam
      @BezzantSam 2 years ago

      @@superchickensoup I remember my first beer

    • @BezzantSam
      @BezzantSam 2 years ago

      Do you mine ethereum on the side?

  • @TheMrCougarful
    @TheMrCougarful 2 years ago +24

    This is probably another example of a philosophy most often seen at work at SpaceX: the best part is no part. I would probably call Dojo a super-abacus. But for their purpose, an abacus was perfect, so they built the correct machine.

  • @anthonykeller5120
    @anthonykeller5120 2 years ago +9

    40+ years of software engineering, starting with machine interfaces. Very good presentation. If I were at the start of my career, this is where I would want to spend my waking hours.

  • @denismilic1878
    @denismilic1878 2 years ago +33

    Very smart approach: less precise data and more neural networks. Simply put, it's not important whether a pedestrian is 15.1 m or 15.1256335980... m away; what matters is whether he is going to step onto the road or not. For decision making, precise data is not necessary; interpreting and understanding the data is crucial. The second factor making low precision acceptable is that all predictions cover a short time span and the calculations are repeated constantly. The third reason is that sensor inputs are also relatively low quality, but come in huge amounts.
    edit: very good and understandable video.
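
    A toy illustration of that point (the threshold is hypothetical, not anything from Tesla's planner): rounding the distance to half precision changes the number, but not the decision.

        import numpy as np

        distance_m = 15.1256335980                 # "exact" measurement
        coarse_m = float(np.float16(distance_m))   # -> 15.125 at half precision

        BRAKE_THRESHOLD_M = 20.0                   # hypothetical decision threshold
        print(coarse_m)                            # 15.125
        print(distance_m < BRAKE_THRESHOLD_M)      # True
        print(coarse_m < BRAKE_THRESHOLD_M)        # True -> same decision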

  • @StephenRayner
    @StephenRayner 2 years ago +7

    Software engineer here with 15 years' experience. You did a good job.

  • @thomasruwart1722
    @thomasruwart1722 2 years ago +56

    Great video! I spent my entire 45-year career in High Performance Computing, specializing in the performance of data storage systems at the various DoE and DoD labs. I am very impressed with Dojo: its design and implementation, not to mention its purpose. Truly amazing and fascinating!
    Tuesday Humor: Frontera: the only computer that can give you one hexabazillion wrong answers per second! 😈

    • @efrainrosso6557
      @efrainrosso6557 2 years ago +2

      So Frontera is the Joe Brandon Biden of computers. Always wrong with authority and confidence. Not one right decision in 50 years.

    • @prashanthb6521
      @prashanthb6521 2 years ago +3

      Awesome career you had, sir. I am right now struggling to string together a few computers in my basement to make money from the stock market :)

    • @thomasruwart1722
      @thomasruwart1722 2 years ago

      @@prashanthb6521 - that sounds like fun! There are lots of inexpensive single-board computers that you can build clusters with. Some have AI coprocessors as well, to run TensorFlow or whatever suits your needs. I wish you all the best with your projects!

  • @MichaelAlvanos
    @MichaelAlvanos 2 years ago +33

    Great presentation! It filled in the gaps & I learnt some things I wasn't even aware of. Even your comment section is filled with great info!!

  • @stevedowler2366
    @stevedowler2366 2 years ago +17

    Thanks for a very clear explanation of task-specific computing machine design. I've read ... well, skimmed ... er, sampled that Dojo white paper to the point where I glommed onto the idea that lower but sufficient precision yields higher throughput, and thus compute power, for a specific task. Your pi example was the best! Keep these videos coming, cheers.

  • @jaybyrdcybertruck1082
    @jaybyrdcybertruck1082 2 years ago +53

    Fun fact: the computers Tesla has been using to train FSD software today amount to the 5th largest supercomputer in the world. Even that isn't good enough, so they are leapfrogging everything.

    • @ClockworksOfGL
      @ClockworksOfGL 2 years ago +10

      I have no idea if that’s true, but it sounds like something Tesla would do. They’re not trying to break records, they’re trying to solve problems.

    • @jaybyrdcybertruck1082
      @jaybyrdcybertruck1082 2 years ago +4

      @@ClockworksOfGL Here is the actual presentation by Tesla, which explains everything. It's a bit long, but holy cow, it's awesome.
      ua-cam.com/video/j0z4FweCy4M/v-deo.html

    • @scottn7cy
      @scottn7cy 2 years ago +3

      @@ClockworksOfGL They're trying for world domination. Elon Musk is merely a robotic shell. Inside you will find Brain from Pinky and the Brain.

    • @jaybyrdcybertruck1082
      @jaybyrdcybertruck1082 2 years ago

      @@stefanms8803 Small potatoes for a car company then, I guess. Remind me what GM, Ford, and VW have?

    • @abrakadavra3193
      @abrakadavra3193 2 years ago

      @@ClockworksOfGL It's not true.

  • @j.manuelrios5901
    @j.manuelrios5901 2 years ago +4

    Great video! It was never about the EVs for me, but more about the AI and energy storage. TSLA

  • @robert.2730
    @robert.2730 2 years ago +30

    GO TESLA GO 🚀🚀🚀👍🏻😀

  • @lonniebearden9923
    @lonniebearden9923 2 years ago +10

    You did a great job of presenting this information. Thank you.

  • @oneproductivemusk1pm565
    @oneproductivemusk1pm565 2 years ago +10

    Like I told you before:
    I love your commentary, very natural and conversational!
    Keep it up, my man!

  • @jaybyrdcybertruck1082
    @jaybyrdcybertruck1082 2 years ago +21

    It's worth mentioning that Tesla is already planning the next upgraded version of Dojo, which will have 10x the performance of the one they are building today.
    Dojo will be up and running sometime in the second half of 2022; after that, I give it 1 year to turn Full Self-Driving into something the world has never seen. It will take video from all 8 cameras and simultaneously label everything they see, in real time, through time.
    Today it is labeling small clips from individual cameras. This will be a HUGE step change in training once it's running.
    It's going to save millions of lives.

    • @gianni.santi.
      @gianni.santi. 2 years ago

      "after that I give it 1 year to turn Full Self driving into something the world has never seen."
      What we're seeing right now is also never seen before.

    • @TusharRathi-zj1wu
      @TusharRathi-zj1wu 3 months ago

      Not yet

    • @TusharRathi-zj1wu
      @TusharRathi-zj1wu 3 months ago

      Not yet

  • @neuralearth
    @neuralearth 1 year ago +1

    The amount of love I felt for this community when you compared it to Goku and Frieza made me feel like there might be somewhere on this planet where I might fit in and that I am not as alone as I feel. Thank you TESLA and ELON and NARRATOR GUY.

  • @thefoss721
    @thefoss721 2 years ago +3

    Dude, your videos are super solid! I'm super impressed with the info and knowledge, and the slight bit of humor to keep things moving swiftly.
    Can't wait to hear some more info!

  • @costiqueR
    @costiqueR 2 years ago +2

    I really enjoyed it: a comprehensive and clear presentation. Thanks!

  • @raymondtonkin6755
    @raymondtonkin6755 2 years ago +5

    It's not just FLOPS, it's the adaptive algorithms too! The structure of dimensions in a neural network ... pattern recognition, nondeterministic weighted resolution 🤔 and memory

  • @scotttaylor3334
    @scotttaylor3334 2 years ago +2

    I, for one, welcome our computer overlords... Three comments about the video:
    Fantastic video! Tons of data and lots of background. Love it.
    You made an analogy with Canada getting rid of the $1 bill, and I think you indicated that it reduced the number of coins we carry around, but my experience is exactly the opposite. I find that I come home with a pocket full of change every time I go out and use cash...
    Second thing: Nvidia is pronounced "en-vidia". I used to play on the hockey team down in San Jose, California.
    Again, thanks for the great video and great presentation.

  • @dan92677
    @dan92677 2 years ago +1

    Both interesting and informative!! Thank you...

  • @d.c.monday4153
    @d.c.monday4153 2 years ago +24

    Well, I am not a computer nerd! But the parts you explained that I knew were right, and the parts you explained that I didn't know sounded right! So I am happy with that. Well done.

  • @incognitotorpedo42
    @incognitotorpedo42 2 years ago +53

    When you start the video with a long (sometimes angry/defensive) tirade about not knowing anything about supercomputers, it makes me wonder whether any of it is going to be worth listening to. You actually did a pretty good job once you got to it.

    • @KineticEV
      @KineticEV 2 years ago +1

      I was thinking the same thing, especially at the beginning with the supercomputer vs. the human brain. I think that was the only thing I disagreed with, since we know the whole point is that some companies are trying to solve the AI problem but always come up short.

    • @kiaroscyuro
      @kiaroscyuro 2 years ago +3

      I listened to it anyway, and he got quite a bit wrong

    • @ravinereedy204
      @ravinereedy204 2 years ago +3

      Not everyone has a degree in CS... I do, and he explained a lot of things pretty well. The thing is, he knows the limits of his knowledge and does his best to explain anyway. How are you gonna bash the guy for that? lol. I suppose I understand what you mean, though. At least he is upfront about it and doesn't lie to the viewers to fill the gaps?

    • @vsiegel
      @vsiegel 2 years ago

      @@ravinereedy204 I think he did not bash the author; he pointed out that there is a risk of losing viewers early because they misunderstand what he says.

    • @ravinereedy204
      @ravinereedy204 2 years ago

      @@vsiegel Sure, maybe that's what he was implying, but that's not what he said, though lol

  • @PeterDoingStuff
    @PeterDoingStuff 2 years ago

    Thanks for making this video, very informative about HPC

  • @emilsantiz3816
    @emilsantiz3816 2 years ago +2

    Excellent Video!!! A very concise explanation of what Dojo is and is not, and its capabilities and limitations!!!!!!

  • @craigruchman7007
    @craigruchman7007 2 years ago +3

    Best explanation of Dojo I've heard.

  • @JayTemaatFinance
    @JayTemaatFinance 2 years ago +1

    Great content. Funny analogies. Commenting for the algorithm. 👍🏼

  • @sowjourner
    @sowjourner 1 year ago

    Amazing... exactly at my level of comprehension, without googling in conjunction with listening. Impressive. I immediately subscribed... I never subscribe to any channel. My expectation is hearing more at this perfect and engaging level. A BIG thanks!!

  • @markbullock3741
    @markbullock3741 2 years ago

    Thank you for the upload.

  • @owenbradshaw9302
    @owenbradshaw9302 2 years ago +8

    Great video. I will say, Dojo has the advantage of incredibly low latency, so the entire supercomputer can process data efficiently, regardless of floating point format. Lots of FLOPS are useless if you can't transfer the data between nodes very fast; it's like trying to push a fire hydrant's worth of water through a garden hose. This is one of the big factors in how good Dojo is.

    • @vsiegel
      @vsiegel 2 years ago +1

      It is still floating point numbers, just less precise (lower resolution, basically). You do not need the precision, and if a number uses less precision, it uses less memory. You cannot transfer the data faster, but you can transfer more in the same time: the latency does not change, but the throughput doubles if you use half the precision.
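
      A two-line numpy check of that doubling (the byte budget is a made-up number, purely for illustration):

          import numpy as np

          link_budget_bytes = 1_000_000  # hypothetical per-transfer byte budget
          print(link_budget_bytes // np.dtype(np.float32).itemsize)  # 250000 values
          print(link_budget_bytes // np.dtype(np.float16).itemsize)  # 500000 values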

  • @NarekAvetisyan
    @NarekAvetisyan 2 years ago +8

    The PS5 is 10.2 TFLOPS of FP32, btw, so one of these Tesla tiles is only 2 times faster, not 35.
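
    The two numbers reconcile if they refer to different precisions: the ~35x headline compares D1 BF16 throughput against the PS5's FP32, while a like-for-like FP32 comparison of a single D1 chip gives roughly 2x. A quick check using the figures reported at Tesla's AI Day for one D1 chip (treat them as approximate):

        d1_bf16_tflops = 362.0   # one D1 chip, BF16/CFP8 (reported)
        d1_fp32_tflops = 22.6    # one D1 chip, FP32 (reported)
        ps5_fp32_tflops = 10.2   # PS5 GPU, FP32

        print(d1_bf16_tflops / ps5_fp32_tflops)  # ~35.5x, mixed-precision comparison
        print(d1_fp32_tflops / ps5_fp32_tflops)  # ~2.2x, like-for-like FP32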

  • @oneproductivemusk1pm565
    @oneproductivemusk1pm565 2 years ago +5

    I agree that image is too graphic, but it's perfect for the occasion! Lol😂😂😂

  • @pwells10
    @pwells10 2 years ago +4

    I subscribed based on the thumbnail. I liked and commented because of the quality of the content.

  • @BreauxSegreto
    @BreauxSegreto 2 years ago +4

    Well done 👍 ⚡️

  • @ChaJ67
    @ChaJ67 2 years ago +30

    To my understanding, at least, with current technology it is impossible to make a chip over a certain size and get perfection. This is what limits GPU sizes, which are way smaller than a wafer. The only way to do wafer scale is to design the chip to work around any and all defects. So they may actually use nearly 100% of the wafers, just with a number of sub-components disabled because of defects.
    The reason wafer scale is so important is the heat dissipation of interconnects. The reason we have gone so long without GPU chiplets is that, with all of the interconnects, you can't just distribute a GPU across multiple dies to get better performance. Instead you have a multi-pronged interconnect nightmare, one of the problems being that the sheer heat generated in the die-to-die interconnects outweighs any benefit from spreading across more dies. While there is talk of MCM GPUs from AMD, and AMD already has MCM CPUs, the CPUs are designed with particular limitations to allow chiplets to work; the issues in making an MCM GPU possible have been studied for years, and it looks like they may have come up with an acceptable solution that makes spreading across multiple dies beneficial. Wafer scale takes a different approach: everything is on the same wafer, so the interconnect issue is eliminated, at the cost of having to deal with the defects of neighboring silicon on the wafer instead of chopping everything up and throwing out all of the defective pieces (at least those defective to the point where more common designs cannot work).
    The only way to dissipate the heat from so much silicon in one spot is through liquid cooling, so there is actually another layer on top, which is the water block, if I understand correctly. Another great thing about liquid cooling is that you can just bring the heat to outdoor radiators and dissipate it. Something I would be interested in: Tesla seems to have high temperatures figured out, allowing them to boost the performance of the power electronics in their cars, so it would be interesting to know whether Dojo can use a simple high-heat-load outdoor radiator to cool the supercomputer and thus save a bunch on cooling. Cooling can be quite an expensive process, especially if traditional forced-air CRACs are used, so a liquid loop that, from a power perspective, needs little more than pumps to move the liquid and fans over the radiators would be a huge power savings. Chilling air to 65 F (about 18 C) and then blowing it over high-performance computer parts with crazy high-powered fans burns a tonne of power, especially if it is 115 F (over 45 C) outside.
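
    A back-of-envelope check of what such a liquid loop has to move (my assumptions, not Tesla's specs): carrying the roughly 15 kW quoted per training tile with an assumed 10 C coolant temperature rise takes only a modest water flow.

        # Q = m_dot * c_p * dT, solved for the mass flow rate m_dot
        P_watts = 15_000   # W, per-tile heat load quoted in the video
        c_p = 4186         # J/(kg*K), specific heat of water
        dT = 10            # K, assumed inlet-to-outlet temperature rise

        m_dot = P_watts / (c_p * dT)
        print(round(m_dot, 3), "kg/s")        # ~0.358 kg/s
        print(round(m_dot * 60, 1), "L/min")  # ~21.5 L/min of water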

    • @goldnutter412
      @goldnutter412 2 years ago +1

      If the car is moving, you can get near-free airflow; ram it in with the right fluting or whatnot.
      Clock speed and routing on chip know what is coming well before human cognition would kick in. "I stopped, it must be underclock time" happened well over a second ago; slowing down toward an expected stop is a highly predictable case, so most of the time it won't be fooled. Even if it is, clocking back up from 5% to 100% is so fast it is "instant" to our perception.
      So zero issues should be expected; the wafers that go in should really last, by the sounds of it. Nice essay, cheers. I do enjoy when someone doesn't ignore centigrade. History Channel, shame on you and the ALONE show: always Fahrenheit, never a conversion. I was always telling someone, just saying, that that temperature in F is below 0 in C as well, which, explained to a layman, doesn't seem right, because you first subtract 32 and then almost halve it! Lucky the temps don't swing to -40 (aka -40, lol), the coolest temperature to almost die in, but I've not seen it yet.

    • @denismilic1878
      @denismilic1878 2 years ago

      Of course, all these wafers have redundancy built into them, but this is not a new idea. ua-cam.com/video/LiJaHflemKU/v-deo.html

    • @davidelliott5843
      @davidelliott5843 2 years ago

      The simple way to cool computer processors is to chill the server room. It's not efficient, but it does the job. Direct cooling of the wafer with "water"-cooled heat sinks is far more efficient, but the plumbing soon gets seriously complicated.

    • @vsiegel
      @vsiegel 2 years ago

      @@goldnutter412 Thank you for fighting for correct or even sensible use of temperature units. (Maybe it is good that no aliens visit this planet. Not using common units would be really embarrassing.)

    • @traniel123456789
      @traniel123456789 2 years ago

      @@davidelliott5843 Plumbing is complicated when you need 3rd-party manufacturers to install their equipment; it is the preferred way of doing things in a homogeneous datacenter. Fans consume a *lot* of power, and you can't keep making them go faster. There are even immersion cooling systems in some new datacenters to improve energy efficiency.

  • @miketharp4914
    @miketharp4914 2 years ago

    Great report.

  • @sundownaruddock
    @sundownaruddock 2 years ago

    Thank you for your awesome work

  • @Bianchi77
    @Bianchi77 2 years ago

    Nice video clip, keep it up, thank you :)

  • @Fitoro67
    @Fitoro67 2 years ago +2

    Excellent presentation! TESLA's approach in its DOJO project speaks to the point that the most complex things are made of simple parts.
    This kind of thinking, contrary to the idea of absolute perfection, leads us to incredible potential.
    😀

  • @gregkail4348
    @gregkail4348 2 years ago

    Good presentation!!!

  • @Leopold5100
    @Leopold5100 2 years ago

    excellent

  • @MrGeorgesm
    @MrGeorgesm 2 years ago

    Bravo! It does help in understanding the evolution of Tesla's competitive advantage in FSD and related areas. Thank you!

  • @henrycarlson7514
    @henrycarlson7514 2 years ago

    Interesting, thank you.

  • @konradd8545
    @konradd8545 2 years ago +34

    ASI is beyond our reach for at least 100 years, or until we have AGI (Artificial General Intelligence). AGI in itself is infinitely more complex than the comparatively small task of learning how to drive. Obviously, I'm not saying that self-driving cars are an easy task in terms of computing, but our brain does it infinitely better, faster, and on only 20 W of energy. I love how laypeople overestimate the power of HPC or machine learning and underestimate the power of our brains. It's like comparing a single light bulb to a massive star 😂

    • @vivekpraseed918
      @vivekpraseed918 2 years ago +3

      Exactly...not all supercomputers put together can rival the ingenuity of a single rat's or bird's brain (or maybe even bacterial colonies with zero neurons). Apes are nearly AGI

    • @memocappa5495
      @memocappa5495 2 years ago +3

      Advancements here are exponential, doubling every 9 months, and that rate itself is improving. It'll happen in the next 5-10 years.

    • @dogecoinx3093
      @dogecoinx3093 2 years ago +2

      100 years? More like 6 months ago 5/3/21

    • @konradd8545
      @konradd8545 2 years ago +2

      @@memocappa5495 Yeah, sure. The same exact predictions were made around 50-60 years ago. And do we have AGI (let alone ASI)? Not even remotely close. It's not about computing and crunching trillions of FLOPS; it's about being able to learn and adapt to any situation based on experience, and about a million other things. There are two main problems with developing AGI. First, human intelligence is not yet well understood; even the definitions differ from scientist to scientist. So how on earth are we naive enough to think that we can develop something similar if we don't understand our own natural intelligence? Second, we are trying to develop AGI on the Von Neumann architecture, which is a futile attempt in itself, unless we want to spend the energy of the entire universe on a 1-second simulation of a human brain 😂 I can only see neuromorphic computing as a possible candidate, but it is in its infancy. So, despite what the media and lay sources say, we are nowhere near AGI. Sorry (not sorry) to burst the bubble.

    • @konradd8545
      @konradd8545 2 years ago

      @@dogecoinx3093 what are you talking about?

  • @amosbatto3051
    @amosbatto3051 2 years ago +7

    Very poor info on the D1 at 7:55. The wafer of 25 D1 chips is probably designed to work around bad chips, so they don't have to throw away the entire wafer. Also, Tesla is not the first to make whole-wafer designs with many processors: both UCLA and Cerebras have been doing this since 2019, and there was a company back in the 1980s doing the same.

  • @arthurwagar6224
    @arthurwagar6224 2 years ago

    Thanks. Interesting but beyond my understanding.

  • @howardjohnson2138
    @howardjohnson2138 2 years ago

    Thank you

  • @Jesse_Golden
    @Jesse_Golden 2 years ago

    Good content 👍

  • @markrowland1366
    @markrowland1366 2 years ago +1

    While the video mentions Dojo needing twelve units to do what is impressive, the architecture is infinitely expandable. A stand-alone single unit might fit in a bedside cabinet; maybe twelve might take up one wall of a bedroom.

  • @helder4u
    @helder4u 2 years ago

    refreshing, thanx.

  • @EdwardTilley
    @EdwardTilley 1 year ago

    Smart video!

  • @erickdanielsson6710
    @erickdanielsson6710 2 years ago

    Kool beans. I worked on FPS (Floating Point Systems) array processors in the late '70s: 12 MFLOP, 64-bit systems, hot stuff then. It would take months to solve a problem. I progressed through the years, ending my industry work with SGI/Cray, and spent the last 15 years with DoD and high-speed machines. But this is a step above. Thanks for sharing.

  • @nickarnoldi4304
    @nickarnoldi4304 2 years ago +6

    Tesla will most likely keep all ExaPODs in-house and offer a subscription to tile time.
    The Tesla Bot platform will use a Dojo subscription service for training. A VR headset with tactile gloves would allow a user to perform their very complex task, and the client could send builds up to the cloud. Tesla built Dojo with scalability at its core.
    Dojo is the gateway to AGI.

  • @meshuggeneh14850
    @meshuggeneh14850 2 years ago

    Well done

  • @matthewtaylor9066
    @matthewtaylor9066 2 years ago

    Thanks, that's cool. Fantastic work on the story. Could you do more on Dojo?

  • @norwegianblue2017
    @norwegianblue2017 2 years ago +3

    Anyone else remember when there was talk about hitting the ceiling on computing power with the 486 processor? This was back in the early 1990s.

    • @goldnutter412
      @goldnutter412 2 years ago

      MS-DOS 3.3... hmm, okay, easy enough... might be a coder.
      Next decade... not a chance in hell, no thank you and goodbye.

  • @johntempest267
    @johntempest267 2 years ago

    Good job.

  • @Nobody-Nowhere
    @Nobody-Nowhere 2 years ago +10

    Cerebras is doing wafer-scale AI chips. This year they released the 2nd-gen chip, and they announced the first version back in 2019. So Tesla is not the only one, or the first, doing this.

    • @godslayer1415
      @godslayer1415 2 years ago

      You are fucking clueless.

    • @godslayer1415
      @godslayer1415 2 years ago

      @@IOFLOOD With TSMC's atrocious defect levels, probably half that "wafer" is dead.

    • @gabrielramuglia2055
      @gabrielramuglia2055 2 years ago +1

      @@godslayer1415 In a traditional monolithic-die design, one bad transistor could force you to disable an entire CPU core, memory channel, or other large critical structure. If you instead design with a larger number of smaller structures that are intended to work together and route around any dead spots, your effective rate of working silicon can be dramatically higher for the same number of defects. For example, as few as a dozen defects might make a 1-billion-transistor CPU completely unusable; with 1 defect per 100 million transistors on average, most of your CPUs would then be unusable. It seems silly to blame the fab and call that defect rate high (1 in 100 million is insanely good); it's just that the tolerances required are insane: maybe that CPU design needs a 1-in-500-million defect rate. A fault-tolerant design, by contrast, may lose only 1% of its computing capacity to the exact same defects.
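
      A small sketch of that argument using the textbook Poisson yield model, yield = exp(-defect_density * area); the numbers are illustrative, not any fab's real figures:

          import math

          defects_per_cm2 = 0.1   # assumed defect density
          die_area_cm2 = 6.0      # one large monolithic die

          # Monolithic die: any single defect scraps the whole die.
          print(math.exp(-defects_per_cm2 * die_area_cm2))    # ~0.55 yield

          # Fault-tolerant: 100 small units, bad ones simply get disabled.
          unit_yield = math.exp(-defects_per_cm2 * die_area_cm2 / 100)
          print(1 - unit_yield)   # ~0.006 -> lose ~0.6% of capacity instead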

    • @zoltanberkes8559
      @zoltanberkes8559 2 years ago +2

      Tesla's Dojo is not a wafer-scale chip. They use normal chip technology and put the chips on a wafer-sized interconnect.

  • @teddygreene2000
    @teddygreene2000 2 years ago

    Very interesting

  • @rkaid7
    @rkaid7 2 years ago

    Enjoyed the pants flop and odd swear word. Great video.

  • @francisgricejr
    @francisgricejr 2 years ago

    Wow, that's one hella fast supercomputer!

  • @raphaelgarcia3636
    @raphaelgarcia3636 2 years ago

    Well explained... I understood it, and I'm no computer expert by any means... lol... and it was entertaining. TY :)

  • @ModernDayGeeks
    @ModernDayGeeks 2 years ago

    Awesome video explaining Tesla's supercomputer. Knowing that Tesla may integrate this into its AI work, like the Tesla Bot, means they can further improve how we understand AI today!

  • @Philibuster92
    @Philibuster92 2 years ago +2

    This was communicated so well and so clearly. Thank you.

  • @somaday2595
    @somaday2595 10 months ago

    @ 9:20 -- one tile, 18,000 A and a 15 kW heat load? Is something like liquid nitrogen removing the heat? Also, is that 18 kA the max, with the average more like 5 kA?
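
    Those two figures are consistent with sub-1 V logic supplies rather than anything exotic; a quick sanity check (my arithmetic, not official specs):

        # P = V * I, solved for the supply voltage V
        P = 15_000    # W, quoted per-tile heat load
        I = 18_000    # A, quoted per-tile current
        print(P / I)  # ~0.83 V: modern logic runs below 1 V, so huge currents
                      # at tiny voltages are expected. The heat is plausibly
                      # handled by the liquid cooling other comments describe,
                      # not liquid nitrogen.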

  • @vsiegel
    @vsiegel 2 years ago +1

    Practically speaking:
    AI training normally runs on Nvidia graphics cards, which double as AI training accelerators.
    Dojo is just a faster AI training accelerator. Ideally you can simply choose to use Dojo instead of Nvidia, and your program does the same as before, but much faster.
    Alternatively, you can make your AI larger (similar to raising the resolution on a screen) by just enough that it runs at the same speed as before, but the AI is better at what it does.
    How it is done, and how much faster it is, is mind-blowing.
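
    A hypothetical sketch of that drop-in idea (none of these backend names are real APIs; the point is only that the training loop itself need not change):

        def train_step(batch, model, backend):
            x, y = backend.to_device(batch)      # move data onto the accelerator
            loss = model.forward_backward(x, y)  # identical math on any backend
            model.apply_gradients()
            return loss

        # backend = CudaBackend()   # today: Nvidia GPUs
        # backend = DojoBackend()   # hypothetically: same loop, faster hardware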

  • @GlennJTison
    @GlennJTison 2 years ago

    Dojo can be configured for larger floating point formats.

  • @ottebya
    @ottebya 2 years ago

    BEST summary of that white paper I have heard. Really impressive, since every other video that tries to explain it is a mess. This is such complex stuff, jeez.

  • @randolphtorres4172
    @randolphtorres4172 2 years ago

    THANKSGIVING

  • @Human-uv3qx
    @Human-uv3qx 2 years ago +2

    Support ♥️

  • @ambercook6775
    @ambercook6775 2 years ago

    It all sounded logical to me! Lol. I love your channel.

  • @TheRealTomahawk
    @TheRealTomahawk 2 years ago +3

    Hey, did Alan Turing use a supercomputer to crack the Enigma code? That's what this reminded me of...

    • @jabulaniharvey
      @jabulaniharvey 2 years ago +2

      Found this: "A young man named Alan Turing designed a machine called the Bombe, judged by many to be the foundation of modern computing. What might take a mathematician years to complete by hand took the Bombe just 15 hours. (Modern computers would be able to crack the code in several minutes... thirteen, to be precise.)"

  • @annielankford984
    @annielankford984 2 years ago

    Tesla's a genuine company!!👍👍👍👍

  • @citylockapolytechnikeyllcc7936
    @citylockapolytechnikeyllcc7936 2 years ago +2

    Dumb this down one more level, and it will be comprehensible to those of us outside the lab-coat set. Very interesting presentation.

    • @jamesluccin4553
      @jamesluccin4553 2 years ago

      🤣🤣🤣 bruh I understood like 70% of it

  • @charleslauter5035
    @charleslauter5035 2 years ago +1

    Where is this computer located?

  • @zachariahstovall1744
    @zachariahstovall1744 2 years ago

    Smooooth segue

  • @invespriatigation6113
    @invespriatigation6113 2 years ago

    Good

  • @kstaxman2
    @kstaxman2 2 years ago

    Tesla is always ahead on science and technology.

  • @robertmont100
    @robertmont100 2 years ago

    Adding double precision is a ~15% area hit for the total chip

  • @kimwilliams722
    @kimwilliams722 2 years ago

    I also appreciate it when people keep their graphic language to themselves

  • @tireman91
    @tireman91 2 years ago

    Beautiful! Just want to remind everyone... DOJO 4 DOGE!

  • @thegreatdeconstruction
    @thegreatdeconstruction 2 years ago

    IBM made a tile-based CPU for supercomputers as well, in the '90s.

  • @rgap3944
    @rgap3944 2 years ago

    What is the computer's price, and what Tesla OS is available?

  • @alexforget
    @alexforget 2 years ago

    Another thing that strikes me about Dojo is the bandwidth.
    Most computers can achieve only a small fraction of their advertised power because of bandwidth limitations.
    Dojo's interconnects between chips and wafers mean no slowdown waiting on data access. There is probably a 10x factor in speed right there that is easily overlooked.

  • @ManMountainManX
    @ManMountainManX 2 years ago

    TY.

  • @mmenjic
    @mmenjic 2 years ago

    15:48 If that were the case, then every first big thing in history would have resulted in major development in its field, but often that is not the case; usually the first just proves the concept, and then the second, third, and others improve, really innovate, and change things significantly.

  • @larryroben1683
    @larryroben1683 2 years ago

    GOD *** THE AUTHORITY & CREATOR ****

  • @kenleach2516
    @kenleach2516 2 years ago

    Interesting

  • @charleslauter5035
    @charleslauter5035 2 years ago

    Where is this computer located? Please

  • @gti189
    @gti189 2 years ago

    I'm an idiot and I understood this easily. Great video, thank you.

  • @donwanthemagicma
    @donwanthemagicma 2 years ago +3

    A lot of companies don't want to take on the risk of making a system like what Tesla is doing and having it not be adopted, because it also brings down the amount of computing something would need in order to get the calculations right, and that's only if everyone adopts it.

    • @menghawtok7837
      @menghawtok7837 2 years ago

      If Tesla cracks the autonomous driving puzzle then the financial return would be many times the investment put in. Perhaps most companies don’t have a single use case that can potentially reap such a high return, or management that’s willing to put in the investment to do it.

    • @donwanthemagicma
      @donwanthemagicma 2 years ago +1

      @@menghawtok7837 most other companies do not have the people that could even begin to design a system like that in the first place

  • @sandiegoray01
    @sandiegoray01 2 years ago

    Thank you. I'm only concerned about FSD at this point. As far as I can see (not a super-far distance), all other computing needs are gradually being fulfilled. My association with computers in business has been terminated, as I'm retired. Now my only real connection with computers is trying to find one that will actually be delivered to me, and after that, one that doesn't die on me after 3 months like my last purchase, combining those needs with a high-end personal computer that will satisfy my rather complex computing needs in one package.

  • @CardGamesTV1
    @CardGamesTV1 2 years ago +1

    Skynet approves

  • @automateTec
    @automateTec 2 years ago +1

    No matter how large the computer, GIGO (garbage in, garbage out) still applies

  • @yulpiy
    @yulpiy 2 years ago +4

    It's N-vidia, not Nevidia, btw

    • @gohansaru7821
      @gohansaru7821 2 years ago

      YouTube offered to translate that into English!

  • @carloscruz7317
    @carloscruz7317 2 years ago

    the days are numbered.

  • @iamthetriplet
    @iamthetriplet 2 years ago

    😂😂😂Great video!!!

  • @thomasruwart1722
    @thomasruwart1722 2 years ago +3

    As a retired supercomputing weenie: the benchmarks used to determine the speed of a supercomputer use all the floating point and integer sizes. So your statement, if I may paraphrase, that 64-bit floating point is the most important is not entirely correct. Yes, it is important. But one researcher I have known and worked with for over 35 years writes and regularly runs his Computational Fluid Dynamics (CFD) code on every new DoE and DoD supercomputer. His CFD code performs best with 32-bit floating point and is being developed to utilize the 16-bit floating point capabilities of newer Xeon processors.
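
    A tiny demo of why the right size is workload-dependent rather than "64-bit or bust" (pure numpy, illustrative only): naively accumulating many small terms in float16 stalls once the running sum gets large, while float32 stays near the true value.

        import numpy as np

        terms = np.full(100_000, 0.1, dtype=np.float16)

        acc16 = np.float16(0.0)
        for t in terms:                    # float16 running sum
            acc16 = np.float16(acc16 + t)
        print(float(acc16))                # stalls near 256: each new 0.1 is
                                           # smaller than half a float16 ulp
        print(terms.astype(np.float32).sum())  # ~9997.6, close to the true sum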

    • @knightwolf3511
      @knightwolf3511 2 years ago

      Looking at the comment section, the video got a few things wrong...

    • @thomasruwart1722
      @thomasruwart1722 2 years ago

      @@knightwolf3511 - Yup, but overall Dojo is pretty interesting and amazing. Out of curiosity, are you an old retired computer guy like me?

  • @mmenjic
    @mmenjic 2 years ago

    14:34 Why do you think 64-bit is better for weather than 16 or 8?
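
    One hedged answer: weather models are chaotic, so rounding errors compound exponentially. Iterating the logistic map (a toy chaotic system standing in for an atmosphere model) at two precisions shows the trajectories decorrelating within a few dozen steps:

        import numpy as np

        x16 = np.float16(0.3)   # half-precision trajectory
        x64 = np.float64(0.3)   # double-precision trajectory
        for _ in range(60):     # logistic map: x -> 3.9 * x * (1 - x)
            x16 = np.float16(3.9) * x16 * (np.float16(1.0) - x16)
            x64 = 3.9 * x64 * (1.0 - x64)
        print(float(x16), float(x64))  # completely different values by now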

  • @yorakillisunt2390
    @yorakillisunt2390 2 years ago

    T600 is being worked on now