Tesla Reveals The New DOJO Supercomputer!

  • Published May 4, 2024
  • Tesla Reveals The New DOJO Supercomputer!
    Last video: Tesla Reveals The New DOJO Supercomputer!
    ►The Tesla Space Merch Store Is Live! Shop our first release while quantities last: shop.theteslaspace.com/
    ► Patreon: / theteslaspace
    ► Join Our Discord Server: / discord
    ► Subscribe to our other channel, The Space Race: / @thespaceraceyt
    ► Subscribe to The Tesla Space newsletter: www.theteslaspace.com
    ► Use my referral link to purchase a Tesla product and get up to $1,300 off and other exclusive benefits. ts.la/trevor61038
    Subscribe: / @theteslaspace
    Welcome to the Tesla Space, where we share the latest news, rumors, and insights into all things Tesla, SpaceX, Elon Musk, and the future! We'll be showing you all of the new details around the Tesla Model 3 2023 and Tesla Model Y 2023, along with the Tesla Cybertruck when it finally arrives (it's already ordered!).
    Instagram: / theteslaspace
    Twitter: / theteslaspace
    Business Email: tesla@ellify.com
    #Tesla #TheTeslaSpace #Elon
  • Science & Technology

COMMENTS • 756

  • @maxwellhouse750
    @maxwellhouse750 9 місяців тому +38

    In an unprecedented move, Dojo changed its name to Skynet.

  • @terryterry1655
    @terryterry1655 9 місяців тому +128

    The fundamental unit of the Dojo supercomputer is the D1 chip,[21] designed by a team at Tesla led by ex-AMD CPU designer Ganesh Venkataramanan, including Emil Talpes, Debjit Das Sarma, Douglas Williams, Bill Chang, and Rajiv Kurian.[5]
    The D1 chip is manufactured by the Taiwan Semiconductor Manufacturing Company (TSMC) using 7 nanometer (nm) semiconductor nodes, has 50 billion transistors and a large die size of 645 mm2 (1.0 square inch).[22]
    As an update at Artificial Intelligence (AI) Day in 2022, Tesla announced that Dojo would scale by deploying multiple ExaPODs, in which there would be:[20]
    354 computing cores per D1 chip
    25 D1 chips per Training Tile (8,850 cores)
    6 Training Tiles per System Tray (53,100 cores, along with host interface hardware)
    2 System Trays per Cabinet (106,200 cores, 300 D1 chips)
    10 Cabinets per ExaPOD (1,062,000 cores, 3,000 D1 chips)
    Tesla Dojo architecture overview
    According to Venkataramanan, Tesla's senior director of Autopilot hardware, Dojo will have more than an exaflop (a million teraflops) of computing power.[23] For comparison, according to Nvidia, in August 2021, the (pre-Dojo) Tesla AI-training center used 720 nodes, each with eight Nvidia A100 Tensor Core GPUs for 5,760 GPUs in total, providing up to 1.8 exaflops of performance.[24] credit: wiki
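
    A quick sanity check of the scaling figures quoted above, as a minimal Python sketch (the per-level multipliers come straight from the list; the variable names are mine):

    ```python
    # Dojo scaling hierarchy as quoted above (AI Day 2022 figures).
    CORES_PER_D1 = 354
    D1_PER_TILE = 25          # D1 chips per Training Tile
    TILES_PER_TRAY = 6        # Training Tiles per System Tray
    TRAYS_PER_CABINET = 2     # System Trays per Cabinet
    CABINETS_PER_EXAPOD = 10  # Cabinets per ExaPOD

    levels = {
        "Training Tile": D1_PER_TILE,
        "System Tray": D1_PER_TILE * TILES_PER_TRAY,
        "Cabinet": D1_PER_TILE * TILES_PER_TRAY * TRAYS_PER_CABINET,
        "ExaPOD": D1_PER_TILE * TILES_PER_TRAY * TRAYS_PER_CABINET * CABINETS_PER_EXAPOD,
    }

    for name, chips in levels.items():
        print(f"{name}: {chips:,} D1 chips, {chips * CORES_PER_D1:,} cores")

    # Training Tile: 25 D1 chips, 8,850 cores
    # System Tray: 150 D1 chips, 53,100 cores
    # Cabinet: 300 D1 chips, 106,200 cores
    # ExaPOD: 3,000 D1 chips, 1,062,000 cores
    ```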

    • @balaji-kartha
      @balaji-kartha 9 місяців тому +8

      Elon, with his outrageously audacious visions, attracts the most talented and brilliant people to his companies ❤

    • @jaimeduncan6167
      @jaimeduncan6167 9 місяців тому +2

      Thanks for sharing the actual numbers. Do you know if Tesla's numbers are for reduced precision like the ones used for AI inference (16-bit) or training (32-bit)? Thanks!

    • @tatradak9781
      @tatradak9781 9 місяців тому +2

      The speed of change and "successful change" is going to be staggering....

    • @chrisrogers1092
      @chrisrogers1092 9 місяців тому +1

      It should be noted that Tesla is still buying as many Nvidia GPUs as they can get their hands on.

    • @Toxa1300
      @Toxa1300 9 місяців тому +3

      So they had 1.8 exaflops in 2021 and now are building a computer that only has one exaflop?

  • @user-uj2lt9po7s
    @user-uj2lt9po7s 8 місяців тому +2

    Well-delivered presentation with understandable analogies! Thank you for your hard work.

  • @EdwinAbalain
    @EdwinAbalain 9 місяців тому +7

    Well-delivered presentation with understandable analogies!

  • @lab.growth
    @lab.growth 9 місяців тому +9

    Amazing video for better understanding the implications and functionality of Dojo! Thanks :)

  • @Mr7O5
    @Mr7O5 9 місяців тому +198

    It’s crazy to see how far ahead Tesla is in the auto industry

    • @JrbWheaton
      @JrbWheaton 9 місяців тому +29

      Not just auto industry

    • @fredfrond6148
      @fredfrond6148 9 місяців тому +6

      @@JrbWheaton well said AI too.

    • @JrbWheaton
      @JrbWheaton 9 місяців тому +18

      @@fredfrond6148 Energy, computing, solar, robotics, mining, the list goes on

    • @davidbeppler3032
      @davidbeppler3032 9 місяців тому

      ​@@fredfrond6148and lithium refining.

    • @bits2646
      @bits2646 9 місяців тому +11

      Yeah, autonomous self-driving is working for them... Except there are at least 5 manufacturers that already have Level 3... And they scrapped all that work and went from visual to AI-powered self-driving.... Well said, really... amateurs

  • @paulm7267
    @paulm7267 9 місяців тому +33

    Actually, the semiconductor trend for the past few years is moving away from single-chip SoC designs to multi-chip packages, which means the SoC is not on a single piece of silicon, but multiple pieces of silicon inside a single “CPU” package. This is what is used in the M1, the chips in the iPhone, and inside AMD's and Intel's latest cutting-edge CPUs, etc. Multiple chiplets are placed very close to each other, even stacked one on top of the other inside a “CPU package,” but the SoC is no longer a single piece of silicon in cutting-edge products.
    The reason this is happening is, of course, economics. The different chips are produced on the process nodes that are most economical. So the I/O hub in an AMD CPU is on one process, while the CPU clusters are on cutting-edge processes in units of 8 or 16 cores per cluster. Then the CPU package has one or more of these separate cluster chiplets placed around an I/O hub chiplet, in the AMD example. In Apple's products, the A-series and M1 CPUs, separate pieces of silicon for the CPU and for memory are stacked inside the CPU package. This is why your M-series computer's system memory can't be upgraded.

    • @Trust_but_Verify
      @Trust_but_Verify 9 місяців тому +2

      Technically they could add additional bus logic to allow external memory for expansion, but that defeats the purpose of being compact.

    • @TheWallReports
      @TheWallReports 9 місяців тому

      🎯💯

    • @schrodingerscat1863
      @schrodingerscat1863 9 місяців тому +7

      The reason chiplets work well is also yields: smaller chips mean higher yields per wafer. Large chips can be made useless by one tiny imperfection, whereas with, say, 8 smaller chips covering the same area, that same imperfection only loses one smaller chip, with all the others being fully functional. Interposers are then constructed using very old and reliable techniques to stitch all the chiplets together. Not quite as fast as a single large chip, but considerably cheaper.

  • @Rafsways
    @Rafsways 9 місяців тому +28

    🎯 Key Takeaways for quick navigation:
    00:00 🖥️ Tesla's AI Division has created a supercomputer called Dojo, already operational and growing in power rapidly, set to become a top 5 supercomputer by early 2024.
    01:25 💹 Dojo's computing power forecasted to reach over 30 exaflops by Feb 2024, with plans to ramp up to 100 exaflops by Oct 2024.
    03:02 💰 Tesla's Dojo, a specialized AI training cluster, equates to a $3 billion supercomputer, offering remarkable AI model training capabilities.
    04:00 🚗Dojo focuses on training Tesla's full self-driving neural network, surpassing standard supercomputer definitions for specialized AI training.
    05:38 📸 Dojo processes immense amounts of visual data for AI model training through labeling, aiming to automate a task previously done by humans.
    07:01 🧠 Dojo adopts a unique "system on a chip" architecture, like Apple's M1, optimizing efficiency and minimizing power and cooling requirements.
    08:10 💼 Dojo operates on tile levels, fusing multiple chips to create unified systems, enhancing efficiency and power in AI training.
    10:00 ⚙️ Tesla can add computing power through Dojo at a lower cost, avoiding competition for industry-standard GPUs, potentially leading to a new business model.
    11:23 🌐 Future versions of Dojo could be used for general-purpose AI training, enabling Tesla to rent out computing power as a lucrative business model.
    12:45 🔄 Renting out excess computing power from Dojo can potentially revolutionize Tesla's profitability, similar to Amazon Web Services.
    Made with HARPA AI

    • @F3PIZZA
      @F3PIZZA 9 місяців тому +3

      You feed a link somewhere and it spits these out?! Please share the secrets of your ways?

    • @prashanthb6521
      @prashanthb6521 8 місяців тому

      Thanks.

  • @patrikhadorn200
    @patrikhadorn200 9 місяців тому +12

    You need to use the tensor core throughput of the A100, probably even at lower precision (BF16), to have something realistic to compare against.

    • @MagusArtStudios
      @MagusArtStudios 9 місяців тому

      it looks like they have the memory right on the chip to maximize the memory speed

    • @TheBann90
      @TheBann90 9 місяців тому +4

      The Dojo compute figure is 8-bit.
      The A100 compute figure he uses is 16-bit.

  • @intheshell35ify
    @intheshell35ify 9 місяців тому +25

    Nice pace, good graphics, not too "fanboy", plenty of terminology, and raised a few questions I need to go look up and think about. All-around effective YouTube. Well done.

    • @johnwillemsen6872
      @johnwillemsen6872 9 місяців тому

      Except for "Artificial Intelligence *traning* cluster" @ 04:12 :/

    • @intheshell35ify
      @intheshell35ify 9 місяців тому

      @@johnwillemsen6872 oh man we got an English major in our midst!! I could have watched that a thousand times and not caught it cause that is one superfluous "e" in my estimation and yet we still need to know how to differentiate long and short vowels.

  • @WhosAmI
    @WhosAmI 9 місяців тому +1

    Thank you for your hard work ❤

  • @brucesearl4407
    @brucesearl4407 7 місяців тому

    This is very well done. Thank you!

  • @Star_Dust___
    @Star_Dust___ 9 місяців тому +4

    Can you imagine? Hundreds of thousands of Teslas are feeding data to this machine every day.

    • @schrodingerscat1863
      @schrodingerscat1863 9 місяців тому +1

      That is their main advantage, the limiting factor for AI systems is becoming the amount of training data available.

  • @robertb1508
    @robertb1508 8 місяців тому

    I more than liked this video. It was a wealth of information in less than 15 minutes. 🙂

  • @TheOlvan
    @TheOlvan 9 місяців тому +2

    At 2:11 your big number is missing three more zeros! That number is only 1 quadrillion.
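
    For reference, the prefix arithmetic behind this correction, as a minimal Python sketch (the on-screen figure with 15 zeros is a quadrillion, i.e. peta-scale):

    ```python
    # 1 petaflop/s = 10^15 FLOP/s (a quadrillion, 15 zeros)
    # 1 exaflop/s  = 10^18 FLOP/s (a quintillion, 18 zeros)
    petaflop = 10**15
    exaflop = 10**18
    print(exaflop // petaflop)  # 1000 -- an exaflop is a thousand petaflops
    ```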

  • @KenCharleton
    @KenCharleton 9 місяців тому +1

    Great description of Dojo.

  • @daviddickey9832
    @daviddickey9832 9 місяців тому +5

    Imagine car insurance companies deciding to only insure driverless cars.

    • @bitcoinburzanet80
      @bitcoinburzanet80 5 місяців тому

      That's so stupid to say. Think about what you just said.

    • @phvaessen
      @phvaessen 12 днів тому +1

      That's going to happen sooner than you think! Governments will refuse to let people drive cars that aren't autonomous! 95% of accidents are due to human error; that's an enormous cost to social security.

    • @daviddickey9832
      @daviddickey9832 12 днів тому

      @phvaessen it's going to happen but it shouldn't, even if it means a higher mortality rate

    • @11insertusernamehere
      @11insertusernamehere 12 днів тому

      @@daviddickey9832 what do you mean higher mortality rate? will driverless cars cause more accidents than human drivers in your opinion?

    • @daviddickey9832
      @daviddickey9832 11 днів тому

      @@11insertusernamehere what I'm saying is that automated cars have a lower mortality rate, but we shouldn't allow institutions to effectively prevent any person from driving even though people driving has a higher mortality rate

  • @Sammasambuddha
    @Sammasambuddha 9 місяців тому +15

    If every vehicle on public streets had a "GPS" transmitter giving out data like direction, speed, etc., FSD could take advantage by incorporating this localized data (car-to-car) to help determine its next action.
    In a Futurama episode, when the gang went to the Robot Planet, the robots moved like vehicle traffic but fit between each other at high speeds. Perfect traffic management.

    • @tomservo5007
      @tomservo5007 7 місяців тому

      No one wants to put a tracking device in their car, ffs; this isn't China.

    • @Fastotec9
      @Fastotec9 7 місяців тому +1

      Privacy has left the chat

    • @getsideways7257
      @getsideways7257 7 місяців тому +1

      @@Fastotec9 What privacy are you talking about in this day and age?

    • @davidbowl9650
      @davidbowl9650 7 місяців тому

      @@getsideways7257 Trust me, we still have a lot of privacy in this day and age. And although I want technology to improve and would love the sharing of location data and such between cars without intrusive parties being able to monitor people, I would avoid any reduction in privacy.

    • @bitcoinburzanet80
      @bitcoinburzanet80 5 місяців тому

      They will kill us all

  • @MeatMechArchitect
    @MeatMechArchitect 9 місяців тому +5

    What’s amazing is that the auto industry is just the beginning. This will be the foundation of advances in gaming, MMO-VR, physics research, simulations, and more.

  • @redredred1
    @redredred1 9 місяців тому +1

    @2:10 - You're either missing 3 zeroes, or an exaflop is 15 zeroes.

  • @IvddeVRC
    @IvddeVRC 9 місяців тому +7

    This is the beginning of true FSD, and it will be an epic win if Tesla plays their cards correctly.

    • @L3nny666
      @L3nny666 9 місяців тому +3

      which new marketing scam term will it be next? FULL self driving? TRUE FULL self driving? I SWEAR BY GOD THIS IS THE TRUEST AND FULLEST self driving? THIS TIME FOR REAL FULL self driving? I PROMISE NEXT YEAR IT'S READY FULL self driving?

    • @Astra2
      @Astra2 9 місяців тому

      ​@@L3nny666The term is just full self driving. Always has been and always will be.

    • @L3nny666
      @L3nny666 9 місяців тому

      @@Astra2 As we all know, FULL SELF DRIVING is a marketing term, as it's not fully autonomous. Now the comment above me said "true" full self driving... which is rather funny considering how long Musk has promised true autonomy... if you don't get a joke and would rather be a butthurt Tesla fanboy and billionaire boot licker, go ahead.

    • @Astra2
      @Astra2 9 місяців тому

      @@L3nny666 Full self driving means fully autonomous. It's currently in beta, that's why it's not fully autonomous. I understand what you mean but I think it would be unwise to doubt the same person who figured out how to land rockets.

    • @L3nny666
      @L3nny666 9 місяців тому +1

      @@Astra2 Yeah, sure... "beta". Tesla is still at SAE Level 2, while Mercedes and Toyota are already at SAE Level 3.
      And you don't really believe Musk figured out any of this technology, right? This man is an investor, not an engineer.

  • @jaysmithdesign
    @jaysmithdesign 9 місяців тому +4

    Elon: "Hey world don't do AI"
    Elon: "Welcome Dojo"

  • @IngeniousDimensions369
    @IngeniousDimensions369 8 місяців тому +1

    Keep up the great work, Elon & Tesla Team.💯💯
    Ready to see the luxury Tesla RVs also, Boss.😉😉

    • @corbinf7313
      @corbinf7313 8 місяців тому

      What is a wait if we’ve ever been sursnagged to unforgivable faulty price presumptions👽

  • @bigpicture3
    @bigpicture3 8 місяців тому

    To put it simply, FSD must produce a set of correct and safe driving responses to a set of situational images created by the car's cameras. That requires some amount of "prediction" of what each object in the image is and what it is likely to do next. Ignoring inattentiveness, even human drivers get that wrong a lot of the time. If FSD is to be successful it needs to get that right more often than human drivers do. Also, driving responses need to be different under different road-surface and weather conditions, and I don't even know if FSD accommodates this. But in any case the "computational power" required for this probably cannot be "on board" the vehicle. It might resolve to image analysis, object identification within the image, probabilities of what each object will do next, and the driving response to that. That is a lot of possible "image"-to-"driving response" combinations to be processed in real time. Even if the supercomputer could do it, there is also the "real time" communication between the computer and the vehicle (the bandwidth).

  • @stevemccrea2688
    @stevemccrea2688 9 місяців тому

    A 'flop' is a floating point operation which is more complicated than a mere computer instruction.

  • @v.gedace1519
    @v.gedace1519 9 місяців тому +1

    Really a great video! Thanks!

  • @johnsample9640
    @johnsample9640 9 місяців тому

    Awesome video. Funny too how at about 2 minutes in, while explaining what an exaflop is and this powerful computer, they show some basic HTML and CSS hehe

  • @berated4541
    @berated4541 9 місяців тому +3

    AMD deserves the credit for the MCM design, as they were the first to show its benefits at large scale with their Ryzen processors.

  • @rosslawrence4628
    @rosslawrence4628 9 місяців тому +10

    @1:55 Too funny. An exaflop is a 1 with 18 zeros behind it.... and the video shows 15 zeros. A lot of good info here on Dojo... thanks for the update.

    • @vinnylamoureux1187
      @vinnylamoureux1187 9 місяців тому +2

      18 zeros would be too small on the display. We don't all have your perfect eyesight. Hehe

  • @mitch_the_-itch
    @mitch_the_-itch 9 місяців тому +5

    I wonder if that would run DCS in VR with full graphics options?

    • @NA-rh7iy
      @NA-rh7iy 3 місяці тому

      Nvidia can do it

  • @Swizzenator
    @Swizzenator 9 місяців тому

    Great video. Keep it up and don't make me hit the new button🤣

  • @sociopathicnarcissist8810
    @sociopathicnarcissist8810 9 місяців тому +3

    I'd be interested in seeing how it compares with the new Grace Hopper processor and scalability capability from Nvidia.

    • @caralosmendezgomez9048
      @caralosmendezgomez9048 7 місяців тому

      Probably Nvidia will be faster, and for all purposes, in the same way AMD's 128-core Epyc/Threadripper smokes anything from Apple into oblivion, apart from being able to compute open source.

  • @flavioprojects8974
    @flavioprojects8974 9 місяців тому

    Great stuff! Please turn up the background music a little more in the next videos.

  • @MikkoRantalainen
    @MikkoRantalainen 9 місяців тому +2

    Compared to Apple CPUs that still use DRAM for memory, Dojo uses lots of SRAM, which is highly expensive but much faster than DRAM. Most computers use SRAM only for the L1 cache in the CPU, and the main memory uses cheaper DRAM tech.

    • @schrodingerscat1863
      @schrodingerscat1863 9 місяців тому +1

      Totally different applications, the Dojo processors only need small amounts of memory because their task is very specific and highly optimised for that single task. Apple CPUs are just general purpose CPUs with a lot of sub systems integrated into a single package to reduce communication power consumption and latency. Dojo is more like a GPU than a CPU.

    • @MikkoRantalainen
      @MikkoRantalainen 9 місяців тому

      @@schrodingerscat1863 Dojo also has system-wide DDR4 SDRAM, but it's used as a fast storage device instead of being treated like traditional RAM. Load and store speeds to storage (I would assume SDRAM) are 400 GB/s and 270 GB/s according to the Wikipedia article. If you compare this to modern computers, the Intel i9-13900K has a max memory bandwidth of about 90 GB/s while using all cores in the optimal memory channel configuration.
      But yes, SRAM has single-clock latency: Dojo runs at 2 GHz, so that would be 0.5 ns, vs. the best available DDR4 SDRAM, which has a latency around CL12, or about 6.7 ns. So obviously you would try to write apps so that you can use only the memory that has 13x smaller latency. However, that doesn't mean that Dojo cannot run other apps, too, only that you cannot get optimal performance with apps that cannot fit at least the full inner loop into the available SRAM.
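
      To make the latency comparison concrete, a minimal Python sketch (the 2 GHz clock and the CL12 / ~6.7 ns figure are from the comment above; the DDR4-3600 transfer clock used to reproduce that number is my assumption):

      ```python
      # Single-cycle SRAM access at the Dojo core clock vs. DDR4 CAS latency.
      sram_clock_hz = 2e9                 # Dojo clock, per the comment
      sram_latency_ns = 1e9 / sram_clock_hz          # one cycle = 0.5 ns

      ddr4_cas_cycles = 12                # CL12, per the comment
      ddr4_io_clock_hz = 1.8e9            # DDR4-3600 I/O clock (assumed)
      ddr4_latency_ns = ddr4_cas_cycles / ddr4_io_clock_hz * 1e9  # ~6.7 ns

      print(f"SRAM {sram_latency_ns:.1f} ns vs DDR4 {ddr4_latency_ns:.1f} ns "
            f"(~{ddr4_latency_ns / sram_latency_ns:.0f}x)")
      # SRAM 0.5 ns vs DDR4 6.7 ns (~13x)
      ```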

  • @Zoom_1012
    @Zoom_1012 9 місяців тому

    That was quite interesting. Thanks.
    🌴☀️🌴

  • @GEOsustainable
    @GEOsustainable 8 місяців тому +1

    I take it he is a Mac man. In the old days we called this 'cascading', and we had 27 iMacs connected. No one ever talks about the software needed to use this configuration.
    This sounds impressive, but the hardware is far beyond the available software to run it. They still don't have much to do.
    Back then we thought 10 gigaflops was incredible. Working on these things is what I used to do, which explains why I garden now.

  • @nexpro6985
    @nexpro6985 9 місяців тому

    Apart from the inaccuracies and generalizations in this video there were some nice images.

  • @biggstile
    @biggstile 8 місяців тому

    Keep up the good work

  • @MRF77
    @MRF77 9 місяців тому

    Excellent video my man!

  • @1Eagler
    @1Eagler 9 місяців тому +8

    It's crazy to find a company like Tesla in the auto industry

  • @guruware8612
    @guruware8612 9 місяців тому

    Love it when, talking about supercomputers, they show HTML and JavaScript, exactly the thing which needs exaflops.
    A one with EIGHTEEN zeros, and they show 15 in the video; btw, here we call that trillions.

  • @petersimon985
    @petersimon985 9 місяців тому

    Huge Thank you 🎉❤

  • @SomeThingElseYT
    @SomeThingElseYT 9 місяців тому

    We squandered the train

  • @JarrodR84
    @JarrodR84 8 місяців тому

    I love your newsletter!!

  • @BikeNutt1970
    @BikeNutt1970 9 місяців тому

    Another great vid. Thanks 👍

  • @jacoblf
    @jacoblf 9 місяців тому

    I imagine a scenario where Tesla sells Training Tiles and makes more profit from TTs than cars. Your “game changer” is spot on.

  • @ctwolf
    @ctwolf 8 місяців тому

    @4:30 -- that casing makes it look like an ASIC imo.

  • @clothcapkev2088
    @clothcapkev2088 9 місяців тому +3

    Time will tell; like the Hyperloop and the Tesla truck, it could go either way.

  • @jonasnitz7678
    @jonasnitz7678 7 місяців тому

    "That is a one with 18 zeroes behind it" and they show 15 zeroes... brilliant.

  • @spontaneousarttrip
    @spontaneousarttrip 7 місяців тому

    super liked the video thank you so much

  • @stevenrushing334
    @stevenrushing334 7 місяців тому

    Great explanation, thanks!

  • @Asterra2
    @Asterra2 9 місяців тому +1

    2:10 Says 18 zeros. Shows 15 zeros.

  • @user-mz3ek4rm7f
    @user-mz3ek4rm7f 3 місяці тому

    This is out of my mind, amazing ❤❤❤❤❤

  • @kathleenwhitten7120
    @kathleenwhitten7120 9 місяців тому +2

    I wonder what the compute per watt is for Dojo vs A100?

    • @ThrashLawPatentsAndTMs
      @ThrashLawPatentsAndTMs 9 місяців тому

      exactly. ^^^^^^^^^^^^^^^^
      THIS is what people should be asking and talking about.

    • @schrodingerscat1863
      @schrodingerscat1863 9 місяців тому

      As Dojo is more highly optimised for a specific task, it is almost certainly way more efficient than the A100 for that particular task.

    • @jakubiskra523
      @jakubiskra523 9 місяців тому

      They should compare it to the H100; the A100 is last gen, so the comparisons are more favorable.

    • @schrodingerscat1863
      @schrodingerscat1863 9 місяців тому

      @@jakubiskra523 For AI the A100 is still the better card, as it has hardware specific to deep-learning tasks, with the H100 being the better option for raw-processing scientific workloads. The A100 is also more energy efficient, making it a better fit for large multi-card systems. They are basically designed for different tasks rather than being different generations of the same thing.

    • @jakubiskra523
      @jakubiskra523 9 місяців тому

      @@schrodingerscat1863 This is why all of the AI companies are using the H100 for their clusters, and the H100 is more energy efficient in every way; your source of information is not trustworthy.

  • @HobokenHam
    @HobokenHam 7 місяців тому

    I loved your video, but... to define an exaflop you show 1 followed by 15 zeros and say the 1 should be followed by 18 zeros. I am just curious which you intended.

  • @HansvanBinsbergen
    @HansvanBinsbergen 7 місяців тому

    Well...
    it could be that Tesla or any other company invests in different branches. But don't worry; as a kid opening an old stereo set, I was surprised to find some Mitsubishi components as well.

  • @jeffharmed1616
    @jeffharmed1616 9 місяців тому

    Nice work thanks

  • @rippingbag
    @rippingbag 9 місяців тому +14

    Fear does not exist in this Dojo!

  • @KDzilna
    @KDzilna 9 місяців тому

    You said 1 with 18 zeros but put 15; make sure the minor details add up!

  • @skaltura
    @skaltura 9 місяців тому +1

    AWS is far from "just rent excess" these days, that's how it started tho

  • @pvic6959
    @pvic6959 9 місяців тому +1

    Not sure why Google / Google Cloud was not mentioned. This seems to be similar to Google's "TPU" or Tensor Processing Unit, right? Just asking to make sure I've understood correctly. Google uses TPUs for AI training as well. I imagine it's been used for their self-driving car too.

  • @jacobmosovich
    @jacobmosovich 9 місяців тому

    I want one... I want to simulate AI in social situations so it can learn to blend in better and create a more emotional AI.

  • @StopTheBurn
    @StopTheBurn 9 місяців тому +1

    Get that DoJo working on a solid-state battery 🔋 Elon!

    • @biggles9604
      @biggles9604 9 місяців тому

      what

    • @Luke-Ryan
      @Luke-Ryan 9 місяців тому

      @@biggles9604 Search solid-state batteries; if as advertised, they are most likely the brightest path for future battery tech: holds more power, charges faster, costs WAY less, and can be made in a way that rare-earth minerals are either not needed or are needed in a far lesser capacity. So yeah, sic that supercomputer on fleshing out the science of solid-state batteries.

  • @haroondabbagh2869
    @haroondabbagh2869 9 місяців тому +1

    Great video

  • @arthtv808
    @arthtv808 9 місяців тому

    Wow I love it🎉😮🎉

  • @MarksShrimpTanks
    @MarksShrimpTanks 9 місяців тому

    Dojo is making the matrix!

  • @cosmoshfa88savant66
    @cosmoshfa88savant66 8 місяців тому

    Good content, well presented.

  • @mpd8633
    @mpd8633 7 місяців тому

    Excellent video.

  • @georgeginsburg545
    @georgeginsburg545 8 місяців тому

    I think that first picture of the number of instructions in an exaflop is wrong. It should show 18 zeros for 10 to the eighteenth power, no? It's showing 15 zeros.

  • @toddsmith4280
    @toddsmith4280 9 місяців тому +1

    I thought I heard someone say that the Tesla AI used for FSD was no longer using labels.

    • @scamchan
      @scamchan 9 місяців тому +1

      Yeah, they cracked the code, which means they probably unlocked it to mirror the driving of all the live drivers on the fly and learn from it side by side, weighing in on what counts as perfect driving.

  • @gzfashions
    @gzfashions 9 місяців тому

    350 million miles of FSD data 🎉🎉🎉

  • @user-me5eb8pk5v
    @user-me5eb8pk5v 9 місяців тому

    It's just hard to deliver a lighting engine, you know, laser sparks that have 3-dimensionality on a 2D line in space; otherwise it's a cell phone.

  • @Highestnugg
    @Highestnugg 9 місяців тому

    Amazing, but it also makes me curious, 'cause if it's so powerful, why do they stop at putting it in a Tesla or possibly a rocket? There's so much this computer could do. They have the resources, so why not? 😳

    • @Tonatsi
      @Tonatsi 8 місяців тому

      These wouldn't go in a Tesla or a Rocket. It's purely for the Tesla Warehouses, and it's custom designed for processing AI through videos, and is worse than other types of supercomputers. It's essentially hyper-specialised, and isn't really helpful in circumstances other than these.

  • @deolinda9403
    @deolinda9403 7 місяців тому

    IT CAN MEET ALL THE NEEDS

  • @navg42
    @navg42 7 місяців тому

    Looks like this video was on point, and now here we are, in the future, with v12 on the horizon. This changes things dramatically and could likely be the watershed "ChatGPT / iPhone" moment we have all been waiting for. The problem is much easier to solve now; DOJO doesn't have to worry about human labeling anymore and can learn on its own instead. We are shifting from humans writing code and labeling items in the video feed to AI learning how to drive and interact in the world all on its own. We will be labeling the quality of videos and trying to feed quality content from competent drivers. This should be relatively easy, considering Tesla has already built a solution for this with its driver safety score and Tesla insurance product. Autonomy is coming sooner than most realize...

  • @carpenter3069
    @carpenter3069 9 місяців тому

    I just hope DOJO doesn't morph into the supercomputer portrayed in I, Robot.

  • @michaelrichey8516
    @michaelrichey8516 9 місяців тому +1

    2:11 - that's only 15 zeros, you're 3 zeros short.

  • @rocketscience777999
    @rocketscience777999 7 місяців тому

    IMO, Tesla will eventually provide FSD as a service to other OEM car companies.

  • @tibiloft7471
    @tibiloft7471 9 місяців тому

    Yeye when I see it in action I will believe it

  • @somaday2595
    @somaday2595 9 місяців тому

    Is Tesla constantly uploading vehicle driving data to revise its autonomous driving program and then downloading those revisions?

  • @landofstan246
    @landofstan246 9 місяців тому

    Well done.

  • @ps3301
    @ps3301 9 місяців тому

    Is Dojo controlled by an ARM CPU?

  • @ThrashLawPatentsAndTMs
    @ThrashLawPatentsAndTMs 9 місяців тому

    Does Tesla also manufacture the DoJo chips?

  • @Simone_85
    @Simone_85 8 місяців тому

    Congratulations Minah, you have given yourself so many reasons to look just better, and do alright. Congratulations ♥️Minah sim1

  • @nicoenriquelimongecolomer63
    @nicoenriquelimongecolomer63 9 місяців тому

    That's all nice and good, but the real question is: can it run Doom?

  • @fernandogarciadeleon6292
    @fernandogarciadeleon6292 9 місяців тому

    nice video!!!!

  • @CyberSamurai4Life
    @CyberSamurai4Life 9 місяців тому +7

    Please, somebody explain to me how this is not going to be used as the most advanced targeting system for integrated weapons in the world

    • @maxidaho
      @maxidaho 9 місяців тому +3

      It's not. We promise.

    • @davidbeppler3032
      @davidbeppler3032 9 місяців тому +1

      @@maxidaho Good enough for me.

  • @lelebeltrao
    @lelebeltrao 7 місяців тому

    It's revolutionary!!!

  • @mb345
    @mb345 9 місяців тому

    What is the new Dojo supercomputer? A lot of this content seems to explain what was revealed when they showed off Dojo.

  • @IraQNid
    @IraQNid 8 місяців тому

    Sounds like a fractal array.

  • @_StoicMindset_
    @_StoicMindset_ 9 місяців тому

    Wait, so you're saying that on Black Friday, Amazon closes all the renters' services if needed?

  • @curtiswfranks
    @curtiswfranks 9 місяців тому +1

    Needing AWS servers for basically only Black Fridays is brilliant because no-one else (in the U.S.) needs those resources at that time. And likewise for other holidays and elsewhere.

  • @JasonPruett
    @JasonPruett 9 місяців тому

    great video ty

  • @andrewsolomon9877
    @andrewsolomon9877 9 місяців тому +2

    The Mojo Dojo Casa Supercomputer!

  • @daniellove162
    @daniellove162 9 місяців тому

    Paint it black and it looks like the neuro-processor in T2.

  • @arieltraasdahl-xh6ri
    @arieltraasdahl-xh6ri 9 місяців тому

    I can't help but think that the name of the new chip is somehow intended as an insult to the Department of Justice.
    I approve.

  • @HumanAction76
    @HumanAction76 9 місяців тому +2

    In the future, Tesla will make trillions selling Dojo systems to other AI companies.

  • @tfhmobil
    @tfhmobil 8 місяців тому

    Basically back to mainframe computing.

  • @xavierlx
    @xavierlx 9 місяців тому

    What Apple is not telling you with the M1 is that in most situations it cannot run software that uses SQL.

  • @shiwansvaishya4803
    @shiwansvaishya4803 9 місяців тому

    Mind blown up