This New AI Supercomputer Outperforms NVIDIA

  • Published Jun 1, 2024
  • In this video I discuss the new Cerebras supercomputer with Cerebras CEO Andrew Feldman.
    Timestamps:
    00:00 - Introduction
    02:15 - Why such a HUGE Chip?
    02:37 - New AI Supercomputer Explained
    04:06 - Main Architectural Advantage
    05:47 - Software Stack NVIDIA CUDA vs Cerebras
    06:55 - Costs
    07:51 - Key Applications & Customers
    09:48 - Next Generation - WSE3
    10:27 - NVIDIA vs Cerebras Comparison
    Mentioned Papers:
    Massively scalable stencil algorithm: arxiv.org/abs/2204.03775
    www.cerebras.net/blog/harness...
    www.cerebras.net/press-releas...
    Programming at Scale:
    8968533.fs1.hubspotuserconten...
    Massively Distributed Finite-Volume Flux Computation: arxiv.org/abs/2304.11274
    Mentioned Video:
    New CPU Technology: • New CPU Technology jus...
    👉 Support me at Patreon ➜ / anastasiintech
    📩 Sign up for my Deep In Tech Newsletter for free! ➜ anastasiintech.substack.com

COMMENTS • 598

  • @AnastasiInTech
    @AnastasiInTech  9 months ago +100

    Let me know what you think!

    • @CircuitSageMatheus
      @CircuitSageMatheus 9 months ago +8

      Have you ever thought of creating a community for hardware engineers?

    • @InfinitelyCurious
      @InfinitelyCurious 9 months ago +2

      Can you dive into the superconducting elements added to these advanced technologies (e.g. niobium)?

    • @dchdch8290
      @dchdch8290 9 months ago +7

      @@CircuitSageMatheus I believe this is one, and we are part of it ;)

    • @RAM_845
      @RAM_845 9 months ago +5

      I think quantum computing will be the thing for AI; GPUs and AI chips will be obsolete, IMO.

    • @kevinsho2601
      @kevinsho2601 9 months ago +4

      Please do a video on the company out of Dubai that is creating medical AGI, what kind of technology they are using, and what they plan to do. Medical AGI is very broad. Would love to see what that means.

  • @CircuitSageMatheus
    @CircuitSageMatheus 9 months ago +116

    Awesome content; nowadays it is very difficult to find channels as rich in information as yours! Cheers to you for a job well done! 👏

    • @DihelsonMendonca
      @DihelsonMendonca 9 months ago +1

      Linus Tech Tips is all about it, with excellent up-to-date coverage of cutting-edge technology, and there are many other excellent channels, BTW. It's just a tip, in case you are not aware of them. Good luck. 🎉❤

    • @univera1111
      @univera1111 9 months ago +1

      Truly excellent content

  • @tanzeelrahman7835
    @tanzeelrahman7835 9 months ago +61

    Your content is always very special and informative. You tend to choose topics that are not commonly found on other channels. The most important thing is the way you explain complex concepts so easily; that's truly awesome.

  • @pbananaandfriends
    @pbananaandfriends 9 months ago +69

    The sheer compute power of this chip promises a new era in AI technology. I’m eager to see how this will be utilized in various applications. Kudos to the team behind this innovation!

    • @christophermullins7163
      @christophermullins7163 9 months ago +2

      Are aliens going to start taking our AI computers like they've been taking the nukes, to protect us?

    • @perc-ai
      @perc-ai 9 months ago +2

      At this rate Bitcoin will be susceptible to a 51% attack, lol. That's so much power.

    • @JohnSmith-ut5th
      @JohnSmith-ut5th 9 months ago +1

      Not really, but there are a lot of investors that are going to make a killing shorting Cerebras.

    • @zool201975
      @zool201975 9 months ago

      Yeah, don't go counting the benefits just yet... that is a chip that draws a hundred fucking megawatts.
      That thing can't run for more than moments without parts of it being vaporized into heat.
      With computing, almost ALL of the energy goes into heat, so that is a bloody 99-megawatt heater the size of a chessboard you've got there...
      You literally need a power plant to run this crazy thing.

    • @zool201975
      @zool201975 9 months ago

      Why would they need either? And with the power consumption of these things, we do not need nukes to bloody glass the planet, lol.

  • @jp7585
    @jp7585 9 months ago +55

    It seems like an apples-to-oranges comparison. Put it against a GH200 SuperPod with 256 Grace Hopper Superchips. That is Nvidia's latest offering. It's not only fast but also energy efficient.

    • @635574
      @635574 9 months ago +4

      12x the gains. How the hell is nobody talking about it?

    • @W1ldTangent
      @W1ldTangent 9 months ago

      @@635574 VHS vs Betamax, Blu-ray vs HD-DVD... the latter was better in both cases and still lost because they couldn't get adoption. Nvidia gave away a lot of very expensive silicon, for nothing in some cases or for a small pittance, to get CUDA into the hands of research teams at universities, who standardized on it and eventually started teaching it. I love the idea of a competitor, but they won't have an easy road if they're not willing to give away a lot of compute, and unlike Nvidia they don't have the gamers and crypto addicts buying every graphics GPU they could get their hands on at double MSRP to bankroll it.

    • @waterflowzz
      @waterflowzz 9 months ago +7

      Nvidia fanboy spotted. Power efficiency is a non-issue when you're talking about the most powerful compute, because most people won't have access to this power until way later.

    • @nicknorthcutt7680
      @nicknorthcutt7680 9 months ago

      @@waterflowzz Exactly, power efficiency is not the main issue 🤦

    • @Leptospirosi
      @Leptospirosi 9 months ago +4

      You don't get the point: as the complexity of the problems you feed to the AI grows, these two systems stop scaling together.
      90% of the raw cost is the people working on the project, so having a system that does NOT require any more work when your workload increases by orders of magnitude is a no-brainer.
      You can start training your AI system months before you could on any Nvidia system.
      The only thing Nvidia has on its side right now is the sheer mass of chips produced each month, so I guess you can build a GH200 AI cluster much faster than you can on Cerebras: not cheaper, but faster, despite being way behind in practicality and raw results.

  • @lllllMlllll
    @lllllMlllll 9 months ago +36

    Anastasi is such an amazing person

  • @brucoder
    @brucoder 9 months ago +16

    If you ignore politics and AI conspiracies, it's a great time to be alive! Thank you for sharing these positive breakthroughs.

  • @aseeldee.1965
    @aseeldee.1965 9 months ago +15

    This is very cool!
    Thank you for keeping us up to date with the AI evolution!

  • @danielmurogonzalez1911
    @danielmurogonzalez1911 9 months ago +3

    I am a simple man, video I see from Anastasi, video I like.

  • @ZoOnTheYT
    @ZoOnTheYT 9 months ago +10

    Another awesome video Ana! Doing direct interviews is a great addition to your repertoire. I have an interest in AI as a social science person. A lot of videos either go way beyond my ability to comprehend, or are filled with superfluous information just to fill time. You consistently put out interesting and coherent information, that I also trust is valid, because of your background.

  • @junpengqiu4054
    @junpengqiu4054 9 months ago +5

    Wonderful video. I did some research on Cerebras's innovations and found they really have done different and valuable things.
    The "wafer scale engine" is what Cerebras is known for: unlike a traditional GPU, it is produced on an entire wafer. Conventionally, multiple CPUs or GPUs are 'printed' onto a single wafer by lithography, and later process steps cut them off the wafer. So one reason Cerebras delivers much better performance is that its 'GPU' is simply bigger.
    But this also leads to one problem: it is even harder to produce than NVIDIA GPUs. Wafers often come with defects; individual defective chips from conventional manufacturing can simply be discarded, whereas the Cerebras wafer scale engine needs the whole wafer to have no defects. In addition, heat dissipation and even power delivery across the whole surface are big challenges.
    Right now Cerebras is cheaper because it's not yet that popular; once the market sees the advantages of their supercomputer, the price could go higher than the H100, since the chips are really difficult to make at the current tech level.
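
As a rough illustration of why full-wafer integration is considered so hard (a minimal sketch; the defect density and die areas below are assumed round numbers, not foundry or Cerebras data), a simple Poisson yield model shows how the chance of a perfect die collapses as die area grows:

```python
import math

# Assumed round numbers for illustration only (not foundry or Cerebras data).
DEFECTS_PER_CM2 = 0.1

def defect_free_yield(area_cm2: float, d0: float = DEFECTS_PER_CM2) -> float:
    """Poisson model: probability that a die of the given area has zero defects."""
    return math.exp(-d0 * area_cm2)

for name, area_cm2 in [("typical large GPU die (~8 cm^2)", 8.0),
                       ("wafer-scale die (~460 cm^2)", 460.0)]:
    print(f"{name}: {defect_free_yield(area_cm2):.1%} chance of zero defects")
```

Under these assumptions a completely defect-free wafer-scale die is essentially impossible, which is why wafer-scale designs (as later replies in this thread note) build in redundant cores and route around bad ones rather than requiring a perfect wafer.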

  • @PalimpsestProd
    @PalimpsestProd 9 months ago +27

    I'd love to see a breakdown and comparison of this tech against Dojo: code scaling, watts per output unit, data types, and flexibility.

    • @Wirmish
      @Wirmish 9 months ago +2

      ... and cost.

  • @MichaelLloydMobile
    @MichaelLloydMobile 9 months ago +2

    OMG...
    I had to watch this video because your introductory image is adorable!

  • @Arthur-ue5vz
    @Arthur-ue5vz 9 months ago +3

    Thank you for doing these videos and helping the rest of us to see what's going on in the world of AI and computing in general.
    I appreciate your efforts 😊

  • @Krishna-zw6ls
    @Krishna-zw6ls 9 months ago +6

    Thank you for bringing the next one to my watch list, I love your content.

  • @wpg_dude1930
    @wpg_dude1930 9 months ago +1

    nice to see you are back. great show as always

  • @zandrrlife
    @zandrrlife 9 months ago +3

    I've been selfishly hoping this company would stay a hidden gem 😂😂. Superior compute in terms of training models and on-premise inference. SUPERIOR.

  • @MrWingman2009
    @MrWingman2009 9 months ago +5

    Maybe I'm not looking hard enough, but this is the only place I've found good, well summarized info on AI hardware progress. Thanks Anastasi! 😊

  • @DinDjarin369
    @DinDjarin369 4 months ago

    Thanks for the update.

  • @JMeyer-qj1pv
    @JMeyer-qj1pv 9 months ago +13

    The bane of wafer-scale computing has always been that some percentage of the wafer will have defects and be unusable. Does Cerebras have some way around that problem? There was a famous attempt at this back in the '80s, and the company couldn't solve the problem and went bankrupt (Trilogy Systems).

    • @prashanthb6521
      @prashanthb6521 9 months ago +4

      The final wafer-scale processor's specs are always quoted after taking into account the dysfunctional parts of that wafer, meaning it is always assumed that some parts will be lost to imperfections.

  • @u9Nails
    @u9Nails 9 months ago +2

    Whoa! This is awesome! Always brilliant content. Love this channel! Learning new words, like Wafer-scale, is eye opening!

  • @TheMusaic
    @TheMusaic 9 months ago +1

    Nicely done, super interesting. I think your best yet

  • @windmillfire
    @windmillfire 9 months ago +4

    Thanks for making these videos 😀

  • @HenryCalderonJr
    @HenryCalderonJr 9 months ago +2

    Love that everything you post has a great explanation, and you always back up your information with real facts and documents in the video! Thank you 😊 You're awesome! Your brilliance is awesome!

  • @CYI3ERPUNK
    @CYI3ERPUNK 9 months ago +1

    what a time to be alive XD ; luv to see the competition heat up between these top tier tech firms and the smaller startups that are rocking the boat =]

  • @Bobby.Kristensen
    @Bobby.Kristensen 9 months ago +3

    Great video! Thanks!

  • @nikitasapozhnikov2449
    @nikitasapozhnikov2449 9 months ago +1

    Very intriguing! Thanks so much for sharing

  • @alexhernandez7262
    @alexhernandez7262 9 months ago

    Great work! Keep on rocking : )

  • @dchdch8290
    @dchdch8290 9 months ago +3

    Wow, nice summary. I was actually wondering how they utilise all those wafer scale engines. Now it is clear. Thank you!

  • @snjsilvan
    @snjsilvan 9 months ago +1

    Thanks once again for bringing us great content.

  • @jpmcnown1
    @jpmcnown1 9 months ago +1

    Very well presented, thank you!

  • @xAgentVFX
    @xAgentVFX 9 months ago +1

    I really liked that cover photo. Keep up the good info.

  • @kindaplayerone4128
    @kindaplayerone4128 9 months ago

    This is exciting indeed. I like it so much. You're doing great, Anastasia. God bless you and your family; this goes for all involved in your video crew.

  • @ZeroIQ2
    @ZeroIQ2 9 months ago

    This was a great video btw, thanks for the information 🙂

  • @willykang1293
    @willykang1293 9 months ago +1

    Thank you for your deeper introduction to Cerebras!!! I wouldn't have known about this, despite staying around Fremont and Santa Clara last month, if I hadn't dug into it much deeper… 😄

  • @andreasschaetze2930
    @andreasschaetze2930 9 months ago +3

    I remember a time when my uncle, an engineer, got a PC with 40 MB of storage and I wondered how he would ever fill that much space. Today I need that space for one single digital raw photo 😅
    It's amazing how fast and capable hardware and software (some not so much 😂) have become.

    • @federicomasetti8809
      @federicomasetti8809 8 months ago

      1998, my first computer (well, the "family computer", because they were very expensive for what they could do): a Pentium II at 300 MHz, 32 megabytes of RAM, and I think something like 500 megabytes of hard drive, but I'm not sure about that. Of course with floppy disk and CD drives, in that distinctive "greyed white" of the time. I was 13 back then, and it feels like another era in the history of humanity 😅😂

  • @solidreactor
    @solidreactor 9 months ago +10

    I am very interested in Cerebras and Tenstorrent, as they seem to be the most viable alternatives to Nvidia, both being companies that make AI chips that are very scalable.
    The interesting differentiation between Cerebras and Tenstorrent is that Cerebras started with big chips and is working its way down (in a sense, by enabling PyTorch compatibility), while Tenstorrent starts from small chips and evolutionarily works its way up.
    It's interesting to see these contrasting startup philosophies at work in the same industry, with basically the same main competitors. Hope to see you cover these two companies in future videos.

    • @geekinasuit8333
      @geekinasuit8333 9 months ago +5

      Actually, the most viable alternative to Nvidia right now is AMD's MI series of processors. The MI300 series is due to be widely available in 2024, and it will probably beat the H100 in terms of performance and flexibility. The research I've done indicates that Cerebras and Tenstorrent are very distant alternatives at this point in time relative to both Nvidia and AMD. There's also Intel with their Gaudi series; where it fits in comparatively is probably alongside Cerebras and Tenstorrent, the most worrisome aspect being the longevity of the roadmap, as Intel has been cutting product lines over the last few years. As we know, anything can change quickly since the AI sector is in its very early stages, so it's worth looking at all the players, including the current batch of underdogs.

    • @BienestarMutuo
      @BienestarMutuo 9 months ago

      @@geekinasuit8333 We agree: if Cerebras can lower its prices by 10x it can be in the competition; if not, AMD will be the best alternative. One Cerebras system is claimed to equal roughly 50 Nvidia GPUs in compute, but for the price of one Cerebras ($2,000,000) you can buy 10 Nvidia DGX systems ($200,000 each, with 8 x A100 at $10,000 each), so on price Nvidia wins. And take into consideration that Nvidia is expensive, very expensive. Cerebras needs to lower its price 4x to be competitive, 10x if it wants to be a real competitor.
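
For what it's worth, here is that comment's own arithmetic written out as price per A100-equivalent of compute (a sketch using the commenter's assumed figures only, not verified pricing or benchmarks):

```python
# All inputs are the commenter's assumed figures, not verified prices.
cerebras_price = 2_000_000        # $ per Cerebras system (assumed)
cerebras_a100_equiv = 50          # claimed A100-equivalents per system (assumed)

dgx_price = 200_000               # $ per Nvidia DGX system (assumed)
gpus_per_dgx = 8                  # A100 GPUs per DGX

cerebras_per_equiv = cerebras_price / cerebras_a100_equiv   # $40,000
dgx_per_gpu = dgx_price / gpus_per_dgx                      # $25,000

print(f"Cerebras: ${cerebras_per_equiv:,.0f} per A100-equivalent")
print(f"DGX:      ${dgx_per_gpu:,.0f} per A100")
print(f"ratio:    {cerebras_per_equiv / dgx_per_gpu:.1f}x")  # 1.6x under these assumptions
```

Under those same assumptions the gap per unit of compute works out to roughly 1.6x rather than the headline system-price difference; how the real numbers compare depends on actual street prices and delivered performance.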

  • @GeinponemYT
    @GeinponemYT 9 months ago +2

    Never heard of this channel before, until it just popped up on my homepage. And I'm glad it did: great, clear information, with appropriate graphics (when needed), very in-depth but still understandable.
    One minor piece of constructive feedback: maybe tweak your audio settings a bit to decrease the harsh 's' sounds. I'm using headphones, and your 's'-es are a bit uncomfortable. Otherwise: great video!

  • @thedeadbatterydepot
    @thedeadbatterydepot 9 months ago +2

    Smaller isn't always better. I theorized such a computer using the whole wafer; the compiler part was beyond my skills, and a parallel data bus would be the only way. They have achieved the removal of the two compiler stages to get to machine language; the single-stage compiler with the whole-wafer design has Nvidia beat, for much cheaper, for the title of most powerful AI. The guy knows what he has; I will seek to buy one of his systems for an upcoming product. Thank you, great video!

  • @HonestyLies
    @HonestyLies 9 months ago +7

    Very interesting. I wonder how saleable they are for production; honestly it seems like companies will be fighting for these limited-quantity high-speed chips. Surprised I've never heard of them! Great vid.

  • @ZigamusRainbowWizard
    @ZigamusRainbowWizard 3 months ago

    Excellent video, thank you! :o)

  • @methlonstorm2027
    @methlonstorm2027 9 months ago +1

    very informative thank you

  • @chillcopyrightfreemusic
    @chillcopyrightfreemusic 9 months ago

    Fantastic video, I just subscribed. Mr. Feldman was speaking my mind when addressing the tokenization of the Arabic language. I don't speak Arabic, sadly, but I have been trying to find good models to handle it and found that only GPT-4 and BLOOM were decent. I think his company is on to something forging connections to the Gulf. Great video, thank you!

  • @joen5000
    @joen5000 6 months ago

    Very interesting. Thank you.

  • @user-uq1ny8me3v
    @user-uq1ny8me3v 9 months ago +2

    You do a wonderful job. Thank you very much for your outstanding content.

  • @luapo2233
    @luapo2233 9 months ago

    Thank you for the education.

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 9 months ago +2

    Excellent information

  • @therealb888
    @therealb888 9 months ago +2

    We need more channels that focus on the compute chips & infrastructure of AI. All the buzz is around the software, but it's the hardware that makes it work.

  • @rameshnamburi4384
    @rameshnamburi4384 9 months ago

    Amazing content is delivered with high clarity by the amazing presenter.

  • @PaulPiedrahita
    @PaulPiedrahita 9 months ago +1

    Thumbnail is 🔥😎🙌🏼

  • @marktahu2932
    @marktahu2932 9 months ago +4

    Very interesting and shows a broader view than just the Nvidia or AMD approach. Mind boggling how fast and how far this work is going.

  • @MrFoxRobert
    @MrFoxRobert 9 months ago +1

    Thank you!

  • @617steve7
    @617steve7 9 months ago +1

    Anastasia In Tech, my engineering crush! (Not to be confused with my academic crush, Sabine Hossenfelder.) Exceptional content! Keep them coming!

  • @alanreader4815
    @alanreader4815 9 months ago

    Looks like good news for AI. And bigger chips sound positive. Great video, Anastasi.

  • @woolfel
    @woolfel 9 months ago +8

    it's great to see so many people and companies working on AI hardware, but without a full software stack, it won't be a credible competitor to NVidia. As ML technology advances, they'll have to make sure their compiler handles the workload scheduling efficiently. That's not an easy task.

    • @rilwanj
      @rilwanj 9 months ago

      What if they made their hardware compatible with the Nvidia software? I think it was mentioned in this video that existing TensorFlow code for CUDA can also work on their hardware.

  • @Rhomagus
    @Rhomagus 9 months ago +1

    Cerebras: AI supercomputer networked across three of the same type
    Cerberus: Three headed hound that guards the gate to Hades
    ... just in case you may have been confused. Don't be.

  • @geekinasuit8333
    @geekinasuit8333 9 months ago +3

    There's not a lot of information about Cerebras, so thanks for making this video. I'd like to know how flexible a machine like this is for experimenting with different models. Will you be limited to only a few kinds of models, and if so, what exactly are those limitations? One known issue that Cerebras acknowledges as an intentional trade-off is that a machine like this is limited in floating-point accuracy and will not be suitable for models that require higher 64-bit precision. It appears the machine is optimized for 16-bit precision only. I expect there will be other limitations besides the FP accuracy, and a summary of what those limitations and trade-offs are (pros and cons) would be nice to know about.
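
To make the precision concern above concrete, here is a minimal sketch of generic IEEE float16 vs float64 behaviour (plain NumPy, nothing Cerebras-specific): float16 carries only ~3 decimal digits of precision, so small updates and large magnitudes round away, which is why low-precision training workflows commonly keep master weights or accumulations in a higher precision.

```python
import numpy as np

# float64 keeps ~15-16 significant decimal digits, float16 only ~3.
print(np.float64(1.0) + np.float64(1e-4))    # 1.0001  -- small increment survives
print(np.float16(1.0) + np.float16(1e-4))    # 1.0     -- increment lost to rounding

print(np.float64(2048.0) + np.float64(1.0))  # 2049.0
print(np.float16(2048.0) + np.float16(1.0))  # 2048.0  -- float16 spacing at this magnitude is 2
```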

  • @ncascini01
    @ncascini01 9 months ago

    Wow! That bit at the end about not needing to write more code to expand the parameters/ use more chips.

  • @cizgifilmkolikcizgifilm1963
    @cizgifilmkolikcizgifilm1963 9 months ago

    Very professional👏🙏

  • @dannyboy4940
    @dannyboy4940 9 months ago +1

    I am astonished to see that beauty and science can coexist

  • @DS-uy6jw
    @DS-uy6jw 2 months ago

    Amazing content.

  • @TiborDevenyi-wd2ep
    @TiborDevenyi-wd2ep 6 months ago

    Thank you!

  • @Eugbreeze1
    @Eugbreeze1 9 months ago +1

    Good info 👍 I was able to get some shares at the IPO.......

  • @mdb1239
    @mdb1239 5 months ago

    Thanks. I never heard of this company. Amazing.

  • @pakjohn48
    @pakjohn48 9 months ago

    As an old-timer I appreciated the CEO's commentary when he threw in the term "sneaker net" while describing his AI monster.

  • @youdj_app
    @youdj_app 9 months ago

    Nice tan, super interesting video. AGI is around the corner... Merci!

  • @florianhofmann7553
    @florianhofmann7553 9 months ago +3

    With a core that size, aren't the yields extremely low? Is it even possible, given there is almost always a defect somewhere on the whole wafer? Or do the cores have some sort of fault tolerance built in, like deactivating the affected sections?

    • @Deciheximal
      @Deciheximal 9 months ago +1

      It's the fault-tolerance thing; it's the only way they can make it work at wafer scale with all the defects.
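
As a toy illustration of that kind of fault tolerance (a generic redundancy sketch with made-up figures, not Cerebras's actual repair scheme): fabricate a few spare cores per row, then remap the logical grid around whichever cores turn out defective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed figures for this sketch only.
ROWS, PHYS_COLS, SPARES_PER_ROW = 64, 68, 4   # physical grid with 4 spare columns per row
LOGICAL_COLS = PHYS_COLS - SPARES_PER_ROW     # 64 columns actually exposed to software
DEFECT_RATE = 0.01                            # assumed per-core defect probability

defective = rng.random((ROWS, PHYS_COLS)) < DEFECT_RATE
defects_per_row = defective.sum(axis=1)
repairable = defects_per_row <= SPARES_PER_ROW  # a row can hide its bad cores behind spares

print(f"defective cores: {defective.sum()} of {ROWS * PHYS_COLS}")
print(f"rows repairable by remapping onto spares: {repairable.sum()} / {ROWS}")
# Without redundancy, yield would require zero defects on the whole grid,
# which is why naive wafer-scale integration historically failed.
```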

  • @jonmichaelgalindo
    @jonmichaelgalindo 9 months ago +2

    Data quality matters vastly more than parameter count though. Improving LLMs and Stable Diffusion right now is all about figuring out how to get better data.

  • @dreamphoenix
    @dreamphoenix 9 months ago

    Thank you.

  • @oker59
    @oker59 9 months ago

    Loved the ASML shot

    • @oker59
      @oker59 9 months ago

      I saw this yesterday, and tried to see what the largest supercomputers are. I could have sworn I found 1.1 something exaflop; and the combined Cerebras was like 64 exaflops. Do I have that right?

    • @oker59
      @oker59 9 months ago

      I found a great article yesterday; this one quote stuck in my mind for some strange reason,
      "For example, a 40 billion-parameter network can be trained in about the same time as a 1 billion-parameter network if you devote 40-fold more hardware resources to it. Importantly, such a scale-up doesn’t require additional lines of code. Demonstrating linear scaling has historically been very troublesome because of the difficulty of dividing up big neural networks so they operate efficiently. “We scale linearly from 1 to 32 [CS-2s] with a keystroke,” he says."

    • @oker59
      @oker59 9 months ago

      Only 100 million dollars for one Condor Galaxy? That's the same price as a one-off Formula 1 car or even a stealth fighter; I'd call that a pretty good deal.

  • @nicknorthcutt7680
    @nicknorthcutt7680 9 months ago

    Wow that is a HUGE chip!! Amazing 😍

  • @swamihuman9395
    @swamihuman9395 9 months ago

    - Always interesting.
    - Thx.
    - I especially like the detail about important role of prep time to set up for training. These nuances can be lost in certain presentations of the data.
    - As a teacher/consultant, I find that the fundamental problem is an incorrect, and/or incomplete understanding of things. One must study wide, and deep, and question understanding along the way. Many people are not willing, or able to do this, or perhaps just don't think it's worth the time - but in some cases, they do so at their detriment; and others will succeed where they fail. But, to each their own, I guess.

  • @Bassotronics
    @Bassotronics 9 months ago +2

    Garfield was being nice to Odie when he was constantly trying to send him to Abu Dhabi.

  • @punk3900
    @punk3900 9 months ago

    Startup - end of story :D :D :D

  • @nitinhshah
    @nitinhshah 9 months ago

    You are an amazing presenter!!!

  • @PremiumUserUltra
    @PremiumUserUltra 9 months ago

    I love the art behind him.

  • @quantumsodapop
    @quantumsodapop 9 months ago +1

    Cool stuff

  • @SocialMediaSoup
    @SocialMediaSoup 7 months ago +1

    I have no idea what she is talking about, but I keep watching her videos.

  • @Dr-AK
    @Dr-AK 9 months ago

    This information popped up on my feed and turned out to be interesting as well. First time watching your channel. Some info was a bit over my head, like teraflops and what billions of parameters means. Thanks for an interesting video.

  • @revcrussell
    @revcrussell 9 months ago +1

    "Not everyone will get it" at 0:17 I got it. The _bootleneck_ with a boot on the neck of GPU supply. I kid, I love your content.

  • @OriginalRaveParty
    @OriginalRaveParty 9 months ago +2

    Fascinating

  • @TropicalCoder
    @TropicalCoder 9 months ago +1

    Fascinating! I wonder how they bridge all those wafers? Also wonder how they transport heat away from them. There are megawatts of heat produced in a much smaller volume than with GPU cabinets.

  • @DaScorp
    @DaScorp 9 months ago

    Wow... that's really professional information... Awesome, you don't get stuff like this often! :D Thanks!

  • @RalphDratman
    @RalphDratman 8 months ago +2

    What exactly does Feldman mean by "gradients" in the context of what is transmitted between geographically remote clusters?
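
For readers wondering the same thing: in plain data-parallel training (a generic technique; the video doesn't spell out Cerebras's exact protocol), each site trains an identical copy of the model on its own data shard, and only the gradients (one value per weight) are exchanged and averaged between sites. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=4)            # the same model replicated at every site

def local_gradient(weights: np.ndarray, shard: np.ndarray) -> np.ndarray:
    # Stand-in for a real backward pass over this site's data shard
    X, y = shard[:, :-1], shard[:, -1]
    return 2 * X.T @ (X @ weights - y) / len(y)   # gradient of mean squared error

shards = [rng.normal(size=(256, 5)) for _ in range(3)]   # three remote clusters
grads = [local_gradient(weights, s) for s in shards]

avg_grad = np.mean(grads, axis=0)       # the only thing that crosses between clusters
weights -= 0.01 * avg_grad              # every site applies the same update
print(avg_grad)
```

The training data itself never has to leave each site; only the gradient vectors do.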

  • @bapsy1
    @bapsy1 9 months ago

    Excellent presentation. Is the presentation done by a real person or a virtual AI clone supported by a super computer?

  • @KingTubeAR
    @KingTubeAR 9 months ago +3

    Do you think this will be successful enough to fill the supply gap in the AI GPU market?
    It would be amazing, because AI startups are starting to buy gaming GPUs, which are not designed with enough VRAM for AI work.

  • @dontfollowthinkforyourself
    @dontfollowthinkforyourself 9 months ago +1

    My human brain / biological computer operates at 1 exaFLOP and uses about 20 watts of power in its normal state. When I watch your channel I consume 40 watts and compute 1.4 exaFLOPs, because your beauty and intelligence are upgrading my FLOPs to reproduce my code. Love your channel, you have the best AI news.

  • @Ryan256
    @Ryan256 9 months ago

    Awesome!

  • @ChuckNorris-lf6vo
    @ChuckNorris-lf6vo 9 months ago

    Good job.

  • @richardsantomauro6947
    @richardsantomauro6947 9 months ago

    Bright, beautiful, charismatic, informative, relevant, entertaining

  • @beautifulsmall
    @beautifulsmall 9 months ago

    Putting everything on a single wafer must require some fascinating methods to isolate and bypass faults.

  • @norik1616
    @norik1616 9 months ago +2

    What is the cost per TFLOP? Power per TFLOP? Is it 64 wafers, each with 50x the performance of an A100, all drawing 1.75 MW? If so, they'd be drawing about 10% more power than the equivalent 500 W NVIDIA A100s (64 x 50 A100s).
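
Checking that back-of-envelope power comparison (the 64-wafer count, 50x-per-A100 equivalence, 1.75 MW system draw, and 500 W per A100 are all the commenter's assumptions, not confirmed figures):

```python
# All inputs are the commenter's assumptions, not confirmed figures.
wafers = 64
a100_equiv_per_wafer = 50
a100_power_w = 500
cerebras_system_power_w = 1.75e6

equiv_a100_farm_w = wafers * a100_equiv_per_wafer * a100_power_w   # 1.6 MW
ratio = cerebras_system_power_w / equiv_a100_farm_w

print(f"equivalent A100 farm: {equiv_a100_farm_w / 1e6:.2f} MW")
print(f"assumed Cerebras figure is {100 * (ratio - 1):.0f}% higher")  # ~9%
```

So under those assumptions the commenter's ~10% estimate checks out.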

  • @florin2tube
    @florin2tube 9 months ago

    A comparative analysis with Tesla's Dojo would be great 😊

  • @LuisMartinez-el6sn
    @LuisMartinez-el6sn 9 months ago

    Interesting!

  • @Raymond-rr5iv
    @Raymond-rr5iv 9 months ago +4

    What an interesting channel and a fabulous, clear presentation on this groundbreaking million-dollar AI hardware, which will probably facilitate the unimaginable in the near future. Anastasi and her co-host deliver a vivid picture of what the company created. It is very exciting... seeing the future unfold in this enormous leap forward. It tickles me to think that gamers, with their need for the fastest speeds ten or fifteen years ago, willing to pay top dollar to get what they wanted, would create a niche market that spawned the likes of this billion-plus computer chip made by Cerebras, only the size of an average floor tile but more powerful than anything known. This feeling of excitement seems to me like what it must have been to see the Wright brothers flying across the New York City skies for the first time. The significance of this chip is as yet unknown, but it is greatly anticipated to become probably the biggest scientific tsunami that will change our civilized world as we know it. An amazing development to learn about, and thank you for your excellent presentation.

  • @sebassanchezc-1379
    @sebassanchezc-1379 9 months ago

    Awesome🤯

  • @dawnrazor
    @dawnrazor 2 months ago

    This is a really interesting video. The part I struggle with is the performance comparison between nvidia and cerebras, seems like comparing apples to oranges. How many nvidia chips are equivalent to 1 cerebras? And then how do you define this equivalence? I suppose those papers you link to will have some details lurking in there somewhere but for now I’ll just rely on what is presented in this video.

  • @zonoskar
    @zonoskar 9 months ago +2

    Wondering what the yields are of those wafer scale units.

  • @craighutchinson1087
    @craighutchinson1087 9 months ago +1

    I guessed the company correctly before listening to the video. The TechTechPotato YouTube channel had some good content on this wafer-sized chip.
    Your video was very well presented.

  • @SubtleReed
    @SubtleReed 9 months ago

    From hierarchy to.... computation storage you create the "omni-sphere" - my new nerd word regarding the kernel portion.