Unboxing the Tenstorrent Grayskull AI Accelerator!

  • Published 26 Sep 2024
  • With all these AI hardware startups, people ask me when they can actually go buy them. Turns out, now you can! Here's some hands-on time with the Tenstorrent Grayskull, and a chat with Jasmina from Tenstorrent about how to get up and running!
    tenstorrent.co...
    -----------------------
    Need POTATO merch? There's a chip for that!
    merch.techtechp...
    more-moore.com : Sign up to the More Than Moore Newsletter
    / techtechpotato : Patreon gets you access to the TTP Discord server!
    Follow Ian on Twitter at / iancutress
    Follow TechTechPotato on Twitter at / techtechpotato
    If you're in the market for something from Amazon, please use the following links. TTP may receive a commission if you purchase anything through these links.
    Amazon USA : geni.us/Amazon...
    Amazon UK : geni.us/Amazon...
    Amazon CAN : geni.us/Amazon...
    Amazon GER : geni.us/Amazon...
    Amazon Other : geni.us/TTPAma...
    Ending music: An Jone - Night Run Away
    -----------------------
    Welcome to the TechTechPotato (c) Dr. Ian Cutress
    Ramblings about things related to Technology from an analyst for More Than Moore
    #techtechpotato #ai #tenstorrent
    ------------
    More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on TTP. The companies that fall under this banner include AMD, Applied Materials, Armari, Baidu, Facebook, IBM, Infineon, Intel, Lattice Semi, Linode, MediaTek, NordPass, ProteanTecs, Qualcomm, SiFive, Supermicro, Tenstorrent, TSMC.

COMMENTS • 139

  • @danielreed5199
    @danielreed5199 7 months ago +43

    Can't wait to get one of these, hold it above my head and shout "I HAVE THE POWER!!!!!"

    • @henriksundt7148
      @henriksundt7148 7 months ago +1

      I can literally hear the guitar riffs!

    • @kayakMike1000
      @kayakMike1000 7 months ago +2

      As Cringer transforms into Battle Cat!

  • @tipoomaster
    @tipoomaster 6 months ago +7

    8:49 lol at Ian playing it off like he was going for a smell and not a bite when she thought that

  • @magfal
    @magfal 7 months ago +29

    I hope these specialized chips completely take over the inference market and that future chips take over training at scale too.
    I would like to see sane prices for GPUs again.

    • @InnuendoXP
      @InnuendoXP 5 months ago

      Yeah, though hopefully we'll see fab capacity scale to account for both. A lot of the price of a GPU is determined by what price a chip of that size on that fab node can be sold for. It's one reason AMD doesn't bring Radeon prices down as low as it could: it would make Radeon less profitable than Zen, eat into their margins, and the two lines have to share whatever capacity AMD can get from TSMC. Having more market share as a publicly traded company isn't valuable if it doesn't also mean higher net profit to reinvest into R&D for future performance/feature gains, and AMD already got burned that way with Vega.

    • @bits360wastaken
      @bits360wastaken 3 months ago

      Pretty sure AI isn't what people are buying those cards for; running things locally is a very, very tiny market, and actually running AI models at a commercial level on consumer hardware is... not economical.

    • @incription
      @incription 2 months ago

      @@bits360wastaken Virtually every tech company is building software and hardware for local AI, what are you talking about?

  • @woolfel
    @woolfel 7 months ago +17

    I like the conversation about how you keep backward and forward compatibility. As a software engineer in the consulting space, I find compatibility is the blessing and curse of maintaining code.

    • @aapje
      @aapje 7 months ago +3

      Yeah, it seems a bit optimistic, especially when the API is very similar to the hardware.

  • @esra_erimez
    @esra_erimez 7 months ago +69

    I think I'm going to gift myself a Grayskull AI Accelerator for my birthday

  • @philt-one
    @philt-one 7 months ago +18

    The "It does fit in your desktop" is such an underrated burn on NVIDIA/AMD, haha 🔥

  • @Veptis
    @Veptis 7 months ago +23

    So Grayskull is useful for 2016 workloads?
    Wormhole seems barely useful for today's tasks either. Maybe image decoders, with that memory limitation? Does the compilation run completely on the card? I mean, there is a lot of compute on board - so could I run it as a language model server and then use my system for something else?
    Or am I supposed to buy 8 of these and put them into a server board with a server CPU? A single Groq card is 20k and has no memory.
    Developer-kit friendly, perhaps, but not researcher friendly, it seems.
    I want an inference card to run 70B models in my workstation, preferably directly via Accelerate, so I can write device-agnostic code: any model from HF at any precision, from fp32 to bf16 to fp16 to quantized models. So your roadmap is to be upstreamed to PyTorch 2.0 natively? That is about half a year late, and today we had the release of PyTorch 2.2. Intel is aiming to get their GPU upstreamed by PyTorch 2.5 in October, which will also be a backend switch to Triton.
    Perhaps I should sign up on the website and share my requirements.
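The device-agnostic workflow asked for here usually comes down to resolving a single device string once and moving the model and tensors to it. A minimal pure-Python sketch of just the selection logic (hypothetical: "tt" stands in for a Tenstorrent backend identifier; real code would consult torch.cuda.is_available() and the vendor plugin instead of a dict):

```python
def pick_device(available):
    """Return the first usable device from a fixed preference order.

    `available` maps device strings to booleans, standing in for
    runtime checks such as torch.cuda.is_available(). "tt" is a
    hypothetical identifier for a Tenstorrent backend; "cpu" is
    always treated as available, as the fallback.
    """
    for dev in ("cuda", "tt", "mps", "cpu"):
        if available.get(dev, dev == "cpu"):
            return dev

print(pick_device({}))                           # no accelerators: cpu
print(pick_device({"cuda": True}))               # prefers cuda
print(pick_device({"cuda": False, "tt": True}))  # picks whichever is present
```

Model code then only ever references the resolved string (e.g. `model.to(device)`), which is what lets the same script run unchanged on whichever backend gets upstreamed.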

    • @danielme17
      @danielme17 7 months ago +1

      They should forget that 8GB of LPDDR and just give us fast access to one or two NVMes, done. I would never complain about memory again.

    • @kazioo2
      @kazioo2 7 months ago +5

      @@danielme17 For what? You wouldn't be able to feed that compute with the puny bandwidth of an NVMe.

  • @solidreactor
    @solidreactor 7 months ago +4

    This is amazing news! Looking forward to ordering one.

  • @KiraSlith
    @KiraSlith 7 months ago +7

    6:20 Delta makes a 60x20 blower that'd fit the form factor far better. Slim down that unit to properly occupy a low-profile slot in, say, a compact machine like HP's e1000 micro servers. I'd also recommend bringing in a cheap low-power microcontroller to monitor and manage the fan's speed, to reduce overall system noise and allow optimizing the fan curve.

  • @dinoscheidt
    @dinoscheidt 7 months ago +7

    Even for Jim Keller it will be a hard task to catch up on 10 years of CUDA and the whole software stack that rests on top of it. I really hope they succeed. Software-hardware co-design is really the crucial aspect here.

    • @arthurswanson3285
      @arthurswanson3285 6 months ago +1

      They have to hit the hobbyist entry point to make a mark.

  • @emeraldbonsai
    @emeraldbonsai 7 months ago +20

    Maybe they explain this in the video, but it says TT-Buda: "Run any model right away". Since the Grayskull card is only 8GB, won't you be limited to models under 8GB, or can it leverage your CPU's RAM?

    • @kil98q
      @kil98q 7 months ago +1

      Yeah, same question... I'm down to paying that price, but if it's barely an advantage over a similarly priced GPU, then I might as well buy a more flexible GPU.

    • @wool1701
      @wool1701 7 months ago +4

      The latency is too high on the PCIe bus to use CPU RAM for large models with good performance. The only tensor accelerator I have seen that can effectively run large models fast in shared memory is the Apple M-series GPU, because it has a very good unified memory model and a high-bandwidth internal bus. (I have tried this on Ryzen with unified memory, but the iGPU is not significantly faster than the CPU for LLM inference. I tested PyTorch 2.1 / ROCm 5.7.1 on RDNA2 with Llama2:13b - AMD does not officially support ROCm on this GPU.)
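The point about the PCIe bus can be made with rough numbers: generating each token streams the full weight set once, so whichever link the weights sit behind sets a floor on per-token time. A back-of-envelope sketch (bandwidth figures are theoretical/spec peaks; 13 GB assumes a 13B-parameter model at one byte per weight):

```python
# Rough bandwidths in GB/s for each place the weights could live.
links = {
    "host RAM over PCIe 4.0 x16": 31.5,   # theoretical peak of the bus
    "Grayskull on-card LPDDR4":   118.0,  # figure quoted in this thread
    "RTX 3060 on-card GDDR6":     360.0,  # Nvidia's spec for the 12GB card
}

model_gb = 13.0  # ~13B parameters at int8 (one byte per weight)

for name, bw in links.items():
    # One full pass over the weights = at best one generated token.
    print(f"{name}: ~{model_gb / bw:.2f} s per token (lower bound)")
```

Even before any latency effects, the bus alone makes host-RAM spillover several times slower than keeping weights in on-card memory.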

    • @dinoscheidt
      @dinoscheidt 7 months ago +5

      It's simply not an LLM inference machine. Transformers, though heavily hyped right now, are a small subset of machine learning. Also, once you nail the architecture, it might be easier to extend into larger memory (bandwidth is the problem, not the size). Asking for more memory is like asking for more megapixels on a camera, completely forgetting that you need to be able to fill and page that large bucket.

    • @DigitalJedi
      @DigitalJedi 5 months ago +2

      @@dinoscheidt Agreed, and given they are positioned as a dev kit of sorts, this is more than enough memory for someone to get small test builds up and running that will scale to larger pools on future hardware.

  • @hedleyfurio
    @hedleyfurio 7 months ago +1

    Every success with these dev kits that allow developers to get their heads around the hardware and software stacks. The level of transparency and authenticity in all Tenstorrent interviews is very encouraging versus watching a slick marketing pitch to hype up the crowd. Many comments are about the LPDDR size, and perhaps those are from people wanting to plug in a card and run an LLM. The amazing tech in the chip and software stacks, with accessibility, is where the value is, as it is not difficult to place more LPDDR chips. Our application is a multimodal authentication inference engine at the edge, where speed, low power and accuracy are the key figures of merit, so we are looking forward to getting our hands on the dev kit.

  • @movax20h
    @movax20h 7 months ago +16

    Not a bad start. While it might not quite outperform something like a 7900 XT, pricing is decent, it's smaller and slightly more efficient, and software support already looks pretty good.
    But I think 8GB is going to be a bit limiting. Maybe with two cards installed it could be worth it for bigger models.
    Looking at the website, documentation and repos, it is all rather straightforward to use; the instructions and the structure of the pieces are easy to understand. So already ahead of AMD, for example.
    I really hope the tt-kmd driver gets mainlined into the upstream kernel first.

  • @carpetbomberz
    @carpetbomberz 7 months ago +3

    Sorry to use this reference, but as SJ used to say, "Great Products Ship". You cannot try things out unless they're manufactured and in your hands. 'Announcements' don't run LLMs. 😸

  • @Mr.Kim.T
    @Mr.Kim.T 7 months ago +5

    This reminds me of the Physx add-in cards some 15 years ago. Unfortunately for them, single graphics cards very quickly became fast enough to do in-game physics themselves without requiring a separate card for the purpose. NVIDIA just swallowed Physx whole… as it had done with 3dfx before it. Since then, NVIDIA’s dominance has become all-encompassing. I’ve known NVIDIA almost since its inception…. it’s a hard-nosed company that takes no prisoners. My advice for other A.I. companies is to keep out of NVIDIA’s crosshairs.

    • @arthurswanson3285
      @arthurswanson3285 6 months ago

      I remember those! Always wondered why they disappeared.

  • @colinmaharaj
    @colinmaharaj 7 months ago +9

    Doctorate in FPGA, impressive

  • @1x4x9
    @1x4x9 7 months ago +24

    Hopefully these guys knock nVIDIA down a peg in the future. Competition is good.

    • @Slav4o911
      @Slav4o911 7 months ago

      Not with 8GB LPDDR... they need VRAM (a lot of VRAM) and as high a bandwidth as possible.

  • @thegoldbug
    @thegoldbug 6 months ago +2

    I love the C64 tshirt!

  • @triularity
    @triularity 7 months ago +6

    AI-Man: "By the power of Grayskull.... I have the power!" 🙃

  • @fteoOpty64
    @fteoOpty64 7 months ago +7

    So Ian, you finally met your match! A solid PhD in FPGA, really good at the stuff, with elegance and beauty to match. What can I say? A unicorn is so, so rare... yet we are looking at one!

    • @fteoOpty64
      @fteoOpty64 7 months ago +2

      Get a stack of these and revive your Pi calculations to 100 trillion, please!

  • @zeljkanenad
    @zeljkanenad 7 months ago +3

    So many start-ups today are built with one and only one goal: to demonstrate something narrow and not sustainable on its own, and finally (the goal) be sold to big tech. Unfortunately, in the process they must sell their beta or 'dev-kit' product to customers, basically using them as a free workforce. Competing with Nvidia? Oh, please.
    This is presented as a dev kit, but for what purpose will someone invest their energy, hoping that the whole proprietary stack will not die and that it will be able to scale in the future? Basically, show me a real-life use case for this dev kit today, in its current form.
    Regardless of the above, it was a pleasure listening to Jasmina and Ian discussing the topic. Good job, Ian. And all the best, Jasmina. Hope Nvidia buys you for billions :)

  • @esra_erimez
    @esra_erimez 7 months ago +19

    I'm very envious of people that can program FPGAs. I have a masters in Comp Sci and no matter how much I try, I can't get my head wrapped around FPGAs and emacs.

    • @first-thoughtgiver-of-will2456
      @first-thoughtgiver-of-will2456 7 months ago

      Just start with VHDL on an FPGA with a good GUI studio. If that's still too difficult and you have money for a license, I'd recommend LabVIEW. It can target FPGAs as well as CPUs and is a graphical programming environment (no-code solution) that is extremely approachable.

    • @danielreed5199
      @danielreed5199 7 months ago +9

      I find that they are easier to program if you take them to their natural habitat... the countryside.
      I think they are mainly used by cattle farmers to systematically control a set of access points to the pastures.
      Basically... if you are out standing in your field (Computer Science) you will be able to figure it out.
      On a serious note though... don't give up trying. With every attempt, although it may not seem like it, you are getting better at it; some things just have crazy steep learning curves.
      I am pretty sure that a lot of concepts you learned in CS took a while to sink in, but they did :)
      I hope you are able to envy your future self :)

    • @s7473
      @s7473 7 months ago +2

      When I studied digital electronics around 2001 we started with basic logic gates and built a traffic light system. I can't remember the name of the software we used, but it was a Xilinx FPGA we worked with, and it was mostly drag-and-drop placement to build up a digital circuit diagram that could be exported to the chip. It was much easier than programming an 8086 microcontroller in assembly language. :)

    • @sailorbob74133
      @sailorbob74133 7 months ago +2

      What? FPGAs can't be programmed in VIM?

    • @esra_erimez
      @esra_erimez 7 months ago

      @@sailorbob74133 🤣

  • @Delease
    @Delease 7 months ago +2

    I'm very interested to know what Tenstorrent's plans are, if any, for getting their Linux drivers upstreamed into the mainline kernel. Having upstreamed drivers would really go a long way toward giving me confidence that these cards are going to have long-term software support, independent of the fortunes of the company that created them.

  • @theworddoner
    @theworddoner 7 months ago +9

    I wish them all the best and success.
    The dev kit memory seems a bit tiny, doesn't it? It's 8GB with a bandwidth of 118GB/s.
    What can you do with that?
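For scale, the "what fits in 8 GB" question can be answered with weights-only arithmetic: parameter count times bytes per parameter (activations and KV cache need extra headroom on top, so these numbers are optimistic):

```python
def weight_gb(params_billions, bytes_per_param):
    # 1e9 parameters per "billion" cancels the 1e9 bytes per GB,
    # so GB of weights = billions of params x bytes per param.
    return params_billions * bytes_per_param

CARD_GB = 8  # the Grayskull dev kit's stated capacity

for name, params, bpp in [
    ("7B @ fp16", 7, 2),
    ("7B @ int8", 7, 1),
    ("7B @ int4", 7, 0.5),
    ("1.3B @ fp16", 1.3, 2),
]:
    gb = weight_gb(params, bpp)
    verdict = "fits" if gb < CARD_GB else "does not fit"
    print(f"{name}: {gb:.1f} GB of weights -> {verdict} in {CARD_GB} GB")
```

So a 7B model only fits quantized; at fp16 the weights alone are already 14 GB.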

    • @lbgstzockt8493
      @lbgstzockt8493 7 months ago +2

      Maybe it streams from your system RAM and just caches?

    • @LunarLaker
      @LunarLaker 7 months ago +4

      Start developing on it :) It's like how ARM workstations absolutely sucked until stuff like Altra showed up; there just had to be something available to see what works.

    • @LtdJorge
      @LtdJorge 7 months ago

      @@lbgstzockt8493 System RAM has abysmally low bandwidth.

    • @Slav4o911
      @Slav4o911 7 months ago +1

      @@lbgstzockt8493 That would be slow... I can tell you from practice, once your model spills outside of VRAM it gets very slow. A small spillover is sometimes not too detrimental, but it still slashes your speed 2x or 3x... of course that's still better than 20x slower. Nvidia GPUs are literal AI monsters.

    • @WyattFredaCowie
      @WyattFredaCowie 7 months ago

      Yeah that's a $200 6600XT. Not quite sure what their idea is here, especially when GPUs are already extremely efficient for machine learning

  • @cem_kaya
    @cem_kaya 7 months ago +7

    In developing Windows support, do you consider WSL?

  • @OpenAITutor
    @OpenAITutor 5 months ago +1

    I'd like to see how much it can accelerate inference. Some performance numbers would be great.

  • @ErikS-
    @ErikS- 7 months ago +2

    The logo will be something that catches the attention of AMD's legal department...
    If I were judge/jury in a trial on the IP, I would most certainly see a conflict with AMD's logo.

  • @davorcapalija9383
    @davorcapalija9383 7 months ago +4

  • @danielmeegan6259
    @danielmeegan6259 7 months ago

    Thank you 👍

  • @nathanfife2890
    @nathanfife2890 7 months ago +3

    I'm interested. What kind of performance difference do you get between Nvidia graphics cards and these accelerators?
    I'm assuming it's not as good as a 4090 or something, but it's still probably significantly better than just running on my 16-core CPU.
    So where in that range does this thing sit? Or is it more about the interesting framework that enables more creative development?

    • @Slav4o911
      @Slav4o911 7 months ago

      Just looking at the bandwidth, it would be about 3x slower than an RTX 3060... for 2x more money... so not good... and I don't believe they have faster tensor cores. But even with faster tensor cores, the limiting factor is the bandwidth, not the tensors.
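The "3x slower" estimate follows directly from memory-bound decoding: each generated token reads the full weight set once, so throughput is capped at bandwidth divided by model size. A back-of-envelope sketch (360 GB/s is Nvidia's spec for the RTX 3060 12GB; 7 GB assumes a 7B model at one byte per weight):

```python
def max_tokens_per_sec(bandwidth_gbs, model_gb):
    # Upper bound for memory-bound autoregressive decode:
    # one full pass over the weights per generated token.
    return bandwidth_gbs / model_gb

model_gb = 7.0  # ~7B parameters at int8
for name, bw in [("Grayskull, 118 GB/s", 118.0), ("RTX 3060, 360 GB/s", 360.0)]:
    print(f"{name}: <= {max_tokens_per_sec(bw, model_gb):.0f} tokens/s")

# The bandwidth ratio is what drives the comment's estimate:
print(f"ratio: {360.0 / 118.0:.1f}x")
```

Real throughput lands below these ceilings (compute, scheduling, KV-cache reads), but the ratio between two memory-bound devices tracks the bandwidth ratio.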

  • @tinto278
    @tinto278 7 months ago +3

    Is this the SDI/3dfx 3D-accelerator moment for AI accelerators?

    • @Slav4o911
      @Slav4o911 7 months ago

      Nah.... not even close.

    • @tinto278
      @tinto278 7 months ago

      @@Slav4o911 Reagan's Star Wars and Jurassic Park are coming.

  • @Zero-oq1jk
    @Zero-oq1jk 5 months ago

    Is there any chance we see RISC-V laptops and PCs? Like Ascalon or anything else... Or will ARM be the only option there?

  • @MultiMojo
    @MultiMojo 7 months ago +7

    8 GB of memory in 2024 just plain sucks. LLMs are all the rage right now, and the smallest one with 7B parameters needs at least 16 GB VRAM (DDR6 not DDR4). I don't see how anyone would be interested in these over the H100, which everyone drools over. At least increase the memory to 128 GB+ to drive some interest.

    • @TechTechPotato
      @TechTechPotato  7 months ago +2

      They're dev kits :) It's in the name.

    • @nadiaplaysgames2550
      @nadiaplaysgames2550 7 months ago

      @@TechTechPotato Would they be looking into using standard DDR RAM as a slower cache, or even an SSD as a direct connection? Have an SSD do a bulk transfer of contiguous memory, load a chunk of the model, then load it into RAM in FILO order and have it running in a loop.

    • @jjhw2941
      @jjhw2941 7 months ago

      @@TechTechPotato My Nvidia AGX Orin with 64GB of RAM is also a dev kit :)

    • @Slav4o911
      @Slav4o911 7 months ago

      @@nadiaplaysgames2550 Offloading to SSD would be very slow, even on the fastest SSD. Even spillover to RAM makes models very slow... I don't even try to offload to the SSD (and possibly destroy it, because it would be very heavily used). I mean, if a model fits fully inside VRAM and you use the streaming option, the model will start to answer in around 5 seconds; with a small RAM spillover the answer time progressively slows to 20-30 seconds; if the model runs fully in RAM, you'll wait 200-300 seconds (depending on context length), which doesn't feel like chat, more like sending an e-mail and waiting for an answer... possible, but not fun at all. If it spills over to the SSD, the answer will probably come after an hour... if the SSD doesn't explode before that.

    • @GeekProdigyGuy
      @GeekProdigyGuy 7 months ago +3

      Hilarious how much ignorance one comment demonstrates. How are you comparing an $800 dev kit card to an H100, which is north of $40K? A literal 50x price difference. Not to mention calling VRAM "DDR6, not DDR4" when GDDR6 is generationally aligned with DDR4, just specialized for GPUs.

  • @OOMed
    @OOMed 7 months ago

    That's right. That's right. That's right.

  • @Maximilian-sd9xy
    @Maximilian-sd9xy 3 months ago

    Hello, can you stack them into a cluster?

  • @Phil-D83
    @Phil-D83 7 months ago +1

    By the power of...AI!

  • @cannesahs
    @cannesahs 7 months ago +14

    For once, real engineering talk instead of pure marketing s*it 👍

  • @Johan-rm6ec
    @Johan-rm6ec 1 month ago

    Nice Ian, but the C64 was before your time.

    • @TechTechPotato
      @TechTechPotato  1 month ago

      I'm older than I look. The C64 was my first system when I was young.

  • @MrAtomUniverse
    @MrAtomUniverse 4 months ago

    Groq is way better, right?

  • @qeter129
    @qeter129 7 months ago +2

    Get a consumer card with 48GB or more of memory out there for less than $1500 and you'll make hundreds of billions on edge AI computing. Please free us from the green giant and his little red minion.

  • @pengcheng8299
    @pengcheng8299 7 months ago +1

    Why was it branded "Taiwan" if the contract went to a Samsung fab?

    • @TechTechPotato
      @TechTechPotato  7 months ago +3

      This chip was technically GF I think. Packaging likely done in TW.

    • @predabot__6778
      @predabot__6778 7 months ago +1

      @@TechTechPotato Wait, GlobalFoundries...? But GF doesn't even have a 7/10nm fab -- how would these cards even be able to match a 4060 Ti with a process as old as 14/12nm?

  • @zebobm
    @zebobm 7 months ago +3

    Doctorate corn: 2 PhDs for the price of 1.
    But seriously, where will these chips be used from a consumer standpoint?

    • @NotAnonymousNo80014
      @NotAnonymousNo80014 7 months ago +3

      In pushing Nvidia out of AI so they can return to making graphics cards. :D

    • @LunarLaker
      @LunarLaker 7 months ago +4

      For training models used by consumers - probably smaller or more niche ones, given the big software players have their own chips or the capital for NV. As a consumer you're probably never going to buy your own inferencing card, but much further down the line you might see Tenstorrent IP in your CPU.

  • @kitastro
    @kitastro 7 months ago +2

    take my money

  • @NNokia-jz6jb
    @NNokia-jz6jb 7 months ago

    Good Commodore t-shirt. ❤

  • @ChrisJackson-js8rd
    @ChrisJackson-js8rd 7 months ago +2

    Are they hiring?

  • @theHardwareBench
    @theHardwareBench 7 months ago

    I've not heard Jim Keller mentioned since he bailed out of the Ryzen project in 2015. Considering how bad those early CPUs were, I'm guessing AMD didn't listen to his advice. Pretty sure he wouldn't think having the cache speed locked to RAM speed was a good idea.

    • @TechTechPotato
      @TechTechPotato  7 months ago +1

      I've interviewed him multiple times! Covered his time at Intel, when he left, and his new ventures!

    • @theHardwareBench
      @theHardwareBench 7 months ago

      Cool, I'll look through your old videos. I discovered you through an article you wrote interviewing the head of Intel's OC lab in 2020. I'm getting back into overclocking and looking for an edge. He said per-core overclocking was the way forward, but I can't see how that's going to improve any of my CPU or 3DMark scores lol.

    • @woobilicious.
      @woobilicious. 4 months ago

      What the hell are you talking about? Zen 1 saved AMD; it was widely considered the saving grace for AMD, and even though it wasn't completely killing Intel's Core, it was a viable alternative that everyone celebrated. I also doubt you have any clue how to build a CPU if you're commenting on AMD tying RAM and L3 cache clocks together; cache coherency is literally the hardest problem in computer engineering, and I would gamble that Intel's chiplet design will do something similar.

  • @Arcticwhir
    @Arcticwhir 7 months ago +1

    What's the performance like?

    • @Slav4o911
      @Slav4o911 7 months ago +1

      Just looking at the bandwidth number, it would be slow even if their tensor cores are fast. You need fast RAM and a lot of bandwidth to feed the tensors; otherwise it's slow. Just look at how much bandwidth Nvidia GPUs have. Even if their tensor cores are faster than Nvidia's (which seems impossible to believe), they would need to feed them. Also, why didn't they put in more RAM, at least 32GB? 8GB is very small; you can buy a 16GB RTX for about that price, which can start working immediately without any hassle.

  • @metallurgico
    @metallurgico 7 months ago +6

    that fan placement looks so janky lol

  • @NNokia-jz6jb
    @NNokia-jz6jb 7 months ago

    What can i do with this card?

  • @sheevys
    @sheevys 7 months ago +1

    That's right

  • @tindy128011
    @tindy128011 6 months ago

    Can I use it to run a Minecraft server?

  • @MrChurch69
    @MrChurch69 6 months ago

    Can you play games on that thing?

  • @pebre79
    @pebre79 7 months ago

    Benchmarks?

  • @igor25able
    @igor25able 7 months ago +1

    No need to support Windows; CUDA doesn't support it anymore, so the community has shifted to Linux completely. Support for WSL is quite enough.

  • @nadiaplaysgames2550
    @nadiaplaysgames2550 7 months ago

    What card do I need for a local LLM?

    • @nadiaplaysgames2550
      @nadiaplaysgames2550 7 months ago

      @user-ef2rv9el9x Yup, I fixed that today, I got a new card today.

    • @jjhw2941
      @jjhw2941 7 months ago

      @user-ef2rv9el9x People are using Macs and MacBooks because of the unified high-speed memory as well.

    • @Slav4o911
      @Slav4o911 7 months ago

      Some Nvidia RTX with at least 16GB or more of VRAM... so definitely not 8GB. I have an RTX 3060 12GB and it's not enough for the bigger models, and once your model spills into regular RAM it becomes slow, so more VRAM is better. Also keep in mind AMD and Intel will not help you; you'll have a hard time running LLM models on them (if you have a problem, almost nobody will help you, because everybody uses Nvidia), and the models are optimized only for Nvidia GPUs.

    • @nadiaplaysgames2550
      @nadiaplaysgames2550 7 months ago

      @@Slav4o911 Doing some research, the 4060 Ti is the highest you can get without selling an organ. I just hope the split memory bus and 8x lanes won't mess me up.

    • @nadiaplaysgames2550
      @nadiaplaysgames2550 7 months ago

      @@Slav4o911 Anything bigger than 16GB is a 4090.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 7 months ago

    Would be nice if you could use it in combination with MATLAB - interesting product. Interesting woman, very eloquent.

  • @michela1537
    @michela1537 7 months ago

    Thank you for sharing ;-) We need more women in AI... urgent to balance the outcome of humanity and AI!!

  • @Johnmoe_
    @Johnmoe_ 7 months ago +1

    8GB of LPDDR4... for $599... bruh 💀. It's an interesting project, don't get me wrong, but I could do better with an off-the-shelf Nvidia GPU.

  • @ErikS-
    @ErikS- 7 months ago

    I was originally very enthusiastic about RISC-V.
    But from what I hear and see, it is just not performant and crashes continuously.
    I am hopeful for the future, but until it is picked up by a credible company like Qualcomm / Intel / AMD / Nvidia / ARM / Samsung / ..., I doubt it will get to a mature point.

  • @mirerek
    @mirerek 7 months ago

    So many RISC-V cores to process ML? I don't believe it's worth it.

    • @bartios
      @bartios 7 months ago +1

      Don't know if you're aware, but those cores implement a ton of custom instructions optimized for AI. That, and all the networking etc., is where they get their TOPS/FLOPS.

    • @mirerek
      @mirerek 7 months ago

      @@bartios keep custom, remove cores)))

    • @TechTechPotato
      @TechTechPotato  7 months ago +7

      No, this isn't RISC-V cores. It's Tensix cores.

    • @oj0024
      @oj0024 7 months ago +2

      The Tensix cores are supposed to have five RISC-V control cores and a large compute engine. I'm not sure what the RISC-V cores in Grayskull actually are, though (extension-wise).

  • @waldmensch2010
    @waldmensch2010 7 months ago +1

    nice hardwarep0rn 🙂

  • @maruma2013
    @maruma2013 6 months ago

    Will they sell AI chips to consumers?
    We need someone to save us from Jensen Huang.

    • @TechTechPotato
      @TechTechPotato  6 months ago

      You can buy them now

    • @maruma2013
      @maruma2013 6 months ago

      I see.
      Anyway, I want to see a demo running on Grayskull.

  • @julianneEVdmca
    @julianneEVdmca 7 months ago

    OKAY ! WHAT IS AI Accelerator again!??!! CUZ YOU ALL SHOWING HARDWARE BUT IT JUST SOFTWARE!! why keep showing me pci-card when you can literally use usb.2!!
    is it funny to sell FREE-chatgpt as a new monster graphic chip!!??
    i not gamer to fool me by DLSS & RTX !! YOU TALKING TO I.T VIEWER NOT SOME HOME GAMING USER! SO WHO YOU WANT TO FOOL WITH THIS??! WHO!!

  • @pynchia4119
    @pynchia4119 6 months ago +1

    Pity there's too much BOTOX. She cannot even move her mouth anymore, left alone smiling fully. OMGoodness

  • @TDCIYB77
    @TDCIYB77 7 months ago

    Is Ian flirting? 😂

  • @iseverynametakenwtf1
    @iseverynametakenwtf1 7 months ago

    I feel uncomfortable watching this. Such an awkward thing

  • @666Maeglin
    @666Maeglin 7 months ago +7

    So pretty and smart..

    • @tuqe
      @tuqe 7 months ago +18

      Saying things like that makes someone feel uncomfortable and is weird

    • @solarin_
      @solarin_ 7 months ago +22

      I do love Ian's Commodore shirt yes

    • @packapunchburger
      @packapunchburger 7 months ago +2

      I do love some well designed and placed pogo pins myself

    • @AK-vx4dy
      @AK-vx4dy 7 months ago

      @@tuqe But it's true, though I would say smart first.

    • @tuqe
      @tuqe 7 months ago +5

      @@AK-vx4dy Nah, it still comes across as someone who has not spent enough time around women to realize that they are humans.

  • @DanielJoyce
    @DanielJoyce 7 months ago +1

    So this dev board can only run tiny models from a few years ago? Disappointing. Even their bigger boards only have like 12GB.

  • @joealtona2532
    @joealtona2532 7 months ago +1

    Grayskull is tagged 2021 on their own roadmap? Isn't it too little, too late?

  • @UnicornLaunching
    @UnicornLaunching 6 months ago

    Now that they have enterprise partnerships, they're getting into the rhythm of shipping. Devs will test, extrapolate use cases, and report feedback. Same as it ever was. Not enough memory? Same for every piece of hardware that gained traction.

  • @danielgomez2503
    @danielgomez2503 7 months ago

    Does anyone have a discount code to share?