How long can Nvidia stay monolithic?

  • Published 10 Jun 2024
  • Is Nvidia losing the chiplet race? AMD & Intel are already building complex chiplet GPUs while Nvidia seems to be stuck with monolithic designs. Let's take a closer look!
    Support me on Patreon: www.patreon.com/user?u=46978634
    Follow me on Twitter: / highyieldyt
    0:00 Intro
    1:24 Nvidia Chiplet R&D
    3:48 Gaming vs HPC/AI GPUs
    7:05 Process Node Implications / TSMC N3E
    9:24 High-NA EU Implications
    10:50 Energy & Space Efficiency
    11:50 Competition from AMD & Intel
    12:44 Conclusion
  • Science & Technology

COMMENTS • 207

  • @deeznuts-lj2lv
    @deeznuts-lj2lv 10 months ago +82

    I think Nvidia will max out the die limit and only use chiplets as a compromise. AMD and Intel have less to risk and more to gain from this approach.

    • @CrazyKingJammy
      @CrazyKingJammy 9 months ago +3

      Yeah, Nvidia is too far ahead of the game; they need chiplets in their back pocket to remain competitive in case Intel or AMD has a breakthrough. lol, just how I'm viewing it.

    • @heyhoe168
      @heyhoe168 9 months ago

      @@CrazyKingJammy I suspect one does not move to chiplets in just one generation. Which means Nvidia's progress in single-die design may actually mean... heavy underdevelopment in the direction of chiplet splitting.

    • @CrazyKingJammy
      @CrazyKingJammy 9 months ago +1

      @@heyhoe168 I think Nvidia is ahead of the R&D game. They may have allowed chiplet development to slip, but technically, Apple is pulling off chiplets via their Ultra M series.
      Nvidia just doesn't need chiplets right now, and let's see if they lose their throne in the near future, but I don't see that.
      They are ahead with the ARM/GPU stack; the only other folks in that league are Apple.

    • @heyhoe168
      @heyhoe168 9 months ago +3

      @@CrazyKingJammy chiplets mean lower prices. Nvidia might not need them, but we surely do. Btw, AMD's GPU chiplets are underdeveloped too. For some reason only the CPUs have a worthy chiplet structure.

    • @CrazyKingJammy
      @CrazyKingJammy 9 months ago

      @@heyhoe168 of course we would love more affordable consumer products!
      But I think it's wise of Nvidia to keep that card in their back pocket; once their competitors get too cheap/powerful, Nvidia has a way to move and be more competitive.

  • @HighYield
    @HighYield  10 months ago +34

    This is the same video I shared on Patreon almost two weeks ago, so if you watched this, you have already seen it. Next video will come sooner, pinky promise!

    • @zesanurrahman6778
      @zesanurrahman6778 10 months ago

      It can't, 'cause that would need the PC master race creating a GPU that is faster than Nvidia's and cheaper

  • @pwmaudio
    @pwmaudio 10 months ago +73

    Overall good analysis in this video, but you forget the most important (and in fact the only) reason why NVDA hasn't moved to chiplets yet: the limited packaging/interposer capacity (CoWoS in the case of H100) and HBM TSV production machines. Currently, NV can get any quantity of dies from TSMC N4 but can't get enough HBM or package them fast enough to meet market demand... to the point that a third packaging factory is opening in Taiwan and NV has already booked its production for next year. Samsung is also opening a new packaging fab in Korea (for HBM CoWoS) to sustain NV's business.
    Otherwise, keep up the good work. Nice channel 👍

    • @SirMo
      @SirMo 10 months ago +7

      CoWoS capacity is not difficult to scale. The packaging machines are nowhere near the complexity of the lithography equipment. TSMC is expanding this capacity rapidly, and I don't see it being a bottleneck long term.

    • @HighYield
      @HighYield  10 months ago +17

      I agree with you in the sense that packaging (and HBM) is currently the limiting factor for manufacturing high-end GPUs, but H100 already uses CoWoS, just to attach the die and the HBM chips onto the interposer rather than to connect multiple chiplets. So advanced packaging is already used for current-gen Nvidia HPC/AI GPUs.
      For gaming the argument stands.

    • @pwmaudio
      @pwmaudio 10 months ago +1

      Totally agree, but what I said is still true. Capacity is being built to meet future demand, but it was not enough for client GPUs, which sell in much, much higher numbers than the DC H100. And I'm not even talking about cost... @@SirMo

    • @pwmaudio
      @pwmaudio 10 months ago +3

      Client GPU quantity is on a totally different scale than the DC A/H100. And it's even easier for AMD when you have less than 10% market share and only one SKU is a chiplet design... @@HighYield

  • @MickenCZProfi
    @MickenCZProfi 10 months ago +68

    Thank you for this video, always very informative. I had no idea that next-gen EUV lithography shrinks the reticle limit by a factor of 2; that changes everything.

    • @HighYield
      @HighYield  10 months ago +16

      I knew it was getting smaller, but I was again reminded about the huge decrease by a random comment on twitter. I think many ppl underestimate the impact high-NA EUV will have.

    • @MickenCZProfi
      @MickenCZProfi 10 months ago +4

      @@HighYield Yeah, for sure. Actually, I heard about it today in MLID's leak video, and it brought up a good point: this might actually make Nvidia cancel the 90 class of GPUs for a few years, because it will have to be smaller and won't be able to compete with the previous generations. Of course I do expect them to use chiplets for hyperscaler + AI stuff as you said, but for consumer GPUs, it might be harder to justify a new design.

    • @maynardburger
      @maynardburger 9 months ago

      @@MickenCZProfi Intel is expected to have its first High-NA machines active in manufacturing sometime in 2025 (assuming no delays). TSMC will likely not have the same capabilities until 2026. And even then we know that gaming GPUs are usually at least a year behind on leading-edge nodes, so Nvidia's Blackwell GeForce parts will likely be unaffected by any reticle-limit issues. And beyond that, it's more than likely gonna be another two years for the next generation, at which point they'll have had time to get on top of things. Shouldn't be an issue, and they will continue to have high-end consumer GPUs every generation. Also, don't get caught up with naming. The 90 series used to be the 80 series. It's not actually a new class of part.

    • @JavoCover
      @JavoCover 9 months ago

      @@maynardburger Is that why Intel kept with the 14+++++++++ node for so long? Like waiting for the big change.

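The reticle-limit point debated in this thread can be made concrete with quick arithmetic. A short sketch; the field sizes are the commonly cited scanner figures and the H100 die size is approximate, none of these numbers come from the comments themselves:

```python
# Standard EUV scanners expose a 26 mm x 33 mm field; High-NA (0.55 NA)
# anamorphic optics halve one axis, giving a 26 mm x 16.5 mm field.
std_field_mm2 = 26 * 33        # 858 mm^2 max die size today
high_na_field_mm2 = 26 * 16.5  # 429 mm^2 under High-NA

# An H100-class die (~814 mm^2) fits a standard exposure,
# but not a single High-NA exposure.
h100_mm2 = 814
print(h100_mm2 <= std_field_mm2)      # True
print(h100_mm2 <= high_na_field_mm2)  # False
```

Under these assumptions a reticle-sized monolithic die simply stops being manufacturable in one exposure, which is exactly the pressure toward stitching or chiplets the video describes.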
  • @RobBCactive
    @RobBCactive 10 months ago +33

    Rather than just calling game graphics latency-dependent, it's better to realise that the frames are tightly coupled in a way that HPC calculations aren't.
    An algorithm bouncing rays off a surface needs the texture & colour to be known, for example; if these rays are scattered and reflected then you need all of that early-pass data to be available.
    I've seen explanations that games effectively have a global area; splitting it across dies is believed to cause problems.
    The difference is that if you could pipeline frames without synchronous requirements, then each could take longer than the frame time, so long as they can start early on a wide enough GPU that can process several in-flight frames.
    So long as the output frames respond to user input quickly, latency would still appear low.

    • @hammerheadcorvette4
      @hammerheadcorvette4 10 months ago +3

      VERY solid points. Some could be solved in software with a form of checkerboarding as you process.

    • @GeekProdigyGuy
      @GeekProdigyGuy 10 months ago +3

      1. Before realtime ray tracing was available, there were very few "global" calculations. However, two separate GPUs (Xfire/SLI/dual-chip designs) would have to be synchronized on processing of each frame to avoid tearing. To my knowledge the latency contributes significantly to making this synchronization difficult, even with relatively low inter-chip bandwidth usage.
      2. Of course as modern games increasingly implement and rely on RT, what you said about global information may become more applicable.
      3. There is no way to pipeline frames which can reduce the fundamental input lag; if it takes 10ms to render a frame, the dependency on user input means the input lag can never drop below 10ms. While you can increase the framerate with such pipelining, and possibly as a result smooth out the input lag, the total frame render time will be observable as input lag by the end user.

    • @RobBCactive
      @RobBCactive 10 months ago +1

      @@GeekProdigyGuy originally dual GPU duplicated VRAM data and each handled alternate frames. Tearing is caused by changing the display in the midst of its refresh hence setting v-sync or free-sync with the monitor avoids it.
      The point about pipelining was to show the limit of asynchronous operation in some super wide GPU, we know algorithms now use movement vectors and differences between frames, but you'd need to sample user input late enough to meet latency requirements.
      But seriously those differ between games, not every game is a twitch shooter.

    • @shanent5793
      @shanent5793 10 months ago

      Pixels are mostly computed independently, even when ray-tracing. GPUs were invented for rasterization workloads where the same sequence of instructions are executed with data individual to each pixel and the CPU has already decided which triangles to draw. It's different for ray-tracing, each pixel requires multiple rays and the rays can scatter randomly. Rays are grouped and assigned to a compute unit or GPU core, and some rays will immediately hit a light source and terminate while others will reflect and scatter until the iteration limit is reached. When a ray terminates early the CU resources for that ray sit idle waiting for the others to finish, unlike a serial CPU which could immediately start processing the next ray. The trick is to find a way to maximize utilization by grouping rays that follow a similar path onto the same CU.
      Bounding Volume Hierarchy (BVH) is one such optimization, but it creates a dependency that has to be completed before the rays can be assigned resources, though the BVH is usually small enough to fit inside cache, so duplication across GPU chiplets isn't a great waste. BVH could even have its own specialized accelerators, like an array of simple CPU cores that execute the same cached program and can be reassigned to a new task while others iterate.
      The frames should not be pipelined in a latency sensitive game, ideally a frame is displayed, then inputs gathered, then geometry calculated and submitted to the GPU, then the frame is drawn and displayed, with no overlap between the stages. This gives the lowest possible latency. If your pipeline is five frames deep taking 50 ms to draw a frame, 50 ms is the minimum latency even if a new frame is displayed every 10ms (100 fps). Widening the processor so it draws the frame in 15ms with no pipeline means 15ms minimum latency despite the frame rate dropping to 67 FPS.
      Milliseconds is plenty of time to exploit pipelining and streaming across the individual pixels and effectively hide nanosecond VRAM and inter GPU module communication latency. Chiplets may be slower in some areas but since the majority of the work is still parallel there is plenty of performance to gain with a chiplet design.

    • @RobBCactive
      @RobBCactive 10 months ago

      @@shanent5793 your explanation suggests a cause of interdependence: utilisation depends on correct grouping. Rendering pixels independently of each other doesn't mean they have no dependency on the same data. We know that for lighting and other algorithms they depend on each other, with a frame constructed in passes.
      The question is how you break up that work across GCDs and how the large volume of data moves efficiently between them while meeting cost targets.
      That's different from long runs of predictable calculations on large vectors.
      We do know that multi-GPU has required v. high bandwidth connections between the parts, which are expensive. HPC & render farms without a real-time constraint can break up tasks over many processors.
      Now the best information known suggests the plans for multi-GPU RDNA4 have been shelved. Also, RDNA3 split memory control & cache away from the GCD, but chose a single GCD in the first iteration.
      Right now we know RDNA3 missed its expected launch performance, and it doesn't appear to have a simple fix, with no word of a new stepping and refresh leaking.
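shanent5793's latency arithmetic above can be checked with a tiny model. This is a deliberately simplified sketch: it assumes input is sampled when a frame starts rendering and displayed when it finishes, ignoring CPU time and display scan-out:

```python
def frame_stats(render_ms, frames_in_flight):
    """Simplified model: the GPU overlaps `frames_in_flight` frames,
    so a new frame completes every render_ms / frames_in_flight ms,
    but each frame still takes the full render_ms from input to display."""
    interval_ms = render_ms / frames_in_flight
    fps = round(1000 / interval_ms, 1)
    min_latency_ms = render_ms
    return fps, min_latency_ms

# 5-deep pipeline, 50 ms per frame: 100 fps, yet 50 ms input lag
print(frame_stats(50, 5))   # (100.0, 50)

# wider GPU, no pipelining, 15 ms per frame: only ~67 fps, but 15 ms lag
print(frame_stats(15, 1))   # (66.7, 15)
```

The model reproduces the comment's numbers: pipelining raises the frame rate without reducing the input latency, which stays pinned at the per-frame render time.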

  • @mikelay5360
    @mikelay5360 10 months ago +37

    They will stick with monolithic for as long as they need to, in gaming at least.
    Remember, NVIDIA is not one to rest on their laurels. I am 100% sure they have chiplet-based chips in their R&D labs just waiting for the right time to pull the trigger.

    • @user-lp5wb2rb3v
      @user-lp5wb2rb3v 10 months ago +11

      Exactly, they will keep milking the market, and if they can't, they will market their way as better.
      For example, Nvidia could have released the 780 Ti in 2012, the 980 Ti in 2014 and the 1080 Ti (which is cut down, with 11GB not 12) in 2016, but they milked instead.
      Notice how people cried about the R9 290X consuming too much power / being loud, yet look at how silly the 4090 is in comparison. And somehow people would rather buy the 4090 than a car lol

    • @mikelay5360
      @mikelay5360 10 months ago +3

      @@N_N23296 Intel's fall was 10 years in the making. When NVIDIA starts to fall, we will definitely know from experience.. actually, rumours suggest that AMD is the one giving up 😂 but let's see

    • @mikelay5360
      @mikelay5360 10 months ago +1

      @@N_N23296 you go where the money is. Even AMD and Intel tend to focus more on the server side because 'money'! Gaming is a niche in these times !

    • @mikelay5360
      @mikelay5360 10 months ago +2

      @@N_N23296 ohh I see now 🤣 AMD this! AMD that !

    • @26Guenter
      @26Guenter 8 months ago

      If Nvidia had a chiplet architecture they would release it.

  • @samghost13
    @samghost13 10 months ago +8

    Thank you very much! I'm always looking forward to new videos from your channel

  • @Innosos
    @Innosos 10 months ago +19

    If I had to make a guess, the next gen will just be a small refinement of Lovelace with larger dies (a.k.a. 50, 60, 70 and 80 class GPUs with typical 50, 60, 70 and 80 class die sizes), since there's so much space left this generation.

    • @charleshorseman55
      @charleshorseman55 3 months ago

      Try smaller dies, larger transistors, run higher frequency. Oh wait, that's what usually happens.

  • @VideogamesAsArt
    @VideogamesAsArt 8 months ago

    Always enjoy watching and hearing your opinions. You do very good analysis, keep up the good work!

  • @SirMo
    @SirMo 10 months ago +8

    Nvidia's entire origin story has always been about building the biggest chip possible. The reason they haven't gone to chiplets is related to this paradigm of always having the largest chip. As you said, their margins and scale allowed for this, and no one else could follow since they simply didn't have the volumes to justify the cost. But this advantage is going away. And I think companies like AMD have far more experience with chiplets.

    • @maynardburger
      @maynardburger 9 months ago +4

      I think underestimating Nvidia on the technology front is a very big mistake. There are only a tiny handful of processor companies in the world with comparable resources, and Nvidia has a pretty strong track record of execution. I expect when they do make a move to MCM/stacking, they're gonna do very well with it. We should also not forget that AMD is piggybacking heavily on TSMC's technologies, which Nvidia will also have access to when it comes time.

    • @SirMo
      @SirMo 9 months ago +7

      @@maynardburger People underestimate AMD's technology. It is Nvidia who's piggybacking on AMD's technology, for example: AMD invented HBM, which Nvidia uses heavily in the datacenter. AMD also has the strongest CPU and FPGA development cadre.

  • @ramr7051
    @ramr7051 10 months ago

    good to see you back :) hope everything is going well for you

    • @HighYield
      @HighYield  10 months ago +1

      It's actually going very well, both in my job and personally. Let's see if I can get back to at least bi-weekly videos. I have been slacking a bit... ;)

  • @lahma69
    @lahma69 9 months ago

    First time viewer of your channel here and I really enjoyed hearing your opinion on this topic which I've been thinking a lot about lately. I look forward to exploring your past and future content!

    • @HighYield
      @HighYield  9 months ago

      I hope my other content doesn't disappoint ;)

  • @theevilmuppet
    @theevilmuppet 10 months ago +10

    Wow - your work has always been amazing but you're continuing to improve your presentation and focus on the critical details.
    Please, keep going!

    • @HighYield
      @HighYield  10 months ago +2

      Thank you so much! I will keep making videos as long as I have fun doing so :)

    • @theevilmuppet
      @theevilmuppet 10 months ago

      @@HighYield And I'll keep watching them as long as you're making them!

  • @81Treez
    @81Treez 10 months ago

    You deserve more subs. Great content.

  • @ipurelike
    @ipurelike 10 months ago

    Makes sense, thanks for being super informative!

  • @andikunar7183
    @andikunar7183 10 months ago

    Great video, thanks a lot!

  • @i_scopes_i3914
    @i_scopes_i3914 9 months ago

    Hey Max, what do you think of the possibilities of the Gen-Z interconnect, and if/when it will be utilized?

  • @BecomeMonke
    @BecomeMonke 9 months ago

    Wow you made some real dry topic real interesting to listen to, thanks for the video

  • @RealLifeTech187
    @RealLifeTech187 10 months ago +4

    I would say Hopper Next is monolithic, as Nvidia tries to capitalize on the AI boom with an early release, before the competition can launch something more interesting. Big corporations aren't that willing to take risks, as they have a leadership role to lose, while the underdog(s) can, as they don't have a brand to lose if it doesn't work. Hopper Next Next will for sure be MCM because of the reticle limit. Maybe Hopper Next is an intermediary generation and we see both a monolithic chip launching first to ensure leadership, followed by a risky MCM on the same architecture which takes longer to develop and has the potential to beat it.

  • @Alex-ii5pm
    @Alex-ii5pm 10 months ago +7

    Chiplets are used for cost savings; they get better yields from smaller silicon and less wastage. Monolithic will always be superior for gaming GPUs.

    • @HighYield
      @HighYield  10 months ago +8

      Chiplets are not always used for cost savings, even tho the most famous chiplet design (AMD's Zen 2) used them for that.
      For example, Meteor Lake is most likely more expensive to produce than its monolithic predecessors, and chiplets can also be used to achieve much higher performance, because a monolithic chip has a hard die-size and thus transistor-count limit. MI300, for example, is faster than any possible monolithic chip AMD could design. I even quote an Nvidia research paper in the video which states that a proposed chiplet architecture can be 45% faster than the largest monolithic chip.

    • @Alex-ii5pm
      @Alex-ii5pm 10 months ago +1

      @@HighYield In production tasks it will be better; however, the increased latency due to the nature of chiplets will suck for gaming. I can see why Nvidia still uses monolithic designs, at least for their gaming products. Compare the first Zen CPUs to the monolithic Intel CPUs: in production workloads they were awesome, but in gaming they were horrible due to the high latency of the chiplet design. I can see the new chiplet/tile-based Intel CPUs having the same issue in gaming tasks; we will either see no performance change or a regression. Only time will tell.

    • @Fractal_32
      @Fractal_32 10 months ago

      @@Alex-ii5pm Well, current implementations of chiplets are not the best for gaming latency-wise; they may be in the future, since it's a new technology that hasn't been fully adapted to gaming applications.
      Maybe chiplets will be even better in the future, since a given chiplet could be focused on a fixed function/operation instead of more general use cases.

    • @soraaoixxthebluesky
      @soraaoixxthebluesky 10 months ago

      @@N_N23296 If you look at Ryzen: on Zen+ they're using a 4+4 config for the 2700 & 2700X, but then switching to a single compute die (with a separate I/O die) for Zen 2 on the 3700X & 3800X, you can clearly see a huge performance increase (part of it is also them switching to TSMC).
      The 3100 vs 3300X is also real-world testimony to that.
      The only reason you see a performance gain (as stated in the Nvidia research paper) is the massive transistor-count difference between the monolithic design and the chiplets, as you can easily scale transistors up with chiplets on the same process node.
      In latency-sensitive applications like gaming, where saturating the compute units becomes a huge challenge, chiplet designs with similar transistor counts will always fall behind.

    • @lefthornet
      @lefthornet 9 months ago

      @@Alex-ii5pm As far as I know, the main issue with RDNA 3 chiplets was a render issue at high clocks; that's why they missed the performance target. The chiplets didn't affect gaming performance, because all the compute units were together. So probably in the short and medium term that will be the future of gaming GPUs until the latency issue gets solved; if some console uses a chiplet design, engines will probably improve their optimisation for distributing the workload.
      On the other hand, Ryzen with 3D cache is the best for gaming right now and doesn't have any latency issues. A monolithic Ryzen has 40-50 ns core-to-core latency (we get that data from the APUs, which are monolithic), while chiplet Ryzens get 50-70 ns (the variation depends on the Infinity Fabric frequency, and therefore on the RAM frequency). At that scale no human can perceive the difference; it is literally orders of magnitude below our senses. Chiplets and other manufacturing innovations are necessary, because chip manufacturing is too close to the limits of physics and there is no viable replacement for silicon. Yes, graphene is a candidate, but there has been no big breakthrough in manufacturing it at scale. Until then, it's only silicon, and it has a limit: physics. That is a hard limit, and we are really close to it.
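The yield argument running through this thread can be sketched with a textbook Poisson yield model. The defect density and die sizes below are illustrative assumptions, not real foundry numbers:

```python
import math

def die_yield(area_mm2, defects_per_cm2=0.1):
    """Poisson model: probability a die of the given area has zero defects."""
    return math.exp(-area_mm2 * defects_per_cm2 / 100)

# Silicon spent per good product: a defective monolithic die wastes the
# whole 800 mm^2, while chiplets can be tested and binned individually.
mono = 800 / die_yield(800)            # ~1780 mm^2 per good big die
chiplets = 4 * (200 / die_yield(200))  # ~977 mm^2 per good set of four
print(round(mono), round(chiplets))    # 1780 977
```

With these made-up numbers the four-chiplet product burns roughly 45% less silicon per good unit than the single big die, which is the cost pull toward chiplets; the counter-argument in this thread is the latency and interconnect overhead that splitting adds.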

  • @morgan3392
    @morgan3392 10 months ago

    Thoroughly enjoyed this video. Understood nothing, but appreciate it all the same!

  • @El.Duder-ino
    @El.Duder-ino 10 months ago +2

    I completely agree with you and your predictions. Chiplets are without a doubt the future of chip design, and Nvidia will have to jump on this wagon sooner rather than later, especially with enterprise chips.

  • @2dozen22s
    @2dozen22s 9 months ago

    There is a lot of upcoming tech that will primarily only push logic density forward.
    With high-NA halving the reticle limit, and GAA + backside power delivery increasing complexity, it might be unwise, or even uneconomical, to put L3 or L2 on the die at all, necessitating die stacking to maintain the necessary bandwidth/latency.
    Hopefully the thermal reductions gained from GAA and backside power will be enough to just stack cache directly onto the logic without issues.

    • @maynardburger
      @maynardburger 9 months ago

      Yea, large cache chips that can be stacked underneath the compute die are the future. It lets you have a lot more cache, while also freeing up room for more compute (or just going with a smaller die with the same amount of compute).

  • @josephm3615
    @josephm3615 9 months ago

    Great video.

  • @MrArunvappukuttan
    @MrArunvappukuttan 9 months ago

    Very good analysis. One generic drawback of chiplets is the higher power, area and latency that die-to-die PHYs and controllers contribute. But none of this will matter if the max reticle size is reduced to half!

  • @andycarr3711
    @andycarr3711 9 months ago

    You were excellent on Broken Silicon. Like, subscribe and best wishes.

    • @HighYield
      @HighYield  9 months ago

      Thank you. New video should be coming up soonishTM

  • @EmreHepsag
    @EmreHepsag 9 months ago

    Great video

    • @HighYield
      @HighYield  9 months ago +1

      Great comment ;)

  • @jabcreations
    @jabcreations 9 months ago +3

    Nvidia's engineers are damn capable, the problem is they work for Nvidia.

  • @darrell857
    @darrell857 3 months ago +1

    Nvidia will continue to produce giant chips, since they have perfected how to do it and the margins support it. To stretch that as far as it can go, they will make chips more and more specialized for particular models or customers.

  • @timparker9174
    @timparker9174 1 month ago

    Do a deep dive into Nvidia's next chip! You explain these complicated processes very well. Although, with hindsight Nvidia made another monolithic chip. Love to hear your take on it. Thanks

  • @chriskaradimos9394
    @chriskaradimos9394 10 months ago

    great video

  • @Akveet
    @Akveet 10 months ago +13

    Nvidia historically waits for a generation to implement some new technology compared to the competitors. Because given their lead they can outperform the opposition on older tech saving money in the process. As soon as the savings from the new tech become measurable, Nvidia switches to the new tech.

    • @Wobbothe3rd
      @Wobbothe3rd 10 months ago +2

      Lol, "some new technology" CHIPLETS ARE BAD

    • @thomasfischer9259
      @thomasfischer9259 10 months ago +2

      Major green cope

    • @Akveet
      @Akveet 10 months ago +6

      @@thomasfischer9259 I don't even have an Nvidia GPU, I'm rocking a 5700 XT. I'm just stating the facts. Nvidia is technologically ahead, so they juice every last cent out of the cheaper technologies before switching to the newer ones.

    • @How23497
      @How23497 10 months ago

      @@Wobbothe3rd You literally watched a 14-minute video explaining how chiplets are the only way forward to continue increasing computational performance, and you make this dumbass comment? Why 😂

    • @baoquoc3710
      @baoquoc3710 10 months ago +1

      @@thomasfischer9259 Well, if he coped, the 7900 XTX would be way better than the RTX 4070 Ti without the problem of a gargantuan amount of power consumption

  • @niyazzmoithu20
    @niyazzmoithu20 10 months ago +1

    Isn't monolithic more efficient?

  • @R4K1B-
    @R4K1B- 10 months ago +6

    Does it matter, if they deliver industry-leading performance, efficiency and profit?
    Nvidia won't rush into things just for the sake of doing it.

    • @GeekProdigyGuy
      @GeekProdigyGuy 10 months ago +2

      Did you watch the video? The whole point is he thinks they CAN'T keep leading performance and efficiency forever without switching to chiplets...

  • @fatplanediaries
    @fatplanediaries 10 months ago

    Your videos are chips and cheese in video form. I hope you grow big!

    • @HighYield
      @HighYield  9 months ago

      Thank you for the compliment, but chips and cheese goes much more in-depth than I ever could. These guys are on another level!

  • @DJaquithFL
    @DJaquithFL 10 months ago +1

    **Chiplet is synonymous with cheap.** There's no other upside. A monolithic CPU or GPU doesn't have its components separated by millions of nanometers of added, unnecessary latency. Even in Intel's tile approach, each tile is specialized: the GPU, SoC, and CPU are all on their own tiles to avoid the latency cost of downgrading from a monolithic design.

  • @simplemechanics246
    @simplemechanics246 10 months ago

    Chiplets make custom final assemblies possible: add more compute units, mix cores with different clock speeds, increase or decrease L3, graphics, etc. Consumers could pay loads of money to get very, very unique assembled units. All that is required is to make future systems ready for easy modification. Yes, every unit would need a special motherboard firmware update, but that is no big deal to add to a custom assembly bill. Everything is based on the customer-selected chipset anyway; it's not rocket science to add the custom software. I am 100% sure they could sell crazy things that way, even consumer units costing several thousand euros.

  • @rookiebird9382
    @rookiebird9382 10 months ago

    High NA EUV was said to be available in 2023. Now they say it will be available in 2025.

  • @JoeLion55
    @JoeLion55 6 months ago +1

    Why has SRAM size stopped scaling?

    • @HighYield
      @HighYield  6 months ago +1

      Check out this video: ua-cam.com/video/vQ5JPqeFitM/v-deo.html

    • @JoeLion55
      @JoeLion55 6 months ago

      @@HighYield Thanks, that's great info. Do you have any explainers as to what the physical limitation for SRAM scaling is? As a DRAM engineer I am well aware of the manufacturing problems DRAM has as we try to scale the wordlines, bitlines, and capacitors. However, I had always assumed that SRAM would continue to scale directly with the logic transistor sizes.
      And now that I write that, it occurs to me that the latest logic process nodes are less focused on transistor scaling and more on block layout, optimizing power delivery, minimizing dark silicon, etc. And I assume all of that optimization has already occurred in the SRAM arrays, so as long as the transistors stay the same size there's nothing else to do in the SRAM array. Is that the right track?

  • @jjdizz1l
    @jjdizz1l 8 months ago

    Interesting take. I would have to agree that standing still is not the best course of action.

  • @henrycook859
    @henrycook859 10 months ago

    I think Google's TPUs are on track to be competitive with Nvidia and AMD for AI training, though not consumer GPUs.

  • @thevillain8151
    @thevillain8151 9 months ago

    So why not 3D monolithic chips instead of chiplets? Won't that be way better, since you won't need extra interconnects for the chiplets to communicate with each other?

    • @maynardburger
      @maynardburger 9 months ago

      At some point, perhaps yea. But stacking compute layers on top of each other has huge heat problems that need to be solved first. That may take a while for any kind of high performance applications.

  • @BGS_1
    @BGS_1 9 months ago +1

    I know RDNA 3's chiplet structure seems like a dud as of right now but I believe AMD made the right choice to try it out early on so they can work out the kinks as soon as possible before Nvidia takes over the consumer market with chiplets. Nvidia, with their insane war chest for R&D, might just knock it out the park from the get-go.

  • @Kaptime
    @Kaptime 10 months ago +1

    The economics of a chiplet based design speaks for itself, it's the clear choice going forward.

  • @ahmedp8009
    @ahmedp8009 9 months ago

    Can you make a video explaining why CPUs are limited to 2 threads per core?
    Why don't we have, let's say, a 4-core CPU with 12 threads (3 threads per core), etc.?

    • @HighYield
      @HighYield  9 months ago

      This is simply due to the fact that Intel and AMD only implement SMT2 (which means a single core can run two threads). IBM, for example, has CPUs that offer SMT4 and even SMT8. The more SMT threads you use, the lower the overall scaling, but it's possible to run more than 2 threads per core if you design it that way.

    • @ahmedp8009
      @ahmedp8009 9 months ago

      @@HighYield I see, thanks!

    • @pyromen321
      @pyromen321 8 months ago

      @@HighYield Also worth adding: under certain workloads SMT literally does not improve performance, so it doesn't make sense to add more threads. It's really only good for tasks with frequent waits on high-latency operations (or programs that haven't been optimized at all).
      When two or more threads running optimized code compete for ports and execution units, each thread on its own is capable of filling more than half of the ports and execution units. Typical CPUs now evaluate well over 200 instructions at a time and find a way to reorder them to run as many instructions in parallel as possible (search for "reorder buffer" for more info).
      From what I've seen, as branch prediction and reorder buffers have improved, the practical benefits of SMT have plummeted.
      You could theoretically design a program that runs just as fast on a single core with SMT as on two cores without SMT, but it would be either incredibly naive or incredibly tricky.
      A naive example I just thought of: one thread doing an integer cumulative sum and another doing a floating-point cumulative sum. In this case, the reorder buffer wouldn't be much help to either thread, and neither thread would slam the other's arithmetic ports (depending on the architecture, that is).

  • @WSS_the_OG
    @WSS_the_OG 10 months ago +7

    In my view, Nvidia can stay monolithic for as long as it likes, thanks to the high margins on its products. The main advantage of moving to chiplet or tile-based designs is lower silicon cost. So while it might mean more money in Nvidia's pocket, it's not like they're hurting for money at the moment; they're swimming in profits, with AI only opening a new golden era of profit potential for them.
    There's nothing inherently wrong with monolithic chips, except for the large write-off a chip defect might incur. If you're making as much money as Nvidia, you can afford that loss.
    Also, if we look at AMD, it's not like they're passing the savings of their chiplet designs on to consumers anyway; they're just pocketing the money they save.

    • @MacA60230
      @MacA60230 10 months ago +5

      You didn't watch the video, did you?

  • @Timberjac
    @Timberjac 5 months ago

    Since Nvidia is testing manufacturing processes on Intel's angstrom-era nodes, I don't think they'll have much trouble adapting.

  • @sailorbob74133
    @sailorbob74133 10 months ago +1

    Nvidia and Jensen are super smart, but also a bit arrogant. I could see them waiting until the High-NA generation to deploy chiplets...

  • @labloke5020
    @labloke5020 8 months ago

    How about Gaudi?

  • @darwinmarinda7866
    @darwinmarinda7866 10 months ago

    Why are there no triangular microchips? All of them are squares.

    • @charleshorseman55
      @charleshorseman55 3 months ago

      Or how about amorphous? Infinite divisions of pi!

  • @dr.python
    @dr.python 10 months ago +1

    I just hope Intel, AMD, or Nvidia is the first to move away from x86 toward the ARM architecture, with the next generation of consoles based solely on ARM. It's clear it will eventually happen; the question is when.

    • @maynardburger
      @maynardburger 9 months ago

      I don't know why we'd hope for that, personally. ARM isn't really inherently better as a whole, and the efficiency advantages people tout now will shrink as it is further developed and complicated. I really don't look forward to the software issues ARM PCs will face for quite a number of years while compatibility problems, translation software, and so on get ironed out. Consoles especially might have to lose all backwards compatibility, which would be a heavy blow for both gamers and the industry in general.

    • @dr.python
      @dr.python 9 months ago +1

      @@maynardburger It's not a question of whether it's better or not, but of the future we're headed toward and how we get there. If there is only one manufacturer (Apple) using ARM in a world where most devs have optimized for ARM, then it'll be a monopoly, and that won't be good, since the transition to ARM is inevitable. If you can argue that the transition to ARM is not inevitable, then you might have a case.

  • @leorickpccenter
    @leorickpccenter 10 months ago +1

    Nvidia knows the problems with the chiplet approach for graphics. They have looked at it and deemed it not ready, or too problematic. But at some point they will have to switch, and that will be a problem, because by then Intel and AMD may have solved these issues.

  • @mattmexor2882
    @mattmexor2882 9 months ago

    From what I remember, that hypothetical MCM research chip from Nvidia was faster because it used more die area. Monolithic is always better for performance and energy efficiency, at least at the scale of what fits on one monolithic die. Since Nvidia GPUs are scaled up much larger than what fits on a single interposer (for the vast majority of their revenue they lash 8 reticle-limit GPUs together with NVLink to make a single node, then lash many nodes together with NVLink and/or InfiniBand to make pods), any advantage chiplets give at larger package sizes mostly gets washed out during that further scaling.
    I believe Nvidia would like to avoid excessive use of modules as much as they can and instead rely on their SerDes expertise and in-package optical I/O. Of course they will likely need tiles to some extent eventually, and in-package optical I/O itself will rely on chiplets, but I think they would like to limit tile/chiplet use to where it is most economically advantageous and tackle scaling and bandwidth issues with optics rather than with advanced packaging.

  • @przemekbundy
    @przemekbundy a month ago

    I always wonder how "they" do it all. My point is: how do they not get it wrong, how do they not get lost in those millions of transistors, not to mention every redesign of every structure? The way I look at it, it's like looking at a sky full of stars.

  • @tiagomnm
    @tiagomnm 9 months ago

    NVIDIA announced it will supply MediaTek with GPU chiplets for use in automotive chips.
    GPUs, but not exactly consumer ones.

  • @Savitarax
    @Savitarax 10 months ago +2

    I feel quite confident that Nvidia will make the 5090 an MCM design, given just how massive the 4090 is and how much TSMC is struggling to make smaller and smaller chips.

    • @maynardburger
      @maynardburger 9 months ago +2

      The 4090 isn't especially massive. It's smaller than the 3090/AD102 was. Quite a bit smaller than the 2080Ti/TU102 was. And heck, the 4090 is actually more cut down than the 3090 was, even with the slightly smaller die. 4090 is more like what the 3080Ti was.

    • @kaystephan2610
      @kaystephan2610 9 months ago +1

      4090 isn't particularly massive.
      3090Ti was 628mm²
      2080Ti was 754mm²
      Only 1080Ti was significantly smaller for reasons mentioned in the video
      980Ti was 601mm²
      780Ti was 561mm²
      So the 4090 isn't especially large. 600+mm² surely is very big for consumer cards, but it's a regular thing in the enthusiast space.

    • @mrrolandlawrence
      @mrrolandlawrence 9 months ago

      TSMC are not struggling. They are at the cutting edge & creating new technology as we speak. Creating new technology is hard. Always has been.

  • @MacA60230
    @MacA60230 10 months ago +3

    Yeah, Nvidia is moving to chiplets sooner rather than later. I also think they'll do it in an impressive way; out of the trio of AMD, Intel, and Nvidia, they're the absolute best at executing. It's one of the reasons Nvidia is so dominant: they just don't mess up.
    As such, I don't expect some timid first try for Hopper Next, but a full-fledged cutting-edge chiplet design.

  • @yoppindia
    @yoppindia 10 months ago

    Only a couple of years ago Nvidia used to promote SLI-based GPUs. How can you say GPUs don't scale across multiple chiplets? Latency in an SLI configuration is higher than it would be between chiplets. It's a question of will, not of means.

  • @Lu5ck
    @Lu5ck 10 months ago

    Chiplets on gaming GPUs are just too difficult unless there's a breakthrough in moving huge amounts of data. AMD will have an advantage in chiplet design, since AMD builds both general-purpose CPUs and GPUs, so they have more ways to gain knowledge and experiment.

    • @DetectiveAMPM
      @DetectiveAMPM 10 months ago

      Just too difficult, until the PS6 or PS7 uses a chiplet-based design from AMD.

  • @ZackSNetwork
    @ZackSNetwork 10 months ago +1

    I don't see Nvidia going multi-chip until the RTX 60 series, and then exclusively on the RTX 6090 in 2027. Multi-chip should then appear on the 90- and 80-class GPUs in the 70 series in 2029. Unlike AMD, Nvidia will only go multi-chip when they need to.

  • @Anonymous______________
    @Anonymous______________ 9 months ago

    Umm, ignoring latency for the sake of throughput/bandwidth will inevitably come back to screw you. This is especially true for wiring and connections at the nm scale.

  • @bigcazza5260
    @bigcazza5260 10 months ago +1

    Stuck? lol, Nvidia has the best MCM tech and is just waiting until they need it.

  • @juancarlospizarromendez3954
    @juancarlospizarromendez3954 10 months ago

    Putting chips together to save gold wires.

  • @grospoulpe951
    @grospoulpe951 10 months ago +1

    AI chips? I guess they will go chiplet.
    GPU chips? Well, rumors say RDNA 4 will not have a high-end chip (i.e. a chiplet design like Navi 31 and Navi 32; latency problems, as you mentioned?), focusing instead on Navi 43 and Navi 44 (probably monolithic); and Nvidia's Ada successor will probably be monolithic even at the high end (xx102), using, I guess, TSMC N3 or better plus some architectural improvements.
    So maybe in 2026+ AMD will come back to the high end with an RDNA 5 chiplet design (Navi 51?) to compete with Nvidia on chiplets too (using TSMC/Samsung 2nm or so).
    2026 is still a long way off...

    • @lunascomments3024
      @lunascomments3024 7 months ago

      It's because the prices are not sustainable for AMD to produce high-end products; going to newer nodes increases not only the price but also the design complexity.

    • @grospoulpe951
      @grospoulpe951 7 months ago

      @@lunascomments3024 True. AMD has at least two choices: increase prices (as Nvidia did) or sell more units to compensate...

    • @grospoulpe951
      @grospoulpe951 7 months ago

      And, of course, (really) increase performance, especially in the "mid-range" GPUs (i.e. Navi 42/52/...; Navi 21/31/51... are high-end to me...).

  • @falsevacuum1988
    @falsevacuum1988 a month ago

    And you were right, Nvidia made Blackwell from 2 chiplets.

  • @oscarcharliezulu
    @oscarcharliezulu 10 months ago

    I'm sure when Nvidia brings out a chiplet or tile design it will blow us away.

  • @ChinchillaBONK
    @ChinchillaBONK 10 months ago +2

    Thanks for addressing this issue. I was wondering why the stock market keeps pushing Nvidia's stock price so high, knowing we are beginning to reach the physical limits of monolithic silicon designs.
    Chiplet designs seem to be the immediate future of traditional silicon chips for at least the next 10-15 years, before other computing tech, like photonic neural-network chips or quantum chips, starts to take over.

    • @LeonardTavast
      @LeonardTavast 10 months ago +1

      Quantum computing is only faster than traditional computing for a limited set of workloads and requires cooling the chips down almost to 0K. It will probably never become mainstream.

  • @shanent5793
    @shanent5793 10 months ago

    Rendering computer graphics for interactive games is the easiest thing to adapt to multithreading, i.e., an embarrassingly parallel workload. Gamers are sensitive to latency measured in milliseconds, while nanoseconds can bottleneck an HPC or ML job. Graphics calculations are mostly independent and processed as streams, completely hiding any latency. All that matters for interactive graphics is that all the pixels get drawn in time, and there are very few dependencies that don't fit in cache. So I think you have the latency sensitivities of games vs. AI/HPC completely backwards.
    AMD GPUs currently only use memory-cache chiplets because it's the first generation and that was the lowest risk with the highest reward potential, not because of any limitation in scaling graphics applications to modular GPUs.

  • @hishnash
    @hishnash 7 months ago

    They might be, but more like Apple's Ultra chips, with a massive die-to-die bandwidth bridge.

  • @mikebruzzone9570
    @mikebruzzone9570 10 months ago +1

    Nvidia owns TSMC N4 capacity and will simply ride the 4nm depreciated cost curve down, introducing the Blackwell return-to-desktop design generation in mass-market volumes from the Ada mobile design generation, produced at a higher cost-to-price ratio but good for 50M units of AMD and Intel mobile H attach during the H-mobile producers' ramp, plus some HPC cards on 4nm risk production, also more costly (around 3x TSMC 5nm cost), but Nvidia is making money with 4nm now and into the future. Pursuant SIP, slowly but surely. mb

  • @pandoorapirat8644
    @pandoorapirat8644 6 months ago

    Blackwell will use a chiplet design.

  • @stellabckw2033
    @stellabckw2033 10 months ago +3

    Why call a *new* technology "Ponte Vecchio" if it means *old* bridge in Italian? lol

    • @RobBCactive
      @RobBCactive 10 months ago

      Same reason they chose Crater & Cougar Lake as codenames ~snigger~

  • @mannyc19
    @mannyc19 10 months ago

    At 9:06 you're forgetting about 3D V-Cache... Nvidia can stack in 3D; same with reticle limits, just stack upward. How long will massive dies last? Honestly? Several years to come, as Jim Keller said a few months ago when asked. He would know, with his insider knowledge. I can think of at least two more who know for sure; Jensen Huang is #2, but there are others as well versed as Jim, etc.

  • @venzoah
    @venzoah 9 months ago

    An even better question is: how long can Apple stay monolithic? The M1 and M2 are huge.

  • @TheEclecticDyslexic
    @TheEclecticDyslexic 10 months ago

    They will put it off as long as humanly possible. Because they are comfortable where they are and would prefer to do nothing if they can.

  • @mylittlepimo736
    @mylittlepimo736 9 months ago

    Why do you think Apple hasn’t adopted a chiplet design?

  • @semape292
    @semape292 10 months ago +1

    I think Nvidia will use chiplets with the RTX 6000 series.

  • @aacasd
    @aacasd 10 months ago

    Considering the GH200 specs, NVDA still has an edge over AMD and INTC. Moreover, their software stack is more than a decade ahead, so even if AMD wins on chiplets, they won't see wide adoption due to poor software support. This gives NVDA enough time to spend on chiplet R&D and still stay ahead of AMD. INTC is far behind AMD, so it's not fair to compare them.

  • @baumstamp5989
    @baumstamp5989 9 months ago

    Nvidia has put so much energy and effort into gaming GPU market share that they have truly lost sight of compute/datacenter development.

  • @Ludak021
    @Ludak021 10 months ago

    Who told you that nVidia is in the chiplet race?

  • @tek_soup
    @tek_soup 8 months ago

    Yeah, I agree, we gamers are screwed. I'm pissed they didn't put DisplayPort 2.1 on the 4090, so we'll hopefully get a refresh of the 4090, but that's going to cost $$$ because the 50 series isn't coming until 2025. I'm sure they planned it this way, bastards.

  • @7lllll
    @7lllll 10 months ago

    I hope the latency issue won't leave gaming GPUs stuck in the mud with monolithic dies and performance stagnation.

  • @user-me5eb8pk5v
    @user-me5eb8pk5v 10 months ago

    long as the money pig keeps shaking it;s rear end, ohh budy, now thats some serious generocity, just keep them on their bellies. My methodolidy would be to grab the sow and make photonic wells matrix convolution, then chipleps need pick and place hardware, so these are cell phone optical to electric converters. We can squaze the piggy bak, I mean if you have a very pyramidal investmement, ooh budy, oh yah.

  • @danburke6568
    @danburke6568 10 months ago

    Nvidia's 5000 series is not a chiplet design, and AMD is having problems with RDNA 4.
    There's no way and no point pushing themselves when they're the only ones with the crown.
    The problem may be the 6000/7000 series, when AMD will be putting out some solid hardware.
    Will Nvidia have an "Intel moment" and fail in development, letting AMD run away from them? Maybe, but Nvidia has mindshare like Intel did and will come out on top for many years yet.

  • @chrisgarner5765
    @chrisgarner5765 10 months ago

    They already have a faster, more stable interconnect than AMD, so they can do what they want at any time! Nvidia can connect whole GPUs together faster than AMD can connect chiplets, so all of it is kind of moot!

  • @bobbyboygaming2157
    @bobbyboygaming2157 4 months ago

    Isn't monolithic "better" anyway? Chiplets seem like they just create more problems to solve. It's a production-cost thing more than anything else; but since all the costs get passed on to the consumer, I guess you could say it's better for us that they all start using chiplets.

  • @Raja995mh33
    @Raja995mh33 10 months ago +1

    I mean, Nvidia and also Apple don't use chiplets, and so far they're doing great and beating the competition in many areas 😅

    • @skirata3144
      @skirata3144 10 months ago +3

      Well, technically Apple is using chiplets with their Mx Ultra chips, which just stitch together two of the lower-class Max chips.

    • @aravindpallippara1577
      @aravindpallippara1577 10 months ago

      @@skirata3144 And sadly the Ultra (two connected Max chips) has lower gaming performance than the monolithic Max variant.
      It's amazing what RDNA 3 achieved as such, but I have faith AMD will figure it out going forward.
      Nvidia has always been at the forefront of technology; I don't doubt they will also switch to multi-chip, but they'll probably follow the Intel/Apple model of expensive interposers as opposed to AMD's interconnects.

  • @tringuyen7519
    @tringuyen7519 9 months ago

    Nope, Blackwell will be monolithic on TSMC’s 3nm node. Blackwell will hit TSMC’s reticle limit on 3nm.

  • @ATrollAssNigga
    @ATrollAssNigga 10 months ago

    AMD chiplet GPUs have been planned for a while, and while RDNA 3 was the first generation designed with multiple CU chiplets, issues encountered both early and late made it not viable, once again. The next gen is heavily based on multi-die designs, so it's very likely they'll overcome the issues soon enough.

  • @AuroraLex
    @AuroraLex 10 months ago

    Nvidia could probably stay monolithic for another couple of generations if they wanted to.
    High-NA is a resolution bump, so it can probably rekindle SRAM scaling to some extent like EUV did, but with GDDR7 coming, a large SRAM cache won't be as important anymore; and for dies larger than 400 mm², dual masks + stitching is an option if Nvidia is willing to pay the price.

  • @DDD-xx4mg
    @DDD-xx4mg 10 months ago

    Chiplets are no good for gaming, not yet anyway; maybe we'll start to see them with the 6000/7000 series.

  • @profounddamas
    @profounddamas 10 months ago

    "How long can Nvidia stay monolithic?" As if you know...

  • @przemekbundy
    @przemekbundy a month ago

    I don't know if I'm backward. Am I the only one who is backward? No normal person can understand this, especially this technology you're talking about. After watching, you can assume it's understandable, but who really understands it, and knows how to use this technology?
    Or this rat race: it's a race for the very principle of being the best. I guess it's all about money... I guess there are no higher goals... Does anyone know where this is all going? Someone started the machine, but it has no end; it's all a rush. I wonder when it will stop...

  • @lil----lil
    @lil----lil 10 months ago

    AMD *HAD* to try something different; it was do or die for them, and it paid off big time.
    Intel was in "no rush" to innovate and paid a HUGE price for it, so much so that the company is on shaky ground now.
    And Nvidia? Nvidia lucked out. They saw what chiplets did for AMD, decimating Intel's CPU performance. With a hyper-aware engineer CEO, they won't be making that mistake, and you can count on it.

  • @samlebon9884
    @samlebon9884 9 months ago

    A question for all those praising Nvidia:
    how far is AMD ahead of Nvidia in chiplet tech and heterogeneous computing?
    Here's a hint: when the El Capitan supercomputer comes online, you'll have your answer.

  • @Sheerwinter
    @Sheerwinter 10 months ago +1

    @_@ An Nvidia APU would be amazing, like a 7600X and a 3060 in a single chip, with DLSS 4.

  • @roilevi2
    @roilevi2 2 months ago

    Blackwell is not monolithic ...

  • @tofu_golem
    @tofu_golem 10 months ago +1

    Who cares? Graphics cards are too expensive, and it looks like that state of affairs is permanent. So I genuinely don't care if AMD beats Nvidia or not. I don't even game much anymore because the whole industry is so depressing.

  • @wakannnai1
    @wakannnai1 10 months ago

    Not so important for Nvidia. When you're selling GPUs for $30-40k a pop and still can't meet demand, chiplets are not important. Furthermore, these clients and their workloads work just fine with NVLink and multiple GPUs. There's literally no incentive for Nvidia to go to chiplets; they're selling these dies at such a premium that it's not worth the cost of moving to a chiplet architecture.

  • @HeroDai2448
    @HeroDai2448 9 months ago

    AMD will take market share