AMD has said it: to simplify developers' work. What they need is indie AI developers adopting their platform. If indie developers can use their gaming GPUs for research and development, then move to AMD's big-iron GPUs when scaling up and commercializing, that is a big win for AMD.
I really wish AMD would come up with a better naming scheme; what they have at the moment doesn't tell me anything. I have no idea what platforms these things are targeting or how they relate to each other.
It would be hard to believe that AMD would throw some 600mm² of dies at a product where it used to use 200mm². I'm more inclined to believe that the 325 should be a total size; nowadays they should be able to use some bridge for the connection (e.g. MI200) rather than a full base die... unless that base die is full of cache. (Or is the IOD itself.)
Hello, I have a question, because I'm not really good at English... Does that mean the new Ryzen AI Max+ iGPU is cut down to a 128-bit bus and 20 cores? Is that right?
If we say the early leaks of Nova Lake are true, and Intel is going with 16P + 32E for their flagship, and Coyote/Panther Cove P-cores are a big step up in performance, and Arctic Wolf E-cores also hold their own, then I'm not sure a 24c/48t part will compete too favourably against a 48-core Nova Lake (possibly with an additional 4 low-power cores on the I/O die, though I wouldn't expect much from those). Obviously Intel has done nothing in the last 10+ years to make me believe they will execute this well, but it could be a very interesting showdown!
So far AMD's thread count has either almost matched or surpassed Intel's physical core count. Unless Intel does something unexpected, I'd expect them to tie, with Intel maybe 5% ahead in multicore while being demolished in gaming. Unless AMD puts 24c/32c chiplets on Ryzen.
@@NadeemAhmed-nv2br No they haven't? The 285K still beats the 9950X overall in productivity despite having 8 fewer threads. Doubling both P- and E-cores would demolish a 24-core 10950X even if there were zero improvements over Arrow Lake, which there definitely will be, seeing as Nova Lake is the first stage of rentable units. Ultimately I don't care about productivity, but let's not lie to ourselves just because AMD is better for what you and most of us do (gaming).
RDNA 5 is a possibility for Medusa Point? That would be great. Honestly I was expecting AMD to stick with RDNA 3.5 for a long time like they did with Vega. I'd even consider using RDNA 4 to be a small win.
They're unifying a lot of things to save on costs. It makes more sense not to spend money on RDNA when the boatload-of-money-making AI team, which has a lot more resources at its disposal, can have its designs reused and altered a bit. You save money and get a lot more performance increases, because they can afford a much larger R&D budget.
GUARANTEED Medusa Ridge has a fat NPU... They will be shrinking the IOD to put it there... But I would like to see them go to 8 CUs for Ridge chips... They said Zen 6 would have accelerators everywhere (all SKUs)... I do enjoy how no one has talked about how Win11 24H2 added 20-30% to Zen 4/5... I talked about that on every page... Because Intel is so much of the market, MS has to pivot with Intel HW changes FIRST... All the way back to the Vista debacle and the FMAC challenge (destructive vs. non-destructive)... I would bet that AMD is still using XOP for INT... But that's off-topic... I can understand them going workstation with Halo, since they gave Sony RDNA4... An EliteBook or ProBook is in the cards for Feb (or OEM availability), but I want that (Ryzen AI Max 390) in a nice mini PC at 100W...
24-core desktop CPUs on AM5, then? That would be stretching memory bandwidth, I guess, but I think we will get it. I really hope we go to 3-channel memory on AM6 (1 DIMM per channel will be plenty for most uses, and allows higher frequencies).
This will be at least a year from now, though, and involve new high-speed methods of communicating between chiplets. If DDR5-6000 was enough for 16C Zen 4, I don't see why DDR5-10000 in 2026 can't be enough for 24C with a more advanced architecture. It should be... And remember, most of the lineup won't have 24C anyway. It's not insane to think AMD may say "hey, if you can afford a 24C CPU, get a decent RAM kit".
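For what it's worth, a quick back-of-the-envelope check of that claim (peak dual-channel figures only; the DDR5-10000 speed and the core counts are the comment's assumptions, not confirmed specs):

```python
# Peak dual-channel DDR5 bandwidth spread across cores.
def dual_channel_gbs(mt_s: int) -> float:
    return mt_s * 2 * 8 / 1000  # MT/s x 2 channels x 8-byte bus -> GB/s

zen4_16c = dual_channel_gbs(6000) / 16    # today: DDR5-6000 across 16 cores
zen6_24c = dual_channel_gbs(10000) / 24   # hypothetical: DDR5-10000 across 24 cores

print(f"{zen4_16c:.1f} GB/s per core today vs {zen6_24c:.1f} GB/s per core")
# -> 6.0 vs 6.7 GB/s per core: bandwidth per core would actually improve slightly.
```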
I think we need to see benchmarks with the 5090 to know that for sure. A lot of initial benchmarks of the 9800X3D were GPU limited which would suggest there’s quite a bit of headroom on the CPU already.
Hey, I really hope someone might be able to answer this: with Strix Halo coming up, do you think we will see laptop configurations with BOTH the AMD APU and an NVIDIA 50-series GPU, or will they only have the AMD APU?
*Finally, no more LOW-end GPUs, because AMD/Radeon will make APUs for that 15% of gamers!* Then 75% of gamers will buy a Radeon MID-range GPU, and just 10% will buy a HIGH-end Nvidia GPU! Oh yeah! *AMD & Radeon will be the real future for GAMING!*
I was really hoping for 16 Zen 6 cores per CCD. I guess 12 is a decent improvement over 8; I just think 16 would have put to bed any question of "do I want a single CCD for low-latency gaming, or do I want a dual CCD in case I want to do some multitasking workloads too?"
"I assume it's RDNA5, but I don't know that" RDNA5 confirmed!!! 1y later, turns out it's RDNA5.5. OMG MLiD got it *wrong*! We all know that's how it's going to go. :)
Not gonna' lie... I wanted 18-20 CU's. : [ Remember - Nvidia has APU's incoming - if AMD wants to compete, they have to up the CU-count. It's true that the APU's need more bandwidth though, so I hope the Infinity Cache speculation turns out correct - even better if it's V-cache!
@@NadeemAhmed-nv2br Hmm, true! I guess it depends on what Nvidia has planned for compatibility, or perhaps they're going to do a *massive* amount of work recompiling a crap-ton of games, which seems very implausible upon further consideration...
It might still be 8. The current plan for PS6 is Zen 4/Zen 5 (or thereabouts) and UDNA (successor to RDNA4). Zen 6 on console doesn't seem to be in the discussion yet.
@TruzzleBruh I just don't think they would want an already-three-year-old architecture on a product with a 9-year life span, considering the PS6 is due in '27 or '28.
When you have chiplet CCDs, there's little to gain by mixing 6 standard cores with 6 dense cores that have half the L2. To save die area you would need the lower-frequency cores on one side, so heat density on the heavily used, fastest cores would be worse than on an all-standard-core design. A unified L3 cache means you are mainly saving a small percentage of area from halving L2 and denser logic in just half the cores. So a small ~75 mm² CCD won't cost much less even if a 10% area reduction on the dense cores is achieved, since so much of the die will still be L3 cache. Meanwhile, having two pools of cores complicates binning of chips with defects. The area savings from denser cores are greater on a monolithic design, because discrete blocks of the chip are merged into larger layout regions, so the space saved by the shorter cores can be used for other parts of the APU; but those logic blocks simply aren't present on CCD dies, as they reside on the IOD, if included at all.
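To put illustrative numbers on that argument (every figure below is an assumption for the sake of the sketch, not a known Zen 6 spec):

```python
# Hypothetical mixed CCD: 12 cores, half of them "dense" variants.
ccd_area_mm2   = 75.0   # assumed total CCD size, per the comment above
core_fraction  = 0.5    # assume cores + L2 are ~half the die; the rest is mostly L3
dense_fraction = 0.5    # 6 of the 12 cores are the dense variant
area_saving    = 0.10   # assumed ~10% area saving per dense core

saved_mm2 = ccd_area_mm2 * core_fraction * dense_fraction * area_saving
print(f"~{saved_mm2:.1f} mm^2 saved (~{saved_mm2 / ccd_area_mm2:.1%} of the die)")
# -> ~1.9 mm^2, about 2.5% of the die: likely not worth the binning headache.
```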
Sounds weird to make 3 IODs for one generation, when the major reason AMD cited for chiplets is that making an IOD is complicated and expensive, and chiplets saved them from having to make a new one per generation... now they make 3?
15:30 - RDNA5? Ehmm... Weren't you saying that after RDNA4, AMD will switch to "UDNA"? Just asking... Also, given the 12c CCD, the 9800X3D, and the other leaks about how AMD will be stacking things on top of each other, I can imagine, at the bottom, a big L3 cache for everything, and on top of it the IO die, memory controllers, iGPU, and the needed CCDs. So all the data is pushed through one big L3 cache for "everything", the CCD has no L3, only L2 and L1 for the cores, and that's how you can already put 12 cores in the same space.
1. Why are chiplets in future mobile APUs suddenly not a big battery (standby) problem? 2. Wasn't there a story a while back saying that AMD is sticking with RDNA 3.5 until 2026 or 2027?
There will be "allegedly" some small cores straight on the IOD, so the CPU would offload everything from the high-performance cores and shut the entire thing down.
[SPON: UGREEN Magnetic Wireless 10,000mAH 36% OFF till 12/2 (Black): amzn.to/3UV9wGW ]
[SPON: UGREEN Magnetic Wireless 10,000mAH 36% OFF till 12/2 (Purple): bit.ly/4fHMr35 ]
[SPON: UGREEN Uno Charger 100W(20% OFF): amzn.to/3YUWASx ]
[SPON: UGREEN Uno Lineup(Amazon): amzn.to/3CzEtKD ]
[SPON: UGREEN Uno Lineup(Official Store): bit.ly/3CHycwd ]
"if" it is possible to do so and doesn't cause more problems than it creates, those "rather large" interposers could include L3 or L4 cache (although L4 is rather unlikely as it would require changes elsewhere, so not a bolt on, and a waste if not used as it will use some die space). I have speculated about this ever since your first Zen 6 leak that the interposers could actually be "live", if so, this would essentially make every singe Zen 6 product a 3D stacked chip with 3D V-Cache built into the "live" interposer that also gives a very real latency reduction for that additional performance boost above and beyond that 3D V-Cache.!
.
How exactly that would work on the EPYC / Threadripper products that use the 32-core CCDs, I do not know (there will obviously be 12-core CCD models for higher clocks and larger L3 cache).
.
When looking at this from the historical perspective of AMD and TSMC, it seems like a real possibility that this is a "step" product toward a full 3D chip, and that future 3D chip would be called Zen 7 if they stick with the Zen name (I would; it is an excellent brand name). But there are many financial concerns that can make or break something that sounds like it has massive potential, such as Intel Adamantine. Something like that could really deliver if it was 128MB and the memory controller/cache controller made it make sense from a power and performance-gain perspective, and I suspect this is why Adamantine never came to fruition: the costs outweighed the benefits.
High Yield has a new video on the 9800X3D which I am about to watch; nerds will want to as well. ua-cam.com/video/OlRLuajAgIc/v-deo.html
Zen 6 and Zen 7 are where AMD proves it can use all these Lego bricks to build whatever the customer wants. Hopefully it all pans out.
I still don't know how they want to solve the latency between chiplets, though. This has always been a critical challenge. I'll only believe it when I see it.
3D V-Cache nicely solves that problem.
Also, there's a new patent which allows for semi-vertical core/IOD stacking, increasing density without adding too much latency and without 3D V-Cache, while still allowing cooling to reach the lower stacked dies.
Zen 6 X3D is going to be massive, because it's the first viable 12-core X3D; plus the new IO die will allow for a faster Infinity Fabric and more bandwidth.
@@aladdin8623 "I still don't know how they want to solve the latency between chiplets though. This has always been a critical challenge. I only believe it when I see it"
Zen 2 - 5 use a multiplexer on the IOD to connect the CCDs and the IOD. This is terribly slow. The data path ALSO has to go from inside a die, out across the CPU package substrate to reach another die. This requires a parallel-to-serial conversion, then a serial transfer, and then, when the data arrives, if the destination is a CCD, it has to be converted back to parallel. So there is a LOT of latency in CCD-to-CCD transfers on a desktop part with 2 CCDs.
A high-speed interposer, which I believe is what's used with RDNA 3, doesn't need those data conversions, so the data stays parallel. That gets rid of part of the latency. Right now memory is limited not by the speed of the memory controller, but by the IOD-CCD transfer rate of about 3GHz for Zen 4 and 3.2GHz for Zen 5.
The type of interposer that's rumored for Zen 6 is classified as a direct connect between dies, so there is no need for a multiplexer on one of the dies.
So, no multiplexer really means there is no more Infinity Fabric, and the removal of part of the latency. No more data conversions means another chunk of latency removed. A high-speed interposer will PROBABLY allow for transfer speeds of at least 4GHz, which would mean DRAM running 8000 MT/s gives the full benefit of its speed (rough arithmetic in the sketch after this comment).
It was over a year ago that rumors hit the internet that AMD was moving to direct connects between dies for desktop Zen 6. I assumed that the ability to deal with much faster DRAM, along with a LOT of latency removed from the CPU (and the worst type, the kind that slows down memory access and getting data to the cores), would by itself be worth about a 10% IPC improvement. It doesn't surprise me one bit that Tom is showing that figure.
And it WILL be across applications and games; in fact it should affect games more than many apps, because many apps can already fit their algorithms into L1 - L2, so for them it's simply a matter of data processing. Games are both algorithm AND data intensive, which means a LOT of memory access, and this is why there isn't a big uplift moving from Zen 4 to 5: the Infinity Fabric becomes more of a bottleneck the faster the cores get. To reframe that last sentence, Zen 5 cores are held back by the Infinity Fabric in memory-intensive workloads, and games are one of them.
Zen 6 will use improved Zen 5 cores, so comparing Zen 5 to Zen 6 will show what a much better MCM hardware architecture looks like.
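A rough sanity check of that fabric-clock arithmetic (the link clocks are the comment's own figures, and the 32 B/cycle read width is an assumption, not a confirmed spec):

```python
# Peak DRAM bandwidth vs peak CCD<->IOD link bandwidth, in GB/s.
def dram_gbs(mt_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    return mt_s * channels * bus_bytes / 1000

def link_gbs(clock_ghz: float, bytes_per_cycle: int = 32) -> float:
    return clock_ghz * bytes_per_cycle

print(link_gbs(3.2), "vs", dram_gbs(8000))  # 102.4 vs 128.0: the link is the bottleneck
print(link_gbs(4.0), "vs", dram_gbs(8000))  # 128.0 vs 128.0: DDR5-8000 fully fed
```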
A 12-core Zen 6 X3D single-CCD gaming monster next gen will be insane.
Especially if it's without the current enormous Infinity Fabric limitations. Single-CCD Ryzen CPUs can't even fully use PCIe Gen 5, because the write bandwidth caps you at Gen 4 x16 speeds, lmao. And even that is a completely unrealistic overestimate, because the CPU would be left with zero write bandwidth for anything else, so it's worse in reality...
Lowkey crazy it has taken this long to replace it lmao
Too bad it's only coming in mid to late 2026.
This
@@Frozoken How can I learn more about this!? Like, what do you even search?
@@Frozoken Yeah, the IF link caps at 64 GB/s read / 32 GB/s write for a single CCD. They have to increase that, surely.
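The numbers in this thread roughly check out; a quick comparison of peak theoretical figures (the IF caps are the ones cited above; the PCIe math is standard 128b/130b encoding):

```python
# Peak theoretical PCIe bandwidth in GB/s (128b/130b line encoding).
def pcie_gbs(gt_s: float, lanes: int = 16) -> float:
    return gt_s * lanes * (128 / 130) / 8

if_write_gbs = 32.0  # single-CCD IF write cap cited in this thread

print(f"PCIe 4.0 x16: {pcie_gbs(16):.1f} GB/s")  # ~31.5 GB/s
print(f"PCIe 5.0 x16: {pcie_gbs(32):.1f} GB/s")  # ~63.0 GB/s
# A single CCD's ~32 GB/s write path can saturate Gen 4 x16 but only about
# half of a Gen 5 x16 device, which is the limitation described above.
```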
Mobile using standard CCDs is exciting, because it could mean we get mobile APUs with V-Cache, which would be amazing.
Yeah, reducing the RAM bandwidth problem for APUs seems like it could give crazy results.
3:56 My laptop uses a 15W eighth-gen i5. Was also looking forward to a 30W-or-less monster.
A 12-core chiplet is exciting. 8 cores is still plenty for most tasks, but an extra 50% headroom would be welcome, just in case.
Thanks, Tom! Have a wonderful Thanksgiving. 🦃
I'm currently using a Zen 3 R5 laptop and it's pretty good, man. I can boost the TDP up to 35-40W if I want to, but most of the gains are from 15 to 25W, without harming thermals a lot. I can play 2010-ish titles at medium/high settings at 50-60fps: Max Payne 3 ran at high at 60fps, GTA 5 at 50fps, and Mafia 2 also ran well above 50 at high settings. AMD always had good iGPUs; the memory is what bottlenecked these chips. I think the AI HX 370 iGPU is already close to the laptop 3050 6GB in performance. Just a matter of time before they hopefully pump out some budget gaming APUs.
Here be me with my Coffee Lake i7-8750H, using a Dyson sphere for power. Pretty good though, since it runs a -0.125V offset. Patiently waiting for some of this new stuff.
@@tuckerhiggins4336 The good part is that the slightly older stuff like Core Ultra and Zen5 laptop will become much cheaper when this comes out, which could be a big boost for you as well. :3
Intel/Nvidia is always ahead in single-core performance and gaming... I had a Ryzen 3500U laptop... bought it for the integrated GPU and for office work. Later I bought an i7-12650H with a 3070 Ti laptop; it was miles ahead in performance. Granted, though, the AMD lappy was less than a third of the cost, and I somehow played Far Cry 4 on it, plus loads of Dota 2.
Happy Thanksgiving, Tom.
I think the AI hype in the laptops is kind of laughable. Unless the operating system does something useful with it, I don’t care.
Maybe if AMD implements upscaling using it, OK. But my gut instinct is there’s a better use of those transistors in a laptop.
That's actually completely non-viable, as the latency of rendering on the GPU, going back to the CPU to upscale, then going back to the GPU yet again for video output would not be very fun to play with.
Photo editing... maybe? (via Adobe Photoshop, which does use AI)
@@richardconway6425 Don't think generative fill is local; it requires internet and is generated on Adobe's servers.
Anyone who actually needs or wants AI already has a competent desktop; there is zero purpose in mobile hardware having dedicated AI silicon integrated.
@@bits360wastaken It's fine; that's how compute-only dedicated GPUs worked in laptops. They'd just have to get the software stack right.
The Strix Halo LP sounds exciting: 8 full Zen 5 cores and 4 more CUs than Strix Point should allow it to be more cost-effective than the full Strix Halo design!
Pretty big TDP, but for non-gaming it's configurable. So hopefully good battery life when doing lighter work.
@@deansmits006 TDP != battery life
Has Infinity Cache and a lot more bandwidth, so it should perform better too.
Looks sick! But I remember from last generation with Dragon Range that battery life was significantly worse on the chiplet-based units compared to the monolithic dies from either Intel or AMD. I wonder if AMD will/can manage to still improve battery life while also pursuing this chiplet approach. No doubt it will have awesome performance, but if it can't make it through at least a whole workday under standard work scenarios, I think that will limit its adoption in creative and professional contexts. It really doesn't need to compete with Apple's insane battery life (it would be nice if it did), but it does need to be good enough that I don't need to micromanage the laptop's battery when I'm out and about. I guess we'll see once Strix Halo arrives...
AMD chiplets use way too much power at idle
Dragon Range is just a desktop chip squeezed into laptops. It uses the desktop IOD, which has zero consideration for low idle power consumption. However, Medusa should be different, as it will be specifically designed for mobile. Optimizing a chiplet-based design for idle power consumption is not that hard (just look at Lunar Lake); it simply wasn't a consideration that time. Also, AMD would use a silicon interposer, which would help reduce the power draw of the interconnect.
AMD's doing these chiplets differently than DGR; these are more like Intel's. Look at how close these dies are to each other. They're using a TSMC technology (I don't remember the exact name, but it's Foveros-adjacent iirc) to package the dies together with minimal power overhead.
@@MaxIronsThird The chiplets themselves, no; at least not the CCDs. What's really consuming energy is the IF, which is coupled to the RAM frequency up to Zen 3. Especially if your RAM frequency goes beyond the official specs, so beyond 3200MT/s on Zen 3.
The chiplets are quite far from each other. Big distance + high frequency = high power consumption.
Not sure how it is for Zen 4 and Zen 5.
Not sure if this could have been done, but I assume idle power consumption could have been decreased significantly if the IF frequency could have been modulated, just like core frequencies.
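That intuition follows from the standard dynamic-power relation P ≈ C·V²·f; a toy illustration (the capacitance, voltage, and clock values below are made up purely to show the scaling):

```python
# Dynamic power of a link scales as P ~ C * V^2 * f.
def dynamic_power(c_farads: float, volts: float, freq_hz: float) -> float:
    return c_farads * volts**2 * freq_hz

c, v = 1e-9, 1.1                       # illustrative capacitance and voltage
p_load = dynamic_power(c, v, 2.0e9)    # fabric at ~2 GHz under load
p_idle = dynamic_power(c, v, 0.5e9)    # hypothetical 500 MHz idle state

print(f"idle link power: {p_idle / p_load:.0%} of load power")
# -> 25%: clocking the fabric down at idle is exactly the kind of win described.
```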
@@johnscaramis2515 IF still works best when tied to RAM speed, but no, AMD's current chip designs do waste a ton of energy, since they're still using the organic substrate for communication between CCD and IOD, unchanged since Zen 1.
Happy Thanksgiving. I hope everything is going well with your new home.
I'm looking forward to CES and the announcement of Ryzen AI Max laptops.
Happy Thanksgiving to you, Dan and team!🦃
The fact that the slide shows Zen 5 for 2023 and Zen 6 for 2024 is hilarious; AMD delayed these dies by a lot.
The era of Elon timeframes and delays.
I think the time frame on the slide means design to ship. As you can see, the Zen 3 block started before 2020 and it launched in Q3 2020, which is where the block stops. But that still means Zen 5 was half a year behind and Zen 6 might be a year behind. Sad that Intel cannot capitalize on that, since they can't make shit that works.
@@xpk0228 I'm sure most of the delay was selling off Zen 4 with no Intel competition. And there's still tons of Zen 4.
Meh, didn't bother upgrading after COVID-19 happened.
@@TheGuruStud Not everyone updates to the latest and greatest. Depending on how old your previous config was, it may be a good decision to buy one of the previous generations if the price is good.
I'm super excited for Zen 6. Give me that 24c X3D, baby. Thank you as always for the great videos.
If cores per CCD are going up to 12, maybe we will finally get an 8-core Ryzen 5. Rumors originally said we were getting that with Zen 2, but three generations later we still haven't.
_"maybe we will finally get 8 core ryzen 5..."_ For what possible purpose? It's been categorically proven that the differences are negligible between 6/8 cores since there are extremely few commonly-used apps/games that benefit, or even scale in any useful way, with more cores.
Sure, there are myriad, disparate, edge-cases where core-count is relevant, but they are basically statistical noise...
Which also means that we may see CHEAPER V-Cache variants sooner! :D If the R5s have 8 cores, then it makes sense to start the V-caching at the R5 stage. An 11600X3D could be launched at the same time as the 11800X3D! ^^
(note how I think AMD will do what they usually do, and have some laptop-generation eat up the 10000-series naming, so Zen6 will be 11000-series on desktop...and yes, I dislike it as well)
@@awebuser5914 ""maybe we will finally get 8 core ryzen 5..." For what possible purpose?"
For lower-midrange PCs, like any other Ryzen 5. To compete with recent and future i5/Core 5 CPUs, which all have 4-8 E-cores. Is this not blindingly obvious?
Some people similarly asked why Ryzen 5s needed 6 cores and 12 threads in 2017 to compete with 4-thread Core i5s which were faster in games at the time. Just 3 years later, the Ryzen 5 1600X was usually faster in new games than the i5-7600K, especially when comparing minimum fps. Progress happens. Why are you questioning it as if it's a bad thing?
Yes, even in 2024, plenty of software still doesn't use more than 4 cores, but some of it does. Over time, more software will be optimised for higher core counts.
Even if you still think most people wouldn't ever benefit from 8-cores (which I think is extremely short-sighted), why would AMD let Intel keep the lead in multithreaded performance in midrange CPUs and leave Ryzen 5s as less appealing options for people who care about future-proofing or workstation performance, if they can add more cores without significantly increasing manufacturing cost? People who want a midrange CPU with a lot of multithreaded performance might not be the majority of the market, but they still exist, so why not cater to them?
AMD isn't even _making_ 4-core desktop CPUs any more, so they have no reason to _not_ shift their desktop Ryzen 3s up to 6 cores and move the rest of their product stack up accordingly.
@awebuser5914 I mean, it's like 5% currently in CPU-bound games, at least in the lows, and that's today, not the future when it releases.
I'd instead ask why tf every single tier would move up in core counts except the Ryzen 5s.
There are no E-cores to take on background tasks so the 6 cores can fully focus on the game, either; it's 6 cores including being taxed by the OS. That's not very much.
@@Frozoken _"6 cores including being taxed by the OS"_ There this magical thing called multitasking where small time-slices of CPU are used for background tasks and in any case, the OS *never* "taxes" any CPU to any meaningful amount, it's quite literally background noise, a few perfect here & there...
Hope they don't mess up Halo; this could be their redeeming generation in the laptop market.
There has been so much hype. But I don't think the price and performance will be anything to write home about compared to what already exists.
@@Ahamshep I mean, there's no way they'll make a CPU that costs more than a 4070 laptop GPU + CPU combined. I think they'll initially aim for the M4 Pro/Max market, so they'll have a higher profit margin, but I suspect they'll later lower margins for gaming ultrabooks.
Thanks for sharing. Thanks for what you do. Happy holidays!
AMD could do a Zen 5 refresh right now with a faster IO die and Infinity Fabric and it would be a huge market success. Zen 5 is much, much more capable.
I already have a 7840U based "coffee table laptop", and my previous laptop for that role lasted me 14 years, so I'll be good for a really long while. What I find interesting is that AMD is trying to cover all possible levels of "below proper gaming laptop" graphics performance. I mean, who is this for? I bought the 7840U for longevity. 8 cores, 12CUs on LP-DDR5 6400 is "trash tier" graphics performance for anything that actually requires graphics today (roughly on par with an RX 470). But in 10 years that's going to be about the performance of "default integrated graphics" which is exactly what I want, as it was the absolutely abysmal Iron Lake graphics in my old laptop that completely gave up (Last driver update in 2012. Thanks Intel!).... But I think my line of reasoning is REALLY not big market. In fact, my line of reasoning is typically a market segment they want to avoid. "Never sell people good longevity" and all that.
Battlemage desktop GPUs are showing up in benchmarks running at 2.8GHz vs 2.1 on Lunar Lake. XeSS is using the GPU matrix units.
Very Interesting. I will wait for actual hardware before getting too excited.
8 CPU cores are still the sweet spot for gaming.
16 CPU cores are for workstation-class systems. And maybe to fight the high-end MacBooks. A Strix Halo with 40 CUs vs a MacBook Pro Max with 40 GPU cores would be an interesting comparison, if anybody would actually do it.
The numbers mean nothing in themselves; you would need to see the area in mm², the node, and then the power consumption.
But you're right, as far as APUs go that would be a very nice match-up.
Why are people even talking about MacBooks? There's currently no reason to buy products that have a higher price, lower performance, and worse features. Also, what Apple did to their staff in China during COVID was purely shameful.
Why would they target gaming, when the money is in semi-professional and AI workloads? ^^
Why is this even a part of the discussion? The only thing macbooks do well is power efficiency on mobile and draining your wallet.
Get that idea out of your head. That's an un-helpful over-simplification.
There's nothing INHERENTLY useful about having 8-cores vs. 6-cores, or 6-cores vs. 4-cores, at least not when it comes to running games. A smaller number of higher performing cores is usually preferable to a CPU with a larger number of lower performing cores. For example, the six-core Ryzen 5600 will SIGNIFICANTLY out-perform the 8-core 3700X in games, on average, despite being only one generation newer.
Games are becoming better optimized for 8-core CPUs now, but I wouldn't recommend a Ryzen 7700 over the 7600 just yet. With recent discounts, it's becoming a more attractive CPU, but it is definitely not the clear choice over the six-core Ryzen 7600. The same is true when it comes to Intel six-core CPUs vs. 8-core CPUs from the same generation (talking about P-cores only, not E-cores).
You pay a lot more for those two extra cores of the 7700 vs. the six-core 7600, and for most people looking to build a good bang-per-buck system, I think it's probably better to save your money and get a 7600. The 7700 might age a little better, but, by the time the performance difference between the two is likely to matter, it would likely be best to upgrade to Zen 6, or to a heavily discounted 9800X3D or a used 7800X3D instead anyway. Save your money and get a 7600, and then IF you end up needing a CPU upgrade in a few years, you can upgrade to something much better than a 7700.
On the other hand, if you just want to upgrade an AM4 system, an 8-core CPU is more likely to be a good choice, in part because the 5700X3D is a great gaming CPU, and the 6-core variant isn't available for most people, if at all, and might be too close in price even if it is, but also because the extra performance of more than 6-cores is more likely to matter sooner, and more often, with that older and weaker CPU generation, and also because unlike with the 7600, you won't be able to upgrade to a newer generation beyond Zen 3 without upgrading the motherboard and RAM as well.
A game never actually requires a certain number of cores, it just requires enough performance to be able to run at a good frame rate. Any game could run well even on ONE core if that core and the rest of the resources of the CPU were fast enough.
I'm really interested in and excited for Zen 6 Medusa, more than Strix Halo, since we saw, and you told us, how many problems Zen 5 had with its design and the TSMC 3nm delays. I hope AMD will be able to bring the performance they now have on desktop CPUs down to an SoC that can fit into a mini PC or a laptop with a 30-50W max TDP, really competing with Apple's Max chips in multicore performance efficiency (imo x86 will never catch up to Arm, especially Apple, in single-core performance and efficiency). I think their weakest points at the moment are single-core "speed", the media engines for video-making workloads, and RT performance compared to Nvidia, Intel, and even Apple. But if what we're seeing with the PS5 Pro is an indication of what we can expect from RDNA4, and we'll probably have even better performance and efficiency with RDNA5, or UDNA 1, or whatever they name the next graphics architecture, well, I'm more than excited. 12 cores per CCD means 24 cores and 48 threads for a hypothetical Medusa Halo APU, with 40 RDNA5 CUs? That's basically a Threadripper with possibly an RTX 4080 or even 4090 mobile GPU, inside a mobile workstation. This thing could turn out insane considering the huge amount of software that benefits from x86 in the professional world; Strix Halo might be just the beginning of a new type of APU for Windows mobile workstations.
Finally a good interconnect, big if true
Top content once again bro ❤
128MB Infinity Cache + 8GB of on-package GDDR6. Make iGPUs great again.
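The appeal here is bandwidth; a quick comparison sketch (the pin speeds and the 128-bit width are illustrative assumptions, not a leak):

```python
# Peak memory bandwidth in GB/s for a given per-pin speed and bus width.
def mem_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(mem_gbs(16, 128))  # on-package GDDR6 at 16 Gbps, 128-bit -> 256.0 GB/s
print(mem_gbs(8, 128))   # LPDDR5X-8000 on the same width       -> 128.0 GB/s
# Roughly double the bandwidth of typical LPDDR5X, before even counting
# what a 128MB Infinity Cache would absorb.
```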
For the low cost of 3000 usd.
@@Deeptesh97 Still cheaper than a video card. Tons of people don't upgrade till a new platform. But it's still a tiny market.
lol, go and ask Panasonic some de_dust-proofin case😊
Finally 8+ cores in a CCD
Happy holidays 🍗🥃
Interesting info, Tom. Makes me wonder if the 10800X3D will be a 12-core with 3D V-Cache then, hence the 12-core CCDs probably being a thing.
It would be a nice upgrade; Zen 6 will be a very interesting change.
Also, have a nice Thanksgiving, even though it's just a normal day here in the Netherlands. Hope all goes well.
I mean yeah why wouldn't it?
Can't wait for the Medusa gaming X3D CPU!
Can't wait for an X6D stacked cache! (a dual X3D)
Would the iGPU be able to use the CCD's 3D V-Cache? Even if it only uses half of it, 32MB is still quite a lot.
Found a 7900 XTX for $579 on Amazon, new. Deal or no deal?
That's a smoking deal if not a scam.
Sounds too good to be true, very scammy
We are sorry for you, kid. Maybe it's a pre-used XTX? If not, 64.32 TFLOPS does not appear at $580 for a new one.
Yep, I hope Zen 6 performs well. My Ryzen 5 7600 will be in need of a replacement by the time it comes out anyway. My next upgrade will probably be a Zen 6 X9950X3D.
Perfect timing, just in time for dinner!!
And dude started talking about huge Nvidia D xD
Strix Halo (AKA Ryzen AI Max) is all I want for Christmas. I don't care about anything else AMD, Nvidia, or Intel are doing right now; a 40 CU APU with a fat 32GB high-speed RAM pool will be revolutionary for mobile gaming and AI.
Yeah, expect it at least a month after CES 2025.
Shouldn't it be UDNA 1 instead of RDNA 5?
UDNA comes after RDNA5 IIRC
@@TheKazragore No, it seems it's UDNA next, not RDNA5.
Yeah, I think he, like myself, is so used to calling the successor to RDNA4 "RDNA5" that we just call it that without thinking.
Happy Thanksgiving! 🦃
AMD needs to add Infinity Cache to APUs, even if it's half or a quarter of what a discrete version of that GPU would get. Their CPUs and GPUs have plenty of performance but are bandwidth-starved.
Happy Thanksgiving
If the next X3D chip has a 12-core CCD and, hopefully, a better memory controller, it's going to be a huge upgrade!
Here's hoping Strix Halo low power is just the next gen Ryzen HX470 fitting into AM5 with 20 CUs over 16
It would be nice to see full Halo on SFF desktops as well. Though I must admit - I don't know if RAM slots on current motherboards can function as one slot, one channel (we'd need four) and it's all about the I/O die ... Or is that impossible and we'd need a new chipset and a new class of motherboards ...
I hope there will be an 8-core variant with the full iGPU as well.
That would be a good product.
Same here.
happy thanksgiving
Does Medusa Point have Infinity Cache or SLC?
I just need an APU with V-cache but that's not feasible (yet)
With Medusa, very feasible. As it uses a standard CCD, unless AMD abandons V-Cache entirely, it will be possible to make Medusa Point and Halo with V-Cache, similar to how it would be possible for AMD to do a V-Cache Strix Halo.
I hope so. APUs need more cache, and we've been out of luck so far. 16 MB is just not good enough. Maybe bump that to 32 MB, and add 128 or 256 MB L4, shared with the iGPU?
@@panthera8286 I consider it very unlikely for AMD to use an L4 cache design similar to what Intel Adamantine was shaping up to be. The whole advantage of X3D comes from having vastly increased capacity of the shared L3 cache at very low latency.
Oh, and X3D APUs are already possible with Strix Halo, it's just a question if AMD sees a market for such a product - it might go down the same route X3D-MCDs went for RDNA3.
I do not quite see a need for this. The GPU will be a bottleneck in any games, so no need to boost the CPU’s gaming performance with additional cache. Is there a certain use case? I’m interested.
@roB3rnd As Strix Halo appears to be aimed towards high-end mobile workstations more so than towards gaming (at least rumors suggest so), there are certainly use cases for X3D. The whole fanfare is about gaming, sure, but let's not forget that 3D V-Cache started out as a datacenter product (Milan-X) for scientific compute, CFD and the like. The question remains: does AMD expect the market for X3D APUs to be sufficiently large to be worth addressing?
Bummer. Ever since the year-old leak of 32-core CCDs, I was hoping that at least on the client side we'd get a 16-core CCD.
A perfect 16-core, 32-thread, single-CCD X3D chip.
Not just a 12-core CCD. Gotta wait 3.5 years in total then for Zen 7 to get that, I guess.
I think 12 cores will be plenty for gaming for a while, and honestly 24C is already getting into diminishing MT returns even in some powerful apps.
The big question is how much better the latency is on Zen 6.
they're using a silicon interposer like intel, so probably better
@@MaxIronsThird And less power consumption of the fabric, as the distances are heavily decreased.
Well, at least MUCH better than Intel ! 😜
@@gertjanvandermeij4265 Lunar Lake doesn't actually have that much latency. Not sure why Arrow Lake put the memory controller on a different tile; thank god next gen (Panther Lake) it will be moved back to the compute tile.
@MaxIronsThird Lol, Lunar Lake has way more latency than Arrow Lake. It's just that we were used to Meteor Lake, which had an even worse amount of latency. They both use LPDDR, which has terrible latency; most of the latency is in the memory itself, not how it's packaged.
It's just that Arrow Lake is bugged, and going from monolithic with high ring clocks to bugged chiplets with low ring clocks is pretty bad for XMP latency, especially when you have weaker caches.
Regardless, Arrow Lake has 90-100ns of access latency while Lunar Lake has about 110-120ns, which only looks good compared to Meteor Lake's roughly 150ns.
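For reference, access-latency figures like those usually come from a pointer-chasing microbenchmark, where every load depends on the previous one so the hardware prefetchers can't hide the trip to DRAM. A minimal C sketch of the idea (the 256 MiB working set and iteration count are arbitrary illustrative choices, not what tools like AIDA64 or Intel MLC actually use):

```c
/* Minimal pointer-chase latency sketch (compile with -O2).
   Assumptions: 256 MiB working set (much larger than any L3) and a
   single-cycle random permutation so prefetchers can't help. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (256u * 1024u * 1024u / sizeof(size_t)) /* ~33.5M entries */
#define ITERS 10000000ull

int main(void) {
    size_t *chain = malloc(N * sizeof(size_t));
    if (!chain) return 1;

    for (size_t i = 0; i < N; i++) chain[i] = i;
    /* Sattolo's algorithm: shuffle into one big cycle so the chase
       visits every entry; two rand() calls cover the index range. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 16) ^ (size_t)rand()) % i;
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    /* Each load depends on the previous one, so elapsed / ITERS
       approximates one full memory access. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (unsigned long long i = 0; i < ITERS; i++) idx = chain[idx];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per access (checksum %zu)\n", ns / ITERS, idx);
    free(chain);
    return 0;
}
```

Run on an Arrow Lake vs. a Lunar Lake machine, the per-access figure lands in the same ballpark as the numbers quoted above, since a dependent chain through a large random buffer is essentially what "memory access latency" means.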
Oooh, a 32-core CCD with V-Cache, plus an IOD and iGPU? That would be amazing, haha. Then again, I'm in the niche there, lol.
Medusa Point is pretty much tailor-made for a next-gen console, with a customized IOD -- larger GPU and no NPU.
It would be nice if we could get quad-channel RAM with the Ryzen line.
100% agree. I really wish AMD and Intel would streamline their offerings to the following:
Entry-level gaming and budget office PCs - 2-channel memory and up to an 8-core CPU with an entry- to mid-level integrated GPU (to eliminate the need for a discrete GPU). No V-Cache, to cut costs and keep price points down.
Mainstream gaming and power-user office PCs - 4-channel memory and up to a 24-core CPU with no integrated graphics. V-Cache on all CPUs.
High-end workstation - 8-channel memory and a >24-core CPU. Minimal integrated graphics (for 2D display output only, to facilitate GPUs being used as dedicated accelerators when desired).
That will never happen. Quad-channel will remain Threadripper or Strix Halo only.
Any UDNA leaks? What is AMD's end game for incorporating Tensor cores in gaming GPUs?
AMD has said it: to simplify developers' work. What they need is indie AI developers adopting their platform. If indie developers can use their gaming GPU for research and development, then move to the big-iron GPUs when scaling up and commercializing, that's a big win for AMD.
I can't wait to learn what the GPU and CPU for the PlayStation 6 will be.
Will a 12-core 10800X3D be enough? How many cores will the PS6 have, and how many dedicated ASICs to offload stuff away from those cores?
Celestial and Druid desktop GPU still in the plans, according to latest leaks.
8-core Ryzen 5? Would be exciting.
I really wish AMD would come up with a better naming scheme; what they have at the moment doesn't tell me anything. I have no idea what platforms these things are targeting or how they relate to each other.
Starting that Turkey early.
It would be hard to believe that AMD would throw some 600mm² of dies at a segment where it used to use 200mm².
I'm more inclined to believe that the 325 should be a total size - nowadays they should be able to use some bridge for the connection (e.g. MI200) rather than a full base die... unless that base die is full of cache (or is the IOD itself).
Hello, I have a question, because I'm not really good at English...
Does that mean the new Ryzen AI Max+ iGPU is cut down to a 128-bit bus and 20 cores?
Is that right?
If we say the early leaks of Nova Lake are true and Intel is going with 16P + 32E for their flagship, and that Coyote/Panther Cove P-cores are a big step up in performance and that Arctic Wolf E-cores also hold their own, then I'm not sure a 24C/48T part will compete too favourably against a 48-core Nova Lake (possibly with an additional 4 low-power cores on the I/O die, but I wouldn't expect much from those).
Obviously Intel have done nothing in the last 10+ years to make me believe they will execute this well, but it could be a very interesting showdown!
So far AMD's thread count has either almost matched or surpassed Intel's physical core count. Unless Intel does something unexpected, I'd expect them to tie, with Intel maybe being 5% ahead in multicore while being demolished in gaming.
Unless AMD puts 24C/32C chiplets on Ryzen.
@@NadeemAhmed-nv2br No they haven't? The 285K still beats the 9950X overall in productivity despite having 8 fewer threads. Doubling both P- and E-cores would demolish a 24-core 10950X even if there were zero improvements over Arrow Lake, which there definitely will be, seeing as Nova Lake is the first stage of rentable units.
Ultimately I don't care about productivity, but let's not lie to ourselves, because AMD is better for what you and most of us do (gaming).
Nice leak.
For a 12-core, 16-CU APU to have 275mm² of die space is quite unlikely.
Plus an interposer.
It would be too expensive for a small APU.
Have you seen what Strix is selling for? Current-gen APUs have become a premium product for AMD; for the lower end they're still refreshing older stuff.
Next-gen consoles in 2 years will be fun. Xbox's 25th anniversary is in November 2026.
RDNA 5 is a possibility for Medusa Point? That would be great. Honestly, I was expecting AMD to stick with RDNA 3.5 for a long time, like they did with Vega. I'd even consider them using RDNA 4 a small win.
They're unifying a lot of things to save on costs.
It makes more sense not to waste money on RDNA when the boatloads-of-money-making AI team, which has a lot more resources at its disposal, can just share its designs, altered a bit.
You save money and get a lot more performance increases because they can afford a much larger R&D budget.
I'm eager for Medusa news in hopes it might be THE tech for the Steam Deck 2, with maybe RDNA5?
It probably will be. I hope for 4 3D cores and 4 C-cores, to provide both performance and efficiency for a wide variety of games.
Strix Halo Low Power, or whatever it's called, I hope comes to 2-in-1 tablets, as it would be perfect.
13600KF for $175 on Black Friday on Amazon.
Zen 6 on AM5?
Should be the case.
GUARANTEED Medusa Ridge has a fat NPU... They will be shrinking the IOD to put it there... But I would like to see them go to 8 CUs for Ridge chips... They said Zen 6 would have accelerators everywhere (all SKUs)... I do enjoy how no one has talked about how Win11 24H2 added 20-30% to Zen 4/5... I talked about that on every page... Because Intel is so much of the market, MS has to pivot with Intel HW changes FIRST...
All the way back to the Vista debacle and the FMAC challenge (destructive vs. non-destructive)... I would bet that AMD is still using XOP for INT... But that's off-topic...
I can understand them going workstation with Halo since they gave Sony RDNA4... An EliteBook or ProBook are in the cards for Feb (or OEM availability), but I want that Ryzen AI Max 390 in a nice mini PC at 100W...
24-core desktop CPUs on AM5, then? That would be stretching memory bandwidth, I guess, but I think we will get it.
I really hope we go to 3-channel memory on AM6. (1 DIMM per channel will be plenty for most uses, and allows higher frequencies.)
This will be at least a year from now though, and involve new high-speed methods of communicating between chiplets. Idk, if DDR5-6000 was enough for 16C Zen 4, I don't see why DDR5-10000 in 2026 can't be enough for 24C with a more advanced architecture. It should be...
AND - remember, most of the lineup doesn't have 24C anyway. It's not insane to think AMD may say "hey, if you can afford a 24C CPU, get a decent RAM kit".
@@MooresLawIsDead For sure. You are right, dual-channel DDR5-10000 probably will be enough. I think triple-channel will happen, but maybe in 2027 or 2028.
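For what it's worth, the bandwidth-per-core math in this thread does check out on peak numbers. A back-of-envelope sketch in C (peak theoretical figures only; the DDR5-10000 speed and 24-core config are this thread's speculation, not confirmed specs):

```c
/* Back-of-envelope peak DRAM bandwidth per core.
   "Dual channel" = two 64-bit channels moving 8 bytes per transfer
   (ignoring DDR5's 2x32-bit sub-channel split; peak numbers only). */
#include <stdio.h>

static double gb_per_s(double mt_per_s, int channels) {
    return mt_per_s * 1e6 * 8.0 * channels / 1e9;  /* MT/s -> GB/s */
}

int main(void) {
    double zen4 = gb_per_s(6000, 2);   /* DDR5-6000,  16-core Zen 4   */
    double zen6 = gb_per_s(10000, 2);  /* DDR5-10000, speculative 24C */
    printf("Zen 4: %5.0f GB/s / 16 cores = %.1f GB/s per core\n", zen4, zen4 / 16);
    printf("Zen 6: %5.0f GB/s / 24 cores = %.1f GB/s per core\n", zen6, zen6 / 24);
    return 0;
}
```

That works out to 96 GB/s, or 6.0 GB/s per core, today, versus 160 GB/s, or about 6.7 GB/s per core, on the speculated config - so per-core peak bandwidth would actually go up slightly even with 50% more cores.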
What does that mean in FPS terms?
Will 3nm Venice use 12- or 16-core CCDs?
We need a bit of a leap. 9800X3D is good but it doesn't really move the needle.
I think we need to see benchmarks with the 5090 to know that for sure. A lot of initial benchmarks of the 9800X3D were GPU limited which would suggest there’s quite a bit of headroom on the CPU already.
Hard to move the needle when the only competition is yourself
@tuckerhiggins4336 Nova Lake will force amd to move forward.
It certainly will be for me when I upgrade from my Zen+ 2700X.
Hey, I really hope someone might be able to answer this: with Strix Halo coming up, do you think we will see laptop configurations with BOTH the AMD APU and an Nvidia 50-series GPU, or will they only have the AMD APU?
No 32 cores? That's sad man.
Any idea if AMD is going to push something in the flavor of the 8XXX series with a better iGPU?
*Finally no more low-end GPUs, because AMD/Radeon will make APUs for that 15% of gamers!*
Then 75% of gamers will buy a Radeon mid-range GPU, and just 10% will buy a high-end Nvidia GPU! Oh yeah! *AMD & Radeon will be the real future for GAMING!*
I was really hoping for 16 Zen 6 cores per CCD. I guess 12 is a decent improvement over 8; I just think 16 would have put to bed any question of "do I want a single CCD for low-latency gaming, or do I want dual CCDs in case I want to do some multitasking workloads too".
Are 12 cores not enough for that? I get having issues with 8 but 12 on the same die is pretty damn good.
Venice looks like terminator brain. lol
I thought there was a different term for multiple small bridges, and that an "interposer" sits under a whole bunch of chiplets.
AMD needs to start delivering on their promises instead of overhyping their APUs again.
niceee
"I assume it's RDNA5, but I don't know that"
RDNA5 confirmed!!!
1y later, turns out it's RDNA5.5. OMG MLiD got it *wrong*!
We all know that's how it's going to go. :)
For the sake of god, I hope they give the thing at least 40 PCIe lanes.
Not gonna lie... I wanted 18-20 CUs. :[ Remember - Nvidia has APUs incoming; if AMD wants to compete, they have to up the CU count. It's true that the APUs need more bandwidth though, so I hope the Infinity Cache speculation turns out correct - even better if it's V-Cache!
Nvidia's APU will be running Windows on Arm, which is shit and has problems with every other app.
@@NadeemAhmed-nv2br Hmm, true! I guess it depends on what Nvidia has planned for compatibility, or perhaps they're going to do a *massive* amount of work recompiling a crap-ton of games - which seems very implausible, upon further consideration...
Halo is coming in a month, but we still have so little info?
This means the PS6 goes 12-core! OMG. Time flies.
It might still be 8. The current plan for the PS6 seems to be Zen 4/Zen 5 and UDNA (the successor to RDNA4). Zen 6 on console doesn't seem to be in discussion yet.
@TruzzleBruh I just don't think they would want an already-three-year-old architecture on a product with a 9-year lifespan, considering the PS6 is due in '27 or '28.
Perhaps 10800X3D (for lack of a better name) with 12 cores ?
So Zen6 will be:
x600 8 core
x700 12 core
x900 16 core
x950 24 core
Still waiting for Strix Point on AM5; it's taking them forever to launch it.
Go Duzaa !!
Low Power = 105 W. hahaha
New daytails
When you have chiplet CCDs, there's little to gain by mixing 6 standard cores with 6 dense cores that have half the L2. To save die area you would need the lower-frequency cores all on one side, so the heat density on the heavily used, fastest cores would be worse than on an all-standard-core design. A unified L3 cache means you are mainly saving a small percentage of area from halved L2 and denser logic in just half the cores. So a small 75mm² CCD won't cost much less even if a 10% area reduction from dense cores is achieved, as so much of the die will still be L3 cache. Meanwhile, having two pools of cores complicates the binning of chips with defects.
The area savings from denser cores are greater on monolithic designs, because discrete blocks of the chip are merged into larger areas during layout, so the space saved by the shorter cores can be used for other parts of the APU. But those logic blocks simply aren't present on CCD dies, as they reside on the IOD, if they're included at all.
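To put a rough number on "won't cost much less", here's a quick dies-per-wafer estimate in C using the common approximation (the $20,000 wafer price is a made-up placeholder, and yield, defects and scribe lines are ignored):

```c
/* Rough cost-per-die comparison: a 75 mm^2 CCD vs. a ~10% smaller one.
   Dies-per-wafer approximation: pi*r^2/A - pi*d/sqrt(2*A).
   Wafer price is an assumed placeholder; yield is ignored. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979

static double dies_per_wafer(double die_mm2, double wafer_d_mm) {
    double r = wafer_d_mm / 2.0;
    return PI * r * r / die_mm2 - PI * wafer_d_mm / sqrt(2.0 * die_mm2);
}

int main(void) {
    const double wafer_cost = 20000.0;            /* assumed $/300mm wafer */
    double base  = dies_per_wafer(75.0, 300.0);   /* all-standard-core CCD */
    double dense = dies_per_wafer(67.5, 300.0);   /* ~10% area reduction   */
    printf("75.0 mm^2: %4.0f dies/wafer -> $%.2f per die\n", base,  wafer_cost / base);
    printf("67.5 mm^2: %4.0f dies/wafer -> $%.2f per die\n", dense, wafer_cost / dense);
    return 0;
}
```

Under those assumptions it comes out to roughly $23 vs. $21 per die - a couple of dollars saved per CCD, before the binning complexity mentioned above eats into it.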
Isn't Strix Halo basically slapping a cut-down low-end RDNA4 die (Navi 44) onto the package and connecting it to a CPU?
Sounds weird to make 3 IODs for 1 gen, when the major reason AMD cited for chiplets is that making an IOD is complicated and expensive, and chiplets saved them from having to make a new one per gen... now they make 3?
This is what the next stab at the Steam Machine should be.
15:30 - RDNA5? Ehmm... Weren't you saying that after RDNA4, AMD will switch to "UDNA"? Just asking...
Also - given the 12c CCD, the 9800X3D, and the other leaks about how AMD will be stacking things on top of each other, I can imagine: at the bottom, a big L3 cache for everything, and on top of it the IO die, memory controllers, iGPU and the needed CCDs. All the data gets pushed through one big L3 cache for "everything", so the CCD would have no L3, only L2 and L1 cache for the cores - and that's how you can already fit 12 cores in the same space.
Originally: RDNA4 -> RDNA5 -> UDNA
But they changed it to just: RDNA4 -> UDNA
So he made a mistake.
Juicy stuff as always. :) Anything new about Intel, apart from more failures? Nova Lake?
Come on - after the year of Meteor Lake, then the 13th/14th-gen degradation, and then the saviour Arrow Lake flopping hard, you want even more???
Tom might be hesitant to commit to leaking Intel stuff because half of the time it just doesn't turn out well.
1. Why are chiplets in future mobile APUs suddenly not a big battery (standby) problem?
2. Wasn't there a story a while back saying that AMD is sticking with RDNA 3.5 until 2026 or '27?
There will be "allegedly" some small cores straight on the IOD, so the CPU would offload everything from the high-performance cores and shut the entire thing down.