"threadripper will be strictly workstation" i will still buy threadripper non-wx as an everything cpu. im fairly sure AMD will also still advertize their non-wx threadrippers for everything, including gaming. its a wonder if they even make workstation specific threadrippers this time.
I don't think WX Threadripper will even exist. The reason they were called WX was because two of the dies didn't have direct memory access. The I/O die in 3rd-gen TR will take care of that. Though it's worth pointing out that Level1Techs found that the performance drop in the WX chips was because of Windows, not due to 2 dies (16 cores) lacking direct memory access.
@@stayfrost04 Me neither. With the I/O die they can function the same no matter how many cores/chiplets they put in the package. Even a 64-core I don't see getting a WX branding or purpose, just a power requirement bump if the clocks go higher than the 2990WX.
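To make that "direct memory access" point concrete: below is a minimal, Linux-only sketch (standard sysfs paths, nothing AMD-specific) that lists which CPUs belong to which NUMA node and how much memory each node has locally attached. On a 2990WX-style layout two nodes show cores but essentially no local memory, whereas a design that hangs all memory off a central I/O die should look uniform.
```python
#!/usr/bin/env python3
# Minimal sketch: map NUMA nodes to their CPUs and locally attached memory.
# Reads standard Linux sysfs paths; node layout depends on the CPU and on
# BIOS NUMA settings, so the output here is only illustrative.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    mem_kb = 0
    for line in (node / "meminfo").read_text().splitlines():
        if "MemTotal" in line:
            # line looks like: "Node 0 MemTotal:  16316004 kB"
            mem_kb = int(line.split()[-2])
    print(f"{node.name}: cpus={cpus}, local_mem={mem_kb // 1024} MiB")
```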
Given that they cannot completely remove the latency between chiplets even with IF decoupled from memory speeds, I hope that you are wrong about the 8 cores being done via 2 chiplets with 4 cores each. 1 chiplet with 8 cores is vastly superior.
Super-Jim to the rescue ... Just when you almost feared that Jim went back on vacation (to make up for the interrupted one), when all the other tech news is uninspiring and lame and you're about to succumb to boredom - in short, when all hope in the world of tech fades: in jumps Jim at the last second and wrests you from the claws of lethargy. Thank you Jim for your relentless effort to feed our brains with useful information!
Jim, I really don't understand why you keep clinging to the idea that all Ryzen 3000 CPUs will consist of two chiplets. It makes absolutely no economic or technical sense. I think you demonstrated well enough that those chiplets will not have significant yield issues. Neither should TSMC be a problem, seeing as larger ARM chips have been produced on TSMC 7nm for months. Further, if a chiplet has a defect, there is a good chance the part can still be salvaged. The majority, however, should still be fully functional 8c chips. Why would you take two fully functional chiplets, disable half the cores and sell them for $200 as a Ryzen 5, when one of those chiplets can do the exact same thing? It doesn't make any sense.
Neither does it make sense on the technological level. There will obviously be an IF connection on the substrate between the chiplets and the I/O chip. Substrates aren't exactly the most complex nor expensive part of a CPU, so making every substrate capable of holding two chiplets makes sense from a production standpoint. There doesn't, however, seem to be any reason to actually have to place the second chiplet on there. The only reason I can think of would be limited bandwidth or latency between the I/O chip and the chiplet when it comes to RAM access. AMD, however, seems to have demoed an 8c Ryzen 3000 ES with one chiplet against a 9900K. For me that says everything on the technological level as well.
Then further, the main target for a hexa- or octa-core will be gamers. Games are latency sensitive, and a lot of those games are not able to properly recognise differences between threads and jump loads between threads all the time. Introducing more latency by splitting the CPU design into two chiplets and having games throw calculations between both chiplets makes absolutely no sense to me at all. I think AMD will release a bunch of hexa- and octa-cores as Ryzen 3/5 parts with one chiplet. 4c might also be Ryzen 3, but it could also be branded as a low-end Athlon. Then Ryzen 7/9 will be two chiplets and 12/16c SKUs. That makes sense on both the economic and technical side.
It all depends on clockspeed yields. Can they bin enough chips with all cores clocking high? And those chips will most likely also be the most power efficient. They will also need all-core high clockers in the 16-core Ryzen and in Threadripper, as well as the most efficient ones for their Epycs. It is a good solution if they expect not to be able to produce enough high-end bins to fulfil demand, or if they simply do not want to run that risk. Especially if, in performance, it only makes a marginal difference between 1 or 2 chiplets. Plus it doubles the cache when using 2 chiplets. There might even be a double and a single chiplet version. It could be the difference between the X and non-X version. Or a single chiplet being branded as a Black Edition if gaming gains turn out to be substantial in a single-chiplet setup. In the end more production means better binning across the board. So doing double chiplets where possible is a good strategy.
@@baronvonlimbourgh1716 I have to agree, 4 best of 8 cores + 4 best of 8 cores should yield significantly better results than the average 8 core single chiplet in my estimation. Keep the best 8 core chiplets for high end of each segment.
@@mattroy3154 Indeed, 2 chiplets could also enable higher-clocked SKUs than would be possible with a single 8-core chiplet. If chiplets with all 8 cores at 4.5 base, for example, are relatively rare, but ones with 4, 5 or 6 cores at that clock are very common, they could not release a single-chiplet 4.5 base SKU, but with 2 chiplets that does become possible. A lot of variables go into making that kind of evaluation: yields, bins, performance impact, economics, production capacity and what performance brackets they want to reach with their products. And probably a lot more we have no idea of.
Wholeheartedly agree. From my interpretation Zen 2 is obviously yielding really well, meaning there are plenty of fully functional 8c dies to cover all use case scenarios. The only exception being the possibility of a dummy die being added to aid with the application of the IHS.
@@baronvonlimbourgh1716 A nice side effect of this binning process could also be improved thermal characteristics: it must be easier to cool the CPU if the heat is spread across two chiplets rather than concentrated in one (with potential gains here for overclockers).
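To put rough numbers on the binning argument above, here's a tiny back-of-envelope sketch. The per-core probability is a made-up placeholder, not a real yield figure; the point is only how much rarer "all 8 cores must hit the bin" is than "any 4 of 8 hit the bin".
```python
#!/usr/bin/env python3
# Back-of-envelope binning sketch: if each core independently hits the target
# clock with probability p (hypothetical), compare how often a die can ship
# as a flawless 8-core at that clock vs. merely contributing its best 4 cores.
from math import comb

def at_least(k: int, n: int, p: float) -> float:
    """P(at least k of n cores make the bin)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.80  # purely illustrative per-core probability
print(f"P(all 8 cores bin)     = {at_least(8, 8, p):.3f}")  # ~0.168
print(f"P(at least 4 of 8 bin) = {at_least(4, 8, p):.3f}")  # ~0.990
# An 8-core SKU built from the best halves of two dies can therefore be
# binned far more often than one needing a single flawless 8-core chiplet.
```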
lmao I remember a dumpster Intel i486 having 8MB of RAM soldered to the motherboard, with 2x2MB of 70-pin SIMM RAM extra. Missing the old days of optimized software though (maybe make a vid on how unoptimized stuff is nowadays?)
I think two d1ckheads see a way to try and make a buck off the backs of others. That is what I think. I think these two d1ckheads should kick rocks. Maybe a piano will fall on both of them.
AMD is fighting the lawsuit instead of settling, and that only means they have a good shot at winning... The lawsuit doesn't have anything to do with Ryzen. I don't see the point of your comment.
Well, all those lawsuits are the same: people complaining it was not 8 cores. Well, the thing is, no matter what they say and show, the fact is that it is 8 cores. The 8 cores are there and that can't be denied. Now, if they are mad that they bought 8 AMD cores and thought they would perform better than 8 Intel cores, then that is another story. More cores, more MHz and even a smaller architecture do not always mean it will perform much better. The only way to really know is to watch reviewers and get the right information from them before purchasing a chip. I mean c'mon, we are not in the 90s and early 2000s anymore; we have a ton of reviewers on every social platform, so if people keep making mistakes like thinking the chip they bought will be better just because it has more cores or more MHz, then they need to stop living under a rock, and the fault is on them.
I was really craving your commentary on the new release of the latest Zen architecture. I'll have some munchies in the meantime. Keep up the good work, Jim.
Alrite guyz, howsit goain
*intel stock plummets*
😂
Have my like sir, you deserve it.
I still want my butter donut t-shirt in addition to this one.
Seems about time to take some short positions on intel 🙃
Haha, brilliant comment ;)
"It is the biggest TECH TOPIC of a decade. END of discussion!" - AdoredTV 2019 (Love ya dude, now let's discuss!)
I think the fact that, without AMD, we would still be stuck with 4 cores and 8 threads from Intel, maybe 6 cores 12 threads max, but at around £500, has to be the topic... I think the Zen project as a whole needs the credit, as the foundation blocks for Zen 2 were laid strong. When you see *predicted* performance jumps so big that people dismiss them on that basis alone, it just shows that the thinking AMD did years ago has now come back to reward them, and deservedly so.
It seems to be getting even more attention than the advent of dual- and quad-core chips, which is actually surprising.
People couldn't comprehend the impact of having more than one core. "Why have like 2 CPUs instead of 1 fast one?" people asked. The closest image people had was dual-socket systems, and they were expensive and inefficient. And Intel spun hard on that too. Pentium 4 was an insanely strong brand despite its shortcomings.
He's absolutely right though. Think back about ten years: that's the time of the first generations of the Core i series, with pretty much the same design that we had until Ryzen.
Here I was, bored, zipping through YouTube videos and bam!
28 minutes of exciting analysis with a beautiful accent and voice appears! :D
Cheers bud.
Haven't been this excited for a new arch since the Athlon (Slot A)
You must be at least as old as me :)
Big fan of your work!
#NeverForgetAthlonXP xD
We are all old here I guess lol.
Wasn't Slot A vs Socket A already a significant improvement?
I'll say that last image looks like a good false-color image of the Matisse chip. What you're seeing there is a center trace line, and assuming those line up with what is in the Zen 2 layout, then I agree: there must be a multi-mode Infinity Fabric link in the chiplet - up to 4 of them. And this might explain the IO die size in Rome - 4 complete replicas of the IO, cache and memory logic to allow 4 links to each chiplet. With that much logic and capacity, the size of the IO die makes much more sense. Instead of going chiplet-to-chiplet, each Zen 2 chiplet likely has multiple links to the Rome IO die: 4 IO links, to four different segments of the IO die, for each chiplet. Meaning that massive die in the center of Rome can handily link each core to multiple resources simultaneously.
Whether that link is used to transfer from one chiplet to another, from chiplet to memory, or from chiplet to the PCI bus, it can have 4 links to whatever each chiplet needs depending on the application and execution requirements. (I fully expect Windows' drivers to treat this really poorly thread-wise until someone calls them on their poor software.) I'd wager that the same thing will happen in Threadripper with much the same end result. And the reason? When you have more than 2 processor chiplets, it makes far more sense to centralize the resources to better control which workloads are sent where and what resources are accessed - so it might have some latency, but at least you won't have the latency of a workload having to go through multiple chiplets to get to where it needs to go.
On a Matisse desktop with only two Zen chiplets, it makes sense that there would be two channel links to the other chiplet, and two channel links to the desktop IO die. Which means the IPC performance we're seeing out of a single Zen chiplet paired with the IO die is actually probably less than what we'd see with dual 4c/8t chiplets, as you say the final production will be. (Although I really can't see AMD shutting down half of an 8c chiplet to make a 4c/4t or 4c/8t low-cost Ryzen 3 unless TSMC 7nm is really bad for defects.) There were only two links being used to the IO die, and the other two weren't going anywhere. It's also important that if for some reason a chiplet is saturating its two links to the IO die, it could (at a slight latency cost) use its dual links to its brother chiplet to get data back to the IO die if that chiplet isn't saturating its IO links. It's better to retrieve data over a roundabout path if necessary than to not have the bandwidth to feed the processor cores at all. Just my thoughts on it. I continue to believe that the IO die is literally the most important chip design decision AMD has ever made.
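Purely to illustrate the routing idea above, here's a toy model. The 2+2 link split, the bandwidth figure and the latency penalty are all assumptions taken from this comment, not a confirmed Matisse topology.
```python
#!/usr/bin/env python3
# Toy model of the hypothesised link layout: each chiplet has two direct
# links to the I/O die and two to its sibling chiplet. If the direct links
# saturate, traffic can detour through the sibling at an extra latency cost.
LINK_BW = 50          # GB/s per link, hypothetical
DIRECT_LINKS = 2
SIBLING_LINKS = 2
HOP_PENALTY_NS = 20   # hypothetical added latency for the detour

def route(demand: float, sibling_spare_links: int):
    """Split a chiplet's I/O demand across direct links, then the detour."""
    direct_cap = DIRECT_LINKS * LINK_BW
    detour_cap = min(SIBLING_LINKS, sibling_spare_links) * LINK_BW
    direct = min(demand, direct_cap)
    detour = min(demand - direct, detour_cap)
    starved = demand - direct - detour
    return direct, detour, starved

for demand in (60, 120, 250):
    d, v, s = route(demand, sibling_spare_links=1)
    note = f"+{HOP_PENALTY_NS} ns on detoured traffic" if v else "all direct"
    print(f"demand {demand:3} GB/s -> direct {d:.0f}, via sibling {v:.0f}, "
          f"unserved {s:.0f} ({note})")
```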
You mentioned that MS might stumble with software drivers. Should we expect better from some of the Linux distros?
@@mbraun777 All but a few leading-edge distros will probably struggle on day one, though not as badly as Windows. The leading edge will be the first to pick up the newest kernels, which will likely get day-one (or even pre-launch) updates to support this topology in a way that makes use of the bandwidth it offers while minimizing the latency cost.
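For the curious, the piece of topology a kernel scheduler really needs on a chiplet CPU is "which logical CPUs share an L3 / sit in the same CCX", and Linux already exposes that through sysfs. A minimal sketch, assuming the L3 shows up as cache index3 (true on current Zen parts, not guaranteed on every machine):
```python
#!/usr/bin/env python3
# Minimal sketch: group logical CPUs by the L3 slice they share, i.e. the
# CCX/chiplet domains a scheduler should try to keep threads within.
from pathlib import Path

groups = set()
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    shared = cpu / "cache" / "index3" / "shared_cpu_list"
    if shared.exists():
        groups.add(shared.read_text().strip())

for i, cpus in enumerate(sorted(groups)):
    print(f"L3 domain {i}: cpus {cpus}")
```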
Let's see how AMD's GPUs work with asynchronous loads. I think Zen 2 works on the same principle.
Datacentre guy here - the threat from ARM in the datacentre was always the massive core counts running at lower frequencies but with very low power draw. It seems to me that AMD have now made a CPU that fits those same qualities but is x86, allowing all your apps and code to run... Could you have a think, Jim, about the power efficiency of low-frequency Rome and compare it in theory to the arguments in the industry for ARM servers? Would love to see your thoughts.
What do you think of ARM being a contender against AMD and Intel? You are my best shot at getting this question answered.
ARM is on the way for the desktop. Things will get going once Apple moves to an all-ARM platform for macOS. The popularity of Windows on ARM will then follow. Five years though, not overnight.
@@mrrolandlawrence Your prediction came true, my guy, with Apple's move to the M1
It's going alright!
Time To take 28min pause from everything once again.
why did I read this as 28nm
@@F2bnp that mindshare
oh hell yeah, a surprise to go with my dinner
Absolutely agree with this, eating as I discover Adored has a new video out. mmmmm... That sweet Adored dessert. :)
A plate of food goes good with some ryzen 2 news (:
Lunch for me goes great with adored TV
shhh we are sleeping here !
Just got my pizza...
Another awesome video Jim, and feel free to talk a lot more about Zen 2 in future videos :)
Zen 2 really is looking like it's going to be an awesome CPU in every task, including low-resolution gaming, if all this info turns out to be true.
Alright guys, how's it going?
intel: I don't feel so good.
Intel: b b but we will release a xxxx Lake pretty soon in 14nm++++(++)
More zen 2 is never enough - like bacon, but with less calories :)
Yes more Zen 2
On the other hand Navi would be exciting too
less calories because of the 7nm ofc :D
All this information makes me pretty eager to see what the Zen 2 Threadripper series will be like. I upgraded early to a 1920x when I initially planned to hold out for those.
I've already said goodbye to my R7 1700x. We've had a good run, but Zen 2 is far too sexy to pass on.
Oh c'mon man. Don't make me feel even more behind with my 1700x. I am already lamenting how it's already outdated >_<
(Naw that's a good thing. World is changing faster than my money)
I might try and hold onto my 1800X until the Ryzen 4000 series... assuming I don't need a new motherboard.
@Vlad Cristea Are you rocking Windows 10? How's the experience? :)
@@ArynCrinn IDK. This new stuff is pretty revolutionary. PCIe 4.0 to boot (assuming new MB)
At least you can keep your RAM. And maybe use the old CPU and MB for another build (with cheaper RAM). That's what I usually do ¯\_(ツ)_/¯
That would work awesome for a homelab server. Run a headless version of CentOS or Debian on it. :)
That being said, I don't think my workloads need more CPU power than what the 1700x provides. Lol.
I'll at least do 32 or 64GB of RAM to do crazy stuff.
@AdoredTV There is this CPU-Userbench "leak" which basically confirms those 15%+ IPC gains... :)
Yep I know of it. ;)
Sauce chaps?
@@adoredtv Sause!!!!
@@adoredtv Of course you do, just wanted to point that out... ...you know for education! ;-)
Probably made the vid before that was leaked
I am a simple man: I see an AdoredTV video, I watch it immediately.
I can't wait for the 3700X. Buying one for certain when it comes out. Really gonna need it for all the plans I have, and all the footage I'll be recording. Unloading a camera with 80GB of footage. Pretty easy to compress with 12 cores.
Just as Jim said, we don't know how long we have to wait for those... I will replace my R7 1700 at 4GHz with something that has fast single-core speed...
@@Kazya1988 My plan is to buy a 2600X, as that will already be a substantial upgrade over my i7-4710MQ, but I do plan on buying that 3700X if it comes true.
I skipped the 9700K/9900K because of the lower-power, similar-performance preview on stage. Looking forward to the 3700X too!
I've never done this in my internet life: I subscribed to your channel from every YouTube account in my family.
Just started watching. Thank you so much for this - just what I needed to prepare for monday! :)
The first computer I built had 64MB of system RAM...
oof
JoeAceJR haha I remember the first computer I saw with over 1GB. I was upgrading a local photographer's computer. When it booted and I saw that magical 1GB, it was like heaven opened up and angels were singing.
My first PC had a 30MB hard drive, a 10MHz 286 CPU, and 1MB of RAM. My first real build was a TMC TI5VG+ Super Socket 7 board with a K6-2 350MHz, 64MB of RAM and a Riva TNT 16MB video card. I have been a proponent of AMD's best bang-for-the-buck model ever since. Have not been this excited about the PC industry since the original Athlon processor launched.
32 MB here !
My first computer had 16K of video RAM and 256 bytes of RAM reserved for the OS (TI-99/4A).
I'm so excited about 3rd gen Ryzen...
I might actually build my first PC with a Ryzen7 ♥️
Just do it. AMD machines are easier to build anyway because of the old school socketing method.
@@TheXev
Are you serious about what you said?
please don't be...
Why is installing an Intel CPU any harder? I know it isn't...
Keep them coming! I for one am very excited about Zen 2. Great videos!
This has aged well.
Fantastic work brother. I was waiting the entire day to sit down and watch this video properly. Man, I am just getting excited.
Looking at that roadmap made me more excited for Zen 3 in 2020 than Zen 2. I have an R5 1600, and a Zen 2 would be a massive upgrade, but not necessarily a needed one. Waiting about 1 more year would net me an even better upgrade, one that I would feel confident I wouldn't want to upgrade from for a good while. But then there's the next thing after that... PC building can be ridiculous with the feeling of always wanting to upgrade something even when you don't need to.
Yup, a 1600 is a beast that won't let you down anytime soon, but you have to admit that this 3700X part looks impressive xD.
I'm in a similar boat. 1700X is going strong, but I could really do with more single threaded performance, and my virtualisation shenanigans are begging for more cores.
Hopefully AMD come good on their promise of supporting AM4 through to 2020. If I can buy a Zen 3 chip and drop it straight into my existing setup that is going to be amazing.
Don't hold your breath, Zen 3 might need a new socket, especially if it goes DDR5.
@@TheBackyardChemist TBH I'm expecting a new socket for DDR5. It seems very unlikely that we'll see an AM4+ that supports both, due to the way DDR5 works.
It all depends on exactly when DDR5 is consumer ready. Many people have said 2020, but that could mean late 2020, which really means we won't see it until 2021 (Which would be my bet). AMD are on record saying they want to be first to market with DDR5 though, so it's anyone's guess what they'll do.
My 1600 does bottleneck my GTX 1070 since I play games at low settings at 1080p with a 144Hz monitor. I will get an 8-core Zen 2 once it's released. I really wish they could release it in Q1 2019.
Make as many Zen 2 videos as you like bud, I'll certainly not be getting tired of them!! Can't get enough, great vid.
Cheers bud.
The 17% IPC gain does seem very high... But I love the way you calculated it; you always impress me with your knowledge and deductive methods, Jim :)
He starts to project how good it could get and then he starts to debug why it isn't possible. He is smart.
There is this CPU-Userbench "leak" which basically confirms those 15%+ IPC gains... :)
With a reduction in latency it isn't that extraordinary; add to that the doubling of cache. I bet he isn't far off.
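For anyone wondering how a number like 17% gets backed out of a demo in the first place: it's just the performance ratio divided by the clock ratio. The figures below are placeholders chosen to show the arithmetic, not the actual demo scores or clocks.
```python
#!/usr/bin/env python3
# How an IPC uplift is usually inferred from a benchmark run:
# (new score / old score) divided by (new clock / old clock), minus one.
def ipc_gain(score_new, score_old, clock_new_ghz, clock_old_ghz):
    perf_ratio = score_new / score_old
    clock_ratio = clock_new_ghz / clock_old_ghz
    return perf_ratio / clock_ratio - 1

# Placeholder inputs only; equal clocks make the perf gain equal the IPC gain.
gain = ipc_gain(score_new=1170, score_old=1000,
                clock_new_ghz=4.0, clock_old_ghz=4.0)
print(f"Implied IPC gain: {gain:.1%}")  # 17.0% with these made-up numbers
```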
I think we should be a bit more careful with our speculation, because Ryzen 2 was first going to be 30% more efficient than Ryzen 1+, but two months later that had already dropped to 25%. Also, after the presentation everybody was disappointed with the demo that was shown, while the performance gain and efficiency of the demo CPU were absolutely mind-bogglingly good. And that was all because of the unrealistically high expectations the hype had created. On top of that, you should not forget that these types of channels make a 'living' from spreading information, and the more outrageous the information is, the more traffic they will receive and the more mindshare.
@@wertin200 Unrealistic?! So you still think we will only get 8 cores!? I personally was blown away!
Aaaw yiz! Best way to end a Sunday evening is watching a long video of yours! Cheers mate!
Man this feels just like the Athlon XP and Conroe C2D times...
True that... those were very exciting times. I bought a C2D E8400 at a later time and an Athlon 64 3500+ back in the AMD days.
Great video man! Thanks for being on top of this!! Really looking forward to seeing what this new series can do.
I wish we had more info on launch date as well as launch date for the new motherboards.
Ohh yesss
28mins of goodness
*godness
Don't feel bad about talking about Zen 2 so much... You're right, it's the biggest thing happening in tech right now, and I can't get enough of it. I've loved every minute of all of your videos since I first found out about your channel in November. Keep it up!
Go AMD. Go Zen 2. Go Zen 3. The better AMD does, the more competition. The more competition, the better CPUs we all get. AMD deserves to win for a while. BTW, feel free to talk about Zen and nothing but Zen for the next 6 months. Then you can find something else to discuss. :-)
They will be tired of winning so much 🤣
@@baronvonlimbourgh1716 : Hahaha. After being trounced for so many years, me thinks AMD will need to remain in the lead for one year to get used to that. Plus, Intel has all the money in the world to remain at least fairly close to AMD, while AMD didn't (and barely stayed alive). AMD needs to keep the pedal pushed firmly against the metal to get ahead and stay ahead for any sustained period of time. The computer industry and consumers need AMD to stay in the lead for several years, at least.
@@maxbootstrap7397 it was a trump joke......
@@baronvonlimbourgh1716 : Sorry, guess I don't pay enough attention to politics. Oh, wait! It's impossible to pay too little attention to politics! :-)
@@maxbootstrap7397 i'd hardly call it politics though.. I'd call it a sad circus lol 🙃
I'll gladly watch every Zen 2 video you come out with. Again, great video as always.
It's Monday 12:30 AM here and AdoredTV just uploaded a 30-minute video. I guess I have to have more coffee tomorrow.
Why do you have to watch right away?
You mean 00:30 ... Normal people use more logical 24h system.
@@MrQuay03 No PC enthusiast can resist watching Jim's analysis.
You are from the future!
haha it's 2am over here when I start watching :D
Jim, you are the best tech news guy on YouTube. Amazing insight! Cannot wait for Zen 2!!
Zen 2 is a new era for CPUs; the same kind of transition happened a decade ago, from 1 core to 2c to 4c to 4c/8t. Zen 2 is the new Sandy Bridge, with tremendous potential to carry another decade.
The hype is real... probably surreal, and like you've put it, the most talked-about topic since I've been into computers, and that's close to 15 years... Great job Jim... The commitment you have is admirable... I'm admitting that I'm not a Patreon supporter, but as soon as I get my student life together and can afford to become one I will, because your work and the work people like you do should be supported in one way or another!
Regards from Slovenia ^^
22:20
Chiplet to Chiplet... hmmmm.... :) that would be interesting
Did you check your email btw?
@@adoredtv I did. Just got info about you liking my comment. I think I might have given the wrong email to Ram or something :P
@@adoredtv OK never mind, it landed in the spam folder lol
@@The_Nihl I wondered if that happened. I decided to play it safe and only mention a few details and not the whole thing. ;)
@@adoredtv Better play it safe, but give "us" a few hints about it so that "we" could read between the lines! ;)
Frankly I am amazed at how others fail to see the things you always see. Another great analysis thanks Jim
I honestly didn't really think I could get excited for hardware the way I used to in the glory days, but by God, I feel like a kid waiting for Christmas when it comes to Zen 2. I swear this is like total nerd viagra.
I for one welcome any new videos about Zen 2, as long as there's some new information and/or analysis (which you've delivered on every single time so far). I'm really excited about this, and it brings back memories of the 90s / early 2000s when there were constant massive improvements in performance, as opposed to the near-stagnation of the last decade until Ryzen CPUs became available. And I also love seeing the underdog getting ready to get on top of the game!
*PC Gamers:* Now that AMD is having their products fabricated at TSMC instead of GlobalFoundries, it's lights out for Nvidia and Intel. All AMD needs right now is for software (games) to catch up to hardware (GPUs, CPUs). Chances are, AMD wants developers to implement ray tracing instead of creating their own proprietary library a la Nvidia's RTX GameWorks. It will be more work for developers, but with the new next-gen consoles coming out next year, developers will be able to put in the work. Just look at the news about AMD being able to do DLSS via DirectML.
By the way, people should look at AMD's 8-bit compute performance against Nvidia.
AMD has also been fostering ray tracing support through various tool sets for quite some time in the game dev/commercial world, where the ecosystem may be maturing quite quickly behind the scenes. It's a presentation from almost a year ago but still valuable for insight into what may be coming up: gpuopen.com/gdc-2018-presentation-real-time-ray-tracing-techniques-integration-existing-renderers/ - at the time they had the RX580 doing over 200 MEGArays (not gigarays like current NVIDIA products), and *I* am quite interested to see where they land with the upcoming GPUs and whether this next generation will even support real-time options. I have seen zero information on how the VII performs in these measurements.
Raytracing is still just a gimmick. It is not yet ready for major adoption.
It is inaccurate and barely implemented in the games that do support it.
@@baronvonlimbourgh1716 Yup, this time AMD is letting Nvidia take the backlash while they just wait for the technology to be adopted widely. We all know that AMD cards have always been exceptionally good at compute tasks; I bet once Nvidia has pushed developers to adopt RTX they will compete pretty equally.
Baron von Limbourgh Yeah, I mean what Nvidia has is not even real ray tracing. They are just ray tracing some of the parts while 98% is still rasterization. So yes, it's 100% a marketing gimmick that backfired on them.
@@yottaXT I think it was developed for the growing rendering market. I think they saw an opportunity there to sell a lot of cards, or in any other market where these cards do well.
And when they have the design anyway, why not push it as a gaming feature?
I bet this is what happened. It never was primarily a gaming feature.
Can't wait to get my hands on Zen 2 .
Yay AdoredTV! 💖 ((the biggest tick-tock since SANDY BRIDGE)) 💖
I'm on holiday in Egypt but I always find time to listen to Jim. :)
Things have not been this exciting since the early 2000s when AMD released the first 1 GHz processor. Very excited for AMD and what they can do to eat away at Intel's pie. As for first computers? Well, mine came standard with 256 KB of memory and I did buy the 256 KB upgrade. As for PCs, my first graphics card from '95 had maybe 1 MB? I still have it but don't even know what card it is. Thanks for the work you do on these videos.
I had one of the fabled AXIA 1GHz Athlons in the early 2000s which overclocked to 1.4GHz just by increasing the multiplier. I remember sitting at my PC one evening and hearing a strange noise, the PC freezing and smelling something burning: the clip on my CPU fan had failed and the CPU fried instantly. I cleaned the heatsink paste off and could see a discoloured section all around the die. I almost cried because the AXIA processors had sold out and I couldn't afford a 'real' 1.4GHz chip. (I must still have it somewhere because I could never bring myself to throw it out...)
I still have both of my Socket A based AthlonXP CPUs and motherboards. I've been wanting to do a retro gaming computer with the better of the two CPU/motherboard combos, but I don't have a spare case to put them in, sadly.
@@adriankelly_edinburgh Intel fanboys at the time would also try to rub it in your face that they had thermal protection that would prevent them from having that same problem. I know several motherboards added thermal protection for the Athlon/AthlonXP, but I don't think AMD added true thermal protection until the Athlon 64, which is also when they added the heat spreader. Athlon CPUs were much cheaper, so if you REALLY had to replace one you could for much less than an Intel, but damn... I'm glad I never went through any of that with my AthlonXP 2000+.
I found the old chip at the bottom of a box of old bits and pieces. I can still discern the pencil lines on it where I'd redrawn the broken connections that were meant to limit the multiplier. Happy days :-)
The mere fact that Lisa Su herself thanked Jim for the analysis in a tweet makes this video even more interesting.
@AdoredTV: Brad Sams at Thurrott.com leaked that the next Surface Laptop will use an AMD APU in his recently released book “Beneath A Surface.” Presumably, this would be the Surface Laptop 3. But which processor? In the past, Microsoft has gotten first-dibs access to processors for their Surface lineup (e.g. the Surface 3 in May 2015 had the Intel Atom x7-Z8700 months before anyone else; I believe the Surface Pro 4 and Surface Book had first dibs on the Skylake Core mobile CPUs as well in fall 2015). So it would not be unprecedented for Microsoft to call first dibs on the initial supply of Zen 2 “Renoir” mobile chips for the Surface Laptop 3 and, since the Pro release is generally in lock-step cadence with the Laptop, the Surface Pro 7. Add the fact that Panos Panay, who had originally led the Surface team, is now the chief product officer at the head of the Microsoft Devices group, which includes Xbox and already has a close-knit connection with AMD for their Xbox systems, and the path becomes clear. AMD, through their long partnership of Xbox co-development, can now reach out to Panos Panay, who oversees both Xbox and Surface, and bring their technology to Surface as well in an industry-first style move.
I love your content. It's hard waiting for you to upload. Keep up the great work, and I really appreciate the information and honesty you bring to us.
Before even starting to watch I had to like this.
Awesome video Jim, love the way you put all the pieces together. Ryzen 3000 for the desktop based on Zen 2 will be absolutely astonishing. Can't wait!
Also don't stress too hard about making videos on the same topic. Keep it going, I'm sure everyone is just as curious as you are.
I just love this channel.
I usually wait until the end to thumbs-up a video, but once you started on some stuff I hadn't already heard / speculated myself, I had to do it... couldn't have been past 8:00 in. Always enjoy the content!
What is Zen 2? The CPU that will replace my current Ryzen 7 2700X 😊
Why not get a Zen 3 (or a 4700X)?
Indeed trastewere, that's exactly what I'm going to do, even though I've only had the 2700X for a month ha ha. Swore I'd never buy another Intel chip again. It was hard but I stuck to my guns, so go to hell Intel.
I just want that 48 core TR
@@herbetrono4373 welcome to the dark side 😁
I have a Ryzen 5 1600... it's even better for me.
"Alright guys, how's it goin ?" gets me everytime!
I'm expecting Zen 2 to finally end my run of Intel CPU purchases. Stay tuned.
P.S. Great vid. :)
Thank you! I was struggling with all the different names myself, and you made it very clear. Well done!
My first GFX card had 1MB. Tseng ET4000 on a 486DX2/66 in 1995.
I had a 1MB VRAM graphics card in my IBM PS/2 Model 55SX back around then... it probably cost about the same as these damned nVidia RTX cards do these days!!
Yay, always a new video when I am on my mobile patrol shift. Thank you AdoredTV.
20 Intel engineers disliking the video taking notes XD
Lisa Su tweets about AdoredTV’s latest “What is Zen 2?” video twitter.com/lisasu/status/1089715760895852545?s=21
I already know the content of the video is amazing lol.
What a great channel! Keep up the great work. I am literally waiting for Ryzen 3 to hit the market to upgrade my build. My last AMD was an Athlon 64, from which I then went to the 2600K (which, let's face it, was fantastic). Been on it for the last 7 years and it's seen me through 2 PSUs and 3 GPU upgrades, but it's now high time for an upgrade at the most fundamental level. I've been biding my time for this very moment. Thankfully, it looks like I don't have much longer to wait!
Just because they say "at this time, there will not be an integrated GPU" doesn't mean there won't be one in the near future!
Good video, and very interesting stuff. I'm looking forward so hard to the future of AMD! Keep it up, AdoredTV!!!
And what if, for the Zen 2 Renoir APUs, they have GlobalFoundries produce IO chiplets with a small graphics subsystem integrated in there..? 🤔
I bet nobody would have seen that exact config coming, but it might make some sense.
The GPU as its own chiplet might get pushed back into future products so as not to take on too much innovation risk all at once?
I see a lot of people using your work, keep up the good work and keep the updates coming.
Lisa Su is a master of speaking in tongues >
During my current heartbreak there's little that helps me more than a nice, long AdoredTV video along with some unhealthy food I will regret eating but will do anyway because I need to numb myself to the pain
My first PC had less memory than these have cache :D
My first PC had a smaller HDD
Man, reminds me of my 1st PC with a 386DX and 8MB of EDO RAM 😂😂
@@fastcx My 386 was my 2nd PC, a $5000 beast with 8MB as well. It was 30-pin SIMM.
@@mattroy3154 Man, we are old XD. My 2nd PC was a Pentium 66MHz, then 75MHz, all of which ran like ass when paired with 8MB of RAM without any cache.
@@fastcx That's where I got spoiled, we got a P120 with 16MB and a Trident 2MB PCI video card. It ran Descent really well, but when Quake 2 came out I was gaming at less than 15FPS at 320x240 resolution.
Love your work, as always. Worth every minute of my time.
What is Zen 2? The Zen after Zen 1, of course.
Oh thank god, now I don't need to watch a 28 min video :D
DELET THIS
Wrong; it's actually Zen 1, then Zen+, then Zen 2.
Better question: is it worth talking about Zen+? Reasons for doing so: small boost in overall performance, Threadripper's max core count doubled; reasons for not doing so: no improvements to Epyc for reasons explained in vid
@@ganaraminukshuk0 It was a joke about numerical progression. Thanks for being overly dense.
@@InvidiousIgnoramus I'm sincerely sorry about that; I legit thought people forgot that Zen+ was a thing. Whatever; guess I'll go down as someone who doesn't have a sense of humour.
Just what I needed, a 30 min video to enjoy.
If AMD actually pulls off chiplet-chiplet communication along with the I/O die, that will be the absolutely most impressive thing about the whole thing to me. Doing that effectively creates a high-speed, high-bandwidth asynchronous system... Not exactly something that's easy to pull off in silicon. There's a reason why all modern CPUs (outside of some academic experimental asynchronous processor) use strictly hierarchical structures like a single bus internally. Once you start to go away from this architecture, it just gets really hard to make all of the bits arrive where you want them to be in time. So, this is why I'll be very skeptical about the chiplet-chiplet links, until there's some extraordinary evidence to justify the extraordinary claims.
They already do this internally with Zen. The two CCXs have their own dedicated link.
@@kazedcat I can't find any sources for that, what exactly do you mean? All I see in Zen 1 is two CCXs sitting on the same SDF and SCF, which isn't really comparable. Or do you mean the IF links between the MCPs? Because my understanding is that those are only used to link the SDFs of the two chiplets for communication with DRAM etc., which is more similar but also not really comparable imo.
@@squelchedotter Communication between CCXs inside a die is asymmetric compared to communication with a CCX on another die. So in Threadripper, if core 9 needs to talk to core 1, the path is longer and has to hop across two SDFs and two SCFs, compared to a single hop from core 1 to core 2.
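To put rough numbers on that hop-count argument, here's a toy model of a two-die package. The layout and hop counts are made-up assumptions for illustration, not AMD's actual fabric topology:

```python
from collections import deque

# Toy model of the hop-count argument above. Layout and hop counts are
# made-up assumptions for illustration, not AMD's actual fabric topology.
edges = []

def link(a, b):
    edges.append((a, b))

for die in (0, 1):
    sdf = f"sdf{die}"
    for ccx in (0, 1):
        ccx_name = f"die{die}_ccx{ccx}"
        link(ccx_name, sdf)  # CCX <-> its die's data fabric (SDF)
        for core in range(4):
            link(f"{ccx_name}_core{core}", ccx_name)  # core <-> its CCX
link("sdf0", "sdf1")  # inter-die Infinity Fabric link

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def hops(src, dst):
    """Shortest number of fabric hops between two nodes (plain BFS)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))

print(hops("die0_ccx0_core0", "die0_ccx0_core1"))  # same CCX            -> 2
print(hops("die0_ccx0_core0", "die0_ccx1_core0"))  # same die, other CCX -> 4
print(hops("die0_ccx0_core0", "die1_ccx0_core0"))  # other die           -> 5
```

Scale the same idea up and the appeal of a central IO die is obvious: the more dies a request has to cross, the more fabric stops it makes along the way.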
Oh man, I have let myself get fully hyped for Zen 2, please don't disappoint AMD!
@AdoredTV you sound like the "What's heavier, a kilogram of steel or a kilogram of feathers?" guy
But steel is heavier than feathers... I don't understand? Hahaha
If the crossbar communication between cores becomes a reality, I'm all in! This processor design will be a work of art! I'll be buying this Rembrandt as soon as it becomes available!
*Everyone out here forgetting the FX 9590 OG 5GHz*
Saw 8350s hit 5.0 ON AIR COOLING..... lol.
Even then AMD was working their way to what is coming out this year.
Think on the CCX design a minute. It just wasn't there yet though, and people bashed Bulldozer for years. Yet I had an 8350 for a couple of years, and it was just fine...
Another lovely part of my Sunday afternoon. =) looking forward to this cup of coffee shared with your thoughts Jim. Thanks!
"threadripper will be strictly workstation"
I will still buy non-WX Threadripper as an everything CPU. I'm fairly sure AMD will also still advertise their non-WX Threadrippers for everything, including gaming. It's a wonder if they'll even make workstation-specific Threadrippers this time.
I don't think WX Threadripper will even exist. The reason they were called WX was that two of the dies didn't have direct memory access; the IO die in 3rd gen TR will take care of that. Though it's worth pointing out that Level1Techs found the performance drop in WX chips was down to Windows, not the 2 dies (16 cores) lacking direct memory access.
@@stayfrost04 Me neither. With the I/O die they can function the same no matter how many cores/chiplets they put in the package. Even a 64 core I don't see getting a WX branding or purpose, just a power requirement bump if the clocks go higher than the 2990WX.
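To put the "takes care of it" point in toy form: the win from the IO die isn't necessarily fewer hops to memory, it's that every chiplet sees memory the same way. The hop counts below are made up for illustration, not measured:

```python
# Toy comparison of memory-access paths: 2990WX-style vs a central IO die.
# Hop counts are illustrative assumptions, not measured latencies.

# 2990WX-style: 4 compute dies, but only dies 0 and 2 have memory controllers;
# dies 1 and 3 must forward memory requests through a neighbouring die.
wx_hops = {0: 1, 1: 2, 2: 1, 3: 2}

# IO-die style: every chiplet talks to the central IO die, which owns all the
# memory controllers, so every chiplet sees the same path length.
io_hops = {die: 2 for die in range(4)}

for name, hops in (("2990WX-style", wx_hops), ("IO-die style", io_hops)):
    print(f"{name}: worst {max(hops.values())} hops, "
          f"spread across dies {max(hops.values()) - min(hops.values())}")
```

That uniformity is exactly what the Windows scheduler struggled with on the WX parts, so an IO-die design sidesteps the whole problem.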
My god how much in love with AMD are you? I can HEAR you smile as you speak :D ... great content, keep em coming.
You're hearing things then. Don't mistake my desire to see Intel get thrashed as a love for AMD.
Given that they cannot completely remove the latency between chiplets even with IF decoupled from memory speeds, I hope that you are wrong about the 8 cores being done via 2 chiplets with 4 cores each. 1 chiplet with 8 cores is vastly superior.
Super-Jim to the rescue ...
Just when you almost feared that Jim had gone back on vacation (to make up for the interrupted one), when all the other tech news is uninspiring and lame and you're about to succumb to boredom - in short, when all hope in the world of tech fades: in jumps Jim at the last second and wrests you from the claws of lethargy.
Thank you Jim for your relentless effort to feed our brains with useful information!
Zen 2 will destroy intel
Zen already does, pretty much.
I love when you upload, it makes me happy XD
Posted 6 mins ago? I’m late!!!
loltyler1dotcomdiscountcodealpha
Chiplet-to-chiplet communication and decoupled IF clock speed would be a godsend. A bit more excited now.
Jim, I really don't understand why you keep clinging to the idea that all Ryzen 3000 CPUs will consist of two chiplets. It makes absolutely no economic or technical sense.
I think you demonstrated well enough that those chiplets will not have significant yield issues. Neither should TSMC be a problem, seeing as larger ARM chips have been produced on TSMC 7nm for months. Further, if a chiplet has a defect, there is a good chance the part can still be salvaged. The majority, however, should still be fully functional 8c chips. Why would you take two fully functional chiplets, disable half the cores and sell them for $200 as a Ryzen 5 when one of those chiplets can do the exact same thing? It doesn't make any sense.
Neither does it make sense on the technical level. There will obviously be an IF connection on the substrate between the chiplets and the I/O chip. Substrates aren't exactly the most complex nor expensive part of a CPU, so making every substrate capable of holding two chiplets makes sense from a production standpoint. There doesn't, however, seem to be any reason to actually place the second chiplet on there. The only reason I can think of would be limited bandwidth between the I/O chip and the chiplet when it comes to RAM access and latency. AMD, however, seems to have demoed an 8c Ryzen 3000 ES with one chiplet against a 9900K. For me that says everything on the technical level as well. Further, the main target for a hexa- or octa-core will be gamers. Games are latency sensitive, and a lot of them can't properly recognise differences between threads and bounce loads between threads all the time. Introducing more latency by splitting the CPU design into two chiplets and having games throw calculations between both chiplets makes absolutely no sense to me at all.
I think AMD will release a bunch of hexa- and octa-cores as Ryzen 3/5 parts with one chiplet. 4c might also be Ryzen 3, but it could also be branded as low-end Athlon. Then Ryzen 7/9 will be two chiplets and 12/16c SKUs. That makes sense from both the economic and technical angles.
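A quick back-of-the-envelope yield model backs this up; the defect density and die area below are guesses for illustration, not TSMC's numbers:

```python
import math

# Back-of-the-envelope Poisson yield model for a small 7nm chiplet.
# Defect density and die area are guesses for illustration, not TSMC figures.
die_area_cm2 = 80 / 100      # ~80 mm^2 chiplet expressed in cm^2
defect_density = 0.5         # killer defects per cm^2 (assumed)

lam = defect_density * die_area_cm2   # expected killer defects per die
yield_8c = math.exp(-lam)             # dies with zero killer defects
one_defect = lam * math.exp(-lam)     # dies with exactly one killer defect,
                                      # roughly salvageable as a 6c part

print(f"fully working 8c dies:  {yield_8c:.1%}")
print(f"salvageable as 6c dies: {one_defect:.1%}")
```

Even with a fairly pessimistic defect density, roughly two thirds of chiplets come out as fully working 8c parts and most of the defective ones are still salvageable as 6c, so there's little reason to burn two good chiplets on a cut-down SKU.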
It all depends on clock speed yields. Can they bin enough chips with all cores clocking high? And those chips will most likely also be the most power efficient.
They will also need all-core high clockers in the 16 core Ryzen and in Threadripper, as well as the most efficient ones for their Epycs.
It is a good solution if they expect not to be able to produce enough high-end bins to fulfil demand, or if they simply do not want to run that risk.
Especially if there's only a marginal performance difference between 1 and 2 chiplets. Plus it doubles the cache when using 2 chiplets.
There might even be a double and a single chiplet version. It could be the difference between the X and non-X version, or a single chiplet could be branded as a Black Edition if the gaming gains turn out to be substantial in a single-chiplet setup.
In the end, more production means better binning across the board. So doing double chiplets where possible is a good strategy.
@@baronvonlimbourgh1716 I have to agree, the 4 best of 8 cores + the 4 best of 8 cores should yield significantly better results than the average single 8 core chiplet, in my estimation. Keep the best 8 core chiplets for the high end of each segment.
@@mattroy3154 Indeed, 2 chiplets could also enable higher clocked SKUs than would be possible with a single 8 core chiplet.
If chiplets with all 8 cores at 4.5 base, for example, are relatively rare, but ones with 4, 5 or 6 cores at that clock are very common, they could not release a single-chiplet 4.5 base SKU, but with 2 chiplets that becomes possible.
A lot of variables go into that kind of evaluation: yields, bins, performance impact, economics, production capacity, and what performance brackets they want to reach with their products. And probably a lot more still that we have no idea of.
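Here's a crude Monte Carlo of that binning argument; the per-core clock distribution is a made-up assumption, purely to show the harvesting effect:

```python
# Crude Monte Carlo of the "4 best of 8 + 4 best of 8" binning argument.
# The per-core max-clock distribution is a made-up assumption for illustration.
import random

random.seed(0)
TARGET = 4.5          # GHz base clock every enabled core must hit
TRIALS = 100_000

def chiplet():
    """Simulate the max stable clock of each of 8 cores on one chiplet."""
    return [random.gauss(4.4, 0.15) for _ in range(8)]

full_8c = 0           # single chiplet, all 8 cores reach TARGET
harvested_4p4 = 0     # two chiplets, best 4 cores of each reach TARGET

for _ in range(TRIALS):
    a, b = chiplet(), chiplet()
    if min(a) >= TARGET:
        full_8c += 1
    if sorted(a)[-4] >= TARGET and sorted(b)[-4] >= TARGET:
        harvested_4p4 += 1

print(f"single chiplet with all 8 cores at {TARGET} GHz: {full_8c / TRIALS:.2%}")
print(f"two chiplets each with 4 cores at {TARGET} GHz:  {harvested_4p4 / TRIALS:.2%}")
```

With a spread like that, "best 4 of 8 from two chiplets" bins show up far more often than chiplets where all 8 cores hit the target clock, which is the whole appeal of the two-chiplet idea.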
Wholeheartedly agree.
From my interpretation, Zen 2 is obviously yielding really well, meaning there are plenty of fully functional 8c dies to cover all use-case scenarios.
The only exception being the possibility of a dummy die being added to aid with the application of the IHS.
@@baronvonlimbourgh1716 A nice side effect of this binning process could also be improved thermal characteristics: it must be easier to cool the CPU if the heat is spread across two chiplets rather than concentrated in one (with potential gains here for overclockers).
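Rough arithmetic on that power-density point (the wattage and die area are assumed numbers, not real Zen 2 figures):

```python
# Same total power spread over twice the silicon: rough heat-flux comparison.
# Wattage and die area are assumed numbers, not real Zen 2 figures.
core_power_w = 95.0       # assumed power dissipated by the CPU cores
chiplet_area_mm2 = 80.0   # assumed area of one chiplet

one_chiplet = core_power_w / chiplet_area_mm2
two_chiplets = core_power_w / (2 * chiplet_area_mm2)

print(f"one 8-core chiplet:  {one_chiplet:.2f} W/mm^2")
print(f"two 4-core chiplets: {two_chiplets:.2f} W/mm^2")
```

Same total power over twice the silicon roughly halves the heat flux into the IHS, which is where any extra overclocking headroom would come from.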
Great video as always, I really like your way of analyzing :)
Short answer. It's the death of intel.
Zen 2 in your channel is like butter in cooking, it just makes it better...
lmao I remember a dumpster Intel i486 having 8MB of RAM soldered to the motherboard, with 2x2MB 72-pin (SIMM) RAM extra. Missing the old days of optimized software though (maybe make a vid on how unoptimized stuff is nowadays?)
Loved the video. Your ability to analyze is very impressive
Sunday Funday i guess?
Well, he did bring Jim on.
Great vid dude, subbed. Can't wait for these chips
What do you think about AMD (again?) getting hit with a lawsuit about Bulldozer cores?
I think two d1ckheads see a way to try and make a buck off the backs of others. That is what I think.
I think these two d1ckheads should kick rocks.
Maybe a piano will fall on both of them.
It's sad, but AMD is a company that can manage a lawsuit.
I think Nvidia and Intel were really pissed when they noticed that the FX-8150 counts as 4c/8t after 8 years of running in their marketing teams' computers.
AMD is fighting the lawsuit instead of settling; that only means they have a big shot at winning. The lawsuit doesn't have anything to do with Ryzen. I don't see the point of your comment.
Well, all those lawsuits are always the same: people complaining it was not 8 cores. Thing is, no matter what they say and show, the fact is that it is 8 cores. The 8 cores are there and that can't be denied. Now if they're mad that they bought 8 AMD cores and thought they would perform better than 8 Intel cores, that's another story. More cores, more MHz and even a smaller architecture don't always mean it will perform much better. The only way to really know is to check reviewers and get the right information from them before purchasing a chip. I mean come on, we're not in the 90s and early 2000s anymore; we have a ton of reviewers on every social media platform, so if people keep making mistakes like thinking the chip they bought will be better just because it has more cores or more MHz, then they need to stop living under a rock and the fault is on them.
I was really craving your commentary on the new release of the latest Zen architecture. I'll have some munchies in the meanwhile. Keep up the good work, Jim.
Not first, but notif squad OP!
Great information. Good thing that I decided to wait until summer.