I wish motherboard manufacturers took this opportunity to spend big on PCIe switches, to turn those 28 PCIe 5.0 lanes from the CPU into 56 PCIe 4.0 lanes. Yeah, PCIe switches are expensive as all heck, especially 5.0 ones, but PCIe lanes are exactly what I need in a server context.
The AM5 socket doesn't have enough pins for that. You could split them up with an external solution, but now you're introducing other problems. Plus, you wouldn't be able to split them into separate IOMMU groups for hardware passthrough to VM guests.
@@TheChadXperience909 That is explicitly why I mentioned the PCIe switch...
@@TheChadXperience909 Still, for a desktop CPU where they want everyone on NVMe, and like to sell that from the SSD-maker side as well, they should have space for at least 3 or better yet 4 of these drives. I may not want to be forced to cough up for 1×8TB drive at €1200 minimum. I have 1×1TB and 1×2TB, and now must decide to either get a 4TB and throw the 2TB in the trash, or partially wipe the Steam and Epic libraries. I am annoyed. I should be able to just plug in 2TB extra. An E-key slot as a bonus should be in there as well.
We have 12VO coming, we have bigger and better plugs for everything; just make DTX more the norm, use the extra width, and put them all right next to the CPU socket. Very short traces mean no expensive retimers, and when we move up to PCIe 6 and 7 you could even just 2× every SSD and NVMe slot, it's fine. Also a smaller motherboard, sleeker build, still a spare slot (use a riser, or not). Or put 2× x16 slots, give them both x8, all done. Put the GPU top or bottom.
@@TheChadXperience909 Are IOMMU groups a big deal for a storage server? It's not like you want to run multiple VMs with video cards doing 3D loads on a desktop chip.
@@JoebDragon Since most storage servers don't need all that processing power, it might make sense to virtualize it, and you may want to pass your HBA to the guest.
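For anyone wanting to check this on their own box before committing to passthrough: Linux exposes the grouping under /sys/kernel/iommu_groups, so a short script can show whether an HBA sits alone in its group. A minimal sketch, assuming a Linux kernel with the IOMMU enabled:

```python
#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices in each (Linux sysfs)."""
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS/kernel?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    # Each group directory holds symlinks named after PCI addresses.
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```

A device that shares its group with others generally can't be passed through on its own, which is the concern raised above.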
Already got a 4244P running in the homelab. Excited so far to see how it goes!
I like the look of the 7900 non-x Epyc counterpart - the 65W, 12 core one - 4464P. Pretty cheap and low power but plenty of cores for a home lab scenario.
This, and an ASRock Rack board with x16/x0, x8/x8 and dual 10Gbit, but no BCM
You might find yourself wanting more PCIe, though. And stay away from "B" boards if you need IOMMU.
Outrageous. Do you not understand the value of X? AMD are doing such favours in allowing you to buy a product with an X for a crazy low price, they're breaking the market. Outrageous.
/s
@@ChrispyNut X is just short for eXtended frequency range, meaning slightly higher clock speeds.
This shares the same name as the automated overclocking feature dating back to Zen 1.
@@tim3172 It's also commonly mocked on TechTube for what I said.
If you don't know the meme, it's probably not as funny as with existing meme knowledge.
I really like that we've put more emphasis on efficiency recently; would love to see more efficient CPUs hit the market. Especially for home servers.
Yeah, I really want to see more 35 watts and under parts. They make them, but only in industrial PCs which you have to buy in bulk. It would be nice to have something which competes with Atom CPUs for the homelab.
Athlon 3 with 35w TDP is coming 😉
@@twinnie38 Athlon's dead and buried.
@@TheChadXperience909 Yes, yes and yes. I need a 35W 4- or 8-core chip which has some grunt to it, about 14 SATA ports, and 10Gb Ethernet. IPMI, ECC - I'd be very happy.
@@scottylans Well, I'm not a salesman. So, I can't help you.
AMD is basically just competing with itself
After having Intel not give any real new innovation for a decade, I'm Ok with this. I hope AMD doesn't go the same way as Intel down the road
@@nathanscarlett4772 If Intel goes into a Bulldozer-style rut, they will somewhat, except that ARM is finally looking to make some inroads for desktop and servers, meaning AMD at least has to keep representing the larger x86 ecosphere with gusto.
@@nathanscarlett4772 If I were AMD, I would always be checking my rear-view mirrors... Intel is pretty brutal.
@@nathanscarlett4772 Well, Intel was competing with itself.. which is why we didn't get anything for that decade. So, now it's just turned around.
@@Moshenokoji Intel is infinitely richer than AMD. When AMD didn't innovate, it was because they financially couldn't. When Intel didn't, it was because they don't give a f about humanity.
AMD continues to make good moves...
Execution was the problem about 10 years ago. Things in this light have definitely improved.
Minus their marketing team lol
@@GewelReal AMD's marketing post a certain um... individual... leaving, has gotten generally better. That being said, some of the marketing was absolute money, the issue is it was all over promise, under deliver. Couple that with being plagued with firmware and driver issues and... there were problems.
That being said: Ignore marketing, turn directly to trusted actor reviews+comparisons. Stick to stuff you can actually personally validate.
In short, if the person pushing the message has the accreditation of a marketing degree: treat them like an idiot, right up until they prove they have the know-how and personal integrity to provide honest information.
I wish they'd give us the dual v-cache chip. It really helps in some applications.
@@tomstech4390 there was a prototype dual v cache that they never released
@@tomstech4390 I wish they'd give 32MB + 64MB on each chiplet, so 192MB total. Some applications really benefit from it. I have one that uses about 50MB of working data and is memory-bandwidth bound, so each chiplet having 96MB would be excellent.
Unlikely, those are essentially enterprise market rebrands.
@@MarkRose1337 The market for desktop parts with dual v-cache is low because of the clockspeed penalty - in theory AMD's expected role for the dual CCD v-cache parts is so that you can use one CCD for cache heavy loads while still being able to use the other for clockspeed limited loads, given that desktop users are likely to encounter a mix. Is your application one that can justify moving up to full size Epyc? There's fully v-cached Epyc options, although that's less a step up and more a quantum leap with an option for over a gigabyte of L3 (you could potentially go with a non 3D design with 256MB as well)
@@bosstowndynamics5488 Yes, the v-cache Epycs would work, but it's a bit difficult to justify a $15k 9684X when I could build 12 7800X3D systems for less, with double the main memory bandwidth per core (which would be useful in other situations). Like just make a dual v-cache AM5 CPU and charge more for it, and call it the 9945X3D so the ignorant don't think it's "better" because it has a higher number.
I've used tons of those ASRock servers. Love them.
The CPUs are only one part. We need support for them on consumer motherboards. Intel shows that if you need special motherboards to run them, including ECC, they are extremely expensive. An Intel i5-12500 for 194€ needs a 500€ motherboard for ECC. Nuts. I hope these Epycs will be supported on cheaper boards...
Crazy that nobody whatsoever makes sensible PCIe slot layouts for AM5, neither on consumer nor workstation/server boards.
Could a riser be used to separate different lanes or does PCIE not split up like that?
I think it's the chipset which is most at fault. AMD knows people don't want a fan on the chipset, but they can't provide us a decent solution without it.
Because there are no lanes on AM5; 24 lanes is nothing in 2024.
There are not enough PCIe lanes to be able to do so.
@@eat.a.dick.google Yes there are, if they allowed more slots to be just 4.0 or even 3.0. The benefit of PCIe 5.0 is that fewer lanes give you the same bandwidth that took more lanes on previous generations.
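To put numbers on that: per-lane throughput roughly doubles each PCIe generation (about 1 GB/s for 3.0, 2 GB/s for 4.0, and 4 GB/s for 5.0 after encoding overhead), so a 5.0 x4 link moves about as much data as a 4.0 x8 or a 3.0 x16. A quick sketch of the arithmetic:

```python
# Approximate usable one-direction bandwidth per PCIe lane, in GB/s
# (post-encoding figures; exact numbers vary slightly by source).
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Total bandwidth of a link with the given generation and width."""
    return PER_LANE_GBPS[gen] * lanes

# Same bandwidth, fewer lanes, as the comment above argues:
for gen, lanes in [("5.0", 4), ("4.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth(gen, lanes):.1f} GB/s")
```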
10:02 IT'S BOOTING!!
Yeah! Luckily the motherboard does not care for markings on the heat spreader
This brings me memories of the AMD Socket 939, with Athlon and Opteron.
Thank you, Wendell
This channel is nuts!! Good work Wendell!
For every "head scratching decision" AMD make they seem to have a nack for making you forget about it with a really cool decision albeit a tad late. I had forgotten about this announcement. But seen it work on an AM5...very tempting. I can see these selling like hotcakes.
Getting cheap server CPUs for desktop platforms at end of life is a nice option, and a bit of a small flex for late adopters and for people needing cred as serious computer users.
Got excited at first thinking this would get me away from having to do a Threadripper build, but not nearly enough PCIe lanes. Bummer.
Yeah, the socket won't support that, and since they've just doubled down on their AM5 commitment, we'll have to wait until 2027 before that might change. Thus, you can rest assured that a Threadripper is the way to go if you need PCIe lanes.... Well, or Team Blue, of course.
@@TheChadXperience909I wouldn't trust team blue with anything right now. Not until they give us some official explanations for current chip failures.
@@Freakmaster480 Yeah... But, it can be a lot cheaper than Threadripper.
@@TheChadXperience909 If you're one to gamble then sure.
Nice to see this. I used to buy used E3 Xeons to put in budget LGA1155 builds back in the day. Will have to keep my eyes open here in 2-3 years when these start hitting the used market.
I upgraded my home server last year from a dual socket Xeon v1 to a single Ryzen 7600. I ended up halving the ram, but it performs better as a game server (which is the only task that pushes it) and runs 70W less at idle.
Have you tried it in Eco Mode? If it still performs well enough for your use, that should save on electricity.
@@TheChadXperience909 I haven't, but doesn't that just affect the peak power draw? This thing spends most of its time at idle.
If you have a bit of time, you could undervolt the CPU and test around. I set my 7950X to undervolt -15, and now it achieves around 55W near idle (5% CPU load), and even below 40W at full idle (after boot with no programs started).
And that (I know, I know) on Windows 11.
I might be able to push it to -20, but will probably just reduce it by 1 every now and then to test stability.
Hopefully Curve Optimizer is coming to X670(E) and Zen 4... (see the power-measurement sketch after this thread for checking results).
@@bernds6587 yeah, that's worth doing too. I've got some time coming up, and the server needs a BIOS update and some tweaking, I'll add that to the list, thanks 🙂
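A way to sanity-check numbers like those in the undervolting thread above: recent Linux kernels expose Zen package energy through the powercap (RAPL) interface, so sampling the counter twice gives average watts. A minimal sketch; the path and AMD RAPL support are assumptions about your kernel, and reading energy_uj may require root:

```python
#!/usr/bin/env python3
"""Rough average CPU package power from the Linux powercap (RAPL) counter."""
import time
from pathlib import Path

# On recent kernels, AMD Zen shows up under the intel-rapl powercap name.
ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def read_uj() -> int:
    return int(ENERGY.read_text())

before, t0 = read_uj(), time.monotonic()
time.sleep(10)                     # sample window; ignores counter wraparound
after, t1 = read_uj(), time.monotonic()

watts = (after - before) / 1_000_000 / (t1 - t0)   # microjoules -> J -> W
print(f"average package power: {watts:.1f} W")
```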
Why am I so giddy hearing about this??!! It makes a lot of sense! WOW! Excitement in the server space!
Ah man, I was really hoping this would mean AM5 but with a few extra PCIe lanes. Even like 32 or 36 would be nice, instead of 28. I guess desktop CPUs are just so physically constrained at this point that architecturally it's not possible. The only reason I say this is because my first computer was during the heyday of SLI and Crossfire, and I think my EVGA motherboards all supported three-way SLI (though I never used it since I was using hand-me-down monitors for the longest time on a kickass gaming rig; I didn't know any better, thankfully haha. Too bad it was a hand-me-down LCD Dell monitor and not our old Sony Trinitron. If only I had the sense to hook that bad boy up). So, because of that, all the high-end enthusiast boards had at least like four x16 PCIe slots. They weren't necessarily electrically x16 (either physically or they'd switch to x8 or x4 depending on the use case), but what that meant was I never had trouble with expansion.
Now with my X570 board, because of the limitations of modern boards, you really can't get PCIe NVMe expansion cards on there unless you want to switch your GPU to the second slot, which, depending on the BIOS, might be hinky to begin with. It's really sad since you have these wild Epyc and Threadripper processors with 128 PCIe lanes, but then we're stuck with a dinky 28 lanes on AM5. Though Intel has been even more anemic than AMD's AM4 and AM5 platforms, so really, I should just be counting my lucky stars, I guess.
Theoretically it is possible with desktop CPUs. I don't think an IO die with a couple more lanes would be that much bigger to fit in the package. But the AM5 socket is not designed for it, you'd also need the extra pins on the socket to route those lanes out.
What is possible though is to use PCIe switches (or the chipsets) to split the existing lanes into slower, but more lanes. Doesn't increase the total bandwidth, but you could say take the 16 gen5 lanes that go to the GPU slot and make 32 gen4 lanes with full bandwidth out of them. I'd love to have an AM5 board like that, don't need fast lanes, I'll take gen3 lanes too. But esp. for a home server type scenario, a board with a good number of PCIe slots would be nice.
My B550-XE STRIX has the graphics card in the first slot, and I use the included expansion card for additional NVMe SSDs in the second X16 slot.
Keep in mind it's not easy to utilize the bandwidth of multiple NVMe SSDs unless you're doing something like a RAID. Their bandwidth requires multiple threads and queues.
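On the multiple-threads-and-queues point: a single sequential reader often leaves a fast NVMe drive underfed, while several readers keep more requests in flight. A toy sketch of the effect; TEST_FILE is a placeholder for a large existing file, and since the OS page cache will inflate repeat runs, a real benchmark should use a tool like fio:

```python
#!/usr/bin/env python3
"""Toy read-throughput test: more reader threads, more requests in flight."""
import os
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "/tmp/big.bin"   # placeholder: point at a multi-GB file
CHUNK = 1 << 20              # 1 MiB per read call

def read_span(offset: int, length: int) -> int:
    """Read `length` bytes starting at `offset`, return bytes actually read."""
    done = 0
    with open(TEST_FILE, "rb", buffering=0) as f:
        f.seek(offset)
        while done < length:
            data = f.read(min(CHUNK, length - done))
            if not data:
                break
            done += len(data)
    return done

size = os.path.getsize(TEST_FILE)
for threads in (1, 2, 4, 8):
    span = size // threads
    start = time.monotonic()
    with ThreadPoolExecutor(threads) as pool:
        total = sum(pool.map(read_span,
                             [i * span for i in range(threads)],
                             [span] * threads))
    rate = total / (time.monotonic() - start) / 1e9
    print(f"{threads} thread(s): {rate:.2f} GB/s")
```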
@@Hugh_I the IO die already has 64 lanes but the socket only has 28 🤦
@@erkinalp It does? interesting! Did AMD say that they cut it down, or is there a die shot that shows it or something? Would love more details about that.
AM5 Epyc looks very interesting; I'd be interested to see how this progresses and if they keep going with it.
These AM5 Epycs couldn't have come out at a better time, with Intel CPUs in self-destruction mode.
Awesome! This would be a great upgrade from my current system of a Xeon E3-1275L in a Supermicro MBD-X10SL7-F-O with 16G of ECC but it would be nice to find a board with builtin SAS that can run in IT mode. I need all those sweet drive ports and built-in always works.
Did I hear AMD Ryzen 7700 (No X)? Be still my beating heart! That's the chip I have currently in my battlerig. I chose it for a balance of power, temps, and savings. It's not like I can't upgrade down the road.
Part of me thinks it's not really EPYC without all the extra PCIe lanes. But... the higher clocks are nice and I begrudgingly have to admit that Gen5 doesn't need more lanes to do more work.
Mind-blowing how they get an EPYC in that SKU. Also, how far they've developed since the Athlon days. I'm floored, to be honest. The only downside I see is the lack of memory channels on a normal desktop main board. So to have the full benefits, you probably have to buy a workstation board. Thanks, Wendell.
The only issue there is the lack of added PCIe lanes, which so many motherboards screw the pooch on.
1× x16 is common for obvious reasons.
But having a bunch of useless x1 slots is irritating, same as the constant failure to use open-ended slots.
Many smaller servers would be better served with 1× x16 and 1× x8 signal-wise, or 1× x16 with division to x8 signal for both, plus a third x8 slot, since so many HBAs and SFP28-class cards need x8 PCIe or better, not x4 slots or smaller.
You brought up something that I think is overlooked but sorely needed: a change in PCIe layout capacities on ATX and mATX motherboards. Not long ago, the first PCIe slot was 16-lane electrical with 16 lane positions. A second PCIe slot was 8-lane electrical with 16 lane positions. A third PCIe slot, and even a fourth, was single-lane only, or maybe 4-lane on some boards.
Today we have 16 lanes on PCIe 1, maybe a 4-lane PCIe 2 if fitted, and maybe a single-lane PCIe as well.
I'm with you; I would rather see PCIe 1 and 2 both being 8-lane with 16 lane positions, with additional single-lane slot(s) as well. The supply of 16 lanes to PCIe 1 is a waste for most PC users.
Firstly, they don't game, or game so infrequently that they don't need a 16-lane GPU; secondly, not all GPUs are even 16 lanes. There are lots of PCIe plug-in boards available and in use today that need to be plugged into the 16-lane slot even when they are 8 lanes, or on some boards 4 lanes, thus requiring the GPU to go or be relocated to a 4-lane chipset slot.
Changing the 16-lane slot to x4x4x4x4 or x8x8 or whatever does not help when the unused lanes cannot be reallocated, as there are no electrical connections to be had on the board. And USB 3.2 Gen2 or USB4 cannot be used as an alternative, as many MBs only have USB2 or 3.2 Gen1.
I don't think that motherboard makers are actually supporting the market well. Many boards are simply clones of another with a different name and "go faster" paint stripes added or removed. They are all obsessed with gaming. Yes, it's big, but many are casual gamers who use a PC for other purposes as well or exclusively. Many of these users have moved to big-brand prebuilt basic boxes or laptops, as they can no longer build a desktop PC that works for them at a reasonable price.
Also, I am not surprised that ASRock came up with a beta BIOS. I have rated them as the best support company for years and have had special beta BIOS private download links from them in the past in response to email enquiries.
12:58 "Now you know the rest of the story" Paul Harvey would be proud.
I just want more PCIe lanes
Or a PLX switch chip in the chipset
This has long been a gap in AMD's Server/Workstation lineup. With Milan the lowest power and cost EPYC was the 7203P at 120W at $338. The market was screaming for something lighter and cheaper. An AM5 part validated for ECC use on servers changes everything. If this had existed previously, I would have bought it instead of my Rocket Lake E-2314 Xeon in my secondary server to my EPYC 7543.
AM5 Epyc doesn't make a ton of sense to me, since these Epycs don't have more PCIe lanes; that's the single biggest upgrade I want.
Proper ECC is a worthwhile upgrade too
I agree, I don't see any reason to use one of these Epycs outside of the entry server level; why not just use a 7x00 series?
@@magfal true
@@magfal The AGESA updates that brought support for AM5 Epyc also patched in unregistered ECC support on their consumer Ryzen counterparts. The only benefit, as Wendell said, is enterprise grade hardware support and generally better validation for better expected uptime
@@magfal Yeah, only for servers though; you don't want ECC for gaming, and the extra PCIe lanes would actually allow me to use cool PCIe cards.
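Since working ECC is the headline feature here, it's worth knowing you can verify it from a running Linux system: the EDAC subsystem typically only registers a memory controller when ECC is actually active, and it keeps error counters. A minimal sketch, assuming an EDAC driver such as amd64_edac has loaded:

```python
#!/usr/bin/env python3
"""Report whether Linux EDAC sees active ECC, plus error counts."""
from pathlib import Path

mc_root = Path("/sys/devices/system/edac/mc")
controllers = sorted(mc_root.glob("mc*")) if mc_root.is_dir() else []

if not controllers:
    raise SystemExit("No EDAC memory controllers found - ECC likely inactive.")

for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()   # corrected (recovered) errors
    ue = (mc / "ue_count").read_text().strip()   # uncorrected errors
    print(f"{mc.name}: corrected={ce}, uncorrected={ue}")
```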
9:43 Wendell puts an Athlon II X3 435 boxed cooler on an 8-core Zen 4. Just gotta love it.
Now let's see an equivalent Raptor Lake CPU under that same cooler ^^
My badass 16TB server running TrueNAS is so happy with its puny little 35W Athlon 3000G with 32 gigs of DDR4-2666 RAM... Really surprises me how fast the data transfer rate is and how stable it has been, knock on wood. Epic video and so much great information, thanks!
That Silverstone case is nice but it's >$240 on amazon when I checked. Whew.
Love the jazz intro btw
The one thing missing from the EPYC 4004 series is low-power 35W SKUs similar to Intel's T SKUs. A 35W CPU would be useful for a server that needs to be constantly active while not using too much power.
Should be possible to run the existing parts at 35W, if those BIOSes allow you to change Precision Boost limits like desktop boards do. However, even official 35W variants of the same parts wouldn't have much lower power draw when idling than the higher-TDP versions; they're just limited in how much power they can draw when boosting clocks under load. That is, unless AMD releases Epyc variants of the APUs. Those generally have notably lower idle power draw than the chiplet-based Ryzens.
A 65W part would just finish quicker and then go to sleep for a longer time than the 35W part.
The power needed to run a 65W or a 35W system for a year is close to the same, if the work fed to them both is the same.
You might want to go with a 7840U / 8840U (Pro) if you can live with the lower lane count and PCIe 4.
Less idle power draw and more work per watt.
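The race-to-idle claim above is easy to check with back-of-envelope arithmetic: if both parts need the same energy to finish the job and share the same idle floor, the daily totals converge. A sketch with made-up but plausible numbers; all three constants are assumptions for illustration:

```python
# Back-of-envelope race-to-idle comparison. Assumes the job costs the same
# energy on either part and that platform idle draw is identical.
JOB_JOULES = 65 * 3600   # a job that keeps a 65W part busy for one hour
IDLE_WATTS = 20          # assumed idle platform draw for both builds
DAY_SECONDS = 24 * 3600

for tdp in (65, 35):
    busy = JOB_JOULES / tdp                               # seconds under load
    joules = JOB_JOULES + IDLE_WATTS * (DAY_SECONDS - busy)
    print(f"{tdp}W part: {joules / 3.6e6:.3f} kWh per day")
```

With these inputs the 65W part lands at about 0.53 kWh/day and the 35W part at about 0.51 kWh/day, which is the "close to each other" result the comment describes.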
I used to run an Opteron 165 for Windows XP once, damn that thing ran hot. Mainly because I couldn't get the desktop CPUs at the time.
It would be great if you reviewed an AM5 EPYC motherboard.
yeah I am wondering too about these pci layouts. Could Wendell use his magic powers with board partners here?
I think his powers extend to "kinda make them wanna talk to me" instead of "kinda make them wanna design boards for me"
Time to retire E3 Xeons???
Wish they would go the other way and give us more PCIe lanes (double), maybe with the next consumer socket.
They could offer more lanes pretty easily by either making a x8 upstream (or gen 5 x4 even) chipset with more downstream lanes, it's just that most people who want more lanes can afford to go Threadripper so they'd just be undercutting themselves unless Intel starts offering more lanes on their side
That's one reason I'm sticking with AM4. Plus, the boards often don't have good support for things like IOMMU, and still lack encrypted RAM, which is useful for keeping VM guests from being able to escape and read another's memory space. Also, unregistered ECC RAM is still woefully expensive. Not to mention the fact that board makers still treat AMD as second class and stick lower quality components, like Realtek NICs, on their boards. Oh well, at least AM4 is more affordable, now.
@@bosstowndynamics5488 Instead of 1× x16 PCIe 5.0 I would be very happy with 2× x16 PCIe 4.0, same bandwidth. Very tempted by Threadripper, but it's always lagging by a year or so behind the consumer chips and I need the fastest chip at the time. Not much benefit for me past 16 cores. This time round I have ended up with 2 PCs to get my work done: one for test Dockers/VMs and the other my main PC.
AMD, you may have just made my dreams come true. Here's hoping there's a mATX board capable of running this thing; HELLA keen to run it in my rig.
Good for reasonable cost, low power NAS appliances with true ECC for SOHO. (Like the Synology DS 723+ and 923+)
I want to run one of these as a file server for my house and to try out remote management.
Now if only ASRock Rack could make a board that was half as expensive, so it would make sense to run the 6-core in a server board. (I do not need the extra performance, and of course cost, of the higher-end chips.)
Awesome video. Love this type of content. If AMD hadn't jumped ship from PGA to LGA, you probably wouldn't have been able to have Epyc on AM5 motherboards. AM5 + Epyc vs Threadripper, maybe we could get some comparisons. Maybe Threadripper on AM5 as well, or maybe AM6 after it's released. How far can AMD go?
I love it! Back in 2015 AMD was almost dead! Fast forward to 2024: they're top dog, and Intel swapped spots with them!
Pretty good timing: they probably worked on it for a while now, and released a solution just as a problem Intel took a long time to create is appearing in the wild.
Seriously thinking about grabbing one of these for my NAS rebuild now, if they've fixed the ECC problems. 6-8 cores would be more than enough, and if the mobo has 10GbE on it, all I'd need is an HBA as far as PCIe cards go.
Don't you want dual NVMe for boot, and dual NVMe for L2ARC? These don't really have the lanes for that.
@@Cynyr I don't need a separate L2ARC/ZIL for my home NAS, from my testing (if it were busier, or hosting VMs or something, that might be different), and mirrored SATA is fine for boot drives.
I have been tempted to upgrade to these since they released. The 12/16 core is what I would go for; do you have any mobo recs that are not server rack boards?
great video, as always! Really stoked to see where this is going for AMD.
Thanks for doing this video. I made a Proxmox cluster for work with 7700s before these were announced; not sure if it's worth the upgrade with my workloads. ECC has been working well for me on the ASUS Pro B650M-CT-CSM, and they were only $150, but no IPMI.
The whole "x8" and "x4" thing instead of "by 8" and "by 4" was like fingernails on a chalkboard to me.
Working ECC support is very important. Thanks for the review/comparison.
Intel is in trouble, hopefully they have something in the pipeline since competition is a great thing!
The slots are indeed illogical, since 8x at gen 5 speed is going to support 95% of the GPU needs.
Supermicro H13SAE seems really interesting. With 128GB of ECC memory and an EPYC 4464P as well as a Nvidia RTX 4000SFF with a custom waterblock it would give a very nice ECO workstation…
I lost it when he said "you can go both ways"
Epyc could step up and take advantage of an epic failure...
Now I want that dual Hygon System to play around too.
I honestly don't get it, what's so compelling about relabeled, locked Ryzen parts that lack most the features that real EPYC has? There's no additional lanes or notable security features that I can see. ECC support and remote management just comes down to the motherboard. This feels like a way for AMD to sell off a bunch of old Zen4 dies at a premium before Zen5 releases and then do the same thing again next year.
x8 x8 x4 Wendell... with AM5? Behold, the Supermicro H13SAE-MF.
Still need a W680 counterpart in this market segment as it still provides more IO, but it's the best on the market right now.
Oh man, thanks for mentioning this one as I hadn't run across it on SM's site. This looks like it will be perfect for my NAS rebuild later this year.
@@nadtz I build my servers out of my old desktop parts, and spend my upgrade money where it really counts... On my desktop PC.
@@TheChadXperience909 And that has what to do with me exactly?
@@nadtz Trying to save you money so you can build a better Desktop, which you'll likely appreciate a lot more.
@@TheChadXperience909 I generally don't use consumer motherboards for my servers. Not sure why you think I need to rebuild my workstation which is fine for my needs at the moment.
0:45 Is that an HP DataVault NAS from the 2000s? - EDIT - Yes it is, lol; they mirrored the image, but that's the X510 model I had in 2009, haha.
Wonder if this works in a Minisforum MS-A1, would be a nice AMD alternative to the MS-01. Unfortunate that the MS-A1 doesn't have PCIe expansion. :(
1:50 Is there no way to use an external secondary case for PCIe components, maybe?
The X870E motherboards are "dual die solutions"
I interpreted that as dual socket 😅
Fingers crossed 🤞
Hey maybe i need this. How do these compare to their non-EPYC brethren?
I hate the fact that AMD has been the better CPU for a few years now but in my country, Intel has a Darth Vader choke hold on retailers as basically 80-90% of the OEM stuff especially laptops are still Intel.
Great vid; one lil issue though: ASRock's website lists the Ethernet ports on the 650D4U as follows: 2 RJ45 (1GbE) via Intel i210, NOT dual 10Gb?
I use both platforms. YouTube is just easier to do searches on.
I know AM4 could support ECC, but that may also be dependent on the motherboard.
If they enabled SEV-SNP support this would be the ultimate development workstation for AMD environments
As I said on STH video about these chip, I really wish AMD would release a host driver for the integrated GPU for ESXi so you can enable 3d acceleration on VMs without passing the entire GPU to the VM with SR-IOV
What is the use case for 3D V-Cache outside of gaming?
There has been tons of testing going back to 5800X3D that all concluded that it was slower than the 5800X due to the slightly lower clock speed.
When equalized for clock speed, the cache provided no performance gain from 3D V-Cache.
AMD's own website on the matter says "GAMING", "GAMING", "GAMING" for the desktop, and the mention of it on servers is "With the addition of AMD 3D V-Cache technology, EPYC processors reach new heights to become the world's highest performing x86 server processors for technical computing."2
The 2 says that it's the fastest with SPEC... without any indicator that the 3D V-Cache is what enables that.
There are other workloads out there that really benefit from more cache; CFD/FEA both come to mind.
It helps a little in a lot of situations, and a lot in some. But an average of 12% is quite good. If you can fit all your code in the cache, you will get a lot higher performance.
From Phoronix article "The Performance Impact Of Genoa-X's 3D V-Cache With The AMD EPYC 9684X" (The results are after testing 130+ different benchmarks. We are not talking games, but actual applications and workloads that you would use on servers and workstations.).
"Across that wide range of diverse Linux workloads, the EPYC 9684X was on average boosted by around 12% when utilizing the 3D V-Cache."
I really wish they'd release a 5950X3D version with both CCDs having 3D cache. It has the level of MT that I need, and for gaming tasks it will last me a while!
I misheard at the start there.
"Hi god remember I reviewed those forbidden servers"
These CPU names are so confusing.
Would love to know more about the R9 7900 motherboard and ECC combo
I wish we could get something like 48 lanes on AM5... 28 is so little that you put a GPU in, a couple of NVMes, and you have just a few lanes left for everything else - like, what's even the point of ATX motherboards... all that extra space for nothing...
Is there a chart somewhere that shows which RAM speeds and types are supported by which motherboards and CPUs?
I think I saw 8,000 MT/s registered ECC RAM for sale, although I think it was $1,000 when I last looked.
AMD has been killing it. I am surprised they are doing this (server CPU support on consumer sockets). Is that the compromise to the amount of effort being put into chip design?
Love you Wendell!
Intel has left the building....
not worth the lack of PCIe lanes.
Where can you buy these processors?
TBH, considering the price of RAM, my old 1600X is not moving for now. It is doing just fine, supports PCIe bifurcation on an X470 mobo, and supports 4 sticks of RAM better.
I wish they would release 24- and 32-core parts for AM5. Don't think that's happening any time soon, though. I am in a stupid niche of wanting more multicore but not caring about memory bandwidth and other extra features. My only options are the 7950X (which I have now), or Threadripper/Epyc, which has way worse price/performance.
Zen 6 will likely see double the core count.
@@jesh879 Yeah, I'm probably getting a 9950X in the meantime. I just can't stomach spending 3500+ on a completely new Threadripper system just to have double the core count (using the 7970X as an example) 🥲
How is the 7950X treating you? I'm a hair trigger away from buying it, but 95°C scares me.
@@nikolaforzane2285 I'm very happy with it. Yes it will go to 95C on an all-core workload, but that's what it's designed to do. I've run it for hundreds of hours like that with no issue. If it really bothers you you can always just set a slightly lower PPT limit in the Ryzen Master software. It's a fantastic chip for multi-threaded workloads.
I've often wondered why there were never plenty of dual-socket Intel or AMD consumer-socket motherboards. I would definitely buy one or two over the Xeons I have in all my servers.
Supermicro H13SAE-MF is your pick for the combo of IO on your wish list... plus it's awesome because it's SuperO
That's what I thought as well. Wendell doesn't seem to cover much Supermicro gear for some reason? For some annoying reason it omits overclocking support. I like server/workstation motherboards in my builds, as I keep them for a long time.
@@ky5666 Conspicuous in its absence. Wendell does have an agreement with ASRock, so perhaps that's the issue!
Now I want DDR5 ECC to get cheaper so that consumer systems can benefit from the extra stability.
I wouldn't hold my breath.
Wow seems like I can reasonably gift my brother an Epyc powered computer
The GNOME desktop in the background.
What? Never heard of it, never saw that one! Would that mean an even cheaper-to-buy version?
At the moment I am thinking about building my own rig, hmmm...
RISC would have a place today if Harvard had been less restrictive in its proprietary licensing and architecture so 40 years later.....
You seem to have forgotten that Intel's spectacularly failed Itanium was a RISC architecture.
@@andersjjensen Itanium could not support 32-bit natively; this was an Intel failure.
I hope DDR6 goes back same layout for ECC as non-ECC
Are you thinking of registered vs unregistered instead?
Hopefully DDR6 will just have ECC only, but still come both with and without the extra buffer.
I want chipset-less ryzen
Epyc or Ryzen for Adobe Premiere? I'm interested in seeing Puget benchmarks.
Gosh darnit. I have my Unraid NAS on Intel. Do they do Hardware AV1 yet?
*_W E N D E L L !!!_* . . . I have a question: what is better for internet browsing and 2k/4k streaming , a higher boost clock, or a higher base clock?
The correct hardware encoder for the streaming part is more important than raw CPU power, as far as I know.
@@larsjrgensen5975 . . are you Wendell? I was certain to specifically ask Wendell. I didn't ask about encoding, I asked about CPU clock speeds.
@@RANDOMNATION907 Sorry for trying to help.
Those prices are NOT MSRP, they are per 1,000 units.
I'm eyeballing the 4564P... These do have an additional 4 PCIe 5.0 lanes (28) compared to the 7x00X?