7:02 A lot of workloads that are more demanding than Cinebench? Could you name a few? In general I thought Cinebench was great for simulating an all-core workload, since it's able to peg all cores at 100% and keep them there, thus making the CPU run about as hot as it can get. But if you know workloads that are far more demanding, I'd love to hear about those.
Just to give you some sense, we have seen dual socket servers use >300W more running things like GROMACS and LS-Dyna versus Cinebench R23. It is easy to run and it looks nice so Cinebench is very popular.
@@ServeTheHomeVideo Ok, but how is GROMACS more demanding? From what I can tell, GROMACS will not produce a higher thermal load than Cinebench. At least not on the CPU, which is what we're talking about here with this ASRock Rack system.
I mean, it uses AVX-512. Especially on a Cascade Lake-era Xeon, that is a huge amount of additional power per CPU, and it increases thermal load. Even Cinebench R23-like applications, e.g. the Linux c-ray benchmark, ran fine on things like the original ThunderX (edit: TX1, not TX2) 1P server, but running HPC workloads would cause that system to overheat and shut down.
Having built several hundred 1U servers using a lot of chassis from different manufacturers I'd say there's something like a 90% chance that chassis was bought from AIC. That's not criticism just an observation. The design details look very familiar though it doesn't exactly match any product from AIC I can remember using. But they do a lot of customization for larger customers.
Only issue I see with using an AM5 CPU is the limited PCIe lanes; mine runs into an issue and starts trying to reduce the lanes to x8 or x4 if too many slots are in use...
10:20 There are passive PCIe-to-M.2 boards to put into that slot. This should be the default option on any motherboard, as doing it the other way around is not easy.
5:15 What I wish is that servers would use USB-C instead of DisplayPort. I CANNOT WAIT for the day that we have USB-C KVMs that provide an isolated management network, support L3 on the internal switch on higher-end models, and include a 1200W PSU to supply power to low-powered devices like laptops and mini desktops. Do you know how much easier my job of imaging labs of computers and laptops would be if I only needed one cable for KVM, network, and power?
Always love these Ryzen AsRock Rack boards. Looking forward to what they do with the ITX formfactor this generation :D As an aside, looks like the X570D4I-2T is finally back instock on Newegg. Might pick up one before they go out of stock again for 4 months
Ever since they removed the LSI RAID Controller from their rack series, I won't touch them. I still have their Xeon D-1541 based board with LSI SAS and dual 10GBit networking. Makes for a fast file server, but VM CPU power is very much lacking.
The IMC on many AMD consumer CPUs have ECC memory support as far back as the Athlon 64 even, it's usually mainboard manufacturers crippling the chip's features.
It's cute, but I hate the fact that the 7900 and the RAM both said made in china or product of china. Until that gets corrected, I will buy older CPUs in AM4.
We have also been looking into this server, one with a Ryzen 5600 for the office with Windows Server and Asterisk virtualized, and one with the 7900 for development. The only limitation this server has, in my opinion, is the lack of SAS and more 2.5" bays at the front. Ideally with e.g. ZFS RAID-Z1 you want an odd number of disks, so 3, 5, etc. So for the office version the system is limited to 3x SATA RAID-Z1 with a max throughput of 1GB/s. For a development server you might want something faster with more redundancy, like RAID-Z2 with 6 disks for a total throughput of 2GB/s. So the big question is: Can the front panel be swapped for an 8-12 bay SATA or SAS 2.5" panel, connected to a SATA or SAS controller in the PCIe x16 slot? 8x SAS is useless with dual 10Gb Ethernet, so SATA will do fine.
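The throughput figures in this comment follow a simple rule of thumb: RAID-Z sequential throughput scales roughly with the number of data disks (total minus parity). A minimal sketch, assuming ~500 MB/s per SATA drive (an illustrative figure, not a measurement):

```python
# Rough sequential-throughput estimate for a ZFS RAID-Z vdev:
# (data disks = total minus parity) times per-disk speed.
# The 500 MB/s per-disk figure is an assumed SATA SSD number.
def raidz_throughput_mbps(disks: int, parity: int, per_disk_mbps: int = 500) -> int:
    return (disks - parity) * per_disk_mbps

print(raidz_throughput_mbps(3, 1))  # 3-wide RAID-Z1 -> 1000 MB/s (~1 GB/s)
print(raidz_throughput_mbps(6, 2))  # 6-wide RAID-Z2 -> 2000 MB/s (~2 GB/s)
```

Real-world numbers vary with record size and fragmentation, but the estimate matches the 1GB/s and 2GB/s figures above.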
Seems like a great little node for low density, low power, and high per-core performance. How loud are the fans? If they're not too loud to be shoved in a broom closet I think this could be a great small business all-in-one solution.
That's my complaint. Go to 2U, keep the depth short, and quiet the fans if you can. Otherwise great, especially if the Broadcom NIC is supported in VMware directly.
I didn't hear a price for the unit (before adding the hardware you need to add yourself). I need two cheap servers for a home hypervisor setup. This might or might not do the trick.
@@ServeTheHomeVideo Sorry, indeed there is. Seems I neatly tuned out the first 15 seconds of the Pricing and Cost part, because I remember you saying things will add up eventually with the final build... didn't hear the more important part.
Thank you for the video. Can you make a video about the differences between this setup and EPYC — the pros and cons of each, and the applications where you wouldn't recommend Ryzen and it's better to use EPYC?
I've had all sorts of issues every time I've bought ASRock for my desktop CPUs, from 2014 up to even the latest X570. I'm at the point where I'm about to say "never again"... but they do some crazy and fun stuff. I have a 3950X and a 5900X that I don't plan to get rid of, so maybe a system like this is the way forward.
The X670 chipset was split into two dies for high-end motherboards specifically so it can be passively cooled. They probably used the B650 because they didn't want to pay for PCIe Gen 5; too expensive. X3D chips run on even less power. The 12- and 16-core parts don't make a lot of sense for normal consumers, as the cache is only on one die and that can cause problems with gaming.
Ok, first off, X670 doesn't require active cooling for the chipset. Remember, X670 is just two B650s. The reason ASRock went with just a B650 is that on a 1U server with limited space and limited connectivity, there is no need for an X670. B650 has all the same features as X670, just cut down due to the single-chip design. The CPU provides both the primary (x16) and the secondary (x4) PCIe slots on that board, and the x1 slot at the bottom runs off the B650 chipset. The primary M.2 comes off the CPU, and up to two additional M.2s can be provided off the B650. Since you only need 4 SATA ports for the 4 hot-swap drives, B650 has enough SATA ports. X670 on a setup like this is total overkill, and all the additional SATA and PCIe lanes the second chipset provides would go unused; hence a waste of space and money.
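As a sanity check on this lane accounting: an AM5 Ryzen CPU exposes 28 PCIe lanes in total, which lines up with the board layout the comment describes. A sketch, where the slot assignments are taken from the comment rather than a datasheet:

```python
# AM5 CPU lane budget for the layout described in the comment above.
# Slot assignments are the commenter's; only the 28-lane total is the
# published AM5 figure.
cpu_lanes = {
    "primary x16 slot": 16,
    "secondary x4 slot": 4,
    "primary M.2 (CPU-attached)": 4,
    "B650 chipset uplink": 4,
}
print(sum(cpu_lanes.values()))  # 28 -> the CPU's full budget is spoken for
```

With the CPU's budget fully allocated, a second chipset die would have nothing extra to fan out, which is the "X670 is overkill here" argument in numbers.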
The B650 chipset is the same silicon as the X670; it's just that the X670 is two B650s, either in a kind of daisy-chain config or both direct to the CPU, I'm not sure. So heat should not be the reason. Well, it might be, but it's unlikely.
The 2nd chipset (Promontory 21) is connected via 4 lanes daisy-chained from the first; it's a way of maintaining signal integrity and spacing out the controllers with fewer parts on the board. There's a great article on Angstronomics about the design and how it's intended to reduce costs. One example is how the PCIe 5.0 x4 NVMe M.2 slot is direct to the CPU but allows minimal-length traces, while the GPU slot can be lower-cost PCIe 4.0 x16. Unfortunately, the motherboard manufacturers have tried to recoup their investment in year one, so the development costs are front-loaded rather than paid off over an expected 3-year life cycle, and the mobos have had massive early-adoption taxes.
So a NAS + Plex server (transcode with the AMD iGPU and ECC for ZFS) would be nice with this, but maybe in the form of an ATX or E-ATX mobo in a standard tower with at least 8 bays and quieter fans. Would be nice if someone made that.
@@MsSgent In this video they were using a 7900 and UDIMM ECC memory. I think he mentioned there was a video on UDIMM vs RDIMM ECC. Not sure which one it is, but I'm assuming UDIMM ECC still protects you.
Sweet, I like it. What idle power and fan noise does it have? I do agree that there should be no slash in a product name. I don't mind plastic air ducts tho; they work fine. A second M.2 would be nice. I hope the price is right; then I'm definitely going to grab one. I own another ASRock Rack Ryzen platform (in a standard Fractal chassis) and love it. I also had an ASRock workstation and another server based on ASRock (non-Rack) in the past, and I liked them all. For my current workstation I needed to switch from ASRock to MSI, but the next could be ASRock again, as I don't like the MSI one.
@@Mr.Leeroy It's called an off-by-one error :D /jk The utility is called s-tui. There was a reply and I still have the mail... so not sure what happened...
ASRock has had a lot of issues with these. We've had customers deploy these and the old version, with issues. Gigabyte also has some of these, but they are limited to 1x M.2.
@@ServeTheHomeVideo Yeah, we found their last gen had a lot of issues with RAM and boot failures. But we are talking with Gigabyte about options as well! It's great to see, as the performance on the Ryzens is quite amazing. As said in the video, now that it's finished :) 2x M.2 would be ideal.
It would be interesting to use that as a SAS controller for a whole bunch of disk shelves. Pity, though, that it only has 1 PCIe 4.0 x16 slot, so you can't put a SAS controller AND a high-speed networking card in the system.
"Cost optimized platform" is a strange lesson. Server farms that are actually cost-optimized will build their own servers and not even bother rack-mounting them, e.g. Hetzner.
I wouldn't mind the SATA. My current micro server has 4x 12TB IronWolf Pro NAS drives tied to a 1GB LSI Dell RAID card in RAID 10; that's enough to easily saturate 1Gbps, but it's on a 10Gbps link to the switch, so it's a great file server, among other things. About 1.1GB/s reads and 600MB/s writes, and the 4K isn't horrid. It's fast as an SMB share, I can say that!
Where is all the RAM? Why does it only have one socket? Wouldn't Epyc make more sense? I colo my servers for shared hosting and gaming, mostly gaming. All the info you give overloads me, so I'm getting curious, because I know people who actually send their PC boxes to CBT or Steadfast for colo instead of a typical 1-2U. They need what you are showing.
Will the ASRock Rack X470D4U ever get a UEFI with the security fixes in AGESA 1208? AGESA 1200 in UEFI 4.20 is "a little" outdated. The beta versions have not been updated in over half a year.
Now, does it half-ass the serial-over-lan support like the AM4 board, the X570D4U-2L2T? I ended up ditching that board and sticking it on my shelf because it ran hot and the firmware was garbage.
Video: "120W is little energy for a device that will always be on and so should use little power." Me: tuning my server to run under 2W average with everything still working fine and fast enough, only to eventually notice the reason I couldn't get lower was the PSU idling at 2W even with no load. I really hope servers soon start using extra chips/micro-CPUs, or CPUs get an extra ultra-low-power core, so they can run low power under low load without needing to auto-undervolt and underclock and just hope it works. After all, for basic things you need almost no processing power, so 1 to 2W is easily doable.
If you need less processing power, then there are a lot of better options than this one that hits 120W max. For example, the Project TinyMiniMicro nodes out of the box like the one we reviewed today are ~4W at idle for a Core i7 CPU to ~60-65W max. For servers, the ASPEED AST2600 Arm-based BMC uses 4-5W alone, not including the rest of the server making a natural floor ~10W for any modern server with a BMC. Add in the 10Gbase-T ports that also will use several watts when connected, a half watt or so each for the 1GbE ports, and the actual Ryzen part of the system is not using that much at idle.
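The floor described in this reply can be tallied up directly; a small sketch using the approximate per-component figures quoted above (all rough estimates, not measurements):

```python
# Approximate idle-power floor for a server like this, before the CPU
# itself draws anything. All figures are rough numbers from the reply above.
idle_floor_watts = {
    "ASPEED AST2600 BMC": 4.5,        # quoted as 4-5W
    "2x 10Gbase-T, linked": 2 * 3.0,  # "several watts" each when connected
    "2x 1GbE": 2 * 0.5,               # about half a watt each
}
print(sum(idle_floor_watts.values()))  # ~11.5W -> a natural ~10W+ floor
```

So even a perfectly idle Ryzen can't pull the platform much below roughly 10W; the BMC and NICs set the floor.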
A great option for a Steam/Epic cache server, NAS, or media server. The one non-accessible 2.5" for boot, and 4 hot-swap for storage. CMR will do, as once it's written, reading is good... And a DVD bay for ripping movie DVDs.
2:46 Potential use: there are slimline BDXL M-Disc burners out there. While 100/128GB a disc is pathetic by today's standards, the media is pretty much indestructible.
Update 2023-09-03: ASRock Rack reached out to us to let us know that the 400W 80Plus Gold indeed is correct. The Platinum on the spec page was left over from the preliminary specs, but the Gold is the correct PSU.
Serial and VGA - Bleh this just makes me puke(especially for desktop/workstation MBs). I know I know, DataCenter and stuff, but please are we still in 1991?
Serial and VGA are still standard in the enterprise space. What harm does it do being there? Nobody's forcing you to use it; you can shove whatever GPU you want in there and use its video output, and for data transfer you can just use Universal Serial Bus (y'know, USB, that serial standard we all still use).
I really don't understand why people feel the need to complain about compatibility, let alone get it wrong by claiming that we should be ditching serial when USB itself is a type of serial connection.
@@oggilein1 I know, I work in IT, and as such I don't use a GPU (RGB and GPUs are for kids). Oh, and BTW, all data center carts come with USB mice and keyboards, and all the monitors are HDMI-capable, and I mean ALL of them. What harm? It makes me sick. I can't get past the PS/2 when I am shopping for MBs or workstations.
@@oggilein1 If I wanted 20-30 years old technology I would go to the museum ... or get USB dongle.
@@Tempo_Gigante You're correct. I'm in favor of sunsetting legacy ports when they have outlived their utility. We don't need to give any vendor an incentive to push product with 30-year-old I/O.
praise AsrockRack for always delivering these kinds of products.
Yes. This is so cool.
If ASRock Rack would just put more effort into their firmware quality.
Both sides of ASRock!!!
@@ServeTheHomeVideo why not use eco mode for the 7950
Always love ASRock for making this kind of “funky” half server half consumer products. They’re perfect for home labs!
or ws and small buisness
A homelab without RGB lighting is not a homelab ;)
@@javiej anything can be a homelab. We don't do gatekeeping here!
Sometimes painfully hard to find, but ya, if you can find one (server or mobo) they are pretty great. Have had amazing luck with my X570 equivalent to this. I mean, I'd love just a few more PCIe lanes, but... lol
overkill for homelab, no?
We have the previous one with a Ryzen 5 Pro 4650G, 128GB ECC RAM, 1x 16TB HDD, and 3x 2TB SSDs. About 30 watts in production. The HTML5 IPMI is fantastic; installed Proxmox in no time without using USB or a display.
Which Memory modules are you using? When I bought the previous version, the 32GB modules on the QVL were not available.
@@ingarnt Kingston Technology KSM26ED8/32HC 32 GB DDR4 2666 MHz ECC
Well, 2U was too loud for my homelab, so 1U is out of the question. But I like the direction ASRock is taking these systems. The motherboard would make for a great homelab server platform. The choice of putting the PCIe Gen 5 on the M.2 is unexpected, though. Not sure whether to like or dislike it.
The motherboard is available separately, so you can build a quiet 4u server if you fancied it.
I have been frothing at the mouth to add AM5 with ECC to my homelab. I'm going to toss it into a 4u case
Form factor doesn't necessarily determine fan noise.. Power density does
I've built a few homelab AMD systems; power efficiency is good, so you can basically get them to be quiet. I've got a temperature-controlled fan for external venting, and the spot is now outside the living area, so it's less of an issue now. That said, I really don't like fans blasting away, so I'd love to get the board itself and maybe just stick it in a compact tower case.
There's another version of this board (the B650D4U) which swaps the 10G NIC for another M.2 slot (albeit at PCIe Gen 4 speeds). I think I would prefer that version, as I could bring my own SFP+-based NIC.
Great, that config is totally preferred! I would defs put in a >25G card anyway.
You can always put a PCIe to dual- or quad-M.2 NVMe adapter in it.
One 10Gbps is nice to have, but I would be putting in a 25Gbps Broadcom NIC anyway. So yes, a second M.2 for mirroring would be great.
@@movax20h This should have a 25Gb NIC onboard; we need this to be the new trend.
ASRock Rack! The mad lads that made ITX LGA 2011 server boards are at it again.
Later this month on STH, GENOAD8UD-2T/X550. Not sure if we are going to do a video on that one.
The ITX Epyc and 3647 boards too, lol
@@ServeTheHomeVideo If you do, I will ABSOLUTELY watch it!
Thanks for the hint about power envelope limitations in co-location centres. It will help a lot of folks out there.
I thought there wasn’t gonna be the standard intro at first. You got me!
15:27 "Avoid names with slashes, stick with dashes" — this may seem like a very arbitrary thing to complain about. My background is in web dev, and I think you're right. This is not something that should matter, but it does. It just shows... you know what, everyone needs to make their own mistakes before they learn anything ;)
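From that web-dev angle, the slash problem is easy to demonstrate: "/" is the path separator in both filesystems and URLs, so a product name containing one silently splits into two path segments. A small sketch (the product name here is illustrative, not the real SKU):

```python
from pathlib import Path
import re

product = "EXAMPLE-B650/2L2T"  # illustrative name containing a slash

# Used as a single path component, the slash splits the name in two:
p = Path("/srv/specs") / product
print(p.parts[-2:])  # ('EXAMPLE-B650', '2L2T') -- no longer one name

# A dash-only slug survives as one component:
slug = re.sub(r"[/\\]", "-", product)
print((Path("/srv/specs") / slug).name)  # 'EXAMPLE-B650-2L2T'
```

The same splitting happens in URL routes, which is why product pages and download links for slash-named SKUs tend to need special-case escaping.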
Must be a darn good product when the "key" lessons learned include what type of plastic thingy does the airflow and the choice of special characters in the product name! I do hope we get a bit more depth in key lessons learned in the future.
Wow
AMD keeps up their tradition from the year 2000.
AMD has always supported ECC on consumer-grade north bridges (and in the CPU since 2003).
I was using ECC RAM with the AMD761 north bridge, Athlon 64, Athlon 64 X2, Phenom II, and FX.
I won't even mention Opteron.
I really liked the days of DDR3 and DDR4, where you could troubleshoot a server using UDIMMs or RDIMMs because they were pin-compatible. But I do understand this is a rare use case, as most people are going to have proper R/U spares available to test with.
Totally agree with your comment about the name, especially the "/"! 👍
14:40 Just a thought: ASRock could have provided a 3D model file to allow users to 3D print a fan shroud.
There are CAD files on the site once the product is fully released, which will help a little if you print your own.
Not hard to model in a few minutes. Somebody will upload one to Thingiverse if needed.
I would guess that most people don't have printers that can consistently handle materials that tolerate this kind of heat.
@@IggyJackson ABS plastic is good enough; it requires a little more effort compared to PLA, but it's not impossible with a little improvised enclosure.
It's telling how Patrick's most emotional takedown is, in order of intensity: (1) the flimsy airflow guides (which he kinda grudgingly accepts), and (2) THE SLASH IN THE NAME OH GOSH THE HORROR 🤣
Joking aside, this is a VERY interesting product! If they manufacture a tower version I can see myself purchasing one for a home server!
Love these style servers, nearly no use for them in my 9-5 but love them all the same.
Honestly, I'd say that's the kind of low-cost server you should go for, dealing with redundancy and HA on the software side using K8s. I'm super happy running an ASRock Rack X570D4U-2L2T here 24/7, for about 2 years now I think, which I believe is kind of the predecessor. Although at a price around 1200, I would rather get the board only (maybe around 400?) and search for good, used 1U cases that fit...
As always with ASRock Rack, the fun part will be finding one! ;) (At least that's been the case for the last 2-3 years.)
These videos are just so well made - very professional indeed. If I was a big tech company, I know whom I would want to hire as head of marketing.
This thing is so cool. I am in an IT apprenticeship in Germany right now and am working on a project where a server like this would be perfect. If only I could already buy one!
I'd love it if you'd do a review/take a look at the Asrock Rack 4U18N-B550/2T, 18 AM4 nodes in 4U chassis. It's bonkers, but in a good way! Would make for an entertaining video.
Maybe if they make an AM5 version. Usually it is harder for us to get older-gen gear.
25k for a single point of failure?
@@magog6852 25k is peanuts in the space this is intended for. The probability of the backplane and chassis controller failing is pretty low... and could be replaced separately if necessary.
Oh my God! I was waiting forever for some amazing AMD greatness. Thank you; this is too GREAT, so I officially subscribed because of the wonderful topics. Thanks also for those blue categories separating the video sections on screen, rather than having to use YouTube to key-frame the timeline! Although, which operating systems are we talking about? Mac Xserve, Microsoft Windows 10 Pro with MxGPU, or even an ATI Radeon or AMD Instinct low-profile card? Hint: absolutely great, even with the blue intelligence in the background, neighbors with the EPYC sitting right there.
One day I hope to have a need for one of these. Great video as always, thanks!
Thanks! Have a great weekend
Epic work from Asrock!
Agreed
@3:30 - Is 'Kydex' what you're looking for? The stuff used for probably most knife sheaths (especially custom knives) & pistol holsters.
It may be interesting to see the 7900X (or 7950X for that matter) run in 65W mode vs the 7900; the X SKUs should 'in theory' be better-binned chips than the non-X SKUs, so is there any meaningful performance gain (greater perf per watt) from the X chips? Mind you, it's always been said that EPYC gets the quality-chip priority (since the enterprise space is where their money is really made) over Ryzen since the adoption of MCM architectures across the board (besides mobile, where it still doesn't make sense yet), and we also saw services like Silicon Lottery basically give up on binning Ryzen chips because there's been such minimal variance between them over the last couple of generations (shout out to fantastic yields, lol), so maybe it's still pretty minimal? Regardless, I would like to see this comparison with the 7000 series.
🍌🐒🦾🥳👍cool stuff bruh!! good luck!
I would like to see one a little bigger that can take a full-size GPU. The cost would go up, but it would still be a very powerful server at $3000-$5000 built (depending on what crazy GPU you put in it). 16 cores, a high-end GPU, and fast storage. Downright affordable compared to other solutions.
Got a bunch of Ryzen ASRock Rack servers and they have been awesome so far. Really like the remote management on them and the overall low price.
I'm still using an R7-1700 @ 15W cTDP on an ASRock B450 Pro4 with 2x8GB DDR4-2666 ECC non-registered. Works perfectly with Unraid, ECC working (ASRock seems to be the only firm that activates ECC on consumer boards).
Super little box. The 1700 was my favorite of that generation
I've been looking at ASRock Rack boards for a while. I've been lusting over the Epyc boards for a Truenas build with the 8 core second gen chip. Pcie for days and server stability.
Pay attention to the headers sticking up on the PCIe expansion side. You're going to have trouble getting anything but the smallest single-slot cards to fit in there. They could have easily gone with headers coming out of the side of the board, which would have allowed it to handle longer dual-slot cards, but nope. At least they actually made the fans hot-swap now. I have an older version on the X470 chipset, and those fans are individually wired to the motherboard fan headers and zip-tied together, and ASRock claimed those were "hot swap".
An X670 chipset would just be two B650 dies at separate locations, so I wouldn't expect that to require active cooling.
The 7950X has 105W and 65W ECO modes.
This is really cool! Very similar to what I’m sketching out for a white box build. I would be interested in a 2U or 3U version. I want the extra space for (1) quieter fans, (2) more drives, and (3) bigger graphics card. Agree that redundant NVMe would also be a big plus.
They also sell the motherboard in this server so one can build a larger/ quieter server. I was talking to the team about doing a build video with this in a more of a NAS-style setup.
This would be great for servers that need clock speed rather than core count! Like Minecraft servers and such!
Or Factorio. Big bases would really love some X3D CPUs.
I literally recreated this server motherboard's three variants in Minecraft a while ago, before stumbling upon this video.
I've had gobs of these in X470 and X570, great and super, super stable.
The board I have wanted since AM5 was announced... still waiting for it to reach Australia :(
Yes, I was looking at the x570 version, but this seems like it would tick a lot of my boxes
Hey! Just wanted to let you know that the previous generation had an ASPEED AST2500 built into the motherboard to provide a display output. This was a requirement for the IPMI features.
This has a BMC as well for the VGA and IPMI.
Usually blue and teal USB ports differ: blue is USB 3.0 (5Gb/s) and teal is USB 3.1 Gen 2 (10Gb/s), but in this case they are the same per the site description (USB 3.2 Gen 1, which is 5Gb/s).
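Since the USB-IF's repeated renaming of the same signaling rates confuses everyone, here is a quick reference in code form. This is my own summary of the spec's naming, not anything from ASRock's product page:

```python
# Marketing names -> signaling rate in Gbps.
# Note "Gen 1" under any USB 3.x prefix is still the original 5Gb/s rate.
usb_gbps = {
    "USB 3.0 / 3.1 Gen 1 / 3.2 Gen 1": 5,
    "USB 3.1 Gen 2 / 3.2 Gen 2": 10,
    "USB 3.2 Gen 2x2": 20,
}

print(usb_gbps["USB 3.0 / 3.1 Gen 1 / 3.2 Gen 1"])  # 5 -- so a "Gen 1" port is 5Gb/s, not 10
```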
Also, what are those blue surface-mount components seen at 8:19? I've never seen anything like that.
I never thought AMD had such a crazy powerful 1U server. RESPECT!!
When reviewing these kinds of systems with 4x4cm fans, you have to show the noise level at different loads. This is very important for home lab users.
Home lab folks will buy the motherboard and put it into a quieter enclosure.
I would love one of these in my homelab for something I only want to run a few services on. Since my home is space-constrained, my server rack is in my office. The end result is I can't really use 1U servers because they are way too loud. Other than being 1U, this is a home run.
What is the idle power consumption? I know it will vary depending on the CPU used etc, just want to have a rough Idea
20-35W. There is probably room for tuning since the fan speed is high. Also, this pre-production system uses an 80Plus Gold PSU.
@@ServeTheHomeVideo Awesome, thanks!
7:02 A lot of workloads are more demanding than Cinebench? Could you name a few? In general I thought Cinebench was great for simulating an all-core workload since it's able to peg all cores at 100% and keep them there, thus making the CPU run about as hot as it can get. But if you know a lot of workloads that are far more demanding, I'd love to hear about those.
Just to give you some sense, we have seen dual socket servers use >300W more running things like GROMACS and LS-Dyna versus Cinebench R23. It is easy to run and it looks nice so Cinebench is very popular.
@@ServeTheHomeVideo Ok, but how is GROMACS more demanding? From what I can tell GROMACS will not produce a higher thermal load than Cinebench. At least not on the CPU, which is what we're talking about here with this Asrockrack system.
I mean, it uses AVX-512. Especially on a Cascade Lake era Xeon, that is a huge amount of additional power per CPU. It increases thermal load. Even Cinebench R23-like applications, e.g. the Linux c-ray benchmark, ran fine on things like the original ThunderX (edit: TX1, not TX2) 1P server, but running HPC workloads would cause that system to overheat and shut down.
just cool tbh. finally an OEM taking our typical DIY home server tradition and making a prebuilt.
Thanks for the review, I'm just bummed I can't buy it yet as I need to upgrade my 7 year old Asus RX100 home server.
Cool! It’s like a new cobalt raq!
Having built several hundred 1U servers using a lot of chassis from different manufacturers I'd say there's something like a 90% chance that chassis was bought from AIC. That's not criticism just an observation. The design details look very familiar though it doesn't exactly match any product from AIC I can remember using. But they do a lot of customization for larger customers.
Only issue I see with using an AM5 CPU is the limited PCIe lanes; mine runs into an issue and starts reducing the lanes to x8 or x4 if too many slots are in use...
10:20 There are passive PCIe-to-M.2 adapter boards to put into that slot. This should be the default option on any motherboard, as going the other way (M.2 to PCIe) is not easy.
This is great! Hope they launch a workstation MoBo this year.
5:15 What i wish is that instead of Display Port servers would use USB-C
I
CANNOT WAIT
for the day that we have USB-C KVMs that provide an isolated management network, with higher-end KVMs supporting L3 on the internal switch and including a 1200W PSU to supply power to low-powered devices like laptops and mini desktops.
Do you know how much easier my job of imaging labs of computers and laptops would be if I only needed one cable for KVM, network, and power?
Always love these Ryzen AsRock Rack boards. Looking forward to what they do with the ITX formfactor this generation :D
As an aside, it looks like the X570D4I-2T is finally back in stock on Newegg. Might pick one up before they go out of stock again for 4 months.
12:54 What software are you using to display all the temps and frequency? That looks really handy!
s-tui
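For anyone else looking, a minimal sketch of getting s-tui running, assuming a Linux box with pip available (distro package names may differ):

```shell
# s-tui (the Stress Terminal UI) is on PyPI; 'stress' is an optional
# companion package s-tui can drive for load testing
pip install s-tui
s-tui    # launches the monitoring UI; toggle stress mode from within it
```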
Ever since they removed the LSI RAID Controller from their rack series, I won't touch them. I still have their Xeon D-1541 based board with LSI SAS and dual 10GBit networking. Makes for a fast file server, but VM CPU power is very much lacking.
The IMC on many AMD consumer CPUs has had ECC memory support as far back as the Athlon 64, even; it's usually mainboard manufacturers crippling the chip's features.
It's cute, but I hate the fact that the 7900 and the RAM both said made in china or product of china. Until that gets corrected, I will buy older CPUs in AM4.
You will enjoy this one then. ua-cam.com/video/uAF5prb9Hh0/v-deo.html
I wonder if this supports the “eco mode” feature. The 7950x loses very little by cutting the tdp and runs much cooler.
$1.2k is certainly a little odd for what's basically one of Asrock's own budget boards in an off the shelf chassis.
We have also been looking into this server: one with a Ryzen 5600 for the office, with Windows Server and Asterisk virtualized, and one with the 7900 for development. The only limitation this server has, in my opinion, is the lack of SAS and of more 2.5" bays at the front. Ideally with e.g. ZFS RAID-Z1 you want an odd number of disks, so 3, 5, etc. So the office version is limited to a 3x SATA RAID-Z1 with a max throughput of 1GB/s. For a development server you might want something faster with more redundancy, like RAID-Z2 with 6 disks for a total throughput of 2GB/s.
So the big question is: Can the front panel be swapped for an 8-12 SATA or SAS 2.5" panel, connected to a SATA or SAS controller in the PCIe x16 slot? 8x SAS is useless with dual 10Gb Ethernet so SATA will do fine.
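The throughput figures above are straightforward back-of-envelope arithmetic; a sketch, assuming roughly 550 MB/s per SATA SSD (my assumption, not a measured figure):

```python
def raidz_ceiling_mb_s(disks: int, parity: int, per_disk_mb_s: int = 550) -> int:
    """Rough streaming-throughput ceiling for a single RAID-Z vdev:
    only the data disks contribute bandwidth; parity disks are overhead."""
    data_disks = disks - parity
    return data_disks * per_disk_mb_s

print(raidz_ceiling_mb_s(3, 1))  # 3-disk RAID-Z1 -> 1100 MB/s (~1 GB/s)
print(raidz_ceiling_mb_s(6, 2))  # 6-disk RAID-Z2 -> 2200 MB/s (~2 GB/s)
```

Real-world numbers will land below these ceilings (record size, compression, and controller limits all matter), but the ratios match the comment's 1GB/s vs 2GB/s estimate.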
I'm a fan of the standard D4U's without 10g. Add a mellanox nic and one x8 slot left over for dpu, gpu, etc.
Seems like a great little node for low density, low power, and high per-core performance. How loud are the fans? If they're not too loud to be shoved in a broom closet I think this could be a great small business all-in-one solution.
1U fans are not quiet. These are running pretty hard to keep the CPU at relatively low temps.
That's my complaint. Go to 2U, keep the depth short, and quiet the fans if you can. Otherwise great, especially if the Broadcom NIC is supported in VMware directly.
I want this in 2u or 3u with quieter fans for homelab. Would make a killer home Proxmox host without performance, storage or connectivity compromises.
You can buy the motherboard and put it in a custom 2U or 3U. I have a previous ASRock Rack Ryzen mobo in a 2U case, with front ports, and love it.
I didn't hear a price for the unit (before adding the hardware you need to add yourself). I need two cheap servers for a home hypervisor setup. This might or might not do the trick.
Not sure what to do. Even have a "Pricing and Cost" chapter marker.
@@ServeTheHomeVideo Sorry, indeed there is. Seems I neatly tuned out the first 15 seconds of the Pricing and Cost part because I remember you saying things will add up eventually with the final build.. didn't hear the more important part.
Thank you for the video. Can you make a video about the differences between this setup and EPYC: the pros and cons of each, and for what applications you wouldn't recommend Ryzen but would rather use EPYC?
Good idea
I watched this video last night, right up to the closing "Have an awesome day", and my whole night was an awesome day. Let me sleep in the morning.
I've had all sorts of issues every time I've bought ASRock for my desktop CPUs... from 2014... even the latest X570 one. I'm at the point where I'm about to say "never again"... but they do some crazy and fun stuff. I have a 3950X and a 5900X that I don't plan to get rid of, so maybe a system like this is the way forward.
The Extreme chipset was split into two for high-end motherboards specifically so it could be passively cooled. They probably used the B650 because they didn't want PCIe Gen 5: too expensive.
X3D chips run on even less power. The 12- and 16-core parts don't make a lot of sense for normal consumers, as the cache is only on one die and that can cause problems with gaming.
OK, first off, X670 doesn't require active cooling for the chipset. Remember, X670 is just 2x B650. The reason ASRock went with a single B650 is that on a 1U server with limited space and limited connectivity, there is no need for an X670. B650 has all the same features as X670, just cut down due to the single-chip design. The CPU provides both the primary (x16) and the secondary (x4) PCIe lanes on this board, and the x1 slot at the bottom runs off the B650 chipset. The primary M.2 is provided off the CPU, and up to 2 additional M.2s can be provided off the B650. Since you only need 4 SATA ports for the 4 hot-swap drives, B650 has enough SATA ports.
X670 on a setup like this is total overkill, and all the additional SATA and PCIe lanes the second chipset provides would go unused, hence a waste of space and money.
Imagine the Ryzen 9 Pro 7950 supported 3 dimms per channel.
Man, that would be sweet: 384GB of 4800C42 UDIMM ECC in a 1U rack.
5 out of 5 1U4LW-X470 units were returned as defective after a few months of operation. Every motherboard had the same issue: random restarts.
The B650 chipset is the same silicon as the X670; the X670 is just two B650s, either daisy-chained or both direct to the CPU, I'm not sure which.
So heat should not be the reason; well, it might be, but it's unlikely.
The 2nd chipset Prom21 is connected via 4 lanes daisy chained from the first, it's a way of maintaining signal integrity and spacing out the controllers with less parts on the board.
There's a great article on Angstronomics about the design and how it's intended to reduce costs.
One example is how the PCIe 5.0 x4 NVMe M.2 slot is direct to the CPU yet allows minimal trace lengths, while the GPU slot can be lower-cost PCIe 4.0 x16.
Unfortunately, the motherboard manufacturers have tried to recoup their investment in year one, so the development costs are front-loaded rather than paid off over an expected 3-year life cycle; hence the mobos have carried massive early-adoption taxes.
So a NAS + Plex server (you get transcoding with the AMD iGPU and ECC for ZFS) would be nice with this, but maybe in the form of an ATX or E-ATX mobo in a standard tower with at least 8 bays and quieter fans. Would be nice if someone made that.
I am thinking we should review the motherboard in this to build exactly that.
Please do.
Do the 7k Ryzen IGPs have ECC? In the past only the Pro series IGPs could control ECC ram.
@@MsSgent In this video they were using a 7900 and UDIMM ECC memory. I think he mentioned there was a video on UDIMM vs RDIMM ECC. Not sure what it is, but I'm assuming UDIMM still protects you.
yes please
Sweet. I like it.
What idle power and fan noise does it have?
I do agree that there should be no slash in a product name.
I don't mind plastic airducts tho. They work fine.
A second M.2 would be nice.
I hope the price is right; then I'm definitely going to grab one. I own another ASRock Rack Ryzen platform (in a standard Fractal chassis) and love it. I also had an ASRock workstation and another server based on ASRock (non-Rack) in the past, and I liked them all. For my current workstation I needed to switch from ASRock to MSI, but the next could be ASRock again, as I don't like the MSI one.
What utility is @13:00, it really satisfies my stats/graphs addiction :D ?
s-tui dude! just came down here to check if anyone asked, then went on a google search and found it pretty quick
The Stress Terminal UI: s-tui
@@MagicMANX Thanks a lot man :D
@@DmC944 YT being weird, reply count is 2, but I may only see yours.
So, what is it called?
@@Mr.Leeroy Its called off-by-one error :D /jk
The utility is called s-tui, there was a reply and i still have the mail ... so not sure what happened....
ASRock has had a lot of issues on these. We've had customers deploy these and the old version with issues. Gigabyte also has some of these as well, but they are limited to 1x M.2.
Still nice to see more options out there.
Interesting. Will, who does our SSD reviews, loves these and has been deploying the X570 versions.
@@ServeTheHomeVideo Yeah, we found their last gen had a lot of issues with RAM and boot failures. But we are talking with Gigabyte about options as well! It's great to see, as the performance of the Ryzens is quite amazing. As said in the video, now that I've finished it :) 2x M.2 would be ideal.
I used to use xfce on my desktop but I wanted to use wayland and stop having theming issues on GTK3 and QT applications.
It would be interesting to use this as a SAS controller host for a whole bunch of disk shelves.
Pity, though, that it only has one PCIe 4.0 x16 slot, so you can't put a SAS controller AND a high-speed networking card in the system.
Strange lesson, "cost-optimized platform". Server farms that are actually cost-optimized make their own servers and don't even bother rack-mounting them, e.g. Hetzner.
I know ASRock Rack made the motherboards for the Hetzner Ryzen boxes at least a few years ago.
@@ServeTheHomeVideo They use an Asus Pro WS 565-ACE (customized to disable OC) now.
I wouldn't mind the SATA; my current micro server has 4x 12TB IronWolf Pro NAS drives tied to a PCIe LSI/Dell RAID card with 1GB cache in RAID 10. It easily saturates 1Gbps, but it's on a 10Gbps link to the switch, so it's a great file server, among other things. About 1.1GB/s reads and 600MB/s writes, and the 4K performance isn't horrid. It's fast as an SMB share, I can say that!
It'd be super cool if the front panel was a double stack of U.2 drives, with an option for EDSFF.
Where is all the RAM? Why does it only have one socket? Wouldn't Epyc make more sense? I colo my servers for shared hosting and gaming. Mostly gaming. All the info you give overloads me, so I'm getting curious, because I know people that actually send their PC boxes to CBT or Steadfast for colo instead of a typical 1-2U. They need what you are showing.
Now where's the Threadripper part with 7950X single-thread performance and 1TB RAM support?
Will the ASRock Rack X470D4U ever get an UEFI with the security fixes in AGESA 1208? AGESA 1200 in UEFI 4.20 is “a little” outdated. Beta versions have not been updated in over half a year.
Pity they removed the 3 internal drive spaces the X570 model had. I was looking at it for a Proxmox Backup Server.
Now, does it half-ass the serial-over-lan support like the AM4 board, the X570D4U-2L2T? I ended up ditching that board and sticking it on my shelf because it ran hot and the firmware was garbage.
Wait, which Ryzen CPUs support ECC? I thought you had to go with Threadripper or something to get that.
You didn't say what the speed of the networking is; but I've just noticed, typing this, that you did say it's 10GbE in the chapter name.
I wonder if you can get this in a 2U chassis as well... 🤔
What is the absolute idle consumption, without fans?
Can you bifurcate the 16x slot and if so, to what configuration?
Video: "120W is little energy for a device that will always be on and so should use little power."
Me: tuning my server to run under 2W average usage with everything still working fine and fast enough, only to eventually notice that the reason I couldn't get lower was the PSU idling at 2W even with no load.
I really hope servers soon start using extra chips/micro-CPUs, or that CPUs get an extra ultra-low-energy core, so they can draw little power under low load without having to auto-undervolt and underclock and just hope it works. After all, for basic tasks you need almost no processing power, so 1 to 2W is easily doable.
If you need less processing power, then there are a lot of better options than this one that hits 120W max. For example, the Project TinyMiniMicro nodes out of the box like the one we reviewed today are ~4W at idle for a Core i7 CPU to ~60-65W max. For servers, the ASPEED AST2600 Arm-based BMC uses 4-5W alone, not including the rest of the server making a natural floor ~10W for any modern server with a BMC. Add in the 10Gbase-T ports that also will use several watts when connected, a half watt or so each for the 1GbE ports, and the actual Ryzen part of the system is not using that much at idle.
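The per-component figures above add up to that floor; a quick back-of-envelope sketch, where the ~3W per connected 10Gbase-T port is my own assumption (the BMC and 1GbE numbers come from the reply above):

```python
# Rough idle-power floor for a BMC-equipped server, before the CPU,
# RAM, drives, or fans draw anything.
idle_w = {
    "AST2600 BMC": 4.5,            # "4-5W alone"
    "2x 10Gbase-T, linked": 6.0,   # assumption: ~3W per connected port
    "2x 1GbE": 1.0,                # ~0.5W each
}
floor_w = sum(idle_w.values())
print(f"~{floor_w:.1f} W before the Ryzen itself draws anything")  # ~11.5 W
```

That is why "~10W for any modern server with a BMC" is a natural floor regardless of how frugal the CPU is at idle.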
Yeee, love the server stuff. Cheap: 1200 for a motherboard, a case and a PSU...
you should review Supermicro AS-1015A-MT
A very great option for
a Steam/Epic cache server, or a NAS or media server.
The one non-accessible 2.5" bay for boot,
and 4 hot-swap bays for storage; CMR drives will do, as once data is written, read performance is good...
And a DVD bay for ripping movie DVDs.
I'm curious what the noise profile of this server is. Is it good for home use?
2:46 Potential use: there are slimline BDXL M-Disc burners out there. While 100/128GB per disc is pathetic by today's standards, the media is pretty much indestructible.
X670 is 2x B650 chips, so no active cooling is required for it.