Of course Jeff is using all of this as an excuse to get more free hardware to feed to his infinite "we have VDI at home" video series.
You read me like a book
@@CraftComputing looking for a 64 oz insulated tumbler with handle
I'll look into it.
Session-based multi-user > virtualization-based multi-user
How is this channel still under half a million subscribers? Sweet gear and craft beer. What's not to love?
Obviously server hardware is quite niche.
Run Crysis of course. Cannot wait to see what this beast can do.
Crysis is not a good game to test this stuff; it's ultimately CPU-limited by design. It was made for a future of single cores running at 9 GHz rather than multiple cores.
If I remember correctly there is a mod out there that allows CPU rendering of Crysis. I saw Linus test that, I think with an old Threadripper.
Would be cool to try that experiment here
CPU Rendering could be fun :-)
Sure... but how many at once?
@TalpaDK oh I would like to see just one, I really don't think more is feasible... you need all the cores to work for that task...
I think Linus was getting FPS in the 10s with that old Threadripper, but I may be wrong. It would just be cool to see a game like Crysis, regarded by the entire community as a nightmare to run properly on a good PC from a few years ago, being rendered on a CPU of all things.
I just built this as well last month and ran into all the same problems you did. I appreciate you making this video. Now I can send it to my friends to better explain what's going on lol
Going 220v on your server rack is a smart move. Didn't occur to me until you mentioned it.
Currently we have here in the Netherlands 250v for some reason.
Damn. The only time I'd be able to afford this is when this is like 10 years old lol.
The homelabber's lament. :(
I literally just said that out loud when he mentioned the cpu 🤣 "I know what I'm getting in about 10 years"
I think I'll come back and finish this in 2034 when I might be able to afford it... but for now off to something in my price range.
It's alright, things get more powerful and cheaper over time. By the 2030s we're likely gonna have computers orders of magnitude more powerful than anything today, at current midrange prices.
I want to hear the sentence "Through the power of buying three of them" as often as possible.
16:25 I audibly went "holy shit" when you pulled those suckers out.
Nice job Jeff!!!
>Dual VDI setup
>full docker home lab stack: Traefik, Nextcloud, Home Assistant, databases for each, Grafana, Plex, etc.
>ollama with llama3:70B
>another zfs pool made out of a bunch of 12TB sas drives connected to an HBA
Basically what I'm running on my 5950X desktop, without the VDI. Run the databases on a dataset that's on the SSD raidz1 you've got. Have it be a one-unit wonder for a home. Maybe also run PXE off it for a home theater PC, and maybe run a Steam cache on the HDDs as well.
@M1America What gpu are you using?
@@ochinchinsama 6600xt
As much as I'd love to have a Genoa box at home I'll be sticking with Rome until Genoa gets to a price I'm comfortable with second hand. Awesome that you get to play with one though, looking forward to seeing what you do with that monster.
Nice to see those cases still come with a blue-laser power LED!
The 360W TDP also includes the I/O die, and the I/O with all PCIe slots populated can consume 100W on its own.
At 10:45
Input voltage....
In the United States, voltages at the wall are 120 volts and 240 volts. I believe they used to be 110 and 220 50 or 60 years ago, but we upped the game slightly....
Still a joke
Thank you very much for your VDI work Jeff.
I have a couple of VMware ESXi hosts and just bought a DL380 Gen9 server for my first Proxmox experience.
I am planning to buy a 2080 Ti or RTX A5000 for my VDI.
Don't get me wrong, I'm all for the sneaky-deal chop-shop Chinese specials, but I'm happy for you getting to play with a top-end shiny new rig. May all your bugs be squished before your beers go flat.
He plugged his merch store again! Quick, someone complain!
DON'T MAKE ME PIN YOU!
Like, I could complain that he does the drinking gimmick to the point that I nearly didn't click on this and probably won't click on the next one, but I won't ;)
Dude! I can't wait to see what this beast can do!! Serious cool hardware!!!
Nice video. I'll be waiting to see how this machine performs.
Wow this is the next level home lab
the Wall-e and plant holder take the spotlight in the b roll. :D
man they really just give this guy everything
As far as homelab goes with our consumer budget restrictions: a 7950X gets about 40,000 in Cinebench and 100GB/s of RAM throughput. I think it would be more cost effective to just scale out to however many 7950X, RTX 4090, 2x48GB DDR5-6000 CAS 30 nodes you need. There are obvious PCIe limitations, but I think the pros would outweigh the cons.
Butterbot finally has meaning to his life!
If you're worried about USB connections, a Microsoft Internet Pro keyboard has your back for around $20-30. Not only does it have a PS/2 connection and USB, it also has 2 USB ports in the back on the right. Perfect for your mouse or a slow ISO install, and for the handful of times when I had only one working port: all 3 through 1 port. It's pretty durable too; I've had mine around 20 years as a daily driver.
Thumbs up for going 240V :)
Have you researched SR-iov features of SSD's? I would be grateful to hear what you think about using the SSD feature for VDI use cases. Long time viewer. Thanks.
Can't wait until this is a few gens old and I can purchase these on eBay! 😍
That is a super dope build out!
In 2015 I built three X99 boards at about $6,000.00 apiece with all the latest and greatest whistles and bells. My wife thought it was a bit of overkill. Those three builds totaled over $18,000.00. But one AMD CPU at $9,000.00? And BTW, all three X99 boxes are still running flawlessly today in 2024. I put a lot of TLC into those (Linux) boxes, which have served me well. This VDI build is interesting: ultra fast, large memory. Geeks are becoming virtual machine freaks. I prefer a system that, once down, can be easily restored in just a few minutes.
VMs were "the future" in like...2005.
Everything is done on VMs and containers now for a good reason.
Less waste.
Also, one of the biggest reasons to use a VM/container is to be able to move them easily or blow them away and rebuild them with much less effort.
I don't understand how your bare hardware setup could be easier/faster to get running again than having it in a VM/Container would?
@@Prophes0r I have built quite a few VMs for the fun of it on Arch Linux and Gentoo systems: OS/2, OpenBSD, NetBSD, FreeBSD, Windows. It's easy to do. With better hardware, bandwidth, a large number of CPU cores, and the large memory needed especially for a ZFS file system, it's like running real metal. As a server with many clients, I say yes: those VMs can make life easy for those who really need it. You're right about "hardware setup could be easier/faster to get running"; it seems to me setups are easier in a VM than on bare metal hardware. The reasons are obvious: the latest and greatest hardware with many new device drivers isn't there yet. Simply put, VMs are emulators.
The 8004 platform is Siena, not Bergamo as stated… Bergamo is the Zen 4c / high core count SP5 chip in the 9004 series.
You are correct. I misspoke during that section.
@@CraftComputing Unforgivable. Your punishment is to try a beer in every video you make from now on.
It's my burden to bear.
@@CraftComputing ... to beer?
@@CraftComputing Your punishment should be to "Drink 1 litre of Castlemaine XXXX (WARM) live on stream!" I apologise for uttering the name of such foul liquid, but a punishment must be met! The internet has spoken. Hee hee.
OMG WALL-E TO HOLD THE CPU I LOVE THAT
Funny that this video pops into my feed, as all my parts are arriving for my Genoa build. I went Epyc 9354, added a beefy RTX 6000 Ada, and I'm stuffing it inside a Fractal Meshify 2 XL. I didn't do enough research on the motherboard; I probably would have bought this board had this video come out last Saturday. Oh well! Awesome video, and I'll be checking out more videos from your channel.
In an unfortunate turn of events, the MOBO I bought was DOA, so I ordered the same MOBO in the video. Fortunately it was on prime so I didn't have to wait long, and now I finally have a working server.
Tripp-lite (Eaton) makes some "Plug-Lock Inserts" for C14 which I found help with the loose cables on a crowded PDU.
They come in batches of 100, and they seem brittle, but damn are they useful.
Just watch out, the force needed to get them in/out after the lock is installed ain't a joke.
As a casual user still running his X79 platform on Ivy Bridge, this feels like a time lapse to the distant future. Damn, tech has made a leap forward in the last 10 years.
Awesome rig to have at home. Super jealous! Slight nit pick... US is 120v/240v. We switched from 110v/220v like 75 years ago.
Came here to say that.
An All-In-One solution as always. Proxmox + Gaming VMs, NAS, etc.. whole home lab in a single box :)
Hey Jeff! I'm thinking of making an Epyc based workstation/server at home, but going the Milan route just to save a huge amount of money. Rome and Milan parts are far cheaper than they were at launch, and that includes everything: the CPUs themselves, the motherboard options, the RAM. My plan is to get a dual socket board and upgrade it gradually.
The sheer power of a 64 core system would be incredible by itself, but having say... 1TB of memory could allow me to do some pretty crazy things in certain applications.
Slaps case "You can fit so many plex transcodes in this baby"
I think local LLMs will be a large consumer of computer power in the next 4-5 years. Llamafile or Ollama in CPU or GFX mode with a Llama3.1 or Mistral Large 2 performance in tokens/seconds would be a great benchmark.
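For anyone wanting to reproduce a tokens/second number like that, here's a minimal sketch against a local Ollama instance; it assumes Ollama's default endpoint on localhost:11434 and that the model tag (llama3.1:70b here) has already been pulled.

```python
# Rough tokens/sec probe against a local Ollama server (assumed at the default port).
# Model tag and prompt are placeholders -- adjust to whatever you actually pulled.
import json
import urllib.request

payload = {
    "model": "llama3.1:70b",
    "prompt": "Explain PCIe lane bifurcation in two sentences.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# eval_count is generated tokens; eval_duration is reported in nanoseconds.
tokens_per_sec = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"{tokens_per_sec:.1f} tokens/sec")
```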
I love Pit Caribou
Seven x16 slots of PCIe lanes is insane! Jealous? No, you deserve getting this kind of system. Envious? Also no, I sure don't deserve one either. Do I want one? Oh yes, but I can't justify it in a "home lab"! Awesome server Jeff! Congratulations!
You can get used Zen 3 Epycs with 7 16x slots for quite reasonable prices now.
@@LordApophis100 Those are Gen 4 instead of Gen 5, although there’s not much difference today as most devices don’t take advantage of 5.0 😂
@@LtdJorge A ton of stuff doesn't come close to needing Gen 3, let alone later. Just be sure to check the specs, etc.; there are some things that will nerf/gimp themselves if the PCIe isn't the "right gen" even though they don't come close to needing it in terms of bandwidth. Some SSDs for sure, and some other stuff too that I can't remember now.
Looking at replacing my Epyc MOBO with a H12SSL, that has lots of lovely PCIe4 slot goodness!
Sadly while trying to troubleshoot I dropped a CPU into the pin grid of the socket on my H11SSL-i 😢
@@morosis82 I really like the Gigabyte MZ32-AR0 rev 3 because of its OCP 2.0 slot.
You can add decent networking without using a PCI slot. Only drawback is, if you want to use the top x16 slot you'd need to run 4 cables from the connectors below the CPU to the slot, but the other 6 slots are already plenty. Other than that it's the almost perfect Epyc board for me.
And here I am deciding between Epyc 7001 and 7002 (Gen 1 and 2).
4:42 -- PCIe 4.0/5.0 . . . appreciate the commentary on signalling and Phison PS7101 (5:15) redrivers.
More VDI/cloud gaming videos 👍 please. Highly entertaining, educational.
Kindest regards, friends and neighbours.
12:53 -- "One of the main struggles I've had with VDI is the intense load on storage for loading the OS or launching a game." Interesting. Lkg fwd to your storage configs in your testing.
Let's go wild Jeff! This hardware is so WOW that I thought the whole neighborhood could benefit from it. This is growing bigger than a homelab. Let's call it a Neighborhood Lab! Now imagine... just for fun! We build the cult later. :)
With how much 950p's have come down in price, it might be worth picking up a couple for testing, given how loading applications tends to be latency-bound rather than throughput-bound.
What is my purpose?
You hold CPU
Oh no....
I would like to see VDI implementation using the A series cards. I would also like to see some server virtualization at the same time.
Wall-E - immediate like on the video
Perhaps we'll be lucky enough to see... games?
It would be cool to see the performance of a VPP based routing and packet inspection setup using slurm.
I feel like you're doing the merch sponsor spot to spite the guy who whinged about it last time. If so, by all means continue.
After the ESC4000 I've moved onto the 2u 4 node cluster computing
This is basically the setup I want to upgrade to in a few years after the prices have dropped. I would like to see how much you could potentially consolidate your rack. Could you virtualize and run your entire homelab off this single server? You already have a pretty amazing setup, so obviously you aren't going to be able to throw a petabyte of storage into that chassis, but could you do everything else? How much storage can you reasonably get in it? Would homelabbers be better off buying multiple lower-core-count systems and basically clustering them, or would they be better off spending a little more on an older 32-64+ core processor with gobs of PCIe and virtualizing everything? What software would you use, and how would it compare performance-wise to your current setup? Also, what are the power savings, if any, of consolidating like this? Yes, that single chip pulls 274W alone, but if you could replace 3 servers that pull 100W each you are going to be ahead. It would also impact your need for networking, as you could pull switches out of your rack entirely if you didn't need to connect as many devices.
LLM (Ollama), would be interesting to see what that box can do.
I would normally say "why didn't you just use a c19 plug lock insert as intended" but I'm assuming you ran into the exact same problem I ran into....it's virtually impossible to find those inserts sold individually and paying over $60 for 100pc seems exceedingly wasteful lol. I just got a 3D printer though, so I'm sure I can find or draw up a model to fix the issue.
My first thought when I heard Jeffs issue was to grab a hot glue gun and just accept the cable as near permanent 😅
I use APC cables with locking feature on both ends.
Can you please also test the four NVMe SSDs in a four drive "RAID 10" ZFS Pool (two striped mirror vdevs)?
RAID-10 could be interesting as well. I'll definitely give it a look.
Run some full Ethereum nodes! Raw storage required is around 10-15 TB of flash.
Bought 3x 9354 32-core Genoa-based servers semi-recently. Their improvement over the Rome boxes they were replacing was noticeable just in using the virtual machines running on them. You pay for that improvement in power though: maxed out with a power-virus load I was seeing 600-700W per system, and these only have 1 CPU, 12 sticks of RAM, and 2 SATA SSDs each. It is nice and fast though. The old Rome servers were never that power hungry; 32 cores, 8 sticks of RAM, and 2 SSDs would see one use ~300W.
DDR5, 12 channels at 4800MT/s... that's some serious bandwidth. Would that be ~900GB/s? That's close to an Nvidia 3090; running an LLM in RAM might yield good speeds.
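(A quick sanity check on that math, as a minimal sketch assuming 64 bits of data per DDR5 channel: one 12-channel socket lands closer to ~460 GB/s, and the ~900 GB/s figure would take two sockets.)

```python
# Theoretical peak DRAM bandwidth for one 12-channel socket at DDR5-4800.
channels = 12
transfers_per_sec = 4800e6   # 4800 MT/s
bytes_per_transfer = 8       # 64-bit channel

bandwidth = channels * transfers_per_sec * bytes_per_transfer
print(f"{bandwidth / 1e9:.1f} GB/s")   # ~460.8 GB/s per socket
```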
Cooling the "Genoa" CPUs is very difficult because the largest fan that fits on the mainboard has 92mm side length. This small size requires the fans to run very fast and be very loud.
If you do end up running proxmox, a 6 drive, two vdev raidz1 (3 drives per vdev) is a config that performs quite impressively - at least in my use case with older micron 2300 nvme drives.
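For anyone wanting to replicate that layout, here's a minimal sketch; the pool name, device paths, and ashift=12 (a common choice for 4K-sector NVMe) are all placeholders to adjust for your own hardware.

```python
# Sketch of the 6-drive, two-vdev RAIDZ1 layout described above.
# /dev/nvmeXn1 names and the pool name "tank" are placeholders.
import subprocess

drives = [f"/dev/nvme{i}n1" for i in range(6)]
cmd = [
    "zpool", "create", "-o", "ashift=12", "tank",
    "raidz1", *drives[:3],   # first 3-drive vdev
    "raidz1", *drives[3:],   # second 3-drive vdev
]
print(" ".join(cmd))         # review the command first
# subprocess.run(cmd, check=True)   # uncomment to actually create the pool
```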
Jeff bricked his Epyc Quanta motherboard; what's the worst thing that can happen if we give him another Epyc system? ... Thanks for the video!
Really looking forward to the speed testing, if only to drool all over myself wishing I could afford such a thing.
Umm, people suggested Crysis CPU rendering, which I'd definitely be interested in seeing, but also, I can't help but wonder how many copies of Cities: Skylines 2 you can get running before the framerate tanks to unplayable levels. Given how notoriously CPU-heavy that one is, and the fact that city sim games can usually get by with 30 FPS or lower, it'll be interesting to see if the dual GPUs or the Epyc ends up being the ultimate bottleneck.
Killer setup.
Jeff, I struggled a lot to get good performance out of ZFS with NVMe drives. In my case a single drive gets 6 GB/s R/W, but with ZFS the speed drops a lot, to around 2-3 GB/s R/W. I tweaked a lot, things like metadata-only caching, atime, and ashift, and performance rises a little bit, maybe 1 GB/s more R/W speed. Something else I noticed: if the Proxmox host is under heavy load, ZFS performance also drops clearly. In my case I'm also using "Epyc Rome" and enterprise Gen4 U.2 SSDs.
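In the same spirit, here's a hedged sketch of the kind of dataset tweaks mentioned above; the dataset name is a placeholder and whether each property actually helps depends heavily on the workload.

```python
# Apply a few commonly discussed ZFS tunables to a VM dataset.
# "tank/vm" is a placeholder dataset; values are starting points, not gospel.
import subprocess

dataset = "tank/vm"
tunables = {
    "atime": "off",         # skip access-time updates on every read
    "compression": "lz4",   # cheap compression, usually a net win
    "recordsize": "64K",    # smaller records can suit random VM I/O
    "primarycache": "all",  # "metadata" is the metadata-only caching mode mentioned above
}
for prop, value in tunables.items():
    subprocess.run(["zfs", "set", f"{prop}={value}", dataset], check=True)
```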
Well, this is ultra interesting. Thanks to the sponsors, AMD and Corsair. Me personally, I would have loved to see you run all the 3DMarks that you can, benchmarks like Cinebench (all of them), a couple of games (5 or 10), the usual stress tests, plus CrystalDiskMark and some normal copy-paste action, etc. Awesome stuff man.
I like this! I've just got my hands on a Gigabyte ME03-CE1 (SP6) motherboard with an EPYC 8024P (I don't need much performance, but quite some fast I/O). After being absolutely disappointed with ASRockRack W680D4U-2L2T/G5 (I really can't recommend it - janky as sh!t), I am hoping this will work nicely.
Got a SuperMicro ATX tower, a Noctua CPU cooler, and 4x 3.84TB U.2 NVMe drives. Only the RAM is so prohibitively expensive that I am coping with just one (soon 2) 16GB DDR5 ECC RDIMM stick(s).
Still: I'd never ever want a server, that doesn't have a BMC with media redirection.
Also if you are looking to stress tests, try running a LLM - should be fun, especially with the GPUs you've got.
Please share your setup for the software side of things for the cloud gaming VDI setup.
I've tried both Moonlight and Sunshine, and my testing shows that the connection can be VERY spotty at times, even on a hardwired 1 GbE LAN.
Beyond that, there isn't a heck of a lot that you can throw at your Genoa server that would be a "standard" test.
Hey Jeff, I was considering doing the same thing if I ever got a house of my own: a dedicated, grounded 220V circuit using a NEMA 30A plug for an APC battery backup. But I was concerned about power managing that UPS to support 120V devices like a couple of SFP+ switches, a cable modem, a controller, and a 2U/3U custom server. If you do that upgrade, would you consider making a video about it, including any challenges you run into?
Looking for your whiskey stones but a really big one for an old fashioned big rock. Any chance that is coming?!
I don't need this, I have absolutely no use case for this, but I want this SO BAD!!
You have 12-channel support with this chip but opted for 8 channels. Still good to see this content, as the potential and possibilities are immense; it should be a workhorse for you for years. By going to 220 or 240V you should save 3-5% on power, which is not negligible. You need to do some true open source AI on this monster: pair it with a really big array and let it run to digest and learn all your data.
Electrical companies in the US measure usage in kilowatt-hours, so running 120V @ 10 amps or 240V @ 5 amps makes no difference in your power bill, with the wattage being exactly the same. Some applications may run more efficiently on 240V, but it's not 3% and won't help cost in any noticeable way (I found this out the hard way). It will let you keep the amp load down if you're running low, since most homes have 100A service (larger homes 200A), so it is still better than 120V for permanent installs.
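(A tiny sanity check of that point as a sketch: the utility bills watt-hours, so the voltage/amperage split doesn't change the number.)

```python
# Same power, same kWh, regardless of the voltage/amperage split.
for volts, amps in [(120, 10), (240, 5)]:
    watts = volts * amps
    kwh_per_day = watts * 24 / 1000
    print(f"{volts}V @ {amps}A = {watts}W -> {kwh_per_day:.1f} kWh/day")
```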
i want to see this rig chew through blender renders
Maybe too mainstream now, but it could be interesting to see how the two A5000s do training AI models.
AMD is just killing it.
One of the things I was most excited about from AMD was Epyc 4004. Now don't get me wrong, extra validation is great.
But I was disappointed by the lack of a 16-core X3D option, a 32-core Zen4c option, or maybe a 2-NUMA-node 8x X3D + 16x Zen4c part. I was also holding out hope for a redesigned die for the 8700G that gave it 32-48MB of L3 cache instead of 16MB, plus the full 24 lanes of PCIe. The cherry on top of this fantasy would be CXL support for what would equate to a 3rd channel of RAM over 8 lanes of PCIe, but outside of handhelds and small-form-factor gaming-focused systems, that would basically be useless. If they had released an Epyc version of this theoretical Ryzen 9 8950G, even without the CXL support, I'd probably pick one up for my file server, which right now is a 3950X on an ASRock Rack X470 motherboard with a BMC because I don't have enough PCIe lanes for a video card.
I don't understand the homelab excitement for Epyc 4004. They work the same as regular consumer Ryzen and fit in the same motherboards. The only people who should be excited are corporate sysadmins who need to fill out a ton of certification paperwork for every part in a server.
@@nickfarley2268 From what I can tell, you're basically guaranteed functionality, as opposed to my 7950X where I had to hunt around for boards and RAM that would actually work with ECC. Which is why I was disappointed: I've already had a working 7950X since launch. If this offered 32 cores at a lower power envelope I would throw down $1000 right now, but it's basically just my 7950X.
Perhaps you can set up an Ollama AI solution / training with PDF documents (programmers' handbooks and system administration along with some networking handbooks). Would be nice to see this... ;.)
Run Llama 405B on that puppy's CPU. Maybe add 12-channel memory to fully uncover its potential.
Ideas: sharing part of the GPU setup with VDI, and part with an Ollama VM... will your 2x A5000s fit Llama 3.1 405B? Also: Proxmox VDI with Guacamole and NetBird? Or similar? Maybe securing Sunshine and Moonlight with NetBird or similar, and looking at whole-package latency?
Thnx, enjoying videos
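On the "will 2x A5000 fit Llama 3.1 405B" question, a rough weights-only estimate (ignoring KV cache and activations, and assuming 24 GB per A5000) suggests not even close:

```python
# Weights-only memory estimate for a 405B-parameter model at common precisions.
params = 405e9
vram_gb = 2 * 24  # two RTX A5000s at 24 GB each

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    size_gb = params * bytes_per_param / 1e9
    fits = "fits" if size_gb <= vram_gb else "does not fit"
    print(f"{name}: ~{size_gb:.0f} GB -> {fits} in {vram_gb} GB of VRAM")
```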
I would love seeing weird things like stereophotogrammetry and the like.
Hey Jeff. Can you please try not only GPU passthrough under Proxmox, but also a vGPU setup with the RTX A5000? Also it would be interesting to see if the above are possible under TrueNAS SCALE, as they now include the latest Nvidia drivers. Thank you very much!
I have the Rome version of that board. The layout is exactly the same and I have run into the USB issue as well. I run unRAID on mine so one of the USBs is populated by the unRAID boot drive, and the other is for the communication to my battery backup. If I ever need to connect a mouse/keyboard directly to the board, I have to unplug the battery backup com connection. Great board otherwise.
Have you tried running any of the Intel QAT cards in your servers to assist with VPN performance? I'm currently trying to pass one through to a pfSense VM, but I'm having issues with the drivers being blacklisted in vfio.
I was surprised you didn't already have a 220V circuit in the garage with how many kilowatts you keep in there!
I'm sure that if and when you do upgrade to 220 V out there it'll be a video, but I'd also be interested to see how the total power draw changes. Probably only a couple of percent, but when you have that many of them I bet it's a substantial difference.
Shockingly enough, my rack only needs around 900W at max, (plus another 600W on a second circuit for the AC). Idle is around 600W.
@@CraftComputing wow, that is substantially less than I expected, I thought the AC would be a couple of kW by itself!
Yeah, it's really not terrible. I've been as high as 1200W idle in the last few years, but 2023 was all about consolidating down and getting more efficient equipment.
at least it runs RDIMMs.. what is crazy expensive is UDIMM
run stable diffusion on the cpu. would be interesting to see how close we can get compared to a 3090 or 4090.
I wonder how noisy that air cooler is, though perhaps the PSUs may be noisier in that setup... Pity there's no somewhat quiet Arctic 4U update for the increased TDPs and socket sizes of the SP5 generations... Maybe this cooler isn't capable of reining in the 360-400W TDP, so the CPU got downclocked a bit, hence the lower power/heat observed while testing... or maybe you had more eco-friendly options set in the BIOS to limit the power before the chip could spread its wings?
Gaspésienne can be written as "La Gass-pay-zee-en" to be pronounced properly in English. Thank you and have a good one, all. 17:52
I'd like to see you try training your own AI models on those GPUs. I don't know what exactly you'd train but it'd be cool nonetheless.
I'm sure I'll be doing some AI things. Not sure what yet.
Ollama. Load some huge model and see how the entire system performs
Ok, I don't get the complaint about USB ports. Why buy a high-end server with IPMI only to boot from a USB drive? I'd either PXE boot if there's a lot of them, or just use the IPMI.
He was explaining that in a workstation context, using this particular motherboard may be a bad idea due to the lack of connectivity.
Personally I think just losing the x8 slot to a multiport card wouldn't be much of a sacrifice.
Well Deserved!
Great video ! Will you ever try a Dual socket config with Epyc 9554?
I'm hoping to get a dual-socket board soon :-)
How about a web server available to the local network that provides updates on stats and such for game play, and other things from the system.
How much of the CPU will be dedicated to the virtualized systems?
Wall-E with the Epyc is so cute ❤
Where can I get that Wall-E!?
Run LLMs on this thing! Both on the GPUs but also on the CPU as the 8-channel RAM should really help. Go with Llama 3.1 70B or a similarly sized model, no one would buy this thing to run an 8B lol.
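A rough upper bound for CPU-only generation, assuming decode is memory-bandwidth bound (each token reads roughly the whole set of weights), is tokens/s ≈ bandwidth / model size. A hedged sketch for a 70B model at 4-bit on 8-channel DDR5-4800:

```python
# Crude bandwidth-bound ceiling for token generation on the CPU.
# Real numbers will be lower; this ignores compute, caches, and NUMA effects.
channels = 8
bandwidth = channels * 4800e6 * 8    # bytes/sec, ~307 GB/s for 8-channel DDR5-4800
model_bytes = 70e9 * 0.5             # ~35 GB for a 70B model at 4-bit

print(f"~{bandwidth / model_bytes:.0f} tokens/sec upper bound")   # roughly 8-9
```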
16:00 "it's not really realistic that anyone will be running.."
As I sit here with four Tesla P40 just eagerly waiting for ANYTHING about configurations so I can finally get the best bang for buck CPU/Mobo to run them ='D
**It's CRAZY how little documentation there is on the M40 (24gb)/P40
Even Nvidia has all but removed ALL documentation on them on their website.. I think it's the real reason they're only $75 now, because nobody can find enough information to risk the investment.
Twelve instances of Crysis at a time? Also studio update?
Windows with steam GPU enabled + steam link
How many simultaneous Plex/Jellyfin streams can you CPU transcode before it chokes? That might be an interesting test.
Not really a great test though. CPU is the least efficient (power/price speaking) way to encode video. Anything more than a few streams and you'd want dedicated hardware, like NVENC from NVIDIA or QuickSync via an Intel Arc GPU.
@@CraftComputing I knew it wouldn't be efficient. I was more trying to think of off-the-wall ways to stress the CPU that might be halfway interesting for a video. Of course dedicated hardware like NVENC is going to beat the pants off CPU encoding.
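If someone does want to poke at the "how many simultaneous CPU transcodes" question above, here's a hedged sketch that just spawns N parallel software (libx264) ffmpeg jobs and discards the output; the input file name and preset are placeholders.

```python
# Spawn N parallel software transcodes and see where the CPU tops out.
# "sample.mkv" is a placeholder input; output goes to the null muxer on purpose.
import subprocess

N = 8
cmd = [
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-i", "sample.mkv",
    "-c:v", "libx264", "-preset", "veryfast",
    "-an",               # skip audio; we only care about video encode load
    "-f", "null", "-",
]
jobs = [subprocess.Popen(cmd) for _ in range(N)]
for job in jobs:
    job.wait()
print(f"Finished {N} parallel transcodes")
```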
Of course, the question is: can it run Doom?
Damn that a beefy system. VDI at home? Lab? It will serve all residents in a Mil sq ft + mansion. I'm game to see more of vGPU sharing on Proxmox. But VMware and Horizon are dead to me.
Video series idea - Proxmox HCI with Ceph deep dive. Help peeps get off VMware VSAN. I'm sure AMD will understand that Ceph requires minimum 3 node cluster and send 2 more 9554s right over. :)