I find that I can run test software requiring 12+ cores on one computer while running 20+ threads minimum on another, and it works OK in a test environment using Alder Lake. I would expect it to fall in a heap in real production, yet I like this for testing! I avoid anything other than Alder Lake due to the overheating. I am very concerned about Raptor Lake, though I want one.
I'm running ESXi 8 on a Minisforum NAD9 (i9-12900H). As the Minisforum BIOS doesn't support disabling E-cores, I had some issues setting it up and had to add some kernel parameters so I didn't get PSODs each time the system started. I have a nasty feeling VMware won't be my next choice for a home lab, as Broadcom's acquisition of VMware is doing its own IBM-for-Red Hat/CentOS, Oracle-for-Sun... and it's gonna end in tears for a lot of homelabbers :(
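If anyone hits the same PSODs: I believe the usual workaround for heterogeneous cores on ESXi is the cpuUniformityHardCheckPanic kernel option. Treat this as a hedged sketch rather than gospel, and check VMware's KB for your build:

# at the ESXi boot screen, press Shift+O and append:
cpuUniformityHardCheckPanic=FALSE
# or persist it from the ESXi shell once booted:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE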
Hmmm, I'd be interested in whether much lower CPU utilisation would be stable long term for homelabbing. I went with M500+s recently, though I'm liking the M600s now, as I can use the 2.5" disk for Proxmox and the dual NVMe drives for ZFS mirrored storage.
Big/little cores are a bigger problem if their capabilities differ too much. E.g. on A55/A73 aarch64 systems, QEMU may not even start if the initial scheduling crosses big/little cores. But Intel's cores are mostly fine.
Did I miss something on the Erying boards? I have an older engineering-sample i5 board from them, which I also got for around 160 US$, but lately they cost around 600-800 US$. Jeff said he got them for 160 US$. What do the 12th and 13th gen Erying boards cost for you?
Well, at least I feel less guilty spending a lot of money on older and less efficient systems. Although I'm having a lot of trouble running VMs for gaming, or running VMs at all. For whoever is interested in the idea, I'm just gonna say: run Windows natively on multiple machines and MAYBE try to emulate another inside of Windows. I changed an incredible number of motherboards, I had to swap my 4x 2TB SSDs in ZFS RAID for better ones that I'm now using without redundancy or the system would hang, and I still have problems with my 10900KF and 4060 that stutter A LOT inside games. I've bought a Gigabyte Z590-based motherboard to try again with an 11900KF and an RTX 4060 for my main gaming VM, plus an RTX A2000 for my second gaming/photo-editing machine. If everything goes smoothly I might try to add a third GPU for a third VM dedicated to AI tasks. My Zimaboard is doing much better with Proxmox, with 2 containers and 1 VM running everything important I need, but I'm still very unsure about a good backup plan if my single SSD fails, and with network-related services I already feel the pain.
Couldn't the issue arise from Cinebench using some instruction set which the E cores don't support? A thread gets scheduled to a P core, Cinebench queries the capabilities of the CPU and uses some P-core-only instruction set, then Proxmox schedules the vCPU to an E core -> crash. But that should be caught by an illegal instruction exception, which can be gracefully handled, so maybe my theory is incorrect.
It's worth noting that ES CPUs do not support PCI Passthrough despite enabling VT-d. It might've only been the i7 11800H ES 2.2 boards but it screwed a project over for me.
I did a video specifically with the 11900H ES and PCIe passthrough a couple months ago. VT-d was working, and I was able to pass through network and HBA cards without issue. I had issues with a GPU, but that was likely an EFI boot issue, as I didn't have a BIOS dump of the card to apply.
We use Hyper-V (mostly), VMware and a few others. And at least for us, we did not manage to get Hyper-V to actually use E-cores on either Win10 or Win11. Seems to be a problem with the images we are provided with (and they already have many other problems, like being rather bloated to over 40GB without the actual source or any dev tools). And for us that is kind of a problem, because for whatever reason the higher-ups decided that software development is done on local VMs on laptops ..... where they limited us to low-power Intel systems. Yeah, I got an i5-1250U ... 2P+8E ... most of the performance can not be used -.-
You were going for max CPU commit (and completely avoiding CPU overcommit, which is what VMware is greatly abused for in corp-land), but what about... not? Leaving 1 or 2 CPU cores short of max committed to VMs and seeing if that pans out for the big.little system? Odds are it will wind up being E cores that are left over, but that should be fine for the scheduler and minor system management on average...
I'm sure that the crashes and issues you faced are either due to the Erying boards/BIOS/firmware being buggy, or a flaw in the kernel. Big.little CPUs have been supported in KVM, and thus Linux, for quite some time now, and overcommitting definitely should not be an issue. We run 16x overcommit on CPUs in our cloud without any issues. Even if all threads light up and not all VMs get the configured amount of CPU time, it might feel slow to the end user, but nothing will crash.
A word of caution about Erying boards. My 11th Gen i7 ES 2.2GHz board now refuses to POST after 11 months of ownership. The Erying "warranty" is a joke anyway. I've tried everything to get it to POST, with no luck.
Proxmox dev here. We have some workstations running with 12700k/13700k and I can't remember any ongoing stability issues that were CPU related. The only stability problem we encountered (AFAIR) was too high clocked memory speeds (early DDR5 modules though). If you can replicate the hangs/crashes maybe you could open a bug report with logs etc. on our bug tracker?
Edit: to clarify, our development workstations run with Proxmox VE of course 😅
I was planning on getting a 12400 to run Proxmox for my homelab, so I'm now very curious. I'll keep an eye on this. Good to see you guys are on top of it!
Thanks to you & the team for all your work!
The 12400 only has 6 p-cores and 0 e-cores
A Proxmox dev out in the wild? Thank you for the work you do! 😊 Proxmox makes my job so much easier than the other hypervisors I sometimes have to interact with.
Not using proxmox yet, but my friend says you are the best, he uses it a lot, thank you for your work
I wrote the code to implement the CPU Affinity feature in proxmox. (Dot on the Proxmox Forums)
I haven't tried using P vs E cores for gaming VMs, but I do leverage affinity for E cores to force lower power profiles on the CPU.
By pinning some of my constant-load VMs to E cores, they are never responsible for putting the CPU into a high power draw state.
This saves ~20W of power on average in my usage (security system designed to run off battery backup).
It looks like my comments on this video are being immediately deleted after posting. Hopefully this comment won't disappear *crosses fingers*
Yay, the comment didn't get deleted this time. It looks like adding url links to these comments will get your post deleted.
URLs are disabled on all my videos to prevent malware or malicious links. Do you have a keyword to search for or a github user/project name you can share?
Yeah, there are 2 relevant links to the affinity discussion.
1. Title: 'CPU Pinning?', forum URL endpoint: '/threads/cpu-pinning.67805/'
2. Bugzilla Bug Number: 3593
A user on the forum asked how one could achieve CPU pinning. One of the Proxmox employees, t.lamprecht, suggested using the `hookscript`, which calls arbitrary bash before, during, and after the VM lifecycle. In the startup hookscript, t.lamprecht suggested users put the 'taskset --cpu-list --all-tasks --pid 0-11 "$(< /run/qemu-server/104.pid)"' command, which will take a pid (and all of its children) and restrict it to the specified cores. It works pretty well, but it was painful to set up this hookscript for every single VM.
From this forum thread, Proxmox Bugzilla bug 3593 was created.
I (Daniel Bowder) picked up that bug and made the relevant changes to the Proxmox codebase to inject the `taskset` command before launching the kvm/qemu process for the VM at startup. It was then added in PVE 7.3. It's the only contribution to PVE that I have made, but I am very proud of it. I found it super useful. (I also got a little scared of the Proxmox codebase lol. Did you know they are mixing tabs and spaces for formatting!!! *tears* I can't get VS Code to properly format the code files.)
Anyway, the logic behind the affinity is really dumb, but this is now something that can potentially help you investigate further. The taskset command above can be called WHILE A VM IS RUNNING, and reallocate its cores on the fly.
CORES=0-11
QEMU_KVM_PARENT_PID=$(cat /run/qemu-server/104.pid)
taskset --cpu-list --all-tasks --pid ${CORES} ${QEMU_KVM_PARENT_PID}
You could use this to quickly flip back and forth between cinebench runs. (Running htop on the PVE host is really fun when you do this).
@@CraftComputing My posts even without links are getting deleted. Not sure what's going on. :shrug:
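For anyone who wants to wire up the old hookscript approach themselves, here is a minimal sketch (the snippet path, VM ID, and core list are placeholders to adapt; Proxmox calls the script with the VM ID and a lifecycle phase):

#!/bin/bash
# /var/lib/vz/snippets/affinity-hook.sh
# attach to a VM with: qm set 104 --hookscript local:snippets/affinity-hook.sh
VMID="$1"    # Proxmox passes the VM ID as the first argument
PHASE="$2"   # and the phase (pre-start, post-start, pre-stop, post-stop) as the second
CORES="0-11" # host cores this VM is allowed to run on

if [ "$PHASE" = "post-start" ]; then
    # pin the QEMU parent process and all of its threads to the chosen cores
    taskset --cpu-list --all-tasks --pid "$CORES" "$(< /run/qemu-server/${VMID}.pid)"
fi

Since PVE 7.3 you can of course skip the hookscript entirely and just set the affinity option on the VM.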
I have been running Proxmox on my 12th Gen i7 (8P/4E + HT) for 2 months now, passing through the GPU to Win11 Pro & Fedora Linux VMs. I have not had a single issue; it is rock solid. I only assign 10 out of 20 threads to any of the VMs. I don't treat a hyper-threaded core as something you can assign to the VM, but as something for Proxmox to use when needed/possible. - I will add some more workloads to Proxmox and investigate the stability. Keep up the great work!
Did you end up noticing any performance issues? Genuinely curious as I've been seeing a lot of mini-PC deals with p/e core CPUs lately
you can set it to be NUMA aware, then set CPU 0 to use E cores and CPU 1 to use P cores.
@@System0Error0Message would this be in bios settings or in proxmox itself?
@@manofwar9307 Proxmox. When making a new VM, I always use Advanced for more settings. Under CPU, always set NUMA aware, and you can also set it to the host CPU type so the guest OS and software can prefer their own core priority as needed. Programming languages have long supported big.little, but it's down to the software to implement it.
You can make it look like a 2-socket CPU, but I'm not sure you can properly set the affinity in Proxmox.
@@System0Error0Message awesome, thanks!
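For reference, a rough sketch of the same settings from the CLI (VM ID 104 is a placeholder, and the affinity option needs PVE 7.3+):

qm set 104 --numa 1 --cpu host   # NUMA-aware VM with the host CPU type
qm set 104 --affinity 0-11       # optionally pin the VM to specific host cores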
I am running MINISFORUM MS-01 and don't have any of these issues. Super interesting to see.
I'll be testing an MS-01 shortly, so thanks for chiming in with this.
Great video! Was very entertained by all the chaotic permutations of results. I think I read somewhere that virtualising a Proxmox VM, and then virtualising Windows within that, leads to better (or rather, more stable) results.
Some Winception right there lol
Cheers Jeff! Enjoying a Coronado Brewing Orange Ave wit. Excellent video; your Erying series really interests me for the homelab cluster I am planning on building. Thank you for the upload!
If efficiency cores can be statically assigned in Proxmox, I would definitely assign them to LXC containers and try to run them :(
For VM CPU Affinity, we run the `taskset` program, which allows us to specify which cores a given `pid` (and its children) will be allowed to run on. We can get the QEMU parent `pid` for any VM by catting the /run/qemu-server/<vmid>.pid file. (eg: `taskset --cpu-list --all-tasks --pid 0-11 "$(< /run/qemu-server/104.pid)"`)
I don't know how we would do it for LXC. I haven't looked at that part of the pve codebase.
I can confirm this works. Just run lscpu -e and find the core number you want to assign
you can set affinity for separate processes or cgroups, but afaik you can't do that for the hypervisor itself, as it's not 1 process/cgroup
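If you want to verify the pinning on a running VM, taskset can also read the current affinity back (VM ID 104 is just an example):

taskset --cpu-list --pid "$(< /run/qemu-server/104.pid)"   # prints the allowed core list for the QEMU parent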
I'm actually curious about one thing. I am not a drinker, so I usually skip the parts about beer etc. at the end and go to the next video. I'm wondering if that hurts your analytics, with people doing that, or if I'm just an outlier. Great video as always, thanks man.
The reason they're at the end is so people who aren't interested can skip them. If you click off the video at 16:00 or 18:45, it doesn't matter to me. Like and comment while you're there though ;-)
When running a single VM, I can confirm that Proxmox has effectively identical performance as bare metal on heterogeneous cores.
I have a 13700K system with 64GB RAM. When I gave a Win11 VM all cores + 60GB RAM (4GB left alone for Proxmox itself), it gave pretty much the same benchmark results across the board as an identical bare metal Win11 install. I also ran single-threaded performance tests, not just full blast benchmarks. From that, it seems that P and E core scheduling was comparable for that scenario.
That said, I don't know the intricacies of Proxmox CPU scheduling so it might behave differently with multiple VMs splitting the resources vs. one VM that's given everything.
I can confirm that with Proxmox running a Windows VM, isolcpus and static CPU pinning, it runs at the same speed as or faster than bare metal. I have a 13900K, and giving a VM all cores and threads gets me the same Cinebench R23 score. If I give it 8 fewer E cores, use isolcpus to make the hypervisor run exclusively on those, and statically pin all the threads, I get an emulated 13700K with a higher R23 score than an average 13700K.
Ultimately, I used isolcpus to keep the hypervisor and smaller VMs on the last 8 E cores, then split the rest evenly across 4 Windows VMs, each with 6 threads (2P+2E) and GPU passthrough. Even running R23 on all of them at the same time, they each outperform an i7-8700.
If you are wondering why high end motherboards have so many PCI-E slots, it is because of crazies like me and the cost is still lower than server-grade components
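For anyone wanting to replicate this, a rough sketch of the isolcpus side (the exact thread numbering depends on your topology, so check lscpu -e first; on a 13900K the last 8 E-core threads would be 24-31):

# /etc/default/grub on the PVE host: remove threads 0-23 from the host scheduler,
# leaving threads 24-31 for Proxmox itself and the small VMs
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=0-23"
# apply with update-grub, then reboot
# then pin each gaming VM into the isolated range, e.g. 6 threads for VM 101:
qm set 101 --affinity 0-5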
Thanks for covering the big/little proxmox setup. I've been wary of dropping the cash to try it. Hope to see some future updates around this where you get it to work.
Just one moment: he's trying it on weird Chinese mobos, kind of a stretch to test it like that for stability.
Thanks for doing that rundown Jeff, pretty interesting stuff. It's cool to understand what the limitations might be with these different core layouts.
I ran Proxmox on a 13900K. It hosted 2 Plex servers, both with their own GPUs, and a TrueNAS VM with HBA passthrough. I didn't notice any stability issues and all 3 VMs ran smooth the entire time. None of which were Windows. Maybe there's something with KVM and Windows on big/little arch.
It seems like this may be more of a Windows issue.
@@mikehathaway2842Generally if it's even remotely possible for a problem to be caused by Windows... the problem will be found to be with Windows. 😂
Interesting! My 13600K system (on Gigabyte Z690 Aero D) has been rock solid running Proxmox (multiple weeks uptime without issue). After watching this I decided to give it a little test. With one VM and some containers running in the background (though not doing much), I decided to spin up two VMs each with 10 CPU cores (maxing out the 20 threads of the 13600K with 6 P cores with HT and 8 E cores without HT) and I ran Cinebench r15 multicore on both at the same time (multiple runs) and the runs completed without any issues. CPU usage goes right up close to 100% on the system, but didn't seem to freeze or crash. May be something with the Erying boards (or the mobile CPUs they're using) that's causing an issue.
The channel Hardware Haven used a 13th Gen CPU with Proxmox and didn't report any problems. Maybe because he didn't run at full power. You should ask him if his Mini PC crashes at full load.
New opening for the video. Wasn't expecting that.
I suspect it's the boards that are unstable, MCE usually indicates a hardware error.
This is most likely due to a bad motherboard or BIOS; such Frankensteins have never been stable, especially from Chinese companies.
Plus there will be zero support; enthusiasts will be the ones making working versions of the BIOS.
@@rpnXN It was because of the microcode; he made a new video about it.
This video was perfectly timed, as I was just checking out these Erying boards. I'm pretty happy with my current homelab (P360 Ultra) but I was eyeing the ITX boards for a low-power gaming system. They are definitely priced better than the Minisforum stuff.
How about them LIONS!!!!
Not many YT creators into computers and sports. Good to see one on here.
Most people, even those in the IT space don't realize that Intel has a large enough product stack to hypersegment. They may sell several thousand units to a telecom for a single purpose and that company only needs a certain level of performance. Then another tier could be predicted to sell hundreds of thousands of units. That main tier may have slightly defective units that create yet another segment, sold at a discount to another market.
Thus it's difficult to say that a given product is bad or good relative to another from Intel, since sales (generally) represent a balance of price and performance from the buyer's perspective. Hence why we need reviewers and testers who can judge how well something will work for its intended use case.
Love the Erying motherboards. I have an mATX Erying i7 12700H, using it as a game server; had mine since Jun 11, 2023, running 24/7, no issues :) Thinking of getting the new 13th gen Erying with over 5GHz and DDR5 support.
One of my Tiger Lake 11900H boards is currently running Minecraft and Palworld servers :-)
@@CraftComputing I recently got the ES 13900H from "17029". It's only a DDR4 model, but unlike most of the other engineering samples, mine has PCIe Gen 4.0 x8.
I was only able to find 12700/12900 ES sample boards that have PCIe Gen 4.0 x8, and a large majority of them are PCIe Gen 2 (strangely, they state the M.2 is still 4.0 x4, so it might actually be feasible to put a PCIe x4 card like an RX 6400 in and use the card to its fullest).
My supplier has also told me he's willing to give me the modified BIOS to make the 13900H ES overclockable! :D
Would be good to test without the Erying boards altogether, as they are so weird and hacked together it might just be them causing the issues.
My Tiger Lake boards have been running 24/7 for 8 months with pretty strenuous loads, and have been 100% stable. This was a software issue, not hardware.
Try using a high-quality board to rule it out? I run a W680 board with a 13500 and it works great.
agreed, this might be a board issue.
Hope you're dealing with the loss okay. Pretty bad feeling going to bed last night; feeling better about it now. Go Lions!
Won the division, and came 3 points shy of the NFC Title. Hard to complain about those results after 65 years of futility.
@@CraftComputing It is hard to complain but man we were SOOO close, if only we had kicked that field goal to force overtime. Also, I didn't know you were a Lions fan! #OnePride
This was a very interesting video Jeff. Always very interested in the virtualisation topics and learned a lot here !
I found the exact same problems with an Erying board with ES i5-12450H. It usually crashed when trying to reboot a VM and PCIe passthrough was downright broken, with the passed through device remaining in an active state despite the VM being powered off, rendering it useless unless the system is rebooted.
Let's say I was thoroughly disappointed about that. In XCP-NG passthrough was equally broken, but the VMs never crashed as XCP-NG is running Linux kernel 4.x still.
I seem to remember seeing a different video talking about how weak the VRMs are on this board, and to get it working without issue it needed a fan directly on them. I wonder if you were experiencing that issue?
The VRMs get so hot you can't touch them. I added an NVMe heatsink to each VRM and the problem is solved. Now they barely even get warm. Worth the extra $20 in my book.
No fan needed and they look cool in my book :)
Thermalright HR-09 2280 PRO Black 2280 SSD heatsink
An engineering sample CPU could be what's crashing the host. I had a 9700F QS a few years back, and even though QS chips are supposed to be "final" (while ES are not), mine crashed the PVE host when the guests ran HandBrake video conversions, very much like the way you described. I can't be 100% certain because I didn't have an alternative CPU for testing.
Cool to see people involved with Proxmox's development in the comments. Hopefully this leads to a followup if/when things get fixed. Considering the number of people saying they're running 12th+ gen chips with no issues, I'm wondering if Proxmox is detecting some oddities in the BIOS or chipset of these Erying boards rather than struggling with the CPUs.
Hi Jeff! So I can say that, based on the places I have worked, it is common practice to overcommit resources and allow the running VMs to hang out waiting to be serviced (like being at the DMV). So as volatile as your test environment was, how much worse would it be if you overcommitted by, say, 33% or more? It would be curious to see if it was that much worse, much-much worse, or surprisingly better (stability/crash-wise, of course). Just a thought...
Yes, I've seen (and done) quite a bit of overcommitting in my day as well. My comments in this video about scheduling are *pure speculation*, as even when undercommitting CPUs, I still had a ton of instability.
@@CraftComputing, understood sir. Thanks for the reply!
I've had huge instability on a 12th gen (i5-1240P) NUC using Proxmox, until I switched the memory to Intel-validated modules. I haven't had another crash since.
Good point, bad memory can cause significant problems in servers
I was running the DDR4 at 2666, but didn't check the validation list. Might be worth investigating. Thanks!
I have been running Proxmox on a NUC12 with an i5-1240P and Kingston FURY DDR4-3200 for a month with no issues.
New intro is cool. Constructive feedback: maybe half a second or so shorter. The sound level is also a bit too low; I can't hear it without turning the volume up to max. I thought it had no sound at first.
IMO if your intro is longer than 5 seconds and doesn't have any relevant information in it, it's too long and you need to shorten it.
I did not look at the timer. I just thought to myself, "this is outlasting my attention span".
@@_vilepenguin It's like 30 seconds long; the videos usually start at 1 minute though.
I've never had an issue with careful over-subscription in proxmox. I've been told that best practice is to not do so... but, consider that it's likely that your VMs won't see full load at the same time, or anywhere close to it.
I should probably spin up a test node and see if I can break it, though; I would be curious to see. I know I've definitely mapped cores 1:1 with the host on VMs that are used for render workloads, but the real test would probably be running actual stress tests, like linpack or stress, to see if actual 100% utilization upsets Proxmox.
I would guess that KVM will support all of this eventually, and likely well before everyone else does.
I've overprovisioned CPUs plenty without issue. The comments about underprovisioning to allow cycles for scheduling was completely speculative on my part.
@@CraftComputing ah, I was wondering if you knew something I didn't. It seems like many people say this, but, I can't find anything even manpage adjacent that says that'd be the case tbh
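If anyone wants to actually try to break it, a quick sketch of a full-load test inside each VM (assuming a Debian/Ubuntu guest; adjust the duration to taste):

apt install stress-ng
stress-ng --cpu "$(nproc)" --timeout 15m --metrics-brief   # peg every vCPU for 15 minutes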
Curious if others have had weird issues with their Erying boards. I have a 13420H mATX board, and at first only one RAM stick was detected; it turned out this was because of a 2-port (8 SATA) HBA, and a 4-port (16x SATA) one solved the problem. But then the Realtek LAN stopped working, couldn't even be detected as hardware, even though it's enabled in the BIOS. Lastly, WiFi randomly doesn't start in Windows (code 10 or 43). It was valued at $25 on the package; I kinda get why. 🤷
Also, I use Hyper-V and your GPU-P guide to share my gaming PC with my girlfriend. I use a 12900 KF. Never had a problem, but I have also never oversubscribed system resources.
I would love to see you run the same tests via Hyper-V, since the Windows OS (as of Win11) is supposed to be big.LITTLE-aware, and should be able to schedule work across the VM threads.
Sadly, so far no: Hyper-V does not utilise E-cores.
I have been running 2 servers using hybrid CPUs for nearly a year now without a single issue (i5-12600K (6p/4e, 16 threads) and i3-12100F).
Both running Proxmox with TrueNAS as a VM. The i5-12600K server is also running a Gaming VM and some other services (Nextcloud, VPN etc...).
I can send more info on my setup if anyone is interested, but those are my two cents.
What's the idle on the 12600K system?
I'm building a PC for containers but am confused what spec should I do!
Photoprism for photos,
Openmediavault
Adguard DNS
Jellyfin for tv shows and movies
Home assistant for controlling lighting and for 2x home cams
Nextcloud (Want to poweruse it!)
Zfs raid shared 2x 2TB HDD
Also came across Mycroft, like an Alexa/ Google assistant alternative.
PS suggest some quality of life apps;)
@@JdownJdown I assume you mean power consumption?
About 80W without the GPU (measured before I got it). Under somewhat standard load (TrueNAS, 4 HDDs - file access, snapshots, replication etc.) I measured 45kWh over 17 days (roughly 2.5kWh/day).
@@svejdik313 Oof, that's really high. I am running an 8700K, 5 drives, 2 NVMe drives, a 10G SFP+ NIC, and 4 constantly-on fans, on Windows with Hyper-V + 4 VMs for automation etc., and my power draw is 50W at idle. It does around 1.4 kWh per day. I was thinking of upgrading to 12th gen, but that high power consumption is too much.
I am running a Seasonic Prime Fanless PX 450W Platinum PSU though...which might be helping a bit
@@JdownJdown Yea... I don't even want to know how much it uses with the GPU and Gaming VM 😅
Honestly the CPU is way overkill for my uses. It averages 5% load and maybe gets up to 30% if I run some extreme compression.
I'd definitely use a less powerful one if I was building this again.
VMs actively using 21 cores on a Ryzen 8-core 16-thread CPU. I think I'm fine. And the loads are pretty low, so it does not need to pump out high performance. It's my happiness!
Best year for the Lions since I was born! I miss Barry Sanders, but I thought we really had a chance 😞
I purchased an Erying 12500H board 13 hours ago (video age: 8 hours at time of comment). I only plan to use it as an UNRAID NAS / Docker box; the HBA card should work in the PCIe slot without too much issue, I hope. I have seen some comments that the Engineering Sample ones, or these boards specifically, can not do IOMMU passthrough (not an issue for my use case, but something to think about for others?)
The value proposition of these boards even with the ES chips is great for home labbers so glad they are getting some coverage here!
I hope this is sorted out. SME and homelab hypervisor is the most promising use case of big little. I'd love a system with 2P cores and 8 or 16 E cores for containers and light VMs
I recently got an Erying with a i9-13980HX for the sole purpose of running VMs. Not everything needs to run on P cores. I run all my demanding stuff on my main server with a six-core i5-8400. But a lot of stuff I run in VMs happily runs on my secondary server with an ancient 15 year old (!) two core AMD CPU. With 12 GB of RAM, it runs 8 (!) VMs, for various low-demanding tasks while also sharing the storage space with the main server. Since that machine is really ancient (apart from the PSU), I plan to replace it with the Erying. That's where the E cores will shine, they will happily run the existing low-demanding VMs, while leaving a lot of room for adding new ones for more demanding tasks.
I have been running Proxmox on an Intel NUC12 with an i5-1240P for a month with no issues. At the moment I'm running 9 VMs without CPU affinity, but only assign 1-4 vCores to each VM.
Can you test the iGPU performance on the 13620H? It is a UHD with 64 EUs (vs. 32 EUs in the desktop version).
Can't wait for my new order of 2 glasses and coffee tumbler.
Isn't this caused by the differing instruction sets supported by the E and P cores?
That was always the thing holding me back, as I like to pass AVX-512 to the VMs.
I passed through each CPU as x86-64-v2-AES, meaning I was already limiting instructions to sets both are capable of.
@@CraftComputing I wonder if some KVM/scheduling code requires an instruction that's available on the P-Cores but not on the E-Cores, so when the scheduler schedules itself on an E-Core it just hangs.
Tho I guess that would show different symptoms...
Either way interesting results.
@@insu_na The E and P cores use the same instruction set to eliminate that problem. None of the 13th Gen Intel chips support AVX-512, because the E-cores don't have it.
@@Knirin makes sense
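If anyone wants to double-check what a guest actually sees, a quick way (run inside the VM; the pattern is just a grep over the CPU flags):

grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u   # lists any AVX-512 feature flags the vCPU exposes; empty output means none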
Thank you Jeff! You always answer all my proxmox questions & ideas! 😊
Got a great deal on an HP mini PC with i7-12700T.
Works great for ProxMox, no issues whatsoever.
I too am having issues with Proxmox, on the MS-01. Lockups, web interface crashing, VNC constantly failing to work.
Great video. Been wondering about this for a while now.
I moved mostly away from virtualization in favor of containers. Debian + Docker on 12650h runs flawlessly and makes great use of P and E cores.
Did you ever think that your instability problems might not be Proxmox, but board and CPU problems? The second CPU seemed to be more stable... could a better board that isn't so "weird" be an idea to work with?
You could also be working with a flaw in the mobile CPU architecture that may not be a problem with desktop chips.
If you reran these tests with a more mainstream motherboard and desktop chips, you might get different results...
Thank you for the video;
Monte
@CraftComputing
Jeff, I'm with you on your opinion that Intel's move was on the bone-headed side. What they needed to do is offer differing models with and without E-cores to fit the needed market sectors properly. You are correct in your original assessment.
Love this channel. I mean who doesn't like craft beer and pc hardware? :D
I'm working on my first Proxmox server. X79 Dual 2630v2
Have gone with X99 and E5-2680 v4, on 2 Huananzhi motherboards. Will swap one with an ASRock X99. Huananzhi support via the AliExpress message board is next to none - lousy to say the least.
What CPU type did you use? kvm64, host or something else?
x86-64-v2-AES
@@CraftComputing I'd try other types first and Linux server VMs with GeekBench second.
Proper mobo, desktop CPU third guess.
I don't do anything like this but I like the 13700k cpu I have. I dedicate the P-cores to cpu mining and the browser to the E-cores. No stutter, youtube plays fine, browser works fine. etc
I have been using Hyper-V with an i9-13900H (Minisforum MS-01) and I encounter no stability issues at all. Yes, the performance load balancing is odd and heavy workloads sometimes get stuck on E cores, but at least I have nothing to worry about if I just want to get more VMs running. Normal operation in Win10 inside a Hyper-V guest is also very snappy, much snappier than my previous E-2244 / i7-8700.
Awesome video, thanx for testing 👌😊
I'd be tempted on doing this with a 14700. Leave a full E core cluster dedicated to proxmox and then have 8 big and 8 little available for the VMs. I would not have more than one VM on the same cluster due to the shared L2 cache though. I have a feeling that could potentially be the source of the trouble.
The 12th gen i9 had ECC enabled.
I'd like to know if it works on these ES samples.
I wonder if the reliability would improve using desktop cpus, such as the i5-13500/i5-13500t. I've been eyeing one of those for home lab use for a while.
My Tiger Lake boards have been 100% stable since I installed them. Testing these Alder/Raptor boards in Windows they had no issues whatsoever. I think the stability was purely software.
@@CraftComputing Yeah, in that context that sounds like a valid conclusion. I think I'll have to keep watching this topic to see if big little on proxmox becomes viable, but at least I'm in no hurry.
So I could use a 10G network card on the ES board?
I'm interested in using one of those for a Plex server (the NAS has the media storage). Do the ES chips still have the same integrated GPU as the retail chips?
And what memory speed can I go up to on those chips?
Please correct me if I'm wrong, but affinity in KVM isn't the same as on ESXi; there's a proper allocation based on sibling cores that should be considered when setting CPU affinity. Getting that wrong can cause the VM to hang.
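If anyone wants to check the sibling layout before pinning, the topology is visible from the host (CPU numbering varies by chip, so check rather than assume):

lscpu -e=CPU,CORE,SOCKET,ONLINE                                  # which logical CPUs share a core
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list   # HT siblings of CPU 0

On Alder Lake the P-cores each show two logical CPUs, while E-cores show one, so pinning "pairs" blindly can land a VM half on a P-thread and half on an E-core.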
I've always thought virtualisation was the perfect use case for big/little architecture. Having the hypervisor itself run on the little cores while the big cores run the VMs sounds pretty great for efficiency, at least to my untrained brain.
@Craft Computing Can you try with split_lock_detect=off in GRUB? split_lock_detect is a kernel "feature" intended to prevent DoS attacks from poorly optimized programs whose atomic operations straddle two cache lines (a "split lock" stalls the entire bus, so the kernel penalizes the offending task).
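For anyone wanting to try this on a stock GRUB-booted Proxmox install, a sketch (on ZFS/systemd-boot installs it's /etc/kernel/cmdline plus proxmox-boot-tool refresh instead):

# /etc/default/grub - append the flag to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet split_lock_detect=off"
# apply and reboot
update-grub

You can first check whether split locks are even being detected with: dmesg | grep -iE 'split[ _]lock'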
At least with xen and kvm based solutions we can always constrain the pool of CPUs available so it really doesn't matter much.
He is wrong about that PCIe 2.0 x8 slot being unusable. You'd probably get around 75% of the FPS with a 3080; it's less, but definitely usable.
I'm curious to see a couple of scenarios (see the sketch below):
1. 4 VMs, each with 1 P-core + 1 E-core
2. 3 VMs with 4 P-cores each, with the 4 E-cores reserved for Proxmox, which should be great for containers
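Scenario 1 is easy to sketch with the affinity option mentioned elsewhere in this thread. The logical CPU numbers below are guesses for a 4P(+HT)/4E layout, so check lscpu -e first; VM IDs 101-104 are placeholders:

qm set 101 --affinity 0-1,8     # VM 101: both threads of P-core 0 + E-core 8
qm set 102 --affinity 2-3,9
qm set 103 --affinity 4-5,10
qm set 104 --affinity 6-7,11

Scenario 2 would be the same idea: pin each VM to its four P-core threads and simply leave the E-core CPUs with no VM pinned to them, so the host keeps them for itself.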
We use Hyper-V at my company. As the largest IT company for central heating in Denmark, we stick with Hyper-V due to cost alone.
Another possibility is that the dies being used for the Xeons, with the GPU and/or E-cores disabled, are dies that failed GPU and/or E-core testing and have little use in the consumer market.
I've been running Proxmox on an 11th Gen Erying with 8 cores. The only problem for me is finding a good 1U low-profile cooler. I run some number-crunching VMs, from video encoding to AI, so using the 12th or 13th Gen would be a terrible idea for me. Under sustained load it clocks around 3.7GHz on all cores; stability-wise I've never had a hang or crash, and the CPU cores keep trying to clock up to 4.8GHz when possible. Power use is also very low: 20W idle from the wall with 2 HDDs, 2 sticks of RAM, and only the CPU fan. It runs as part of a Proxmox cluster with other lesser-known power-efficient hardware that has no issues running Proxmox, such as AMD's embedded chips, and it does entirely fine combining NICs with USB adapters too.
Wonderful test! Have you ever thought of testing it on a router or server with an Intel Core Ultra 9?
For power-saving purposes (as much as possible), should I choose an Intel Xeon combo (used, from AliExpress) or an Erying board? This is for a virtualization server for Docker containers.
I can't wait to hear your review of the Erying ITX i7-13620H motherboard in terms of gaming. I'd also like to know about the Xe performance. I got a tiny case that would be perfect for it, but I would like to know if the integrated GPU will be good enough to outperform an RX 6400, or at least match an RX 6500 XT. I'm wanting to make a low-powered SteamOS machine with lots of storage for right now (since it does have 3 M.2 ports). Once they get Nvidia support working, I can later add an LP 4060 (unless AMD brings out a newer LP 7000/8000 card by that time).
I have heard that the amount of data a video card passes to and from the motherboard is not as much as people think. So having a degraded slot for the video card (like the one with the sticker next to it) may only have a minor effect on gaming performance. It's worth comparing a CPU on that board with a given card, then switching the GPU and CPU to a gaming motherboard to see if there is much improvement.
Bandwidth to the video card really depends on your use case, and even varies game to game. For AI processing such as LLMs you need as much bandwidth as you can get; AAA gaming at 4K is a similar situation; older games not as much; and crypto mining, as we've seen with the mining motherboards full of x1 slots and breakout boards, hardly needs any bandwidth at all.
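If you want to know what link your card actually negotiated, it's visible from the host; 01:00.0 below is a placeholder PCI address, and you'll want root for full output:

lspci | grep -i vga                                # find the GPU's PCI address
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'     # max capable vs. currently negotiated link

Note that many GPUs downshift the link at idle to save power, so read LnkSta while the card is under load.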
I like the new intro
We missed you!
I find that I can run test software requiring 12+ cores on one computer while running a minimum of 20+ threads on another, and it works OK in a test environment on Alder Lake. I would expect it to fall in a heap in real production, yet I like this for testing!
I avoid anything other than Alder Lake due to the overheating. I am very concerned about Raptor Lake, though I want one.
I'm running ESXi 8 on a Minisforum NAD9 (i9-12900H). As the Minisforum BIOS doesn't support disabling E-cores, I had some issues setting it up and had to add some kernel parameters so I didn't get PSODs each time the system started. I have a nasty feeling VMware won't be my next choice for a home lab, as Broadcom's acquisition of VMware is pulling its own IBM-for-Red Hat/CentOS, Oracle-for-Sun... and it's going to end in tears for a lot of homelabbers :(
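For anyone hitting the same PSODs, I'd guess the parameter in question is the CPU-uniformity check override that's been floating around the community since ESXi first met hybrid CPUs (unverified here, and definitely unsupported by VMware):

# at the ESXi boot menu, press Shift+O and append:
cpuUniformityHardCheckPanic=FALSE
# to make it persist after install:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE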
I use Proxmox on a NUC 13 i7-1360P (4P/8E/16 threads) without any issues. But I didn't try to fully load the system with benchmarks.
Hmmm, I'd be interested in whether much lower CPU utilisation would be stable long-term for homelabbing. I went with M500+s recently, though I'm liking the M600s now, as I can use the 2.5" disk for Proxmox and the dual NVMe drives for ZFS mirrored storage.
@CraftComputing Do those CPU/board combos come in an ITX form factor?
That motherboard + laptop CPU seems like a very good deal; the only thing is that it supports PCIe 2.0, which is a major issue.
The i5 13500 has been solid in Unraid so far. Haven't noticed any issues with E and P cores.
Big/little cores are a bigger problem when their capabilities differ too much.
E.g. on A55/A73 aarch64 systems, QEMU may not even start if the initial scheduling crosses big/little cores. But Intel's cores are mostly fine.
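The usual workaround on those ARM boards (a sketch; the core numbers depend entirely on the SoC) is to launch QEMU pinned to a single cluster so KVM never sees mixed core types:

# pin QEMU to the big cluster only (cores 4-7 is a guess for an A73 cluster)
taskset -c 4-7 qemu-system-aarch64 -enable-kvm ...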
Did I miss something on the Erying boards? I have an older engineering-sample i5 board from them, which I also got for around US$160, but lately they cost around US$600-800. Jeff said he got them for US$160. What did the 12th and 13th Gen Erying boards cost you?
10:36 running 3 "Virtuer" machines lol
har har har, he flubbed a word in a 20 minute long monologue.
@@CraftComputing sometimes “stubbed toes” are worth a laugh instead of tears.
Well, at least I feel less guilty spending a lot of money on older and less efficient systems. Although I'm having a lot of trouble running VMs for gaming, or running VMs at all. For whoever is interested in the idea, I'm just going to say: run Windows natively on multiple machines, and MAYBE try to virtualize another one inside of Windows. I changed an incredible number of motherboards, I had to swap my 4x 2TB SSDs in ZFS RAID for better ones that I'm now using without redundancy or the system would hang, and I still have problems with my 10900KF and 4060, which stutter A LOT in games. I've bought a Gigabyte Z590-based motherboard to try again with an 11900KF and an RTX 4060 for my main gaming VM, plus an RTX A2000 for my second gaming/photo-editing machine. If everything goes smoothly I might try to add a third GPU for a third VM dedicated to AI tasks. My Zimaboard is doing much better, with Proxmox running 2 containers and 1 VM covering everything important I need, but I am still very unsure about a good backup plan if my single SSD fails, and with network-related services I already feel the pain.
Couldn't the issue arise from Cinebench using some instruction set the E-cores don't support? A thread gets scheduled on a P-core, Cinebench queries the capabilities of the CPU, uses some P-core-only instructions, then Proxmox schedules the vCPU onto an E-core -> crash. But that should be caught as an illegal-instruction exception, which can be handled gracefully, so maybe my theory is incorrect.
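One quick sanity check for that theory: the kernel exports the flag list per logical CPU, so if every line is identical the guest can't be tripping over P-core-only instructions. Counting the unique flag lines should print 1, even on a hybrid chip:

grep '^flags' /proc/cpuinfo | sort -u | wc -l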
It's worth noting that ES CPUs do not support PCI passthrough despite enabling VT-d. It might've only been the i7-11800H ES 2.2GHz boards, but it screwed a project over for me.
I did a video specifically with the 11900H ES and PCIe passthrough a couple months ago. VT-d was working, and I was able to pass through network and HBA cards without issue. I had issues with a GPU, but that was likely an EFI boot issue, as I didn't have a BIOS dump of the card to apply.
@@CraftComputing I must've missed that - I'll check it out. I was in fact trying to passthrough a GPU.
How about running all containers vs. VMs? And TrueNAS as the front end?
Sooo, for my home lab I will stay with Ryzen + ESXi. Today it's a 5650GE + ESXi 6.7 with about 15 VMs; in a few months it will be a 7940HS + ESXi 8.0.
I have 16 cores allocated to VMs on a 4c/4t i5 6600 and it runs like a top.
I never allocate all threads to VMs on any hypervisor. I always leave two unallocated for the hypervisor's use.
We use Hyper-V (mostly), VMware, and a few others. At least for us, we did not manage to get Hyper-V to actually use E-cores on either Win10 or Win11. It seems to be a problem with the images we are provided with (and they already have many other problems, like being bloated to over 40GB without the actual source or any dev tools).
And for us that is kind of a problem, because for whatever reason the higher-ups decided that software development is done in local VMs on laptops... where they limited us to low-power Intel systems. Yeah, I got an i5-1250U... 2P+8E... most of the performance can not be used -.-
You were going for max CPU commit (and completely avoiding CPU overcommit, which is what VMware is greatly abused for in corp-land), but what about... not? Leave 1 or 2 CPU cores short of max committed to VMs and see if that pans out for the big.LITTLE system. Odds are it will wind up being E-cores that are left over, but that should be fine for the scheduler and minor system management on average...
"Science!" And with what Broadcom had done, VMware may not have that market share for long.
Thanks for the video❤
I'm sure the crashes and issues you faced are due either to the Erying boards/BIOS/firmware being buggy, or to a flaw in the kernel. big.LITTLE CPUs have been supported in KVM, and thus Linux, for quite some time now, and overcommitting definitely should not be an issue. We run 16x CPU overcommit in our cloud without any issues. Even if all threads light up and not all VMs get their configured amount of CPU time, it might feel slow to the end user, but nothing will crash.
Great stuff.
A word of caution about Erying boards: my 11th Gen i7 ES 2.2GHz board now refuses to POST after 11 months of ownership, and the Erying "warranty" is a joke anyway. I've tried everything to get it to POST, with no luck.