I wrote some of the ISO installer and Windows code for Citrix XenServer back when it was around v6. That team was the smartest group of people I've ever had the privilege to work with.
Are you venting or trying to brag about stuff no one cares about?
Hi Tom! Been waiting for your coverage of 8.3; it finally dropped, thanks! I hope to see much more in-depth coverage of 8.3. Cheers, Boris!
I am more bullish on Proxmox. Run it at home for pfSense and a few VMs. Like how it uses the latest version of Debian along with the latest Linux kernel so hardware support is great.
XCP-ng still has a 2TB limit due to using decrepit VHD storage. This is a pain if you're dealing with large database VMs.
Absolutely right. We were migrating from ESXi to XCP and ran into this issue with our customers' big VMs.
It was really hindering us, and we finally switched to Proxmox.
This is great to know - thanks mate !
Upgraded to 8.3 after watching you do it on the livestream the other day. Went flawless. Been on XCPNG for a number of years in my homelab now and it does what I want it to (for the most part, but that's usually my fault when something doesn't work)
The problem with XCP-NG 8.3, and why I recently ditched it in a cluster for Proxmox VE, is that the CentOS version it is built on is so damn old you can't run any recent version of the Ceph packages/drivers on it (RBD or otherwise)!
I love XCP in most cases, but the super old CentOS base it uses is becoming a right pain in the ass in some respects.
If and when they fix that, I will seriously consider going back, as it has some features that Proxmox doesn't (like live migrating between hosts NOT in the same cluster), but right now the pros just don't outweigh the cons sufficiently.
Dom0 isn't meant to be modified. Also, Xen is not KVM, it's vastly different (in XCP-ng, it's Xen handling all the important features, not the Dom0, unlike in KVM where it's the host itself). If you need to tinker or bend the solution to match your use cases, indeed it might not be the right fit :)
@@olivierlambert4101 Interesting to see a reply from a member of the XCP-NG Team themselves, thanks for that.
I made my point because your documentation actually says that ceph-common (needed for RBD), while not officially recommended, can be installed in dom0 (with instructions on how to do so!) and used, which is great. The problem is that the available packages will NOT talk to any recent version of Ceph, especially Ceph Reef.
Now, that caveat is not mentioned at all, and only by messing around did I figure it out. I then went looking to see if any newer packages were available, but the latest I can get is ~14.x (15.x if I import from other sources), whereas the Ceph cluster I attempted to connect to is running 18.2.x Reef.
I would have reasonably expected that if the possibility of using Ceph RBD is mentioned, I could at least connect to a modern cluster; otherwise, what is the point of even mentioning it in the documentation?
The thing with the Ceph packages is that for any reasonable performance they must run in kernel space, which to my knowledge implies they must run in dom0, as I don't recall any Xen-specific Ceph packages existing.
I was originally going to use iXsystems TrueCommand clustering to get redundancy via SMB/NFS, but iXsystems decided to deprecate that before it even got out of beta, and I needed storage redundancy on a budget (small cluster, limited funds), so Ceph became the next idea. When I discovered that XCP-NG simply would not talk to Ceph Reef, that was the nail in the coffin for XCP-NG. Yes, I looked at XOSAN v1, but just did not like it, and XOSAN v2 wasn't available at the time either (and I haven't rechecked since).
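For anyone hitting the same wall: actual client/cluster compatibility is governed by the cluster's feature-flag requirements (see `ceph osd set-require-min-compat-client` in the Ceph docs) rather than the version number alone, but a quick version comparison catches the obvious mismatch described above. A minimal Python sketch, assuming the usual `ceph --version` output format:

```python
import re

def ceph_major(version_output: str) -> int:
    """Pull the major version out of `ceph --version` output,
    e.g. 'ceph version 14.2.22 (hash) nautilus (stable)' -> 14."""
    m = re.search(r"ceph version (\d+)\.", version_output)
    if not m:
        raise ValueError(f"unrecognised ceph version string: {version_output!r}")
    return int(m.group(1))

# The situation described above: ~14.x client packages in dom0,
# 18.2.x (Reef) on the cluster side.
dom0_client = "ceph version 14.2.22 (hash) nautilus (stable)"
cluster     = "ceph version 18.2.1 (hash) reef (stable)"

gap = ceph_major(cluster) - ceph_major(dom0_client)
print(f"client is {gap} major releases behind the cluster")
```

Run the real commands on dom0 and on a cluster node and feed the strings in; a multi-release gap like the one above is a strong hint you'll hit exactly this problem.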
@@KSSilenceAU I'm not just a member of the XCP-ng team: I'm also the creator of both the XCP-ng & Xen Orchestra projects, and the CEO and co-founder of the company behind them.
As for Ceph, our official documentation clearly states that it's not officially supported (on the Storage page, see the table with the "Officially supported" row; Ceph isn't there). Also, in the Ceph section specifically, there's a big yellow warning saying "it may work or not and it's not supported".
If you aren't happy with the level of Ceph support, I can understand your frustration, but it's clearly documented that it's not supported and doesn't work out of the box. It might get better in the future, but for now we have to choose priorities, and sadly there's not enough demand for that (vs. other more pressing things).
Also, if you think the documentation isn't clear enough, there's a link at the bottom of each page ("Edit") so you can improve it; contributions are welcome.
XCP-NG isn't meant for hacking a lot of crap into it; it's a corporate virtualization platform meant to run stable workloads. Home use is an option, but supporting extra stuff bolted onto the hypervisor's dom0 can only make it less stable. If Proxmox serves better for that, fine, but the much more "chaotic" approach also makes it less attractive for corporations.
The style of the response that you get from the CEO is a HUGE part of why I don't use XCP-ng over Proxmox VE.
His response basically boils down to RTFM.
But that does nothing to address the actual, primary concern, which is that the code base does not pull in more up-to-date versions of Ceph.
(I recently finally had to migrate off of CentOS and onto Rocky Linux *because* CentOS is too old now, which is rather unfortunate. (Thanks IBM! [/s]))
The video that I was waiting for... greetings from Brazil!
Huehuehue BRBR
I think Proxmox can better capitalize on the VMware situation.
That's not the impression I'm getting at all. Spoke to a guy doing a demo who worked for a major server manufacturer and he spoke of another larger corporation testing options to their VMware, they had already eliminated Proxmox but were cautiously optimistic about XCP-NG.
I currently work with ESXi and I'm surprised by how common upgrade-breaking issues are. I thought running something expensive and supported like VMware would give you peace of mind, but this is not the case. You need to take care of a VMware cluster the same way you take care of a Proxmox or XCP-ng cluster, with the difference being that you need support for VMware's products, while you can repair XCP-ng and Proxmox yourself.
I concur; ESXi is surprisingly mediocre for an "industry leader", "best of breed", and all the buzzwords. You can do 90% of what most companies use it for with Proxmox, XCP-ng, or even Hyper-V, and get a more sane and reliable host/cluster.
I used to use ESXi in my "home lab" setup, and I lost 1.5TB of data because I didn't realize that by default ESXi deletes the VM's disks when you delete a VM, without asking you to confirm... I was dumb.
To make things even worse, the partition style and the filesystem are a nightmare for recovery tools... I found just one tool that was able to read the partitions and recover my data, but since it costs $800 I couldn't afford it.
I was short on storage, so I didn't have any extra backups! Lesson learned...
Just bought and installed a fanless "router" style N100 PC that came with four 2.5-gig Intel i226 NICs, and XCP-NG 8.3 worked perfectly out of the box. XO Lite is great but still very limited; it's nice to have at least basic controls remotely, like starting up your Orchestra if it's down. Running pfSense virtualized on it with passed-through NICs, plus some other housekeeping-type home servers.
Just need vGPU and VDI support, then I will be happy to move into XCP-NG.
By default no, but by copying a binary from XenServer you can get it.
Nice video! Really helpful and clear. Thanks 👍. Could you do a video to compare it with proxmox?
Liked your videos; however, it's lagging behind Proxmox for multiple reasons, such as CPU support and the old host kernel. Passing through PCI devices shouldn't require a host reboot once they're excluded from the host. Maybe it's stable, but it's lagging: not in performance for me, but in innovation and usability of the components compared to Proxmox. It was my previous hypervisor, but it's still lagging behind the other competitors.
I often hear comments about 'Proxmox having a more recent kernel,' but it's worth clarifying that in XCP-ng, the hypervisor itself is not Linux, so the kernel version isn't directly relevant to performance or functionality. This is a bit like focusing on the gas tank size of an electric car: it misses the key point. There are certainly meaningful discussions to be had about XCP-ng and Xen, and understanding these nuances helps keep the conversation relevant.
@@olivierlambert4101 Thanks for addressing it. However, XCP-NG is based on CentOS, and that's a fact; even if I installed Xen on Debian with the latest kernel, other features would still not work at the moment, such as support for vGPU, device passthrough without rebooting the host on every PCIe device assignment, and more.
BTW, I ran Proxmox for 4 years at an enterprise gear company and pivoted to XCP-NG, but I've now moved back to Proxmox because of the simplicity of things such as deploying cloud-init templates in a few clicks, plus issues like no virtual sound device on VMs, limited disk types and controllers, and, lastly, the lack of any good VDI for XCP-ng.
Thanks Tom. Always appreciated.
Switched to Proxmox, since the XCP-ng installer completely ignores my NVMe drives on a Genoa system.
Like it; however, hyper-convergence is the top priority for my current professional situation. Otherwise XCP-ng looks great! XO Lite looks like something I'd use a lot.
What are you on, VMware? Have you tried Proxmox?
Hi Tom!! Love your videos. We finally got this feature, but how do you exclude raw disks that are passed through to a VM from backups or snapshots?
Naaah…..Proxmox Gang here fool, represent 😂😂
proxmox needs quorumless clustering so bad
@@manitoba-op4jx Yeah, I've been bitten in the ass at least twice because of this.
I love how the comments suggest I translate to English 😮
Side question, does XCP-NG support Big/Little Intel CPU cores?
For anyone wondering I did successfully install it on an MS-01 with a 12900H, and had no issues with the CPU so far.
Still doesn't support disks of 2TB or more 🙃
The new storage server is in beta right now.
@LAWRENCESYSTEMS so looking forward to this; once the beta adds the ability to migrate those disks, I'm gonna be all over it (especially if it also increases that migration speed from 50MB/s)
It would be great if the Terraform and Packer providers got some love, and some examples that work reliably with 8.3. I'm also looking for solid descriptions of how to deploy Flatcar Linux onto this platform.
The Packer builder for Xen is there, same as the Terraform provider. Probably not as feature-rich as alternative solutions, but it's there.
I’m excited for this, and the later addition of networking and VM creation. I’ve been looking for a replacement for ESXi 7.3 and this might do it.
Also Vates, if you guys offer a cheap subscription for us home users that just want to tinker and run like a dozen VMs I think that might be popular.
You can compile it from source, which will have almost all of the features enabled; you just won't get support.
I was going to say, yeah, it's literally free and open source for home usage, and their forums are pretty active if you need any support. There are even tools that will build the management VM for you with about 30 seconds of input.
Proxmox is so much cooler..
Some cool improvements. Will the VM snapshot disk exclusion work with other attached devices? For example, I have a USB Zigbee controller that I can't attach to my VM because of the snapshots created as part of my nightly backup job; I have to attach it to another machine and then use USB over Ethernet. Will I be able to attach the Zigbee controller and then exclude it from the snapshots?
Is it just me or is Tom beginning to look like Mr. Miyagi?
What are the hours of Vates support, since they are in France?
I'll take a look when it hits 8.6; the current interface requires too many clicks to get things done.
At work we are actually running multiple sites with Hyper-V failover clusters for our servers and Win10 VDI VMs, but I am starting to consider moving away from Microsoft, since they are not licensed yet and the cost is insane. My only concern is Veeam support, which still seems to be in beta.
I don't understand how your costs will go down if you move. You still need to license your Windows servers; unless you're running Linux servers and Win10 VDIs only, then you can save on licensing for the Hyper-V host itself.
@@affieuk Yes, that's what I meant: switch the hosts from Windows Server to Linux. The Windows server VMs will remain on Windows and be licensed per virtual core, if I'm not mistaken.
@@Heartl3ss21 Yeah, it'll be core-based. Last I looked, a few years ago, it was 8 cores minimum, going up from there.
Depending on the number of VMs, it can be cheaper to move all Windows Server VMs to one node and license it with Datacenter. Automatic activation is a nice bonus, but not by much, since automation will take care of it either way.
@@affieuk True, but who uses a single host for critical services anymore? You need at least two in a failover configuration, and in that case you have to license both hosts with Datacenter, since each can host the full number of VMs at any given time.
@@Heartl3ss21 Yup, 100%; the same goes if you run another hypervisor, though. Microsoft licensing fees are crazy, but lots of others do the same. If you can use open-source software for your needs, with a support contract if needed, that's the best outcome.
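To put rough numbers on that Standard-vs-Datacenter trade-off, here's a sketch. The per-core prices, the 16-core minimum, and the "one Standard license stack per 2 VMs" rule are assumptions for illustration only; check Microsoft's current licensing terms before relying on any of it.

```python
import math

def standard_cost(host_cores: int, vms: int, price_per_core: float) -> float:
    """Standard edition (assumed rules): license every core with a
    16-core minimum, and repeat ('stack') the full license for every
    2 VMs you want to run on the host."""
    licensed = max(host_cores, 16)
    stacks = math.ceil(vms / 2)
    return stacks * licensed * price_per_core

def datacenter_cost(host_cores: int, price_per_core: float) -> float:
    """Datacenter edition (assumed rules): license every core once,
    16-core minimum, unlimited VMs on that host."""
    return max(host_cores, 16) * price_per_core

# Hypothetical per-core list prices; plug in real quotes.
STD_PER_CORE, DC_PER_CORE = 70.0, 430.0

for vms in (2, 8, 14):
    std = standard_cost(16, vms, STD_PER_CORE)
    dc = datacenter_cost(16, DC_PER_CORE)
    print(f"{vms:2d} VMs: Standard ${std:,.0f} vs Datacenter ${dc:,.0f}")
```

With these made-up prices, the crossover lands at around a dozen VMs per host, which is why consolidating Windows Server VMs onto one Datacenter-licensed node (or pair of nodes) is the usual advice above.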
Can XCP-ng be put directly on the net with restricted access to the management? I do this with Hyper-V, and I am trying to find another hypervisor to replace it.
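No one answered this in the thread, but the usual pattern on any internet-facing hypervisor is to firewall the management plane down to a trusted subnet. A hedged sketch that generates iptables rules for dom0; the port list is an assumption (XAPI normally serves management over HTTPS on 443), so verify against the XCP-ng docs before applying anything:

```python
# Ports assumed to carry XCP-ng management traffic; verify for your setup.
MGMT_PORTS = (443, 80)

def mgmt_firewall_rules(trusted_cidr: str, ports=MGMT_PORTS) -> list:
    """Build iptables rules that accept management traffic only from
    trusted_cidr and drop it from everywhere else."""
    rules = []
    for port in ports:
        rules.append(
            f"iptables -A INPUT -p tcp --dport {port} -s {trusted_cidr} -j ACCEPT"
        )
        rules.append(f"iptables -A INPUT -p tcp --dport {port} -j DROP")
    return rules

for rule in mgmt_firewall_rules("203.0.113.0/24"):
    print(rule)
```

The ACCEPT rule for the trusted subnet must come before the DROP for the same port, which the generator guarantees by emitting them in pairs. Better still, keep management on a separate interface/VLAN and don't expose it at all.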
Question on vTPM. Does your host hardware have to have its own supported hardware TPM in order to host VMs with vTPMs?
vTPM is completely virtualized and does not need a hardware TPM on the host. AFAIK it stores the keys in a small virtual disk together with the virtual machine's disks, so it's not as "secure" as a hardware TPM, where the keys live inside a physical chip in the TPM device. But it's not meant to be. Its main goal is to make Windows 11 happy so you can install it in a VM.
@@marcogenovesi8570 Yeah, if no one has access to the physical hypervisor machine, the vTPM virtual disk is "secure" enough. If you run a Win11 VM and malware gets installed in it, the malware won't be able to access any keys, as they are stored in a "TPM device".
So are you telling me that I can now install XCP-NG like Proxmox, with a GUI out of the box, without the extra setup that was needed until now? :)
Eventually that is what XO Lite will provide; it won't be as full-featured as Xen Orchestra.
@@LAWRENCESYSTEMS I'm asking from a home-user perspective, so it seems to be a pretty good alternative ;)
Does it come with a rich web UI out of the box, like Proxmox?
Keep waiting 😂
It's in beta right now, but they're working on it
It comes with a web UI now as Tom demonstrated but it's highly limited still. But of course that doesn't matter, since with the Xen Orchestra appliance you get extensive control of all your XCP-NG servers from one interface.
Aggravation Switch 🤣
Does PCI Passthrough finally work for GPUs? Cause that would be a game changer.
I haven't tried with GPUs, but I've tried with other things and it's been pretty flawless, so I can't imagine it would be a problem. If you've had issues specifically with GPUs while other stuff worked, I'd be happy to test it, though.
@@joshuawaterhousify If you could. I've had success passing through GPUs via KVM and Proxmox to Linux VMs but it's never worked for me to Windows VMs.
Really need Windows VM with CUDA for local AI, RPA, AutoCAD & Premiere.
@ericneo2 It may not be till the weekend, but I'll throw my 2070 Super in and see what I can do. I know Nvidia blocked things on consumer GPUs with code 43 for a while, but I think they opened that up a while ago? I've been meaning to give it a shot for a gaming VM for a little while.
Testing will be on games, Davinci Resolve, and maybe some AI stuff, with a bit of blender or something to make sure that side works as well.
Either way, if you're already on Proxmox and want to stick with KVM, check out Craft Computing; he's got tutorials for it for everything from direct pass through to vGPU
The stumbling block has been Nvidia literally blocking that on purpose on all consumer cards, I believe.
@KimmoJaskari I'm pretty sure they stopped actively blocking it though; I remember hearing that a while back.
You know too much. I hope you don't run into any Bond villains.
Linux 4.19? XEN? What? Feels like 2010…
The 4.19 kernel is limiting some storage features; since this kernel is EOL in Dec. 2024, maybe we'll get something newer soon.
The XCP-ng UI is very ugly 😅
First
Twirly
XCP-NG should support KVM virtualization!
The entire point of it is to support Xen virtualization...
@@KimmoJaskari I don't agree; they want to create an enterprise virtualization solution, and they chose Xen as the tech to do that.
Hmmm, can I now take my mini PC running Proxmox finally over to XCP-NG 8.3. ... 8.2 wouldn't install on it.