I run Proxmox in my business on 6 physical servers, with 40 LXC containers and 10 VMs, plus one local backup and one remote sync as a backup solution. It's been a year now since I moved from VMware, and I am so happy with the support and product quality so far. 🙌🏻
Just moved off of VMware to a cluster of Proxmox machines. I feel there is a little less of a learning curve for some because it's laid out kind of like ESXi. I like it so far.
When granted authority to choose the hypervisor solution to upgrade the development and staging labs at the office from ESXi, Proxmox was hands down the winner for us. An 8-host cluster serves us quite well: pretty much all of the base and extra features of ESXi, combined with PBS, and the teams I support are happy and productive.
After an argument with VMware over my renewal this time, I'm switching to a small 3-4 node Proxmox cluster at my company. We only need 8 VMs plus about 20 containers, so anything that effectively gives me fast, redundant servers with a backup solution I can point at a NAS and back up to the cloud works. My current plan (subject to change as I work through it) is to run Proxmox with backups to a Synology NAS, which is mirrored to another Synology NAS, which is backed up to Synology C2.
It's easy to understand why Proxmox is the preferred choice for homelabs: you can use it free without any limitations, and it is a full-blown enterprise system, not only for homelabs. We have several clusters; our primary is a hyperconverged cluster with Ceph under the hood. It's blazing fast and has all the storage intelligence of Ceph; you can't beat that. It has all the enterprise stuff, and I see it as more enterprise-ready than XCP-NG. We also used Citrix for several years before changing to Proxmox.
The one thing that XCP-NG doesn't have is Hyperconverged Infrastructure (HCI). Proxmox has built-in Ceph, which is a trusted and proven distributed storage system, and their support offerings also include Ceph support. XOSTOR is still in beta, but Ceph has been around for years and is used by CERN. I've been running close to 50 Proxmox hypervisors across 6 different clusters in production for the last 5 years now. My background was running VMware for private cloud customers in Asia, and I'm VCP certified. I started with Proxmox very skeptical, but it has proven itself to be production grade, and I have had no issues performing major upgrades from 6 to 8. There are some quality-of-life things I miss, such as DRS, vDS, and centralized management of multiple clusters. Proxmox Backup is a great product as well. It just works.
We switched from hyper-v to xcp in Feb 23, and it was one long unmitigated disaster - xcp is buggy as hell to the point where I was afraid to even patch or reboot it. Backups and migrations are painfully slow (no matter what your network link speed is). I gave up on it in Nov 23 and switched to proxmox 8 (and now 8.1) and couldn't be happier - wish I had never heard of xcp.
Same with VMware: updates for VCSA brick it almost every time, and you can't create backups via SMB, because no. I will also go with Proxmox at my work, due to it being much easier to learn and operate; in VMware you can't even see disk state from the GUI, and you can't add email notifications without an O365 account to know what is going on. And there is PBS, with full deduplication, health checks, email notifications for backups, and retention policies, all for $100/year...
@@brettemurphy Tried to update it 4 times: once it bricked the whole "OS" and I had to revert it via snapshot, twice the SMB share would be unavailable (even though 10 other machines had access with the same credentials), and it went OK once(!) :)
Tried XCP-NG for 6 months on a production network: the worst and most stressful months. 🙂 Many issues with iSCSI and an HP SAN. You leave everything working, then some random time later random crashes happen, and there was no way to troubleshoot what was going on. As official "documentation" on their site they have videos from this channel, and no matter what keywords I use to search for documentation, it almost always points me back to this channel. 🙂 The official documentation and the number of resources (forums, videos, third-party APIs, tools, ...) around Proxmox is just awesome. I don't want to be in need of official support; I want to be the support. :-) Switched that XCP-NG test-adoption site to Proxmox, and not a single issue since. Not going to try XCP-NG again in the near future. :-)
Couldn't agree more. There are things I love about it, but it can be a nightmare to manage, simply because of the lack of resources and information; I don't think it's anything about the software itself. Edit: though of course that's probably not an issue if you have a support contract.
Sorry, you had iSCSI only; that's exactly the problem. Backups cause all of this on iSCSI. Read the docs: they basically tell you that you need 50% free storage for any snapshots, and then the snapshots on the SAN clog everything up. There's no need for local snapshots when you're uploading everything 3-2-1.
Coming from production on all of the products mentioned... Vates' XCP support was responsive and well worth the price of support and XO licensing. Tickets open directly from the Xen Orchestra interface, and someone is in and working on your system within the ticket's SLA. If they have questions, they ask and respond respectfully and get you operational. Xen Orchestra sees updates regularly as well, with new backup methods arriving without new licensing hassles. From an enterprise perspective, these guys are a breath of fresh air for the industry. @@1979benmitchell
At work, I am using HPE ProLiant DL380 and DL366 Gen 10 servers running Proxmox perfectly for over 2 years now. Never needed support, and the reliability has been perfect. All the servers the company runs are on those machines, and I have had zero issues with Proxmox in those 2 years. It's simple to set up, as reliable as you'd demand for an enterprise environment, and the pricing was excellent.
A few years ago I learned about XCP-ng and XOA from you, and I've now deployed the system to over 20 customers with great success; we also use it in-house for our internal cluster. I love it.
I moved off of VMware to Proxmox at work several months ago for two reasons: first, the VMware 7 fiasco, and second, the upcoming Broadcom purchase of VMware. Since I had been using Proxmox for my homelab, it was a no-brainer to make the move. It's been rock solid with ZFS. Not using Ceph, for performance reasons.
Proxmox works great in production. It's really just a KVM management system, and the workload and tooling required is much lower than with plain KVM; so, unless you need that last little bit of overhead gone that you'd get with a pure KVM system, Proxmox is a great way to go. And, I mean, realistically, it's just Debian, and Debian is very friendly.
I was a VMware engineer for 10 years, and Broadcom has destroyed them. There are very few jobs left requiring VMware, as everyone is jumping ship to Azure, AWS, and Google Cloud due to renewal costs being 10x more expensive. What it's done for me is expedite my long-term plans to move to Linux.
From what I've seen, infrastructure teams don't care much about the technical differences; they want available support contracts: people to call when things go wrong. For that reason, in the businesses I've worked with, Hyper-V is on the table, but mostly hyperscalers like Azure or GCP are favored despite the costs.
There are many reasons companies will choose a worse product that has a support contract. In my experience as an IT manager, it often comes down to not being the last ones to hold the bag if things hit the fan. If you have a support contract or cloud solution, you can often offload some (or potentially all, depending on the relationship) of the responsibility to someone else when unhappy customers or their lawyers come knocking. Not saying I like this, but it has been something I had to come to terms with after being forced off the "better" path in my younger professional days
@@nathangarvey797 Exactly: it's not about day-to-day management, it's about what happens when the SHTF. If a customer's server blows up and I can't get the right CPU in 24 hours, I'm suddenly responsible for loss of production and loss of income. F$%^ that. Dell/HP/MS/AWS have way deeper pockets than I do, and their warranty/SLA clauses will take care of that for me (depending on the scenario). Obviously, I have clients where the infra I've set up is less critical, and for those I set up much cheaper (and still very reliable) options; e.g. I have a few bakery chains with an eCommerce site that runs well for them, but if they crash for a couple of hours it's an inconvenience. By comparison, many years ago I worked for a company that did website hosting for an online retailer that processed at LEAST $50k in sales per hour, 24 hours a day, every day. If they crashed at 2 AM for 2 hours, that's $100k gone. Needless to say, you absolutely want a support contract and a team ready to pick up the phone pronto when you're dealing at that scale. In the context of this video, I've got to be honest: Proxmox is pretty hard to break unless you're running upgrades, so I'm not so sure how crucial access to support would be, but the reality is businesses at an enterprise scale will want the option of 24/7 support.
I tried XCP-NG and found that some of the more advanced hardware pass through tasks required you to be extremely familiar with Linux command line. Not just basic command line, but deep dark corners of the command line.
Which translates to losing half a day finding the right commands if a problem arises. Kind of like a host whose management interface goofed up: an emergency reset did not help, so I ended up having to reload the host. This followed a surprise reboot, and XCP-NG apparently really hates surprise reboots; I've never had to reload a Hyper-V or ESXi host after a surprise reboot. Have not tried yet with Proxmox.
I have managed around 12 Proxmox hypervisors spread across different clusters. It was really stable, and Puppet made it easier to manage. Once I upgraded Proxmox VE 5 to 7 and it just worked. I have also worked with (and still work with) VMware; the only nice thing there is the tooling around the API, with Ansible modules and Terraform/OpenTofu. XCP-ng and Xen Orchestra are really cool; I have tested them, and they need some adjustment regarding the API and tooling, but they seem to be a good solution for migrating off VMware. In my homelab I use Proxmox, since I can get more VMs per hypervisor. In my experience you can get more VMs on KVM than on Xen, but both are much better than ESXi.
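Since the comment above leans on the API tooling (Ansible modules, Terraform/OpenTofu), here is a minimal sketch of what driving the Proxmox VE REST API directly looks like, using only the Python standard library. The host URL, credentials, and node name are placeholders, not real infrastructure; the endpoints (`/access/ticket`, `/nodes/{node}/qemu`) are the standard PVE API paths that those providers wrap.

```python
# Minimal sketch of the Proxmox VE REST API flow: authenticate for a
# ticket, then query a node. HOST and "pve1" below are hypothetical.
import json
import ssl
import urllib.parse
import urllib.request

HOST = "https://pve.example.com:8006"  # placeholder PVE host


def api_url(path: str) -> str:
    """Build a full API URL from an endpoint path."""
    return f"{HOST}/api2/json/{path.lstrip('/')}"


def _ctx() -> ssl.SSLContext:
    # Accept the self-signed certificate a default PVE install ships with.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx


def login(user: str, password: str) -> tuple[str, str]:
    """POST /access/ticket; returns (auth ticket, CSRF token)."""
    body = urllib.parse.urlencode(
        {"username": user, "password": password}).encode()
    req = urllib.request.Request(api_url("access/ticket"), data=body)
    with urllib.request.urlopen(req, context=_ctx()) as resp:
        data = json.load(resp)["data"]
    return data["ticket"], data["CSRFPreventionToken"]


def list_vms(ticket: str, node: str) -> list[dict]:
    """GET /nodes/{node}/qemu: the QEMU VMs on one node."""
    req = urllib.request.Request(
        api_url(f"nodes/{node}/qemu"),
        headers={"Cookie": f"PVEAuthCookie={ticket}"})
    with urllib.request.urlopen(req, context=_ctx()) as resp:
        return json.load(resp)["data"]
```

With a ticket from `login()`, `list_vms(ticket, "pve1")` returns one dict per VM (vmid, name, status, ...); write operations would additionally need the CSRF token sent as a `CSRFPreventionToken` header.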
Good video on the support side of things. Definitely an important aspect for businesses to look at when making decisions, on top of evaluating how the hypervisor will be utilized, what the needs are, and how the product's design decisions impact its use.
For example, XCP-ng is more scalable due to its design, but those design decisions also limit and complicate certain tasks due to the abstractions required for that scale to work (i.e. a fully decoupled control plane, resource pools, etc.). Proxmox is a more Linux-native experience, allowing skill sets to transfer, but it has its own downsides as well, such as a more coupled control plane.
I don't think removing the free standalone Hyper-V option is that big a deal, as it is still available with Win10 Pro/Win11 Pro, and I doubt many businesses are going to NOT use Hyper-V simply because they must also purchase an MS Server 2022 Standard license...
I run a production Domain Controller as a VM on a server running Hyper-V. Never toyed with HA across multiple server nodes but on a single box Hyper-V has been rock solid. I don't know why it gets so much hate other than general M$ animus. It's a perfectly viable enterprise product at least at small scale.
I love all of the different types of videos that Lawrence Systems publishes to YouTube; however, these shorter, "here is what you need to know" type videos are my absolute favourite, right up there with the live videos. These videos really allow me to make technical decisions even faster than if I were only researching my options on my own. Not only do I enjoy them because they are helpful to me, but I also forward them to our manufacturing and software-development partners to help support what we are asking them to support. I use these videos to help support my case or my request with our other partners. Thanks again for doing these videos.
The 2TiB virtual drive limitation is going to screw a lot of people who want to move over to XCP-NG; there are tons of massive SQL and file servers out there on VMware. Chances are we'll just pass the cost along to our customers, since most are just using Essentials Plus anyway. In the end, you have to weigh whether moving to a different hypervisor is worth the capex savings vs. the opex increase.
From a corporate point of view, the question is whether to go the outsourcing route, and then to decide between cloud or "outsourced" in-house, with the primary concern being security and data integrity, not pricing or deployment speed.
I moved my home HPE DL360 Gen9 off of ESXi a few weeks ago to Proxmox. I exported 8 VMs to external storage, installed Proxmox, and reimported them; the uncomplicated VMs were up and running the same day. PCI passthrough took some extra reading and research. I was struck by Proxmox's ease of setup and configuration, and I'm blown away by how much faster all of the VMs run. Wish I had recorded some statistics before I migrated. I'm guessing the increased speed is due to ZFS on the storage pools vs. the old controller-RAID-based storage on VMware.
Hi, just wanted to stop by and let you know your viewers are really recommending your content on my channel. I'll be checking out the XCP vids specifically, and thanks for doing what you do!
Moving from VMware. Currently testing several hypervisors. First one eliminated? Hyper-V. Tomorrow we test Proxmox. Can't wait for XCP-NG! The support goes in the plus category.
After much frustration with VMware I moved to proxmox and had it set up easily. Forum community support is great. We're pulling the trigger on five nodes plus a PBS in production January 2nd and getting the licenses. I think for small businesses with limited budgets and no full time IT person it makes a lot of sense. Far easier to configure and use than VMware and less hardware restrictions.
Because I tinker around more as a hobbyist, with a Mac and Windows background, I have used VirtualBox on Mac (for Home Assistant) and Hyper-V on Windows (for various stuff). I intend to have a play with Proxmox and XCP-NG. My suspicion, based on my initial research on both, is that Proxmox is going to be easier for me to get up and running.
You didn't address storage. One of the reasons for Proxmox's popularity is that it includes ZFS out of the box, so you can easily do software RAID for VM storage. Re: backups, Proxmox has a separate, also FOSS, backup solution, Proxmox Backup Server, and from what I've read it's quite good. Feels weird that you mentioned XCP's backup but not PBS.
Thumbs up for XCP-ng. We’re a small VMware customer, our renewals for this year have come in around 5x more expensive, so I’m currently persuading management to consider the Vates support model, exactly because of their responsiveness and engagement (first hand experience of that in my home lab). Good, informative video Tom! 👍🏻
The biggest problem I ran into with Hyper-V Server is people starting to think it's a brilliant idea to install all kinds of software on it (e.g. Veeam B&R or an AV server), or to turn it into a file server or even a DC, because "it's Windows Server". Otherwise a valid hypervisor for me, if you already have Windows Server licensed in your environment. Btw: XenServer 8 is no longer part of Citrix and now has a trial version for download that doesn't need registration and isn't limited in terms of features. But it's no use for production.
For Hyper-V, the box must be dedicated to that function. It may be feasible to make a DC out of some Hyper-V servers, but only for Hyper-V itself: no connection to the normal domain, just for Hyper-V to form clusters and manage itself. I hate it when Hyper-V servers are members of the production domain which itself is only hosted on those same Hyper-V servers; lots of problems when something isn't booting correctly. For bigger installations I would install the hypervisors as Server Core and have dedicated DC and management infrastructure alongside them. This also prevents others from messing with the software installed on the hypervisors.
Any tutorials and resources on how to set up Windows Hyper-V Server 2019 Core and run VMs on it? Looking to host 4 VMs on it, all Windows Server 2019: one as a DC, one as a backup DC, one file server, and one DHCP server.
That's pretty dumb, and it's why I went with Hyper-V Core; not to mention I don't let anyone mess with my server. You can make it a "DC"... by creating a VM that's a DC! Not rocket science. Need Veeam B&R? Create another VM and run it there. Sounds like your IT dept (does it even have an IT dept?) is run by a bunch of lightweights. I have 15 years of enterprise experience, and while I'm humble enough to admit there are plenty of people out there who still know more than me, I DO know you don't run other roles on your hypervisor. Architected properly, Hyper-V is perfectly good.
@@hifiandrew Actually not my IT department, but what I have seen out there in the wild. As for the backup server and DC: if the resources and money are available, I would always put the backup server on a dedicated bare-metal server without domain-joining it, and put one DC on bare metal as well.
I started my "Homelab" with VMware many years ago on a HPE ProLiant MicroServer Gen8 and switched to Proxmox two years ago - to be honest, because the GUI looked nice and a bit like VMware. Then I saw some of your XCP-NG videos and tried it out. Since then I have stayed with XCP-NG because it just runs without any problems. I really like the solid updates and the backup features. Thanks a lot for all your great recommendations.
Here's the way I look at it. Running a homelab, or an SMB with limited resources, i.e. a single server? Proxmox. My firewall has been virtualized on Proxmox for a long time now, backed up to my NAS via a Proxmox Backup Server VM on said NAS, with no issues. A Home Assistant VM and a Linux container with 21 Docker workloads also live on the same device, also with no issues. Looking to replace your multi-server VMware ESXi cluster? XCP-NG all day long and twice on Sunday.
I use mostly VMware in business and KVM in my lab. I think I'm going to change over to XCP-ng in the near future. Not happy with VMware after the business move; I never had an issue with the product, just their business BS. Another reason I am switching to XCP-NG is that it loads from SD cards, which is how I set up my ESXi hosts to run. KVM and Proxmox (which is based on KVM) are full OSes, similar to Hyper-V.
Not true. A Xen node basically runs all the things Proxmox runs, minus the web interface; it's all really just system stuff that needs to be on a node. You can't compare that to Hyper-V or VMware. Proxmox overhead is almost nil: it's just a nice set of management tools, but all it does is rewrite configs. There's no bloated management layer; it's just plain Linux daemons for the various services (network, KVM, etc.). Proxmox doesn't use databases or anything similar; most configs are written directly into daemon configs or into a couple of plain text files that get synced across the cluster, just a few KB in total. If anything, it's more like a Xen node on each host plus a tiny web interface. Everything else is the same on VMware or Xen: each node needs more or less the same services doing the same things a Proxmox node does.
Adding insult to injury, VMware no longer supports SD cards for boot. This is what prompted me to move our production servers from VMware to Proxmox: at the time version 7 was installed, users started having problems with failing SD cards due to excessive log writes. I was forced to either rebuild the VMware servers or move to another solution, so I went with Proxmox.
Finalizing my homelab migration from XCP-NG to Proxmox. I still like XCP, but I was never able to get backups functioning properly on XO from source, and I've had a few scares where I thought I lost VM data. The biggest issue I have with XCP-NG is the lack of thin provisioning on iSCSI LUNs (Proxmox has the same issue, and I know both XCP and PVE could implement thin provisioning, since VMware does it). I tried using NFS, and there was a noticeable performance drop even with the recommended tweaks applied to the configuration. Note I was and still am using 10Gb networking in the lab, so the slowdowns have no excuse. If they ever implement thin provisioning on iSCSI, I will consider migrating back. I'm also downsizing from rackmount servers to SFF desktops to try to save on electricity costs, and Proxmox made more sense this go-around, as I have successfully performed backups and restores in Proxmox in the past. The performance I'm experiencing with Ceph is also impressive.
0:09 My employer is doing exactly that... migrating from VMware vSphere to Microsoft Hyper-V... We have the opportunity to make a fresh start, and they go straight into proprietary city without learning anything...
I used XCP for several years, for some reason I still can't stand the XO interface, ended up using Xen Center most of the time. Gave Proxmox another chance around a year ago and I doubt I'll ever go back.
For me it's not only about the servers; it's how to manage and distribute VDIs. Horizon works very well, with a fast protocol... The problem is not only with the infrastructure, it's with the other software. If you don't have VDI, only servers or standalone machines, it's far easier to make the change.
@@LAWRENCESYSTEMS If it weren't right on their very site, I wouldn't have written my comment. It is literally right on there. At least it was until about 5 mins ago. Wish I could paste links or screenshots on here.
@@Noodles.FreeUkraine I misread your statement, yes, it's "within a business day" based on their time zone and working days and IS NOT 24x7 as offered by XCP-NG.
@@LAWRENCESYSTEMS It just sounded to me as if they didn't offer any SLA at all. Tbh, I can't see any 24/7 offerings from XCP-ng, either. From Vates, you get it at a minimum of $5,400/yr. All the offers below, while starting at $2,000/yr, only offer "working day support" with a 24h response time. From that point of view, even the €325/yr Proxmox Basic sub offer sounds more attractive. In general, I like how many different support levels there are with Proxmox, the range goes from €105 up to $980/yr, covering all needs and purposes. Mind you, you're probably talking from an enterprise perspective, where a few thousand dollars more or less don't matter. For smaller businesses, XCP-ng's pricing gets pretty expensive. But if you're comfortable with XCP-ng and your clients don't mind the hefty upcharge (heck, most will be used to VMware pricing anyway), I don't see any harm in using it. I just think Proxmox is very stable and reliable with a great pricing model on top.
@@LAWRENCESYSTEMS Yep, it's 2 hours on business days vs. 1 hour 24/7 (you should test that on Christmas Eve 😉) for the Premium/Enterprise tier, and 4 hours on business days vs. 1 business day for the Standard tier. So standard support is actually better (and cheaper) with Proxmox, while the top-tier package with XCP-NG is slightly better (at least on paper), but also more expensive. Overall, they're pretty similar; sure, if you're in the US, XCP-NG is probably better, but mainly because of the time differences, and because Austrians apparently don't like to work on weekends. 😉
Moving from VMware to XCP-NG has been a pleasure. When we had issues, we reached out to Vates, and even their CEO helped us with direction and support. For our infrastructure, 1:1 feature parity. Clustering: amazing and straightforward. Have a cluster with different CPUs? Warm migrate. Continuous replication to a stack thousands of miles away? You have it in two clicks. When I was labbing and researching, it was hard to get past the Reddit static that is Proxmox. Xen is still kicking......
There seems to be a lot of wrong information out there regarding the Hyper-V Server product. As much as Hyper-V Server was free to use, running Windows VMs on top required licensing: you had to license Windows Server Standard or Datacenter based on the number of physical CPU cores in the host, so there really wasn't any cost savings unless you only ran Linux VMs. We are a VMware shop and have been for 20+ years; unfortunately, moving to alternatives isn't easy, as a lot of our infrastructure workflows use VMware-specific technologies that don't really exist in other solutions. I'd be interested to know why you don't think Hyper-V is a good product. In previous roles I have supported Hyper-V environments from small to large scale with 2000+ VMs, and it worked well without any issues. Like anything else, when configured properly it performs as well as any other hypervisor platform.
This could be because Microsoft wants to move everything to Azure. I don't know the current pace of (feature) updates for the Hyper-V role. Yes, I use the Hyper-V role in small orgs.
@@kristeinsalmath1959 Even so, Hyper-V is still under active development and if you're starting from scratch, you can actually run the on-prem azure-ified hypervisor so you can migrate to the cloud quite readily if you want to go that route.
You forgot about Red Hat Virtualization (RHV) and its equivalent, Oracle Virtualization (OLV). OLV can be used for free without support, and both have good support options.
I've used ProxMox in a home lab for three years and I've only had two or three major breakages with network access or bootloader bugs. In fairness, the bootloader breakage and network device enumeration issues were inherited from Debian.
What do you think about Harvester HCI from SUSE / Rancher? Especially if you consider using Proxmox with CEPH or XCP-NG with XO-SAN for hyperconverged.
Valuable video 👍❤ Nice, succinct comparison of XCP-ng vs Proxmox for businesses. Awful about VMware situation. Kindest regards, friends and neighbours.
I love XCP-ng and was using it for about 2 years, but I recently switched to VMware in my lab with VMUG. One of the major reasons for the switch was that ESXi is better at automated orchestration and being driven by Terraform. XCP-ng does allow for this, but it doesn't expose as many options, is overall less polished, and its cloud-init support is pretty bad. Along with this, the ability to do vGPU was the point where I switched over. I am pretty happy with the switch for now; my wallet, not so much.
Hyper-V is actually a great platform with System Center Virtual Machine Manager. Neither of them matches vSphere, IMO. But if you don't need what VMware offers, the other platform can get the job done just fine.
@@joejoe2452 If the Hyper-V Core host gets domain-joined, along with a client or GUI server joined to the same domain, it's straightforward. Otherwise it requires some tinkering to establish trust.
For business it is irrelevant what they use, especially really large ones like telcos, banks, and similar, as they usually buy servers from big vendors like HP and Dell (in most of Europe), and vendors like that officially support only VMware, Red Hat KVM, or Windows/Hyper-V, since they can guarantee support, upgrades, fixes, and new drivers for NICs, FC cards, controllers, and similar hardware working with newer versions of VMware, for example. So they will only go with officially vendor-supported platforms. I guess this migration from VMware will depend on the type of business and the certificates required for compliance. But for Proxmox vs. XCP-ng it's still irrelevant, as both are good; in the end, all business setups will be in HA mode with extra nodes sitting there waiting to take over if something dies, so you don't really need a 1-hour response time 99.9% of the time. A 1-hour response time is more crucial on the NAS/SAN side, as that is needed by all servers/nodes/clusters.
I love Proxmox, mainly for Linux purposes; however, I don't feel the need to rely on support from them, because I don't do anything out of the ordinary in terms of Proxmox customizations. I am happy with Proxmox due to the networking abilities it gives me. I currently run 3 Dell PowerEdge servers, each with a Proxmox installation, and they all have quad Ethernet NICs. LACP is by far a breeze to configure in Proxmox; you can also LACP 10G SFP+ connections, as long as the OS recognizes the cards in the networking settings.
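For anyone curious what that LACP setup looks like on a Proxmox node, here is a minimal sketch of an 802.3ad bond feeding a Proxmox bridge in `/etc/network/interfaces`. The interface names (eno1-eno4), bridge name, and addresses are placeholders for illustration, not taken from the comment above; the switch ports must be configured for LACP as well.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The same bond and bridge can also be built from the Proxmox GUI under the node's Network panel, which writes an equivalent configuration to this file.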
Same, for now. We were just refreshing a couple hosts and considered the move to Proxmox. But with the time available the refresh (storage) is getting done and then reinstalling ESXI. All our support infrastructure like backups are aimed at ESXI. Moving to Proxmox is just not feasible right now. I have a homelab Proxmox to learn it better. But I’m not yet fluent in it.
I use Hyper-V and I do not have any problems. I have a workstation with an i5 and 32GB of RAM running 5 VMs with no issues whatsoever. The system runs for more than 90 days without any problems. The only problem I've had is the power getting shut off, because CA does not know how to build a power plant. So everyone, stop crapping on Microsoft. I have used their products professionally, and I have never been let down, from a crap AMD A8 to a server with 48 cores. No issues here.
I use good old KVM at home because... I've used it for 10+ years after switching from my old free VMWare install. At work I've always used VMWare. Thankfully, at my current job, I am not involved as much in virtualization anymore - just Linux. I setup an XCP-NG server at the last job, but there was no sense of urgency at the time because the Broadcom sale was years away. I just setup XCP-NG at home over the weekend but am not ready to migrate my entire home server to it. Maybe when it's actually time for a new server I will consider it. My only complaints so far is that it's laid out so differently from other virtualization environments that it's hard for me to find what I want. Fortunately, XCP-NG Center helps a bit with that. Must be another UI team on that vs. XOA.
Virtualbox, right or wrong, my hypervisor since 2009. My oldest VM: Windows XP Home installed and activated March 2010. It survived 2 VBox owners; 3 desktops and 4 CPUs :) :) I still use it a few times per week to play the wma copies of my CDs and LPs!
I am going to do something for my home lab. At this point I am just planning and checking for the necessary equipment to start with. I am thinking of an old Dell R720 with 8 3.5in HD bays, and I want to decide between Proxmox and XCP-NG. I would also like to rebuild my NAS with something like FreeNAS or TrueNAS; a server with 8 HD bays is enough to put HDs in and create the NAS. But I think having a machine with dual Xeon processors just for a NAS is a waste of power and resources, so I am planning to install the virtualization engine there as well. I want to know your opinion on the best way to integrate these two in the same box, virtualization (maybe containers as well) and NAS:
1. Install the virtualization layer, then install the NAS in a VM on top of it, with the physical HDs attached.
2. Install the NAS on the same OS, like another service on the physical machine.
3. Install the NAS, then install the virtualization on top of the NAS software.
I had experience playing with Proxmox and XCP-NG in the past, but it was years ago, on different hardware, and just creating test environments. I also think a server like this would be good for playing with containers. It could be cool to have all this in one powerful box with 8 HDs, 256 GB of RAM, and dual Xeons. What can you tell me about this idea, and what route would you take? Thanks in advance for the advice.
So what about Nutanix, OpenStack, byhive? We are migrating from Nutanix to VMware at work. I find VMware to be more geared at the boomer generation of Windows users, with its GUI-first setup and linear wizard methodology forcing the user to click through multiple menus to achieve what would otherwise be a simple task.
Copy and paste from my other reply: Nutanix? Absolutely YES. I get the feeling this video is geared towards home lab people or small businesses. I'm a sysadmin at a university department and I moved us from traditional 3-tier VMware 6.5 over to Nutanix AHV HCI a few years ago, with zero regrets. Admin is a piece of cake. Updating is a dream - set it and forget it. Support is excellent; even my lowest-priority tickets get a response within a few hours. It integrates well with Veeam for our backups. These other hypervisors are cool for tinkering, and I'm all for that - I love to tinker in home labs as much as the next nerd. But for enterprise, you need a serious system.
I'd like to interject for a moment. I recommend Hyper-V to small organizations that already have Windows Server licensing - in this case, Standard. Hyper-V is not bad at all, just abandoned by Microsoft in favor of Azure. Probably Proxmox for small and XCP-NG for mid-size organizations. Can someone tell the Proxmox devs to make an agreement with another company to support their product? If they don't want to support it 24/7, they could just have another company do it for them.
Hyper-V is not abandoned. One of the new things that comes in the next Windows Server iteration, and can be tested with vNext: GPU partitioning. Until recently, this was only available in the client Hyper-V.
Proxmox is easy; that's its main selling point. XCP-NG has the issue that if you want easy updates to XO you have to pay for it. For a larger business XCP is probably a no-brainer.
We use Hyper-V for our internal kit. As we're an MS Partner we get Datacenter licences, which means we can activate the VMs using AVMA keys and don't burn through licences. How would I license, say, 100 Windows VMs on XCP-NG without burning through 100 licences, rather than a couple of host Datacenter licences with AVMA keys?
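For context on how AVMA activation works on a Hyper-V Datacenter host: the guest activates against the host itself, no KMS or internet required. The key below is a placeholder, not a real AVMA key - Microsoft publishes the actual per-edition keys in its documentation. A rough sketch of the guest-side commands:

```shell
# Run inside the Windows Server guest on a licensed Hyper-V Datacenter host.
# AVMA activates the guest against the host; no KMS server or internet needed.
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX   # install the edition's AVMA key (placeholder)
slmgr /dlv                                  # show detailed licence info to verify activation
```

And that is exactly the commenter's point: there is no XCP-NG equivalent of this host-based activation, so each guest would need its own licensing path.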
I've used Proxmox in my home lab for over a year now and it's been solid. I had to create a few scripts to manage the memory usage, since running 3 VMs would cause the host memory usage to increase - but no issues. I used VMware ESXi ARM on my RPi4 for a few months and found it had some lag running simple VMs. At my previous job we used MS Hyper-V in a production environment for hosting all of our server/workstation-based VMs, and it did the job well. My choice for my home lab is Proxmox, since I can customize it, run scripts to help manage the memory, and learn things about it. I'm considering, in the near future, migrating my Windows Server VM to an AMD-based Proxmox server and creating some extra VMs on it. My Intel-based one will host my RHEL and Linux VMs, since it plays better with the resources. Remarks: running Windows Server with a GUI and headless seems to use the same amount of memory. RHEL and Ubuntu have lower memory usage and keep my Proxmox server stable.
Running Proxmox at home, just because it was the one I read about the most. Other than the upgrade from 7 to 8 borking my networking, it's been fine. I'm about to wipe the system and install a larger SSD mirror, and with PBS it shouldn't be a problem, wink wink.
Biggest question: what to do with vSAN hardware? Any thoughts on having distributed storage the way vSAN does it? Also, vMotion, Storage vMotion, DRS, and HA are great. Does either of the alternatives proposed here offer feature parity?
Ceph is really well-developed at this point. You could also use Proxmox as the OS (it's basically Debian) for your Ceph cluster and export the storage to XCP-NG. Why not turn the vSAN hardware into more potential VM hosts? It may make more sense to run VMs with storage-heavy requirements directly on those hosts. Adding some RAM might be a good move though.
I think this video is geared toward home lab people or very small businesses. Anyone running serious enterprise-grade stuff is going to go with a commercial vendor. We switched from VMware to Nutanix 3 years ago and have been very happy. It has all those enterprise features you mention, and their support is excellent. It's not cheap, but you get what you pay for.
I'm wondering whether Harvester will gain traction in the world of managing VMs, or if Tom is going to try it. It looked very promising to me, especially as I'm using Rancher and Kubernetes anyway, but I'm a bit hesitant due to the limited community info and experience out there, at least for now.
@@LampJustin That's interesting to hear. I know it ships with Longhorn, but I would have assumed you could just bring your own storage provider as well. I haven't tried Linstor yet, and Longhorn seems to be slowly maturing, but I'd also prefer to stay independent.
After running a VMware cluster with vSAN in my home lab for a few years (thanks to VMUG Advantage, it'd have been way too expensive otherwise), and then using XCP-ng for a few months (it was ... fine, but the networking setup was a disaster after getting used to vDS), I recently set up Harvester after I grew tired of waiting for XOSTOR to materialize. I'm still on the steep part of the learning curve, and it does still seem rough around the edges, and it's a bit too resource-hungry for home labbers with small/low power setups, but I hope SUSE sticks with it because it does seem like a promising solution.
@@JoshLiechty I think so, too. I've been using Rancher since version 1.x, and they had a real history of being rough around the edges, at least before being acquired by SUSE. Sometimes it felt like "just claim it's enterprise production ready and ship it". But now with RKE2 it really feels like everything is done right. I'm definitely excited to follow the Harvester project in the future.
Absolutely YES. I get the feeling this video is geared towards home lab people or small businesses. I'm a sysadmin at a university department and I moved us from traditional 3-tier VMware 6.5 over to Nutanix AHV HCI a few years ago, with zero regrets. Admin is a piece of cake. Updating is a dream - set it and forget it. Support is excellent; even my lowest-priority tickets get a response within a few hours. It integrates well with Veeam for our backups. These other hypervisors are cool for tinkering, and I'm all for that - I love to tinker in home labs as much as the next nerd. But for enterprise, you need a serious system with full support. That costs bucks. That's the difference between enterprise and hobby systems. I was even able to get Nutanix certified for free; I'm NCP-MCI 6.
I have been a Linux KVM user for somewhere around 25 years. I run it on Debian and Ubuntu. Currently looking for an alternative, leaning towards XCP-ng. Probably won't do Harvester; still irritated with SUSE over their MS affinity.
So, I've been looking around. With the new Broadcom/VMware pricing plan, for us, for 3 servers with dual processors that have 12 cores each, it's $3,800 for a year. Support for XCP-NG at an equivalent level is $4,000 per year, and support for Proxmox (from them) was about the same. Given the current support prices, I cannot see a good reason to dump VMware and take on all the headaches that come along with migrating. For a home lab, self-supported, it makes perfect sense to migrate.
I've tried XCP, but the issue I had about two years ago was that the app I needed to run used Docker, and I was running it with Docker for Windows. Docker never initialized; it kept erroring that it needed nested virtualization enabled, which it was... So I gave up and moved to VMware... (now I might have to move off that too at some point).
That is because Docker on Windows runs a Linux distro using Hyper-V; the containers are actually running inside a Linux distro. You were basically trying to run Windows virtualized (Xen) in XCP, and then Linux virtualized (Hyper-V) inside your Windows VM. Even if you can enable this, it is something nobody would suggest. Running virtualization inside virtualization is like running 2 antivirus products at the same time: possible, but the overhead is complete overkill. You could just launch a Linux distro directly on XCP running Docker and avoid Windows entirely.
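The suggestion above - a plain Linux VM on XCP-ng running Docker, with no Windows layer and no nested virtualization in the path - can be sketched roughly as follows. The installer URL is Docker's well-known convenience script; your distro's own packages work just as well:

```shell
# Inside a Linux VM created directly on XCP-ng (no nested virtualization needed,
# because the containers share the VM's own kernel):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh                 # Docker's convenience installer
sudo docker run --rm hello-world      # sanity check: pulls and runs a test container
```

Since containers are just processes on the VM's kernel, the whole "nested virtualization" requirement disappears once Windows is out of the stack.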
We run a lot of Proxmox VE clusters at scale, with a lot of nodes and Ceph, and have had no issues even with a lot of VMs; we do the support ourselves. What is the workload on the clusters that are having issues, then?
One of the problems I had with Proxmox was getting the installer to boot. I thought I'd try Proxmox about a year ago on a spare HP ProLiant. I created the USB, and it booted up to a GRUB error. I fiddled with boot settings for a while, tried Rufus, tried Etcher, etc. Nothing worked. So I put it away for a while and looked at it again recently. I created a boot USB. Tried booting. Nothing. It just won't boot a Proxmox installer. Finally I tried Ventoy, and that worked - but not for 8.1; I had to go with version 7.4. So the difficulty I had loading the Proxmox installer is concerning. I don't have that issue with ESXi, and that makes it difficult to commit to the product for a production environment. Still, I'll play with it. We'll see what develops.
This is why Proxmox is not very popular on server-grade hardware. It is based on Debian, and Debian refuses to boot on almost anything because it can't find the drives; unless it's ancient hardware, it's very picky. This is also why Debian is not very popular as a Linux distro on laptops or workstations - it's either Fedora (RHEL-based) or Ubuntu, because of hardware compatibility. Picking Debian as the main OS is very nice in terms of open source, but horrible for a corporate environment, or if you expect people to run it on as much hardware as possible. I guess many people could not even get past the boot screen on most servers.
I don't know anyone using it and I try to base my reviews on real world testing whenever possible. We have lots of real world production use with XCP-NG.
Agree. We used Nutanix AOS and hardware at my last job. It was rock solid, and that was 5 years ago. ❤ My current role has me back on VMware. As other people mentioned, if the hardware vendor doesn't support the hypervisor in an official capacity, we simply will not use the product. It is no fault of the others.
I run Proxmox in my business on 6 physical servers, with 40 LXC containers and 10 VMs, plus one local backup and one remote sync as the backup solution. It's been 1 year now since I moved from VMware, and I am so happy with the support and product quality so far. 🙌🏻
10 VMs? What is the use case? Also, what are the specs of the bare-metal hosts?
/\ also bumping for more sauce, just because I like to look under the hood... certainly not to scrutinize.
how much do you pay for licensing?
But Proxmox VMs are slower than VMware ESXi, aren't they?
Just moved off of VMware to a cluster of Proxmox machines. I feel there is a little less of a learning curve for some because it's kind of laid out like ESXi. I like it so far.
When granted authority to choose the hypervisor solution to upgrade the development and staging labs at the office from ESXi, Proxmox was hands down the winner for us. An 8-host cluster serves us quite well - pretty much all the base and extra features of ESXi combined with PBS - and the teams I support are happy and productive.
Proxmox are based in Austria, not Germany. I know it's the same timezone but my Austrian heart just started bleeding a bit 😅
My bad, I should have checked that detail instead of assuming by Timezone.
Well, Australia is really far away.
@@manofwar556 the one with the cows not the kangaroos
@@thetux0815 lol, some of the best cows in the world are in Australia. We have fantastic wagyu. 😂
@@manofwar556 Austria without the "lia" 😂
I've been using Proxmox in production for four years, subscribed and updated regularly. It has been an absolute rock star.
After an argument with VMware over my renewal this time, I'm switching to a small 3/4-node Proxmox cluster at my company. We only need 8 VMs plus about 20 containers, so anything that effectively gives me fast redundant servers, with a backup solution that I can put on a NAS and back up to the cloud, works. My current plan (subject to change as I work through it) is to run Proxmox with backups to a Synology NAS, which is mirrored to another Synology NAS, which is backed up to Synology C2.
It’s easy to understand why Proxmox is the preferred choice for the homelab: you can use it for free without any limitations, and it is a full-blown enterprise system, not only for homelab. We have several clusters; our primary is a hyperconverged cluster with Ceph under the hood. It's blazing fast and has all the storage intelligence of Ceph - can't beat that. It has all the enterprise stuff, and I see it as more enterprise than XCP-NG. We also used Citrix for several years before changing to Proxmox.
I looked at XCP-NG for work with XOSAN, but their pricing on XOSAN is what turned me off, so I went with Proxmox instead.
The one thing that XCP-NG doesn't have is hyperconverged infrastructure (HCI). Proxmox has built-in Ceph, which is a trusted and proven distributed storage system, and their support offerings also include Ceph support. XOSTOR is still in beta, but Ceph has been around for years and is used by CERN.
Running close to 50 Proxmox Hypervisors with 6 different clusters in production for the last 5 years now.
My background was running vmware for private cloud customers in Asia and I'm VCP Certified.
I started with Proxmox very skeptical, but it has proven itself to be production grade, and I have had no issues performing major upgrades from 6 to 8. There are some quality-of-life things I miss, such as DRS, vDS, and centralized management of multiple clusters.
Proxmox Backup is a great product as well. It just works.
Not quite true; they have HCI in beta at the moment.
We switched from Hyper-V to XCP in Feb '23, and it was one long unmitigated disaster - XCP is buggy as hell, to the point where I was afraid to even patch or reboot it. Backups and migrations are painfully slow (no matter what your network link speed is). I gave up on it in Nov '23 and switched to Proxmox 8 (and now 8.1) and couldn't be happier - wish I had never heard of XCP.
Same with VMware - updates for VCSA brick it almost every time, and you can't create backups via SMB, because... no. I will also go with Proxmox at my work, due to it being much easier to learn and operate, whereas in VMware you can't even see disk state from the GUI, and you can't add email notifications without an O365 account to know what is going on. And there is PBS, with full deduplication, health checks, email notifications for backups, and retention policies - all for $100/year...
Funny because the opposite has been our experience. XCP-ng has been more stable than both VMware and Veeam.
I also found XCP to be a bit fragile, but I just assumed that it was me tinkering with it too much.
@@HendriuGaming I patch VCSA monthly; it has never "bricked" once?
@@brettemurphy I tried to update it 4 times: once it bricked the whole "OS" and I had to revert via snapshot, twice the SMB share was unavailable (even though 10 other machines had access with the same credentials), and it went OK once(!) :)
Good, reasonable, as always. Good stuff.
One thing seems missing: if you need VMs and LXC containers simultaneously, Proxmox is the obvious choice.
Tried XCP-NG for 6 months on a production network - the worst and most stressful months :-( 🙂
Many issues with iSCSI and an HP SAN. As official "documentation" on their site they have videos from this channel 🙂
You leave everything working, and some random time later, random crashes happen, with no way to troubleshoot what is going on :-(
No matter what keywords I use to search for documentation, it almost always points me to this channel 🙂
The official documentation and the number of resources (forums, videos, third-party APIs, tools, ...) around Proxmox is just awesome.
I don't want to be in need of official support; I want to be the support :-) :-)
Switched that XCP-NG test-for-adoption site to Proxmox, and not a single issue since :-)
Not going to try XCP-NG again in the near future :-/ :-)
Couldn't agree more. There are things I love about it, but it can be a nightmare to manage, simply because of the lack of resources and information - I don't think it's anything about the software itself.
Edit: though of course that's probably not an issue if you have a support contract.
Sorry, but iSCSI is exactly the problem. Backups cause all of this on iSCSI. Read the docs: they basically tell you that you need 50% free storage for any snapshots, and then the snapshots on the SAN clog everything up. No need for local snapshots when you're uploading everything 3-2-1.
When it comes to production, I want somebody I can call should the shit hit the fan. That is what I pay for with VMware, Microsoft, Red Hat, etc.
Coming from production on all of these products mentioned... Vates' XCP support was responsive and well worth the price of support and XO licensing. Tickets open directly from the Xen Orchestra interface, and someone is in and working on your system within the SLA of the ticket. If they have questions, they ask, respond respectfully, and get you operational. Xen Orchestra sees regular updates as well - new backup methods without new licensing hassles. From an enterprise perspective, these guys are a breath of fresh air to the industry. @@1979benmitchell
@@mattjoo No one likes to hear anything like this, but you are the truest messenger here.
At work, I am using HPE ProLiant DL380 and DL366 Gen 10 servers, which have been running Proxmox perfectly for over 2 years now. We've never needed support, and the reliability has been perfect. All the servers the company runs are on those machines, and I have had zero issues with Proxmox in those 2 years. It's simple to set up, as reliable as you demand for an enterprise environment, and the pricing was excellent.
A few years ago I learned about XCP-ng and XOA from you, and I've now deployed the system to over 20 customers with great success; we also use it in-house for our internal cluster. I love it.
I moved from VMware to Proxmox for work several months ago, for two reasons: first, the VMware 7 fiasco, and second, the upcoming Broadcom purchase of VMware. Since I'd been using Proxmox for my homelab, it was a no-brainer to make the move. It's been rock solid with ZFS. Not using Ceph, for performance reasons.
Nice discussion, I like the practical experience from corporate support perspective. More topics like this please!
Proxmox works great in production. It's really just a KVM management system, and the workload and tooling required are much lower than with plain KVM, so unless you need that last little bit of overhead gone that you'd get with a pure KVM system, Proxmox is a great way to go.
And, I mean, realistically, it's just Debian, and Debian is very friendly.
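As a concrete taste of that KVM-management layer, a VM can be defined entirely from Proxmox's `qm` CLI. The VM ID, storage names, and ISO path below are placeholder examples; adjust them to your node:

```shell
# Create and start a VM on a Proxmox node (IDs and storage names are examples).
qm create 100 --name test-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --boot order='scsi0;ide2'
qm start 100
qm config 100    # the entire VM definition is a small plain-text config
```

Under the hood this just writes a few-line text file under /etc/pve, which is a big part of the "it's just Debian" appeal.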
I was a VMware engineer for 10 years, and Broadcom has destroyed them. There are very few jobs left requiring VMware, as everyone is jumping ship to Azure, AWS, and Google Cloud due to renewal costs being 10x more expensive. What it's done for me is expedite my long-term plans to move to Linux.
From what I’ve seen, infrastructure teams don't care much about the technical differences; they want available support contracts - people to call when things go wrong. For that reason, in the businesses I've worked with, Hyper-V is on the table, but mostly hyperscalers like Azure or GCP are favored despite the costs.
Then you have the wrong technical teams there, with zero competence and no reason to exist - which is a common theme in that field, sadly.
There are many reasons companies will choose a worse product that has a support contract. In my experience as an IT manager, it often comes down to not being the last ones to hold the bag if things hit the fan. If you have a support contract or cloud solution, you can often offload some (or potentially all, depending on the relationship) of the responsibility to someone else when unhappy customers or their lawyers come knocking.
Not saying I like this, but it has been something I had to come to terms with after being forced off the "better" path in my younger professional days
@@nathangarvey797 Exactly - it's not about day-to-day management, it's about what happens when the SHTF. If a customer's server blows up and I can't get the right CPU in 24 hours, I'm suddenly responsible for loss of production and loss of income.
F$%^ that. Dell/HP/MS/AWS have way deeper pockets than I do, and their warranty/SLA clauses will take care of that for me (depending on the scenario).
Obviously, I have clients where the infra I have set up is less critical, and for those I set up much cheaper (and still very reliable) options - e.g. I have a few bakery chains with eCommerce sites; they run well, but if they crash for a couple of hours it's an inconvenience. By comparison, many years ago I worked for a company that did website hosting for an online retailer that processed at LEAST $50k in sales per hour, 24 hours a day, every day.
If they crashed at 2 AM for 2 hours, that's $100k gone.
Needless to say, you absolutely want a support contract and a team ready to pick up the phone pronto when you're dealing at that scale.
In the context of this video, I've got to be honest: Proxmox is pretty hard to break unless you're running upgrades, so I'm not so sure how crucial access to support would be - but the reality is, businesses at an enterprise scale will want the option of 24/7 support.
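The back-of-the-envelope downtime math above generalizes to a one-liner; the revenue figure is just the commenter's example number:

```shell
# Downtime cost = revenue per hour x hours of outage.
revenue_per_hour=50000   # $/hour, from the example above
hours_down=2
loss=$((revenue_per_hour * hours_down))
echo "Estimated loss: \$$loss"    # Estimated loss: $100000
```

Weigh that number against the yearly cost of a support contract and the decision makes itself at that scale.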
@@nathangarvey797 Local virtualization instances (private cloud, etc.) can also be outsourced.
@@woswasdenni1914 Nope. Like with VMware, stuff just happens that only they can fix and advise on.
Proxmox is the master.
But just saying the word Citrix still feels dirty in my mouth.
Citrix is awful
I tried XCP-NG and found that some of the more advanced hardware pass through tasks required you to be extremely familiar with Linux command line. Not just basic command line, but deep dark corners of the command line.
Which translates to losing half a day finding the right commands if a problem arises. Kind of like a host whose management interface goofed up: an emergency reset did not help, so I ended up having to reload the host. This followed a surprise reboot of the host - apparently it really hates surprise reboots - whereas I've never had to reload a Hyper-V or ESXi host after a surprise reboot. Have not tried yet with Proxmox.
I have managed around 12 Proxmox hypervisors across different clusters; it was really stable, and Puppet made it easier to manage. I once upgraded Proxmox VE 5 to 7 and it just worked.
I have also worked with VMware, and still do; the only nice thing is the tooling around the API, with Ansible modules and Terraform/OpenTofu.
XCP-ng and Xen Orchestra are really cool - I have tested them; they need some adjustment regarding the API and tooling, but it seems to be a good solution for migrating off VMware. In my homelab I use Proxmox, since I can get more VMs per hypervisor.
In my experience you can get more VMs on KVM than on Xen, but both are much better than ESXi.
"In my experience you can get more vm on kvm than xen , but both are much better than esxi ."
Interesting, I didn't know this.
Good video on talking about the support side of things. Definitely an important aspect to look at when business make decisions on top of evaluating how one is going to utilize the hypervisor, what the needs are, and how the product design decisions impact the use.
For example, XCP-ng is more scalable due to its design, but those design decisions also limit and complicate certain tasks, due to the abstractions required for that scale to work (i.e. a fully decoupled control plane, resource pools, etc.). Proxmox is a more Linux-native experience, allowing skill sets to transfer, but it has its own downsides as well, such as a more coupled control plane.
Hyper-V has worked very well for me from 2014 to this day. Thanks for making the video.
I don't think removing the free stand-alone Hyper-V option is that big a deal, as it is still available with Win10 Pro/Win11 Pro, and I doubt many businesses are going to NOT use Hyper-V simply because they must also purchase an MS Server 2022 Standard license...
I think both of you missed the point.
@@GameON-Playstation I, too, think your comment is irrelevant but I'm wishing you a Happy New Year.
I can't find any tutorials on Hyper-V Server 2016/2019 Core setup and running VMs. Any resources you know of?
I run a production Domain Controller as a VM on a server running Hyper-V. Never toyed with HA across multiple server nodes but on a single box Hyper-V has been rock solid. I don't know why it gets so much hate other than general M$ animus. It's a perfectly viable enterprise product at least at small scale.
I love all of the different types of videos that Lawrence Systems publishes to YouTube, but these shorter, "here is what you need to know" videos are my absolute favourite, right up there with the live videos. These videos really allow me to make technical decisions even faster than if I were only researching my options on my own. Not only do I enjoy them because they are helpful to me, I also forward them to our manufacturing and software development partners to help support what we are asking them to support - I use these videos to back up my case or my request with our other partners. Thanks again for doing these videos.
I like Hyper-V.
At least a year and a half ago, Proxmox had way more hardware support, which influenced my choices a lot.
2TiB virtual drive limitations are going to screw a lot of people who want to move over to XCP-NG. There are tons of massive SQL and file servers out there on VMware. Chances are we'll just pass the cost along to our customers, since most are just using Essentials Plus anyway. In the end, you have to consider whether moving to a different hypervisor is worth the capex savings vs. the opex increase.
You can use RAW-format disks, but then you lose a lot of essential features like snapshots and live migration.
Or just pass iSCSI straight into the OS, not via the hypervisor.
Use extents.
From a corporate point of view, the question is whether to go the outsourcing route and decide between cloud or "outsourced" in-house, with the primary concerns being security and data integrity, not pricing or deployment speed.
I moved my home HPE DL360 Gen9 off ESXi a few weeks ago to Proxmox. I exported 8 VMs to external storage, installed Proxmox, and reimported them within the same day. The uncomplicated VMs were up and running that day; PCI passthrough took some extra reading and research. I was struck by Proxmox's ease of setup and configuration, and I'm blown away at how much faster all of the VMs run. Wish I had recorded some statistics before I migrated. I'm guessing the increased speed is due to ZFS on the storage pools vs. the old controller-RAID-based storage on VMware.
I’m on XCP-ng. Xen has been my go to for about 20 years.
Hi, just wanted to stop by and let you know your viewers are really recommending your content on my channel. I'll be checking out the XCP vids specifically - thanks for doing what you do!
Feel free to reach out to me if you have any questions. I always enjoy engaging with other tech creators.
@@LAWRENCESYSTEMS Will do, for sure. I'm still evaluating all our options, but apparently I snubbed XCP-ng way too early for some folks' liking.
Moving from VMware. Currently testing several hypervisors. First one eliminated? Hyper-V. Tomorrow we test Proxmox. Can't wait for XCP-NG! The support goes in the plus category.
After much frustration with VMware I moved to proxmox and had it set up easily. Forum community support is great. We're pulling the trigger on five nodes plus a PBS in production January 2nd and getting the licenses. I think for small businesses with limited budgets and no full time IT person it makes a lot of sense. Far easier to configure and use than VMware and less hardware restrictions.
Because I tinker around more as a hobbyist, with a Mac and Windows background, I have used VirtualBox on Mac (for Home Assistant) and Hyper-V on Windows (for various stuff). I intend to have a play with Proxmox and XCP-NG. My suspicion, based on my initial research on both, is that Proxmox is going to be easier for me to get up and running.
You didn't address storage. One of the reasons for Proxmox's popularity is that it includes ZFS out of the box, and you can easily do software RAID for VM storage.
Re: backups, Proxmox has a separate, also-FOSS backup solution, Proxmox Backup Server, and from what I've read it's quite good. Feels weird that you mentioned XCP's backup but not PBS.
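The out-of-the-box ZFS setup mentioned above can also be done after install, straight from the node's shell. The pool name and disk paths below are placeholders, and note that `zpool create` wipes the disks it is given:

```shell
# Build a 2-disk ZFS software mirror and register it as Proxmox VM storage.
zpool create tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
pvesm add zfspool tank-vm --pool tank --content images,rootdir
zpool status tank    # verify the mirror is ONLINE
```

No hardware RAID controller is required, which is exactly the software-RAID point the comment is making.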
Thumbs up for XCP-ng. We’re a small VMware customer, our renewals for this year have come in around 5x more expensive, so I’m currently persuading management to consider the Vates support model, exactly because of their responsiveness and engagement (first hand experience of that in my home lab).
Good, informative video Tom! 👍🏻
One big selling point for XCP-NG is the migration of VMs from ESX hosts. All you have to do is create the network and you are good to go.
The biggest problem that I ran into with Hyper-V Server is when people started to think it was a brilliant idea to install all kinds of software on it (e.g. Veeam B&R or an AV server), or to turn it into a file server or even a DC, because "it's Windows Server". Otherwise, a valid hypervisor for me, if you already have Windows Server licensed in your environment.
Btw: XenServer 8 is no longer part of Citrix and now has a trial version for download that doesn't need registration and isn't limited in terms of features - but it's not for production use.
Hyper-V must be dedicated to that function. It may be feasible to make a DC out of some Hyper-V servers, but only for Hyper-V itself - no connection to the normal domain, just for Hyper-V to form clusters and manage itself.
I hate it when Hyper-V servers are members of the production domain, which itself is hosted only on those same Hyper-V servers. Lots of problems when something isn't booting correctly.
For bigger installations I would install the hypervisors as Server Core and have dedicated DCs and management infrastructure alongside them. This also prevents others from messing with the software installed on the hypervisors.
Same. I have customers ranging from 200 VMs on Hyper-V to small shops with 1 or 2 VMs; no problems for years.
Any tutorials and resources on how to set up Windows Hyper-V Server 2019 Core and run VMs on it? Looking to host 4 VMs on it, all Windows Server 2019: one as a DC, one as a backup DC, one file server, and one DHCP server.
That's pretty dumb, and why I went with Hyper-V Core - not to mention I don't let anyone mess with my server. You can make it a "DC"... by creating a VM that's a DC! Not rocket science. Need Veeam B&R? Create another VM and run it there. Sounds like your IT dept (does it even have an IT dept?) is run by a bunch of lightweights. I have 15 years of enterprise experience, and I'm humble enough to admit there are plenty of people out there who still know more than me, but I DO know you don't run other roles on your hypervisor. Architected properly, Hyper-V is perfectly good.
@@hifiandrew Actually not my IT department, but what I have seen out there in the wild. As for the backup server and DC: if the resources and money are available, I would always put the backup server on a dedicated bare-metal server, not domain-join it, and put one DC on bare metal as well.
I don't want Hyper-V even for free; I killed it in our schools in favor of Proxmox.
at Catholic Schools (which God created), it pays to be a Virtual Virgin.
I started my "homelab" with VMware many years ago on an HPE ProLiant MicroServer Gen8 and switched to Proxmox two years ago - to be honest, because the GUI looked nice and a bit like VMware. Then I saw some of your XCP-NG videos and tried it out. Since then I have stayed with XCP-NG because it just runs without any problems. I really like the solid updates and the backup features. Thanks a lot for all your great recommendations.
Based on my experience thus far with both products..... Proxmox is better for homelab. XCP-NG is better for business use.
Here's the way I look at it.
Running a Homelab or have an SMB with limited resources, i.e. single-server? Proxmox.
My firewall has been virtualized on Proxmox for a long time now, being backed up to my NAS via a Proxmox Backup Server VM on said NAS with no issues. A Home Assistant VM and a Linux container with 21 Docker workloads also live on the same device, no issues.
Looking to replace your multi-server VMware ESXi cluster? XCP-NG all day long and twice on Sunday.
I use mostly VMware in business and KVM in my lab. I think I'm going to change over to XCP-ng in the near future. I'm not happy with VMware's business moves; I never had an issue with the product, just their business BS. Another reason I am switching to XCP-NG is that it loads onto SD cards, which is how I set up my ESXi hosts to run. KVM and Proxmox are full OSes (Proxmox is based on KVM), similar to Hyper-V in that respect.
Not true. A Xen node basically runs all the things Proxmox runs, minus the web interface. It's all really just system stuff that needs to be on a node.
You can't compare that to Hyper-V or VMware. Proxmox's overhead is almost nil; it's just a nice set of management tools, and all it does is rewrite configs. There's no bloated management layer, just plain Linux daemons for the various services: networking, KVM, etc.
Proxmox doesn't use databases or similar. Most configs are written directly into daemon configs or into a couple of plain text files that get synced across the cluster, just a few KB in total.
If anything, it's more of a Xen node on each host plus a tiny web interface.
Anything else is also the same on VMware or Xen; each node needs more or less similar services that do the same thing a Proxmox node does.
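To illustrate the point above, here's roughly what that plain-text cluster state looks like on a Proxmox node (a sketch; the node name `pve1` and the VMIDs are made up, and exact contents vary by setup). Everything under /etc/pve is kept in sync across nodes by the pmxcfs daemon:

```text
/etc/pve/
├── corosync.conf          # cluster membership
├── datacenter.cfg         # cluster-wide options
├── storage.cfg            # storage definitions
├── user.cfg               # users and permissions
└── nodes/
    └── pve1/
        ├── qemu-server/
        │   └── 100.conf   # plain-text VM config
        └── lxc/
            └── 101.conf   # plain-text container config
```

Editing one of these files on any node propagates to the rest of the cluster, which is why there's no separate management database to administer.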
Adding insult to injury, VMware no longer supports SD cards for boot. That's what prompted me to move our production servers from VMware to Proxmox: when version 7 was installed, users started having problems with failing SD cards due to excessive log writes. I was forced to either rebuild the VMware servers or move to another solution, so I went with Proxmox.
RIP. VMware I guess?
Finalizing my homelab migration from XCP-NG to Proxmox. I still like XCP, but I was never able to get backups functioning properly on XO from source, and I've had a few scares where I thought I'd lost VM data. The biggest issue I have with XCP-NG is the lack of thin provisioning on iSCSI LUNs (Proxmox has the same issue, and I know both XCP and PVE could implement thin provisioning since VMware does it). I tried using NFS and there was a noticeable performance drop even with the recommended tweaks applied to the configuration. Note I was and still am using 10Gb networking in the lab, so the slowdowns have no excuse. If they ever implement thin provisioning on iSCSI I will consider migrating back.
I'm also downsizing from rack mount servers to SFF desktops to try and save on electricity costs and Proxmox made more sense this go around as I have successfully performed backups and restores in Proxmox in the past. The performance I'm experiencing with Ceph is also impressive.
Using Proxmox since 2011!
Can it run on an HP ProLiant ML310e Gen8 v2 server?
@@joejoe2452 Yes, it runs on everything I can get my hands on: servers, client PCs, laptops, NUCs, ...
0:09 My employer is doing exactly that ... migrating from VMware vSphere to Microsoft Hyper-V...
We have the opportunity to do a new start, and they go straight into proprietary city without learning anything...
You do know that Hyper-V is going EOL in 5 years, right?
Source? @@Leopold3131
I used XCP for several years, for some reason I still can't stand the XO interface, ended up using Xen Center most of the time. Gave Proxmox another chance around a year ago and I doubt I'll ever go back.
For me it's not only about the servers; it's how to manage and distribute VDIs. Horizon works very well, with a fast protocol.
The problem is not only with the infrastructure; it's with other software.
If you don't have VDIs, only servers or standalone machines, the change is far easier.
I thought Proxmox uses approved 3rd party companies for support, due to their small support window.
AFAIK, Proxmox does offer an SLA: Response time 2 hours within a business day on critical support requests @ $200 less than XCP-ng.
Not according to their site, as I showed in my video.
@@LAWRENCESYSTEMS If it weren't right on their very site, I wouldn't have written my comment. It is literally right on there. At least it was until about 5 mins ago. Wish I could paste links or screenshots on here.
@@Noodles.FreeUkraine I misread your statement, yes, it's "within a business day" based on their time zone and working days and IS NOT 24x7 as offered by XCP-NG.
@@LAWRENCESYSTEMS It just sounded to me as if they didn't offer any SLA at all. Tbh, I can't see any 24/7 offerings from XCP-ng, either.
From Vates, you get it at a minimum of $5,400/yr. All the offers below, while starting at $2,000/yr, only offer "working day support" with a 24h response time. From that point of view, even the €325/yr Proxmox Basic sub offer sounds more attractive.
In general, I like how many different support levels there are with Proxmox, the range goes from €105 up to $980/yr, covering all needs and purposes.
Mind you, you're probably talking from an enterprise perspective, where a few thousand dollars more or less don't matter. For smaller businesses, XCP-ng's pricing gets pretty expensive. But if you're comfortable with XCP-ng and your clients don't mind the hefty upcharge (heck, most will be used to VMware pricing anyway), I don't see any harm in using it. I just think Proxmox is very stable and reliable with a great pricing model on top.
@@LAWRENCESYSTEMS Yep, it's 2 hours on business days vs. 1 hour 24/7 (you should test that on Christmas Eve 😉) for the Premium/Enterprise tier, and 4 hours on business days vs. 1 business day for the Standard tier. So standard support is even better (and cheaper) with Proxmox, while the top-tier package with XCP-NG is slightly better (at least on paper), but also more expensive.
Overall, they're pretty similar, but sure, if you're in the US, XCP-NG is probably better, but mainly because of the time differences, and because Austrians apparently don't like to work on weekends. 😉
Would be good to see more people get behind Proxmox; its biggest advantage is that it's free and open source.
Moving from VMWare to XCP-NG has been a pleasure. When we had issues, we reached out to Vates and even their CEO helped us with direction and support. For our infrastructure, 1:1 parity of features. Clustering: amazing and straightforward. Have a cluster with different CPUs? Warm migrate. Continuous replication to a stack thousands of miles away? You have it in two clicks. When I was labbing and researching, it's hard to get out of the Reddit static that is Proxmox. Xen is still kicking......
Regardless of the two highly recommended options, it's an opportunity to change, since VMware is doing things that are not business friendly.
There seems to be a lot of wrong information out there with regard to the Hyper-V Server product.
As much as Hyper-V Server was free to use, running Windows VMs on top required licensing: you had to license Windows Server Standard or Datacenter based on the number of physical CPU cores in the host, so there really wasn't any cost savings unless you only ran Linux VMs.
We are a VMware shop and have been for 20+ years; unfortunately, moving to alternatives isn't easy, as a lot of our infrastructure workflows use VMware-specific technologies that don't really exist in other solutions.
I’d be interested to know why you don’t think Hyper-V is a good product. In previous roles I have supported Hyper-V environments from small to large scale with 2000+ VMs and it worked well without any issues. Like anything else, when configured properly it performs as well as any other hypervisor platform.
This could be because Microsoft wants to move everything to Azure. I don't know the current pace of updates (features) for the Hyper-V role. Yes, I use Hyper-V as a role at small orgs.
Any tutorials or resources on how to set up Windows Hyper-V Server 2019 Core and how to run VMs? Can't find anything.
@@kristeinsalmath1959 Even so, Hyper-V is still under active development and if you're starting from scratch, you can actually run the on-prem azure-ified hypervisor so you can migrate to the cloud quite readily if you want to go that route.
@@SkyOctopus1 Actually I will care more about on-prem Hyper-V than Azure.
You forgot about Red Hat Virtualization (RHV) and its equivalent, Oracle Virtualization (OLV). OLV can be used for free without support, and both have good support options.
Nutanix AHV is a good alternative to ESXi for businesses. They also have a free version
Free for home use; unless you're ready to blow $150k+, it's unobtainable otherwise. Best solution out there.
Nutanix does not support SAN storages - it's not usable for our scope.
I've used ProxMox in a home lab for three years and I've only had two or three major breakages with network access or bootloader bugs. In fairness, the bootloader breakage and network device enumeration issues were inherited from Debian.
Agree with you on Hyper-V. I used to use it and it sucks. I'm on Proxmox.
Why do you say Hyper-V sucks?
What do you think about Harvester HCI from SUSE / Rancher? Especially if you consider using Proxmox with CEPH or XCP-NG with XO-SAN for hyperconverged.
Valuable video 👍❤
Nice, succinct comparison of XCP-ng vs Proxmox for businesses.
Awful about VMware situation.
Kindest regards, friends and neighbours.
Hyper-V becoming paid . . .
Yikes
Harvester is super promising. I hope it gains more traction.
I love XCP-ng and was using it for about 2 years, but I recently switched to VMware in my lab with VMUG. One of the major reasons for that switch was that ESXi is better able to do automated orchestration and be automated with Terraform. XCP-ng does allow for this, but it doesn’t expose as many options and is overall less polished, and their cloud-init support is pretty bad. Along with this, the ability to do vGPU was the point where I switched over. I am pretty happy with the switch for now, but my wallet not so much.
VMWare is great ...when someone else is footing the tab! :)
Hyper-V is actually a great platform with System Center Virtual Machine Manager. Neither of them matches vSphere imo. But if you don't need what VMware offers, the other platform could get the job done just fine.
Any tutorials on setting up a Windows Hyper-V Server 2019 Core install and running VMs on it?
@@joejoe2452 If the Hyper-V Core host gets domain joined, and a client or GUI server is joined to the same domain, it's straightforward. Otherwise it will require some tinkering to establish trust.
For business it is irrelevant what they use, especially really large ones like telcos, banks, and similar, as they usually buy servers from big vendors like HP and Dell (in most of Europe), and vendors like them officially support only VMware, Red Hat KVM, or Windows/Hyper-V, since they can guarantee support, upgrades, fixes, and new drivers for NICs, FC cards, controllers, and similar that will work with newer versions of VMware, for example. So they will go only with the vendors' officially supported platforms.
So I guess this migration from VMware will depend on the type of business and the certificates required for compliance.
But as for Proxmox vs XCP-ng, it's still irrelevant, as both are good; in the end, all business setups will be in HA mode with extra nodes sitting there waiting to take over if something dies, so a 1h response time isn't really needed 99.9% of the time.
A 1h response time is more crucial on the NAS/SAN side, as that is needed by all servers/nodes/clusters.
I found Hyper-V to be the way to go. Live migrations, replications and if you have the datacentre edition of the host OS, unlimited windows guests.
Proxmox is the best choice of opensource hypervisor
I love Proxmox, mainly for Linux purposes; however, I don't feel the need to rely on support from them because I don't do any out-of-the-ordinary Proxmox customizations.
I am happy with Proxmox due to the networking abilities it gives me. I currently run 3 Dell PowerEdge servers, each with a Proxmox installation, and they all have quad-Ethernet NICs. LACP is by far a breeze to configure on Proxmox; you can also LACP 10G SFP connections, so long as the OS recognizes the cards in the networking settings.
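For anyone curious what that setup looks like, here is a minimal sketch of an LACP bond plus bridge in /etc/network/interfaces on a Proxmox host (the interface names, addresses, and hash policy are assumptions; yours will differ):

```text
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad              # LACP
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The matching switch ports need to be configured as an LACP link aggregation group for the bond to come up properly.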
Sticking with VMware. Too early since the merger/takeover to consider doing anything right now.
Same, for now. We were just refreshing a couple of hosts and considered the move to Proxmox. But with the time available, the refresh (storage) is getting done and then we're reinstalling ESXi.
All our support infrastructure, like backups, is aimed at ESXi. Moving to Proxmox is just not feasible right now.
I have a homelab Proxmox to learn it better. But I’m not yet fluent in it.
I use Hyper-V and I do not have any problems. I have a workstation with an i5 and 32GB of RAM; it has 5 VMs running with no issues whatsoever. The system runs for more than 90 days without any problems. The only problem I've had is the power getting shut off, because CA does not know how to build a power plant.
So everyone, stop crapping on Microsoft. I have used their products professionally and I have never been let down. From a crap AMD A8 to a server with 48 cores, no issues here.
Can't find any tutorials on how to set up Windows Hyper-V Server Core 2019 and run VMs on it.
I use good old KVM at home because... I've used it for 10+ years after switching from my old free VMWare install. At work I've always used VMWare. Thankfully, at my current job, I am not involved as much in virtualization anymore - just Linux. I setup an XCP-NG server at the last job, but there was no sense of urgency at the time because the Broadcom sale was years away. I just setup XCP-NG at home over the weekend but am not ready to migrate my entire home server to it. Maybe when it's actually time for a new server I will consider it. My only complaints so far is that it's laid out so differently from other virtualization environments that it's hard for me to find what I want. Fortunately, XCP-NG Center helps a bit with that. Must be another UI team on that vs. XOA.
Virtualbox, right or wrong, my hypervisor since 2009.
My oldest VM: Windows XP Home, installed and activated March 2010. It has survived 2 VBox owners, 3 desktops, and 4 CPUs :) :) I still use it a few times per week to play the WMA copies of my CDs and LPs!
This was an ad for XCP-NG not a debate.
I am going to do something for my home lab. At this point I am just doing the planning and checking for the necessary equipment to start with. I am thinking of an old Dell R720 with 8 3.5in HD bays, and I want to decide between Proxmox and XCP-NG. I am also thinking I would like to rebuild my NAS with something like FreeNAS or TrueNAS; a server with 8 HD bays is enough to put HDs in and create the NAS. But having a machine with dual Xeon processors just for a NAS is a waste of power and resources, so I am planning to install the virtualization engine on it as well. I just want to know your opinion on the best way to integrate these two in the same box, virtualization (maybe containers as well) and NAS:
1. Install the virtualization layer, then on top of that install the NAS in a VM with the physical HDs attached to it.
2. Install the NAS on the same OS, like another service on the physical machine.
3. Install the NAS, then install the virtualization on top of the NAS software.
I had experience playing with Proxmox and XCP-NG in the past, but it was years ago, on different hardware, and just creating test environments. I also think a server like this would be good for playing with containers. It could be cool to have all this in only one powerful box with 8 HDs, 256 GB of RAM, and dual Xeons. What can you tell me about this idea, and what route would you take to get it done? Thanks in advance for the advice.
Haven’t even watched this, but I know Tom loves XCP-NG. Am I wrong?
No.
To the point of fanboyism. That was one thing I disliked about VMware snobs: spreading misinformation and smothering anything that detracts from it.
So what about Nutanix, OpenStack, byhive? We are migrating from Nutanix to VMware at work. I find VMware to be more geared at the boomer generation of Windows users, with its GUI-first setup and linear wizard methodology forcing the user to click through multiple menus to achieve what would otherwise be a simple task.
OpenStack is a different kind of sport, if you need that, I'm certain it's great.
copy and paste from my other reply: Nutanix Absolutely YES. I get the feeling this video is geared towards home lab people or small businesses. I'm sys admin at a university department and I moved us from traditional 3 tier VMware 6.5 over to Nutanix AHV HCI a few years ago and zero regrets. Admin is a piece of cake. Updating is a dream, set it and forget it. Support is excellent, even my lowest priority tickets get response within a few hours. Integrates well with Veeam for our backups. These other hypervisors are cool for tinkering I'm all for that, I love to tinker in home labs as much as the next nerd. But for enterprise, you need a serious system
I'd like to interject for a moment.
I recommend Hyper-V to small organizations with Windows Server licensing, in this case Standard. Hyper-V is not bad at all, just abandoned by Microsoft for Azure.
Probably Proxmox for small and XCP-NG for mid organizations.
Can someone tell the Proxmox devs to make an agreement with another company to support their product? If they don't want to support it 24/7, just have another company do it for them.
Hyper-V is not abandoned. One of the new things coming in the next Windows Server iteration, which can be tested with vNext: GPU partitioning. Until recently, this was only available in client Hyper-V.
What are alternatives for Horizon on Proxmox / xcp-ng?
ProxMox is amazing, but I am up for learning something new for sure. Thanks so much for the video!
Hyper-V is commonly used in education as Microsoft gives the licensing for virtually free to educational institutions.
Proxmox is easy; that's its main selling point. XCP-NG has the issue that if you want easy updates to XO you have to pay for it. For a larger business XCP is probably a no-brainer.
We use Hyper-V for our internal kit, as we're an MS Partner, so we get Datacenter licences, which means we can activate the VMs using AVMA keys and don't burn through licences. How would I licence, say, 100 Windows VMs on XCP-NG without burning through 100 licences, rather than a couple of host Datacenter licences with AVMA keys?
Can you do a new series on XCP-NG, creating VMs using a demo version of Windows Server, a Windows file server, and so on, from a small-business standpoint?
Proxmox *does have support* outside of the Austrian timezone. They have plenty of partners around the world that can accommodate any timezone.
I've used Proxmox in my home lab for over a year now and it's been solid. I had to create a few scripts to manage the memory usage, since running 3 VMs would cause the host memory usage to increase. No issues. I used VMware ESXi ARM on my RPi4 for a few months and found it had some lag running simple VMs. At my previous job we used MS Hyper-V in a production environment for hosting all of our server/workstation based VMs, and it did the job well.
My choice for my home lab is Proxmox, since I can customize it, run scripts to help manage the memory, and learn things about it. I'm considering, in the near future, migrating my Windows Server VM to an AMD-based Proxmox server and creating some extra VMs on it. My Intel-based one will host my RHEL and Linux VMs, since it plays better with the resources.
Remarks: running Windows Server with GUI and headless seems to have the same memory usage. Running RHEL and Ubuntu gives lower memory usage and keeps my Proxmox server stable.
Running Proxmox at home, just because it was the one I read about the most. Other than the upgrade from 7 to 8 borking my networking, it's been fine. I'm about to wipe the system and install a larger SSD mirror, and with PBS it shouldn't be a problem, wink wink.
Biggest question, what to do with vSAN hardware? Any thoughts on having the distributed storage in the way vSAN does.
Also, the nice vMotion, Storage vMotion, DRS, and HA are great. Does either of the alternatives proposed here offer feature parity?
Ceph is really well-developed at this point. You could also use Proxmox as the OS (it's basically Debian) for your Ceph cluster and export the storage to XCP-NG. Or why not turn the vSAN hardware into more potential VM hosts? It may make more sense to run VMs with storage-heavy requirements directly on those hosts. Adding some RAM might be a good move, though.
I think this video is geared toward home lab people or very small businesses. Anyone running serious enterprise-grade stuff is going to go with a commercial vendor. We switched from VMware to Nutanix 3 years ago and have been very happy. It has all those enterprise features you mention, and their support is excellent. It's not cheap, but you get what you pay for.
I'm wondering if Harvester gains traction in the world of managing VMs or if Tom is going to try it. It looked very promising to me, especially as I'm using rancher and kubernetes anyway but I'm a bit hesitant due to limited community info and experience out there. At least for now.
I just hate that you're limited to Longhorn and their CNI. Piraeus, and Cilium or OVN-Kubernetes, would be much preferred... (not to mention faster)
@@LampJustin That's interesting to hear. I know it ships with Longhorn, but I would have assumed you could just bring your own storage provider as well. I haven't tried Linstor yet, and Longhorn seems to be slowly maturing, but I'd also prefer to stay independent.
After running a VMware cluster with vSAN in my home lab for a few years (thanks to VMUG Advantage, it'd have been way too expensive otherwise), and then using XCP-ng for a few months (it was ... fine, but the networking setup was a disaster after getting used to vDS), I recently set up Harvester after I grew tired of waiting for XOSTOR to materialize. I'm still on the steep part of the learning curve, and it does still seem rough around the edges, and it's a bit too resource-hungry for home labbers with small/low power setups, but I hope SUSE sticks with it because it does seem like a promising solution.
@@JoshLiechty I think so, too. I've been using Rancher since version 1.x, and they had a real history of being rough around the edges, at least before being acquired by SUSE. Sometimes it felt like "just claim it's enterprise production ready and ship it". But now with RKE2 it really feels like everything is done right. I'm definitely excited to follow the Harvester project in the future.
Slag Hyper-V all you want, but for home use it's great. Really easy to use, built into Windows, and still a Tier 1 hypervisor.
Use what makes you happy
What about Nutanix / AHV?
Absolutely YES. I get the feeling this video is geared towards home lab people or small businesses. I'm sys admin at a university department and I moved us from traditional 3 tier VMware 6.5 over to Nutanix AHV HCI a few years ago and zero regrets. Admin is a piece of cake. Updating is a dream, set it and forget it. Support is excellent, even my lowest priority tickets get response within a few hours. Integrates well with Veeam for our backups. These other hypervisors are cool for tinkering I'm all for that, I love to tinker in home labs as much as the next nerd. But for enterprise, you need a serious system with full support. That costs bucks. That's the difference between enterprise and hobby systems. I was able to even get Nutanix certified for free, I'm NCP-MCI 6.
I have been a Linux KVM user for somewhere around 25 years. I run it on Debian and Ubuntu. Currently looking for an alternative, leaning towards XCP-ng. Probably won't do Harvester; still irritated with SUSE over their MS affinity.
I guess no one wants to talk about the white elephant in the room. Hyper-V? :)
It was mentioned that XCP-NG & Proxmox have non-paid versions, but I only see the subscription versions.
Proxmox is bigger in SME in Europe than it probably is in North-America.
So, I've been looking around. With the Broadcom/VMware new pricing plan, for us, for 3 servers with dual processors that have 12 cores each, it's $3,800 for a year. Support for XCP-NG at an equivalent level is $4,000 per year, and support for Proxmox (from them) was about the same. Given the current support prices, I cannot see a good reason to dump VMware, with all the headaches that would come along with the move. For a home lab, self-supported, it makes perfect sense to migrate.
They have a 2K per year option for 3 hosts vates.tech/blog/introducing-vates-virtualization-management-stack/
I've tried XCP, but the issue I had about two years ago was that the app I needed to run used Docker, and I was running it via Docker for Windows. Docker never initialized because it kept erroring that it needed nested virtualization enabled, which it was... So I gave up and moved to VMware.... (now I might have to move from that too at some point)
That is because Docker on Windows runs a Linux distro using Hyper-V; the containers are actually running inside a Linux distro. You were basically trying to run Windows virtualized (Xen) in XCP, and then Linux virtualized (Hyper-V) inside your Windows VM. Even if you can enable this, it's something nobody would suggest: running virtualization inside virtualization is like running 2 antivirus products at the same time, possible, but the overhead is entirely overkill.
You could just launch a Linux distro directly on XCP running Docker and avoid Windows entirely.
Were you still doing a video on that data center tour Tom?
Yes, I have started the editing, got sidetracked and need to finish it.
We have a lot of Proxmox VE clusters at scale, with a lot of nodes and Ceph, and no issues with a lot of VMs; we do the support ourselves. What is the workload for the clusters that have issues, then?
How large are the clusters?
As someone that has run both Proxmox and XCP-ng in production, I personally FAR prefer XCP-ng.
One of the problems I had with Proxmox is getting the installer to boot. I thought I’d try Proxmox about a year ago on a spare HP ProLiant. I created the USB, and it booted up to a GRUB error. I fiddled with boot settings for a while, tried Rufus, tried Etcher, etc. Nothing worked.
So I put it away for a while and looked at it again recently. I created a boot USB. Tried booting. Nothing. It just won’t boot a Proxmox installer.
Tried Ventoy finally. And that worked! But not for 8.1. I had to go with version 7.4.
So the difficulty I had loading the Proxmox installer is concerning. I don’t have that issue with ESXI. And that makes it difficult to commit to the product for a production environment.
Still, I’ll play with it. We’ll see what develops.
This is why Proxmox is not very popular on server-grade hardware. It is based on Debian, and Debian refuses to boot on almost anything because it can't find the drives; unless it's ancient hardware, it's very picky. This is also why Debian is not very popular as a Linux distro on laptops or workstations; it's either Fedora (RHEL-based) or Ubuntu, because of hardware compatibility.
Picking Debian as the main OS is very nice in terms of open source, but horrible for the corporate environment, or if you expect people to run it on as much hardware as possible. I guess many people could not even get past the boot screen on most servers.
Nutanix could be an option too, would love to see your thoughts in form of a dedicated video on it.
I don't know anyone using it and I try to base my reviews on real world testing whenever possible. We have lots of real world production use with XCP-NG.
Agree. We used Nutanix AOS and hardware at my last job. It was rock solid, and that was 5 years ago. ❤ My current role has me back on VMware. As other people mentioned, if the hardware vendor doesn't support the hypervisor in an official capacity, we simply will not use the product. It is no fault of the others.
It is the GOAT. It's wildly expensive, but if you buy it… it may replace your job.
I heard that Oracle has a VM server software. Don't know much about it. Curious to see how it stacks up against the competition.