@@jmcglock I'm thinking of ditching Proxmox as well! I'm in DevOps, so working with a platform that's close to what I do all the time is a better bet compared to traditional hypervisors.
This is absolutely freaking awesome good gawd, the potential is astronomical! Thanks Tim Good job man. I will be spinning this up in my home lab very very soon.😃
As a side note, if you want to get crazy (for real), how about nested virtualization of Harvester lol... That must be fun... Another awesome video Tim. Thank you so much for your work and dedication to the community!

Thanks to your videos, I am running a Proxmox cluster on 3 nodes, with Ceph shared storage for HA of my most important VMs. On top of that I run a K3s cluster of 6 VMs and Rancher with local storage (2 on each Proxmox node). I am in the process of deploying Rook inside the K3s cluster to access the Proxmox Ceph cluster in order to have reliable PVs/PVCs for my Kube workloads.

The reason being that Longhorn is unfortunately a big failure at the moment IMHO. I've tried hard to get it running for several weeks on much simpler architectures, but it is totally unreliable, with most PVCs not rebuilding properly at all after a node reboot, even using fast SSDs and networking. Ceph on the other hand is just amazing in terms of stability and usability. I recommend you give us a tutorial on Ceph; it is such a brilliant project, especially the architecture and the thought that went into the CRUSH algorithm.
I'm old now. My tech stacks tend to be old too. I spun up a KASM machine and came out the other side thinking that all the K3s and Docker stuff might be *great*, but that it, at least to me, lacked the management and handling tools to make it top tier. I came away thinking that however great KASM is, it still fundamentally requires deeper skills to really add anything else to it. So in theory it's very expansive, not so much in practice. I watched this video. Really, really cool, and Tim, you always do a peer-level job showing stuff. This to me is the kind of tooling that is needed. It takes what may be great technology and places the tooling on top that takes it to a new level. I'm still a billion miles off using the new edge stuff, but products like this make me want to spin stuff up, rather than shut stuff down :)
This sounds like a great solution for a high availability installation of Home Assistant. It would be really interesting to see a comparison of Home Assistant deployed in K3S versus Home Assistant OS. What are the disadvantages of each?
I was into the same kind of thoughts for a long time myself, but the truth is that this is not the proper way to look at it... High availability should not be confused with scalability. Home Assistant can be made resilient through existing solutions (running it as a Docker stack on a dedicated VM, for instance), and then you make the VM highly available by deploying it into a cluster with shared and reliable storage (Proxmox, for instance).

Kubernetes mostly brings scalability and parallelism for stateless applications, and unfortunately, the concurrency it introduces in the data storage layer (like MQTT queuing or DB persistence) cannot be handled by Home Assistant, which must have exclusive access to its storage layer in order for topic statefulness to be properly addressed. So you can only properly run 1 instance of Home Assistant at a time; you can make it highly available with clustering and automatic migration of your instance, but you cannot really deploy multiple identical instances of Home Assistant and expect events to be properly handled. The same goes for deploying Home Assistant on Docker Swarm and expecting to scale the instances. I went through this for a long time and ended up understanding these limitations...

Now you still can use Harvester to deploy this dedicated instance of Home Assistant easily, but it won't be easier (considering the complexity of the network layer in Kubernetes) than playing with straight VMs or a bare-metal installation of Home Assistant. Hope this helps...
I can't envision spinning this up on my home hardware just yet, as most of my bare metal is already tied up either in "production" or main lab configurations (like having a big Proxmox hypervisor with tons of RAM and VMs already configured). I think this would be very cool for interconnecting remote sites, forming your own HA "cloud" without having to use a host like AWS, Linode, etc. Of course it'd only be as reliable as whatever MPLS or connections you have, but... if you integrated that with an SD-WAN solution it could happen. I would certainly dig trying to deploy this all up to a Linode cluster or something. See if it is something that could be marketable "as a service" to small businesses. I'll bet there would be a business case for that to be added to the portfolio of smaller firms. -Shane
I had already heard about Harvester and thought it was interesting, but never tried it out. It was great seeing your experience with it, and I'm hoping that it could replace Proxmox as my hypervisor, though that might be far in the future.
If you are into automation, that future might be sooner than you think. Harvester's automation capabilities (thanks to Kubernetes base, iPXE, Harvester's bootstrapping options and the existence of a good official Terraform provider) are way beyond Proxmox's. If you do everything in the UI, then I agree with you, Proxmox is certainly good enough to not try and change it.
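As a rough illustration of that Terraform angle, a VM definition against the official `harvester/harvester` provider looks something like the sketch below. This is hedged: the attribute names are from memory and should be checked against the provider's documentation for the version you install, and all values are placeholders.

```hcl
terraform {
  required_providers {
    harvester = {
      source = "harvester/harvester"
    }
  }
}

# Minimal VM sketch; field names and values are illustrative only.
resource "harvester_virtualmachine" "demo" {
  name   = "demo-vm"
  cpu    = 2
  memory = "4Gi"

  disk {
    name       = "rootdisk"
    size       = "20Gi"
    bus        = "virtio"
    boot_order = 1
  }

  network_interface {
    name = "default"
  }
}
```

Combined with iPXE-booting the nodes themselves, this is what lets a whole Harvester environment be rebuilt from code rather than clicks.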
Ah! I've been trying my hardest to get a Harvester video out the door before someone beat me to it. I bought 3 new servers for the video but got held up because they were just _barely_ underspec'd for the job. Anyway, I'm at 0:01. Time to actually watch the video.
I was excited about Harvester when it was first announced, thinking I would be able to use network policies within my VM environment to create a micro-segmented DC. Usually that would cost a lot of money (*cough* Cisco ACI etc.). Turns out this was not what the developers were after ;) Too bad, but how cool would that be.
Great stuff? No way dude, for me this is by far the coolest video you've posted so far, wonderful, thanks to you!!!! I guess you should have specified "cloud-init-like config" instead of "cloud config" 😉 Lemme know if I'm wrong. Love your work brother! Cheerio!
I am waiting for some hardware to get here before I revamp my home lab. What was originally a 2-node Pi4 cluster will now have a 6c/12t x86_64 system to play with. Now after this... I don't know where to begin... I was gonna do Rancher, then I was gonna do Proxmox... then I saw the TrueNAS Scale video (I need a bit of a NAS)... now this... We need a TechnoTim HomeLab series that takes TT's thoughts from the very beginning and shows how to build out the homelab.
Alright, Techno Tim, having watched this for a 3rd time, I think I am still confused by this kube-ception. If one did not have an existing cluster to add Harvester to, let's say I was starting with one machine, I would install Harvester first, then create a kube cluster on that?
Now this is very cool. I think I'm gonna make some VMs on my Proxmox servers to give this a try. I have 3+ Proxmox servers, so I may make 3 VMs to build a triple cluster on my Proxmox cluster. You're right, VM inception.
Seems like an easier-to-install, but crappier version of what you can do with OKD/OpenShift. OKD has had virtual machine support for a while now. OKD also doesn't have to put Kubernetes in VMs on Kubernetes; it just exists as both from the start. Also, your containers and VMs share the same storage pool with Rook Ceph. Looks like with Harvester you would be setting up Longhorn on top of Longhorn, replicating your storage on top of replicated storage...
Harvester is also a Kubernetes cluster that can run workloads side by side with VMs; it is just not recommended for production environments yet. Also, it offers a CSI interface so that pods inside K8s clusters share the same storage pool as the VMs. So you don't need Longhorn on top of Longhorn.
Harvester had integrated Rancher to run workloads side by side at one point, then they removed it. I'm sure you still can, but it isn't the recommended method.
@@FlexibleToast It is not recommended for paying customers; there are no proven technical reasons to avoid it. Also, it is a shame that the integrated Rancher is not showing anymore, but it is there, you just need to change the URL you are using. You can also always use kubectl.
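To make the CSI point from this thread concrete: in a guest cluster provisioned through Harvester, a claim along the lines of the sketch below is backed by the same Longhorn pool as the VM disks. The storage class is typically named `harvester`, but that name is an assumption here; check what the CSI driver actually installed in your cluster.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: harvester   # class from the Harvester CSI driver; name may differ
  resources:
    requests:
      storage: 10Gi
```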
I like the idea of Rancher, but they seem to have pushed it into being infrastructure. The "proper" way to run it is to build out a 3-node Kubernetes cluster just to run a Kubernetes management plane. Seems like a better approach would be to be able to run Rancher as part of Harvester.
OK, the nesting here seems a bit unwieldy, but maybe fun? Maybe there is a super great use case for it, other than 'we did it because we could'. Not judging. I don't have much experience with containerization, more a background in virtualisation, but I'm curious nonetheless. Rather than nesting, what would be a good way to deploy a working container hosting environment in a 'least nested' way? Could you do something like Fedora CoreOS on bare metal, and run your orchestration in containers on top of that? For poops and giggles nesting might be fun, but wouldn't every level just introduce a new set of maintenance concerns: updates/security/networks? Enjoying the vids!
I like the idea of a single solution for both VMs and containers. You mention that it has low system requirements, though today I see it needs at least 64GB memory, 16 CPU cores (!) and 300GB storage for production, and after installing it, it seems to consume 50% of my 12 CPU cores all the time without even running any VM... It also takes too much time to reach a ready state, which is a no-go for prod. As of today, Harvester doesn't give the impression of being production ready.
Great video again 🎉 Can you clarify this: "Already run a Rancher"? Where? How do I connect to this machine? And how does it handle physical disks and volumes? I'll try to replace my three-node Proxmox + Ceph installation with a Harvester cluster. I've got a lot of physical network interfaces and 3x5 SSDs.
This seems like a sweet deal for bare-metal installation, skipping Proxmox or whatever hypervisor one uses. I would however have liked to see a scenario where you have several hosts with somewhat different configs. E.g. hosts A and B with more cores and memory, and hosts C, D, E with slightly less expensive hardware. Could you in that case create K3s VMs on all hosts and configure it so that only hosts A and B would be allowed to run "non-K3s" VMs? I'd also like to know if there is a common strategy for scenarios like this (and this goes for Proxmox etc. as well, I suppose): is it better to create a single K3s VM per host with all its resources, or should you diversify and have several smaller ones? Thanks for a great video, by the way. I'm running both Hyper-V and Proxmox and have experience with VMware as well.
This sounds incredible. Since I started my Linux sysadmin course, I've put some of the fun stuff on hold... well, Python and Docker mostly. Looking forward to getting back into it; your video is awesome!! SELinux is another layer of security and can add some complexity to what you're doing with permissions etc., although, from my limited time using it, it seems pretty good. It's not recommended for Ubuntu (which uses AppArmor), but CentOS etc. use it. It's an area I need to go back over before exam day using CentOS, that's for sure... When I was testing it, I wiped AppArmor from Ubuntu and used SELinux on Ubuntu, but the training material didn't recommend it, and I agree lol. I use Proxmox as my hypervisor, and I have two nodes, but I generally leave one off as I don't really need the availability. On a side note, I have also written my own QEMU scripts using dmenu (at the moment these are just local to the desktop, but this could evolve), and they have been transferable between Linux distros. The idea is that I've written a master script to deploy the virtual machines, and once deployed, I can run a keyboard shortcut to launch a dmenu script, which lists all my QEMU virtual machine scripts generated by the master script. Might sound weird, but it would be straightforward for someone like you: it's one-touch quick access to a virtual machine :)
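The master-script idea described above could look roughly like this. Everything here (the `VM_DIR` path, the VM names, the qemu flags) is a hypothetical sketch of the pattern, not the commenter's actual script.

```shell
#!/bin/sh
# Hypothetical sketch of a "master script" that generates one launcher
# script per VM; a dmenu keybinding can then list and run them.
VM_DIR="${VM_DIR:-$HOME/.local/share/qemu-vms}"
mkdir -p "$VM_DIR"

# make_vm NAME DISK MEM_MB: write an executable launcher for one VM
make_vm() {
  name=$1; disk=$2; mem=$3
  cat > "$VM_DIR/$name.sh" <<EOF
#!/bin/sh
exec qemu-system-x86_64 -enable-kvm -m $mem \\
  -drive file=$disk,if=virtio -display gtk
EOF
  chmod +x "$VM_DIR/$name.sh"
}

make_vm debian "$HOME/vms/debian.qcow2" 4096
make_vm centos "$HOME/vms/centos.qcow2" 2048

# The keyboard shortcut then just picks a launcher, e.g.:
#   ls "$VM_DIR" | dmenu | xargs -I{} sh "$VM_DIR/{}"
ls "$VM_DIR"
```

The launchers are plain POSIX sh, which is what makes the setup portable between distros.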
CoreDNS is just a super simple DNS server. I have it installed on my Pi-hole, use it as an upstream, and Kubernetes updates it from MetalLB so I can access each container by name. So don't be that surprised to see it outside a Kubernetes environment.
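For anyone curious, a setup like that can be as small as a Corefile along these lines; the zone name and file path are made-up examples, and the zone file would be whatever your tooling writes out.

```
# Corefile (illustrative): serve a local zone, forward everything else
lab.home.arpa {
    file /etc/coredns/db.lab.home.arpa
    log
}
. {
    forward . 1.1.1.1 9.9.9.9
}
```

Pi-hole then just needs this CoreDNS instance listed as one of its upstream resolvers.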
I have just bought a used server with 2 CPUs of 10 cores each that is coming in a couple of days. I am thinking of using ESXi free (which has nearly no limits except a max of 8 vCPUs per VM). I just followed a course on Rancher with SUSE people and discovered Harvester only a few days before your video, and I am seriously thinking of trying it. I will see.
Thanks Tim for the video. I'm mulling over installing Harvester as a Helm chart on top of a K3s cluster. In a homelab, hardware is a precious and limited resource, plus I'm thinking I would benefit from HA. Or should I try a Harvester cluster running a K3s cluster running my workloads? I dare not ask what the best practice is, as we are venturing into uncharted territories.
This is my favorite video by far. So cool. Using Proxmox now, and it's great to get started but requires a lot of work to do anything fancy IMO. I come from a cloud background and am definitely going to give Harvester a try. Tim, could you see yourself using Harvester to run your existing home-production VM workloads?
Thank you! I am running both at the moment. I do have some pretty advanced needs for my VMs and I am hoping to see more automation with Harvester and kubernetes before going all in!
Hello, Techno Tim. Thank you for this video and your channel! Did you try to create an image for Windows Server? Which version? Can you share the URL for downloading this image? I have an issue with creating this image :(
Tim, I loved this video; if you can, please make more content about Harvester for beginners... :-) Another thing: what is your opinion of Harvester, Proxmox, and KVM (Cockpit / Houston Command Center) as replacements for VMware after this, in my opinion, disastrous acquisition? Thank you. You could create content or a series about this controversy, what do you think?
Excellent presentation. For VM management and then adding a K3s cluster, this beats installing oVirt/Proxmox (not to mention the other, expensive options) or Oracle Linux Virtualization. Is it possible to create a Kubernetes cluster with other CNI options, for example Cilium instead of Flannel, replacing Traefik with MetalLB or Istio, or replacing Longhorn with Rook-Ceph? What about VM snapshot creation? And of course, what is the life expectancy of this product?
2 years later and I just saw it. COOL. After 2 years, maybe an update on what you've seen of Harvester in new releases? And hey, have you tried using Kasten for backups? Maybe a comparison of the embedded backup vs Kasten?
Hi Techno Tim, I have been watching your excellent videos and subscribed after the first one... Thank you so much for your valuable work. I've got a question: could you do an overview video on a realistic network, say three layered private networks (onion-peeled), with deeply secured inner nodes through to on-the-edge nodes/Docker instances or VMs? Such a video would bring your other videos into context...
SELinux is Security-Enhanced Linux, a mandatory access control system built into the kernel (not just a set of scripts) that hardens up Linux, and it's built into Android. Ubuntu ships AppArmor instead, but the idea is similar. Good enough for Android, good enough for me.
Super interesting video! I've recently been planning out what software I'd like to run my homelab on, and this proved to be really thought-provoking for how I want to run everything. After watching this, I went and looked at the TrueNAS SCALE roadmap, and saw that they're going to add KubeVirt and K8s clustering in Bluefin later this year or early next year. This makes me think, rather than doing: A. Proxmox with TrueNAS Core virtualized for NAS and a few Kubernetes instances managed by Rancher, or B. a bare-metal Rancher server and a bare-metal Harvester server tied together for Kubernetes goodness, I could have two or three SCALE servers clustered together later this year, and from the one platform have Kubernetes clusters for containerized apps, VMs directly in KVM and VMs in Kubernetes through KubeVirt, and also a NAS. To me, it seems like easy access to HA across the board: HA containers, HA VMs, distributed storage, and HA for said HA machines by clustering two SCALE servers together. What do you think about this idea?
@TechnoTim do you have any follow-ups on this software? I remember having a hard time with things being really unstable back in the early 2000s when first trying out SuSE (items that were recognized by other OS distros and BSD were never recognized out of the box by SuSE). I mention this because that has been my experience trying to run Harvester HCI 1.2.1 on a Xeon E3-12667v3 with 32GB RAM. I keep getting kernel panics, and the bigger test box with a dual-socket E5-2667v3 and 128GB RAM just boots into a kernel panic. Both boxes run production services when not being used for this testing, so nothing on them is broken, to my knowledge. I literally caused a kernel panic by simply attempting to upload my very first ISO image from my desktop to the Xeon E3 node, on a 10GBase-T network.
OK, I've watched this video several times, as Harvester is really interesting. I think I'm going to install it on one of my tiny/mini units. You said it may not be made to replace a hypervisor, but I'm curious how long you've been running it and whether it has been able to replace Proxmox or XCP-ng in a homelab.
It appears that you don't think this is a Proxmox replacement. It does seem to provide most of what I use Proxmox for, plus the ability to automate building machines. Is that ability externally accessible? It would be nice if you could use Vagrant. Another issue I ran into with Proxmox was giving my K3s servers larger disks. I had them running on Ubuntu with LVM, and I could increase the disks, but making that space usable was a lot of work.
Hmm, re-watching this (5th or 6th time now... I keep coming back to it), I think I should scrounge for hardware to make a 3-node Harvester cluster... Then somehow set up ZFS (or the like) to pool my drives from each node (multiple drives on each node), or if it's possible, pool the drives across all nodes (1 pool).
Harvester does look promising, but I unfortunately could not get Grafana/Prometheus working. I enabled them but just got errors that they were not available in the console.
So is this any better than just using Proxmox or Unraid? It seems like the same features, just a different way of doing the same things as Proxmox. I feel like all of this can be done by setting up Docker/Docker Compose and Portainer on Proxmox.
Hi sir, I saw your video about Harvester, but one thing: how can I autoscale nodes when necessary for a Kubernetes cluster? If you could share some info about node autoscaling, that would be great.
@@TechnoTim Of course... It is useless just for me :) Because if I want to create a cluster at home (Docker Swarm or K8s), I just use different PVE hosts with VMs on them, and there is no sense for me in virtualizing once again on top of the cluster.
It's not actually VMs in K8s; it just uses Kubernetes APIs to provision VMs with KubeVirt. Great presentation/video though, I would dare to say it's better than the official ones :) Thanks.
Second question here. I'd also like to know your opinion on running Harvester using K3s VMs vs Proxmox K3s in LXC; as far as I understood, Harvester doesn't support LXC?
@@TechnoTim If applicable, yes, but I was thinking, since K3s runs in VMs on the hosts, whether those VMs would be able to use LXC or if they are bound to a "full" Linux VM.
Excellent video! Harvester is still at an early stage, but I see a lot of potential in it. Could you show how to install RKE2 HA from scratch, since it is at an early stage as well? Similar to what you did for K3s...
Hi Tim, do you use any CI/CD tools for IaC automated deployments? I'm looking at setting up a dev environment locally with Harvester and then being able to run the developments on my AWS EKS, all through my CI/CD pipeline deployments. Any tips on that? Maybe an idea for a video :)
The hypervisor is *below* k8s. Just in terms of layering, bottom up: hardware, host kernel, hypervisor, guest kernel, guest apps. K8s is running at the same level as the guests, orchestrating them. I hate to criticize this otherwise good video, but I’m seeing folks here use the wrong terminology.
Lab is set up, messing with this now. Question: I noticed when you made the cluster you made 3 VMs in 1 pool, and the three VMs ran etcd, control-plane, and worker roles. Is there any reason to break out the pools in the cluster, making pool 1 etcd, pool 2 control-plane, and pool 3 worker nodes?
Hi @Techno Tim, this is a pretty cool and exciting video, thanks for sharing. Can this be set up on a Raspberry Pi? 😅 If yes, can you please make videos in the future on setting it up on a Raspberry Pi 🙏
Which hypervisor are you running and are you happy with it?
Currently running Proxmox as the main hypervisor, with TrueNAS Core as the NAS: a couple of jails and a single VM for redundancy. Thinking of migrating Core to Scale in preparation for one day maybe scaling back the hardware to a single host, once the new Scale's KVM is a little more mature with some more features added.
I have been running Proxmox for several years now, and am quite happy with the stability and featureset. Proxmox is the backbone for my OPNsense/piHole/Traefik/Authelia router box, and the hypervisor under my media server, game servers, and homelab configuration for testing new apps/toys.
I used to work with ESXi at work; recently I've been trying to set up a Proxmox home lab on my RPi4, and today I met Harvester 🙃 I'm going crazy, too much stuff 🤓 but very happy haha
I am using unRAID for Docker and NAS and really like it. Once I get another box I would like to use Proxmox for virtualization using your awesome videos to help with setup :)
I replaced Proxmox with Harvester the day it became GA 1.0 back in December '21, and I'm so happy with the way it works.
What a great time to be tinkering with hypervisors: Proxmox, TrueNAS Scale, Harvester, and many others which I haven't really worked with recently.
Open source is great, isn’t it?
I found proxmox very easy to learn compared to some others I have used. It just works so well.
@@jaygreentree4394 It's good. Not super production-stable IMHO, but if you have support then you're solid.
Nice video! To experiment, I've installed Harvester as a VM in Proxmox. Don't want to think about the layers of virtualization going on there
OMG, this guy is incredible, precise and direct to the point, if you are in the field already, this guy will save you weeks of research in 28 minutes.
maybe do a harvester vs proxmox video?
That would be epic
Epyc ;)
Epyc indeed
PLEASE i have been looking for a concise comparison everywhere
One big difference is LXC: Harvester doesn't support that, but Proxmox does.
I'm using proxmox and truenas core and they have been awesome but after watching this video I decided I needed to learn and understand docker and kubernetes more. So I got two udemy courses and one of my proxmox nodes is going to be turned into my kubernetes/harvester lab. Thank you for the awesome content!
I applaud your work. This video could potentially be one of the best for exploring non-proprietary HCI.
Thank you!
Great video. I've been running Harvester for over a year now in my home-lab - and it's just amazing to see how it evolved over the beta period.
Congratulations to Suse/Rancher for a super open-source product. And congratulations to you for seeing how super exciting it is.
Thank you! Agreed!
I've joined a startup, and using VMs on K8s was odd at first for me as well. Then I realized how useful this can be for OS-dependent software, like larger rendering farms. I work for an animation studio, so this type of quick scaling is needed and a win.
Nice desc of use case, thanks!
Ordered a server today. Will use Harvester HCI, as it is exactly what I always wanted. I want resource pooling like cloud providers do, so every new server will become a Harvester node to extend my pool. Can't wait to play around with it!!! Also will make use of the Rancher integration... this is so awesome... what a time to be alive!!!
Agreed!
I have evaluated Azure Stack HCI, but after watching this, I believe this is more empowering and featureful. Thank you for an awesome 360 view of Harvester. Our corporate application requires Active Directory, and all other microservices are running in Kubernetes. I think bringing this AD VM into Harvester gives almost the same level of trust and reliability as if it were containerized. Harvester can be used in deployments where you need VMs alongside your Kubernetes workloads for applications which can't be containerized.
Just saw your video that I had in my "watch later" list on UA-cam. It's friggin awesome, and it really is a very good product. Really glad that you took the time to do this video. Awesome work!
Harvester is open-source HCI. I'm currently working with Nutanix, which is similar but expensive.
Can you maybe do an episode where you talk about all the tools that abstract deploying and running services and how they tie together? Like e.g. running everything on barebones is bad, but giving everything a proxmox VM might also not be ideal. So we use TrueNAS Scale (on top of proxmox) and supply that to harvester which manages kubernetes which then runs Docker(Compose/Swarm), Portainer in-between of it all, and we tie everything together with Traefik to route stuff where it needs to go, etc, etc...
There's so many individually complex tools and seemingly infinite configurations to all this, having a nice top-down and/or bottom-up view of all this would be awesome. Maybe something that organizes these technologies into layers or tiers of abstraction or whatever you think is fitting.
Awesome man, thanks for sharing. Running VMware ESXi today, but also a big fan of Proxmox. IMO, as a big fan of Rancher as well, the main selling point is staying in the same ecosystem with Harvester as the main hypervisor, as you demonstrated here. It not only solves cloud on the edge but also gives us the best of both worlds: VMs and K3s OOTB.
Awesome video. I've not yet finished building a Proxmox cluster but I think I'm going to wipe it for an install of Harvester. I can see why you are excited!
My jaw dropped about 5 times during this demo. I'm going to have to rethink my homelab a bit after this.
Running Hyper-V with Windows 10, and currently happy with it. Definitely going to try this as I want to learn Kubernetes. Thanks for posting this!
Great job on this! For context, I own an MSP. I've been watching Harvester since it was announced and did my first install shortly after. What an amazing product it has become. I will be using this in my business as an offering and a service enabler for my business. For price sensitive companies, this could be an option that gets them into virtualization dirt cheap - no scary VMware cluster pricing. I also have clients that need to build and destroy labs regularly and can use some of their dated hardware for that purpose. So, while the edge is an ideal place to deploy Harvester, its usefulness will make it into the corporate DC. I see this as having massive potential for GSA contracts, too.
Talk about Kubernetes inception!! I'm trying to wrap my head around all of this. 🤯
Nice video! A bit dense for me, as I haven't mastered K3s and Rancher. So does that mean no more Proxmox? Are you looking to replace it with Harvester moving forward? If so, can you do another video to explain the differences in more depth? I am fully invested in Proxmox now and I'd like to know if I can switch (virtual TrueNAS and other services with passthrough hardware).
Already using it and loving it! I hope more people get around to trying it. I opened an issue about a cloud config bug and it looks like it’ll be fixed in the next release 🙂
I'm guessing it's the one where you have qemu guest agent checked and it ignores the settings you manually put in? That seems to be the most common
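In case it helps anyone hitting that: one workaround (a sketch, not tied to any specific Harvester release) is to leave the guest agent checkbox unticked and install qemu-guest-agent yourself in the cloud-config user data, so the UI never rewrites your settings:

```yaml
#cloud-config
# Install and start the guest agent manually instead of
# letting the UI checkbox inject (and overwrite) config
package_update: true
packages:
  - qemu-guest-agent
runcmd:
  - [systemctl, enable, --now, qemu-guest-agent]
```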
I heard about Harvester some time ago, but I wasn't too interested; I probably did not understand the concept all too well. BUT... now that you've reviewed it... oh man, this fits my use case very, very well! Thanks, Tim! I feel I am going to lose a night to setup :P
Hope you enjoy it!
Been running proxmox for a minute but finally decided to give this a shot.
UPDATE: It works amazing!
@@jmcglock I'm thinking of ditching Proxmox as well! I'm in DevOps, so working with a platform that's close to what I do all the time is a better bet compared to traditional hypervisors.
This is absolutely freaking awesome, good gawd, the potential is astronomical! Thanks Tim, good job man. I will be spinning this up in my home lab very, very soon. 😃
As a side note, if you want to get crazy (for real), how about nested virtualization of Harvester lol... That must be fun... Another awesome video, Tim. Thank you so much for your work and dedication to the community! Thanks to your videos, I am running a Proxmox cluster on 3 nodes, with Ceph shared storage for HA of my most important VMs. On top of that I run a K3s cluster of 6 VMs and Rancher with local storage (2 on each Proxmox node). I am in the process of deploying Rook inside the K3s cluster to access the Proxmox Ceph cluster in order to have reliable PVs/PVCs for my Kube workloads. The reason being that Longhorn is unfortunately a big failure at the moment IMHO. I've tried hard to get it running for several weeks on much simpler architectures, but it is totally unreliable, with most PVCs not rebuilding properly at all after a node reboot, even using fast SSDs and networking. Ceph, on the other hand, is just amazing in terms of stability and usability. I recommend you give us a tutorial on Ceph; it is such a brilliant project, especially the architecture and the thought that went into the CRUSH algorithm.
Thank you so much!
I'm old now. My tech stacks tend to be old too. I spun up a KASM machine and came out the other side thinking that all the K3s and Docker stuff might be *great*, but that it, at least to me, lacked the management and handling tools to make it top tier. I came away thinking that however great KASM is, it still fundamentally requires deeper skills to really add anything else to it. So in theory it's very expansive, not so much in practice. I watched this video. Really, really cool, and Tim, you always do a first-rate job showing stuff. This, to me, is the kind of tooling that is needed: it takes what may be great technology and places tooling on top that takes it to a new level.
I'm still a billion miles off using the new edge stuff, but products like this make me want to spin stuff up, rather than shut stuff down :)
Thank you! You’re never too old! You got this! Thanks for the kind words!
Really interesting use case. Another layer of abstraction 😀
Thanks a lot for sharing!
This sounds like a great solution for a high availability installation of Home Assistant. It would be really interesting to see a comparison of Home Assistant deployed in K3S versus Home Assistant OS. What are the disadvantages of each?
I had the same kind of thoughts for a long time myself, but the truth is that this is not the proper way to look at it... High availability should not be confused with scalability. Home Assistant can be made resilient through existing solutions (running it as a Docker stack on a dedicated VM, for instance); then make the VM highly available by deploying it into a cluster with shared and reliable storage (Proxmox, for instance). Kubernetes mostly brings scalability and parallelism for stateless applications, and unfortunately the concurrency it introduces in the data storage layer (like MQTT queuing or DB persistence) cannot be handled by Home Assistant, which must have exclusive access to its storage layer in order for topic statefulness to be properly addressed. So you can only properly run one instance of Home Assistant at a time; you can make it highly available with clustering and automatic migration of your instance, but you cannot really deploy multiple identical instances of Home Assistant and expect events to be handled properly. The same goes for deploying Home Assistant on Docker Swarm and expecting to scale the instances. I went through this for a long time and ended up understanding these limitations... You can still use Harvester to deploy this dedicated instance of Home Assistant easily, but it won't be easier (considering the complexity of the network layer in Kubernetes) than playing with straight VMs or a bare-metal installation of Home Assistant. Hope this helps...
Holy KubeCow §8-)
Thanks for this overview. That's really great to know...have to try it ASAP.
Happy farming Tim!
Very timely - I was about ready to install Proxmox on a spare machine, but I think I'm going to give this a try
Thought the same, but why have two hypervisors that can't integrate as two nodes in one interface?
I can't envision spinning this up on my home hardware just yet, as most of my bare metal is already tied up either in "production" or main lab configurations (like a big Proxmox hypervisor with tons of RAM and VMs already configured). I think this would be very cool for interconnecting remote sites to form your own HA "cloud" without having to use a host like AWS, Linode, etc. Of course it'd only be as reliable as whatever MPLS or connections you have, but... if you integrated that with an SD-WAN solution it could happen. I would certainly dig trying to deploy this to a Linode cluster or something, to see if it could be marketed "as a service" to small businesses. I'll bet there would be a business case for adding that to the portfolio of smaller firms.
-Shane
I had already heard about Harvester and thought it was interesting, but never tried it out. It was great seeing your experience with it, and I'm hoping it could replace Proxmox as my hypervisor, however far in the future that might be.
If you are into automation, that future might be sooner than you think. Harvester's automation capabilities (thanks to Kubernetes base, iPXE, Harvester's bootstrapping options and the existence of a good official Terraform provider) are way beyond Proxmox's. If you do everything in the UI, then I agree with you, Proxmox is certainly good enough to not try and change it.
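To give a flavor of that automation, here is a minimal sketch using the official Harvester Terraform provider. Attribute names here are illustrative only and may differ between provider versions, so check the provider documentation before relying on them:

```hcl
terraform {
  required_providers {
    harvester = {
      source = "harvester/harvester"
    }
  }
}

# Illustrative only: resource attributes vary by provider version
resource "harvester_virtualmachine" "demo" {
  name      = "demo-vm"
  namespace = "default"
  cpu       = 2
  memory    = "4Gi"
}
```

From there, `terraform plan` / `terraform apply` drive VM lifecycle the same way you would manage any other infrastructure as code.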
Sign language "thank you". Nice touch.
Ah! I've been trying my hardest to get a Harvester video out the door before someone beat me to it. I bought 3 new servers for the video but got held up because they were just _barely_ underspec'd for the job. Anyway, I'm at 0:01. Time to actually watch the video.
I was excited about Harvester when it was first announced, thinking I would be able to use network policies within my VM environment to create a micro-segmented DC. Usually that would cost a lot of money (*cough* Cisco ACI, etc.). Turns out this was not what the developers were after ;) Too bad, but how cool would that be?
Great stuff? No way dude, for me this is by far the coolest video you've posted so far. Wonderful, thanks to you!
I guess you should have said "cloud-init-like config" instead of "cloud config" 😉 Lemme know if I'm wrong.
Love your work brother! Cheerios !
Thank you for this awesome video. I discovered Harvester through you and I'm playing with it right now.
I am waiting for some hardware to get here before I revamp my home lab. What was originally a 2-node Pi 4 cluster will now have a 6c/12t x86_64 system to play with. Now after this... I don't know where to begin... I was gonna do Rancher, then I was gonna do Proxmox... then I saw the TrueNAS Scale video (I need a bit of a NAS)... now this... We need a TechnoTim homelab series with TT's thoughts on the very beginning and how to build out a homelab.
Alright, Techno Tim, having watched this for a 3rd time, I think I am still confused by this kube-ception. What if one did not have an existing cluster to add Harvester to? Let's say I was starting with one machine. I would install Harvester first, then create a kube cluster on that?
Now this is very cool. I think I'm gonna make some VMs on my Proxmox servers to give this a try. I have 3+ Proxmox servers, so I may make 3 VMs to form a triple Harvester cluster on my Proxmox cluster. You're right, VM inception.
Seems like an easier-to-install but crappier version of what you can do with OKD/OpenShift. OKD has had virtual machine support for a while now. OKD also doesn't have to put Kubernetes in VMs on Kubernetes; it just exists as both from the start. Also, your containers and VMs share the same storage pool with Rook Ceph. Looks like with Harvester you would be setting up Longhorn on top of Longhorn, replicating your storage on top of replicated storage...
Harvester is also a Kubernetes cluster that can run workloads side by side with VMs; it is just not recommended for production environments yet. Also, it offers a CSI driver so pods inside K8s clusters can share the same storage pool as the VMs, so you don't need Longhorn on top of Longhorn.
Harvester had integrated Rancher to run workloads side by side at one point, then they removed it. I'm sure you still can, but it isn't the recommended method.
@@FlexibleToast It is not recommended for paying customers; there are no proven technical reasons to avoid it. Also, it is a shame that the integrated Rancher is no longer shown, but it is there; you just need to change the URL you are using. You can also always use kubectl.
I like the idea of Rancher, but they seem to have pushed it into being infrastructure. The "proper" way to run it is to build out a 3-node Kubernetes cluster just to run a Kubernetes management plane. It seems like a better approach would be to run Rancher as part of Harvester.
OK, the nesting here seems a bit unwieldy, but maybe fun? Maybe there is a super great use case for it, other than 'we did it because we could'. Not judging.
I don't have much experience with containerization, more of a background in virtualisation, but I'm curious nonetheless. Rather than nesting, what would be a good way to deploy a working container hosting environment in a 'least nested' way? Could you do something like Fedora CoreOS on bare metal, and run your orchestration in containers on top of that?
For poops and giggles nesting might be fun, but wouldn't every level just introduce a new set of maintenance concerns: updates/security/networking?
Enjoying the vids!
Thanks! I don't think it's nesting really, it just uses k8s as an orchestration layer.
I like the idea of a single solution for both VMs and containers. You mention that it has low system requirements, though today I see it needs at least 64GB of memory, 16 CPU cores (!) and 300GB of storage for production, and when installing it, it seems to consume 50% of my 12 CPU cores all the time without even running any VM… it also takes too long to reach a ready state, which is a no-go for prod. Harvester still doesn't give the impression of being production ready today.
It is really the next step up from a traditional VM/HCI setup; hopefully there's more to come for the industry, making this an industry-wide adoption.
Great video again 🎉 Can you clarify this: "Already run a Rancher"? Where? How do I connect to this machine? And how does it handle physical disks and volumes? I'll try to replace my three-node Proxmox+Ceph installation with a Harvester cluster. I've got a lot of physical network interfaces and 3×5 SSDs.
This seems like a sweet deal for bare-metal installation, skipping Proxmox or whatever hypervisor one uses. I would however have liked to see a scenario where you have several hosts with somewhat different configs, e.g. hosts A and B with more cores and memory, and hosts C, D, E with less expensive hardware. Could you in that case create k3s VMs on all hosts and configure it so that only hosts A and B would be allowed to run non-k3s VMs? I'd also like to know if there is a common strategy for scenarios like this (and this goes for Proxmox etc. as well, I suppose): is it better to create a single k3s VM per host with all its resources, or should you diversify and have several smaller ones?
Thanks for a great video, by the way. I'm running both Hyper-V and Proxmox and have experience with VMware as well.
This sounds incredible. Since I started my Linux sysadmin course, I've put some of the fun stuff on hold... well, Python and Docker mostly. Looking forward to getting back into it; your video is awesome!! SELinux is another layer of security and could add some complexity to what you're doing with permissions etc., although from my limited time using it, it seems pretty good. It's not recommended for Ubuntu (which uses AppArmor), but CentOS etc. uses it. It's an area I need to go back over before exam day using CentOS, that's for sure... When I was testing it, I wiped AppArmor from Ubuntu and used SELinux on Ubuntu, but the training material didn't recommend it, and I agree lol.
I use Proxmox as my hypervisor, and I have two nodes, but I generally leave one off as I don't really need the availability.
On a side note, I have also written my own qemu scripts using dmenu (at the moment these are just local to the desktop, but this could evolve), and they have been transferable between Linux distros. The idea is that I've written a master script to deploy the virtual machines, and once deployed, I can hit a keyboard shortcut to launch a dmenu script, which lists all the qemu virtual machine scripts generated by the master script. Might sound weird, but it would be straightforward for someone like you: it's one-touch quick access to a virtual machine :)
Great video as always, Tim. Now if they just had something similar for a distributed block storage solution.
Thanks! Longhorn is their solution!
Looks really good, but is it possible to do GPU passthrough?
just EPIC, Harvester and you!!!
This is really cool and well presented content! I Really appreciate your work making these types of videos. Keep up the great work!
CoreDNS is just a super simple DNS server. I have it installed on my Pi-hole, use it as upstream, and Kubernetes updates it from MetalLB so I can access each container by name. So don't be that surprised to see it outside a Kubernetes environment.
Nice! Didn’t realize! I’ve only seen it in k8s until you pointed this out. Thank you!
@@TechnoTim I have to admit that my setup is probably ad-hoc for my particular needs 😜
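For anyone wanting to try this, a standalone CoreDNS instance is driven by a plain Corefile. A minimal sketch (the upstream addresses are illustrative) that forwards everything upstream and caches answers:

```
. {
    forward . 1.1.1.1 9.9.9.9
    cache 30
    log
}
```

`forward` sends all queries upstream, `cache 30` keeps answers for up to 30 seconds, and `log` prints queries to stdout; it is the same config format CoreDNS uses inside Kubernetes, just without the `kubernetes` plugin block.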
Can't stop watching this vid... thanks Tim!!
I just bought a used server with 2 CPUs of 10 cores each; it's arriving in a couple of days. I am thinking of using ESXi free (which has nearly no limits except a max of 8 vCPUs per VM). I just followed a course on Rancher with SUSE people and discovered Harvester only a few days before your video, and I am seriously thinking of trying it. We'll see.
Thank you Sir! Super Intro to Harvester.
Wow, another excellent vid! Thank you for another inspiring piece!
Thanks Tim for the video. I'm mulling over installing Harvester as a Helm chart on top of a k3s cluster. In a homelab, hardware is a precious and limited resource, plus I'm thinking I would benefit from HA. Or should I try a Harvester cluster running a k3s cluster running my workloads? I dare not ask what the best practice is, as we are venturing into uncharted territories.
This is my favorite video by far. So cool. Using Proxmox now; it's great for getting started but requires a lot of work to do anything fancy IMO. I come from a cloud background and am definitely going to give Harvester a try.
Tim, could you see yourself using harvester to run your existing home-production vm workloads?
Thank you! I am running both at the moment. I do have some pretty advanced needs for my VMs and I am hoping to see more automation with Harvester and kubernetes before going all in!
Hello, Techno Tim
Thank you for this video and your channel!
Did you try to create an image for Windows Server? Which version?
Can you share the URL for downloading this image?
I have an issue with creating this image.
Proxmox seems viable with the Salt and Terraform providers. Do check them out, and Harvester can be junked, just like Rancher and k3os.
Tim, I loved this video. If you can, please make more content about Harvester for beginners... :-)
Another thing: what is your opinion on Harvester, Proxmox, and KVM (Cockpit / Houston Command Center) as replacements for VMware after this, in my opinion, disastrous acquisition? Thank you.
You could create content or a series about this controversy. What do you think?
This is everything I've ever wanted!
Excellent presentation. This beats installing oVirt/Proxmox (not to mention the other expensive options) or Oracle Linux Virtualization for VM management and then adding a k3s cluster. Is it possible to create a Kubernetes cluster with a different CNI, for example Cilium instead of Flannel? Replacing Traefik with MetalLB or Istio? Replacing Longhorn with Rook-Ceph? What about VM snapshot creation? And of course, what is the life expectancy of this product?
This is absolutely nuts!
Two years later and I just saw this. COOL. After two years, maybe an update on what you've seen in new Harvester releases? And hey, have you tried Kasten for backups? Maybe a comparison of the embedded backup vs Kasten?
Well now that is FREAKING COOL!!!
Awesome!! I think it's time to say goodbye to the XenServer I use :)
Do you prefer this over PVE? Just curious. I love PVE but always willing to give other things a chance.
Hi Techno Tim, I have been watching your excellent videos and subscribed after the first one... Thank you so much for your valuable work. I've got a question: could you do an overview video on a realistic network? Say, three layered private networks (onion-peeled), from a deeply secured inside out to on-the-edge nodes/Docker instances or VMs. Such a video would put your other videos into context...
wow, I'm so hyped rn!
This is a game changer for my home lab, thanks!
Good video. As always, I learned a lot from you. I am already running Proxmox, so I probably won't use Harvester, unless I can run it in a VM.
SELinux (Security-Enhanced Linux) is a kernel-level mandatory access control system that hardens Linux. It's built into Android and many distros (Ubuntu uses AppArmor instead). Good enough for Android, good enough for me.
New fan here! Great work man, thank you! ✌🏼
Thank you!
Super interesting video! I've recently been planning out what software I'd like to run my homelab on, and this proved really thought-provoking for how I want to run everything. After watching this, I went and looked at the TrueNAS SCALE roadmap and saw that they're going to add KubeVirt and K8s clustering in Bluefin later this year or early next year. This makes me think, rather than doing:
A. Proxmox with TrueNAS Core virtualized for NAS and a few Kubernetes instances managed by Rancher
B. A bare metal Rancher server and a bare metal Harvester server tied together for Kubernetes goodness
I could have two or three SCALE servers clustered together later this year and, from the one platform, have Kubernetes clusters for containerized apps, VMs directly in KVM and VMs in Kubernetes through KubeVirt, and also a NAS. To me, it seems like easy access to HA across the board: HA containers, HA VMs, distributed storage, and HA for said HA machines by clustering two SCALE servers together. What do you think about this idea?
@TechnoTim do you have any follow-ups on this software? I remember having a hard time with things being really unstable back in the early 2000s when first trying out SUSE (items that were recognized by other OS distros and BSD were never recognized out of the box by SUSE). I mention this because that has been my experience trying to run Harvester HCI 1.2.1 on a Xeon E3-12667v3 with 32GB of RAM: I keep getting kernel panics, and the bigger test box with dual-socket E5-2667v3s and 128GB of RAM just boots into a kernel panic. Both boxes run production services when not being used for this testing, so nothing on them is broken, to my knowledge. I literally caused a kernel panic by simply attempting to upload my very first ISO image from my desktop to the Xeon E3 node, on a 10GBase-T network.
OK, I've watched this video several times, as Harvester is really interesting. I think I'm going to install it on one of my tiny/mini units. You said it may not be meant to replace a hypervisor, but I'm curious how long you've been running it and whether it has been able to replace Proxmox or XCP-ng in a homelab.
Spectacular! My Proxmox server's days are numbered :D
Great video. I am testing this now on some old Dell 720s.
Why use Klipper LB and Traefik for ingress? Can't Traefik do both?
It appears that you don't think this is a Proxmox replacement. It does seem to provide most of what I use Proxmox for, plus the ability to automate building machines. Is that ability externally accessible? It would be nice if you could use Vagrant. Another issue I ran into with Proxmox was giving my K3s servers larger disks: I had them running on Ubuntu with LVM, and I could increase the disks, but making the extra space usable was a lot of work.
Thank you for the great and clear explanation of Harvester. I am planning to replace my hypervisor with Harvester; what do you think?
Dude this is awesome !!!!! Thanks
Hmm, re-watching this (5th or 6th time now... I keep coming back to it), I think I should scrounge for hardware to make a 3-node Harvester cluster... Then somehow set up ZFS (or the like) to pool the multiple drives on each node, or, if it's possible, pool the drives across all nodes (one pool).
Harvester does look promising, but I unfortunately could not get Grafana/Prometheus working. I enabled them but just got errors that they were not available in the console.
Can you do a video comparing proxmox to harvester?
Thanks for the demo and info, have a great day
I just started using Proxmox and am quite happy with it. But my main goal is Kubernetes. Do you think it is a good idea to switch to Harvester?
So is this any better than just using Proxmox or Unraid? It seems like the same features, just a different way to do the same things as Proxmox. I feel like all of this can be done by setting up Docker/Docker Compose and Portainer on Proxmox.
Hi sir, I saw your video about Harvester, but one thing: how can I auto-scale nodes when necessary for a Kubernetes cluster? If you could give me some info about auto-scaling nodes, it would be great.
Thank you for the video. But for me it looks like overhead 😅
Isn’t that the definition of a hypervisor in general but the benefit is shared resources? 😊
@@TechnoTim Of course... It is useless just for me :) Because if I want to create a cluster at home (Docker Swarm or K8s), I just use different PVE hosts with VMs on them, and there is no sense for me in virtualizing once again on top of the cluster.
It's not actually VMs in K8s; it just uses the Kubernetes API to provision VMs with KubeVirt.
Great presentation/video though, I would dare to say it's better than the official ones :)
Thanks
That's right!
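For context, KubeVirt models a VM as a Kubernetes custom resource, so provisioning a VM is just applying a manifest. A minimal sketch (image and sizes are illustrative):

```yaml
# Sketch of a minimal KubeVirt VirtualMachine custom resource
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true            # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:   # ephemeral root disk baked into a container image
            image: quay.io/containerdisks/fedora:latest
```

Harvester generates and manages resources like this under the hood; the UI is a front end over the same API.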
More importantly, is that a Grafana dashboard on your watch? :-)
Second question here. I'd also like to know your opinion on Harvester running k3s in VMs vs Proxmox running k3s in LXC; as far as I understood, Harvester doesn't support LXC?
Not that I am aware of but why not just run a container at that point?
@@TechnoTim If applicable, yes, but I was thinking that since k3s runs in VMs on the hosts, could those VMs use LXC, or are they bound to a "full" Linux VM?
Excellent video! Harvester is still in an early stage, but I see a lot of potential in it. Could you show how to install RKE2 HA from scratch, since it is in an early stage as well? Similar to what you did for K3s...
Yes, you can. Just choose that over k3s!
Hi Tim, do you use any CI/CD tools for automated IaC deployments? I'm looking to set up a dev environment locally with Harvester and then be able to deploy to my AWS EKS, all through CI/CD pipelines. Any tips on that? Maybe an idea for a video :)
Very interesting information! How to create a cluster with macOS virtual machines? Thank you!!!
The hypervisor is *below* k8s. Just in terms of layering, bottom up: hardware, host kernel, hypervisor, guest kernel, guest apps. K8s is running at the same level as the guests, orchestrating them. I hate to criticize this otherwise good video, but I’m seeing folks here use the wrong terminology.
No worries and thank you for the feedback! I really appreciate it and I didn't take this as criticism!
Lab is set up, and I'm messing with this now. A question: I noticed when you made the cluster, you made 3 VMs in 1 pool, and the three VMs ran the etcd, control-plane, and worker roles. Is there any reason to break out the pools in the cluster, making pool 1 etcd, pool 2 control-plane, and pool 3 worker nodes?
Hi @Techno Tim, this is a pretty cool and exciting video, thanks for sharing.
Can this be set up on a Raspberry Pi? 😅
If yes, can you please make a video in the future on setting it up on a Raspberry Pi 🙏