HIGH AVAILABILITY k3s (Kubernetes) in minutes!

  • Published Dec 18, 2024

COMMENTS • 473

  • @TechnoTim
    @TechnoTim  4 years ago +27

    What are you going to run your k3s cluster on?

    • @sirsirae343
      @sirsirae343 4 years ago +6

      Right now a Mac mini and a shuttle ;) works really well with about 30 containers

    • @pjotrvrolijk3537
      @pjotrvrolijk3537 4 years ago +11

      Clear tutorial as always! I'm actually already running k3s (with Rancher) on three Lenovo minis, and I must say that I'm really enjoying the platform. Some differences in my install: I started by installing etcd on all three nodes first, and I'm using a virtual (floating) IP address rather than an external load balancer, so the cluster is self-contained and doesn't depend on an external database or LB. I'm also running workloads on all three masters; I figured the small overhead of the control-plane components isn't that big.

    • @TechnoTim
      @TechnoTim  4 years ago

      @@pjotrvrolijk3537 Nice! Super flexible!

    • @morosis82
      @morosis82 4 years ago

      We use EKS/serverless at work, so after my dual E5-2660 R720 arrives this week I'll start moving some of my plain Docker stuff into k3s on it and on my R9 3900X server.

    • @sidefxs
      @sidefxs 4 years ago +3

      I was actually going to ask about your thoughts on deploying an HA cluster using k3d; I saw a tutorial and it seems interesting, at least for practice.
      Great tutorials, so far my favourite channel, straight to the point and useful tools that we all should be running.
      I was also wondering if you see value in doing "Kubernetes the Hard Way" to learn k8s, or is there a better resource? I feel I need to learn more to follow your tutorials better.

  • @jessicalee462
    @jessicalee462 4 years ago +15

    Excellent work. You keep it simple and accessible. I would never attempt these levels of sophistication in my work. You are boiling this down to the simplest terms. Much appreciated. You have earned a loyal follower.

  • @harrycox6303
    @harrycox6303 3 years ago +7

    The video tutorials you make are gold.

  • @JeffOwens
    @JeffOwens 3 years ago +2

    Just started setting up my Homelab and learning Kubernetes for work. Your videos are the best I have seen. Thank you for clear and detailed directions.

  • @duffyscottc
    @duffyscottc 1 year ago +3

    I was not able to follow along with this tutorial for three reasons: 1. I don't know how to configure a load balancer (is it on another VM, is it on my personal machine, etc.). 2. I don't know how to set up the MySQL database (again, where does this go?). 3. How do I set up the VMs for my servers and agents? I've already followed your tutorials on Proxmox, but I feel like that was left out here. Otherwise, thanks for this tutorial, you make great content, and I appreciate everything!!

  • @andrebalsa203
    @andrebalsa203 1 year ago +1

    Great, great tutorial for deploying HA k3s, and after two years everything still works as per your explanations (with a couple of minor adjustments). Thanks a bunch for your excellent work!

  • @pchasco
    @pchasco 3 years ago +2

    I really appreciate this video! I've been struggling to get a k8s cluster up and running, and this video was exactly what I needed. Thanks so much!

  • @shinzoken1
    @shinzoken1 1 year ago

    Whoa, you are putting out hot tutorials. I've been watching and following along for two weeks now, and my homelab is starting to look nice :)

  • @uuutttuuubbbee
    @uuutttuuubbbee 1 year ago +3

    I think a lot of things have changed by now in deploying k3s. Could you make an updated video for the newest version of k3s?

  • @gumtreeuser9768
    @gumtreeuser9768 8 months ago +2

    @TechnoTim, you might add two things:
    1. the k3s servers don't just pick up the K3S_DATASTORE_ENDPOINT variable; they need the --datastore-endpoint parameter
    2. the second k3s server needs the --token argument
    In my setup, in April 2024, these are what made it work.
    Thanks anyway for this awesome video. Do you have any plans for a Terraform-based k3s deployment on XO, Proxmox, or whatever homelab owners have?
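
    For anyone hitting the same two issues, a minimal sketch of what those install commands can look like (not the exact commands from the video; the datastore DSN, IPs, database name, and token are placeholders):

    ```bash
    # first server (placeholder credentials, IPs, and DB name)
    curl -sfL https://get.k3s.io | sh -s - server \
      --datastore-endpoint="mysql://k3s:change-me@tcp(192.168.1.50:3306)/k3s" \
      --token=SuperSecretToken \
      --tls-san=192.168.1.100          # load balancer IP, optional but handy

    # second server: same datastore endpoint and the SAME token
    curl -sfL https://get.k3s.io | sh -s - server \
      --datastore-endpoint="mysql://k3s:change-me@tcp(192.168.1.50:3306)/k3s" \
      --token=SuperSecretToken \
      --tls-san=192.168.1.100
    ```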

  • @sacha8416
    @sacha8416 4 years ago +8

    Hey Tim, love your tutorials. If you want an idea for a video, there is one thing that's very tricky but could be extremely powerful to set up:
    can you add persistent storage on TrueNAS for Kubernetes nodes through NFS or iSCSI?

    • @helderferreira8498
      @helderferreira8498 4 years ago

      Yeah I would love to see some examples as well. I'm thinking about migrating to k8s too.

    • @TechnoTim
      @TechnoTim  3 years ago +1

      Just use the NFS client provisioner; many of our community members have already set this up. It comes up daily in our Discord! discord.gg/DJKexrJ
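
      If you're looking for it today, that chart is published as nfs-subdir-external-provisioner (the successor to the old nfs-client-provisioner). A minimal sketch, with the NFS server IP and export path as placeholders:

      ```bash
      helm repo add nfs-subdir-external-provisioner \
        https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
      helm install nfs-subdir-external-provisioner \
        nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
        --set nfs.server=192.168.1.20 \
        --set nfs.path=/mnt/tank/k8s
      ```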

  • @GabREAL1983
    @GabREAL1983 4 years ago

    You're becoming my favourite YouTube channel for this kind of stuff... a lot of others are just super annoying or try to sell you stuff all the time, like Network Chuck etc.
    This is really useful stuff.

    • @TechnoTim
      @TechnoTim  4 years ago +3

      Thank you. I love Network Chuck. Learned a ton of new topics from him!

  • @unmetplayer2727
    @unmetplayer2727 4 years ago +1

    With all the problems with Google Photos at the moment, a video on something like PhotoPrism would go a long way. Thank you for continuously making quality content, keep up the good work!

  • @yeezul
    @yeezul 4 years ago +4

    Great tutorial! Been waiting for you to do something like this for a while.
    Thanks!
    Ps: your tutorials are amazing. Well structured and easy to follow.
    Keep up the good work!

  • @abrahamlora3650
    @abrahamlora3650 1 year ago

    Hey Tim, you're an inspiration for sure!
    I have been consuming content for a long time and I am also looking to start paying it forward soon.

  • @EvilDesktop
    @EvilDesktop 4 years ago

    I've been looking at lots of docs the past few days to set up k3s in HA, and from what I remember it can handle 1 server failure if we have at least 3 of them; I may be mixing it up with k8s or something, though. Nice videos, this is the first one where I actually ended up with a working k3s cluster! Thank you.

    • @TechnoTim
      @TechnoTim  4 years ago +1

      With k3s and an external datastore you need 2 at minimum :)

  • @nwareroot
    @nwareroot 4 years ago +5

    Nice tutorial on spinning up the Kubernetes cluster! But what about load balancing all these workloads from the client side?

  • @TillmannHuebner
    @TillmannHuebner 3 years ago +7

    There is also an option for embedded etcd, so you don't need an external DB :)
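
    For reference, a minimal sketch of the embedded-etcd variant (flags per the k3s docs; the IP is a placeholder and the token comes from the first server):

    ```bash
    # first server bootstraps the embedded etcd cluster
    curl -sfL https://get.k3s.io | sh -s - server --cluster-init

    # additional servers join it (token lives in
    # /var/lib/rancher/k3s/server/node-token on the first server)
    curl -sfL https://get.k3s.io | sh -s - server \
      --server https://192.168.1.101:6443 \
      --token <token-from-first-server>
    ```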

    • @sexualsmile
      @sexualsmile 2 years ago

      Performance issues are a concern

    • @TillmannHuebner
      @TillmannHuebner 2 years ago

      @@sexualsmile It's k3s… no one's going to host business-critical infrastructure in a self-hosted homelab.

    • @sexualsmile
      @sexualsmile 2 years ago

      @@TillmannHuebner Making assumptions and not reading the docs.
      😂

    • @sexualsmile
      @sexualsmile 2 years ago

      You didn't even address my initial comment. 😆

  • @VladimirBezuglyi
    @VladimirBezuglyi 3 years ago +3

    Hey @TechnoTim, actually you need the server token to start the second/third/nth k3s server as well (control-plane, master). Tested on version 1.21.5.

    • @jaycol12
      @jaycol12 3 years ago

      When I specify the --token [value] param on the 2nd master, it starts but never seems to actually join; both nodes show themselves when doing a kubectl get nodes, but neither shows the other. Any ideas?

    • @earthisadinosaur2338
      @earthisadinosaur2338 1 year ago

      @@jaycol12 Hi, I'm experiencing the same problem right now. Did you by any chance solve this issue?

  • @nischalstha9
    @nischalstha9 2 years ago

    So well explained! Got my cluster up and running so well!!

  • @michaelkasede1489
    @michaelkasede1489 3 years ago +3

    Hi Timothy, I love your tutorials. Thanks for keeping the information so up to date. I am having trouble installing Rancher on my k3s cluster: 1 master node with the embedded datastore, 2 worker nodes, and I have an LB as well. I used k3sup to deploy the k3s cluster. I installed cert-manager and thereafter installed Rancher, but I can't access the Rancher UI or the Traefik dashboard.

  • @eLCe_Wro
    @eLCe_Wro 3 years ago +1

    Hey Tim. I have a question about "12:17 - Get our k3s kube config and copy to our dev machine". What machine is that? I guess it's not an agent? What do you mean by "dev machine"?

    • @TechnoTim
      @TechnoTim  3 years ago

      Get the kube config from one of the servers and copy it to your local machine.
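
      A minimal sketch of that step (user, IPs, and paths are placeholders; k3s.yaml is root-readable by default, so you may need sudo on the server or an adjusted --write-kubeconfig-mode):

      ```bash
      # run on your local/dev machine
      scp ubuntu@192.168.1.101:/etc/rancher/k3s/k3s.yaml ~/.kube/config
      # point it at the load balancer (or a server IP) instead of 127.0.0.1
      sed -i 's/127.0.0.1/192.168.1.100/' ~/.kube/config
      kubectl get nodes
      ```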

  • @clumbo1567
    @clumbo1567 4 years ago +3

    Would love to see a video on MetalLB and OpenBGPD.

  • @coletraintechgames2932
    @coletraintechgames2932 3 years ago

    Trying to figure out how to type this.
    FIRST: your videos are the best. Most pertinent to my interests, most aligned with my current configuration, most detailed.
    But I'm not at your level. I don't fully understand nginx, MySQL, certs, etc.
    Foundation issues... but I'll get there! Thanks for your help.
    Saying thanks, and giving feedback for newbs like me!

    • @TechnoTim
      @TechnoTim  3 years ago +1

      You can do it!

    • @coletraintechgames2932
      @coletraintechgames2932 3 years ago

      @@TechnoTim I heard that in a Schwarzenegger voice. I do love Schwarzenegger. Lol

  • @dougsellner9353
    @dougsellner9353 3 years ago +5

    Q: Wouldn't you want to use MetalLB and then perhaps Traefik for a more robust ingress/load balancer? (Love your stuff.)

    • @ThePandaGuitar
      @ThePandaGuitar 3 years ago +3

      Traefik is built into k3s.

    • @dougsellner9353
      @dougsellner9353 3 years ago

      @@fuzzylogicq It is easier because Traefik v1 is included, however the current version is v2. So I would suggest toggling the option not to install Traefik, then installing v2 via Helm charts; from there you can have it route anything, including individual static routes, with many options via the Traefik CRDs on your ingress and/or inside the Traefik config. Mastering ingress/Traefik can be challenging, but it is perhaps one of the most important learning steps of k8s.
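
      A hedged sketch of that approach (chart repo per the Traefik docs; the namespace name is just a choice, and older k3s releases used --no-deploy instead of --disable):

      ```bash
      # install k3s without the bundled Traefik
      curl -sfL https://get.k3s.io | sh -s - server --disable traefik

      # then install current Traefik from its Helm chart
      helm repo add traefik https://traefik.github.io/charts
      helm repo update
      helm install traefik traefik/traefik --namespace traefik --create-namespace
      ```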

    • @eduardmart1237
      @eduardmart1237 1 year ago

      @@dougsellner9353 Isn't MetalLB better overall than Traefik?

  • @mbigras
    @mbigras 3 years ago

    Excellent video! Loved the pace and quality of information. I subscribed and am looking forward to browsing around your channel further! Thank you Tim 👍

  • @rstcologne
    @rstcologne 1 year ago

    Thanks Tim, great tutorial. I have just a couple of comments. With all the tabbed terminals and switching between them, it was sometimes a bit hard to follow where you were actually issuing the commands. Second, your face was sometimes blocking the commands you were explaining. I guess using multiple windows would make this easier to follow. But then again, it's easy enough to go back a bit and look carefully. So again, thanks for creating this. Really helpful and still valuable after some years.

  • @HenrickSteele
    @HenrickSteele 2 years ago +1

    Can you post a video about your DB setup?

  • @ragequilt_
    @ragequilt_ 4 days ago

    This is the first good video on k3s. Could you do more? I'd happily pay for paid tutorials on this.

  • @betterwithrum
    @betterwithrum 2 years ago

    I just noticed that you and Zack Nelson's (Jerry Rigs Everything) YouTube channel have the same exact cadence when speaking. It's very unique, soothing, and somewhat unsettling (because of the timing). Moreover, it holds my attention. Not because of the material, but because of the almost 5/3 timing of your speech pattern.

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Maybe I can work on the material holding your attention too 😂! Thank you!

    • @betterwithrum
      @betterwithrum 2 years ago

      @@TechnoTim The material is pretty good too. I'm learning a bunch. Following your guide right now on setting up my K3S environment. Thank you!

  • @AndrewBradTanner
    @AndrewBradTanner 2 years ago

    FYI, explaining the HA server configuration at a high level would have prevented me from going down a rabbit hole of silliness trying to set up an HA configuration without the datastore, specifically the part about the HA server configuration requiring the datastore. Still, as someone new to k8s, very helpful!

  • @tektech440
    @tektech440 3 years ago +1

    I don't know if it's just me... but for anyone else out there, for the 2nd server I had to use the token from /var/lib/rancher/k3s/server/node-token on the 2nd server install, not just on the agents. The journalctl output was saying a cluster was already bootstrapped with another token.

  • @yogiwhy9531
    @yogiwhy9531 3 years ago

    Thanks for the great video Tim!
    At 5:30 you said that the Nginx load balancer runs on a VM, in Docker, or outside of Kubernetes. By "outside of Kubernetes", did you mean that nginx doesn't run in a Kubernetes pod? It is still possible to run the load balancer without Docker/a VM on the same server, in parallel with the k3s server, isn't it?

    • @TechnoTim
      @TechnoTim  3 years ago

      No, your load balancer should not live on any of your k3s servers, because it's how you communicate with your k3s servers from outside of k3s.

  • @dimitryarmstrong
    @dimitryarmstrong 3 years ago

    I tried with a different collation, and this message appears the moment the service starts:
    "creating storage endpoint: building kine: Error 1071: Specified key was too long; max key length is 767 bytes"
    Great video!

    • @TechnoTim
      @TechnoTim  3 years ago

      Glad I checked the collation!

  • @bitsbytesandgigabytes
    @bitsbytesandgigabytes 3 years ago +1

    Great tutorial as ever Tim, love the bit from the live stream at the end too. It's always good to pay it forward. Quick question, were all the nodes virtualised in this tutorial or cloud hosted?

    • @TechnoTim
      @TechnoTim  3 years ago

      Thank you! All virtualized in my Proxmox cluster!

  • @lukasblenk3684
    @lukasblenk3684 7 months ago +1

    I know this is a pretty old video, but I have some question marks around high availability. I get that with 2 master nodes you get redundancy for the Kubernetes management stuff. But how do we set up highly available Persistent Volumes? As I understand it, they are still a single point of failure. If you have a pod running MySQL, you can easily say "if it isn't running, spin it up on another node", but it relies on Persistent Volumes, which in my case are NFS mounts with fixed IP addresses. How do we make those highly available?

  • @asdflkj3809fjlkd3
    @asdflkj3809fjlkd3 3 years ago

    Keep up the good work, Tim, excellent content. Thanks so much!

  • @jeffherdzina6716
    @jeffherdzina6716 4 years ago +1

    Sounds like I might have a new project at work on Monday. Thx!!

  • @hackula8210
    @hackula8210 4 years ago +1

    Though I have learned a lot from your videos, I wish you would come back to this video and go from A to Z, for alas I still cannot get it to work.
    1. Start with the load balancer once the VMs are up and IPs are available.
    2. Installation of MySQL or MariaDB on a separate VM, showing what app you use (DBeaver).
    3. The installation, stating what version of Docker you are putting on the master and worker nodes.

    • @TechnoTim
      @TechnoTim  4 years ago

      Thanks for the feedback! I try to break them up, otherwise they would be an hour long and less consumable. Hop into our Discord or live stream for questions.

  • @Error_404-F.cks_Not_Found
    @Error_404-F.cks_Not_Found 3 years ago

    I just wanted to say how much I love your blog. The documentation that goes along with the videos is spot on. And I love the layout. May I ask what it's built on? It definitely doesn't look like WordPress.

    • @TechnoTim
      @TechnoTim  3 years ago +1

      Thank you! It’s open source too. You can clone and fork it! It’s in my GitHub which is included in the blog too!

  • @bflnetworkengineer
    @bflnetworkengineer 1 year ago

    Any drawbacks to running the "load balancer" in front of the masters as, say, a standalone management server running Docker that runs not only the Nginx container but also a MariaDB container serving as the datastore DB? I wouldn't think so personally, but if we're talking about spinning up a fresh k3s cluster, it'd be nice to have everything under one roof. Great video BTW!

  • @DJ-Manuel
    @DJ-Manuel 4 years ago +3

    It always sounds easier when you explain it, but when I try it out it looks waaaay harder 😅👍
    Thank you anyhow for this video.

  • @nbensa
    @nbensa 2 years ago +1

    Hi Tim! Do you have a tutorial on load balancing MySQL servers? Thanks!

  • @MrPowerGamerBR
    @MrPowerGamerBR 3 years ago +1

    Not sure if something changed in k3s, however the tutorial sadly doesn't work for me.
    Installing a single master node works fine, but if I try adding another master node, the Kubernetes service crashes with "starting kubernetes: preparing server: bootstrap data already found and encrypted with different token".
    This can be bypassed by copying the token from the first node and using "--token TokenHere" when setting up the server, however this isn't mentioned anywhere, not even in the original documentation! And even then, it still has some issues with k3s's metrics service not being able to access other nodes' metrics.

    • @MrPowerGamerBR
      @MrPowerGamerBR 3 years ago

      Another issue that I've found: using the "@tcp(IpHere:PortHere)" format doesn't work for me for some reason; the service fails with "preparing server: creating storage endpoint: building kine: parse \"postgres://postgres:passwordhere@tcp(172.29.2.1:5432)\": invalid port \":5432)\" after host". Maybe because I'm not using a load balancer for my PostgreSQL server? I don't know, but I don't think that is the issue.

    • @MrPowerGamerBR
      @MrPowerGamerBR 3 years ago +1

      Small update: looks like they mentioned the bootstrapping issue in the k3s changelog! I think it would be nice to update the docs in the description to mention that change:
      "If you are using K3s in a HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the --token CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores."

    • @TechnoTim
      @TechnoTim  3 years ago

      Thank you! Can you open an issue on GitHub and I will check it out and add notes!

    • @MrPowerGamerBR
      @MrPowerGamerBR 3 years ago

      @@TechnoTim submitted an issue with all the issues I commented here + solutions! :3
      Currently my Kubernetes cluster is running fine and well, so I guess the fixes really worked :P

    • @jeanmarcos8265
      @jeanmarcos8265 3 years ago +1

      Thanks buddy, you saved my afternoon!

  • @michaelkelly497
    @michaelkelly497 3 days ago

    The current (December '24) version of the kubernetes-dashboard is installed with Helm.
    It would be great if you could update the video to reflect this.
    I'm having a lot of difficulty getting it to work.
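
    For anyone stuck on the same step, a sketch of the Helm-based install per the dashboard project's own instructions (the exposed service name can differ between chart versions, so check the chart's install notes):

    ```bash
    helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
    helm upgrade --install kubernetes-dashboard \
      kubernetes-dashboard/kubernetes-dashboard \
      --create-namespace --namespace kubernetes-dashboard
    # recent chart versions expose the UI behind a kong proxy service; adjust if yours differs
    kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
    ```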

  • @PascalBrax
    @PascalBrax 1 year ago

    I know it's just for demonstration purposes, but it's kinda funny that you're using Proxmox to run full VMs for Kubernetes and the load balancer, while Proxmox has a shiny button for creating LXC containers right on the dashboard.

  • @jakubprogramming29
    @jakubprogramming29 2 years ago

    What a great video. Very much wow. Love your style of presenting the content. Sweet sweet.

  • @robertdilworth1105
    @robertdilworth1105 3 years ago +1

    Tim, great video! However, this took me days, not minutes, with many issues. Note that the uninstall scripts don't clear out the database, which was my problem. Once I cleared it and re-installed k3s, everything was fine.

  • @oughtington1628
    @oughtington1628 4 years ago +2

    Please make a video on how to allocate CPU and memory resources! I'm finding it hard to strike a balance and to know what to watch for in an HA cluster.

    • @TechnoTim
      @TechnoTim  3 years ago

      I just made a video on Monitoring and Alerting, check it out, it might help!

  • @Clocen
    @Clocen 4 years ago

    Thanks for the guide Tim! As Tim said, pay attention to the database collation (latin1_swedish_ci). This can cause issues when deploying the server nodes and only shows up in /var/log/syslog.

    • @TechnoTim
      @TechnoTim  4 years ago +1

      Thank you! I figured it was better to know for sure but was unsure if it affected anything. Thanks for confirming!
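
      A minimal sketch of creating the datastore database with an explicit collation (the one this thread reports working); database name, user, and password are placeholders:

      ```bash
      mysql -u root -p <<'SQL'
      CREATE DATABASE k3s CHARACTER SET latin1 COLLATE latin1_swedish_ci;
      CREATE USER 'k3s'@'%' IDENTIFIED BY 'change-me';
      GRANT ALL PRIVILEGES ON k3s.* TO 'k3s'@'%';
      FLUSH PRIVILEGES;
      SQL
      ```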

  • @AndresGorostidi
    @AndresGorostidi 3 years ago +1

    Tim, this is a great video, thanks a lot! BTW, I have a question about the DB. You created an HA cluster with 3 servers, but isn't the DB a point of failure? Wouldn't it be more appropriate to have an etcd DB created on each master and have them synchronized between them?

    • @TechnoTim
      @TechnoTim  3 years ago

      Sure, you can make your database HA. This is the HA install of k3s vs single node; making your DB HA is up to you. I've used etcd too and it has its own problems.

  • @chadmccluskey6465
    @chadmccluskey6465 3 years ago

    Tim, great video. "Kubernetes in minutes" is a bit of a stretch though, hahaha; I say that after the 8 hours I just spent trying to get this up. I am stuck at getting the dashboard on my Windows machine. Are you using a Linux machine to install the dashboard, or one of the Windows options? Also, what GUI are you using for the MySQL DB? What would help me is if you were just a little more specific/clear about which machine you are installing the components on. Don't get me wrong, you're the best thing going on YouTube; I have no idea how you do it all, you put in a ton of hard work!! Truly grateful for your content!!

    • @TechnoTim
      @TechnoTim  3 years ago

      I would skip the k8s dashboard and get Rancher instead :) I use HeidiSQL as my SQL client on Windows. Thank you!! ua-cam.com/video/APsZJbnluXg/v-deo.html

  • @flobow8446
    @flobow8446 1 year ago

    It seems like the way of setting up 2 control servers has changed slightly. You need to obtain the server token from server 1 first and add it to the server 2 setup command, i.e. you need to add the token param, e.g. ...server --token

  • @squalazzo
    @squalazzo 4 years ago +3

    You say you have MySQL behind a load balancer, a setup similar to the nginx one you showed in the video?
    Also, I'm setting up my test lab on my i7 with 32GB using Proxmox: what do you think of creating an NFS share for the storage part on the host itself, mounted on the agent nodes, so the storage is guaranteed to be available well before the nodes are started?
    And still, how do you set up the boot order, and which is the correct one, for all those VMs in Proxmox?
    Thanks!

  • @xSnake75
    @xSnake75 4 years ago

    Man, really awesome tutorial!!! That was absolutely clear and easy to understand!! Keep going with your work, and I hope one day we can share more experiences like this one! Only one thing that I missed: is your load balancer another VM or CT on your Proxmox server, or is it installed inside each k8s server? Cheers man! Awesome job!

    • @TechnoTim
      @TechnoTim  4 years ago

      Thank you! My LB is nginx running outside of the cluster (because I want to be able to communicate with it even if the nodes are down). To make things more manageable, my nginx runs in Docker on another VM, but it doesn't have to.

  • @ricardorocha7832
    @ricardorocha7832 4 years ago +1

    Great tutorial, very well explained, keep up the good work.

  • @weitanglau162
    @weitanglau162 3 years ago

    Great video!
    I am planning to deploy a Kubernetes cluster using k3s and I have some questions. Hope you will reply!
    1) Should the external datastore (e.g. MySQL) be on a separate VM, or will living inside one of the master nodes do?
    2) If I am planning on using the Nginx Ingress Controller (instead of having an external LB like you showed here), how should I go about doing this? Or are they actually different things?

    • @TechnoTim
      @TechnoTim  3 years ago +2

      1) Your database should not live in your cluster.
      2) This load balancer is not my ingress controller; k3s comes with Traefik for that.
      Hope this makes sense.

  • @charlesrodriguez3657
    @charlesrodriguez3657 4 years ago +2

    Could you do a video using K3OS?

  • @AsfarAsfar-g7b
    @AsfarAsfar-g7b 1 year ago

    Thanks for the video. How do I set up an ingress controller for path-based routing on a k3s cluster? Any documentation?

  • @JiffyJames85
    @JiffyJames85 3 years ago

    Can you demonstrate serving a simple webpage without needing to exec or proxy? Traefik is preinstalled, but maybe showing it with an nginx installation would be more helpful.

  • @michaelb8302
    @michaelb8302 3 months ago

    When you set up the taint on the control-plane nodes, it also prevents monitoring pods (node-exporter) from scheduling onto them. How could I configure the taints so that general workload pods aren't scheduled to the control plane but, say, monitoring-related pods are?
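
    Taints only keep pods off a node; it's a toleration on the monitoring pods themselves that lets them back on. A sketch, assuming the CriticalAddonsOnly taint that the k3s docs suggest and placeholder node/namespace/DaemonSet names:

    ```bash
    # taint the control-plane nodes so general workloads stay off them
    kubectl taint nodes k3s-server-1 k3s-server-2 CriticalAddonsOnly=true:NoExecute

    # add a matching toleration to the monitoring DaemonSet (names are placeholders)
    kubectl -n monitoring patch daemonset node-exporter --type merge -p \
      '{"spec":{"template":{"spec":{"tolerations":[{"key":"CriticalAddonsOnly","operator":"Exists","effect":"NoExecute"}]}}}}'
    ```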

  • @jharding65
    @jharding65 3 years ago +1

    Tim, love the vidz. Great content. My company deploys 30-plus microservices, and as a team lead I would like to find an inexpensive solution for my team for debugging k8s using the development laptops. k3s seems like a great candidate for this, considering its lightweight footprint. At the moment we use Docker and docker-compose to model the k8s setup for the core 5-6 services that handle the majority of the work. I want my devs to understand how k8s works; knowing how Docker works is great, but it is not the same as k8s. Q: Have you compared Docker for Windows w/ k8s vs. k3s?

    • @TechnoTim
      @TechnoTim  3 years ago +1

      I would use WSL and take Windows out of the equation for local dev. Then it's just like a Linux server. Thank you!

  • @yongshengyang8144
    @yongshengyang8144 4 years ago +1

    Hi, can you post a video on storage in Rancher? How to set it up in Rancher and how the database uses it. Thanks.

  • @soreLful
    @soreLful 3 years ago

    Hey Tim, great video. In fact, great channel; I love all your videos, they are very well done.
    I plan to build myself a Proxmox HA cluster out of a few machines I have lying around at home and then build a k3s HA cluster like you show in this tutorial. However, I have a question that has puzzled me since I first saw your tutorial and keeps me from starting the work on this: I could do this directly on the metal, with a Linux server + k3s, and not bother with Proxmox.
    What's the advantage of doing it with Proxmox, and is it worth it?

    • @TechnoTim
      @TechnoTim  3 years ago

      You can always go bare metal. I just use a hypervisor to virtualize all the things! That way I can share resources.

  • @luca-leonhausdorfer8814
    @luca-leonhausdorfer8814 3 years ago

    Hi Tim,
    what are you hosting in your k3s Kubernetes cluster?
    Can you elaborate on the load balancers, both internal (Traefik, etc.) and external (nginx)? How do I configure the kubectl load balancers along with the Rancher HA LB? And why aren't the svclb pods in the k3s cluster rolled out properly when I install an app via the Apps tab in the menu?

  • @itsathejoey
    @itsathejoey 2 years ago

    When you mentioned setting up a TCP load balancer with nginx, how would that work for DNS, say with Pi-hole for example, where you use UDP as well?

  • @fgamberini2
    @fgamberini2 4 years ago

    Thanks for the very clear video. One thing I did not get, though, is how to set up the networking to access the service (the nginx pods, i.e. the "hello nginx" page) from a client, let's say from your "personal host" machine. You are indeed showing that the nginx server is running, but I'm not sure how it can be accessed from the outside network (maybe via the LB on the right in the initial diagram, out to the external network?)... Is there some doc you can point me to?

    • @TechnoTim
      @TechnoTim  4 years ago

      Yes, exactly. You'll need to set up an ingress controller and MetalLB if you are going to expose these services outside of k3s.
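
      A minimal sketch of exposing the test deployment through the Traefik ingress bundled with k3s (assumes a Service named nginx on port 80 already exists; the hostname is a placeholder that should resolve to a node or the LB):

      ```bash
      kubectl apply -f - <<'EOF'
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: nginx-hello
      spec:
        rules:
          - host: hello.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: nginx
                      port:
                        number: 80
      EOF
      ```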

  • @zacontraption
    @zacontraption 4 years ago +1

    Excellent. Thanks for this content

  • @AxelWerner
    @AxelWerner 3 years ago

    NICE kickstart presentation!! THANKS! Especially for using 100% of your available screen area and a proper font size!! However, there are some important points I want to put my finger on: where exactly is the valuable DATA of the apps I deployed stored? Are my files "highly available" too? And what about your "single point of failure" services, like your separate MySQL DB and nginx load balancer? Shouldn't it be possible to add or somehow "migrate" these services onto your k3s HA cluster?

  • @delduked
    @delduked 2 years ago

    OMG THIS WAS SO COOL!! THANKS SO MUCH!!

  • @QuantumDrift-u5k
    @QuantumDrift-u5k 3 years ago

    Great tutorial as always!
    Question: is it possible to specify multiple entries for the --tls-san option, i.e. an IP address and a domain name? If so, how would that be done?
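
    Yes, the flag can be repeated; each value gets added to the API server certificate's SANs. A sketch (the IP and hostname are placeholders):

    ```bash
    curl -sfL https://get.k3s.io | sh -s - server \
      --tls-san 192.168.1.100 \
      --tls-san k3s.example.com
    ```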

  • @Acentia
    @Acentia 3 years ago

    I think you forgot, or at least didn't show, the last load balancer that would access the 20 pods.
    Would you just create it like the first one, just pointing at the agents/workers, and then set up an ingress?

  • @shellcatt
    @shellcatt 3 months ago

    I tried to find material on performing rolling updates with k3s, but nothing pops up. Some say that k3s can't do rolling updates. I'm confused: high availability, but no rolling updates...

  • @StephanKnabel
    @StephanKnabel 2 years ago

    Hi,
    I am trying to build this.
    I am at minute 13:26, connecting with kubectl through the LB.
    Error:
    Unable to connect to the server: x509: certificate is valid for IP1, localhost, IP3, not the LB IP.
    Changing the config to connect directly through IP1 works fine.
    Can anyone help me?

    • @TechnoTim
      @TechnoTim  2 years ago

      I just released a video on how to automate this fully with load balancers and etcd. Might be worth checking out!

  • @itskagiso
    @itskagiso 2 years ago +1

    Is there a way to run the external DB in HA as well? So a replicated MySQL DB in case one of them fails?

    • @eduardmart1237
      @eduardmart1237 1 year ago

      Yep. It would be great if there were a thorough guide on how to do it.

  • @SG-tq9tk
    @SG-tq9tk 4 years ago

    Great video!! Do you not need to configure MetalLB for the HA cluster? Is that what the external Nginx provides instead? Can you use MetalLB instead of Nginx?

    • @TechnoTim
      @TechnoTim  4 years ago

      You do need to configure MetalLB if you want to have an external load balancer.

  • @AlexanderDockham
    @AlexanderDockham 4 years ago

    The main confusion in this guide is when you are suddenly able to hit the k3s dashboard from localhost...
    After a couple of hours, I finally figured out how to install/configure kubectl on Windows 10. It wasn't exactly straightforward, but it could do with a mention next time.
    (Total time for me to complete everything you did in the video was 10 hours.)

    • @TechnoTim
      @TechnoTim  4 years ago +1

      Sorry, yeah, if you are going to run kubectl from Windows I highly recommend WSL. Then everything should just work. The proxy command should work too, however I'm not sure with WSL2 and its odd networking.

    • @AlexanderDockham
      @AlexanderDockham 4 years ago

      @@TechnoTim I missed what it was when you said it, but I ended up getting there eventually.
      Seriously, thank you so much for these videos; they're amazing at getting through the majority of what needs to be done for these kinds of setups. Plus your documentation is a fantastic bonus!
      (Pretty sure I'm like... at least 50 of the views on this video, given the number of times I've run through some of the steps after breaking everything over and over.)

  • @anthonymacle1880
    @anthonymacle1880 2 years ago

    Very clear explanations, your channel is a gold mine!! I was curious though...
    Is it possible to add a server (let's name it Alpha 1) which already has Docker applications running on it to a k8s cluster, so that these existing applications could be automatically recreated on any of the cluster's worker nodes should a problem occur? I understand that k8s solves this issue, but my question emphasizes the fact that the applications on the server "Alpha 1" were deployed before the cluster was created... In a nutshell, I would like to know if it is possible to bring a standalone server into a cluster while making sure its already existing applications can be handled within the freshly created k8s cluster. I hope my question is clear.

  • @boriss282
    @boriss282 1 year ago

    @TechnoTim Thanks for the great video. Is the nginx load balancer only in NGINX Plus, and not in the free version of nginx?

    • @TechnoTim
      @TechnoTim  1 year ago +1

      I think so, but it's included in the Docker image last I checked!

  • @leonpinto5693
    @leonpinto5693 3 years ago

    Hello... great tutorial... I'm new to k3s and feeling my way around... could you kindly point to the kubectl installation link for the dev machine? You did mention you had done this in an earlier video... I'm trying to look it up but not finding it... probably me not looking correctly.

  • @dtippit324
    @dtippit324 1 year ago

    Tim,
    I was watching your k3s spin-up video, but I did not see the VM config for the 6 VMs.
    Currently, I have 6 Ubuntu VMs with 1 socket and 1 CPU each and a 60 GB drive, but there was no VM spec defined in the video.
    Am I on the right track?

  • @vinc1793
    @vinc1793 4 years ago

    Awesome vids! Thanks!
    It's funny, because I've literally been doing this at work for a couple of months.
    I've been running Rancher 1.6 for a couple of years now:
    1 virtualized/backed-up single node at work for dev purposes,
    and an HA install on the production webservers.
    We've wanted to move up to Rancher 2.0 at work for a while now, but...
    I only have two physical servers for webserver production, and I hit a mental roadblock with k8s's three-node and quorum requirements.
    The first plan was two fixed etcd/control-plane VMs on the hosts and a third etcd VM with vSphere HA balancing... weird config.
    So here is my point: what do you think about k3s for production HA environments? It fits my physical infrastructure better, but it also seems like a young project.
    My first tests are mixed: I broke my k3s server with a local Helm install (I know, not recommended, but I like to try and break things so I know how to get them stable afterwards XD).
    Again, thanks a lot for all the work you share, very useful and interesting!

  • @geoffhart
    @geoffhart 4 years ago +1

    I'm completely new to Kubernetes, so maybe my question is stupid. But the title of this video starts with "HIGH AVAILABILITY" (HA), which I'm interested in. But when I watch the video, I don't see what I think of as HA, I just see a way to scale. In other words, I see many single points of failure (the database, the load balancer, and in this case a single, non-HA Proxmox server ;). So, in the Kubernetes context, does HA simply mean "scalable"?

    • @TechnoTim
      @TechnoTim  4 years ago +1

      No, Kubernetes is actually highly available. You can take down any of the k3s servers and the services continue to run, so this is an HA config. If you want an HA database, make the DB HA, and the same with your load balancers. You can quickly see the scope creep here in providing an HA solution. Kubernetes is just one piece of the solution. Also, my k3s nodes are spread across 2 Proxmox servers in a cluster. Again, this is for HA k3s; HA'ing all the other things is up to you. Hopefully this clears it up.

    • @geoffhart
      @geoffhart 4 years ago +1

      @@TechnoTim Thanks for answering, but that isn't what I mean by HA. For example, if my "service" was home-assistant running in a container, it's not as simple as running multiple copies of it. Think about what Proxmox means by HA: it keeps multiple copies of a VM across nodes, synchronizing using corosync (and possibly shared storage hardware). If I had a home-assistant HA VM running, and the Proxmox server it was running on failed, that VM would automagically migrate to a different node in the cluster. You'd also need 3 Proxmox servers for proper quorum, which is another complication. That's my understanding of HA: automatic fail-over if something dies. What you are describing is simply the ability to run more than one copy of a k3s server, with a single load balancer to distribute requests. But if one of the servers dies mid-process, there is no HA capability. Depending on how the service was designed, the entire request may need to be resubmitted to a different server and restarted from the beginning; in other words, the service itself needs to implement any HA features it wants to have, Kubernetes isn't providing any.

  • @darkgodmaster
    @darkgodmaster 3 years ago

    Having quite a few issues configuring the nginx load balancer; wouldn't mind a video on that.
    I'll probably sort it out before that comes out, but it would be nice to have.

    • @TechnoTim
      @TechnoTim  3 years ago

      Thanks! I have an example in the docs!

    • @MrVejovis
      @MrVejovis 3 years ago

      @@TechnoTim Do you have a link to a docker-compose.yml for the nginx setup you have here?

  • @wstrater
    @wstrater 4 years ago

    I am adding my vote for more on external load balancing. It sounds like you are running 3 different load balancers, for the API, NodePorts, and MySQL. I suspect the API and MySQL ones would be similar, but you may want a layer-7 load balancer for the NodePorts. Are you running one load balancer or multiple?

    • @TechnoTim
      @TechnoTim  4 years ago +1

      I’ll be running 2, like in the diagram.

  • @oldmanscreaming
    @oldmanscreaming 2 years ago

    Why did you use an even number of worker nodes? I'm learning, and I read online that we are supposed to use an odd number of worker nodes to help with availability.

  • @CrankyCoder
    @CrankyCoder 3 years ago

    I have my latest cluster built on k3s with 3 masters (embedded etcd cluster) and 7 agents, all running on Pi 4s.
    I have a question: you mentioned you have a load balancer in front of your MySQL. Are you running clustered MariaDB or something behind it? Any chance you have a video of that setup?! :)

  • @reasonmath
    @reasonmath 1 year ago

    So what is the difference between setting up the cluster this way vs. the Rancher cluster in the portal?

  • @milakhan8734
    @milakhan8734 3 years ago

    Great video! I was trying to follow along, however my agents are not joining the cluster. It returns "Failed to connect to proxy". Do I have to set up the k3s external IP? If so, do I do it on one of my servers and put in the LB's IP? Thank you.

  • @ralmslb
    @ralmslb 11 months ago

    One thing that I feel could have had a better explanation and more consideration is the database.
    The video recommends having 2x k3s server machines that use MySQL as their datastore.
    However, if we end up using a single MySQL database, doesn't that create a single point of failure?

    • @TechnoTim
      @TechnoTim  11 months ago

      Yes, you will need to make your SQL DB HA by running replicas.

    • @ralmslb
      @ralmslb 11 months ago +1

      @@TechnoTim Would love to have a video on that; it's something I was wondering about, the most efficient way to do it while having 2 to 3 server nodes lol

  • @theobserver_
    @theobserver_ 9 months ago

    @TechnoTim This is an old video, but I was wondering: what are you running Postgres on? Is it a container or a VM, and in either case how do you ensure that it isn't your SPOF for k3s?

    • @TechnoTim
      @TechnoTim  9 months ago +1

      Now I run it in Kubernetes, but I am running the etcd version of k3s (linked in the description). Otherwise you'll have to build a MySQL cluster for HA.

  • @reesejenner3594
    @reesejenner3594 3 years ago

    What do you do when you have multiple nodes running the same application code with user-generated content?
    Do you need one application DB that all the nodes will use for your application? (Not referring to k3s itself.)
    How do you handle uploads? Shared storage that the application uses (across all nodes)?

  • @BrunoLessa
    @BrunoLessa 1 year ago

    Very clear to me. The only part I didn't get was how you defined your load balancer IP. Newer versions of k3s come with Traefik, and it assumes the IP of the server where you are creating the cluster as the load balancer IP. How do I assign a separate, exclusive IP to the default Traefik load balancer?

  • @whatthefunction9140
    @whatthefunction9140 1 year ago

    You set up 2 master servers. You said the token is the same on each. Mine is different on each. How do both servers know about each other?

  • @benjamincabalonajr6417
    @benjamincabalonajr6417 2 years ago

    What's the difference between this setup and using Rancher and then adding nodes as workers to that?

  • @damu6678
    @damu6678 4 years ago +1

    For people who don't have a cloud account where they can spin up instances, can you show how to do this with Docker containers?

    • @TechnoTim
      @TechnoTim  4 years ago +1

      I think you are looking for a single-node Rancher install then: ua-cam.com/video/oILc0ywDVTk/v-deo.html

    • @damu6678
      @damu6678 4 years ago

      @@TechnoTim No, I was able to do pretty much your entire tutorial using k3d to create multiple server and worker nodes.
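
      For reference, a sketch of that all-in-Docker approach with k3d (the cluster name is arbitrary):

      ```bash
      # k3d runs each k3s node as a Docker container
      k3d cluster create ha-demo --servers 3 --agents 2
      kubectl get nodes
      ```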

  • @shawnhank
    @shawnhank 3 years ago +1

    Is your MySQL server on a separate virtualized Ubuntu instance?

    • @TechnoTim
      @TechnoTim  3 years ago +1

      It is, and it runs in a Docker container. I couldn't put the SQL server inside the cluster since the cluster requires it, so I created a VM and then spun up MySQL using Docker (I never install anything that has a container unless I have to).

    • @shawnhank
      @shawnhank 3 years ago

      @@TechnoTim Cool. I thought that was the case. I'm not sure whether to use k3s or Rancher and figured I'd try both. :-)
      Now I know that I'll need a total of 7 Ubuntu Server VMs to replicate what you've covered.
      Thanks!

    • @TechnoTim
      @TechnoTim  3 years ago

      NP! Yeah, haha, it does add a lot of VMs.

  • @agirmani
    @agirmani 2 years ago

    What exactly is the purpose of this "external datastore"? What is being stored in it?
    If I have an application running on my cluster, is there anything preventing it from talking to an external database that wasn't set with the --datastore-endpoint option?

  • @HellStorm666NL
    @HellStorm666NL 4 years ago

    Hey Tim, thank you for this video.
    Can you please explain how to upgrade Traefik to the latest version? k3s uses 1.81 by default and I want to use Traefik v2.3.
    Can I just edit the Traefik YAML, or can I make a new deployment?
    Also, how do I browse to the nginx test deployment? At this time only a curl from localhost works, not browsing to the load balancer IP.

  • @CesardaSilva69
    @CesardaSilva69 3 years ago +1

    Hi! Thank you for a great video. I am trying to set up a home lab environment like the one you described, using a two-node Proxmox environment. I use an LXD container running CentOS 8 for the load balancer. I have installed nginx on it and used your configuration, but I get an error I hope you can help me with: "unknown directive "stream" in /etc/nginx/nginx.conf". Did you install any additional modules?

    • @TechnoTim
      @TechnoTim  3 years ago

      I am using Docker, so the stream module comes with it. You need to enable it in the config. I have that in the documentation for this video; see my nginx config.

    • @mmockus
      @mmockus 3 years ago

      @@TechnoTim I have watched this for the 3rd time and I am still not seeing how to avoid the "stream" directive issue. My current install is nginx on a VM. I have the same nginx.conf, but stream is not supported. Is there a timestamp?

    • @mmockus
      @mmockus 3 years ago

      Did you ever get this to work, @Cesar da Silva?

    • @mmockus
      @mmockus 3 years ago +2

      Now I feel a bit dumb... looking 1 more time... it is there... Just commented out.

    • @-Giuseppe
      @-Giuseppe 2 years ago

      @@mmockus Hi there, did it work for you? I'm also trying to configure nginx, but it didn't work for me... this is sooo annoying. I tried editing nginx.conf directly in /etc and also making a new file in /etc/nginx/config.d/, but no success.
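
      For anyone still stuck on the "unknown directive 'stream'" error: on distro packages the stream module usually ships separately and has to be loaded at the top of nginx.conf (the official nginx Docker image has it built in). A minimal sketch; the module path, IPs, and ports are placeholders for your environment, not the exact config from the video:

      ```bash
      cat <<'EOF' > /etc/nginx/nginx.conf
      # load the stream module if it isn't compiled in (path varies by distro/package)
      load_module /usr/lib/nginx/modules/ngx_stream_module.so;

      events {}

      stream {
          upstream k3s_servers {
              server 192.168.1.101:6443;
              server 192.168.1.102:6443;
          }
          server {
              listen 6443;
              proxy_pass k3s_servers;
          }
      }
      EOF
      ```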