First of all, endless thanks for your priceless efforts in producing such precious content! Hats off!
Keep going; it is and will be highly appreciated by the community, no doubt about that.
For information only: the All-in-One installation of Kubernetes and KubeSphere works on Ubuntu 22.10 (generic/ubuntu2210 box, libvirt, v4.2.6) up to K8s release 1.24.9.
That is, so far, the latest Kubernetes version supported by KubeSphere 3.3.1, itself the latest KubeSphere release at the time of this post.
BTW, when KubeKey installs only a K8s cluster (without KubeSphere, i.e. kk create cluster [--with-kubernetes version]), the current latest release 1.26.0 (again, at this point in time) deploys smoothly.
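For reference, the corresponding KubeKey commands (a hedged sketch; it assumes the kk binary is already downloaded, and the version numbers are as of this comment):

```sh
# All-in-one Kubernetes + KubeSphere (latest combination that worked for me):
./kk create cluster --with-kubernetes v1.24.9 --with-kubesphere v3.3.1

# Kubernetes only, no KubeSphere:
./kk create cluster --with-kubernetes v1.26.0
```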
Great video. Please keep going with Kubesphere. Thanks.
Sure. I have planned to cover a few more KubeSphere topics in this series. Thanks for your interest.
Hey, did you set up Argo CD on a GKE cluster with ingress and Let's Encrypt?
I am afraid not.
@@justmeandopensource thanks no problem, I did it.
@@justmeandopensource hey, did you create a certificate with the cert-manager v1beta1 CRD and then upgrade the cert-manager CRD to v1? Does the CRD upgrade affect existing certificates that were created with the deprecated CRD API?
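In case it's useful to anyone checking the same thing, a hedged sketch using plain kubectl (nothing cert-manager-specific) for inspecting which CRD versions are served and whether old objects are still readable through v1:

```sh
# List the versions the Certificate CRD serves and which one is used for storage:
kubectl get crd certificates.cert-manager.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{" storage="}{.storage}{"\n"}{end}'

# Assuming cert-manager's conversion webhook is in place, objects created via
# v1beta1 should still be retrievable through the v1 endpoint:
kubectl get certificates.v1.cert-manager.io -A
```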
So how do you create more master and worker nodes?
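A hedged sketch of how that can be done with KubeKey (flag and field names follow KubeKey's documented workflow; worth verifying against your kk version):

```sh
# 1. Generate a sample cluster definition:
./kk create config --with-kubernetes v1.24.9 --with-kubesphere v3.3.1 -f config-sample.yaml

# 2. In config-sample.yaml, list every node under spec.hosts and assign roles
#    under spec.roleGroups, e.g.:
#      roleGroups:
#        etcd: [master1]
#        control-plane: [master1]   # "master" in older KubeKey versions
#        worker: [worker1, worker2]

# 3. Create the cluster from that file:
./kk create cluster -f config-sample.yaml

# To grow an existing cluster, add the new hosts to the same file and run:
./kk add nodes -f config-sample.yaml
```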
Excellent video. Can I install just the KubeSphere control plane into an existing k0s cluster?
Hi, thanks for watching. You can install KubeSphere on any existing cluster. There are a few requirements: you need sufficient resources on your cluster's worker nodes to run all the KubeSphere-related pods, and persistent volume provisioning set up in the cluster.
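If it helps, a minimal sketch of the checks and install steps, assuming the ks-installer manifests for v3.3.1 (URLs as documented by KubeSphere; verify before applying):

```sh
# KubeSphere needs a default StorageClass for PV provisioning:
kubectl get storageclass

# Install KubeSphere on the existing cluster:
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml

# Follow the installer logs until it prints the console address:
kubectl logs -n kubesphere-system deploy/ks-installer -f
```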
@@justmeandopensource thank you
@@RamakrishnnanEP you're welcome
Excellent video, been following for a long time. How do I change the top-level domain from .local to .dev or something?
Hi, thanks for watching. Where do you see the .local domain? I just accessed the web UI using the IP address.
I have tested it with double the resources (4 vCPU and 8 GB RAM) and the prometheus-k8s-0 pod is still in the Pending state. Describing the pod, it seems there's a problem with volume binding:
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  23m  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
I'm testing with nested VMs: I have Ubuntu Desktop as a VM and, inside it, I deploy the Vagrant infrastructure, which allocates 128 GB to the Vagrant VM's disk, and there's no such physical space on the host. Could that be it?
EDIT:
I was able to make it work, with all pods running, by increasing the disk space on the host.
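For anyone hitting the same pending prometheus-k8s-0 pod, a few hedged diagnostics (the namespace assumes a default KubeSphere monitoring install):

```sh
# See why the pod and its volume claim are stuck:
kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system
kubectl get pvc -n kubesphere-monitoring-system
kubectl describe pvc -n kubesphere-monitoring-system

# Check the provisioner and the actual free space backing the volumes:
kubectl get storageclass
df -h   # run on the node/host that backs the VM disks
```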
@@JorgeRodriguez-ux3yu Cool. You will find that I was able to get it working with 4 CPUs and 8G of RAM. I will release the video this Wednesday. Cheers.
Feels like an OpenShift copy
Not exactly, but many features are common. It looks more like Rancher. Thanks for watching.
Is this like Rancher or something more?
Yeah, similar to Rancher in terms of unified cluster management.
Is kubesphere open source and free?