Audio in this video is bad. Sorry for that. I'll do my best not to repeat the same mistake in future videos.
With 32-bit (float) recording you will be able to fix the audio in post even if you have blown the peaks during recording.
@haimari871 I'll try that next time.
@DevOpsToolkit It requires a recording device (hardware) with 32-bit capability.
@haimari871 I'm a newbie where sound is concerned. I have a Rode NTG4 with an Elgato XLR interface. Not sure whether that's the hardware or something else...
@DevOpsToolkit Sounds to me like this video's audio is distorted due to clipping. You may have set the gain too high on the interface / recording software, maxing out the dynamic range in the loud segments of the audio.
Great video introduction to Velero, btw!
Thanks, Victor, for another Monday video.
I use Velero (not to its full extent) to test K8s upgrades: I create a new cluster on the existing version, restore the application artifacts from the production cluster, and then run the upgrade to make sure things will work when I update the real production cluster.
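For anyone curious, the rough flow with the velero CLI looks something like this. A minimal sketch: the backup name and namespaces are made up, and it assumes both clusters point at the same backup storage location.

```sh
# On the production cluster: back up the application namespaces.
velero backup create pre-upgrade-test --include-namespaces app1,app2

# On the freshly created test cluster (same Kubernetes version as production):
velero restore create --from-backup pre-upgrade-test

# Then upgrade the test cluster and verify the applications before touching production.
```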
I like your intro! 🔥
Why do we need this if we are using GitOps and we can easily rebuild the state?
That will be the subject of the next video 🙂
Doing the same here, but we still have some applications with persistent storage and state on them...
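For the stateful ones, Velero's file-system backup can be opted into per pod with an annotation. A minimal sketch, assuming Velero was installed with the node agent; the namespace, pod, and volume names are hypothetical:

```sh
# Tell Velero to include the "data" volume of this pod in file-system backups.
kubectl -n production annotate pod my-database \
  backup.velero.io/backup-volumes=data
```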
This is what I am wondering too 😊
Trying to think of something to justify backups over GitOps:
- PV-node mapping preservation
- preservation of resources with values assigned only once at creation (e.g. cluster IPs, LB names, etc.)
- post-mortem analysis (e.g. security breach RCA), where you need the state that was actually LIVE, not the desired state
- digging into the past (e.g. OCI images expired and were removed from ECR, and we need THEM for some reason)
Recreating those resources anew from the desired state would introduce randomness we may not actually want (e.g. the actual PV data is on node C, but the newly created PV now resides on node B).
I can think of a few more corner-case-like scenarios. Most likely I've missed something more obvious; I'm still working on my first coffee :)
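If the "state that was actually live" argument matters, the usual way to keep it around is a scheduled backup with a retention period. A minimal sketch; the schedule name, namespace, and retention are made up:

```sh
# Daily backup at 03:00, retained for 30 days (720h), so past live state stays recoverable.
velero schedule create daily-prod \
  --schedule "0 3 * * *" \
  --include-namespaces prod \
  --ttl 720h
```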
Great video. All the paid platforms I have used in the past use Velero in the background 😊
I think that Veeam is an exception to that.
K10 is a great tool, free for up to 5 nodes, and it has direct support for DB backups (not just the PV). More than 5 nodes? You need to sell a kidney to afford it. CloudCasa is a good mix of Kasten's UI with Velero in the back.
Hi Vik, great video. What do you think of MicroK8s instead of full Kubernetes for dev and learning?
I tend to use KinD all the time. MicroK8s should be similar and I don't think there's much of a difference (from a user's perspective). If you prefer using Ubuntu when developing (I'm on a Mac), MicroK8s is probably a great choice. I'm not using it myself, so I can't say much more.
@ thanks
@DevOpsToolkit Can it restore Crossplane composite resources too? I wanted to migrate my Crossplane setup (XRDs and the deployed resources) from one cluster to another and it failed. Is there a best way you can suggest?
Yes it can. That's what I used as one of the examples in the last video.
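If it helps, here is a rough sketch of how that could be scoped with the velero CLI. The backup name is made up, the exact resource list depends on the providers and claims in use, and it assumes Crossplane and its providers are already installed in the target cluster:

```sh
# On the source cluster: back up Crossplane's cluster-scoped definitions and compositions.
velero backup create crossplane-migration \
  --include-cluster-resources=true \
  --include-resources compositeresourcedefinitions.apiextensions.crossplane.io,compositions.apiextensions.crossplane.io

# On the target cluster (pointing at the same backup storage location):
velero restore create --from-backup crossplane-migration
```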
I would like to see a recovery process using Velero in a different cloud environment (from AWS to GCP, for example).
Adding it to my todo list...
Two years ago this was awful to use. We decided against it and instead set up k8up backups for databases and stateful apps, provisioning the cluster with Terraform and Argo CD. So there was minimal stuff we actually needed to back up, and in case of an error we would have gotten the cluster back online in about 15 minutes.
"the channel where we create backups that are never restored" 😅
Nor verified 😂