Missed opportunity to call this video "installing Postgres the hard way" as a throwback to Kubernetes install tutorials
That was 100% going to be the title but I didn't want to scare anyone!
OK, I decided to rename it. Thanks for the nudge!
@@TechnoTim I prefer the hard way, as it teaches a lot. For a tutorial it's perfect.
@@TechnoTim I found the video because I was searching for a production-ready environment. YouTube didn't notify me about the new video release, as always.
@@TechnoTim What's the easy way to do this??
Awesome video, I'll definitely be checking this out in my lab.
I'd give a few props to the Zalando team, who created Patroni way back now. As you cited, there are many ways to do the cert thing, but to slicken up the process, maybe bake into your docs a note or two about creating a StepCA instance (and enabling ACME on it), and then just use 'certbot' on the HAProxy, etcd, and Postgres server nodes to pull/refresh the needed certs automatically. The StepCA client auto-adds the root CA of the StepCA server you're talking to into the local trusted cert store, and that prevents all the squawking about untrusted CA this/that/the-other. Great vid.
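For anyone who wants to try that route, a minimal sketch of the flow (the hostname, port, and fingerprint placeholder are examples, not anything from the video):

# on the CA host: initialize step-ca and enable its ACME provisioner
step ca init --name "Homelab CA" --dns ca.internal --address :9000 --provisioner admin
step ca provisioner add acme --type ACME

# on each HAProxy / etcd / Postgres node: trust the root CA, then pull a cert
step ca bootstrap --ca-url https://ca.internal:9000 --fingerprint <root-fingerprint> --install
certbot certonly --standalone --server https://ca.internal:9000/acme/acme/directory -d pg-01.internal

certbot's built-in renewal timer then handles the refresh part automatically.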
That's a much better solution than manually creating and managing certificates, but why run certbot on every node? Store your certificates in HashiCorp Vault and use consul-template to deploy them: simpler, fewer moving parts, and a more elegant solution.
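A rough sketch of that pattern, assuming a Vault PKI secrets engine mounted at pki/ with a role named postgres (all names and paths here are illustrative):

# template: consul-template renders a fresh cert + key from Vault's PKI engine
cat <<'EOF' > /etc/consul-template/pg-cert.tpl
{{ with secret "pki/issue/postgres" "common_name=pg-01.internal" "ttl=72h" }}
{{ .Data.certificate }}
{{ .Data.private_key }}
{{ end }}
EOF

# render to disk and reload Postgres whenever the cert rotates
consul-template -template "/etc/consul-template/pg-cert.tpl:/var/lib/postgresql/server.pem:sudo systemctl reload postgresql"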
@@menruletheworld - lots of good choices, just as you say. Vault or OpenBao and their CA backends are all viable. I'll +1 Tim's work talking through OpenSSL's .cnf config options, however, because - though a little lumpy - it's always useful knowledge.
Finally someone who doesn't do only hello-world tutorials. If I had had this content when I was starting, I would have skipped many headaches. Please provide more content like this 🙏
I went backwards. Started with Zalando in Kube: way too old, and good for k8s only. Then gave Crunchy a try; not good for production. Then thought about straight Patroni and found hundreds of people doing it their own way. I will give Tim's a try... it seems very complete and straightforward. Thanks, buddy.
Amazing and very timely for me! Thank you! Heads up: The Video notes URL in the description has a typo and returns 404 currently. Had to go directly to your root address and get to it from the homepage.
Thank you! I changed the URL and you beat me to it. Just refresh and the link should be updated!
What a great video! And what great timing. Because, guess what I was going to be tasked with at work. Thank you!!!!!
I would like to give you a huge thanks, for just how much hard work you have put into all of this, and the coding done on the website.
Damn man, that's amazing and extremely useful. I love how you always explain the important things in detail and choose the most important topics. Looking forward to the automation tutorial and the Kubernetes one ❤
Fantastic video! I love to see how it runs under the hood, not just an automated process.
I am super excited you are sharing this with us.
I am also super excited for your next hardware update, now that you have been living with your co-lo for a while.
Great video! I hope you'll have the time one day to make a video demonstrating how to include Traefik Proxy in our Docker Compose file and use it in our architecture with PostgreSQL and pgAdmin service containers. I really learned so much from your course.
This is an awesome rundown on how to install it "the hard way". This is the way. 🚀
I did this exact setup a while back. Was super tough. Good luck to everyone, probably going to be easier for you all with this video :)
Awesome!
Great video! Some stuff to add for keepalived: it usually uses multicast, and the traffic is sent from the interface named in the config. With VRRPv3 you can switch to unicast, and then you need to define a unicast source and the destinations. Also, in the VRRP virtual-IP block you can add more than one IP and, after each IP, the interface that IP gets added to. The unicast checks go out via the interface defined at the top. This is very important for overlay networks, since multicast can generate a lot of traffic there.
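In keepalived.conf terms, that advice looks roughly like this (interface names and addresses are examples):

global_defs {
    vrrp_version 3
}
vrrp_instance VI_1 {
    interface eth0               # adverts and unicast checks use this interface
    state MASTER
    virtual_router_id 51
    priority 200
    unicast_src_ip 10.0.0.21     # this node's own address
    unicast_peer {
        10.0.0.22
        10.0.0.23
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0   # more than one VIP is allowed,
        10.0.1.100/24 dev eth1   # each followed by the interface it is added to
    }
}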
Thank you, just at the right time, as my company is about to transition from a Patroni-managed PG cluster in k8s towards managed Postgres hosting - I need to set up a test system in-house for our development team ❣
Can't wait for the automated version of this with IaC...
this is awesome content!!!
Thank you for the video. It's amazing!
Crunchy Data has a pretty good Kubernetes Operator that makes deployment easy
Hey Tim. Followed along and got it working. Any chance you could make a follow-up tutorial on how to use this with my existing Docker containers? I can't seem to figure out how to get the SSL certs to work on the containers. I know that the server.crt and key have to be provided to the containers, but I can't seem to figure out how to get them trusted.
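Not a full answer, but for the trust part: if the app in the container uses libpq, one approach is to mount the CA certificate in and point the standard libpq environment variables at it (the paths and image name below are made up for illustration):

# mount the CA that signed server.crt and tell libpq to verify against it
docker run -d \
  -v /etc/ssl/certs/my-root-ca.crt:/certs/root-ca.crt:ro \
  -e PGSSLMODE=verify-full \
  -e PGSSLROOTCERT=/certs/root-ca.crt \
  my-app:latest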
Great video! I would love to see similar content for MariaDB servers.
Learned a lot about HA in general with this tutorial. Please please _please_ do one for ElasticSearch, because for the life of me I cannot figure out how to self-host an ES cluster for longer than 2-3 days before it grinds to a screeching halt, despite it claiming everything is healthy.
Absolutely amazing content! This is exactly what I was planning on doing at my new job, and I think you just saved me about a million headaches! I am curious though: how would you implement a Flask API in this setup so it is also highly available?
It is better to have separate LBs for the primary and secondary nodes. Usually applications have separate connection pools for the master and the replicas, so read-only transactions go to the replicas while read/write goes to the master. Great guide nevertheless!
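The common way to wire up that split is two HAProxy listeners keyed off Patroni's REST API; /primary and /replica are standard Patroni health endpoints, while the addresses and ports below are just examples:

# writes: only the node answering 200 on Patroni's /primary gets traffic
listen postgres_write
    bind *:5432
    option httpchk GET /primary
    http-check expect status 200
    server pg-01 10.0.0.11:5432 check port 8008
    server pg-02 10.0.0.12:5432 check port 8008
    server pg-03 10.0.0.13:5432 check port 8008

# reads: round-robin across the nodes answering 200 on /replica
listen postgres_read
    bind *:5433
    option httpchk GET /replica
    http-check expect status 200
    balance roundrobin
    server pg-01 10.0.0.11:5432 check port 8008
    server pg-02 10.0.0.12:5432 check port 8008
    server pg-03 10.0.0.13:5432 check port 8008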
You need a coffee after this.
@@MrEtoel thank you!!!
Awesome Tutorial!
1. How does it handle transactions?
2. How do I remove or add a node in the cluster?
3. How does it handle WAL archiving? Is there any way I can restore to a point in time?
Very impressive! I am sure that for this almost-an-hour of content, you had to work many more hours to polish it and give it to us clean and clear. Thanks for that!
Now for a question: if the replicas are able to serve READ queries, how do you configure HAProxy to balance the reads across the whole cluster? And how do you control (in the Postgres replication process) whether the data is eventually consistent (faster writes) or immediately consistent (slower writes, waiting for an acknowledgement from other replicas) across all nodes?
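On the read-balancing half, see the two-listener HAProxy sketch a few comments up (a second listener health-checking Patroni's /replica endpoint). The consistency half is a Patroni cluster setting; a sketch, with option names as in the current Patroni docs:

patronictl -c /etc/patroni/config.yml edit-config

# then, in the DCS config that opens, set:
synchronous_mode: true        # Patroni manages synchronous_standby_names for you
synchronous_node_count: 1     # replicas that must acknowledge each commit

With synchronous_mode off (the default), replication is asynchronous, so replica reads can lag; turning it on trades write latency for immediate consistency on the acknowledged replicas.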
Man, I got all excited when you said stacks, then poof, FU, it's VMs. :) Really nice breakdown. I look forward to watching your brain melt when you do this under Kube.
@@ClayBellBrews already did it and switched back 😅
@@TechnoTim haha; you still need to add PgBouncer and Pgpool as well. I think Percona did a video presentation on the Kube stuff. I've not watched it yet, but it should be current. Great presentation btw, pretty complete but way faster than most people get it done.
My eternal dilemma just leveled up to another level: Docker container, or hosted on the OS as shown by you? I would also like to know how you set up backups for the kind of environment in the video. Thank you very much, Tim!
Fantastic job, Tim. Earned you a sub and 👍
Awesome, thank you!
Really great tutorial. Would really love a follow-up video for Kubernetes.
Great video!
@40:10 - could you put the root-ca cert on the proxy VM to verify the PostgreSQL certificates?
Yes, you absolutely could!
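For example, from the proxy (or any client) VM, assuming the root CA was copied to /etc/ssl/certs/root-ca.crt (an example path):

psql "host=10.0.0.100 dbname=postgres user=app sslmode=verify-full sslrootcert=/etc/ssl/certs/root-ca.crt"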
Awesome! Please make content on MySQL InnoDB cluster too
This was a great video. But it got me thinking about two things. The first: could you have 2 of the Postgres nodes receiving query requests from HAProxy while the 3rd is just a replication node? The first two would be the DR/HA nodes while the 3rd was the "backup". I was thinking about a cloud provider, putting two nodes in one region and the 3rd in another region in case of an outage. Any thoughts? The second thought was about having just two HAProxy machines. Do we need the 3? Maybe just build a check_script and let Keepalived verify it can reach the VIP through one of the HAProxies? Just wondering if you could save a VM on the front end and still have enough availability and reachability. Would love to hear your thoughts, and maybe a follow-up video if my ideas make sense??? Great content - thank you
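On the two-HAProxy question: VRRP is not quorum-based the way etcd is, so a two-node keepalived pair is a common setup, and the health check described is a vrrp_script; roughly like this (the script, weight, and interval are examples):

vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exits 0 while an haproxy process exists
    interval 2
    weight -20                             # drop priority when the check fails
}
vrrp_instance VI_1 {
    # interface / priority / virtual_ipaddress as usual
    track_script {
        chk_haproxy
    }
}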
I hate that this is posted right now, right as I am actively planning an application that will be relying on Postgres...absolutely worst timing!! :D
Would love a video on the zalando k8s operator!
@40:57 For anyone who sees this in the future...
When I applied the configuration and went to view the syslog, the syslog file wasn't found. I then ran 'sudo systemctl status haproxy' and got the following message: "Server postgres_backend/postgresql-02 is DOWN, reason: Layer7 wrong status, code: 503, info: "Service Unavailable"". I got this for both postgresql-02 and postgresql-03.
This is normal. For some reason, the Patroni instance on the non-master nodes will report as Unavailable, even though you can curl the Patroni endpoint and connect to the PostgreSQL port. If you continue following Tim's instructions, keepalived will work just fine and the cluster will be created. If you then shut down one of the DB nodes and run 'sudo systemctl status haproxy', you'll see that the new master node magically becomes available.
I'm running brand new Debian 12 VMs on Proxmox.
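You can see the same behavior directly against Patroni's REST API; presumably the HAProxy check targets an endpoint like /primary, which only the leader answers with 200 (the node address below is an example):

# the primary answers 200 on /primary; replicas answer 503 there but 200 on /replica
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.11:8008/primary
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.11:8008/replica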
Well, I was wondering about "enterprisifying" my Kubernetes cluster with a distributed database. That's... coming at the right time!
ClusterSSH would be a real asset for videos like this lol
Hello guys,
Trying to follow here :)
I always have a hard time understanding things when it comes to certificates... Here, the etcd nodes get certificates signed by a local CA, and this local CA has to be in etcd's trusted certs? And for the Postgres server certificate, we create only one cert, self-signed (without any CA involved)?
Thanks for the free sharing of knowledge, as usual!
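For the CA-signed half of that question, the generic OpenSSL steps look roughly like this (names, addresses, and lifetimes are examples; the process substitution needs bash):

# 1) create a local CA
openssl req -x509 -new -nodes -newkey rsa:4096 -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=homelab-ca"

# 2) per-node key + CSR, then sign it with the CA, attaching a SAN
openssl req -new -nodes -newkey rsa:2048 -keyout node.key -out node.csr -subj "/CN=pg-01"
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node.crt \
  -days 825 -extfile <(printf "subjectAltName=DNS:pg-01,IP:10.0.0.11")

Anything that should trust those node certs (etcd peers, Patroni, clients) then points its CA setting at ca.crt. A bare self-signed server cert skips the CA entirely, but it leaves clients nothing to verify against, which is why it only works with verification turned off.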
What would be the added value of an HA Postgres cluster compared to an HA VM on which Postgres is installed?
Have you used the open-source repmgr before? It removes the requirement for HAProxy and Keepalived.
30:21 Why does Patroni need to have etcd's private key?
Great video! A lot of moving parts (so a bit intimidating) :o If I already have HAProxy running on a pfSense box, it can double as the LB. Can you use the health checks there to achieve the same/similar thing (using the Patroni API to find the master)? Pros/cons?
Nice video, was waiting for this for a long time. Question about the keepalived config: is the authentication section necessary? Mine logs "VRRP version 3 does not support authentication. Ignoring." Oh, and I did mine with all IPv6; it was a bit of a challenge, but it works.
This video makes me feel even better about using ClickHouse instead of Postgres or CockroachDB
I've probably spent 10+ hours on this project. I am stuck on the etcd cluster; it just won't work. Restarted from scratch a couple of times: certs are in place, data is in them, but this damn cluster just refuses to come up. I can manually connect the servers with other commands, using variables passed to the etcdctl command, and see healthy member lists, but they just refuse to actually connect using the env file, despite all the key/values having been combed over and correct from the start. Also, pg1 seems to just not want to listen on the peer port for some reason; troubleshooting that just led me down a dead-end rabbit hole. Maybe I'll give this another shot in a month or 2. Great video though. Love your content.
Edit: The issue was the documentation here. I pulled up the video and slowly followed along while you skipped over the commands that you put in the documentation. You then ran the commands following the init of the etcd cluster, showing that the cluster was online. When I skipped those commands, I got the same results, showing that my cluster was also online. I love the work you do here. It's great, but things like that just send people down rabbit holes wasting their time. I'd consider removing the 2 lines from your documentation telling people to run
etcdctl endpoint health
etcdctl member list
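For anyone hitting the same wall: a frequent gotcha is that etcdctl only talks TLS when given the cert flags (or the matching ETCDCTL_* environment variables) explicitly; a sketch with example paths and addresses:

export ETCDCTL_API=3
etcdctl \
  --endpoints=https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
  endpoint health

# the same flags apply to 'etcdctl member list'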
@5:50 - can all of these be set up on only 3 VMs?
Can't the proxy & keepalived be on the same SQL VM?
@16:00 - why did you not use the node FQDN in the cert?
Great question! In general you probably should. I just didn't set up DNS for this cluster, which, looking back, I should have; it would also better illustrate why having a SAN on the cert helps!
Do I get the correct impression that you can basically use this HA cluster config for way more stuff than "just" PostgreSQL? Would be interesting to automate that for other stuff too. 🤔
that was awesome!
Hey Tim, while watching your video a light bulb went on and I wondered the following:
since you're configuring etcd in HA so the hosts can talk to each other, how can this be used to configure a K3s cluster without using Ansible?
Any way we can get an updated video on K3s, since it's been a few years?
@@abetechtips hi! I am not aware of a way to connect k3s to an external etcd; however, it does support an external MySQL. I have videos on both, without Ansible!
@ right, like you mentioned in your ansible video, who wants another VM to maintain 🥲
A Kubernetes "the hard way" would be nice
Etcd needs HA for its own three nodes to have quorum. So running etcd on the same nodes as postgresql would not be advisable.
This! Also, having more nodes is good for 'production' sites, but you'll always need an odd number of nodes. I've seen some sites do 5 or 7 nodes, just because of the spread across availability zones.
For sure, I talked about quorum quite a bit, and there is a minimum of 3 nodes. When you add more nodes, you also install and configure etcd, which means etcd will scale with Postgres. Otherwise, you can host etcd yourself. It's all explained in here.
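For reference, the arithmetic behind the odd-number advice: quorum is a majority, quorum(n) = floor(n/2) + 1, so 3 nodes tolerate 1 failure, 5 tolerate 2, and 7 tolerate 3, while 4 nodes still tolerate only 1; an even member adds cost without adding fault tolerance.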
I keep getting a TLS error when creating the etcd cluster; the error is "rejected connection on client endpoint","remote-addr
Great video! Can you do it for MongoDB?
Postgres 17 supports node-based replication. A multi-master approach would be more robust and performant.
Thank you! I'll check it out in the future! I am a fan of this because complex reads can be offloaded to replicas. I think both have their pros and cons but will look into this config too!
Can you give more concrete information about how this works (and pointers to documentation)?
The way I understand it: the existing logical replication setup can now be extended natively (without add-ons) to replicate bi-directionally, resulting in 2 master nodes, so the "restricted" master -> slave relationship is lifted?
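If this refers to the native bi-directional logical replication building blocks (the origin subscription option, added in PostgreSQL 16), a minimal two-node sketch looks like this (the table, names, and hosts are examples; see the CREATE SUBSCRIPTION docs):

-- on both node A and node B: publish the table
CREATE PUBLICATION app_pub FOR TABLE accounts;

-- on node A (mirror this on B, pointing back at A);
-- origin = none skips rows that themselves arrived via replication, preventing loops
CREATE SUBSCRIPTION sub_from_b
  CONNECTION 'host=node-b dbname=app'
  PUBLICATION app_pub
  WITH (origin = none, copy_data = off);

Note this gives two writable nodes but no conflict resolution, so it is a building block rather than a drop-in multi-master.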
Do you know how to do this using Microsoft SQL Server for Linux/Docker?
Wouldn't it be possible, and easier, to do it with Kubernetes instead of hosting 6 machines?
Awesome video Tim! I might have missed it in this video or a previous one, but what are the specs of those Proxmox nodes you used?
Hey! They are Intel NUCs 11th gen i7 with 64GB of RAM.
@TechnoTim great, found the details of those on your recommended hardware page, thanks!
Thanks for the useful tutorial! btw, I just noticed a typo "HA Porxy" in the description
Thank you! Good catch, I just fixed that!
I wanted to set up a "simple" Postgres replication for quite some time... I cannot express how overkill your solution feels :D
This reminds me of MySQL Cluster/NDB.
Ah yes "the hard way" of doing things is still the most fun sometimes. Getting your hands dirty with config files while any change gets applied immediately to production is wild xD but I have to do it in some old legacy systems we have at work so it's always a game of "did I ruin everything just now or did I fix it?" :D
Next video, a follow up on how to do it on kubernetes. Two steps, install Operator, let operator deploy HA setup 🤣
Sure. Right after you deploy a k8s cluster using one Azure node, one AWS node, and one GCP node.
Wait, in which machine should I generate the certificates?
THIS MACHINE. 😂. I had to make it very clear.
Was wondering, can you do it with MariaDB? For a WordPress site?
Galera is what you are looking for with MariaDB
That is why I prefer the Pacemaker way; I don't need HAProxy for managing the VIP address. Of course, I think Patroni is still a great tool, but Pacemaker still wins for HA in PostgreSQL.
LOL @ how the title and dialog both have "Postgres SQL" (pronounced Post Gres See Quell), when it actually has one less 'S' than that, PostgreSQL, pronounced Post-GreS-Queue-Ell, or simply PostgreS (with the 'QL' silent).
It's a silly thing we've been (not really, but sometimes) arguing about in nerd-fight fashion for the last 35 years, it's even in the PostgreSQL FAQ.
like before watching!
54 mins! 😂 Holy baby Jesus!
Now when you are 50 years old, in addition to all the other issues, you will have all the damn certs suddenly expire. :))
@@romanrm1 I better up it to 100!
What's the easy way??
Great video. Yeah, don't bring the ludicrous kubectl pronunciations (even if a committer says it is "cuttle", as in the fish?) to the rest of the sane world ;-) ctl = ConTroL: systemctl (ConTroL!), timedatectl (ConTroL!), journalctl (ConTroL!), etcdctl (ConTroL!). The only confusion is in Kubernetes; the rest of the world is fine with "control" ;-)
I am a fan of saying "control" for ctl :). I never know how things cross over from k8s pronunciations :)
What about CloudNativePG?
Just one thing: if possible, please use dark mode for the explanation diagrams.
PS. Amazing video, Thanks for sharing.
Citus Data in Docker is the way to go...
open relational😂
I'd rather take a small 3-node Kubernetes cluster and put cnpg in there.
I've since moved Postgres out of my k8s cluster due to performance and continued issues with k8s storage (pods sometimes do not reconnect to their storage, and it takes manual intervention). Neither is an inherent issue with Postgres or CNPG; however, it still resulted in more issues and downtime than hosting in VMs/LXC (0 downtime so far in over a month). I agree that if I do go back to k8s, a dedicated cluster with a dedicated node pool is the way to go to avoid the performance issues you get with shared environments, but that still doesn't address the bigger issue I had, which was PVCs hanging or not reconnecting to pods.
I am currently trying to migrate our production environment to Kubernetes. Still testing at the moment. And I would also like to have PostgreSQL in Kubernetes via an operator like CNPG, StackGres, or Zalando. Would you advise against something like that?
@@ralumbur See my comment above for some of my issues and how I would do it differently in the future.
@@TechnoTim Were you experiencing those issues with k3s/rke2 or on native k8s? I have run into PVC mounting issues on rke2 clusters on Harvester HCI before which is why I moved away at some point from Harvester and rke2/k3s.
@@TechnoTim Were you using local storage or network storage? I am pretty sure CNPG recommends local storage, given that HA is covered by replicas.