Indeed, that's not an operator. It is a very good failover management/HA tool for Postgres that also works outside Kubernetes. CloudNativePG implements the high availability and self-healing of a Postgres cluster internally, as part of the Kubernetes controller extension, among other things. That's the fundamental difference: in CloudNativePG, the status of a Postgres primary/standby cluster is in Kubernetes itself, not in an external application.
@@DevOpsToolkit Thank you. Btw, I am not advocating the CrunchyData operator. Of all that I evaluated, I found CNPG and the CrunchyData operator both at the top of my priority list. Will be waiting for your video :)
Hi, great for production, but how do you deploy ephemeral PostgreSQL for developer environments, e.g. one instance per feature branch? I am looking for an approach like this, any thoughts? The challenge is bringing data and migrations to the instance for developers.
Just create a db when a PR is created and destroy it when the PR is closed. Data can be a simple SQL script (typically you do not need much) and I tend to use SchemaHero for schemas.
SchemaHero is nice, but it does not support triggers and other common objects. Providing a SQL script is a kind of slow process - think about E2E tests. I am looking for free alternatives like Database Lab or Spawn for data containers.
@@liciomatos This is exactly the reason why we built our operator this way. Have you looked at all the E2E tests we run for each commit? The idea is to run Postgres inside the application pipelines, and let developers own the end to end development process and be responsible for it. Join the community chat, as I'd like to know more and talk about this - I wrote an article last year describing this process, look for "Why run Postgres in Kubernetes?" by me.
@@GabrieleBartolini-o3k hi Gabriele, nice to meet you. The operator is great, but my point is one step before that: I am looking for a fast way to deploy PostgreSQL inside Kubernetes, like restoring a snapshot backup instead of running a SQL script. It would be nice to talk about it and possible ideas. Thanks
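For what it's worth, a throwaway per-PR cluster along the lines of the suggestion above could be sketched like this with CloudNativePG (the names, sizes, and seed SQL are illustrative, not from the video - check the CNPG docs for the exact fields):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pr-1234-db            # e.g. derived from the pull request number
spec:
  instances: 1                # no HA needed for a disposable environment
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      database: app
      owner: app
      postInitApplicationSQL: # seed schema/data right after initdb
        - CREATE TABLE IF NOT EXISTS example (id serial PRIMARY KEY, name text)
```

Creating this manifest in the PR pipeline and deleting it when the PR closes gives each branch its own database; restoring from an existing backup instead of seed SQL is roughly what the `bootstrap.recovery` section is for.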
It's great, but if you happen to fill up the disk you're done... in a classic VM you'd just extend the disk and move on with your life; in Kubernetes not so much, since the pod has crashed. I was unable to recover the instance in k8s, and posts in your forum provided nothing to solve this problem. Of course one can monitor etc., but it's bad that it appears unrecoverable and no help is to be gained on the official forum.
@@Tipsmark I think that the parameter is called `ExpandInUsePersistentVolumes`. Bear in mind that it ultimately depends on the storage provider (just as with VMs). Ceph and AWS EBS are examples of storages that can expand. When you describe StorageClass, there should be `allowVolumeExpansion` set to `true` or `false`.
@@DevOpsToolkit yep, and it works just fine, but the requirement is that the pod starts and finalizes the volume expansion... with a full disk you have a crash loop - it will never expand...
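For anyone hitting this, the expansion prerequisite lives on the StorageClass; a sketch of one that allows resizing (the name and provisioner below are just examples):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd        # illustrative name
provisioner: ebs.csi.aws.com  # example: AWS EBS CSI driver
allowVolumeExpansion: true    # without this, PVCs created from it cannot grow
```

Growing a volume is then a matter of raising `spec.resources.requests.storage` on the PVC - though, as pointed out above, that only completes if the pod can get far enough to finalize the filesystem resize.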
This inspired me to try CNPG. I have one question: with 3 worker nodes connected to a 10G switch via a separate LAN interface on 10.0.0.0/16 (not the public IP interface), how do you benefit from this fast network for in-cluster communication with Calico? I've tried IPPool with no joy (:
I don’t run databases in kubernetes because it’s easier for the cloud provider to do it for me. Nothing to do with kubernetes, rather the convenience and time savings
I agree with that 100%. Managed databases are always my first option. I tend to go with self-managed option on Kubernetes only when managed services are not allowed or the price is too prohibitive.
Well, having at least 3 replicas with a cloud provider here costs about 10% of our business's entire income. So it's quite expensive -> not for us. We just have to run the cluster ourselves, at least for now.
I haven't used CNPG in prod yet, so I can't comment on its detailed pros/cons; it would probably deserve its own blog post. The most notable ones in prod are Zalando's (I guess the oldest) and Crunchy Data's. There are also no blogs comparing all Postgres operators, which is just a shame. But I can mark the Bitnami Postgres HA one as the most dreadful experience for ops: no automated backup, no cluster option, no TLS, etc. Bitnami Postgres and Bitnami Postgres HA are really good for educational purposes, but stay away from them in prod 🤣
@@mr_wormhole The Data on Kubernetes Community is working on the so-called "Operator Framework", an independent comparison matrix of all database-related operators, starting with Postgres. In any case, you can find blog articles that compare the operators, including CloudNativePG - which is the youngest - by googling "Postgres operators Kubernetes".
Think of it this way. Databases run on VMs and use attached storage no matter whether all that is orchestrated through Kubernetes or some other means. What Kubernetes gives you is an API to define the desired state and a controller that tries to keep it in that state. You're not losing anything with Kubernetes, only gaining additional capabilities. The problem in the past was that people were trying to run DBs in Kubernetes using the built-in capabilities, which are insufficient. Now, however, we have controllers dedicated to DBs, and that changed the situation drastically. As for the question of whether we can run Karla in Kubernetes, the answer is yes.
@Vilayat_Khan Why would an update through an API introduce incompatibilities, while an update without one would not? If anything, Kubernetes gives you a health-check mechanism to stop rollouts when something's wrong.
Where and how do you run databases? VMs? Baremetal? Kubernetes? Managed services? Something else?
Will definitely try this one in my playground cluster! Thanks!!
Preach it, Viktor! 👏
At the moment we're using Crunchy Data's Postgres operator, which has served us well over the last years. It uses Patroni for cluster management and their pgBackRest tool for backups. Version 5.3.x also allows for automated major version upgrades of Postgres. Also open source. It has weird versioning, and they do not keep Postgres container images longer than 1 year - you have to back them up if you're not upgrading.
What piqued my interest in this one is logging as JSON and the removal of Patroni. Will give it a try. I am curious if I can migrate my current databases.
Made a comparison of postgres operators ~1 month ago and made a decision to go with cloudnative-pg. Running PG cluster on baremetal managed by k8s. cloudnative-pg looks very user-friendly, thanks to the well-thought-out CRDs and parameters + very nice kubectl plugin.
@@Artazar77 you have to know that we started this operator after having benchmarked Postgres in a bare-metal Kubernetes cluster. You can google the blog article I wrote in June 2020 that covers this.
I've been using CloudNativePG for almost 6-7 months now in different Kubernetes clusters and it works really well. Soon we're moving it to our production cluster. 🎉
Can you share your review with us? That would help us think about it.
@ganeshbabu8263 I haven't done a review of CNPG outside of what I said in that video. I'll add it to my to-do list. Until I do, the short version is that it's awesome and that I strongly recommend it if you can't use a managed service.
I came across a cool feature that I don't believe is so easy when running a database outside of k8s. I needed to move my cluster off of VPSs to get them onto bare-metal computers. So, I connected the new, much more powerful nodes into the cluster and cordoned off the old nodes. I then simply killed the "last" database node first. It then restarted on one of the new cluster nodes. I waited for the database HA process to replicate the data, then I killed the second-to-last database node (I had 4 - 1 master, 3 replicas), and I continued this process until all database nodes were on the new cluster nodes. And it worked! I call it database crawling. The database didn't stop serving, except when the new master was voted up. Only a few seconds of downtime. Awesome. 😁
I am just getting started with Kubernetes, and I couldn't find a good way to mimic AWS RDS and GCP Cloud SQL behaviour using vanilla k8s Deployments and StatefulSets. I'm glad that I found this video before deploying PostgreSQL with those primitives in production.
Hi, Thanks for this video on CloudNativePG! I didn't know about this specific postgres operator.
Completely agree, operators such as this really changed the game when it comes to deploying and managing databases on Kubernetes!
I also have a video on this topic on my channel where I use Crunchy PGO (another PostgreSQL operator).
It's getting to the point where you can actually build your own "self-service" platform, all with open source solutions, and operators on top of k8s!
I was worried about running a DB on Kubernetes, but after watching this video I have hope for deploying PG on k8s. Thank you sir. More videos on this topic please.
I'm using PG and Mongo operators, but I have to say that running a db in k8s with an operator is still not very easy. Especially when you're doing cluster/operator/db upgrades, it is kind of scary. For production I would rather use a managed db instance.
The video started with the sentence I always wanted to say. Best video so far
We’re using it. Some growing problems (skill issues on our part), but otherwise works well.
What a nice vid. Thanks, as always, for bringing quality videos to us. Cheers from Brazil.
I didn't know about this, but it does sound great! Might have to give it a try.
Thanks a lot, great summary!
Thanks Viktor for this review of our operator!
hey, hi. I'm curious, what are the advantages of CloudNativePG over Crunchy Operator?
It's more robust and designed from the ground up as an operator, with all the options available as CRs. That being said, both are great and I would not switch from one to the other unless something very specific is missing in one of them.
I love CloudNativePG, it just works. Very seldom do I find an operator that does what it says and is easy to use. What I dislike about it is that creating additional users and databases as IaC is a pain.
Best explanation I found so far, even for a beginner like me. You explain very well. Subscribed!
Hi Viktor, please consider new distributed ones like TiDB and CockroachDB; they support the MySQL and PG wire protocols and are fully automated by operators. I think we are approaching a new type of data platform which is a game changer.
That's true. I already have both on my to-do list.
@@DevOpsToolkit Looking forward to hearing about these
I think the biggest missing piece from what I could see here is an assumption that you will use CloudNativePG to run an OLTP, not an OLAP. In order to generate reports and analytics across a variety of online data sources, you will need a Data Warehouse, Query Federation, or some manner of data lake / lakehouse type solution.
I think a continuous backup to S3 plus something like Redshift Spectrum (or some other S3-consuming ETL/streaming) might get you there, but if the goal of CloudNativePG is to be a one-stop-shop to run Postgres, I'd prefer if they built more tooling to support export to your offline data system without having to engineer much.
That said, if we're running OLTPs in Kubernetes, we might as well put our data warehouses in there too, and if and when that happens, I'm sure someone will come up with some kind of CRD that deploys and manages your entire online and offline data system.
You are right, at the moment CloudNativePG is primarily focused on OLTP workloads - after all, Rome wasn't built in a day. You can certainly already put data warehouses in PostgreSQL in Kubernetes, but at the moment there are a couple of limitations when talking about VLDBs (multi-terabytes): we do not yet support declarative tablespaces, so you cannot place indexes or table partitions on different physical volumes, and we do not yet support K8s volume snapshots for backups and cloning.
Having said this, I am not sure you are aware of PostgreSQL Foreign Data Wrappers to actually connect to different sources. Moreover, we are now adding native support for the pg_failover_slots open source extension by EDB, which enables you to run Change Data Capture workloads in a Postgres cluster, making sure that failovers do not break your consumers' queues. This allows you to add, to traditional ETL/ELT processes, the continuous live streaming of changes to additional sources. I would like to validate this scenario with Debezium once we have enabled this feature (which should be available in 1.20.1).
Finally, thanks for bringing up the Data Warehouse topic, as that's the reason why I approached Postgres in the first place (I happen to be probably the first person to ever publicly speak about data warehousing with Postgres some 15 years ago). Happy to have conversations around this topic with you if you are interested.
For us - running mssql - it's a relatively simple matter of the database requiring more memory than I want a typical node to have. A big chonky non-sharded DB server might like 128 or 256gb ram, where my typical worker nodes are either 8/32 or 16/64 or something in that direction.
With those challenges came new challenges like making a separate node pool, having it tagged and tainted. Then adding more nodes in that pool, then implementing weird flavors of availability groups. On top of all that, iops are kinda unpredictable.
However, CloudNativePG looks interesting enough to take a look at, but I don't think it solves the memory issue. It does seem to solve a lot of other things in a great way though!
That's big... I cannot say whether those challenges are worth diving into. It depends...
If you are on EKS, then Karpenter can alleviate the node pools part for you. It's a group-less auto-scaler with a simple but powerful node requirements syntax.
Preaching… as you say though we have lots of people out there not at the operator level just yet. More videos like this are needed to identify and educate the community on this. My biggest ask to the community is STOP SAYING KUBERNETES IS STATELESS ONLY! It’s not true, storage is a constant enhancement in every release of Kubernetes.
You should join the DoK Community!
@@gbartolini I am in, great community
Whether or not to run databases in k8s depends a lot on which database we're talking about.
A lot of databases handle replication at the application level in a way that makes individual nodes/volumes not a single point of failure.
There's a huge difference between running, say, ScyllaDB or CockroachDB in k8s and trying to run a PostgreSQL cluster. ... PostgreSQL is not really helping you a lot to make any single node redundant.
Hey Viktor, I have two questions about your video (which is great by the way) -
1. Don't you find operators too complex as a "black box"? I find it much simpler to deploy a Helm chart of PostgreSQL, for example, and not have it managed by code which I don't fully understand (nor have time to fully dig into)... In case of a bug in the operator, I find it problematic to have no knowledge about its inner workings.
2. What do you think about operators that manage StatefulSets themselves? Do you find them problematic as well? I tried the CrunchyData Postgres Operator and it uses StatefulSets and has most of the features you mentioned.
Again thank you for this video! I fully agree that K8s is the best platform for databases.
Good questions
1) It all depends on what someone considers "code which I don't fully understand". It is open source, so the code is there for anyone to discover. I could, theoretically, say the same for code someone else in my company wrote (without me being involved). Heck, I know people who wrote their own databases, schedulers, etc. Ultimately, there is always code that someone else wrote and I do not understand. That can be the OS or the database itself.
2) I'm not necessarily against statefulsets, as long as that is a low level implementation accompanied with everything else we might need to manage a specific stateful app.
Hahahaha, no way - some months ago I was just starting with Kubernetes.
I just thought: hey, I can manage my PostgreSQL databases using StatefulSets.
I did it.
Then I read in a post: man, you NEED backups.
And I thought: well, that's completely right, I will lose all my data if I don't have backups.
So: I was using local-path from Rancher to store my databases on my host machine (yes, I was running Kubernetes on only one instance, overkill).
So I decided to create a crontab to send my local-path folder to an Oracle container (because it's free haha).
Thanks, today I discovered a definitely BETTER solution!!
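For anyone in the same spot, the declarative replacement for that crontab in CloudNativePG is roughly a `backup` stanza plus a `ScheduledBackup` resource (the bucket, Secret names, and schedule below are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-db
spec:
  instances: 3
  storage:
    size: 10Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://my-backup-bucket/my-db  # any S3-compatible store
      s3Credentials:
        accessKeyId:
          name: backup-creds        # Secret holding the credentials
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: ACCESS_SECRET_KEY
    retentionPolicy: "30d"
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: my-db-nightly
spec:
  schedule: "0 0 2 * * *"           # CNPG uses a six-field cron spec (with seconds)
  cluster:
    name: my-db
```

The operator then handles WAL archiving and base backups continuously, instead of you shipping a data directory by hand.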
Excellent content, congratulations for sharing...
I tried to find operators for OpenSearch, Redis, and etcd (so quite popular databases, I think):
- for OpenSearch there is an operator, but no autoscaling... (at least not with KEDA)
- for Redis the only one which seems to be nice is the Redis Enterprise operator (the rest don't seem to be ready for autoscaling), but it is not OSS
- for etcd I didn't find an operator
So maybe it is just me, but this way (running a database in k8s) seems not fully ready yet in 2023 - probably yes for Postgres, but I also don't know whether it supports autoscaling? (You mentioned scaling, but not all CRDs have the scaling subresource on.)
Wdyt? Maybe I'm missing something?
You're right. It all depends on the maturity of operators for a specific database.
I'd like to try CloudNativePG right away. K8s-fu like this could take DB ops work away and make k8s a better platform to run stateful workloads.
CNPG is great. One feature is missing, though, which is sometimes important for large databases: tablespaces. CNPG does not support tablespaces. Otherwise, it is the best operator I have evaluated so far.
We're working on them! ;)
@@GabrieleBartolini Thanks :)
Enjoyed this video a lot :) @DevOpsToolkit what operator would you recommend for MySQL?
I haven't been using MySQL for a while so I might not be the best person to answer that question.
I personally like Crunchy PGO, for example because it also supports Postgres major version upgrades. It looks to me like each operator has its pros and cons, some are more robust and production-ready, but which one is the best?
I still have no definitive answer to that question
CloudNativePG supports major version upgrades too - look for the database import facility. FWIW, we have plans for both in-place upgrades (still offline) and major online upgrades with logical replication. The fundamental difference is that in CloudNativePG these operations must be declarative and reconciled - they are not imperative.
@@GabrieleBartolini-o3k It looks like CloudNativePG does this by importing the database via pg_dump/pg_restore, while PGO does it via a k8s resource PGUpgrade and by using pg_upgrade, which I prefer to pg_dump/pg_restore
@@IvanRizzante It all makes sense. I will provide my point of view so that you can better understand the difference in approach - I'm not saying one is better than the other.
In any case, given the level of control we now have over the underlying PVCs, and the direct support for volume snapshots we are working on, we could implement a similar "imperative" resource, provided it makes sense to our user base (after all, the "PGUpgrade" resource is more of a job - it is a verb, hence imperative in my interpretation).
On a technical level, we have actually already done 95% of the work in the code; we just need to think about a way to manage this that is not imperative and that also takes failures into account. I don't think that leaving the cluster in an irreparable state is compatible with a declarative approach (for example, what do we do if the upgrade fails? What if the extensions cannot be updated? IMHO it is no coincidence that even with the Crunchy operator, upgrades with PostGIS are not supported yet).
I'd be interested in knowing whether even an imperative approach like Crunchy's is enough for users, as that could be implemented as a command of the "cnpg" plugin.
@@GabrieleBartolini-o3k Of course, thanks for clarifying that CloudNativePG supports it as well.
@@IvanRizzante thank you for raising the question!
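To make the import-based approach discussed above concrete, here is a rough sketch of what bootstrapping a new major-version cluster from an existing one looks like with CloudNativePG. All names (`cluster-pg16`, `cluster-pg12-rw`, the secret name) and the image tag are placeholders; check the CloudNativePG documentation for the exact fields supported by your version.

```yaml
# Hypothetical sketch: bootstrap a new PG 16 cluster by importing all
# databases and roles from an existing PG 12 cluster (pg_dump/pg_restore
# runs under the hood). Names and versions are illustrative only.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-pg16
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:16
  storage:
    size: 10Gi
  bootstrap:
    initdb:
      import:
        type: monolith          # import everything into one target cluster
        databases: ["*"]
        roles: ["*"]
        source:
          externalCluster: cluster-pg12
  externalClusters:
    - name: cluster-pg12
      connectionParameters:
        host: cluster-pg12-rw   # read-write service of the old cluster
        user: postgres
        dbname: postgres
      password:
        name: cluster-pg12-superuser
        key: password
```

Crunchy's PGUpgrade resource, by contrast, runs pg_upgrade in place against the existing data directory, which is why the thread describes it as imperative: it is a one-shot action rather than a declared state to reconcile.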
Game changer!
Thanks for the discovery 😊
Thank you for this video. I want to know how you manage the volumes - are you using a specific provisioner?
Most of the time I use one of the Storage Classes provided by whichever hyperscaler I'm using.
@DevOpsToolkit I want to deploy CNPG on-prem. Do you have advice on what I should use? I'm thinking about Longhorn. Any reference or suggestions would help.
@elabeddhahbi3301 Unfortunately, I rarely work with on-prem, so my experience with on-prem storage is very limited and I'm not a good person to recommend a solution.
What about non-Postgres DBs? There are also operators for MongoDB and Redis, but they use StatefulSets. Should I wait for a cloud native operator like this one for the DB that I want?
It depends on the DB. I tend to use mostly PostgreSQL, so that's the one I'm most familiar with.
Yair 123 - can I ask which operator you are using for Redis? :)
It does not run with a sidecar proxy because it has one of its own for failsafe.
There's another project called stolon but it doesn't use operators.
That's not an operator indeed. It is a very good failover management/HA tool for Postgres that also works outside Kubernetes. CloudNativePG implements the high availability and self-healing part of a Postgres cluster internally, as part of the Kubernetes controller extension, among other things. That's the fundamental difference: in CloudNativePG the status of a Postgres primary/standby cluster is in Kubernetes itself, not in an external application.
Which other PostgreSQL K8s operator do you recommend? How about CrunchyData or Percona?
I haven't used Percona much, so I cannot comment on it.
@@DevOpsToolkit Ok. And what about CrunchyData?
@@debkr CrunchyData is a good one. I already have it on my TODO list to create a video about it and, potentially, compare the two.
@@DevOpsToolkit Thank you. Btw, I am not advocating CrunchyData operator. Of all that I evaluated, I found CNPG and CrunchyData operator both to be on top of my priority list. Will be waiting for your video :)
Thx for the video. Is it possible to add extensions (e.g. TimescaleDB)?
I don't think so. TimescaleDB is more of an alternative than an addition.
What do you think about Zalando Postgres Operator with Patroni HA?
Patroni is great.
We are running it in production, and it's working great!
How would you configure a MySQL cluster across multiple Kubernetes clusters? 🌚
Why in multiple kubernetes clusters?
Hi, great for production, but how do you deploy ephemeral PostgreSQL for developer environments, with an approach that works per feature branch? I am looking for an approach like this - any thoughts? The challenge is to bring data and migrations into the instance for developers.
Just create a DB when a PR is created and destroy it when the PR is closed. Data can be a simple SQL script (typically you do not need much), and I tend to use SchemaHero for schemas.
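That per-PR workflow can be sketched as a minimal CNPG manifest that CI applies on PR open and deletes on PR close. This is a hedged illustration: the cluster name, database/owner names, and seed SQL are placeholders your pipeline would template in (e.g. from the PR number), and field availability should be verified against your CloudNativePG version.

```yaml
# Hypothetical sketch: a throwaway single-instance Postgres per pull
# request. "app-pr-1234" would be templated from the PR number by CI.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-pr-1234
spec:
  instances: 1            # no HA needed for a short-lived preview DB
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      database: app
      owner: app
      postInitApplicationSQL:   # small seed dataset; keep it minimal
        - CREATE TABLE IF NOT EXISTS example (id int PRIMARY KEY);
        - INSERT INTO example VALUES (1) ON CONFLICT DO NOTHING;
```

When the PR closes, something like `kubectl delete cluster app-pr-1234` (via the CNPG CRD) tears down the instance along with its PVCs, so nothing lingers between branches.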
SchemaHero is nice, but it doesn't support triggers and other common objects.
This solution of providing an SQL script is kind of a slow process - think about E2E tests. I am looking for free alternatives like Database Lab or Spawn for data containers.
@@liciomatos This is exactly the reason why we built our operator this way. Have you looked at all the E2E tests we run for each commit? The idea is to run Postgres inside the application pipelines and let developers own the end-to-end development process and be responsible for it. Join the community chat, as I'd like to know more and talk about this. I wrote an article last year describing this process - look for "Why run Postgres in Kubernetes?" by me.
@@GabrieleBartolini-o3k Hi Gabriele, nice to meet you.
The operator is great, but my point is one step earlier: I am looking for a fast way to deploy PostgreSQL inside Kubernetes, like restoring a snapshot backup instead of running an SQL script.
It would be nice to talk about it and possible ideas.
Thanks
It's great, but if you happen to fill up the disk you're done... on a classic VM you'd just extend the disk and move on with your life; in Kubernetes, not so much, since the pod has crashed. I was unable to recover the instance in k8s, and posts on your forum provided nothing to solve this problem. Of course one can monitor etc., but it's bad that it appears unrecoverable, with no help to be gained on the official forum.
Extending a disk on a VM works no matter whether that disk is attached to the VM directly, through Docker, through Kubernetes, or anything else.
@@DevOpsToolkit Sure, so could you then please explain how to expand a PVC that's full and where PostgreSQL has crashed as a result...
@@Tipsmark I think the feature gate is called `ExpandInUsePersistentVolumes`. Bear in mind that it ultimately depends on the storage provider (just as with VMs). Ceph and AWS EBS are examples of storage that can expand. When you describe a StorageClass, `allowVolumeExpansion` should be set to `true` or `false`.
@@DevOpsToolkit Yep, and it works just fine, but the requirement is that the pod starts and finalizes the volume expansion... with a full disk you have a crash loop - it will never expand...
@@DevOpsToolkit Try it yourself: just make a small single-instance PostgreSQL DB and fill it up until it crashes... game over.
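For reference, the expansion mechanism being discussed above is standard Kubernetes, not anything CNPG-specific. A sketch, assuming a CSI driver that supports expansion (the class name `expandable-ssd` and the AWS EBS provisioner here are just illustrative examples):

```yaml
# Illustrative StorageClass: expansion only works at all if the class
# was created with allowVolumeExpansion enabled and the CSI driver
# supports it. Provisioner and name are example values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true
```

With such a class, growing a volume is normally a matter of raising `spec.resources.requests.storage` on the PVC (or, with CNPG, `spec.storage.size` on the Cluster) and letting the driver resize the filesystem. The caveat raised in this thread still stands: some drivers only finish the filesystem resize once the pod mounts the volume, so a completely full disk can leave the pod crash-looping before the resize ever completes.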
This inspired me to try CNPG. I have one question: with 3 worker nodes connected to a 10G switch via a separate LAN interface on 10.0.0.0/16 (not the public IP interface), how do you benefit from this fast network for in-cluster communication with Calico? I've tried IPPool with no joy (:
Unfortunately, I don't have that much experience with Calico, so I can't say.
❤
Do you have a feedback from a DBA ?
CNPG was made by DBAs for DBAs.
Hi Viktor, is there something similar for MongoDB?
I haven't been using Mongo for a while, so I'm not up to date. I do remember there were a few operators, but I cannot say how good they are (or aren't).
@@DevOpsToolkit Did you try the Percona operator for MongoDB, @ochanlee4414?
I think the percona operator might be worth a try for mongodb.
I don’t run databases in kubernetes because it’s easier for the cloud provider to do it for me. Nothing to do with kubernetes, rather the convenience and time savings
I agree with that 100%. Managed databases are always my first option. I tend to go with self-managed option on Kubernetes only when managed services are not allowed or the price is too prohibitive.
Well, having at least 3 replicas with a cloud provider costs about 10% of our business's entire income. So it's quite expensive - not for us; we just have to run the cluster ourselves, at least for now.
CrunchyData is more bullet-proof and fully packed with features for ops.
I also vouch for PGO from CrunchyData.
I'd be interested in knowing more about this. Could you please elaborate?
@mr_wormhole care to expand on that statement? Have you used cnpg to give an articulate pro/con between the two?
I haven't used CNPG in prod yet, so I can't comment on detailed pros/cons of it; that would probably deserve its own blog post. The most notable ones in prod are Zalando (I guess the oldest) and Crunchy Data. There are also no blog posts comparing all Postgres operators, which is just a shame.
But I can mark the Bitnami Postgres HA chart as the most dreadful experience for ops: no automated backup, no cluster option, no TLS, etc.
Bitnami Postgres and Bitnami Postgres HA are really good for educational purposes but must stay away from prod 🤣
@@mr_wormhole The Data on Kubernetes Community is working on the so-called "operator framework", an independent comparison matrix of all database-related operators, starting with Postgres. In any case, you can find blog articles that compare the operators, including CloudNativePG (which is the youngest) by googling "Postgres operators Kubernetes".
With all these arguments you give, I still don't think I can put a mission-critical DB on k8s. Can you put Kafka on k8s?
Think of it this way: databases run on VMs and use attached storage no matter whether all that is orchestrated through Kubernetes or some other means. What Kubernetes gives you is an API to define the desired state and a controller that tries to keep it in that state. You're not losing anything with Kubernetes; you're only gaining additional capabilities.
The problem in the past was that people were trying to run DBs in Kubernetes using the built-in capabilities, which are insufficient. Now, however, we have controllers dedicated to DBs, and that has changed the situation drastically.
As for the question of whether we can run Kafka on Kubernetes, the answer is yes.
@@DevOpsToolkit OK, good answer. But again, when you need to update the DB, or the operator running the DB, or patch the DB - may incompatibility issues arise?
@Vilayat_Khan Why would an update through an API introduce incompatibilities while an update without one would not? If anything, Kubernetes gives you a health-check mechanism to stop rollouts when something's wrong.
Thanks for your work trying to promote pathways to the present and the future, and thanks for your talk at GOTO today ❤