You missed the point of a deadlock. If the Payment service locks the table for an update and another service tries to update that table and fails - it's not a deadlock, it's by design :)
A deadlock is when the User service locks the [User] table and starts an update operation that will also need to update the [Payment] table. Meanwhile the Payment service locks the [Payment] table and starts an operation that will also need to update the [User] table. Before the User service releases its lock on [User], it needs to obtain the lock on [Payment] to complete its operation. However, before the Payment service releases its lock on [Payment], it needs to obtain the lock on [User] - hence the deadlock.
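Not from the video, just a minimal sketch of that cycle in plain Java, with ReentrantLock standing in for the table locks and a tryLock timeout standing in for the DBMS deadlock detection (the class and service names are made up for illustration):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    // Stand-ins for the locks on the [User] and [Payment] tables
    private static final ReentrantLock userLock = new ReentrantLock();
    private static final ReentrantLock paymentLock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // Each "service" takes the two locks in the opposite order.
        Thread userService = new Thread(() -> update("UserService", userLock, paymentLock));
        Thread paymentService = new Thread(() -> update("PaymentService", paymentLock, userLock));
        userService.start();
        paymentService.start();
        userService.join();
        paymentService.join();
    }

    private static void update(String name, ReentrantLock own, ReentrantLock other) {
        own.lock();
        try {
            System.out.println(name + " holds its own lock, now needs the other one...");
            Thread.sleep(100); // give the other thread time to grab its own lock
            // The timeout stands in for deadlock detection: neither thread can ever
            // get the second lock, so both give up here. A real DBMS would pick one
            // victim and let the other transaction proceed.
            if (other.tryLock(2, TimeUnit.SECONDS)) {
                try {
                    System.out.println(name + " finished (no deadlock this time)");
                } finally {
                    other.unlock();
                }
            } else {
                System.out.println(name + ": deadlock, giving up");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            own.unlock();
        }
    }
}
```

Each thread grabs its own lock first and then waits for the one the other thread holds, so neither can proceed; the timeout is only there so the demo prints the result instead of hanging forever.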
Exactly. In that case the negative part will be the delay in the second transaction, which can also be considered really bad if we turn up the number of updates between the services.
@@MatheusSilva-kc2xn 02:40 Your definition of a deadlock was incorrect. Locking is a normal part of transaction management, but a deadlock is a specific situation where two or more transactions are stuck waiting for each other to release resources, creating a cycle that prevents ANY of them from progressing. What you demonstrated in your video was normal locking, not a deadlock scenario.
I thought no one had noticed this. Yeah, the author definitely hasn't studied the concurrency topic and the types of locks before starting to talk about them. Off the top of my head, 2:40 pictures normal race conditions that happen petabillions of times in all applications with a shared DB, and that does not lead to data corruption, forever-locking or other bad stuff.
A shared db shouldn't be called an anti-pattern. Because if you have split payments and users dbs, you'll still have some dependencies between them, and you're complicating your life drastically by rolling out a multi-db transaction; it's super error prone. So a multi-db schema is hard to get right and hard to maintain. Deadlocks don't seem to be a relevant argument, because if you're updating tables that aren't related there cannot be a deadlock. And if you're updating a user and a payment simultaneously, you can run into a deadlock condition either with a shared db or during your multi-db transaction. 3:00 "If you're trying to scale in the future" - a valid point, but "in the future" is the key here. If you have to scale it in the future, you'll be dealing with the extra problems of multi-db transactions in the future, rather than from the beginning.
And watching further only confirmed how error-prone distributed transactions are. "1 phase: we begin and do inserts, 2 phase we do the commit" - no, this is wrong; it will be screwed up if your "commit" query fails in the second phase, and that is not what two-phase commit is. And this is the exact reason why everybody should avoid designing multi-db microservices at all costs without necessity, and do thorough research if there is a necessity.
"And a second pattern is Saga, but rollback here is too complicated for explanation, so let's move further, but remember that the simple way is an anti-pattern"
@@ml_serenity I really like the video that you suggested. Separating dbs and syncing them with the event bus is such a burden, but this video helped me grasp the idea better, and it makes sense.
As someone who designed and built one of the most business-critical platforms for a major bank back in 2007 based on microservices - when this concept wasn't even named - this whole microservices-for-business-domains approach is one big stupid idea. Customers are piling tech debt upon themselves… For 40 years, enterprises spent effort on data integrity, and suddenly now you have a teenager telling you that it's OK to have ten copies of your customer?? Bounded domain contexts, and consequently each microservice having its own database, may work for a streaming video content delivery business but it will create a huge mess for most companies.
As a DBA turned developer I couldn't agree more. I've noticed younger developers like to regurgitate these sweeping proclamations and overstatements they heard somewhere on the internet. I had a young dev try to get away with proclaiming relational databases as too slow for large applications. This kid had never even worked on a large-scale application. All he had ever done was small web apps with a few hundred concurrent users, maybe a few thousand tops. He had never worked with databases with millions or even hundreds of millions of records -- never fine-tuned indexes for column order or the like. There is a case to be made for nosql / document dbs but they are not the silver bullet younger gens have been brainwashed into believing. Relational still brings so many foundational benefits, and multi-tenancy is one of those benefits. The only thing harder than securing and maintaining one database is doing the same for a hundred, or even a thousand databases.
I'm getting my master's in computer engineering and I have to build an app using microservices "the pure way" with isolated dbs. Mostly it makes no sense; the whole "if one service goes down the others keep running" thing is a fallacy - it's enough to run another replica and it's solved. I think this may have true value only for very large systems with a lot of services that have a large degree of independence from each other.
Minute 15: one or two seconds of delay in an e-commerce system is fine... Amazon people are yelling at you!
Lol, sorry, but that picture came to my mind.
Lots of basic concepts don't seem right. In microservices (especially when implemented over HTTP), how can you do a two-phase commit? Even with a shared database, if two services are involved there is no way to guarantee ACID. That is why the Compensating Transaction pattern is widely used in microservices architectures.
Have an idea - first of all, do not use HTTP REST services. Instead, it should be a Java EE stateless bean. So the Order Service would be a Java EE stateless bean that invokes two other Java EE beans. All related methods should have the "transaction required" annotation. In that case the application server should do everything else for you.
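Roughly what that suggestion looks like as a sketch (this is a fragment, not a runnable example: the bean and method names are assumed, UserService and PaymentService would be separate @Stateless beans defined elsewhere, and the data sources behind them have to be XA-capable for the container to coordinate anything):

```java
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// The container-managed JTA transaction started here spans all three beans,
// so an exception in either dependency rolls back the whole operation.
@Stateless
public class OrderService {

    @EJB
    private UserService userService;       // assumed @Stateless bean, defined elsewhere

    @EJB
    private PaymentService paymentService; // assumed @Stateless bean, defined elsewhere

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(long userId, long amountCents) {
        userService.updateBalance(userId, -amountCents);   // joins the same transaction
        paymentService.recordPayment(userId, amountCents);  // joins the same transaction
    }
}
```

This only helps if everything runs in the same application server, which is part of why the HTTP-based microservices discussion above gets so much more complicated.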
Possible
2-phase commit on a DB will 100% fail you. Do not try it.
@@jasonsoong9483 could you explain a bit of why, or give a link to where I can read about it?
@@yatsukJason Never Lie
I'm so early that higher resolution is just not available yet.
same for me 😅
Hahaha should be sharper now
Learned a lot and it helped me relax on a Friday night. Thanks man
Any time!
This is so useful! Needed some help untangling spaghetti and now I've got a deeper understanding of these tools
Perfect!
Nice video. Thanks for this awesome content.
What AI tool are you using to generate images?
Nicely explained, please add the timestamps too.
Great video, man! Really concise as well.
Thanks mate!
Sending calls to other services and waiting for their responses inside a transaction will drastically increase the transaction time, which can't be good...
Fantastic video! I especially liked you mentioning anti patterns. They helped a lot with understanding the concepts
Glad you enjoyed it!
Why should the user be updated when placing an order? That's the root of all evil.
Thank you for this video. I really enjoyed it!
Thank you. It helps a lot
2:40 is not a deadlock. It is a normal race condition that resolves with time: the second request waits in the queue until the first is done. That is why changes in the DB happen in a transactional way, to prevent data corruption. Billions of applications work with a shared DB; my latest project had 52 different services sharing one DB. It is not a good idea to make a video about things where you have shallow knowledge.
I have one doubt as well: when to use Kafka and when RabbitMQ? I read somewhere that Kafka is log-based and RabbitMQ is in-memory?
Kafka is an append-only log, basically a queue. It can persist every message it takes to disk at the same time as it delivers it over the network to subscribers of the topic.
RabbitMQ is a message broker. It's similar, but instead of one message going in and being delivered to one consumer, you can deliver to many subscribers, set up routing based on the message content, configure persistence or no persistence, etc.
If you need an extremely fast message buffer, use Kafka. If you need a message broker (more suitable for event-driven architecture), use RabbitMQ (or NATS, or many others).
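As a rough illustration of that difference from the producer side (assuming local brokers, made-up topic/exchange names, and the kafka-clients and amqp-client libraries on the classpath):

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BrokerComparisonSketch {

    // Kafka: append a record to a durable, partitioned log; consumers pull
    // from the log at their own pace and keep track of their own offsets.
    static void publishToKafka() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("order-events", "order-42", "{\"status\":\"created\"}"));
        }
    }

    // RabbitMQ: publish to an exchange; the broker routes the message to queues
    // by routing key and pushes it to whoever is subscribed.
    static void publishToRabbit() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.exchangeDeclare("orders", "topic", true);
            channel.basicPublish("orders", "order.created", null,
                    "{\"status\":\"created\"}".getBytes(StandardCharsets.UTF_8));
        }
    }

    public static void main(String[] args) throws Exception {
        publishToKafka();
        publishToRabbit();
    }
}
```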
7:23 "..everything is good" What about the commit of Payment fails and the commit of User succeeds? We still have a problem, right? Or am I missing something?
I am so confused - when talking about the first anti-pattern you use the words database and table as synonyms. Care to explain why?
Bro, can you make a MERN stack project with DDD, clean architecture, CQRS and event sourcing?
Can we say that CQRS is similar to setting up read replicas, where there is a master and multiple read slaves?
No, we can't. The key difference is that a replica of a data store means it's the same data; CQRS doesn't necessarily mean the same data, it's an optimized view of aggregate data from multiple data sources in a distributed system.
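A tiny in-memory sketch of that distinction, with made-up event and view types: the write side stores the raw facts, while the read side keeps a denormalized view projected from them, so the two stores intentionally hold different shapes of data:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CqrsSketch {

    // Write side: the raw fact recorded by a command.
    record OrderPlaced(String userId, long amountCents) {}

    // Read side: a denormalized view shaped for one specific query.
    record UserOrderSummary(String userId, int orderCount, long totalSpentCents) {}

    static final List<OrderPlaced> writeStore = new ArrayList<>();           // normalized facts
    static final Map<String, UserOrderSummary> readStore = new HashMap<>();  // projected view

    // Command handler: record the fact, then project it into the read model.
    static void placeOrder(String userId, long amountCents) {
        writeStore.add(new OrderPlaced(userId, amountCents));
        readStore.merge(userId,
                new UserOrderSummary(userId, 1, amountCents),
                (existing, fresh) -> new UserOrderSummary(userId,
                        existing.orderCount() + 1,
                        existing.totalSpentCents() + amountCents));
    }

    // Query handler: reads only the denormalized view, never the write store.
    static UserOrderSummary summaryFor(String userId) {
        return readStore.get(userId);
    }

    public static void main(String[] args) {
        placeOrder("u1", 1999);
        placeOrder("u1", 500);
        // Prints: UserOrderSummary[userId=u1, orderCount=2, totalSpentCents=2499]
        System.out.println(summaryFor("u1"));
    }
}
```

In a real system the projection step would run asynchronously (e.g. off an event bus), which is where the eventual-consistency trade-off mentioned elsewhere in the thread comes from.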
This was well explained
Liked and subbed. You have a cute voice btw, love it.
Two questions:
1) With CQRS, if we have a write DB and a read DB, can we just use a normalized OLTP store for writes and a denormalized OLAP store for reads? Like, why not read from OLAP if it's faster at that?
2) Same with the streamed (Kafka) event log: why don't we build the read DB from Kafka as an OLAP store, or use an event-stream processor like Flink?
Please correct me where my thinking is wrong.
You certainly can. Materialized views (the read db) should be optimized for reads, so it totally makes sense to use a document db, for example. It will of course complicate the replication step, but it can result in significantly better performance.
@@ml_serenity Thank you, but can you please clarify what you mean? OLAP exists as both relational DBs (Snowflake, BigQuery) and document-store DBs (Couchbase, Cosmos), and both document and relational stores can have materialized views. I know they are for faster queries, but which are you suggesting?
1) document store write for application dev with document materialized view olap read?
2) document store write for application dev with relational materialized view olap read?
3) relational DB for application dev with document materialized view olap read?
4) relational DB write for application dev with relational materialized view olap read?
I feel like option 4 is the easiest replication solution because everything is SQL-based, but I have friends who swear by document-store writes for application development, so I'm interested in your opinion.
Superb Video. I would like to see practical samples of what we discussed here
Coming soon!
great video!
Awesome video
Deadlocks have nothing to do with the given example
This promotes over-engineering
Hmm - seems needlessly complex. Just use a solid transactional DB, which is built for handling heavy volume, and don't try to code it on the app side. Seems like you are using microservices just for the sake of the tech.
I'm sorry, but you seriously don't know what you're talking about. Please invest more time in understanding these technologies. I hope this comes across as constructive criticism. I wish the best for you in the future, take care :)
Appreciate your criticism. Would you mind being specific and telling me what exactly makes you think I don't know what I'm talking about? Thanks in advance :)
Sir do you earn 100k euros a year as a software engineer? Plz reply. Thanks a lot.
No, I don't 🙂
@@SoftwareDeveloperDiaries which country do you live in?
I do, even more than that 😂
@@Aleks-fp1kqok
Hello sir, this is the best explanation video. I sent a connection request on LinkedIn, please accept.
Better architecture, or do you mean a lot of money to spend?
You know, you're kind of right and it makes me sad. Cloud architecture is actually pretty fun, but you're bottlenecked by the amount you have to spend. (Like most things in life lol)
Although there is some fun to be had in trying to squeeze out as much as possible from free tiers.
great content