Brilliant talk - well done! I recently implemented some of these recommendations and was pleased to see that the approach I took aligns with common-sense principles. To be honest, my background in building low-level protocols above L3 (TCP) has given me significant insight into designing event-driven schemes, especially around metadata schema design.
12:24 I would like to argue that the main difference between events and the rest (commands and queries) is "the cardinality", if you will.
It's true that with a command you instruct a system to execute a certain action. My disagreement is that events don't "tell the system that something has happened"; they announce that something happened, but to no system in particular.
I believe this subtle difference is where the key to decoupling lies.
Events should be designed agnostic of consumers, focusing solely on the producer's domain, while commands and queries are designed by the consumer and treated by the producer as an API.
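The cardinality distinction this comment draws can be sketched in code. This is a hypothetical illustration (the type and field names are invented, not from the talk): the event carries only facts from the producer's domain and names no recipient, while the command is shaped by, and addressed to, its single consumer.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class OrderPlaced:
    """Event: a statement of fact in the producer's vocabulary.
    No target, no expected reply -- any number of consumers may subscribe."""
    order_id: str
    placed_at: str

@dataclass(frozen=True)
class PrepareOrder:
    """Command: designed by its one consumer and addressed to it.
    Cardinality of exactly one handler."""
    target: str
    order_id: str

event = OrderPlaced(order_id=str(uuid.uuid4()),
                    placed_at=datetime.now(timezone.utc).isoformat())
command = PrepareOrder(target="kitchen-service", order_id=event.order_id)
```

The asymmetry is the point: renaming or adding a consumer never changes `OrderPlaced`, whereas `PrepareOrder` is effectively part of the kitchen service's API.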
Great video, really informative. The examples and explanations were really well delivered, with some thought-provoking concepts along the way.
Whatever you do, NEVER save the Kitchen data directly to the Kitchen Service!
Save it to the Order Service and then use Callbacks, ECST, Caches, Schema Management Protocols, Governance, and CDC to get the kitchen data to the Kitchen Service.
WTF?
I used to be religious. Then I learned about event-driven systems!
Actually, this talk avoids most of the hard problems with event-driven systems. The solutions that are presented are naive. Whenever you copy an event, or duplicate data, you introduce a whole new set of consistency and reliability problems. This is blithely ignored in several places.
"Copy an event". Say more. I can't tell whether you are insightful or struggling with a fundamental misunderstanding.
@@mileselam641 If you pull an event off the main message bus and put it in a queue (as suggested here), what happens if that queue falls over? If you take some event data and cache it somewhere else (as suggested here, e.g. in S3), what happens if that data becomes stale, or is lost? How do you reason about any of this? And if you assume you have a durable, asynchronous message queue, then why mention setting up your own queue in the first place? Whenever you make a copy of something, you introduce new problems, failure modes, and edge cases. Such issues can be dealt with, when necessary, but this talk doesn't really give any good examples. It glosses over the hard problems.
@@johndunderhill Saga
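For readers unfamiliar with the one-word reply: the saga pattern is one standard answer to the consistency problems raised above. Instead of a distributed transaction, each step carries a compensating action, and a failure partway through undoes the completed steps in reverse order. A minimal sketch, with invented step names:

```python
def run_saga(steps):
    """Run (action, compensate) pairs; on failure, compensate completed
    steps in reverse order. Returns True on full success."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        return False
    return True

log = []

def reserve_stock(): log.append("reserve stock")
def release_stock(): log.append("release stock")
def charge_card():   raise RuntimeError("payment declined")
def refund_card():   log.append("refund")

ok = run_saga([(reserve_stock, release_stock),
               (charge_card, refund_card)])
# ok is False; only the completed step was compensated:
# log == ["reserve stock", "release stock"]
```

Note this trades atomicity for eventual consistency: compensations are new actions, not rollbacks, which is exactly the class of complexity the parent comment says the talk glosses over.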
So yeah -- some singularity exploded and somebody earns money on purpose at an NDC conference.
Notice Datadog, the sponsor's logo? Event-driven architecture benefits Datadog one way or another. Regardless of language or platform, to debug an event-driven architecture developers need to piece back together the chain of events and requests (by means of tracing/correlation IDs and the like) across all related services. Conclusion: choose your poison, but Datadog/Kibana should come to mind.
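The correlation-id stitching this comment alludes to can be sketched simply. This is a hypothetical minimal scheme (the field names follow common messaging conventions, not any specific vendor's): every message descended from the same originating request carries the root's `correlation_id`, plus a `causation_id` naming the message that directly produced it, so a tool like Datadog or Kibana can reassemble the end-to-end flow.

```python
import uuid

def new_message(payload, correlation_id=None, causation_id=None):
    """Wrap a payload in a messaging envelope. A root message starts a
    new correlation chain; derived messages inherit the root's id."""
    msg_id = str(uuid.uuid4())
    return {
        "id": msg_id,
        "correlation_id": correlation_id or msg_id,  # root starts the chain
        "causation_id": causation_id,                # direct cause, for ordering
        "payload": payload,
    }

order = new_message({"type": "OrderPlaced"})
kitchen = new_message({"type": "PrepareOrder"},
                      correlation_id=order["correlation_id"],
                      causation_id=order["id"])
# Searching logs for one correlation_id now returns the whole flow.
```

Searching on `correlation_id` yields every message in the request, while `causation_id` recovers the tree of which message triggered which.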
19:32 "instead of talking about REST APIs and microservices, you are now talking about what should happen after an order is placed"
Now you will be talking about messages, queues, and eventual consistency. My point is that this is a strawman argument that cuts both ways.
If the architecture changes how you discuss your business, then you are doing software development wrong. You should already have been talking about what happens when an order is placed back when you built your first monolith.
Deprecation…
Deprecation…
Deprecation…
It’s never “Depreciation” with an ‘i’ after the ‘c’. Depreciation is a financial term, relating to the diminishing value of an asset over time.
When you phase a feature out, you deprecate it. You do not depreciate it.
It winds me up (obviously) how this terminology has wrongly entered the tech world.
Other than that, great video.
On premise
When you squeeze out a feature, you defecate on the old one, you defecate on it.