I suffered from data propagation between services, and yeah, it's bad. The coupling between services was terrible: we had each service's database plus another schema holding a partial copy of other services' data. Everything was propagated through CDC, which emitted data-centric events on a service bus asynchronously, with no ordering guarantee. So you had to use versions to decide whether to consume or ignore the events. It was your textbook example of what not to do.
@@CodeOpinion Events are for communication and signalling, so using them for data transfer isn't wrong. But if people just think of them like the outbox pattern ("row created", "row changed"...), they're missing the internal role of events for interaction between components inside a system, rather than just for relay between systems.
@@CodeOpinion I can see the problem, though. Let's say we have a new event that doesn't match any of the event types we have now (it isn't Order Received, Inventory Picked, Product Out of Stock, or Order Shipped) but something like inventory being taken by law enforcement as evidence of a crime. How do we fit this in? Do we create a new event type for this rare case? It's possible the inventory we deducted comes back later and we have to increment it again. Do we create a new event for that too?
I don't think this stops at events. I've seen many people misunderstand the ideas behind object-oriented programming, where a lot of people don't understand how to use and define objects and end up with an extremely coupled architecture, so I'm not surprised that the same issues appear at a higher level. Unless you put in the time and effort to understand the domain, it's extremely difficult to build complex systems that are easy to change, and, unfortunately, a lot of people don't do that. They implement solutions based on misunderstood concepts, thereby creating a lot of accidental complexity. Yes, a thousand times yes, it's "InventoryAdjusted", not "ProductUpdated", and it's "EmploymentEnded", not "EmployeeDeleted". But how can you encode business concepts in your code when all you do is resolve Jira tickets?
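The naming point can be sketched in a few lines. All the type and field names below are illustrative, not from the video: the data-centric event loses the intent, while the behavior-centric one makes it explicit.

```python
from dataclasses import dataclass

# Data-centric: "something changed" — every consumer must diff the
# snapshot to guess what actually happened and why.
@dataclass(frozen=True)
class ProductUpdated:
    product_id: str
    snapshot: dict  # entire entity; the business intent is lost

# Behavior-centric: the event name and fields state the intent directly.
@dataclass(frozen=True)
class InventoryAdjusted:
    product_id: str
    quantity_delta: int
    reason: str  # e.g. "cycle_count_correction", "damaged_in_transit"

event = InventoryAdjusted(
    product_id="sku-42",
    quantity_delta=-3,
    reason="damaged_in_transit",
)
assert event.quantity_delta == -3
```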
Agreed, it's a fundamental problem of the "Jira ticket resolver". But we as an industry enable it by thinking of ourselves as tool-specific developers, e.g. "I'm a React developer."
@@CodeOpinion I was reading some comments on this topic, and I actually want to brag about an in-house project I'm developing at my company. It's a SPA-like application made 100% with HTMX. I was mostly a backend dev, and most of my frontend experience was in React. Hypermedia was a little mind-bending with a traditional (non-nestable) router, but it's working out quite nicely. I never have to worry about state management and consistency, since the database people have already solved that for me :D
I see that OOP problem when I do job interviews: maybe 1 in 10 candidates can explain OOP in a meaningful way. Some treat classes as mere groups of functions.
This is easily solved by defining the processes in their own domain (e.g. as a BPMN process description) and inferring business events from the state changes of instances of that flow. There's no need to do it the other way around (i.e. inferring the process from events) as long as the event names from the process state transitions are unique.
If someone has a problem sharing data from one service and interpreting it for the needs of another service, then I'd recommend using a monolith. It solves all wrong interpretations.
I think I know what you're getting at, but what would really have helped me in this video was throwing a scenario at the wall of how this "misuse" of event-driven architecture would play out and how it could blow up in my face in the future. Other than that, another outstanding video! Love your extremely professional presentation, btw!
This happens over and over again with architecture, patterns, etc... What sounds good in theory doesn't always work well in practice so it gets altered to fit real-world usage.
The biggest is DevOps. It was never supposed to be a role in an organization or a department; it was supposed to be a collaborative process between DEVelopers and OPerations to facilitate faster deployments. Now look at what it became.
We used event sourcing in our last project, and we started with CRUD-based events. It wasn't really much of an issue, since there was (and ever would be) only one way most business objects could be created/updated/deleted: a user hitting a create/update/delete button. In one of the domains there was a sort of workflow, basically the main functionality of the app. This was where we had to focus more on behavior rather than data, because the history of the business object mattered: the order of events and where the events came from (from a business POV). For example, if the object was created by a user, by a message broker from another system, or imported while migrating data, the app might skip some steps of the flow or execute different commands. The other use for behavior-focused naming was logging: most of the time it was enough to just log individual events and nothing more. To be honest, it was a cool experience building an app in a different way, but for future projects I'll prefer bog-standard DDD. There's way too much work wiring events into CQRS, and there were way too many issues with the event sourcing pattern itself, some of which we didn't even have time to fix.
I see a trend away from the over abstracted OOP/DDD world and just solving the problem. It turns out that, while powerful, it takes a perfect storm that rarely exists in practice.
Maybe the level of abstraction and DDD were not the right tools for the actual problem in the first place? DDD is not needed for simple CRUD-like processes; it's in fact overkill. You don't need a lot of extensibility and flexibility? Fine, don't over-engineer and don't abstract too much. When I started, I thought DDD was a great idea for everything, but in fact it is not, like every tool. Especially in architecture, every concept has pros and cons; it's like a scale where the combination of pros and cons of the chosen solution must match the actual requirements. You will *always* give something up for something else. The question is just whether that trade-off actually satisfies your requirements, functional and non-functional. And architecture evolves, so do YAGNI/KISS: today CRUD might be better than CQRS; three years later, the project and company have grown and new requirements appear, so reevaluate, and CQRS/DDD *might* be a good choice by then.
This seems a little simplified to me. Usually, when an event happens, there is almost always data associated with it coming from the source system of record; otherwise, an event is almost meaningless. New User Created = data about the user, Employee Address Updated = the address data, Order Placed = the order data, etc. Sending data with events is the easiest way to decouple systems without adding the extra overhead of API calls.
In a lot of systems, events are more of an afterthought, used in very limited cases, rather than a philosophy that runs through the entire system; the "driven" part is very important. I have mostly seen a small set of integration events, really "notifications", just carrying data and not signifying behavior. And I miss the concept of a Domain Event within a domain. So in my view, they are not valuing events as much. Perhaps it points to a lack of design, or even bad upfront design. It's also cool to say that you have an "event-driven architecture" even though events aren't actually driving anything.
Yeah, it could be an afterthought, as well as "I need data from this service and we want to use EDA". The data distribution part is what's going to cause the coupling, even if it's not temporal coupling via EDA.
Good point, and this failure shows up in a broader set of APIs. REST too can be slapped onto existing services without consideration for the consumers of the interface, leading to lipstick API-led architecture, which is just spaghetti integration over APIs.
About the example with the Order and Payment services: the Payment service, apart from the payment method, needs to know the amount to charge. This amount will depend on the ordered items, potential discounts or coupons, etc. Where does this data come from? Do we need extra events to share it? Does the Payment service need to contact the Order service for it? I agree with the message in the video, but in real-world applications things are more complex than the example, and it's hard not to end up sharing a large amount of data.
You don't necessarily have to choose. I like messages that do double duty: they should be an actual business event (like "order shipped") but they should also include a full copy of the entity associated with that event. This makes it much easier to distribute data because any event is also a full snapshot and you can simply throw away older events to compact the topic without having to keep some kind of full snapshot around in another data store.
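A minimal sketch of that "double duty" idea, with hypothetical names: the event states the business fact, but also carries a full snapshot of the entity, so a log-compacted topic keyed by entity ID only needs to retain the latest event per key.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical event: a business fact that also carries a full snapshot.
@dataclass(frozen=True)
class OrderShipped:
    order_id: str
    shipped_at: str
    order_snapshot: dict  # full current state of the order entity

latest = OrderShipped(
    order_id="ord-1001",
    shipped_at=datetime.now(timezone.utc).isoformat(),
    order_snapshot={
        "order_id": "ord-1001",
        "status": "shipped",
        "lines": [{"sku": "sku-42", "qty": 2}],
    },
)

# Because every event is also a snapshot, a consumer can rebuild its local
# copy from the most recent event alone — older events can be compacted away.
def rebuild_local_copy(event: OrderShipped) -> dict:
    return dict(event.order_snapshot)

assert rebuild_local_copy(latest)["status"] == "shipped"
```

The trade-off, as the replies below note, is that consumers are now coupled to the snapshot schema, so that schema has to be evolved deliberately.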
Why do you need multiple services if they share the same data? Or why does one service need all data of the other service? As mentioned in the video, this couples one service to the Schema of the other service, and doesn’t fulfill one of the main benefits of event driven architecture: decoupling
@@pianochess1882 No it doesn't, because the schema of the entity that is included in the event can be different from the schema that the owning service uses internally. Consumers are _always_ coupled to the event schema. The important thing is that this schema can be evolved independently. There are many cases where one service needs data from another. For example in an e-commerce system there will be dozens of different recommendations that all ultimately need to know what products there are to sell.
The example shows the issue with sending a lightweight event: the Payment service is trusting the client to send the correct payment details. At the end of the day, different services will need different levels of detail about other services' data. Either you send a lightweight event and have the consumer retrieve the details, or you publish a heavyweight message. The first approach introduces some coupling (unless you have another SOA-type layer on top of the microservices).
I can see a few cases where the service receiving the event doesn't care at all why the data has changed, only that it has changed and even just to what. One example would be a webshop getting a message about a stock change in the warehouse. Why the stock changed is not interesting for the webshop, even by how much is not interesting, only how much the warehouse has in stock is relevant for the webshop. Getting multiple different messages telling the webshop that new stock came in, that something was sold, something was thrown out or inventory check showed items were missing just means the webshop has to parse through irrelevant information - that increases chances of bugs or missed messages because a new stock change reason hasn't been implemented in the webshop. Sometimes the simplest way is the correct way. In general I agree with the event types but sometimes they cause more problems than they solve.
The warehouse complexity sure does not need to be exposed to the shop. In DDD, you speak about domain translation when you cross boundaries. The shop should have a translation component that translates the events emitted by the warehouse domain to "simplified" events for the shop.
@@Fred-yq3fs "The shop should have a translation component that translates the events emitted by the warehouse domain to 'simplified' events for the shop." Eh, what???
I do understand what you're saying, but I agree with the point of the video: have an ItemInventoryChanged event with an ID, previous, current, and quantityChanged, instead of just a ProductUpdated event with all the properties of the entity.
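The event shape described here can be sketched as follows (field names are illustrative): the consumer learns exactly what changed without receiving the entire entity.

```python
from dataclasses import dataclass

# Hypothetical event carrying only the inventory change, not the whole product.
@dataclass(frozen=True)
class ItemInventoryChanged:
    item_id: str
    previous_quantity: int
    current_quantity: int

    @property
    def quantity_changed(self) -> int:
        # Derived rather than stored, so the three fields can't disagree.
        return self.current_quantity - self.previous_quantity

evt = ItemInventoryChanged(
    item_id="sku-42",
    previous_quantity=10,
    current_quantity=7,
)
assert evt.quantity_changed == -3
```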
A service can produce multiple events, e.g. ProductSold as well as InventoryChanged, and different consumers can react to each. That's why DDD must be looked at as a whole and not just in isolation within one domain. Each domain should at least know the best way to publish any updates happening within it.
I was about to disagree strongly, but luckily watched all the way to the end. As you said, there are perfectly valid scenarios for the data-centric approach. And it’s not only “reporting”, but any other secondary use case for the data being propagated. Granted, those use cases are often analytical in nature.
Good points. The problem isn't constrained to event-driven architecture; this mindset should be applied in a lot of other areas too. I have seen too much (and am still seeing) code where external APIs and internal domain artifacts ("services") provide a bunch of data-centric operations.
We do both. Each change results in a list of CRUD events over what state changed in the local domain model (one event for each concept that was changed), and in the metadata for this list is the name of the action that triggered this change. For consumers who want to project the data of the source into their own schema, they can do that. For consumers who want to listen to specific actions taken, they can do that. No need to choose one over the other.
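A rough sketch of that "both" approach, with made-up names: fine-grained change events are wrapped in metadata naming the business action, so projection-style consumers and action-oriented consumers can each read what they need.

```python
# Hypothetical change-set envelope: CRUD-style change events plus the
# name of the business action that triggered them.
change_set = {
    "metadata": {"action": "CancelOrder", "correlation_id": "c-123"},
    "events": [
        {"type": "OrderStatusChanged", "order_id": "o-1", "status": "cancelled"},
        {"type": "RefundRequested", "order_id": "o-1", "amount": 49.95},
    ],
}

# A projection-style consumer folds the change events into its own schema...
projected_types = [e["type"] for e in change_set["events"]]

# ...while an action-oriented consumer dispatches on the metadata.
action_handled = change_set["metadata"]["action"] == "CancelOrder"

assert action_handled
assert "RefundRequested" in projected_types
```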
Well said! Pretty much applies to any design pattern / tech / etc. The difference between (ab)using design patterns and following design patterns. I think the issue lies in a mindset where you can say "we use event driven / domain driven / etc, therefore our architecture is impeccable". Devil is in the details, high level concepts are meaningless if the same amount of thought doesn't go into all the individual parts of a system. That's where the spaghetti is born regardless of the architecture you follow.
So this seems to be the same issue facing many companies with monoliths at the moment: a lack of architectural boundaries, where making code modules (or microservices) dependent on other parts creates brittle systems. I personally know a few companies that have just remade their systems as microservices, rather than first refactoring toward independence as a modular monolith before splitting into microservices. This stems from a lack of understanding of the benefits of true decoupling.
Your videos are pure gold, thanks for sharing! At 5:43, you talk about how to distribute data from a client to multiple services. I'm very interested in which implementation approaches exist to reliably perform these steps. Do you have a video about that aspect, or do you plan to do one?
In your example I don’t see how the orderId is going to be enough for the payment system. How was the payment system notified of the amount to be charged? BTW I’m not disagreeing with the overall message. Just that your example seems incomplete.
It's probably highly related to the types of applications I build and have built, but I have such a hard time finding a good fit for my apps where I'd benefit from really decoupling different parts of the system. Sure, I can use events and queues to offload processing for performance. But I've yet to see really big benefits in having 50 databases that each only know their own thing, instead of one that can easily look up data in queries rather than asking another system or holding duplicated data. I'm sure there are types of apps that benefit from, and even require, it, but in the things I've built over 25 years I don't see the big benefits. And I've really tried, because I often like to use the new shiny toy just because I want to learn it.
Ultimately it comes down to whether the organisation one works in, has reached a certain scale and complexity that a single shared database is becoming a problem (data ownership enforcement, conceptual integrity of the domain model, performance and load, traceability etc). The problem usually is not the tech its how the organisation is designed and how it communicates amongst its various departments. Good boundaries in the organisation design should lead to good modularity in software, then whether to have those modules split up in async/sync services running in their own processes, is a decision you can defer until you need it. If you don't have a problem that a solution solves, don't use that solution no matter how cool or shiny. Don't try to invent a problem to justify using a solution.
@@cybernetic100 Yes, that makes a lot of sense. And often I have been the only developer, or the team has been very small and responsible for the whole system.
I don't like duplicated data, but you don't necessarily need to duplicate data. For example, I work on a SAAS product where basically everything needs to know about tenants, so we have a tenant database, and we have a service in front of that database. We don't force every service to have a copy of the tenant data. Other services have their own data, and that includes a tenant id, but if they want to know something about the tenant they need to ask the service. Nothing needs to know everything about a tenant though, so there is no "get whole tenant entity" endpoint. The main advantage I've found is that you can change how data is stored and deploy one service, rather than having to redeploy several services when things change. We previously had a shared database for rules, the UI would write to it, and 3 other services would read the rules and evaluate them. Every time we added a new type of rule we needed to redeploy 4 services together. We put a rules service in front, which has a crud API for the UI and an RPC API for rules evaluation, and now changes to rules only affect that service. We use a mix of crud/domain/RPC/commands/events where appropriate. For example when a tenant is deleted we need to raise an event so that other services delete everything related to that tenant, for GDPR.
I believe in "the simpler the better" approach. I see benefits of splitting a monolith into microservices only when you assign different teams (owners) to each of the microservices, so you decouple not only the code, but the release schedule, the coding pace. It comes with a cost of contracts sharing and maintenance. Sometimes I see attempts to split solutions to microservices within a single team in the initial phase with no actual ownership splitting. This brings more complexity with no benefits at all.
I am not sure I fully agree with your definition of event-driven systems. For me, the "driven" part of that name essentially means the event carries enough business-relevant state (call it a specialised case of Event-Carried State Transfer) to "drive" the consumer, i.e. for a consumer to be able to make local decisions without having to contact the producer service. For example, I might have a "purchase order rescheduled" event being emitted when my purchasing department reschedules a purchase order's delivery date. In this event I will include not only the purchase order ID, but also what the previous delivery date was and what the new delivery date is. Events are about facts, and the fact is that the delivery date changed, so this is the minimum information that makes sense for this business event, and it's also useful for maybe 90% of the workflow consumers within the org. I need to make sure I'm not emitting anything that's not under the control and ownership of my bounded context, but including enough data for the events to be relevant and useful for my consumers just makes 101 sense to me.
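The event described here might look like this (field names are my own): it carries just enough state for a downstream consumer to decide locally, with no RPC back to purchasing.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical fact-style event: the fact is that the delivery date changed,
# so both the old and new dates are part of the event.
@dataclass(frozen=True)
class PurchaseOrderRescheduled:
    purchase_order_id: str
    previous_delivery_date: date
    new_delivery_date: date

evt = PurchaseOrderRescheduled(
    purchase_order_id="po-77",
    previous_delivery_date=date(2024, 3, 1),
    new_delivery_date=date(2024, 3, 15),
)

# A downstream planner can react without calling the producer back:
delayed = evt.new_delivery_date > evt.previous_delivery_date
assert delayed
```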
Absolutely agree, so we might be misunderstanding each other. Events can carry data beyond identifiers, just like your purchase-order-rescheduled example. What I'm referring to is CRUD-over-entity data-change events used to propagate data to other services that lack any kind of business meaning. As you said, your event carries enough business meaning; ProductUpdated doesn't mean anything.
It depends on what problem you're trying to solve. I work at a large company with numerous apps across numerous product lines. Some data is common across the org; by replicating this data we remove single points of failure in shared APIs. Of course our "event stack" could go down, but most apps could chug along for a bit with stale data.
The biggest problem is that in order to identify those business events, you have to talk to the business. And one occasional conversation is not enough; there should be established two-way communication. And here we hit another problem: usually (at least in what I've seen, especially in enterprises, even successful ones) there are people with a strong sense of purpose to isolate developers from the business (or vice versa), and they are often powerful enough to overrule both developers and the business. The second problem is the lack of an established methodology for approaching the software requirements process. It's very difficult for people to stay focused on the core business problem for long before resorting to mechanistic UI mocks or database table design. In essence, that stems from the easy-vs-simple dichotomy that has haunted the industry for almost two decades, IMO.
One thing I struggle with is forms. You have a CRUD-ish API and you expose the ability to change multiple entity attributes at the same time. For example, you can edit your product with a single form describing all the fields you can change: name, description, price, etc. Should you prefer one event describing all the changes, like ProductUpdated? Or should you split it into multiple events, each describing part of the change, like ProductRenamed, ProductPriceAdjusted, etc.? Or a mix of both: if you do a PUT to products/:product_id you use ProductUpdated, but if you also expose a POST to products/:product_id/adjust-pricing which only updates the price, you can also have ProductPriceChanged?
What I find good about CRUD is that people can make changes all over the entity on the client side, double-check that everything is okay, then push Save. With commands, this workflow is not possible: all kinds of modal dialogs pop up, and on confirmation they push data to the server. There's no way to be unsatisfied with the modification and revert to the initial entity state.
@@dragomirivanov7342 And what I don't like about CRUD is that people can make changes all over the entity on the client side :) Jokes aside, I think CRUD makes a lot of sense with forms. But when you want to push it a little further, it becomes pretty complex to understand the intent behind a generic PUT. As soon as you have a couple of business rules, you should start expanding your CRUD into a CRUD-ish design where some general attributes can be set via CRUD but actions get their own endpoints. At least that's what makes the most sense to me, but it's hard sometimes, especially if you allowed editing an attribute via CRUD and now want to revoke that ability.
Great comment. As with anything, there is no perfect solution, IMO. Maybe ProductUpdated with a before and after as well as an array of modified properties? Then a consumer that cares about price changes could say: if (eventDetails.changedFields.includes("price")) { /* do something if the price is lower */ if (eventDetails.previous.price > eventDetails.current.price) { ... } }
@@kevinclark1783 This is what I do: `reflect.DeepEqual(old.field, new.field)` and then decide what the command should have been. However, supplying an `$intent` field in the entity can provide another hint.
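That diff-then-decide approach might look like this in Python (the event names are hypothetical): compare the old and new entity, derive the changed fields, and infer what the command "should have been", falling back to the generic event when the intent is unclear.

```python
# Compute which fields differ between two entity snapshots.
def changed_fields(old: dict, new: dict) -> set:
    return {k for k in old.keys() | new.keys() if old.get(k) != new.get(k)}

old = {"name": "Widget", "price": 19.99}
new = {"name": "Widget", "price": 14.99}

fields = changed_fields(old, new)

# Infer the intent from the shape of the change; fall back to the
# generic data-centric event when we can't tell what the user meant.
if fields == {"price"} and new["price"] < old["price"]:
    inferred_event = "ProductPriceLowered"  # hypothetical event name
else:
    inferred_event = "ProductUpdated"

assert inferred_event == "ProductPriceLowered"
```

Note the weakness this thread is circling: inference only works when the diff is unambiguous, which is exactly why an explicit intent field (or a task-based endpoint) is more reliable.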
It could be, but it depends on how you're navigating it. They know the workflow, so in that sense yes, but I favor hypermedia, and then the client doesn't know where it's sending a request.
Hey @CodeOpinion, I agree that describing domain behavior in events makes more sense than describing data operations, but when a CRUD operation affects multiple data points of an entity at once, I find it easier to just say "entity has changed" and have the consumer decipher what happened than to dispatch 20 different events, each handled by a separate handler. Especially when, regardless of which specific properties changed, all I want is to notify about the general entity change. Does that mean I'm lazy for not writing code that describes all the changes in a behavioral way, or is there also a place for CRUD events, with the developer choosing when to use each? Thanks
Why did a CRUD operation affect 20 data points? Why was this operation performed? Did a user just go "I'm randomly gonna update 20 properties"? That usually doesn't happen, so what was their intention?
For data consistency, when it is required, is it best to keep the event messages synchronous? Or is it better not to use messages at all in these situations, and instead keep the events internal, handled in-process rather than distributed?
Can't say it makes much of a difference; that's nothing more than a naming convention. To me, EmployeeDeleted and EmploymentEnded are more or less identical, BUT neither says anything about what the system should do in response. So if I'm new on a project, I won't know whether it does what it must, no matter how the event is named.
I highly disagree with this one-vs-the-other approach. In my experience it's highly beneficial to have different event domains (e.g. one for business events, one for data-change events) living on the same event bus, using references to link them (e.g. firing an order-cancelled event E1, and in response a handler fires a customer-balance-changed event E2 with a reference to E1 by event ID). That way you know what changed, why it changed, and when it changed, and you can choose to focus on only one domain for any module in your system to keep complexity low. Especially for monitoring and anomaly detection in your business processes, that's a godsend.
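A small sketch of those two linked event domains (all names illustrative): the data-change event references the business event that caused it by event ID, so a consumer can correlate *what* changed with *why*.

```python
import uuid

# Business-domain event: states why something happened.
business_event = {
    "event_id": str(uuid.uuid4()),
    "domain": "business",
    "type": "OrderCancelled",
    "order_id": "o-9",
}

# Data-domain event: states what changed, linked back to the cause.
data_event = {
    "event_id": str(uuid.uuid4()),
    "domain": "data",
    "type": "CustomerBalanceChanged",
    "customer_id": "c-4",
    "delta": 49.95,
    "caused_by": business_event["event_id"],  # the cross-domain link
}

# A monitoring consumer can join the two streams on this reference.
assert data_event["caused_by"] == business_event["event_id"]
```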
More often than not, services won't have the entire context to do their part of the job and will have to RPC other services for information (in your example, the Payment service might need user info to pass to the gateway). So each event will generate additional chatter between services. How do you address that?
Hey @CodeOpinion, wouldn't it be better to have events that carry bulk data instead of a single entity, and to apply that pattern throughout the system? E.g. EmployeesDeleted instead of EmployeeDeleted.
I've had a lot of success with business events and trying not to store "unrelated" data in our system, but what always seems to get me is a new requirement where our users want to see our data alongside data from other systems in the UI. Imagine a table that lists the tracking numbers for all outgoing shipments, but also includes the total order value that isn't stored in your shipping service. It's getting ugly. Now imagine your users want to filter the list to only shipments where the total order value exceeds $x.xx. How do you filter your shipments on that criterion?
The reality is you really can't. You'll likely need a denormalized projection of all the upstream systems. Old school RDBMS still drives business thinking.
Let's say that I'm integrating with a 3rd Party, and it expects that we send it a model which contains information from 2 different domains in our application. This needs to happen when a 3rd domain publishes a specific event. What would be the right way to do this?
Compose the data from disparate sources as you need to. It could be on demand from the various boundaries, or you may have the data already composed. If you're handling the initial event async, then you have a lot more options for handling failures if you need to make sync request/response calls to other boundaries to get that data.
Consider a larger landscape with many systems listening to, for example, product data modifications. Wouldn't it be more efficient to also send what actually changed in the data, so that not every system needs to implement the API/RPC calls, and so the source system isn't potentially slowed down by all the inbound requests? Notifications vs. data events, in this case.
Question: if an employee record is changed by editing, or is deleted, there is no business cause for that effect. What would be the reason cited in that case?
So what I'm often missing in these kinds of examples is where the context of these events goes, like which user and at which timestamp the event occurred. How would you pass these kinds of details around? If I want, for example, to create some kind of logging service, I surely need this kind of context per event.
@@CodeOpinion And why not make this metadata part of the message, i.e. simply include the full entity in addition to the event? Metadata (headers in Kafka, for example) is not as well supported by tooling.
I agree with metadata: Event { type: string, details: an object with the information needed for that event, metadata: an object with anything else you want for logging/debugging }. But if the consumer does something based on a user detail, then IMO the user should be part of the "details", because at that point it's not only used for informational purposes.
@@kevinclark1783 Metadata sounds like a good solution, but where does this metadata get filled in? Currently I've implemented it (PHP/Symfony) by passing the user's ID and the current timestamp to the behavioral methods on my domain model, but this causes a lot of duplicate/boilerplate code. Would you inject some sort of event factory into your model that includes this metadata, or pass it to the method in a 'meta' value object?
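One way to cut that boilerplate, sketched in Python rather than PHP/Symfony (names are my own): an event factory owns the ambient context (user, clock), so domain methods only supply the event type and payload, and the metadata is filled in one place.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Envelope shape from the thread above: type + details + metadata.
@dataclass(frozen=True)
class Event:
    type: str
    details: dict
    metadata: dict

class EventFactory:
    """Resolved once per request (e.g. from the auth context) and injected
    into the domain model, so behavioral methods don't repeat the plumbing."""

    def __init__(self, user_id: str):
        self.user_id = user_id

    def create(self, event_type: str, details: dict) -> Event:
        return Event(
            type=event_type,
            details=details,
            metadata={
                "user_id": self.user_id,
                "occurred_at": datetime.now(timezone.utc).isoformat(),
            },
        )

factory = EventFactory(user_id="u-123")
evt = factory.create("EmploymentEnded", {"employee_id": "e-9"})
assert evt.metadata["user_id"] == "u-123"
```

A 'meta' value object passed explicitly works too; the factory just centralizes where the context is resolved.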
Been working with event-driven IDM solutions for the better part of 20 years; it's very difficult to get your head around, especially CRUD, if you've only known stateful programming and systems from the start.
Faking Event Sourcing using a request -> db -> CDC -> transform -> broker loses so much useful information. And yes, means all downstream services are now tightly coupled to your schema. Very brittle design.
At first I was skeptical: where is he going with this? Then I saw the CDC box in the diagram, effectively exposing all the internals of service B. What an abomination! People do everything to avoid actually designing their software. Btw, besides reporting (data warehousing), there are also search services and caching that might justify peeking a bit at the data of others, but no service that performs business logic as part of a workflow. Those should communicate at the business object/event level.
If that's a trend, it's a strange one for sure... like a really cumbersome way to manually sync databases? Someone ought to tell them databases can do that automatically ;-) Btw, your audio is badly out of sync on your last couple of videos.
@@CodeOpinion I noticed it as well. Video seems to lag behind audio, it's pretty distracting. I had to stop watching the video and just listen to the audio so I could focus on the content, which was great by the way!
You just inverted the coupling of the Order and Payments services; one still needs to know about the other. The more important question is who should know about whom, and I say Payments shouldn't care about orders at all: it should facilitate only the money taking/refunding process, while when to do what is the Order service's business logic. Payments can then be used by other services and doesn't need to change when a new client comes along.
This approach also solves a problem people usually run into: you have multiple consumers of the events, and each needs different data. By handling the event in memory with a handler that actually invokes a command on the target service (which you can do with a transactional outbox), you know exactly what data is needed for that command.
It's the same for, say, a Shipping service. When you place an order, you could call /ship and know exactly what information is needed for shipping to ship it. In the real world, when a customer orders something from a store and asks for it to be shipped, and there are no IT systems, the people from the store would go to the post office, for example, and tell them to ship it. Issuing a command, basically.
From one of your other videos: you publish the OrderPlaced event, have a handler for it that uses MediatRHangfireBridge to invoke a ShipRequest, and have a ShipRequestHandler in a background process that handles it. You can also choose which data to incorporate into the ShipRequest when creating it (data you already have in memory, for example), and you can fetch any other needed data later in the ShipRequest handler before invoking the actual shipping service.
@@CodeOpinion more like abused than used 😉 that’s why it’s crucial that you have solution architect that knows what event driven architecture actually is and what it isn’t.
I don't see the point of this video and I disagree with most of it. Data-driven events can be very useful for separating the load on different services. One service may need 100 times the processing power of another dependent service, hence decoupling. This is one of the main reasons for the microservice architecture approach. Why is this not even mentioned in this video? Also, I am not sure the other naming convention discussed here is better. The system should react in predictable ways to predictable events. If we have the same Employee Deleted event named in 3 different ways based on the business context it will produce a lot of confusion (esp. for new developers) and unnecessary code bloat.
I never suggested having the same event be named multiple different ways. Data centric events can be useful in query/reporting purposes when you need to compose disparate data. That's very different than needing data for another service and you expect consistency because you need to enforce business rules. We clearly have a different view of what services are. Processing power and how you physically deploy a logical boundary is limitless. That doesn't make them separate "microservices". Hard disagree with "one of the main reasons for microservices". That notion is one of the main reasons why so many systems are in a coupling nightmare and can't make sense of microservices.
@@CodeOpinion Again, disagree with most of your comment. Indeed we have different views on what is a microservice and what good architecture/software engineering practices are. Perhaps my view on this is skewed from my experience with high load, low latency systems. Agree to disagree.
I suffered from data propagation between services, and yeah it's bad. The coupling between services was terrible: we had the service's database and another schema that was holding a partial copy of other services' data. All was propagated through a CDC which was emitting data-centric events on a service bus in an async way, no ordering guarantee. So you'd have to use versions to consume or ignore the events. It was your textbook example of what not to do.
Spot on. Events should be business events, e.g., Order Received, Inventory Picked, Product Out of Stock, Order Shipped, etc.
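A rough sketch of what those could look like as distinct event types (TypeScript, names and fields purely illustrative), where the business meaning lives in the contract instead of a generic "row changed" event:

```typescript
// Business events carry intent, not just data mutations (names are illustrative).
type BusinessEvent =
  | { type: "OrderReceived"; orderId: string }
  | { type: "InventoryPicked"; orderId: string; sku: string; quantity: number }
  | { type: "ProductOutOfStock"; sku: string }
  | { type: "OrderShipped"; orderId: string; trackingNumber: string };

// A consumer can react to the *meaning* of each event.
function describe(e: BusinessEvent): string {
  switch (e.type) {
    case "OrderReceived": return `order ${e.orderId} received`;
    case "InventoryPicked": return `picked ${e.quantity} x ${e.sku} for ${e.orderId}`;
    case "ProductOutOfStock": return `${e.sku} is out of stock`;
    case "OrderShipped": return `order ${e.orderId} shipped (${e.trackingNumber})`;
  }
}
```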
Exactly 💯
The trend however seems more and more it's being used for data distribution.
@@CodeOpinion events are for communication, signalling - so using them for data transfer isn't wrong, but if people are just thinking of them like the outbox pattern, "row created", "row changed"... they're missing the internal role of events for interaction between components inside systems, rather than just for relay between systems
@@CodeOpinion I can see the problem though. Let's say we have a new event that does not match any of the event types we have now, so it isn't any of Order Received, Inventory Picked, Product Out of Stock, Order Shipped, but something like inventory was taken by law enforcement as evidence of a crime. Now how do we fit this? Do we create a new event type for this rare occurrence? It is possible this inventory that we deducted would come back later and we have to increment it again. Do we create a new event for that?
@@etexas What's stopping you from creating that new event?
"EmployeeDeleted" sounds like a murder investigation and labor lawsuit about to happen
"prepare for deallocation" ;-)
I don't think this stops at events. I've seen many people misunderstand the ideas behind object-oriented programming, where a lot of people don't understand how to use and define objects and end up with an extremely coupled architecture, so I'm not surprised that the same issues will appear at a higher level. Unless you put the time and effort to understand the domain, it's extremely difficult to build complex systems that are easy to change and, unfortunately, a lot of people don't do that and implement solutions based on misunderstood concepts and so they are creating a lot of accidental complexity.
Yes, a thousand times yes, it's "InventoryAdjusted", not "ProductUpdated", and it's "EmploymentEnded", not "EmployeeDeleted". But how can you encode business concepts in your code when all you do is resolve JIRA tickets?
Agreed, it's a fundamental problem of the "jira ticket resolver". But we as an industry enable it by thinking of ourselves as tool-specific developers. E.g., I'm a React Developer.
@@CodeOpinion I was reading some comments and on this topic, I actually want to brag about an in-house project I'm developing at my company. It's a SPA-like application made 100% with HTMX. I was mostly a backend dev and most of my frontend experience was in React. HyperMedia was a little mind bending with a traditional (non-nestable) router but it's working out quite nicely. I actually never have to worry about state management and consistency since the database people have already solved that for me :D
I see that OOP problem too: when I do job interviews, maybe 1 in 10 candidates can explain OOP in a meaningful way. Some candidates treat classes as groups of functions.
This is easily solved by defining the processes in their own domain (e.g. as a BPMN process description) and inferring business events from the state changes of instances of that flow.
No need to do it the other way round (i.e. inferring the process from the events) as long as the event names from the process state transitions are unique.
if someone has a problem sharing data from one service and interpreting it for the needs of the other service then I'd recommend using a monolith. It solves all misinterpretations.
I think I know what you're getting at, but what would really have helped me in this video was throwing a scenario at the wall of how this "misuse" of event-driven-architecture would be processed and how it could blow up in my face in the future. Other than that another outstanding video! love your extremely professional presentation btw!
This happens over and over again with architecture, patterns, etc... What sounds good in theory doesn't always work well in practice so it gets altered to fit real-world usage.
The biggest is DEVOPS. It was never supposed to be a role in an organization / department. It was supposed to be a collaborative process between DEVelopers and OPerations to facilitate faster deployments. Now look at what it became.
We used event sourcing in the last project and we started with CRUD-based events. It wasn't really that much of an issue since there was (and ever would be) only one way most business objects could be created/updated/deleted - a user hitting a create/update/delete button.
In one of the domains there was sort of a workflow, basically the main functionality of the app. This was where we had to focus more on the behavior rather than data, because the history of the business object would matter - the order of events and where the events came from (from a business POV). For example if the object was created by a user, a message broker from another system or imported while migrating data, the app might skip some steps of the flow or execute different commands.
The other use for behavior-focused naming was logging - most of the time it was enough to just log individual events and nothing more.
To be honest, it was a cool experience building an app in a different way, but for future projects I'll prefer the bog-standard DDD. There's way too much work wiring up events into CQRS, and there were way too many issues with just the event sourcing pattern itself, some of which we didn't even have time to fix.
I see a trend away from the over abstracted OOP/DDD world and just solving the problem. It turns out that, while powerful, it takes a perfect storm that rarely exists in practice.
Maybe the level of abstraction and DDD were not the right tool for the actual problem in the first place? DDD is not needed for simple CRUD-like processes; it's in fact overkill. You don't need a lot of extensibility and flexibility? Fine, don't over-engineer and don't abstract too much. When I started I thought that DDD was a great idea for everything, but in fact it is not, like any tool. Especially in architecture, every concept has pros and cons; it's like a scale, and the combination of pros and cons of the chosen solution must match the actual requirements. You will *always* give something up for something else. The question is just whether that trade-off actually satisfies your requirements, functional and non-functional. And architecture evolves, so do YAGNI/KISS: today CRUD might be better than CQRS; 3 years later, the project and company grew, new requirements appeared, reevaluate, and CQRS/DDD *might* be a good choice by now.
This seems a little simplified to me. Usually, when an event happens, there is almost always data associated with it coming from the source system of record. Otherwise, an event is almost meaningless. New User Created = data about the user, Employee Address Updated = the address data, Order Placed = order data, etc. Sending data with events is the easiest way to decouple systems, and not have to add the additional overhead of API calls.
In a lot of systems, events often are more of an afterthought, used in very limited cases, rather than the philosophy throughout the entire system - the "driven" part is very important. I have mostly seen a small set of integration events - rather "notifications" - just carrying data and not signifying behavior. And I miss the concept of a Domain Event within a domain. So in my view, they are not valuing events as much. Perhaps it points to a problem of lack of design - or even bad upfront design. It is also cool to say that you have an "Event-driven architecture" although events are not driving anything.
Ya it could be an afterthought, as well as a "I need data from this service and we want to use EDA". The data distribution part is what is going to cause the coupling even if it's not temporal via EDA.
Good point, and this failure is seen in a broader set of APIs. RESTful too can be slapped on existing services without consideration of the consumers of the interface, leading to lipstick API-led architecture, which is just spaghetti integration over APIs.
About the example with the Order and Payment services: the Payment service, apart from the payment method, needs to know the amount to charge. This amount will depend on the ordered items, potential discounts or coupons, etc. Where does this data come from?
Do we need extra events to share this data? Does the Payment service need to contact the Order service for that?
I agree with the message in the video, but in real world applications things are more complex than the example and it is hard not to end up sharing a big amount of data.
You don't necessarily have to choose. I like messages that do double duty: they should be an actual business event (like "order shipped") but they should also include a full copy of the entity associated with that event. This makes it much easier to distribute data because any event is also a full snapshot and you can simply throw away older events to compact the topic without having to keep some kind of full snapshot around in another data store.
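A minimal sketch of such a "double duty" message (TypeScript, shape entirely illustrative): a business event that also carries a full snapshot of the entity, so older events can be compacted away without losing state:

```typescript
// Full snapshot of the entity at the time of the event.
interface OrderSnapshot {
  orderId: string;
  status: "placed" | "shipped" | "cancelled";
  items: { sku: string; quantity: number }[];
}

// The message is both a business fact ("order shipped") and a data carrier.
interface OrderShippedEvent {
  type: "OrderShipped";
  occurredAt: string; // ISO timestamp
  order: OrderSnapshot; // full copy: older events can be compacted away
}

function makeOrderShipped(order: OrderSnapshot, at: Date): OrderShippedEvent {
  return {
    type: "OrderShipped",
    occurredAt: at.toISOString(),
    order: { ...order, status: "shipped" },
  };
}
```

Consumers that care about the fact react to `type`; consumers that project data read `order` and ignore the rest.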
Why do you need multiple services if they share the same data? Or why does one service need all data of the other service? As mentioned in the video, this couples one service to the Schema of the other service, and doesn’t fulfill one of the main benefits of event driven architecture: decoupling
@@pianochess1882 No it doesn't, because the schema of the entity that is included in the event can be different from the schema that the owning service uses internally. Consumers are _always_ coupled to the event schema. The important thing is that this schema can be evolved independently.
There are many cases where one service needs data from another. For example in an e-commerce system there will be dozens of different recommendations that all ultimately need to know what products there are to sell.
The example shows the issue with sending a lightweight event. The payment service is trusting the client to send the correct payment details.
At the end of the day, different services will need different levels of detail about the data in other services. Either you send a lightweight event and have a service to retrieve details, or you publish a heavyweight message. If you use the first approach you are introducing some coupling (unless you have another SOA-type layer on top of the microservices).
I can see a few cases where the service receiving the event doesn't care at all why the data has changed, only that it has changed and even just to what.
One example would be a webshop getting a message about a stock change in the warehouse. Why the stock changed is not interesting for the webshop, even by how much is not interesting, only how much the warehouse has in stock is relevant for the webshop. Getting multiple different messages telling the webshop that new stock came in, that something was sold, something was thrown out or inventory check showed items were missing just means the webshop has to parse through irrelevant information - that increases chances of bugs or missed messages because a new stock change reason hasn't been implemented in the webshop.
Sometimes the simplest way is the correct way.
In general I agree with the event types but sometimes they cause more problems than they solve.
The warehouse complexity sure does not need to be exposed to the shop.
In DDD, you speak about domain translation when you cross boundaries.
The shop should have a translation component that translates the events emitted by the warehouse domain to "simplified" events for the shop.
@@Fred-yq3fs "The shop should have a translation component that translates the events emitted by the warehouse domain to "simplified" events for the shop."
Eh, what???
I do understand what you are saying. But I agree with the point of the video: have an ItemInventoryChanged event with an ID, previous, current, and quantityChanged, instead of just a ProductUpdated event with all properties of the entity.
A service can produce multiple events, e.g. ProductSold as well as InventoryChanged, and different consumers can react. That's why DDD must be looked at as a whole and not just isolated to one domain. Each domain should at least know the best way to publish any updates that are happening in it.
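That ItemInventoryChanged shape could look something like this (TypeScript sketch; field names follow the comment above and are purely illustrative):

```typescript
// Says *what* changed about inventory without leaking the whole entity.
interface ItemInventoryChanged {
  type: "ItemInventoryChanged";
  id: string;
  previous: number;
  current: number;
  quantityChanged: number; // derived: current - previous
}

function inventoryChanged(id: string, previous: number, current: number): ItemInventoryChanged {
  return {
    type: "ItemInventoryChanged",
    id,
    previous,
    current,
    quantityChanged: current - previous,
  };
}
```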
I was about to disagree strongly, but luckily watched all the way to the end. As you said, there are perfectly valid scenarios for the data-centric approach. And it’s not only “reporting”, but any other secondary use case for the data being propagated. Granted, those use cases are often analytical in nature.
Glad you watched till the end. Often it is for reporting and UI composition type purposes rather than needing data to enforce business rules.
Good points. The problem is not constrained to event driven architecture. This should be the mindset used in a lot of different aspects too. I have seen too much code (and am still seeing it now) where external APIs and internal domain artifacts ("services") provide a bunch of data-centric operations.
We do both. Each change results in a list of CRUD events over what state changed in the local domain model (one event for each concept that was changed), and in the metadata for this list is the name of the action that triggered this change. For consumers who want to project the data of the source into their own schema, they can do that. For consumers who want to listen to specific actions taken, they can do that. No need to choose one over the other.
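A minimal sketch of that dual approach (TypeScript; all names made up): per-concept CRUD change events, with the triggering action carried in the metadata so action-oriented consumers can filter on it:

```typescript
// One state-change event per concept that changed.
interface ChangeEvent {
  entity: string;                            // which concept changed, e.g. "Inventory"
  operation: "created" | "updated" | "deleted";
  data: Record<string, unknown>;
}

// The list of changes plus the triggering action in the metadata.
interface ChangeSet {
  metadata: { action: string };              // e.g. "InventoryAdjusted"
  changes: ChangeEvent[];
}

// A data-projection consumer reads `changes`; an action-oriented consumer
// filters on `metadata.action`.
function actionsOfInterest(sets: ChangeSet[], action: string): ChangeSet[] {
  return sets.filter(s => s.metadata.action === action);
}
```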
Well said!
Pretty much applies to any design pattern / tech / etc. The difference between (ab)using design patterns and following design patterns.
I think the issue lies in a mindset where you can say "we use event driven / domain driven / etc, therefore our architecture is impeccable".
Devil is in the details, high level concepts are meaningless if the same amount of thought doesn't go into all the individual parts of a system. That's where the spaghetti is born regardless of the architecture you follow.
So this seems to be the same issues facing many companies that have monoliths at the moment, lack of architectural boundaries, and making their code modules (or microservices) dependent on other parts creates brittle systems.
I know a few companies personally that have just remade their systems as microservices, rather than refactoring to make them independent as a modular monolith before splitting to microservices... this stems from a lack of understanding the benefits of true decoupling
Your videos are pure gold, thanks for sharing!
In 5:43, you talk about how to distribute data from a client to multiple services. I am very interested which implementation approaches exist to reliably perform these steps. Do you have a video about that aspect, or would you plan to do one?
In your example I don’t see how the orderId is going to be enough for the payment system. How was the payment system notified of the amount to be charged?
BTW I’m not disagreeing with the overall message. Just that your example seems incomplete.
It's probably highly related to the types of applications I build and have built, but I have such a hard time finding a good fit for my apps where I benefit from really decoupling differents parts of the system.
Sure I can use events and queues to offload processing to later for performance. But I've yet to see really big benefits of having 50 databases that only knows its thing, instead of 1 that can easily lookup data in queries instead of asking another system or have duplicated data.
I'm sure there are types of apps that benefits and even require it, but the things I've built over 25 years I don't see the big benefits. And I've really tried, because I often like to use the new shiny toy just because I want to learn it.
Ultimately it comes down to whether the organisation one works in, has reached a certain scale and complexity that a single shared database is becoming a problem (data ownership enforcement, conceptual integrity of the domain model, performance and load, traceability etc). The problem usually is not the tech its how the organisation is designed and how it communicates amongst its various departments. Good boundaries in the organisation design should lead to good modularity in software, then whether to have those modules split up in async/sync services running in their own processes, is a decision you can defer until you need it.
If you don't have a problem that a solution solves, don't use that solution no matter how cool or shiny. Don't try to invent a problem to justify using a solution.
@@cybernetic100 Yes that makes a lot of sense. And often, I have been the only developer, or the team has been very small and responsible for the whole system.
I don't like duplicated data, but you don't necessarily need to duplicate data.
For example, I work on a SAAS product where basically everything needs to know about tenants, so we have a tenant database, and we have a service in front of that database. We don't force every service to have a copy of the tenant data. Other services have their own data, and that includes a tenant id, but if they want to know something about the tenant they need to ask the service. Nothing needs to know everything about a tenant though, so there is no "get whole tenant entity" endpoint.
The main advantage I've found is that you can change how data is stored and deploy one service, rather than having to redeploy several services when things change.
We previously had a shared database for rules, the UI would write to it, and 3 other services would read the rules and evaluate them. Every time we added a new type of rule we needed to redeploy 4 services together. We put a rules service in front, which has a crud API for the UI and an RPC API for rules evaluation, and now changes to rules only affect that service.
We use a mix of crud/domain/RPC/commands/events where appropriate. For example when a tenant is deleted we need to raise an event so that other services delete everything related to that tenant, for GDPR.
I believe in "the simpler the better" approach.
I see benefits of splitting a monolith into microservices only when you assign different teams (owners) to each of the microservices, so you decouple not only the code, but the release schedule, the coding pace. It comes with a cost of contracts sharing and maintenance.
Sometimes I see attempts to split solutions to microservices within a single team in the initial phase with no actual ownership splitting. This brings more complexity with no benefits at all.
It would be nice to have business events like EmployeeTerminated causally linked to technical events like EmployeeDeleted, CredentialsRevoked, etc.
I am not sure I fully agree with your definition of event-driven systems. For me, the "driven" part of that name essentially means that the event carries enough business-relevant state (call it a specialised case of Event-Carried State Transfer) to "drive" the consumer, i.e. for a consumer to be able to make local decisions without having to contact the producer service.
For example, I might have a "purchase order rescheduled" event being emitted when my purchasing department reschedules a purchase order's delivery date. In this event I will not only include the purchase order id, but also what the previous delivery date was and what the new delivery date is. Events are about facts, and the fact is that the delivery date changed, so this is the minimum information that will make sense for this business event, which is also useful for maybe 90% of the workflow consumers within the org. I will need to make sure I am not emitting anything that's not under the control and ownership of my bounded context, but including enough data for the events to be relevant and useful for my consumers just makes basic sense to me.
Absolutely agree, so we might be misunderstanding each other. Events can carry data beyond identifiers, just as in your example of purchase order rescheduled. What I'm more referring to is CRUD-over-entity data changes used to propagate data to other services that lack any type of business meaning. As you said, the event carries enough business meaning. ProductUpdated doesn't mean anything.
It depends on what problem you are trying to solve. I work at a large company which has numerous apps in numerous product lines. Some data is common across the org... by replicating this data we remove single points of failure in shared APIs. Of course our "event stack" could go down, but most apps could chug along for a bit with stale data.
The biggest problem is that in order to have those business events identified, you've got to talk to the business. And just one occasional talk is not enough; there should be an established two-way communication. And here we have another problem: usually (at least in what I've seen, and especially in enterprises, even successful ones) you can find folks with a strong sense of purpose to isolate developers from the business (or vice versa). And these people are often powerful enough to overrule both developers and business. The second problem is the lack of an established methodology for how to approach the software requirements process. It's very difficult for people to keep focused on the core business problem for long enough before they resort to mechanistic UI mocks or database table design. In essence, that roots from the dichotomy of easy vs simple that has haunted the industry for almost 2 decades IMO.
One thing I struggle with is forms... You have a CRUD-ish API and you expose the ability to change multiple entity attributes at the same time. For example, you can edit your product and have a single form describing all the fields you can change: Name, Description, Price, etc. Should you prefer 1 event describing all the changes like ProductUpdated? Or should you split it into multiple events, each describing some part of the changes, like ProductRenamed, ProductPriceAdjusted, etc.? Or a mix of both: if you do a PUT products/:product_id you use ProductUpdated, but if you also expose a POST products/:product_id/adjust-pricing which only updates the price, you can also have ProductPriceChanged?
What I find good about CRUD is that people can make changes all over the entity on the client side, double-check if everything is okay, then push Save. With commands, this workflow is not possible: all kinds of modal dialogs popping up, each pushing data onto the server when confirmed. There's no way to be unsatisfied with the modification of the data and revert to the initial entity state.
@@dragomirivanov7342 And what i don't like about CRUD, is that people can make changes all over the entity on the client side :)
Jokes aside, i think CRUD makes a lot of sense with forms. But when you want to push it a little further, it becomes pretty complex to understand the intent behind a generic PUT. As soon as you have a couple business rules, you should start expanding your CRUD into a CRUD-ish where some general attributes can be set via CRUD but actions have their own end-points
At least that's what i think makes more sense to me but it's hard sometimes especially if you allowed editing an attribute via CRUD and now you want to revoke that ability..
Great comment. As with anything, there is no perfect solution imo. Maybe ProductUpdated with a before and after as well as an array of modified properties? So a consumer that cares about price changes could say:
if (eventDetails.changedFields.includes("price") && eventDetails.previous.price > eventDetails.current.price) { /* do something if price is lower */ }
@@kevinclark1783 This is what I do: `reflect.DeepEqual(old.field, new.field)` and then decide what the command should have been. However, supplying an `$intent` field in the entity gives another hint.
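A TypeScript take on that diff-the-entity idea (the `reflect.DeepEqual` above is Go; everything here, including the `Product` shape, is made up for illustration):

```typescript
// A tiny entity to diff (illustrative).
interface Product {
  name: string;
  price: number;
}

interface FieldChanged {
  field: keyof Product;
  previous: unknown;
  current: unknown;
}

// Compare two versions of the entity field by field and emit one change
// record per field that actually differs.
function inferChanges(oldP: Product, newP: Product): FieldChanged[] {
  const fields: (keyof Product)[] = ["name", "price"];
  return fields
    .filter(f => oldP[f] !== newP[f])
    .map(f => ({ field: f, previous: oldP[f], current: newP[f] }));
}
```

From there you can map a `price` change to a ProductPriceAdjusted-style event, or fall back to a generic update when no intent can be inferred.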
Don't you think that the client service in this instance is becoming the choreographer/workflow?
It could be, but it depends on how you're navigating it. They know the workflow, so in that sense yes, but I favor hypermedia, and then the client doesn't know where they are sending a request.
Hey @CodeOpinion, I agree that describing domain behavior in events makes more sense than describing data operations, but when a CRUD operation affects multiple data points of an entity at once I find it easier to just say "entity has changed" and have the consumer decipher what happened than dispatch 20 different events, each handled by a separate handler. Especially when, regardless of the specific properties that changed, all I want is to notify about the general entity change. Does that mean I'm lazy for not writing code that describes all the changes in a behavioral way, or is there also a place for CRUD events and it's up to the developer to choose when to use each? Thanks
Why did a crud operation affect 20 data points? Why was this operation performed? Did a user just go „I’m just randomly gonna update 20 properties“. This usually doesn’t happen, so what was their intention?
For data consistency, when it is required, is it best to keep the event messages synchronous? Or better to not use messages for events, but keep events in these situations as internal methods, not distributing them?
Can't say it makes much of a difference. That's nothing more than a naming convention. To me EmployeeDeleted and EmploymentEnded are more or less identical, BUT they say nothing about what the system should do about it. So if I'm a new guy on a project, I won't know whether it did what it must do or not, no matter how the event is named.
I highly disagree with this one-vs-the-other approach. In my experience it's highly beneficial to have different event domains (e.g. one for business events, one for data change events) living on the same event bus and using references to link them (e.g. firing an order-cancelled event E1, and in return a handler fires a customer-balance-changed event E2 with a reference to E1 by event ID). That way you know what changed, why it changed and when it changed, and you can choose to focus on only one domain for any module in your system to keep complexity low.
Especially for monitoring and anomaly detection in your business processes that's a godsend.
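A sketch of that linking idea (TypeScript; field names like `causedBy` are made up, though the concept resembles the common causation-id pattern):

```typescript
// Business event domain.
interface OrderCancelled {
  eventId: string;
  type: "OrderCancelled";
  orderId: string;
}

// Data-change event domain, linked back to the business event that caused it.
interface CustomerBalanceChanged {
  eventId: string;
  type: "CustomerBalanceChanged";
  customerId: string;
  delta: number;
  causedBy: string; // eventId of the business event that triggered this change
}

// A handler for the business event emits the data-change event with a back-reference.
function balanceChangeFor(
  cause: OrderCancelled,
  customerId: string,
  delta: number,
  eventId: string
): CustomerBalanceChanged {
  return { eventId, type: "CustomerBalanceChanged", customerId, delta, causedBy: cause.eventId };
}
```

Monitoring can then walk the `causedBy` chain to reconstruct why any given data change happened.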
More often than not, services won't have the entire context to do their part of the job and will have to rpc other services to get information (in your example the payment service might need user info to pass to the gateway). So each event will generate additional chatter between services. How do you address that?
Never mind. I did see your other video and answer was projections
Am literally dealing with some of these integrations and proposing redesigns this week for my current project
Hey @CodeOpinion, wouldn't it be better to have events that process bulk data instead of a single entity, and to apply that pattern throughout the system? E.g. EmployeesDeleted instead of EmployeeDeleted etc.
In your example how can payment service know how much to bill?
I've had a lot of success with business events and trying not to store the "unrelated" data in our system, but what always seems to get me is a new requirement where our users want to see our data alongside data from other systems in the UI. Imagine a table that lists the tracking numbers for all outgoing shipments, but also includes the total order value that isn't stored in your shipping service. Its getting ugly. Now imagine your users want to filter the list to only shipments where the total order value exceeds $x.xx. How do you filter your shipments based on that criteria?
Yes, UI or ViewModel Composition is a challenge: ua-cam.com/video/ILbjKR1FXoc/v-deo.html
The reality is you really can't. You'll likely need a denormalized projection of all the upstream systems. Old school RDBMS still drives business thinking.
Let's say that I'm integrating with a 3rd Party, and it expects that we send it a model which contains information from 2 different domains in our application. This needs to happen when a 3rd domain publishes a specific event. What would be the right way to do this?
Compose the data from disparate sources as you need to. It could be on demand to the various boundaries or you may have the data already composed. If you're handling the initial event async, then you have a lot more options in terms of failures if you need to sync request/response calls to other boundaries to get that data.
Consider a larger landscape with many systems listening to, for example, product data modifications. Wouldn't it be more efficient to also send what actually changed in the data, so that not all systems need to implement the API/RPC calls, and also because all the inbound requests could potentially slow down the source system?
Notifications vs Data events in this case that is.
{
"$event": "SQLTransactionExecuted",
"Statements": "INSERT INTO ..."
}
Your videos are helping people like me become better developers, Thanks! 🙏
Happy to help!
Question: if an employee is changed due to editing, or is deleted, there is no business cause for this effect. What would be the reason cited in this case?
Then let it be CRUD and don't publish an event.
So what I'm often missing in these kinds of examples is where the context of these events goes. Like which user, which timestamp the event occurred at, etc. How would you pass these kinds of details around? If I want, for example, to create some kind of logging service, I surely need this kind of context per event.
Often those types of details are metadata of the message. You can think of the event itself as having an envelope that contains that data.
@@CodeOpinion And why not make this metadata part of the message, i.e. simply include the full entity in addition to the event. Metadata (headers in Kafka for example) is not as well supported by tooling.
I agree with metadata, Event { type:string, details: object with information needed for that event, metadata: object with anything else you want for logging, debugging }. But if the consumer does something based on a user detail, then imo the user should be part of the “details” because at that point it is not only used for informational purposes
@@kevinclark1783 Metadata sounds like a good solution, but where does this metadata get filled in? Currently I've implemented it (PHP/Symfony) by passing the user's id and the current timestamp to the behavioral methods on my domain model, but this causes a lot of duplicate/boilerplate code.
Would you inject some sort of event factory into your model that includes this metadata, or pass it to the method in a 'meta' value object?
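One possible answer to the question above, sketched in TypeScript (all names hypothetical, not any particular framework's API): an envelope type that keeps business details and cross-cutting metadata separate, plus an injected event factory that stamps the shared metadata once, so domain methods only supply the event-specific details and the boilerplate disappears.

```typescript
// Hypothetical envelope: business details and cross-cutting metadata kept apart.
interface EventMetadata {
  userId: string;
  occurredAt: string; // ISO-8601 timestamp
  correlationId: string;
}

interface EventEnvelope<T> {
  type: string;
  details: T;              // data consumers act on
  metadata: EventMetadata; // logging/debugging context
}

// Injected factory: fills the metadata in one place instead of in every domain method.
class EventFactory {
  constructor(
    private currentUserId: () => string,
    private clock: () => Date = () => new Date(),
  ) {}

  create<T>(type: string, details: T): EventEnvelope<T> {
    return {
      type,
      details,
      metadata: {
        userId: this.currentUserId(),
        occurredAt: this.clock().toISOString(),
        correlationId: Math.random().toString(36).slice(2),
      },
    };
  }
}

// Usage: the domain model only depends on the factory, not on who the user is.
const factory = new EventFactory(() => "user-42");
const event = factory.create("InventoryAdjusted", { sku: "A-1", delta: -3 });
console.log(event.type, event.metadata.userId); // InventoryAdjusted user-42
```

A 'meta' value object passed per call would work too; the factory just moves the wiring to construction time so the behavioral methods stay clean.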
Been working with event-driven IDM solutions for the better part of 20 years. It is very difficult to get your head around, especially if you've only known stateful, CRUD-style programming or systems from the start.
Faking Event Sourcing using a request -> db -> CDC -> transform -> broker pipeline loses so much useful information. And yes, it means all downstream services are now tightly coupled to your schema. Very brittle design.
6:51 It would be good if you showed how the client is updated once the order is placed and the payment made.
You can't just change the sponsoring segment text like that, we got used to the old one!!
Thank you for this concise yet fantastic video. It just hits the right spot!
At first I was skeptical. Where is he going with this? Then I saw the CDC box in the diagram, effectively exposing all the internals of service B. What an abomination! People do everything to avoid actually designing their software.
Btw, besides reporting (data warehousing), there are also search services and caching that might justify peeking a bit at the data of others. But no service that performs business logic as part of a workflow should; those should communicate at the business object/event level.
Thanks for the comment and glad you hung on to see where I was going with it. Maybe I should cut to the chase quicker?
Your audio is fantastic. What mic and setup are you using?
Really? Ha, I usually think it needs to be improved. Just a Blue Yeti that's hanging forward/above the camera.
This is such an important detail, thank you!
Great content! 🔥 I would use a "BPMN event-driven" orchestrator like Camunda 8 + Kafka.
When you say broker are you talking about rabbitMq or Kafka?
Correct
Incredible videos, as always Derek.
Thanks!
If that's a trend, it's a strange one for sure... like a really cumbersome way to manually sync databases? Someone ought to tell them databases can do that automatically ;-)
Btw, your audio is badly out of sync on your last couple of videos.
Trying to sort out the sync; someone else mentioned it too. Unfortunately, I'm not noticing it, but I'll try to sort it out.
@@CodeOpinion I noticed it as well. Video seems to lag behind audio, it's pretty distracting. I had to stop watching the video and just listen to the audio so I could focus on the content, which was great by the way!
Is it just me, or is the audio out of sync in this video?
I had a comment about this on my last video. I don't notice it, but that's not to say you're wrong. Is it the entire video or somewhere specific?
@@CodeOpinion I noticed it immediately. It seems to be the whole video
Thanks a whole lot for this insightful video.
Good video. But maybe you could sync your audio next time. 😊
I think it's been fixed since newer videos.
It's all about that common sense that isn't so common.
Thanks for sharing, really helpful.
Excellent video!
Great as always!
No comments! thumbs up.
an event is just data appended to a log
You just inverted the coupling of the Order and Payments services: one still needs to know about the other. The more important question is who should know about whom, and I say Payments shouldn't care at all about orders. It should facilitate only the money taking/refunding process; when to do what is the Order service's business logic. Payments can then be used by other services and doesn't need to change when a new client comes along.
This approach also solves a problem people usually run into: you have multiple consumers of the events, and each needs different data. By handling the event in memory with a handler that actually invokes a command on the target service, which you can do with a transactional outbox, you know exactly what data is needed for that command.
The same goes for a Shipping service, for example. When you place an order, you could call /ship and know exactly what information shipping needs to ship it.
In the real world, when a customer orders something from a store and asks for it to be shipped, and there are no IT systems, the people from the store would go to the post office, for example, and tell them to ship it. Issuing a command, basically.
From one of your other videos: you publish the OrderPlaced event, have a handler for it that uses MediatRHangfireBridge to invoke a ShipRequest, and have a ShipRequestHandler in a background process to handle it. You can also choose which data to include in the ShipRequest when creating it, for example data you already have in memory, and fetch any other data that's needed later in the ShipRequest handler before invoking the actual shipping service.
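The handler chain described above can be sketched roughly as follows (TypeScript; MediatR and Hangfire are .NET libraries, so this only mirrors the idea with hypothetical names): the OrderPlaced event handler decides exactly which data the shipping command needs and enqueues a self-contained ShipRequest for a background handler.

```typescript
// Hypothetical types mirroring the event -> command chain described above.
interface OrderPlaced {
  orderId: string;
  customerId: string;
  address: string; // already in memory when the order is placed
}

interface ShipRequest {
  orderId: string;
  address: string; // only the data shipping actually needs
}

// Stand-in for a background job queue (Hangfire plays this role in the .NET example).
class CommandQueue {
  private items: ShipRequest[] = [];
  enqueue(cmd: ShipRequest) { this.items.push(cmd); }
  dequeue(): ShipRequest | undefined { return this.items.shift(); }
}

// Event handler: translates the event into an explicit command,
// choosing exactly which in-memory data to carry over.
function onOrderPlaced(event: OrderPlaced, queue: CommandQueue) {
  queue.enqueue({ orderId: event.orderId, address: event.address });
}

// Background command handler: this is where you could fetch any extra
// data before calling the actual shipping service.
function handleShipRequest(cmd: ShipRequest): string {
  return `shipping ${cmd.orderId} to ${cmd.address}`;
}

const queue = new CommandQueue();
onOrderPlaced({ orderId: "o-1", customerId: "c-9", address: "123 Main St" }, queue);
const cmd = queue.dequeue();
if (cmd) console.log(handleShipRequest(cmd)); // shipping o-1 to 123 Main St
```

The point is that the command carries exactly the data its handler needs, rather than the consumer reassembling state from a stream of row-change events.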
The video and audio are a bit out of sync.
Useful!
Its, not it's. :)
"Event-Driven Architecture lost it's way"
No, as always, the problem is in people, not the architecture.
Agreed, the interpretation and how it's used has changed.
@@CodeOpinion More like abused than used 😉 That's why it's crucial to have a solution architect who knows what event-driven architecture actually is and what it isn't.
I don't see the point of this video and I disagree with most of it. Data-driven events can be very useful for separating the load on different services.
One service may need 100 times the processing power of another, dependent service, hence the decoupling. This is one of the main reasons for the microservice architecture approach. Why isn't this even mentioned in the video?
Also, I'm not sure the other naming convention discussed here is better. The system should react in predictable ways to predictable events. If we have the same Employee Deleted event named in three different ways based on the business context, it will produce a lot of confusion (especially for new developers) and unnecessary code bloat.
I never suggested having the same event be named multiple different ways. Data-centric events can be useful for query/reporting purposes when you need to compose disparate data. That's very different from needing data in another service where you expect consistency because you need to enforce business rules. We clearly have a different view of what services are. Processing power and how you physically deploy a logical boundary are limitless; that doesn't make the deployments separate "microservices". Hard disagree with "one of the main reasons for microservices". That notion is one of the main reasons why so many systems are in a coupling nightmare and can't make sense of microservices.
@@CodeOpinion Again, I disagree with most of your comment. Indeed we have different views on what a microservice is and what good architecture/software engineering practices are. Perhaps my view on this is skewed by my experience with high-load, low-latency systems. Agree to disagree.
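The naming distinction argued in the thread above can be made concrete with a small sketch (TypeScript; the event shapes and names are hypothetical): a data-centric "row changed" event forces consumers to diff fields and guess intent, while business events carry the reason for the change, and rare causes become a reason value rather than a new CRUD event.

```typescript
// Data-centric event: says which row changed, not why.
interface ProductUpdated {
  type: "ProductUpdated";
  productId: string;
  fields: Record<string, unknown>; // consumers must diff and guess the intent
}

// Business events: each captures the reason the change happened.
interface InventoryAdjusted {
  type: "InventoryAdjusted";
  productId: string;
  delta: number;
  reason: "recount" | "damage" | "seized-as-evidence"; // rare causes are a reason, not a new event type
}

interface EmploymentEnded {
  type: "EmploymentEnded";
  employeeId: string;
  lastDay: string;
}

type BusinessEvent = InventoryAdjusted | EmploymentEnded;

// Consumers can branch on intent instead of diffing row snapshots.
function describe(e: BusinessEvent): string {
  switch (e.type) {
    case "InventoryAdjusted":
      return `stock of ${e.productId} changed by ${e.delta} (${e.reason})`;
    case "EmploymentEnded":
      return `employee ${e.employeeId} leaves on ${e.lastDay}`;
  }
}

console.log(describe({ type: "InventoryAdjusted", productId: "p-1", delta: -3, reason: "damage" }));
// stock of p-1 changed by -3 (damage)
```

This isn't three names for one event; it's distinct events for distinct business facts, which is the distinction the video draws.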
Nice try with the title, but I'm not gonna sit here and watch boring stuff that I already know! Tldw
Thanks for the comment, I guess?
EmployeeDeleted.