I agreed 100% until the very last minutes, about testing. You should indeed create integration tests when there is no behavior to test, i.e. for the simple query features. But for the command features, if you have some logic, you should test it through the use case, not through the domain model (the domain model is an implementation detail). You can do so by using the dependency inversion principle. This makes the tests very fast and deterministic, and totally compatible with TDD.
I agree with your comment. It's in sync with my opinions. I should have mentioned the typical CRUD applications in my example.
One of the reasons we went with VSA is to have a decently scalable modular monolith that is relatively easy to break out into "feature clumps" if necessary. The other main reason we went VSA is that it's waaaaay easier to debug a vertical slice than it is to debug the default clean architecture. You don't need to jump into 10 different files (API + IService + Service + IRepository + Repository + ...); it's just all there, in the single file. If you need to change the feature, you fix the one file. It's a no-brainer "where" you should fix it (this is a LOT less obvious in Clean Architecture IMO: do I fix it in the controller, the service, or the domain?), and you are guaranteed not to alter any of the other features in unexpected ways.
The main disadvantage of this way of working is code duplication, but we refactor where necessary and write tests around duplicated code when needed. In one of Jimmy's talks, he mentions this "feature" of VSA. When you need to unify code, there's an extra step involved. You no longer mindlessly re-use functions that are used for other features; you need to explicitly opt in.
Thus far, unless we needed to optimise performance (read: replace EF Core with Dapper, or add caching), we've consistently used the same frameworks across features:
For commands:
API call -> MediatR call -> MediatR pipelines -> Command Handler (command validation, business validation, domain state updates) -> Persistence (EF Core) -> Domain Events (Notifications) (leads to Integration Events if necessary) -> API Response
For queries:
API call -> MediatR call -> MediatR pipelines -> Query Handler -> EF Core or Dapper or a Cache -> API Response
To know how all of our slices work, all you need to know is how APIs work in .NET Core, a few MediatR interfaces, EF Core, and Dapper. That's it. Most (if not all) of those are pretty much common knowledge nowadays.
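To make that command flow concrete, here is a minimal sketch of what one such slice could look like in a single file. It assumes MediatR, FluentValidation, and EF Core with .NET implicit usings; all names (CreateOrder, AppDbContext, Order) are illustrative, not taken from the video:

```csharp
// CreateOrder.cs — the whole slice in one file (illustrative names throughout;
// assumes an AppDbContext with an Orders DbSet and an Order entity)
using FluentValidation;
using MediatR;
using Microsoft.AspNetCore.Mvc;

public static class CreateOrder
{
    // The command the API maps the request onto
    public record Command(Guid CustomerId, decimal Amount) : IRequest<Guid>;

    // Command validation, typically run by a MediatR validation pipeline behavior
    public class Validator : AbstractValidator<Command>
    {
        public Validator()
        {
            RuleFor(c => c.CustomerId).NotEmpty();
            RuleFor(c => c.Amount).GreaterThan(0);
        }
    }

    // Business validation, domain state updates, persistence, and domain events
    public class Handler(AppDbContext db, IPublisher publisher) : IRequestHandler<Command, Guid>
    {
        public async Task<Guid> Handle(Command command, CancellationToken ct)
        {
            var order = new Order(command.CustomerId, command.Amount);
            db.Orders.Add(order);
            await db.SaveChangesAsync(ct);                            // Persistence (EF Core)
            await publisher.Publish(new OrderCreated(order.Id), ct);  // Domain event (notification)
            return order.Id;
        }
    }

    public record OrderCreated(Guid OrderId) : INotification;
}

// The endpoint only forwards to MediatR and shapes the API response
[ApiController]
[Route("orders")]
public class CreateOrderController(ISender sender) : ControllerBase
{
    [HttpPost]
    public async Task<IActionResult> Post(CreateOrder.Command command, CancellationToken ct)
        => Ok(await sender.Send(command, ct));
}
```

Deleting the feature would then mean deleting this one file (plus its tests), which is exactly the "no-brainer" property described above.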
Thanks for adding so much value with this comment 🙏
I am using the exact same approach, except for separating out a domain project. But inside the domain project, I'm still using vertical slices.
Very well explained, thank you.
As you mentioned, the lack of consistency across features and the introduction of different styles of organizing code are concerning. IMO it makes it hard for developers to get familiar with the different architectural styles introduced in each feature; it requires knowing the codebase very well again, in terms of understanding the many applied patterns. Not all engineers are at the same level or understand all patterns equally well.
I honestly believe that at scale it might be useful, as long as you have consistency guidelines across those layers. However, that might defeat the whole selling argument.
That's the reason there are code reviews: if you don't follow conventions or architectural patterns, the code change is rejected. The key is the technical leader who reads the code.
Would be amazing if you made a vertical slice minimal API series. The way you think/explain is very simple and easy to follow 🙌
Thanks! 🙏
Great suggestion. Let me think about it.
@gui.ferreira Yes please, it would really be the best series. You can make it a paid course if you want; I would still buy it.
Thanks! I need to figure out how to approach it.
In this particular case, rather than thinking in terms of integration vs unit tests, I would put the focus on the scope, i.e. coarse-grained vs fine-grained tests. For instance, having coarse-grained unit tests, i.e. tests that involve the handler and all its dependencies (with test doubles, preferably fakes), will be very robust in the face of refactoring and should be able to exercise most of the behaviours. Then you can add a small subset of coarse-grained integration tests that exercise the application boundaries and integrations, e.g. web API/controllers, databases, etc.
Regarding the consistency case, it feels more like a team decision, and it seems that it shouldn't be a problem to refactor something (with no change in behaviour) to follow an equivalent and more consistent shape across the codebase.
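A rough sketch of such a coarse-grained unit test, assuming a hypothetical PlaceOrder command slice whose handler takes its persistence dependency through a small IOrderStore interface (both names invented here), with a hand-written fake standing in for the database:

```csharp
using Xunit;

public class PlaceOrderTests
{
    // Hypothetical fake: implements the same interface the handler depends on, backed by a list
    private sealed class FakeOrderStore : IOrderStore
    {
        public List<Order> Saved { get; } = new();

        public Task SaveAsync(Order order, CancellationToken ct)
        {
            Saved.Add(order);
            return Task.CompletedTask;
        }
    }

    [Fact]
    public async Task Places_an_order_for_a_valid_command()
    {
        // Coarse-grained: exercises the handler and its collaborators as one unit,
        // swapping only the persistence edge for the fake.
        var store = new FakeOrderStore();
        var handler = new PlaceOrder.Handler(store);

        var orderId = await handler.Handle(
            new PlaceOrder.Command(CustomerId: Guid.NewGuid(), Amount: 42m),
            CancellationToken.None);

        Assert.Single(store.Saved);
        Assert.Equal(orderId, store.Saved[0].Id);
    }
}
```

Because the test only talks to the handler, refactoring the slice's internals doesn't break it.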
The point is that with VSA, the existing system doesn't guide you to a path. I know you can still do it; however, to do it efficiently, you might lose some of the advantages of VSA.
Regarding testing: even if you see your slice mostly as a black box, I would still "simulate" the "real resources" and e.g. replace the real DB with some in-memory storage. This way the tests still don't know much about the internals of a slice but are still fast (enough).
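With EF Core, one way to do that is the in-memory provider (Microsoft.EntityFrameworkCore.InMemory); a sketch assuming a hypothetical GetOrder query slice plus AppDbContext and Order types invented for illustration:

```csharp
using Microsoft.EntityFrameworkCore;
using Xunit;

public class GetOrderTests
{
    [Fact]
    public async Task Returns_the_order_it_was_seeded_with()
    {
        // Real DbContext, but backed by the EF Core in-memory provider instead of a real database
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString())
            .Options;

        await using var db = new AppDbContext(options);
        var order = new Order(Guid.NewGuid(), 42m);
        db.Orders.Add(order);
        await db.SaveChangesAsync();

        // The test only talks to the slice through its handler; the slice internals stay hidden
        var handler = new GetOrder.Handler(db);
        var result = await handler.Handle(new GetOrder.Query(order.Id), CancellationToken.None);

        Assert.NotNull(result);
        Assert.Equal(order.Id, result!.Id);
    }
}
```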
I tend to do it that way, but one of the goals of VSA is also to avoid abstractions and rely mostly upon Integration Testing.
The cool thing is that it's an architecture that can cope with many styles.
Hello, great content, thank you. What do you think of having each feature in a sealed assembly that can only be accessed through its command or query? I think it's the cleanest approach because it's literally a black box.
That's one of the good options for a modular monolith.
However, if you are thinking about exposing the command/query, it's like the Package by Component approach you can find here:
dzone.com/articles/package-component-and
How and where do you write a query that joins 4 database tables in Vertical Slice Architecture?
If that feature needs those 4 tables joined, you write the code to achieve that inside the handler of that feature.
Read the term Handler as a black box.
You can use ADO.NET, Dapper, EF, or whatever.
Then, you have the possibility of refactoring it and extracting that query to wherever you want.
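For instance, a hedged sketch of such a handler doing the join with Dapper; the table names, the OrderSummary shape, and the IDbConnectionFactory abstraction are all made up for illustration:

```csharp
using System.Data;
using Dapper;
using MediatR;

public static class GetOrderSummary
{
    public record Query(Guid OrderId) : IRequest<OrderSummary?>;

    public record OrderSummary(Guid OrderId, string CustomerName, string ProductName, decimal Amount);

    public class Handler(IDbConnectionFactory connections) : IRequestHandler<Query, OrderSummary?>
    {
        public async Task<OrderSummary?> Handle(Query query, CancellationToken ct)
        {
            // The 4-table join lives inside this handler; no shared repository required
            const string sql = @"
                SELECT o.Id AS OrderId, c.Name AS CustomerName, p.Name AS ProductName, l.Amount
                FROM Orders o
                JOIN Customers c  ON c.Id = o.CustomerId
                JOIN OrderLines l ON l.OrderId = o.Id
                JOIN Products p   ON p.Id = l.ProductId
                WHERE o.Id = @OrderId";

            using IDbConnection connection = connections.Create();
            return await connection.QueryFirstOrDefaultAsync<OrderSummary>(sql, new { query.OrderId });
        }
    }
}
```

If another slice later needs the same data shape, that's the moment to consider extracting the query; until then it stays private to the feature.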
If they're in the same logical grouping -> just join them.
If you're using the "modular monolith" approach and all 4 are in separate modules, you might need to make a couple of extra calls to gather the information or consider that your modules are too fine-grained...
Does this infrastructure folder contain the implementation of communication with external services, like repositories, HTTP API integrations, Blob storage, and so on? Is it shared and global across all features?
It seems pretty similar to the Infra layer of the layered approach, right?
Not exactly.
You don't want to bring things like shared repos or API clients into Vertical Slice Architecture.
However, some small things, like creating a DB connection, you do need to keep apart.
As an example, if we think about a conventional CRUD repository, each method will likely be in a different slice, while the connection code might be reused.
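A small sketch of that split, with only the connection plumbing shared and each slice owning its own data access; names are illustrative, assuming Dapper and Microsoft.Data.SqlClient:

```csharp
using System.Data;
using Dapper;
using Microsoft.Data.SqlClient;

// Infrastructure/DbConnectionFactory.cs — the only piece shared across slices
public interface IDbConnectionFactory
{
    IDbConnection Create();
}

public sealed class SqlConnectionFactory(string connectionString) : IDbConnectionFactory
{
    public IDbConnection Create() => new SqlConnection(connectionString);
}

// Features/Orders/DeleteOrder.cs — what would have been one repository method lives in its own slice
public class DeleteOrderHandler(IDbConnectionFactory connections)
{
    public async Task Handle(Guid orderId, CancellationToken ct)
    {
        using var connection = connections.Create();
        await connection.ExecuteAsync(new CommandDefinition(
            "DELETE FROM Orders WHERE Id = @orderId", new { orderId }, cancellationToken: ct));
    }
}
```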
@gui.ferreira I think I get it.
The infra folder contains only the configuration for infrastructure services and globally reusable code.
And the slices (features) contain the implementation details and the functionality itself.
Right?
@haraheiquedossantos4283 I think you got it 😉
Hi Gui, thanks for the effort creating these valuable videos. A quick question for you: is it a must to use the MediatR pattern for this, or do we have other options?
Hi! Not at all.
I explain that here: ua-cam.com/video/caxS7806es0/v-deo.htmlsi=PaTfAjmsnZu3-Kyk&t=467
You can call your "handling code" directly.
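For example, a minimal sketch without MediatR, where an endpoint resolves the handler from DI and calls it directly; it reuses the shape of an earlier CreateOrder sketch (AppDbContext and Order are still illustrative names), just without the MediatR interfaces:

```csharp
// Program.cs — no MediatR: the endpoint calls the slice's handler directly
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddScoped<CreateOrder.Handler>();

var app = builder.Build();

app.MapPost("/orders", async (CreateOrder.Command command, CreateOrder.Handler handler, CancellationToken ct) =>
    Results.Ok(await handler.Handle(command, ct)));

app.Run();

// CreateOrder.cs — the handler is a plain class, no IRequestHandler needed
public static class CreateOrder
{
    public record Command(Guid CustomerId, decimal Amount);

    public class Handler(AppDbContext db)
    {
        public async Task<Guid> Handle(Command command, CancellationToken ct)
        {
            var order = new Order(command.CustomerId, command.Amount);
            db.Orders.Add(order);
            await db.SaveChangesAsync(ct);
            return order.Id;
        }
    }
}
```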
@gui.ferreira Perfect ❤️🙏🏻👍🏻
5:33 "The slice should be able to configure itself"
Why? What is the reasoning behind this?
One of the main goals of Vertical Slice Architecture is to have features isolated and self-contained. Based on that, ideally, you have configuration as part of the slice. One of the advantages of doing it that way is preserving developer experience benefits like avoiding merge conflicts or the ability to move/delete a feature easily.
Keep in mind, it's my interpretation based on goals and principles.
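One possible reading of that in code, a sketch assuming minimal APIs, the ASP.NET Core web SDK implicit usings, and one extension method per feature (all names are illustrative):

```csharp
// Features/Orders/CreateOrderFeature.cs — the slice registers its own services and endpoint
public static class CreateOrderFeature
{
    public static IServiceCollection AddCreateOrder(this IServiceCollection services)
        => services.AddScoped<CreateOrder.Handler>();

    public static IEndpointRouteBuilder MapCreateOrder(this IEndpointRouteBuilder endpoints)
    {
        endpoints.MapPost("/orders", async (CreateOrder.Command command, CreateOrder.Handler handler, CancellationToken ct) =>
            Results.Ok(await handler.Handle(command, ct)));
        return endpoints;
    }
}

// Program.cs — the composition root only wires features in; moving or deleting a feature
// means touching its folder plus these two calls, nothing else
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddCreateOrder();

var app = builder.Build();
app.MapCreateOrder();
app.Run();
```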
Thanks a lot Gui for the video 👍
How do you use common compiled queries in Vertical Slice?
Should I place them in the DbContext, create a single class for them all, or create a standalone class per related entity and place them where they belong?
My suggestion is: Start by writing it in the dirtiest way first inside the handler.
Then, refactor it. If you see duplication, then extract it to a common place (Class, Function, whatever). Make sure it's duplication and not accidental duplication.
Jimmy explains that quite well here: ua-cam.com/video/SUiWfhAhgQw/v-deo.html
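If it does turn out to be real duplication, one possible landing spot is a small class next to the slices that share it; a sketch assuming EF Core's EF.CompileAsyncQuery, with AppDbContext and Order as illustrative names:

```csharp
using Microsoft.EntityFrameworkCore;

// Features/Orders/OrderQueries.cs — compiled queries extracted only after the same
// query showed up in more than one handler of this feature
public static class OrderQueries
{
    public static readonly Func<AppDbContext, Guid, Task<Order?>> ById =
        EF.CompileAsyncQuery((AppDbContext db, Guid id) =>
            db.Orders.FirstOrDefault(o => o.Id == id));
}

// Usage inside a handler:
// var order = await OrderQueries.ById(db, query.OrderId);
```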
I wonder, if the feature/slice must be configured by itself, can I reuse it later?
For example, I have a feature and I want to use that feature in 2 places (API and CLI), or maybe more.
Because for the CLI app, it doesn't need the API Controller, and vice versa.
I think you are describing what Simon Brown calls Package by Component. Check it here: dzone.com/articles/package-component-and
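As a rough illustration of that idea, the slice's handler can stay free of any HTTP dependency so that different hosts can call it; all names here are invented, and the two host fragments are sketches rather than complete programs:

```csharp
// Features/Products/ImportProducts.cs — knows nothing about HTTP or the console
public static class ImportProducts
{
    public record Command(string FilePath);

    public class Handler(AppDbContext db)
    {
        public async Task<int> Handle(Command command, CancellationToken ct)
        {
            // Parse the file and add products to the context (omitted), then persist
            return await db.SaveChangesAsync(ct);
        }
    }
}

// API host fragment: an endpoint forwards to the handler
app.MapPost("/products/import", (ImportProducts.Command command, ImportProducts.Handler handler, CancellationToken ct) =>
    handler.Handle(command, ct));

// CLI host fragment: a console entry point forwards to the same handler
var importHandler = new ImportProducts.Handler(db);
await importHandler.Handle(new ImportProducts.Command(args[0]), CancellationToken.None);
```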
Hello!! I have a question. Can this approach be used only in APIs, or can it be used in a front-end application also?
Great question. It also fits the front end perfectly. As an example, component-based frameworks such as React will naturally help you with that.
Stop using MediatR pls, what's wrong with you
ua-cam.com/video/caxS7806es0/v-deo.htmlsi=PaTfAjmsnZu3-Kyk&t=467