OOP solves a lot of problems, but it also brings problems with it. Object mapping is only necessary because of OOP. I prefer C# myself, but I also realize it's not perfect, while still being my favorite language.
The only problem where OOP shines is encapsulation. If you have mutable data, and you need to prevent it from entering invalid states, and your language isn't expressive enough to do that through the type system, then encapsulation is absolutely the tool for the job. However, the core pillar of OOP, grouping behavior/logic together with state, is actively an anti-pattern. Inheritance in itself is as well. Polymorphism is valuable, but inheritance creates extremely heavily coupled and inflexible code, just like grouping data with behavior creates heavily coupled and inflexible code. And you can also have polymorphism without inheritance, through interfaces. >Object mapping is only necessary because of OOP Why? You do realize that defining custom data types isn't an OOP thing, right? Custom data types, composed from smaller data types, are a thing in nearly every language, from purely imperative languages to OOP languages to functional languages. Everything higher level than assembly gives you the ability to define structs, records, or classes.
The reason I use AutoMapper is ProjectTo, and the fact that AutoMapper can reuse existing mappings in the ProjectTo method. Is there a way to achieve this manually? Because as far as I know, calls to extension methods cannot be translated into an Expression, so the mapping would be done in memory instead of being translated to SQL.
Just my 2 cents... stop doing it. We're stuck with it in a large project and we will be refactoring it out of the way for many years to come. There are just too many gotchas and edge cases when mapping expressions (most of them came from buggy interactions between WebApi OData and AutoMapper projections, but still), let alone the arguments exposed in that video. We've stopped rolling it out into new services for about 3 years now and just use plain LINQ .Select(x => new DTO {}) to map them, and everything became so much easier to trace down. AutoMapper is just a disaster waiting to happen when you have a medium to large project with lots of profiles. It's manageable for small projects, but it's still easier to track down what's actually going on when it's not in the way.
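On the translation point: one manual option is to keep the projection as a static Expression rather than an extension method, since an expression stored in a field can still be passed to Select and translated by EF Core. A minimal sketch, assuming hypothetical Album/AlbumDto types (not something from the video):

```csharp
using System;
using System.Linq.Expressions;

public sealed class Album
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public int ReleaseYear { get; set; }
}

public sealed class AlbumDto
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public int ReleaseYear { get; set; }
}

public static class AlbumProjections
{
    // Kept as an Expression (not a compiled delegate) so EF Core can translate it into SQL.
    public static readonly Expression<Func<Album, AlbumDto>> ToDto =
        album => new AlbumDto
        {
            Id = album.Id,
            Name = album.Name,
            ReleaseYear = album.ReleaseYear
        };
}

// Usage inside a query (dbContext is assumed to be an EF Core DbContext with an Albums set):
// var dtos = await dbContext.Albums
//     .Where(a => a.ReleaseYear > 2000)
//     .Select(AlbumProjections.ToDto)   // still translated to SQL; only the projected columns are fetched
//     .ToListAsync();
```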
Once you realize that the purpose of a domain object is to change state, not represent it, you'll never run into #3. Stop loading domain objects simply to transform them into a dto. Use a data transformation layer or better yet, CQRS, and do your data transformation with a lightweight ORM like Dapper. Most of the bad use of AutoMapper I see is trying to transform entities into dtos/views.
I thought that I was making the first mistake, but after further inspection I'm not injecting business rules into the mapping. For context, I have an old web application that uses MSSQL geography data types, because at the time I thought it was a good idea for use with PowerBI and other reports, but my DTO uses classes with .NET geography data types which, as you may know, are not 100% compatible with their database counterparts, so in my mapper I have a .FromText method to do the conversion. I know that in some repository classes/methods I do make the second mistake, so I'm taking notes for future projects.
I've built my own mappers using reflection previously: run once and build a dictionary of properties, then you can remap them tons of times with no major hit to performance except on boot. I have also used this methodology to make models which, when passed to a frontend template in MVC, will build out a whole form.
@@ripple123 fair enough. I was contemplating adding something that would keep a monitor on the state of the code and store the reflected data externally, but it seemed like a lot of work to avoid a 200ms lag on boot and could introduce a lot of bugs.
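For anyone curious what that looks like, here is a rough sketch of the cached-reflection idea (my own reconstruction under assumptions, not the commenter's actual code): reflect once per type pair, cache the matching property pairs, and reuse the cache for every subsequent map.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Reflection;

public static class ReflectionMapper
{
    // Cache of matched (source, destination) property pairs, built once per type pair.
    private static readonly ConcurrentDictionary<(Type, Type), (PropertyInfo Src, PropertyInfo Dst)[]> Cache = new();

    public static TDestination Map<TSource, TDestination>(TSource source) where TDestination : new()
    {
        var pairs = Cache.GetOrAdd((typeof(TSource), typeof(TDestination)), key =>
            key.Item1.GetProperties().Where(p => p.CanRead)
                .Join(key.Item2.GetProperties().Where(p => p.CanWrite),
                      s => (s.Name, s.PropertyType),
                      d => (d.Name, d.PropertyType),
                      (s, d) => (s, d))
                .ToArray());

        var destination = new TDestination();
        foreach (var (src, dst) in pairs)
        {
            // copy properties that match by name and type
            dst.SetValue(destination, src.GetValue(source));
        }
        return destination;
    }
}
```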
I strongly disagree with the third point. "Things should be as simple as possible, but not simpler." A domain object handling conversion goes along with this principle. In all my code, I try to hide this type from the "external" world, so right from the start I know the hiding has a cost: I have to create a mirror type (DTO) to be the connector with the world. Because of this, the domain object should support two actions: converting from and to the DTO. Btw, such an approach rules out even the slightest consideration of mocking the mapping, because the mapping/conversion is embedded into the domain object.
That means your domain model needs to be aware of the DTO, which is bad practice. Your domain model will live in your domain project, while your DTO would be in either your application layer or your presentation layer; neither of these should be referenced by the domain layer. Now, you could have the mapping inside of the DTO (as that layer would have a reference to the domain layer), but I wouldn't recommend it.
I don't see the problem with mocking the mapper. The problem lies in not unit testing the mapping profiles if you choose to do so. The tests are ridiculously easy to write, and it's a better way of finding mapping mistakes than having them jumbled up with the logic inside the caller method. I find it more reasonable, as I can put some thought into generating good random data to make sure the mappings work properly, instead of generating whatever while I'm worried about testing business logic.
So in the case of point 1 where the AccountService was being injected just to create a temporary password (as you said, not a great example), I guess this is something that should happen outside before mapping occurs? Then we know that this mapping profile does nothing else except mapping which follows the single responsibility principle?
That's how I do it. The danger I usually run across is that if you start injecting services, at some point an implementation of your AccountService might, e.g., access a database (and the mapper doesn't know, since it deals with the interface), so your mappers start becoming incredibly slow, or they start doing way more than simply "mapping".
What should I do if my boss keeps wanting to put business logic in the database, despite me telling him that it's a terrible practice and the reason we keep having long-running queries that result in deadlocks and kill the overall performance of the application? His reasoning for putting business logic in the database is that it can be changed without having to do a whole new build. But that is rarely the case. I just don't know what else to tell him at this point.
I had a boss like that. He had been coding since the 90s. He loooved stored procedures. Everything we did at that company was anti-patterns. I gave up and moved to a new job.
@@StrandedKnight84 You are describing my boss... ha ha. He wants everything to be stored procs. No matter how many times I tell him it's not feasible, he tries to convince me otherwise. He also doesn't want to learn new tools. He wants to stick with the tools he used back in the 90s, even though he doesn't code anymore.
Really great video Nick! I've actually been wondering what the best practice is here and you filled my cup with the answers! :) When getting data from the repository/data layer where one of your properties is a string (that contains JSON), how do you solve this? Because in my case I have sadly used, for example, Newtonsoft to deserialize the value inside the mapper (I guess that's a red flag, right?)
On one of my projects I rely quite heavily on AutoMapper, using various features like custom converters, conditional mapping, and ctor param mapping. I was already aware of the possible issues you mentioned in the video, but after watching I went and reviewed the code again. Although there's logic in my mappers, especially in the custom converters, I see it all as mapping logic. Basically, the project functions as a bridge/proxy type application that consumes the REST API of some application and maps the data into logical, enriched objects that are returned to the requester. The problem I was facing is that this API only gives out objects in a 'raw' format, meaning all 'relational' properties are filled with Guids. Status property: Guid. ProductType: Guid. Linked objects: Guid. Etc. So what I did was create a Collector class containing the raw object, but also the prefetched related data. The most common related data (lookup/dropdown type data, like status) is made available from a service that has cached all that metadata; other data, like linked objects, I fetch and add to the collector just before invoking the mapper. The mapper (basically the custom converter) then composes the enriched object based on the raw object and the related data available in the collector. Nowhere in the mappers and custom converters is any service used or data generated, though there might be some corner cases. In one spot I have a source enum property with 3 possible values, but the property is nullable, so in my target object I use an enum with 4 options, one of them named 'UNKNOWN'. Imho still mapping logic ;-)
Biggest problem with mappers: using them. It's easy to make mistakes and therefore they will need to be unit tested to be sure they work. Just write the mapping by hand
Only in the .NET community do we have these endless debates on mappings, layers, clean code, etc. OK, the Java community is similar, but I just want simple code without agonizing over where to put each class (or one file per one-line record class; it's not normal to have a file with a single line of code). From the outside, TypeScript, Python, Rust, Go and frameworks like Django, Laravel and Rails (as examples) don't care about any of this. Build things, KISS, don't over-engineer your project and don't be dogmatic; every project is different. A good programmer writes good code; clean architecture and layers don't.
I prefer the manual mapping method, as it is always available: no need to add any NuGet package or change any code, since it's built into the language. By that I mean the extension-method mapping approach.
Another problem is AutoMapper doesn't support injecting dependencies into profiles - you have to write code for it to do so. Also - I think visibility isn't the problem when you inject a dependency
I have never liked using libraries to do object mapping. Extension methods are a simple and reusable option for mapping, and with extension methods it's not necessary to learn the details of a library just to map an object.
Always confused why people use auto mappers… it takes no time at all to write a manual extension method to do the same, and these days you can just ask ChatGPT to do the donkey work, but then you have something concrete and testable. OK, someone can add new fields that aren't getting mapped, but that's a unit testing issue more than anything imho.
I disagree with the testing logic. It's a unit test, specific about that code. Mocking a mapper has the benefit of asserting the right object was passed in by reference and also allows for much cleaner setups in your tests as you can pass in empty objects and mock any response
I disagree strongly on the last point. After watching Jimmy Bogard's NDC talk called "Crafting Wicked Domain Models", I changed my position on the POCO vs Domain argument
Check out my new course From Zero to Hero: Logging in .NET and use code LOG20 at checkout for 20% off: bit.ly/loggingdotnet valid for the first 400 purchases!
I find it perfectly acceptable to implement mapping in the ctor, PROVIDED that the reverse direction is not required or is defined in a method underneath. It would not be worthwhile to overhaul the entire architecture for this.
My approach would be the same: Simply add a 'MapToEntity' method below the constructor.
The main reason for this is: If the DTO needs adjustment - which indeed occurs more frequently than with the DB model - you inevitably have to check the mapping as well. These two components are closely tied and mutually dependent, and in my opinion, they belong together.
Of course, it's accurate to state that the DTO doesn't need to know anything about the mapping, but there is also (almost) nothing to know, particularly if we adhere to your first rule and avoid business logic in the mapping. The developers working on it can save time searching for the mapping and are less likely to forget it because they can adjust the mapping directly in the DTO. Therefore, it either prevents errors or at least saves time.
I also appreciate the philosophy behind Mapster. Although I haven't personally worked with it, if it maps one-to-one and flags mapping issues at compile-time, I don't see a reason why it shouldn't be used.
Your logic breaks down when an object could map to different DTOs. Making an extension will be the only future-proof way. An example of this is DTOs for REST vs gRPC DTOs.
Completely agree. We do that also with our 'one way out DTOs'. We also have readonly properties to make clear that they're not deserialized from the outside world. So, they're only for serialization.
You can even add more entities to the constructor and also partially map to your dto.
Didn't have any constraints yet and they're also easy to update.
The incoming DTOs mostly have MapToQuery or MapToCommand methods.
I can also understand the advantages of Mappers. But it depends on your architecture.
In the old days, we always had some kind of "contract" library that was used by server and client. So in that case, a mapper makes sense. Because the contract library can't reference the domain/application for the constructor mapping of the entity.
But most APIs provide an OpenAPI specification, and each client can generate its own client and DTOs from that specification. So no contract library is necessary anymore.
@@metaltyphoon Why does the logic break down?
Surely it's not a problem if a mapping is repeated for multiple DTOs, you have to do that for all mappings, no matter where or how it's implemented?
Or vice versa, multiple entities for one DTO, that's no problem either.
It's more work, of course, so I'd probably also rather go for Mapster. But in principle I don't find ctor mapping bad, it's just a very simple solution for the same problem.
There are two more advantages by the way:
* The DTO can be readonly, that saves a lot of problems.
* I can add divergent properties to it very easily, e.g. a value calculated by logic. This is then a ctor parameter, or a required property.
@@Palladin007 with that approach you can't turn a DTO into a domain object.
Also, you can already make things readonly by using an `init` setter.
Calculated logic is exactly what you never want in a mapper 😂. That's the entire premise of the video.
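For readers trying to picture the pattern being debated in this thread, here is a minimal sketch of the ctor-plus-MapToEntity shape (hypothetical Album types, nobody's production code):

```csharp
using System;

public class Album
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public int ReleaseYear { get; set; }
}

public class AlbumDto
{
    public AlbumDto() { }            // parameterless ctor kept so serializers still work

    public AlbumDto(Album album)     // forward mapping lives in the ctor
    {
        Id = album.Id;
        Name = album.Name;
        ReleaseYear = album.ReleaseYear;
    }

    public Guid Id { get; init; }
    public string Name { get; init; } = "";
    public int ReleaseYear { get; init; }

    // reverse mapping kept directly underneath, as suggested above
    public Album MapToEntity() => new()
    {
        Id = Id,
        Name = Name,
        ReleaseYear = ReleaseYear
    };
}
```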
you always release videos about topics right after i start using things 😂
I mean ....
It's like he reads minds 😂😂
His courses are a Trojan horse and he spies on us
I think he is watching my releases at this point!
I think the extension method conversion approach is better and I prefer it. But I don't see anything necessarily wrong with your DTO knowing about the domain object.
However, I think automatically mapping DTOs to domain objects is a bad practice in general, because there might be business constraints and decisions to be made so that the domain objects don't end up in the wrong state. The forward mapping is fine, but the backward mapping should be deliberate and intentional, like in a business logic method or subfunction with proper validation and constraints.
True especially when you want to represent data in a certain way.
So what do you suggest? What should we map DTOs to then?
@@torrvic1156 Nothing. Domain -> DTO is fine however DTO -> Domain should be done manually for each scenario where you want to create Domain Objects from DTOs.
Like I said, the danger is that the DTOs might produce an inconsistent domain state. For example setting `IsEmailSent` to true without sending the email
@@odytrice thanks for clarification!
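To make the "deliberate and intentional" part concrete, here is a hedged sketch (the CreateAccountDto/Account names and the IsEmailSent rule are only illustrations borrowed from the comment above, not anyone's real code):

```csharp
using System;

public sealed class CreateAccountDto
{
    public string Email { get; init; } = "";
    public bool IsEmailSent { get; init; }   // client-supplied; deliberately ignored below
}

public sealed class Account
{
    public Account(string email) => Email = email;

    public string Email { get; }
    public bool IsEmailSent { get; private set; }

    public void MarkEmailSent() => IsEmailSent = true;
}

public sealed class CreateAccountHandler
{
    // deliberate, validated DTO -> domain creation instead of a blanket auto-map
    public Account Handle(CreateAccountDto dto)
    {
        if (string.IsNullOrWhiteSpace(dto.Email))
            throw new ArgumentException("An email address is required.", nameof(dto));

        // IsEmailSent is not copied: it only becomes true once we actually send the email
        return new Account(dto.Email);
    }
}
```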
Completely agree on all the points. I have seen all 3 mistakes, mostly the 1st and 3rd, on a few projects. I just can't stand that mappers can have dependencies. The good point in favor of manual mapping is that it's much harder to pass dependencies into a static method than into a mapping profile.
This is all a bit above me at the moment, but I do love how animated you are over it!! As my skills increase I dare say I'll come back to this video.
I completely agree with all of these rules. The mentioned mistakes all follow from violating the Single Responsibility Principle. It is a great example of such a violation in places where you don't expect it at all. Awesome video as always! Thank you for your content!
Thank you so much for sharing Nick!
This is exactly what I searched for. Extension methods is the way to go.
I sometimes used the constructor on DTO to construct it out of the domain object. In general it works fine ... for sure there is no reverse direction but that is normally not required. There are many cases where you need to map domain object to a DTO to provide some sort of a 'view' into the domain. However reverse is usually needed in some sort of an action (Create/Update methods which receive Create/Update DTOs) and there I would usually manually create/update the domain object.
I also do this quite often. Sure, I don't want my domain to know about the DTO, but it's pretty handy to have my DTO turn into a domain object (if possible and it doesn't mess around with too much logic).
I thought I was the only one who uses mapping through the constructor 😀
Maybe it's bad, but it's easy and fast and doesn't add much complexity.
I agree, it's not bad. Personally, I prefer to have a static FromAlbum(Album) method and an instance ToAlbum() method over a ctor, but a ctor is fine I guess, as long as serializers can do their job.
The purpose of the adapters layer (if we use the language of hexagonal architecture) is to map the domain to a specific API. The domain model is 'the representation' of some concept, so yes, the DTO might know it and use it in its contract. DTO means Data Transfer Object, so it handles transferring the model to the API.
The only problem with the ctor is null handling. Imagine new MyObj(objDto): if objDto is null, you cannot return a null MyObj.
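That null concern is exactly where the static factory mentioned above has an edge over a ctor. A small sketch with a hypothetical Album/AlbumDto pair:

```csharp
public class Album
{
    public string Name { get; set; } = "";
    public int ReleaseYear { get; set; }
}

public class AlbumDto
{
    public string Name { get; init; } = "";
    public int ReleaseYear { get; init; }

    // A ctor taking a null Album can only throw or build a broken object;
    // a static factory can simply pass the null through.
    public static AlbumDto? FromAlbum(Album? album) =>
        album is null
            ? null
            : new AlbumDto { Name = album.Name, ReleaseYear = album.ReleaseYear };

    public Album ToAlbum() => new() { Name = Name, ReleaseYear = ReleaseYear };
}
```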
I really don't see a problem with the third method. In my mind, the purpose of the DTO is to transfer data and get it to where it's going, which is in its final form, and so having mappers in the DTO actually makes sense. Besides, this way you can see in one place the DTO definition, what it is supposed to be mapped to, and how it is mapped.
In a large enough real-world project, you typically have multiple DTOs mapping to the same 'entity', for example.
@@unskeptable Agree. I love having the mapper (constructor) next to the properties it is mapping. :)
@@unskeptable Actually, using an extension method for mapping is not quite the same as integrating the mapping directly into the DTO. The main distinction lies in the separation of concerns. With extension methods, the mapping logic is decoupled from the DTO. This means the extension, which knows about both the DTO and the model, handles the mapping. Consider a scenario where the model is part of one project and the DTO comes from a separate package. If you can't modify the DTO, how would you manage the mapping? In essence, the DTO should remain agnostic of the model, ensuring cleaner and more maintainable code.
Really cool topic, had several issues in my mind that Nick mentioned, now I've changed my outlook. Thank you!
Manually written static mapping classes in conjunction with readonly DTOs are easy and fast to implement, everything is explicit, and they are easy to understand and easy to maintain. A few years ago I worked on a project where we used AutoMapper and it was much more complicated than manual mapping, so I see no ebenfit in using AutoMapper or similar libraries.
How did you make Automapper complicated? :O
the real ebenfit
Auto mapper libraries are an anti-pattern.
Why?
@@PatricSjoeoe The trouble with automatic mapping is that it works well in cases where the models are almost the same, but in those cases you are probably overengineering anyway, as you might as well use the same class.
In real, complex cases, models either start the same and then grow more and more different, or are very different from the start. In that case, writing all those mappings is similar work to writing plain code, but you lose performance and easy debugging, and most importantly, you get failures at runtime or in UTs instead of at build/write time. And when you write a UT for the mapping, it means you have more work than writing it manually and not needing the test.
I've tended to use the extension method based approach you show towards the end. It is debuggable, straightforward and does not require knowledge of a 3rd party library.
Solid advice about unit tests and avoiding mocking of AutoMapper.
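For anyone who hasn't watched yet, the extension-method approach referred to here is roughly the following (a sketch with hypothetical Album types, close to but not a copy of what the video shows):

```csharp
public class Album
{
    public string Name { get; set; } = "";
    public int ReleaseYear { get; set; }
}

public class AlbumDto
{
    public string Name { get; init; } = "";
    public int ReleaseYear { get; init; }
}

public static class AlbumMappingExtensions
{
    public static AlbumDto ToAlbumDto(this Album album) => new()
    {
        Name = album.Name,
        ReleaseYear = album.ReleaseYear
    };

    public static Album ToAlbum(this AlbumDto dto) => new()
    {
        Name = dto.Name,
        ReleaseYear = dto.ReleaseYear
    };
}

// usage: var dto = album.ToAlbumDto();   // plain code: easy to step through, no 3rd party library involved
```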
It's a common mistake among developers to use and create multiple tools to make it faster to write code. The problem is that the priorities are now different: explicitness and readability are the top priorities, leading to fewer errors and easier maintenance.
Thanks for the hints. I personally sometimes mock my mapper and don't use the profile directly. It always felt wrong, and now I know it was wrong.
You have uncovered a couple of issues with one of my projects!
We use automapper but occasionally map objects “manually”.
Plus I believe there are a couple of places where we have some inserted business logic.
Thanks for this!
I worked on a project with a lot of business logic in AutoMapper; it was a nightmare to maintain.
Now I feel proud of myself, because I came up with the extension method idea by myself one day. But thanks, I've seen the other mappers you mentioned in some of the projects and never realized what's wrong with them.
The second point helps you stay DRY. Even though it looks like shared code, I had to upgrade AutoMapper where the contract changed. I wish we had created a mapping class and interacted with that, versus injecting IMapper. We ended up having bugs because some non-standard mappings were missed in manual updates too.
Do you have a video about how to do mapping the right way?
I agree with all of them and personally prefer the extension method, as that will catch/automatically update changed property names at compile time, even without a unit test.
What if the incoming model is a List and you need to map it to a string (separated by some value)? In my mind this is mapping, but maybe you see it as business logic?
The mocking of IMapper I see a lot! Mappers should be static methods IMO; if it feels like you don't want a static method, it's probably because you are about to fall into one of these anti-patterns.
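For what it's worth, if it is just reshaping data with no business decision involved, it can arguably live in the mapping. A tiny sketch, assuming a list of strings and a hypothetical extension method:

```csharp
using System.Collections.Generic;

public static class TagMapping
{
    // pure data shaping, no business decisions: ["rock","pop"] -> "rock;pop"
    public static string ToDelimited(this IEnumerable<string> values, string separator = ";") =>
        string.Join(separator, values);
}
```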
We are cursed with being afraid of static methods and be in love with abusing interfaces in .NET
I just don't get the appeal of injecting the mapper in the first place. I started my current project years ago using AutoMapper and I am now ripping it out. I really decided to start ripping it out when the AutoMapper library started pushing that the right way to use it was to inject it dynamically. I agree with Nick that the mapping code is part of the implementation of the code under test, which means: why would I need to dynamically change it through injection? Seems really strange to me. Anyway, manual mapping code (that gets unit tested) for me now.
The last problem you point out is the hardest for people to understand. Again, it is a separation of concerns issue since DTOs should not be concerned with object mapping details. But if the DTO is meant to serve only the namespace where it resides (which is usually the case), many developers don't see the value in making mapping a top-level citizen. Especially when, like you rightly point out, it should contain no business logic whatsoever. DTOs are always hard to understand. Ideally, they shouldn't even exist and it's another discussion altogether when one should and should not create DTOs. But I think that no matter how thin the layer we are abstracting is, I agree with you that we should follow on the same principles everywhere. Which in this case is: Don't.Mix.Concerns.
what's tl;dw showing in the clip from 0:57 - 1:42?
It's great that you point out problem 2. I see it and other variations of it quite a bit, and sadly it can often go unnoticed on pull requests. It's an unfortunate by-product of the dogmatic way so many people teach dependency injection and unit testing, without ever explaining the true semantic purpose of mocking in the context of a unit test, which would ALWAYS require including an explanation of what you don't want to mock in a unit test.
@nickchapsas so where would your static mapper class live? I have been using the constructor/implicit operator combo for my manual mapping code, but I am open to understanding the criticism against it. So I started to look at where I would put the static mapper class, and I can't really think of any place other than the layer that houses the DTO. Also, I see that your static mapper class is called SpotifyMapper. Does that imply that you would have one static mapper class that contains the mapping code for all of your Spotify DTO objects? Or would each DTO have its own static mapping class?
I would be very interested in his view about this as well
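Not an answer from Nick, but one common arrangement (an assumption on my part) is a single static mapper class per integration, kept in the same project as its DTOs, with one method per DTO:

```csharp
// hypothetical domain and contract types, compressed for brevity
public sealed class Album { public string Name { get; set; } = ""; public int ReleaseYear { get; set; } }
public sealed class Artist { public string Name { get; set; } = ""; }
public sealed class SpotifyAlbumDto { public string Name { get; set; } = ""; public int ReleaseYear { get; set; } }
public sealed class SpotifyArtistDto { public string Name { get; set; } = ""; }

// one static class per integration, one method per mapping, living next to the DTOs it maps
public static class SpotifyMapper
{
    public static Album ToAlbum(this SpotifyAlbumDto dto) => new() { Name = dto.Name, ReleaseYear = dto.ReleaseYear };
    public static Artist ToArtist(this SpotifyArtistDto dto) => new() { Name = dto.Name };
}
```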
BTW, SonarQube has a concern about the last approach with the extension: it says that the extension should be moved as a method onto the result class (if both are located in the same project).
This is bizarre. Lower level objects should not have to concern themselves with higher level objects (the DTO in this case).
//NOSONAR 👋
Of course it has, because that method doesn't add anything.
Constructors exist; "constructor" isn't a bad word. It is a misuse of the Builder pattern to try to extract this out to another class.
A constructor is fine (as long as it's the DTO being constructed from a domain object, not the other way around).
Great video, fully agree!
I use something similar to the third one. I have a separate Converter class which knows about both entities and DTOs, and I inject a logger into it for logging errors when it can't convert one to the other. That may happen when an enum changes and the converter doesn't know how to map some new values to an appropriate DTO value, or back, if that's even possible.
I use the implicit operator trick now and then. You're right that it works only in one direction, and you have to mind circular dependencies. But that mapping happens in one direction only to begin with. In my situation, I have a result type from a Dapper query, and I need to treat it like a base-table entity type. I thought the implicit conversion operator was a nice way to do that. I like your extension method solution also, but I don't have the visceral reaction you do to the implicit conversion operator. Both approaches are static, so there's no temptation to inject business logic. Also, the implicit conversion is constrained on arguments, so you *can't* introduce any odd dependencies (at least I don't think so).
The issue is to do with SOLID principles.
Interestingly, in regards to inserting business logic, I didn't find one mention or word on "resolvers". What's lacking in this video and the following comments is what the fix is. At a fundamental level I agree, but I have my hands tied by the powers that be. At the end of the day you are still going to mock (with Moq) the setup of that business logic in the tests. The resolvers/business logic end up hitting a database/API, etc. The no. 1 rule is you mock those!
As a step forward and a suggestion to help others, say I have business logic from a service; I could fire it all off with:
1) task1/task2 await
Have you considered making a zero to hero series for localization? I would love to take that as a course.
About the first mistake with business logic: does that mean that the password should be left empty in the mapper and set in the service?
@nickchapsas - Hi Nick, how are we supposed to do mapping if the 2 objects are NOT 1-to-1? Like what I always encounter on a daily basis as I work in an integration/middleware where the bulk of our work is mapping our domain object to a client's domain object.
We cannot get away from having to put business logic in there, because the fields are completely different. For example, some of the fields we're mapping to need computation or logic, as they require information from different fields of our domain object. Hope I make sense.
In that case, you don't use a mapping library. You write a service class that converts A to B. At least that's what I would do.
And you unit test that sucker as well as possible.
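A sketch of that converter-service idea (field names are invented; the point is that the computed, non-1-to-1 parts live in one dedicated, unit-testable class rather than inside a mapping profile):

```csharp
public sealed class OurOrder
{
    public decimal NetAmount { get; init; }
    public decimal TaxRate { get; init; }          // e.g. 0.19m
    public string FirstName { get; init; } = "";
    public string LastName { get; init; } = "";
}

public sealed class ClientOrder
{
    public decimal GrossAmount { get; init; }
    public string ContactName { get; init; } = "";
}

public sealed class ClientOrderConverter
{
    // not a dumb 1-to-1 map: fields are combined and computed, so it gets its own unit-tested class
    public ClientOrder Convert(OurOrder order) => new()
    {
        GrossAmount = order.NetAmount * (1 + order.TaxRate),
        ContactName = $"{order.FirstName} {order.LastName}".Trim()
    };
}
```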
I hear you about a mapper only having one responsibility of mapping one object to another, but sometimes it really feels like external services are needed. In a project I'm working on that has localization, I'll receive one object from data storage that stores a localization key, and my mappers inject my localization service which hydrates that key into a fully localized string. How can a design like that be cleaned up to avoid the injection?
var obj = GetObjectFromStorage(id);
var dto = _mapper.Map<MyDto>(obj); // MyDto stands in for your actual destination type
dto.LocalizationKey = localizationService.HydrateKey(obj.LocalizationKey); // hydrate after mapping, not inside it
I've done the constructor thing and the cast thing... mostly because I didn't know where a mapper should live in my project. Should it just exist in the domain layer, just not in the definition of the actual domain objects? How many mapper classes will I end up with?
I use type references in a dependency-injected service to map a model to an entity and vice versa. Has worked just great for me.
public interface IMapToModel<T, TModel> where T : Entity
{
    void Apply(T entity, TModel model);
}
@nickchapsas, while there are some good ideas and tips in this video, some important aspects aren't entirely true (from a testing perspective).
First of all, the AccountMappingProfile (for AutoMapper) should obviously be tested separately; you want to make sure you've got well-behaving operators. It's an isolated unit and should be tested in isolation; all good here.
Second, the specific strategies (i.e. specific business logic) implemented in this concrete mapper should be considered implementation details and thus entirely irrelevant for consumers (of the IMapper interface). What is being presented here is coupling the system under test (AccountService) to a specific implementation of IMapper, and this approach, when applied on a larger scale, eventually makes it harder to refactor the system.
And the major (and perhaps most important) point is that the AccountServiceTest should only verify the behaviour of its dependencies; it should stay as agnostic to data as possible and stick to verifying the interactions with its dependencies, in this case the IMapper.Map() interaction (remember, you don't care about the data, you care about the interactions).
I can totally understand why we shouldn't use services inside our mapping classes. But what exactly is the solution? Just pass the needed values as parameters? So in this example, just a properly encrypted password that came from the service earlier?
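That is how I read it too, though this is my own sketch rather than anything from the video: compute the value in the service and pass the finished result into the mapping, so the mapping itself stays a pure copy.

```csharp
public sealed class CreateAccountRequest { public string Email { get; init; } = ""; }

public sealed class Account
{
    public string Email { get; init; } = "";
    public string PasswordHash { get; init; } = "";
}

public interface IPasswordService { string GenerateTemporaryPasswordHash(); }

public static class AccountMapping
{
    // the mapping only copies values it is given; it never calls a service itself
    public static Account ToAccount(this CreateAccountRequest request, string passwordHash) => new()
    {
        Email = request.Email,
        PasswordHash = passwordHash
    };
}

public sealed class AccountService
{
    private readonly IPasswordService _passwords;
    public AccountService(IPasswordService passwords) => _passwords = passwords;

    public Account Create(CreateAccountRequest request)
    {
        var hash = _passwords.GenerateTemporaryPasswordHash(); // business logic stays in the service
        return request.ToAccount(hash);
    }
}
```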
On point. 👌
I either use a tool like AutoMapper, or just define my own generic IMapper and have dedicated implementations of it.
I love the implicit/explicit operator, it works like magic 😆
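In case it helps anyone picture the hand-rolled variant, a minimal version of such a generic mapper abstraction might look like this (my own sketch, not a library API):

```csharp
public interface IMapper<in TSource, out TDestination>
{
    TDestination Map(TSource source);
}

public sealed class Customer { public string FirstName { get; set; } = ""; public string LastName { get; set; } = ""; }
public sealed class CustomerDto { public string FullName { get; set; } = ""; }

public sealed class CustomerToDtoMapper : IMapper<Customer, CustomerDto>
{
    public CustomerDto Map(Customer source) => new() { FullName = $"{source.FirstName} {source.LastName}" };
}

// registration sketch, assuming Microsoft.Extensions.DependencyInjection:
// services.AddSingleton<IMapper<Customer, CustomerDto>, CustomerToDtoMapper>();
```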
But you can isolate and write specific tests against the mapper itself. So why retest the same functionality redundantly in other unit tests where the mapper is an injected dependency? What other injected services do you typically do that for? And if mappers are special such that you shouldn't ever mock them in tests, what is the defining criteria to determine that so you can take the same approach for other services?
Great video! What about reusing a mapping in an EF Core projection? Is that possible?
I use the implicit method (guilty!), but only to go up the application tree, not down.
A business object should never know anything about converting DOWN to the DTO object. That's for the repository to care about.
I have been a little disappointed with using automapper recently as it is very difficult to properly map records with it. Especially where you are trying to not inject business logic by accident. An example might be ‘map a dto to a domain record for a ‘create’ operation, where you want to generate a unique id. What you inevitably have to do is to either use weakly typed properties dictionary when using mapping (and then make sure your mapper pulls those extra values in) or set some default value for the property and use a ‘with’ expression to then recreate your record before sending it further.
Or just ignore it and set a default on your destination type?
You're putting logic in the mapping process.
Think of it that way: you should be able to construct the object without AutoMapper. If you do anything other than putting a default value in the object itself, you can't.
I totally agree with a static class having functions that map, rather than adding new constructors or additional methods on the DTO. In fact, I would go so far as to say that you should almost always just use a record without a body, and just use the default constructor.
Hallelujah. I can't hear it enough that people don't like AutoMapper, for all the reasons you say. Though I see it used on nearly all the projects I work on, because "it's so easy" and, most of the time, "you shouldn't program what can be done automatically". However, especially in your account example, you see that there is nothing automatic about some mappings. All mappings need careful attention, because you need to say (imo) what has to be done when source properties or target properties are not present; worst case, you end up with default values for (new) properties.
When creating a custom mapper, how can I implement unit tests to verify that all properties are mapped and that any conversion worked?
I cannot just use reflection to check each property, as the properties could have different names and types.
But I still need to verify that all properties were mapped, or some developer could break it later on.
Also, if someone adds a property to the DTO, how do you verify there is also a test to make sure it is being mapped?
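One pragmatic pattern for this (just a sketch with invented types, using xUnit): assert every property explicitly in one test, and add a second "tripwire" test on the property count so a newly added DTO property fails the build until someone updates both the mapping and the test.

```csharp
using Xunit;

public sealed record Album(string Name, int Year);
public sealed record AlbumDto(string Title, int ReleaseYear);

public static class AlbumMapper
{
    public static AlbumDto ToDto(this Album album) =>
        new AlbumDto(album.Name, album.Year);
}

public class AlbumMapperTests
{
    [Fact]
    public void ToDto_maps_every_property()
    {
        var album = new Album("OK Computer", 1997);

        var dto = album.ToDto();

        Assert.Equal("OK Computer", dto.Title);
        Assert.Equal(1997, dto.ReleaseYear);
    }

    [Fact]
    public void Dto_shape_has_not_changed_without_updating_the_mapping_test()
    {
        // Tripwire: if someone adds a property to AlbumDto, this fails
        // and forces them to revisit the mapping and the test above.
        Assert.Equal(2, typeof(AlbumDto).GetProperties().Length);
    }
}
```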
DTO stands for Data TRANSFER Object, not Data Mapping Object. I agree that putting mapping logic there is not a good solution; it kind of breaks the SRP (Single Responsibility Principle) for DTOs, although I don't think it's terrible. There are worse things that people do.
I usually create two interfaces: IMapper and ICollectionMapper, the latter with an implementation, CollectionMapper, that uses IMapper to map a single element and just orchestrates it over collections or sequences, with support for null inputs if necessary.
Personally, I prefer to have it injected; I treat it as a component. I mock it only for very rare, specific scenarios (e.g. to test very specific cases that require very specific data, where mocking the mapper makes the test more readable).
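For anyone wanting to see the shape of that, a minimal sketch (generic parameter names and the null handling are just illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IMapper<in TSource, out TDestination>
{
    TDestination Map(TSource source);
}

public interface ICollectionMapper<in TSource, out TDestination>
{
    IReadOnlyList<TDestination> Map(IEnumerable<TSource>? source);
}

// Orchestrates the single-element mapper over a sequence and tolerates null input.
public sealed class CollectionMapper<TSource, TDestination>
    : ICollectionMapper<TSource, TDestination>
{
    private readonly IMapper<TSource, TDestination> _itemMapper;

    public CollectionMapper(IMapper<TSource, TDestination> itemMapper) =>
        _itemMapper = itemMapper;

    public IReadOnlyList<TDestination> Map(IEnumerable<TSource>? source) =>
        source is null
            ? Array.Empty<TDestination>()
            : source.Select(_itemMapper.Map).ToList();
}
```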
What about logic in the mapper configuration object when you want specific logic to include or exclude a field depending on certain behaviour? Where do you put all that logic, for example excluding null fields, empty arrays and so on? Or if you are creating a new role in your app and your API receives only the ids related to this role, how do you convert those ids into a permission in your database?
How would you, then, fix the first problem? Where does the business logic of the mapping go? What would that look like?
The problem is using Automapper. The last snippet is the best way imo. Automapper does stuff....automatically. To me that is worse than the "How does my password get generated"
Hi @nickchapsas, we want to use a service within the mapping to retrieve an id for a string value (CompanyName -> CompanyId). After viewing this video I have doubts that this is a good idea. What's your opinion in this case?
Yeah, I've been in a project that's fallen into the "domain logic in the mapper" trap. Took me ages to rework all of that out.
I agree with all three, but I wouldn't create an extension for the DTO to obtain the domain object because:
1/ I can't do the opposite (and I shouldn't; the domain should not know whatever DTO will be used down the line. Maybe you have a web app and you want to show just the name and the year of the album, maybe you just have an LED that turns red if the year is older than 2000);
2/ I don't want to glue the DTO and the domain object together by adding an extension method. Wherever I go, I carry the knowledge to obtain the domain object, and me, as a DTO, I couldn't care less about the domain... My purpose in life (as a DTO) is to be used by others, not to hand others the domain object.
Therefore: I would keep the static mapper class, but not as an extension method.
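Concretely, something along these lines (sticking with the album example; all names invented):

```csharp
public sealed record AlbumDto(string Name, int Year);
public sealed record Album(string Name, int Year);

// Plain static mapper: the DTO has no idea this class exists,
// and the domain has no idea the DTO exists.
public static class AlbumMapper
{
    public static Album ToDomain(AlbumDto dto) =>
        new Album(dto.Name, dto.Year);
}

// Call site: AlbumMapper.ToDomain(dto) instead of dto.ToDomain().
```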
Hi Nick. Thanks for the video. It seems you don't like using libraries like AutoMapper and others to map objects. Can you please share approaches for mapping from an entity to a DTO? An entity might contain a picture property (e.g. a user photo) or just a huge number of properties, while the corresponding DTO has only a few of those. Obviously it is not efficient to load a full entity from the DB when we just need a few properties.
When I have a LINQ query and at the end do something like .Select(x => new Dto {...}), did I understand you correctly that I should move that into a static mapper method instead?
OOP solves a lot of problems, but it also brings problems with it. Object mapping is only necessary because of OOP. I prefer C# myself, but I also realize it's not perfect, while still being my favourite language.
The only place where OOP really shines is encapsulation.
If you have mutable data, and you need to prevent it from entering invalid states, and your language is insufficiently expressive to do that through the type system, then encapsulation is absolutely the tool for the job.
However, the core pillar of OOP, grouping behaviour/logic together with state, is actively an anti-pattern. Inheritance in itself is as well.
Polymorphism is valuable, but inheritance creates extremely heavily coupled and inflexible code, just like grouping data with behaviour creates heavily coupled and inflexible code.
And you can also have polymorphism without inheritance, through interfaces.
>Object mapping is only necessary because of OOP
Why? You do realize that defining custom data types isn't an OOP thing, right? Custom data types, composed from smaller data types, are a thing in nearly every language, from purely imperative languages, to OOP languages, to functional languages.
Everything higher level than assembly gives you the ability to define structs, records, or classes.
YES 🙏🙏
Btw, I wonder if the NewAccountRequest really needs to be in the UI layer. Then AccountService would need to depend on the UI layer, which we don't want?
I use the explicit / implicit operators on the DTOs. I think this is ok.
How do you hash and un-hash a Guid Id from the database so it isn't sent to the FE as-is, with AutoMapper? E.g. a DTO with a string Id and a Guid Id in the entity?
I very much want to implement my own logging provider. Would your course be helpful?
The reason I use AutoMapper is ProjectTo, and the fact that AutoMapper can reuse existing mappings in the ProjectTo method. Is there a way to achieve this manually? Because as far as I know, calls to extension methods cannot be translated into an Expression, so the mapping would be done in memory and not translated to SQL.
Just my 2 cents... Stop doing it... We're stuck with it in a large project and we will be refactoring it out of the way for many years to come... There are just too many gotchas and edge cases when mapping expressions (most of them came from buggy interactions between WebApi OData and AutoMapper projections, but still), let alone the arguments exposed in that video... We stopped rolling it out into new services about 3 years ago and just use plain LINQ .Select(x => new DTO {}) to map, and everything became so much easier to trace. AutoMapper is just a disaster waiting to happen when you have a medium to large project with lots of profiles. It's manageable for small projects, but it's still easier to track down what's actually going on when it's not in the way.
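One way to keep that plain-Select style and still reuse a projection across queries is to hold it as an Expression (a sketch with invented types; unlike ProjectTo it won't automatically compose nested mappings):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public sealed class Album    { public Guid Id { get; set; } public string Name { get; set; } = ""; public int Year { get; set; } }
public sealed class AlbumDto { public Guid Id { get; set; } public string Name { get; set; } = ""; }

public static class AlbumProjections
{
    // Because this is an Expression (not a compiled delegate or an extension method),
    // EF Core can still translate it to SQL when it's passed to Select.
    public static readonly Expression<Func<Album, AlbumDto>> ToDto =
        album => new AlbumDto { Id = album.Id, Name = album.Name };
}

public static class Example
{
    public static IQueryable<AlbumDto> QueryDtos(IQueryable<Album> albums) =>
        albums.Select(AlbumProjections.ToDto);
}
```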
Once you realize that the purpose of a domain object is to change state, not represent it, you'll never run into #3. Stop loading domain objects simply to transform them into a dto. Use a data transformation layer or better yet, CQRS, and do your data transformation with a lightweight ORM like Dapper. Most of the bad use of AutoMapper I see is trying to transform entities into dtos/views.
I love simplicity, and some developers love to complicate things...
I thought that I was making the first mistake, but after further inspection I'm not injecting business rules into the mapping. For context, I have an old web application that uses MSSQL geography data types, because at the time I thought that was a good idea for use with Power BI and other reports, but my DTO uses classes with .NET geography data types which, as you may know, are not 100% compatible with their database counterparts, so in my mapper I have a .FromText method to do the conversion.
I know that in some repository class/methods I do make the second mistake, so I'm taking notes for future projects.
After some years I have come to the same conclusion - create a separate static mapper 👍
We had a lot of IMapper mocking in my project. We had everyone fix it after a full code review.
Modern programming: manually mapping using an automapper and losing type safety in the process
Personally I prefer mapping my objects manually, and I kind of avoid AutoMapper since it uses reflection.
I've built my own mappers using reflection previously: run it once and build a dictionary of properties, then you can remap them tons of times with no major performance hit except on boot. I have also used this approach to build models which, when passed to a frontend template in MVC, will build out a whole form.
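A stripped-down version of that idea (illustrative only; a real one needs type conversion, custom matching rules, null handling, etc.):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Reflection;

public static class ReflectionMapper
{
    // Property pairs are reflected once per (source, destination) combination and then reused.
    private static readonly ConcurrentDictionary<(Type Source, Type Destination), List<(PropertyInfo Src, PropertyInfo Dst)>> Cache = new();

    public static TDestination Map<TSource, TDestination>(TSource source)
        where TDestination : new()
    {
        var pairs = Cache.GetOrAdd((typeof(TSource), typeof(TDestination)), key =>
        {
            var result = new List<(PropertyInfo Src, PropertyInfo Dst)>();
            foreach (var dst in key.Destination.GetProperties())
            {
                var src = key.Source.GetProperty(dst.Name);
                if (src != null && src.CanRead && dst.CanWrite && src.PropertyType == dst.PropertyType)
                    result.Add((src, dst));
            }
            return result;
        });

        var destination = new TDestination();
        foreach (var (src, dst) in pairs)
            dst.SetValue(destination, src.GetValue(source));
        return destination;
    }
}

// Usage: var dto = ReflectionMapper.Map<Album, AlbumDto>(album);
```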
apparently mapperly is pretty good and uses source generation instead of reflection
@@ripple123 fair enough, I was contemplating adding something that would keep a monitor on the state of the code and store the reflected data externally, but it seemed like a lot of work to avoid a 200ms lag on boot and could introduce a lot of bugs.
I created my own without knowing these solutions 🤦🏼♂️ Why do I always do that! Does your course cover this kind of stuff? I love your content
I like the content talking about what _NOT_ to do
I strongly disagree with the third point. "Things should be as simple as possible, but not simpler." Having the domain object handle the conversion goes along with this principle. In all my code I try to hide this type from the "external" world, so right from the start I know the hiding has a cost: I have to create a mirror type (DTO) to be the connector to the world. Because of this, the domain object should support two actions: converting from and to the DTO. Btw, such an approach rules out even a slight consideration of mocking the mapping, because the mapping/conversion is embedded in the domain object.
That means your domain model needs to be aware of the DTO, which is bad practice.
Your domain model will live in your domain project, while your DTO would be in either your application layer or your presentation layer; neither of these should be referenced by the domain layer.
Now, you could have the mapping inside the DTO (as that layer would have a reference to the domain layer).
But I wouldn't recommend it.
I don't see the problem of mocking Mapper. The problem lies in not unit testing the mapping profiles if you choose to do so. The tests are ridiculously easy to write and it's a better way of finding mapping mistakes than having them jumbled up with the logic inside the caller method. I find it more reasonable, as I can put some thought into generating good random data to make sure they're working properly instead of generating whatever while I'm worried about testing business logic.
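For reference, the kind of profile test meant here is tiny. Something like this (AccountProfile is a placeholder name, and the exact MapperConfiguration setup can vary a bit between AutoMapper versions):

```csharp
using AutoMapper;
using Xunit;

public class MappingConfigurationTests
{
    [Fact]
    public void Mapping_configuration_is_valid()
    {
        var configuration = new MapperConfiguration(cfg =>
            cfg.AddProfile<AccountProfile>()); // placeholder profile

        // Throws if any destination member is left unmapped in any registered map.
        configuration.AssertConfigurationIsValid();
    }
}
```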
So in the case of point 1 where the AccountService was being injected just to create a temporary password (as you said, not a great example), I guess this is something that should happen outside before mapping occurs? Then we know that this mapping profile does nothing else except mapping which follows the single responsibility principle?
That's how I do it. The danger I usually run across is that if you start injecting services, at some point you might have an implementation of your AccountService that e.g. accesses a database (and the mapper doesn't know, since it deals with the interface), so your mappers start becoming incredibly slow, or they start doing way more than simply "mapping".
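In other words, roughly this shape (names loosely based on the video's example, so treat them as placeholders): the service produces the value first and the mapper only receives data.

```csharp
using System;

public sealed record NewAccountRequest(string Email);
public sealed record Account(string Email, string TemporaryPassword);

public static class AccountMapper
{
    // Pure mapping: the already-generated password arrives as a plain parameter.
    public static Account ToAccount(this NewAccountRequest request, string temporaryPassword) =>
        new Account(request.Email, temporaryPassword);
}

public sealed class AccountService
{
    public Account Create(NewAccountRequest request)
    {
        var temporaryPassword = GenerateTemporaryPassword(); // business logic stays in the service
        return request.ToAccount(temporaryPassword);
    }

    private static string GenerateTemporaryPassword() =>
        Guid.NewGuid().ToString("N"); // stand-in for the real generator
}
```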
Nick, for the DTO scenario…the adapter pattern
What should I do if my boss keeps wanting to put business logic in the database, despite me telling him that it's a terrible practice and the reason we keep having long-running queries that result in deadlocks and kill the overall performance of the application? His reasoning for putting business logic in the database is that it can be changed without having to do a whole new build. But that is rarely the case. I just don't know what else to tell him at this point.
I had a boss like that. He had been coding since the 90s. He loooved stored procedures. Everything we did at that company was anti-patterns. I gave up and moved to a new job.
@@StrandedKnight84 You are describing my boss... ha ha. He wants everything to be stored procs. No matter how many times I tell him it's not feasible, he tries to convince me otherwise. He also doesn't want to learn new tools; he wants to stick with the tools he used back in the 90s, even though he doesn't code anymore.
I’ve always thought these were common sense, until I found business logic in the mapProfile file at work
Really great video Nick!
I've actually been wondering what the best practice is here, and you filled my cup with the answers! :)
When getting data from the repository/data layer where one of your properties is a string (that contains JSON), how do you solve this?
Because in my case I have sadly used, for example, Newtonsoft to deserialize the value inside the mapper (I guess that's a red flag, right?)
On one of my projects I rely quite heavily on AutoMapper, using various features like custom converters, conditional mapping, and ctor parameter mapping. I was already aware of the possible issues you mentioned in the video, but after watching I went and reviewed the code again. Although there's logic in my mappers, especially in the custom converters, I see all of it as mapping logic.
Basically the project functions as a bridge/proxy type application that consumes the REST API of some application and maps the data into logical, enriched objects that are returned to the requester. The problem I was facing is that this API only gives out objects in a 'raw' format, meaning all 'relational' properties are filled with Guids. Status property: Guid. Product type: Guid. Linked objects: Guid. Etc.
So what I did was create a Collector class containing the raw object, but also the prefetched related data. The most common related data (lookup/dropdown type data, like status) is made available from a service that has cached all that metadata; other data, like linked objects, I fetch and add to the collector just before invoking the mapper. The mapper (basically the custom converter) then composes the enriched object based on the raw object and the related data available in the collector.
Nowhere in the mappers and custom converters is any service used or data generated, though there might be some corner cases. In one spot I have a source enum property with 3 possible values, but the property is nullable, so in my target object I use an enum with 4 options, one of them named 'UNKNOWN'. Imho still mapping logic ;-)
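In case the collector idea is hard to picture, the rough shape is something like this (all names invented for the example):

```csharp
using System;
using System.Collections.Generic;

// Raw API shape: relations are just Guids.
public sealed record RawOrder(Guid Id, Guid StatusId, Guid ProductTypeId);

// Everything the mapping needs, prefetched by the caller before the mapper runs.
public sealed class RawOrderCollector
{
    public required RawOrder Raw { get; init; }
    public required IReadOnlyDictionary<Guid, string> StatusNames { get; init; }
    public required IReadOnlyDictionary<Guid, string> ProductTypeNames { get; init; }
}

public sealed record EnrichedOrder(Guid Id, string Status, string ProductType);

public static class OrderMapper
{
    // No services, no I/O: the mapper only composes data it was handed.
    public static EnrichedOrder ToEnriched(RawOrderCollector collector) =>
        new EnrichedOrder(
            collector.Raw.Id,
            collector.StatusNames.TryGetValue(collector.Raw.StatusId, out var status) ? status : "UNKNOWN",
            collector.ProductTypeNames.TryGetValue(collector.Raw.ProductTypeId, out var type) ? type : "UNKNOWN");
}
```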
Well, got into a discussion about this yesterday, I believe you're watching.
Okay boss, but what do we do when we have to map a calculated value? What should we do instead of injecting the service into the mapper?
Biggest problem with mappers: using them. It's easy to make mistakes and therefore they will need to be unit tested to be sure they work. Just write the mapping by hand
Started watching this to see if I do something I shouldn't. For using C# for around 15 years, I have no idea what a mapper is 😅
Only in the .NET community are there endless debates about mappings, layers, clean code, etc.
Ok, the Java community is similar, but I'm for simple code without agonizing over where to put a class (or one file per record class; it's not normal to have a file with one line of code).
From the outside, TypeScript, Python, Rust, Go and frameworks like Django, Laravel and Rails (as examples) don't give a shit about it. Build things, KISS, don't over-engineer your project and don't be dogmatic; every project is different. A good programmer writes good code; clean architecture and layers don't.
The implicit operator killed me at an EHR job... stuff was just magically converting into other things until I discovered the horror.
For me, implicit operator does the job nicely.
One more common problem is when you need to map more than one collection into a new one, i.e. you need to join data for the mapping.
I've done my share of shitty mappings. I've learned much the last couple of years about mapping, and I now usually do it without a 3rd party library.
I prefer the manual mapping method, as it is always available and there is no need to add any NuGet package or change any code, since it is built into the language. By this I mean the extension-method mapping approach.
But that's not fun; you can't make the company or others rely on you if you don't use AutoMapper.
There is evil in this world.
Another problem is that AutoMapper doesn't support injecting dependencies into profiles; you have to write extra code to make that work.
Also, I think visibility isn't the problem when you inject a dependency.
Mate, there's a reason it doesn't support it... It is not a "problem" but completely by design.
Because you should not do it.
I have never liked using libraries to do object mapping. Extension methods are a simple and reusable option for mapping, and with extension methods it is not necessary to learn the details of a library just to map an object.
Always confused why people use auto mappers... it takes no time at all to make a manual extension method to do the same, and these days you can just ask ChatGPT to do the donkey work, but then you have something concrete and testable. Ok, someone can add new fields that aren't getting mapped, but that's a unit testing issue more than anything imho.
Maybe they mock the IMapper because they are making api or database calls from the mapper? :D
God I hope not xD
I disagree with the testing logic. It's a unit test, specific to that code. Mocking a mapper has the benefit of asserting that the right object was passed in, and it also allows for much cleaner setups in your tests, as you can pass in empty objects and mock any response.
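To make that argument concrete, this is the sort of setup meant here (a sketch using Moq and AutoMapper's IMapper; the request/account types and service are invented):

```csharp
using AutoMapper;
using Moq;
using Xunit;

public sealed record NewAccountRequest(string Email);
public sealed record Account(string Email);

public sealed class AccountService
{
    private readonly IMapper _mapper;
    public AccountService(IMapper mapper) => _mapper = mapper;

    public Account Create(NewAccountRequest request) => _mapper.Map<Account>(request);
}

public class AccountServiceTests
{
    [Fact]
    public void Create_forwards_the_request_to_the_mapper()
    {
        var request = new NewAccountRequest("test@example.com");
        var expected = new Account("test@example.com");

        var mapper = new Mock<IMapper>();
        mapper.Setup(m => m.Map<Account>(request)).Returns(expected);

        var result = new AccountService(mapper.Object).Create(request);

        // The test only cares about the interaction, not the mapping itself.
        Assert.Same(expected, result);
        mapper.Verify(m => m.Map<Account>(request), Times.Once);
    }
}
```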
Automapper. Oh no. I still have nightmares ... 😮💨
I disagree strongly on the last point. After watching Jimmy Bogard's NDC talk called "Crafting Wicked Domain Models", I changed my position on the POCO vs Domain argument