Glad to see I'm not the only person using Rider who presses Run instead of Debug
I do it all the time as well, to the point that I keep pressing Run even when I want to debug, and then have to stop and start again with the debugger.
You hardly make that mistake with Visual Studio 😆😆🤣
Nick, according to Microsoft you should implement both UseSeeding and UseAsyncSeeding. They state, "EF Core Tooling currently relies on the synchronous version of the method and will not seed the database correctly if the UseSeeding method is not implemented".
It depends on where you execute the code from. If you run it via the dotnet ef migrations command, then yes, that is true. If you use it in your own startup code, it is sufficient to use the async one.
@@drewfyre7693 Wait, seriously? So both are needed? How annoying is this! Like, will people constantly just make the sync version call the async version with .GetAwaiter().GetResult() etc.?
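For reference, here is a minimal sketch of what registering both hooks looks like in EF Core 9 (MovieDbContext, Movie and the provider call are placeholders from the video, not prescribed names), with the sync version repeating the same logic rather than blocking on the async one:

    // using Microsoft.EntityFrameworkCore;  (AnyAsync, UseSeeding, UseAsyncSeeding)
    services.AddDbContext<MovieDbContext>(options => options
        .UseNpgsql(connectionString) // or whichever provider you use
        .UseSeeding((context, _) =>
        {
            // Sync path: this is the one EF Core tooling (dotnet ef) currently calls.
            if (context.Set<Movie>().Any()) return;
            context.Set<Movie>().Add(new Movie { Title = "John Wick" });
            context.SaveChanges();
        })
        .UseAsyncSeeding(async (context, _, cancellationToken) =>
        {
            // Async path: called at runtime via EnsureCreatedAsync / MigrateAsync.
            if (await context.Set<Movie>().AnyAsync(cancellationToken)) return;
            context.Set<Movie>().Add(new Movie { Title = "John Wick" });
            await context.SaveChangesAsync(cancellationToken);
        }));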
"Just a random number will do" Types 420 🌿
It might be useful to examine code-first vs database-first and model-first approaches, and the benefits/drawbacks of each
Seems a lot nicer than what I'm used to. Will try it next time I need to seed test data. Thanks.
It is also possible to use IEntityTypeConfiguration and, on the builder, call builder.HasData(entity) (rough sketch below this comment). There are some caveats of course, but this lets you make the seeding part of the migration without editing the generated migration code (which is a really bad idea AFAIK).
One of the caveats is that you cannot set a navigation property; you have to use the Id properties for foreign keys etc.
That said, really nice note about making a separate program to run the migrations. I have personally been considering how to do that and just ended up doing the "not so smart thing" you mentioned. (It works alright, as I never have multiple instances running.) :)
Great work!
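The sketch mentioned above, assuming a simple Movie entity with an int key and a GenreId foreign key (both made up for the example):

    // using Microsoft.EntityFrameworkCore;
    // using Microsoft.EntityFrameworkCore.Metadata.Builders;
    public sealed class MovieConfiguration : IEntityTypeConfiguration<Movie>
    {
        public void Configure(EntityTypeBuilder<Movie> builder)
        {
            // HasData bakes these rows into the next migration as InsertData calls,
            // so no hand-editing of generated migration code is needed.
            builder.HasData(
                new Movie { Id = 1, Title = "John Wick", GenreId = 1 },  // FK via Id property, not a navigation
                new Movie { Id = 2, Title = "Terminator", GenreId = 2 }); // explicit key required for every seeded row
        }
    }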
THIS is the cleanest option! This way you can also generate script for seed (along with your migrations), and you can run your scripts however you just want.
I totally agree and couldn't see any pros to using the new method over this one. We already had the HasData method, which would create the seed code right inside the migration files. So, for example, for a new module in my app, if I require a new parameter in the parameters table, I can seed it right along with the new module's table migrations.
The only problem with this approach is that the Migration and Migration.Designer files get too long if you try to seed some really large tables or datasets. But I can live with that.
"John Wick and ... others". James has really fallen off 🤣
I think I would extend this with some config to choose between different seed data sets - e.g., pass in a command-line arg like --seed=... (rough sketch below)
As well as implement an unseed function to clear out seed data and any related / subsequently created data (without necessarily blowing away and recreating the database).
Having a range of seed scenarios to choose from that can be reset easily is very helpful for manual testing and demos
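Something along these lines is what I had in mind, as Program.cs-style top-level code (all seeder names are made up for illustration):

    // Rough sketch: pick a seed scenario from a command-line switch, e.g. "dotnet run -- --seed=demo".
    var seedArg = args.FirstOrDefault(a => a.StartsWith("--seed="))?.Split('=')[1];
    if (seedArg is not null)
    {
        await using var scope = app.Services.CreateAsyncScope();
        var db = scope.ServiceProvider.GetRequiredService<MovieDbContext>();
        switch (seedArg)
        {
            case "demo": await DemoSeeder.SeedAsync(db); break;       // hypothetical seeder classes
            case "load": await LoadTestSeeder.SeedAsync(db); break;
            case "none": await Unseeder.ClearAsync(db); break;        // clear seeded + derived data, keep schema
            default: throw new ArgumentException($"Unknown seed set '{seedArg}'");
        }
    }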
2:24 Shouldn't that DbContext instance be disposed along with the scope that you get it from?
That's what the `await using` is for
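Roughly this shape, from memory rather than the exact code in the video (MovieDbContext is a placeholder):

    // CreateAsyncScope returns an AsyncServiceScope; "await using" disposes the scope,
    // and the scoped DbContext resolved from it, once this block ends.
    await using var scope = app.Services.CreateAsyncScope();
    var dbContext = scope.ServiceProvider.GetRequiredService<MovieDbContext>();
    await dbContext.Database.EnsureCreatedAsync(); // triggers UseAsyncSeeding if configured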
First time I'm seeing Bogus; that's a cool-looking library.
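For anyone else seeing it for the first time, Bogus works roughly like this (the Movie shape here is made up for the example):

    // using Bogus;
    // A Faker<T> describes how to generate each property; rules run per generated instance.
    var movieFaker = new Faker<Movie>()
        .RuleFor(m => m.Id, f => f.Random.Guid())
        .RuleFor(m => m.Title, f => f.Lorem.Sentence(3))
        .RuleFor(m => m.YearOfRelease, f => f.Random.Int(1970, 2024));

    List<Movie> movies = movieFaker.Generate(100); // 100 fake movies ready to insert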
Would be great if you made a video on how to implement a separate project for migrations, and how to make the application run only once that container has finished
I like it. Streamlines things considerably.
Wasn't EnsureCreated a total no-no when using migrations?
That's a nice way to do seeding
we do something like that for tests.
How would you handle the cleanup of it?
Tests should use Testcontainers and ideally have their own DB per test for pure test isolation
@@JustinAdler A fresh DB for each test sounds like a bit of overkill IMHO. I would argue that it makes more sense to run the tests against a shared test-container database rather than a clean one. That's how your production is running too.
If your tests are failing because of sharing a test container, then you either have a problem in your test/fixture setup or there is a problem in the service code itself, in which case it's good to catch it.
@ Single test container, unique DB per test. When you start adding transactions to tests to prevent tests fighting each other, you get brittle tests. I want to have all my tests running at once to save time. Things blow up quickly if all the parallel tests are hitting a shared DB.
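For the record, a rough sketch of the "one container, one database per test" setup, assuming the Testcontainers.PostgreSql and Npgsql packages (the context and entity names are made up):

    // using Testcontainers.PostgreSql;
    // using Microsoft.EntityFrameworkCore;
    // using Npgsql;

    // One Postgres container for the whole test run...
    var postgres = new PostgreSqlBuilder().Build();
    await postgres.StartAsync();

    // ...but each test gets its own database on it, so tests can run in parallel
    // without transactions or cleanup fighting each other.
    var csb = new NpgsqlConnectionStringBuilder(postgres.GetConnectionString())
    {
        Database = $"test_{Guid.NewGuid():N}"
    };

    var options = new DbContextOptionsBuilder<MovieDbContext>()
        .UseNpgsql(csb.ConnectionString)
        .Options;

    await using var db = new MovieDbContext(options);
    await db.Database.EnsureCreatedAsync(); // creates the per-test database and schema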
This is great, but another thing we can do is pass --seed into the command and handle that; I think that approach makes things easier
I really enjoy other movies with person names.
So we have an additional abstraction and a "magic" callback just to have the code five lines higher. Seems like an awful tradeoff.
Migration-based seeding is the way to go. Another app or container just to initialize some data or migrate the data model seems like a lot of overhead and should be used carefully and only if necessary.
It might seem like overhead, until suddenly your app is in a "scale-out" situation. With scaling _out_, the website 'starts' each time for each scale-out unit. So Nick was saying that you don't want to keep seeding data each time, etc. Sure, there is a 'do we have any movies data' check there, but it's possible that two or more instances start at the same time and both see 'zero results' thanks to a nice race condition, and now both are trying to seed data at the same time. That sort of thing.
So it depends on your app/scenario/etc. and how you check for data, etc.
Even the EF team says you shouldn't run your migrations automatically in your app. Even if you AREN'T seeding data.
@@JustinAdler There can be two types of seeding: development seeding, to get the service usable for local work, and production seeding, which can be implemented in a different way. And the use of Bogus here points to the former.
I don't get the point of "UseAsyncSeeding". It seems like it gets triggered once you call "EnsureCreatedAsync", and if that is the case, why not just put your seeding logic right after that call? That has the benefit of not having to know or care about how you trigger the seeding, and you don't have to "remember" to use the async seeding method that he keeps reminding us about. I must be missing something here.
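I.e., the alternative being described is simply inline seeding, something like this sketch (context, entity and faker names are illustrative):

    await using var scope = app.Services.CreateAsyncScope();
    var db = scope.ServiceProvider.GetRequiredService<MovieDbContext>();

    await db.Database.EnsureCreatedAsync();

    // Plain inline seeding instead of the UseAsyncSeeding callback - nothing to register or remember.
    if (!await db.Movies.AnyAsync())
    {
        db.Movies.AddRange(movieFaker.Generate(100)); // e.g. a Bogus Faker<Movie>
        await db.SaveChangesAsync();
    }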
Why do I have the feeling that IntelliSense does not work for Nick as it should in this video?
Because it didn’t
Hard Core 🤟
Thanks Nick, love this approach.
Though I'm not sure if this has an API to run the migration from scripts embedded in the executing assembly, like the DbUp NuGet package can, as I prefer running database scripts for my migrations.
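For comparison, the DbUp pattern being referred to looks roughly like this (standard DbUp API; the connection string and assembly are whatever your project uses):

    // using System.Reflection;
    // using DbUp;

    // Runs the .sql files embedded in this assembly, in order, tracking which have already been applied.
    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .LogToConsole()
        .Build();

    var result = upgrader.PerformUpgrade();
    if (!result.Successful)
    {
        Console.Error.WriteLine(result.Error);
        Environment.Exit(1); // fail the migration runner if any script fails
    }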
That Bogus license is $10/year. Content creators constantly simp for Patreons and the like, while they won't support the tools they are using. Got to love the double standard.
As long as I can't inject services into the seeding method it's kinda useless to me.
And AFAIK there's no way to make that happen right now.
Seed from the main DbContext's `OnConfiguring` method, in which case it's as simple as injecting the required references into the DbContext
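Something like this, presumably (a sketch only; ISeedDataProvider is a made-up service to show the shape, and you'd register both it and the context in DI as usual):

    // using Microsoft.EntityFrameworkCore;
    public sealed class MovieDbContext : DbContext
    {
        private readonly ISeedDataProvider _seedData; // made-up service, constructor-injected like any other dependency

        public MovieDbContext(DbContextOptions<MovieDbContext> options, ISeedDataProvider seedData)
            : base(options)
        {
            _seedData = seedData;
        }

        public DbSet<Movie> Movies => Set<Movie>();

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // Because OnConfiguring runs on an instance, the seeding callback can close over injected services.
            optionsBuilder.UseAsyncSeeding(async (context, _, ct) =>
            {
                if (await context.Set<Movie>().AnyAsync(ct)) return;
                context.Set<Movie>().AddRange(_seedData.GetMovies());
                await context.SaveChangesAsync(ct);
            });
        }
    }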
20 years in the business, never heard the word "seeding" before in a database context.
You may be deaf
That seems strange
nice
"Correctly" and EF shouldn't be used together
Apart from the EF Core mechanism itself, this is generally a very bad approach. We are talking about throwing data directly into the database, whether it is to populate data for manual testing in the local environment or staging, or for automated testing.
The only correct way is to use the entire application infrastructure through its entry points (whether these are API endpoints or command handlers). Data in the database should be produced only by a full pass of the source data through the application layer, business layer and infrastructure layer. This is the only way to guarantee data consistency in terms of business logic rules and their correct validation.
As an example. Let's assume you have data representing appointments for a doctor's visit. In the way you present it, there is no problem creating all the appointments for the same hour, or appointments whose duration exceeds the maximum allowed time, or on the contrary, creating 1000 appointments lasting one minute on the same day.
This is a bad solution at its core. This type of approach can only be used for non-business data such as dictionary data. But unfortunately I see that it is common...
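In practice that means seeding goes through the same command/handler path the real feature uses, roughly like this (all names here are illustrative, not from the video):

    // Seed by replaying commands through the application layer, so the same validation
    // and business rules run as for real traffic - no direct DbContext inserts.
    await using var scope = app.Services.CreateAsyncScope();
    var handler = scope.ServiceProvider.GetRequiredService<ICreateAppointmentHandler>(); // made-up handler

    foreach (var command in SeedData.AppointmentCommands()) // made-up source of seed commands
    {
        var result = await handler.HandleAsync(command); // made-up result type with IsSuccess/Error
        if (!result.IsSuccess)
            throw new InvalidOperationException($"Seed rejected by business rules: {result.Error}");
    }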
The only problem with using the entry points is that you may need to set up data in an invalid state, to verify you are guarding against bad data. I have seen plenty of cases where the business value to fix bad data wasn't there, as the modifications required would be extensive, and the preferred solution was to guard against it or code around it. Not ideal, by any means. If you have never experienced that then I envy you.
@@renynzea Honestly, I can't imagine such a case. Input data validation and business logic rules are some of the most important things and should ensure that the state of the system should always be consistent and compliant with the requirements. Do you want to assume that the data you have in the system may be incorrect? How do you even imagine that? Do you want to validate data retrieved from the database every time it is read? Data can only be correct in one way, but incorrect in thousands of ways. You will never be able to protect yourself from every possible case. If such a case ever occurs, it is an error/exception and should be treated in a special way (hence the exception). If you assume that the user will provide incorrect data, then ok, you validate it. But if you assume that the system you wrote produces incorrect data and you want to protect yourself from it... well, how can you be sure that the code you created to protect it is not also flawed... a vicious circle...
@@Lukasz-1985 I agree with you. That is how it should be. But it sounds like you have been fortunate to only work on applications that don't have crap data and/or crap code. Some of us live in the world where devs get handed tasks, they fail at those tasks, no one caught the error in review, the bug gets out into the wild, and then the team has to fix it somehow. And often the fix is the path of least resistance. Cause time, money, and resources.
For a new application, or any application that has managed to stay pristine I agree, your approach is way, way better.
@@renynzea Oh no, unfortunately I am not so lucky to work only in projects where everything is under control :) I have worked on many projects that were shitty, but as far as I remember the cause of errors was never the data itself. Usually, incorrect data was the result of errors in business logic. Then the only solution was simply to fix the original errors in the code + scripts updating the data if possible. There is no other preventive measure for this. That is why I am allergic to all intermediate, unprofessional solutions that are repeated in subsequent new projects because the point is that we learn from the mistakes, preferably those of others :)
@@renynzea It is not always a bug either; sometimes it is just that changing requirements over time result in something that used to be acceptable no longer being acceptable.
Hi Nick, I need your suggestion. In this growing AI era, how does a normal developer like us need to grow? What's the process? What's the key?
1. Don't be an idiot.
2. ????
3. Profit.
Literally, those are the steps. AI ain't doing shit; you need an expert to direct it, analyze it, and so on. Go watch Internet of Bugs if that doesn't make sense to you.
Literally the same way as has always been done. AI should not be part of your learning.
All I know is that you need a good seed to start
AI is a tool. Learn to use it effectively, just like every other tool.