Brilliant video!
Thanks, Bill :)
Even if you don't use an ECS, it is still probably a good idea to follow "composition over inheritance" as a design rule. That gives most of the non-performance-related benefits of ECS without an ECS.
ECS is more or less just a specific optimization you can choose to do if you follow that design rule.
If it later turns out you have performance problems, you can pretty easily transition to an ECS (or directly perform the "array of structs" -> "struct of arrays" transformation yourself).
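That "array of structs" -> "struct of arrays" transformation can be sketched in plain C; the entity fields and function names here are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ENTITIES 1024

/* "Array of structs": each entity's fields live together. */
typedef struct { float x, y; float health; } Entity_AoS;

/* "Struct of arrays": each field gets its own contiguous array, so a
   pass that only touches x streams through memory linearly. */
typedef struct {
    float x[MAX_ENTITIES];
    float y[MAX_ENTITIES];
    float health[MAX_ENTITIES];
} Entities_SoA;

/* The same update written against both layouts. */
static void move_aos(Entity_AoS *es, size_t n, float dx) {
    for (size_t i = 0; i < n; ++i) es[i].x += dx;
}

static void move_soa(Entities_SoA *es, size_t n, float dx) {
    for (size_t i = 0; i < n; ++i) es->x[i] += dx;
}
```

The point is that callers barely change; the loop body goes from `es[i].x` to `es->x[i]`, which is why retrofitting this later is usually tractable.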
I think the key idea is to know your constraints and build only what you need.
General purpose systems and large teams have very different constraints and requirements compared to a small team and highly specific system.
The real trick is building only what you need, minimally and effectively, but in such a way that it leaves room to expand in the future if necessary. You have to keep the bigger picture in mind even if you're not yet building it, or you risk painting yourself into a corner and having to rewrite.
Very clear video, thank you. The examples make it easy to follow. Maybe the code samples could be larger for us mobile watchers though 😅
Thank you! I'll make sure they are larger next time
Very good advice for general game programming, and nice to have the example in Odin. Megastructs are not only for game programming; I think most software would benefit from this approach. Instead of classes, object instances, and their methods, you just have chunks of data that allow for a large set of behaviour, and functions operating on that data.
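A minimal sketch of that megastruct idea in C, assuming a hypothetical entity with a couple of optional behaviours selected by flags (all names invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* A "megastruct" entity: one struct carrying fields for every behaviour
   it might need, with flags selecting which ones are active. */
typedef struct {
    bool  active;
    bool  has_velocity;
    bool  has_health;
    float x, y;
    float vx, vy;
    int   health;
} Entity;

/* One plain function operating on the data; no classes, no virtual calls. */
static void update_entity(Entity *e, float dt) {
    if (!e->active) return;
    if (e->has_velocity) {
        e->x += e->vx * dt;
        e->y += e->vy * dt;
    }
    if (e->has_health && e->health <= 0)
        e->active = false; /* dead entities get skipped next frame */
}
```

The trade-off is memory per entity versus uniformity: every entity pays for every field, but the update loop is one flat pass over one array.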
Could you make the code a bit bigger in future videos? It's hard to read on mobile screens.
I believe the "distinct int" can be done in C, by just wrapping an int in a struct.
typedef struct
{
    int id;
} Entity_Id;

void do_something_with_an_entity(Entity_Id eid)
{
    for (Entity_Id i = {0}; i.id < 10; ++i.id)
    {
        // ...
    }
}
Potentially enum EntityId : int {}, though I don't recall if that works well in plain C.
You don’t even need to do this
typedef int entity;
@@snesmocha no, that's still an int, i.e. a weak type. The compiler won't help
wish typedef covered this case
@@snesmocha That's just an alias for an int. A struct is a complex type, and the type checking will work.
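A small sketch of the difference, with a hypothetical Texture_Id added for contrast:

```c
#include <assert.h>

typedef struct { int id; } Entity_Id;
typedef struct { int id; } Texture_Id;

static int entity_ids_equal(Entity_Id a, Entity_Id b) { return a.id == b.id; }

/* Fine:
 *   entity_ids_equal((Entity_Id){3}, (Entity_Id){3});
 *
 * Compile errors, which a plain `typedef int entity;` would NOT catch:
 *   entity_ids_equal(3, 3);
 *   entity_ids_equal((Texture_Id){3}, (Entity_Id){3});
 */
```

With the bare typedef, any int (including a Texture_Id's underlying value) silently converts; with the struct wrapper, mixing up id types fails at compile time.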
Am I the only one who thought this was about Amazon ECS? 😅 Great video either way. Always nice to learn more about game dev 👍
Spot on explanation, I would love to see more of those. Great channel btw!
Great video!
Thanks!
I believe Odin's [dynamic] has stable pointers, so even if it grows you don't have the problem of losing the reference to it
Nice video, Dylan!
How do you multithread it? If you can't reason about your data, it's hard to scale multithreading (especially deterministically). I get that most games don't need multithreading, but for something like a mass-scale RTS it would be nice...
Yeah, for multithreading, you'd be looking at bespoke systems that don't fit here.
I've only multithreaded rendering before, so I'm only speculating here...
You could split routines into threads if they don't affect each other.
Let's say physics in one, updating status effects in another.
Or you could use some spatial data structure to query entities near each other and send the results to a job queue.
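The "independent routines on separate threads" idea might look roughly like this with POSIX threads; the two systems and their data are invented for illustration, and they can run unlocked only because each touches disjoint arrays:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

#define N 1000

/* Hypothetical per-entity data, each array owned by exactly one system. */
static float positions[N];
static int   status_timers[N];

/* Each system reads and writes only its own arrays, so the two
   threads never race against each other. */
static void *physics_system(void *arg) {
    (void)arg;
    for (size_t i = 0; i < N; ++i) positions[i] += 1.0f;
    return NULL;
}

static void *status_system(void *arg) {
    (void)arg;
    for (size_t i = 0; i < N; ++i)
        if (status_timers[i] > 0) status_timers[i]--;
    return NULL;
}

/* One frame: fork the independent systems, then join before anything
   that needs both results (e.g. rendering) runs. */
static void run_frame(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, physics_system, NULL);
    pthread_create(&b, NULL, status_system, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
}
```

This only scales as far as you have genuinely independent systems; once two systems want the same data, you're into the job-queue / double-buffering territory discussed below in the thread.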
(Messed up the sizes, which is really important for this, so changing it. I said it was 7 megs, but it's 28 megs (7 * 4).)
How to scale it depends on needs (which you've supplied). So, for a mass-scale RTS, I built a prototype of a chunk-based "ECS" (though this idea would work for the other 2 options also). Each block of data was a compile-time constant size, 4096 being really solid.
My ECS was also double buffered and allowed reads from last frame's data by any thread, AND it made things faster for me (bonus!)
Then, for my systems I basically did:
foreachcore( process chunk_stride_start+core_i )
{ I say "ECS", but it was a chunked structure-of-arrays system per archetype (much like most ECSs are under the hood) }
It's part of my billion-active-entity-updates-per-second (prototype) project. I got there on my AMD 5950X + 3090 (though the video card was mostly just idling).
The only entities I had in at the time, however, were projectiles flying all over, so more experimentation is needed.
{ I don't personally care about strict deterministic behaviour (which is a long story), but given that both old and new states are stored, it should be very doable. I haven't put a huge amount of thought into it yet, though }
A couple of big takeaways: gameplay data is very often (even for an RTS) a very small portion of all the data a game needs to deal with. So doing things like double buffering it (or potentially keeping even more copies) is very often not a huge deal.
Let's say you had a million units, like actual individually instructable units (not like what Total War does). Even at full res for position, you're going to have 3 floats for position and 2 for orientation (4 if you must). So that's 7 floats * 4 bytes per unit, or 28 megs on the outside (and that can absolutely be shrunk if you need more units / less data), so 56 megs double buffered for updating its Pos+Orient data. One 2048x2048 texture is 16 megs by itself (granted, you'll be storing that as something like DXTC or whatever, but still).
Going past 1 million units is still doable up to about 10 million, but over 10 or 20 million I'd start coming up with ways to process groups of them more like particles than units.
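The arithmetic in that comment, written out as a checkable sketch (1 million units, 7 floats each, 4 bytes per float, double buffered; the uncompressed RGBA8 texture is for comparison):

```c
#include <assert.h>
#include <stddef.h>

enum { UNITS = 1000 * 1000, FLOATS_PER_UNIT = 7 /* 3 position + 4 orientation */ };

/* One buffer of per-frame Pos+Orient data: 1M * 7 * 4 bytes = 28 MB. */
static size_t per_frame_bytes(void) {
    return (size_t)UNITS * FLOATS_PER_UNIT * sizeof(float);
}

/* Double buffered (old + new state): 56 MB. */
static size_t double_buffered_bytes(void) {
    return 2 * per_frame_bytes();
}

/* One uncompressed 2048x2048 RGBA8 texture: 16 MiB, in the same ballpark. */
static size_t texture_bytes(void) {
    return (size_t)2048 * 2048 * 4;
}
```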
@@marcsh_dev My units were easily 2 KB in size for a basic unit implementation (for a game with mechanics, not a battle demo; of course this is not including game data, which was just a pointer / unit id into another table somewhere), and that's not even with all the behaviors I currently support in Cosmonarchy
Not sure about chunks btw, I haven't tried or seen such an implementation. I tried making an ECS, but it got pretty complicated pretty quickly when I wanted to start implementing deterministic multithreading (which I need for lockstep multiplayer)
2 KB of dynamically changing data per frame?
That's the data I'm talking about. You can have all sorts of static / per-unit data, and only occasionally changing data, in different spots.
The point of the double buffering is solely for the small per-frame subset of data.
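A minimal sketch of double buffering just that per-frame subset, with invented field names: readers always see last frame's completed buffer while the current one is written, and a flip swaps roles at the frame boundary.

```c
#include <assert.h>

#define N 4096

/* Only the small, every-frame-mutated subset lives here; static and
   rarely-changing data stays single-buffered elsewhere. */
typedef struct { float x[N]; float y[N]; } Frame_Data;

typedef struct {
    Frame_Data buffers[2];
    int current; /* index of the buffer being written this frame */
} Double_Buffered;

/* Any thread may read last frame's buffer while this frame is written. */
static const Frame_Data *read_buffer(const Double_Buffered *d) {
    return &d->buffers[1 - d->current];
}

static Frame_Data *write_buffer(Double_Buffered *d) {
    return &d->buffers[d->current];
}

/* Called once per frame, after all writers have finished. */
static void flip(Double_Buffered *d) {
    d->current = 1 - d->current;
}
```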
Re chunks:
If you have a flat array, it's relatively simple. You can even do it with a single command in many languages and systems, i.e. they often have a 'parallel_for( my_huge_array, num_cores )'.
It's also easy enough to go from something like std::vector to motive::vector_chunk, and just have a set of pointers to vector chunks.
It is slightly tricky to add and remove entities from a chunk, but not hugely different from doing it in a big array.
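A rough sketch of a fixed-size chunk with O(1) add and swap-remove, assuming iteration order doesn't need to be preserved (the 4096 capacity echoes the size mentioned above; a full container would hold a set of pointers to these):

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK_CAP 4096

typedef struct {
    float  data[CHUNK_CAP]; /* one component array; real chunks hold several */
    size_t count;
} Chunk;

static void chunk_add(Chunk *c, float v) {
    if (c->count < CHUNK_CAP)
        c->data[c->count++] = v;
}

/* Swap-remove: move the last element into the hole and shrink.
   O(1), but it does not preserve order, the same trade-off you'd
   make removing from one big flat array. */
static void chunk_remove(Chunk *c, size_t i) {
    if (i < c->count)
        c->data[i] = c->data[--c->count];
}
```

Processing then becomes "hand each full chunk (or a stride of chunks) to a core", which is the parallel_for shape described above.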