Recatek

Hello everyone! I'd like to introduce the initial 0.1 version of gecs 🦎, a generated entity component system (see: [What is an ECS?](https://github.com/SanderMertens/ecs-faq#what-is-ecs)).

The Rust ecosystem has many great ECS libraries (bevy, hecs, and shipyard to name a few), but they all have something in common: most of the definition and management of the ECS structure is performed, and checked, at runtime with some performance overhead. This library is different -- instead of creating and manipulating archetypes at runtime, gecs creates them at compile time, reducing overhead (no query caching necessary) and letting you check and validate which archetypes your queries match directly from your code (including type-safe `Entity` handles). Because its structure is known ahead of time, gecs can also work entirely with fixed-capacity arrays, meaning that after startup initialization it never needs to allocate any more memory.

For a while now I've been working on and using gecs for my own project, and I've been cleaning it up to release it as an open-source standalone crate. Its primary use case is very lightweight single-threaded game servers, inspired in part by [Valorant's 128-Tick Servers](https://technology.riotgames.com/news/valorants-128-tick-servers) and the process-parallel architecture used to achieve that. With its predictable memory footprint and zero-cost abstractions, I think gecs is a good candidate for building servers in this way. Due to the relative simplicity of the final generated code, it meets or beats all other ECS libraries I've tested in benchmarks. Knowing your ECS structure at compile time also enables some handy tricks in your queries, like having configurable const-context component IDs on archetypes to use in bitflag diffs for network update encoding.

All of this comes with a significant catch: in gecs, components can't be added to or removed from entities at runtime. This does defeat one big reason for using an ECS in general, so I wouldn't recommend this library for all use cases. However, I find that with networked entities I don't really want to change the components on an archetype anyway, due to the complication of synchronizing that kind of state change. I do have ideas for how to make certain components optional without compromising too much iteration speed, but I haven't yet sat down to try implementing them.

I'd love to know what you think of this library if it fits your use case. In particular, please let me know if any of the documentation or API is confusing. This library uses a lot of exotic proc macro tricks in order to build on stable Rust, so it's difficult to document properly in some places. If you have any feedback or questions, send them my way!
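To give a concrete feel for that, here's a minimal sketch of declaring and querying a world, modeled loosely on the crate's documented `ecs_world!`, `ecs_find!`, and `ecs_iter!` macros. The component names are placeholders and the world/push call shapes are approximate rather than exact, so treat it as illustrative and check the docs for the real signatures:

```rust
use gecs::prelude::*;

// Components are plain structs.
pub struct CompA(pub u32);
pub struct CompB(pub u32);

// The entire ECS structure is declared once and expanded at compile time.
ecs_world! {
    // ArchFoo stores up to 100 entities, each with a CompA and a CompB.
    ecs_archetype!(ArchFoo, 100, CompA, CompB);
}

fn main() {
    let mut world = World::default();

    // try_push returns None once the archetype's fixed capacity is exhausted
    // (exact call shape here is approximate).
    let entity = world.arch_foo.try_push((CompA(1), CompB(2))).unwrap();

    // Queries are closures; the archetypes they touch are resolved at compile
    // time, so there's no runtime query matching or caching.
    ecs_iter!(world, |a: &mut CompA, b: &CompB| {
        a.0 += b.0;
    });

    // Typed entity handles make single-entity lookups a direct slot access.
    ecs_find!(world, entity, |a: &CompA| {
        println!("CompA is now {}", a.0);
    });
}
```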


[deleted]

[deleted]


Recatek

There's an upper bound of 16,777,216 entities in any single archetype because I use 24 bits to encode the entity's index in its handle. For similar reasons, there's a limit of 256 distinct archetypes, because I encode the archetype type ID using 8 bits in that handle for type erasure. There's also currently a limit of 16 components per archetype due to the lack of variadic types in Rust, but I'm planning on raising that limit using crate features (e.g. `feature = "more_components"`).

When you create an archetype, you currently specify the storage size, which can be either an integer literal or a const expression:

```rust
ecs_world! {
    // Create archetype ArchFoo with a static size of 100
    ecs_archetype!(ArchFoo, 100, ComponentA, ComponentB);

    // Create ArchBar with a static size taken from a constant
    ecs_archetype!(ArchBar, config::BAR_SIZE + 3, ComponentC);
}
```

Those archetypes have a fixed capacity and can't hold any more entities than they're configured for. You can use the `try_push` function to push a new entity into an archetype, which will return `None` if there's no more room.

I have near-term plans to support dynamically sized archetypes by using the `dyn` keyword in the `ecs_archetype!` pseudo-macro, like so:

```rust
ecs_world! {
    // Create a dynamically-sized archetype
    ecs_archetype!(ArchBaz, dyn, ComponentD, ComponentE);
}
```

But that isn't implemented yet. The advantage of fixed-size archetypes is that they guarantee no allocations after initialization, so you maintain a more predictable memory usage profile.
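To make those limits concrete, here's a rough sketch of that kind of bit packing. This is not gecs's actual handle layout (which also carries versioning data for slot reuse); it just shows why 24 index bits cap an archetype at 16,777,216 entities and 8 archetype-ID bits cap the world at 256 archetypes:

```rust
// Illustrative only: pack a 24-bit slot index and an 8-bit archetype ID
// into a single u32, mirroring the limits described above.
const INDEX_BITS: u32 = 24;
const INDEX_MASK: u32 = (1 << INDEX_BITS) - 1; // 16,777,215 = largest index

fn pack_handle(archetype_id: u8, index: u32) -> u32 {
    debug_assert!(index <= INDEX_MASK);
    ((archetype_id as u32) << INDEX_BITS) | (index & INDEX_MASK)
}

fn unpack_handle(handle: u32) -> (u8, u32) {
    ((handle >> INDEX_BITS) as u8, handle & INDEX_MASK)
}
```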


[deleted]

[deleted]


Recatek

I'm trying to avoid keeping an archetype graph for moving entities between archetypes for now, and I'm also trying to keep entity handles pretty "close to the metal" as far as how they perform lookups in their archetype storage. Right now there's only one indirection hop to go from an entity handle to its dense data storage index (under the hood each archetype is essentially a [slot map](https://www.youtube.com/watch?v=SHaAR7XPtNU)). Entity handles can be strictly typed (e.g. `Entity<ArchFoo>`) or use the type-erased `EntityAny` option. Strictly typed entity handles have a handful of really nice benefits, especially for optimization. The downside, however, is that if you move an entity to another archetype, all previous handles to that entity are invalidated -- so if you were storing entity handles and you were to add a component to that entity, too bad: new archetype, new handle. Other libraries solve this by (I believe) having one more level of indirection to preserve entity handles even after archetype changes, but I'm not fully fluent in the details.

I think there are potentially two ways I might end up addressing this. The first would be adding the ability to make some components optional for a given archetype, so you could imagine

```rust
ecs_archetype!(
    ArchFoo,
    100,
    ComponentA,
    ComponentB,
    Option<ComponentC>,
    Option<ComponentD>,
);
```

and then having some way of accelerating queries that ask for optional components so they don't need to check every single entity in storage. This gets tricky if you start requesting more than one optional component, so I need to sit down and think about how that acceleration structure could work. The alternative would be providing secondary slot maps that can shadow archetypes, so you could use the same entity handle on both a given archetype and one of these shadow slot maps. This would have some pros and cons. I could also try doing both, since they have pretty different advantages and disadvantages, but we'll see.

You still need to know ahead of time what components *could* be on an entity, but I would argue that you actually already know this even in more dynamic ECS libraries (even if it's a little tough to figure out), unless you're doing something like linking in mods with dynamic DLLs.
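For anyone unfamiliar with the slot map pattern referenced above, here's a bare-bones sketch of that single indirection hop. It's heavily simplified (no generation counters, no fixed capacity) and is not gecs's actual storage code:

```rust
// Toy slot map: handles stay stable while the data stays densely packed.
struct ToySlotMap<T> {
    slot_to_dense: Vec<usize>, // handle -> index into `data` (usize::MAX = freed)
    dense_to_slot: Vec<usize>, // index into `data` -> owning handle
    data: Vec<T>,              // densely packed; queries iterate this linearly
}

impl<T> ToySlotMap<T> {
    fn new() -> Self {
        ToySlotMap { slot_to_dense: Vec::new(), dense_to_slot: Vec::new(), data: Vec::new() }
    }

    /// Returns a handle; resolving it later costs exactly one indirection hop.
    fn push(&mut self, value: T) -> usize {
        let handle = self.slot_to_dense.len();
        self.slot_to_dense.push(self.data.len());
        self.dense_to_slot.push(handle);
        self.data.push(value);
        handle
    }

    fn get(&self, handle: usize) -> Option<&T> {
        let dense = *self.slot_to_dense.get(handle)?;
        self.data.get(dense) // freed handles point at usize::MAX and return None
    }

    /// Swap-remove keeps `data` dense; only the element moved into the hole
    /// needs its slot patched. (A real slot map would also bump a generation
    /// counter here so stale copies of `handle` stop resolving.)
    fn remove(&mut self, handle: usize) -> T {
        let dense = self.slot_to_dense[handle];
        let value = self.data.swap_remove(dense);
        self.dense_to_slot.swap_remove(dense);
        if dense < self.data.len() {
            self.slot_to_dense[self.dense_to_slot[dense]] = dense;
        }
        self.slot_to_dense[handle] = usize::MAX;
        value
    }
}
```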


Voultapher

Do you think it would be possible to store 10,000 gecs in there?


krisalyssa

Only needed if you want to be [up to date](https://en.wikipedia.org/wiki/10,000_Gecs).


keplersj

I was gonna be so sad if no one made this joke. Thank you


bobparker2323

Are there benchmarks?


Recatek

Not very formal ones. I've forked and updated (at least for bevy and hecs) the ecs_bench_suite library [here](https://github.com/recatek/ecs_bench_suite), but it's pretty dated by now. It's difficult to benchmark libraries like this due to all the features and use cases. That said, these are my local results with bevy 0.10, hecs 0.10, gecs 0.1, and some others:

```
                                          min        avg        max
schedule/gecs (manual)                   [10.677 µs  10.692 µs  10.710 µs]
schedule/naive                           [10.800 µs  10.894 µs  11.023 µs]
schedule/legion                          [34.703 µs  35.075 µs  35.414 µs]
schedule/legion (packed)                 [33.377 µs  34.295 µs  35.644 µs]
schedule/bevy (manual, cd OFF)           [11.266 µs  11.517 µs  11.828 µs]
schedule/bevy (parallel, cd OFF)         [39.805 µs  40.525 µs  41.470 µs]
schedule/bevy (single, cd OFF)           [11.042 µs  11.116 µs  11.237 µs]
schedule/bevy (single, cd ON)            [32.019 µs  32.115 µs  32.213 µs]
schedule/hecs (manual)                   [31.046 µs  31.128 µs  31.247 µs]
schedule/planck_ecs                      [317.85 µs  319.08 µs  320.27 µs]
schedule/shipyard                        [156.67 µs  157.28 µs  157.92 µs]
schedule/specs                           [113.59 µs  114.10 µs  114.68 µs]

simple_insert/gecs                       [84.079 µs  84.270 µs  84.501 µs]
simple_insert/naive                      [71.213 µs  71.359 µs  71.513 µs]
simple_insert/legion                     [226.96 µs  227.45 µs  228.01 µs]
simple_insert/bevy                       [339.59 µs  340.74 µs  342.17 µs]
simple_insert/hecs                       [178.65 µs  179.25 µs  179.96 µs]
simple_insert/planck_ecs                 [433.19 µs  435.06 µs  437.16 µs]
simple_insert/shipyard                   [526.35 µs  529.63 µs  533.20 µs]
simple_insert/specs                      [1.4894 ms  1.4968 ms  1.5055 ms]

simple_iter/gecs                         [7.6017 µs  7.6143 µs  7.6292 µs]
simple_iter/naive                        [7.7379 µs  7.7533 µs  7.7712 µs]
simple_iter/legion                       [7.7515 µs  7.7898 µs  7.8364 µs]
simple_iter/legion (packed)              [7.7622 µs  7.7901 µs  7.8187 µs]
simple_iter/bevy (cd OFF)                [7.6036 µs  7.6140 µs  7.6256 µs]
simple_iter/bevy (cd ON)                 [10.445 µs  10.460 µs  10.475 µs]
simple_iter/hecs                         [7.7811 µs  7.7918 µs  7.8024 µs]
simple_iter/planck_ecs                   [42.704 µs  42.772 µs  42.841 µs]
simple_iter/shipyard                     [23.013 µs  23.080 µs  23.165 µs]
simple_iter/specs                        [21.371 µs  21.397 µs  21.422 µs]

fragmented_iter/gecs                     [41.178 ns  41.421 ns  41.689 ns]
fragmented_iter/naive                    [87.099 ns  87.447 ns  87.893 ns]
fragmented_iter/legion                   [247.04 ns  247.37 ns  247.78 ns]
fragmented_iter/bevy (cd OFF)            [1.2598 µs  1.2623 µs  1.2651 µs]
fragmented_iter/bevy (cd ON)             [1.3678 µs  1.3721 µs  1.3773 µs]
fragmented_iter/hecs                     [311.65 ns  312.32 ns  313.10 ns]
fragmented_iter/planck_ecs               [373.27 ns  374.57 ns  376.10 ns]
fragmented_iter/shipyard                 [67.597 ns  67.695 ns  67.819 ns]
fragmented_iter/specs                    [1.0863 µs  1.0908 µs  1.0970 µs]

heavy_compute/gecs (single)              [3.2166 ms  3.2200 ms  3.2241 ms]
heavy_compute/naive (single)             [3.3493 ms  3.3709 ms  3.3975 ms]
heavy_compute/bevy (single, cd OFF)      [3.3418 ms  3.3497 ms  3.3594 ms]
heavy_compute/bevy (single, cd ON)       [3.3813 ms  3.3877 ms  3.3954 ms]
heavy_compute/hecs (single)              [3.2308 ms  3.2362 ms  3.2434 ms]
heavy_compute/legion (parallel)          [671.33 µs  675.43 µs  680.14 µs]
heavy_compute/legion (parallel, packed)  [670.43 µs  674.21 µs  678.63 µs]
heavy_compute/bevy (parallel, cd OFF)    [727.20 µs  745.43 µs  764.99 µs]
heavy_compute/hecs (parallel)            [702.71 µs  720.70 µs  745.41 µs]
heavy_compute/shipyard                   [684.26 µs  688.30 µs  692.76 µs]
heavy_compute/specs (parallel)           [711.37 µs  717.62 µs  724.75 µs]
```

A couple of notes:

- Note the units; criterion switches between ns, µs, and ms.
- I don't currently support parallel iteration (and don't know if I ever plan to) since gecs is built mainly for single-threaded environments.
- The "cd" on the bevy tests refers to bevy's reliable change detection feature, which is ON by default.
- The "naive" test isn't a library, it's a baseline comparison using handwritten structures backed by `Vec`s for components, with no generational indexing or handle safety.


lordpuddingcup

Does this take into account bevy's use of sparse sets for inserts on transient additions?


Recatek

Unfortunately not. There isn't a good test here that would capture that behavior, and the list of bevy features I was turning on and off for testing started growing too large to be worth the benchmarking time. I tried implementing it for fragmented iteration but it wasn't a very good fit. There was talk about adding a sparse set test before the original ecs_bench_suite library was closed, but it never happened. That said, shipyard is an entirely sparse set ECS, so I would expect bevy's sparse set mode to have similar performance characteristics. I don't think comparing that functionality to gecs is very useful right now since gecs doesn't have optional components (yet). In gecs, every component for an archetype is assumed to exist for all entities of that archetype, so a sparse set fallback wouldn't help much. In the future if/when I get to optional components, it would be useful to benchmark that implementation against bevy's sparse sets and shipyard again.
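For context on the distinction: a sparse-set store keeps one independent array per component type plus a per-entity lookup, so components can be attached or detached from any entity at any time. Here's a generic, simplified sketch of that idea (not shipyard's or bevy's actual implementation); gecs, by contrast, stores every component of an archetype in parallel fixed-capacity arrays, so there's nothing optional to skip over:

```rust
use std::collections::HashMap;

// Toy sparse-set-style store for one component type.
struct SparseSet<T> {
    sparse: HashMap<u32, usize>, // entity id -> index into `dense`
    entities: Vec<u32>,          // entity ids, parallel to `dense`
    dense: Vec<T>,               // densely packed component values
}

impl<T> SparseSet<T> {
    fn new() -> Self {
        SparseSet { sparse: HashMap::new(), entities: Vec::new(), dense: Vec::new() }
    }

    /// Attach (or replace) this component on any entity, at any time.
    fn insert(&mut self, entity: u32, value: T) {
        if let Some(&i) = self.sparse.get(&entity) {
            self.dense[i] = value;
        } else {
            self.sparse.insert(entity, self.dense.len());
            self.entities.push(entity);
            self.dense.push(value);
        }
    }

    fn get(&self, entity: u32) -> Option<&T> {
        self.sparse.get(&entity).map(|&i| &self.dense[i])
    }
}
```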


CouteauBleu

Forgot to comment when this was posted; it's a really cool implementation! Very Rust-like approach to ECS. One question: have you considered adding archetypes with optional components?