Rant: Entity systems and the Rust borrow checker ... or something.

  • Published 13 Sep 2018
  • Commentary on the closing keynote from RustConf 2018, which you can view here:
    • RustConf 2018 - Closin...
  • Gaming

COMMENTS • 587

  • @DigitalDuo2211
    @DigitalDuo2211 5 years ago +228

    Jonathan lives in The Witcher's universe; he went from day-time to night-time in 35 minutes!

  • @fschutt.maps4print
    @fschutt.maps4print 5 years ago +394

    I've worked roughly 16 months fulltime in Rust now. And I have to say - I do agree with most of the points you made. In 30k lines, the borrow checker has saved my ass 3 - 10 times in a way where I in hindsight knew that those bugs would have been very hard to catch. The other times it was more or less just annoying because I knew that some things were safe, but the borrow checker was overly restrictive or didn't "get" what I wanted to express, so I had to refactor my code just to make it pass the checker - but that doesn't mean that the checker "forced me to write good code", no, the borrow checker was just too dumb to get what I mean and I had to write more syntax to make it understand.
    I also don't hit the borrow checker problems that much anymore, but yesterday was an exception for example where I had a really nasty borrowing problem. I've come to realize that Rust takes a long-ass time to write but once it's written, it's pretty solid and you don't run into last-minute-please-fix-issues. That is both a good and a bad thing. On the one hand, it allows you to schedule and plan things somewhat easily. On the other hand, productivity in Rust is pretty low, because many libraries are simply missing because good code, which Rust forces you to write, ( *not necessarily courtesy of the borrow checker* ) takes long to write and there is a lot of friction. Also compile times are abysmal, because of the whole type-checking (yes, lots of generics and macros take time to type-check), which is simply something that decreases the productivity.
    I did not use Rust because of the borrow checker though, I find it nice and certainly useful (esp. in regards to preventing iterator invalidation), and it's a new, unique feature in PL design (at least I am not aware of any other language that has a borrow checker) but people give it too much credit. There are other features that Rust has that prevent large swaths of bugs - discriminated unions and exhaustive pattern matching (versus switch on a number with fallthrough default case), or Option / Result error handling - those things are great.
    On the other hand, I don't write game engines, but I knew that I have to possibly maintain my code for decades. Which is one of the reasons I chose Rust for a project. In hindsight, not the best move, because programming is in the end, a business and you need to ship on time. In hindsight, I'd rather deal with debugging a few NullPointerExceptions rather than spending 6 months writing a certain library that doesn't exist in Rust. So I think that if you want to get your language off the ground, libraries (esp. if game-focused) are a huge deal.
    There is certainly truth to the argument that Rust's way of preventing mistakes via borrow checking is just *one* way, not *the* way, to prevent logical mistakes. Because in the end, you don't want to prevent pointer mistakes, you want to prevent logic mistakes (which express themselves as pointer misuse). For example, a bug I had in Rust was that I forgot to update a cache after reloading a file - the borrow checker passed fine, but it didn't know about my application logic or the fact that I had to update my cache. You *can* have two mutable pointers to some memory just fine, it's just that it's a common logic mistake to update one and forget that you have another pointer. But that's, at its core, a logic mistake, and Rust's mutability checking is just one way to guard against one class of bugs.
    However, Rust gives me good tools to at least try to prevent these mistakes, mostly by leveraging the type system. For example, if I have a function B that needs a struct C, and the only way to get a struct C is by calling function A first - then I have a logical "chain" where I can't forget to call function A before function B (Rust's move semantics help a bit here). Otherwise it wouldn't compile. Or (in the example) I want to render something, but need an "UpdateKey" struct first, and the only way to get that is by calling the generational-update function. This is how I have learned to build APIs that are essentially fool-proof, so that you shouldn't be able to forget to update something before you call something else.
    But overall, I still think Rust is a decent language that at least tries to push PL research forward and this video highlighted a good argument, so thanks for this video. I am hopeful to see how Jai pans out.
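
The "chain" pattern described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`World`, `UpdateKey`), not the commenter's actual code: the key type has no public constructor, so the compiler itself enforces that the update step runs before rendering.

```rust
// The key's field is private to this module, so outside code cannot
// construct an UpdateKey by hand; calling `update` is the only way.
pub struct UpdateKey { _private: () }

pub struct World { pub frame: u64 }

impl World {
    // Step A: the sole source of UpdateKey values.
    pub fn update(&mut self) -> UpdateKey {
        self.frame += 1;
        UpdateKey { _private: () }
    }

    // Step B: consumes the key by value (move semantics), so one
    // update authorizes exactly one render.
    pub fn render(&self, _key: UpdateKey) -> u64 {
        self.frame
    }
}

fn main() {
    let mut world = World { frame: 0 };
    let key = world.update();      // forgetting this line would not compile
    let frame = world.render(key); // `key` is moved; it can't be reused
    println!("rendered frame {frame}");
}
```

Calling `render` twice with the same key, or without calling `update` first, is a compile error, which is exactly the "fool-proof API" property described above.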

    • @trolledwoods377
      @trolledwoods377 4 years ago +5

      @Sam Claus Not entirely sure that the optimization is super great though; there was a talk about the Stylo CSS engine for Firefox, which was written in Rust, and there he mentioned how much memory those Option types use. I think those optimizations are made only in very specific cases, not generally. That said, I do like how Rust does a lot of things, and the borrow checker is pretty neat when you're not used to manually deallocating things in C++ but instead come from C#

    • @richardgomes5420
      @richardgomes5420 4 years ago +7

      There's no way the borrow checker knows what you are intending to do. It can only know what you've actually done. If what you've actually done is unsafe... that is an error. In this case, you have to redo what you were originally intending to do in some sort of safe way. This is not really "turning off the borrow checker" but "learning how to implement your intentions in some sort of actually safe code". And, if you watched both of these talks... this is the primary point defended since the beginning: the borrow checker "forces" you to rethink how to do things and implement them safely.

    • @Booone008
      @Booone008 4 years ago +11

      @@richardgomes5420 As Jon explained, the implementation of ECS that the borrow checker "forced" the original presenter to implement still doesn't solve the stale reference safety problem. She still had to tack the GenerationalIndex mechanism on top of it to actually ensure correctness ("the thing at this location/index is actually the component I am looking for now, not some replaced version of it").
      In this case, refactoring your system and only storing a (non-generational) EntityIndex instead of a memory pointer satisfies the borrow checker, giving you a false sense of "this is now safe", but it only solves some symptoms of the problem, not the problem itself -- the index can still refer to the wrong component.
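
The GenerationalIndex mechanism referred to above can be sketched roughly as follows (hypothetical names, not the presenter's actual code): each slot carries a generation that is bumped when the slot is reused, so a stale id fails the lookup instead of silently aliasing the slot's new occupant.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct GenIndex { index: usize, generation: u64 }

struct Slot<T> { generation: u64, value: Option<T> }

struct Arena<T> { slots: Vec<Slot<T>> }

impl<T> Arena<T> {
    fn new() -> Self { Arena { slots: Vec::new() } }

    fn insert(&mut self, value: T) -> GenIndex {
        // Reuse the first free slot, bumping its generation.
        for (i, slot) in self.slots.iter_mut().enumerate() {
            if slot.value.is_none() {
                slot.generation += 1;
                slot.value = Some(value);
                return GenIndex { index: i, generation: slot.generation };
            }
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        GenIndex { index: self.slots.len() - 1, generation: 0 }
    }

    fn remove(&mut self, id: GenIndex) {
        if let Some(slot) = self.slots.get_mut(id.index) {
            if slot.generation == id.generation {
                slot.value = None;
            }
        }
    }

    fn get(&self, id: GenIndex) -> Option<&T> {
        let slot = self.slots.get(id.index)?;
        // A stale id fails here instead of aliasing the new occupant.
        if slot.generation == id.generation { slot.value.as_ref() } else { None }
    }
}

fn main() {
    let mut arena = Arena::new();
    let old = arena.insert("goblin");
    arena.remove(old);
    let new_id = arena.insert("chest"); // reuses slot 0, generation bumped

    assert_eq!(old.index, new_id.index);           // same raw index...
    assert_eq!(arena.get(old), None);              // ...but the stale id misses
    assert_eq!(arena.get(new_id), Some(&"chest"));
    println!("stale id rejected, new id resolves");
}
```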

    • @berthold64
      @berthold64 4 years ago +12

      tldr rust is annoying and gets in your way lol

    • @keldwikchaldain9545
      @keldwikchaldain9545 4 years ago +21

      @@Booone008 There's a bit of a misunderstanding here (and in the video in general) of what "safe" means in the rust world. "safe" is a very specific and well-defined term, and in that definition, the non-generational index is "safe", despite being incorrect. "safe" in the rust world is very specific to disallowing code from having mutable pointer aliasing and use after free bugs (as well as a couple other things), which definitely doesn't include "is this thing semantically correct in my program".

  • @WIImotionmasher
    @WIImotionmasher 5 years ago +30

    This is the first time I've listened to Jonathan Blow talk about programming and actually understood what he was talking about.

  • @RumataEstor
    @RumataEstor 4 years ago +277

    I think Catherine's main point was that the borrow checker shows, right from the beginning, all the problems of an OO design that can be implemented in C++ without any complaint from the compiler but that cause various problems down the road. With C++, developers could only spot those problems by means of very careful, thoughtful implementation/design or serious debugging afterwards. With Rust's borrow checker, these problems are obvious compilation errors, which lets you try other designs with less effort without affecting correctness.
    And yes, the ECS solution she proposed mostly works around the borrow checker and replaces pointers with "manual index allocation in a Vec"; however, the borrow checker would still prevent consumers from keeping references temporarily obtained from systems for later usage.

  • @distrologic2925
    @distrologic2925 4 years ago +18

    The problem in the example is the indexing. What the Rust borrow checker validates is references, i.e. "pointers". The borrow checker guarantees that no memory is freed as long as there are still references to it, and it also guarantees that there are never multiple writing references, or both writing and reading references, to the same memory at the same time. There is no way to access invalid or corrupt memory using "safe" (standard) Rust. The problem with indexing is that the standard library implementations intentionally crash when directly indexing an array with an out-of-bounds index. So it is definitely still possible to try to access invalid memory, and Rust will let it slide and you will crash. However, those functions are always documented to panic under specific conditions, and for indexing there are alternatives that don't panic but return an Option type, which you have to unwrap manually at the call site. That is the point where the programmer has to decide how to handle the case where the requested memory is not available. By returning a type which reflects the possible unavailability of the requested memory, the programmer is forced to acknowledge this possible failure and must implement a solution, or explicitly ignore any errors and let the program crash.
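
The two indexing paths described above look like this in practice: `Vec`'s `[]` operator panics on an out-of-bounds index, while `get` returns an `Option` that the caller must handle or explicitly unwrap. A minimal sketch:

```rust
fn main() {
    let components = vec!["position", "velocity"];

    // Panicking path: `components[5]` would abort the program with an
    // index-out-of-bounds panic, as documented.

    // Non-panicking path: possible absence is part of the return type.
    match components.get(5) {
        Some(c) => println!("found component: {c}"),
        None => println!("stale index, entity is gone"),
    }

    // Or choose a fallback explicitly at the call site:
    let name = components.get(0).copied().unwrap_or("<missing>");
    println!("{name}");
}
```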

    • @brians7100
      @brians7100 3 months ago +1

      are you a rust programmer

    • @distrologic2925
      @distrologic2925 3 months ago

      @@brians7100 i tried so hard...

  • @philipkristoff
    @philipkristoff 3 years ago +110

    I really like Rust, and I have been making small games in it in my spare time. Jonathan is right: what is presented does not solve the problem, it solves the crash symptom. The real problem will be the same no matter the language: pointers, GC, borrow checker, or something new. Entities might die, and you have to handle that in a safe and reliable way.
    But this critique of the struct-of-Vecs does not disqualify Rust as a great language for game dev (or other things). You can still use Rust to solve the real problem, and Rust's borrow checker is going to help you in other ways (concurrency being one).
    One of the major advantages of Rust is memory safety, but a memory security flaw in a game is probably not nearly as critical as one in a database, web server, or operating system. So it comes down to personal preference. I personally like the strictness of the Rust compiler, but that is what I value; others might value the flexibility of a language like C or even Python. You do you.
    To sum up: Jonathan says "this is not a silver bullet" and he is right, but that doesn't mean that Rust is a bad language or tool for the job.

    • @Ian-eb2io
      @Ian-eb2io 1 year ago +2

      But I think Rust is flexible. Its key difference is that the compiler enforces all the things that they've been trying to cram into C++ over the last 15 years. The main problem with C++ is that it has resisted moving towards a design where the compiler can check a lot of failure modes for you. In both languages you always have to be thinking: does this thing still exist, what happens when I let another thing use it, and when will it stop existing? But the Rust compiler helps by doing a lot of those checks for you.

    • @georgeokello8620
      @georgeokello8620 6 months ago +2

      @@Ian-eb2io Flexibility and Rust don't go together, especially when flexibility is sacrificed on the altar of memory safety.
      Rust is a potentially good language mainly for domains where other languages have already tackled the use cases - where the rules for addressing resource problems are known and Rust can offer an optimized way to solve them.
      Rust is going to fail badly in areas where the domain problem is unknown and you have to experiment to solve it, only to find out that Rust cuts off some percentage of experiments to adhere to its memory-safety rules.

  • @1schwererziehbar1
    @1schwererziehbar1 2 years ago +25

    I love hearing ideas on technical stuff from someone who has already proven his expertise by making a full game. Thanks for the upload.

  • @TheSulross
    @TheSulross 1 year ago +19

    This reminds me of the original MacOS of 1984, which tried to run in 128K of memory. The heap memory manager allowed memory compaction, so instead of a direct pointer to a memory object, one got a pointer to a pointer (a handle). This made it possible for memory blocks to be moved during heap compaction, and also made it possible to deal with the case where the memory object no longer existed in the heap. This was good for managing read-only resources, which could always be reloaded from the executable file if they had been flushed.

    • @rogo7330
      @rogo7330 3 months ago +1

      Isn't this just how pointers work today in an OS on a CPU with an MMU? Your pointer is mapped to a page, and the page can be moved, or not even exist until you really need it?

  • @AndrewRogers
    @AndrewRogers 5 years ago +20

    I feel that your comment at roughly 39:00 is kind of what I experienced from learning Rust this summer. Because the C++ compiler would let me do something that the Rust borrow checker would not, I was forced to reevaluate past habits or patterns and it changed how I code C++ as well to be more mindful.

  • @BrandonCaptain
    @BrandonCaptain 4 years ago +6

    Let us also not forget that array[i] is just syntactic sugar for *(array + i) in C and C++, which is itself just pointer arithmetic. By using this syntax you may be enabling some compiler warnings about out-of-bounds accesses, but only if your compiler supports them and they're turned on (e.g., clang likely won't complain unless you're using clang -analyze).

    • @BrandonCaptain
      @BrandonCaptain 4 years ago +1

      Andrea Proone yes, that's better 😂

  • @BrandonCaptain
    @BrandonCaptain 4 years ago +50

    Hi, C++ guy here.
    Reference-counted smart pointers, like std::shared_ptr and its weak-pointer counterpart std::weak_ptr, don't have to inform all the other pointers that point to the resource when it is destructed. Destruction simply decrements the reference count, which is just a uint that each smart pointer accesses atomically by reference (i.e., thread-safe; they don't each have their own copy of the uint). So it's essentially a lazy check (which we love, since we only pay for the operation when we actually need it): weak_ptrs won't keep resources alive, and in order to even dereference them you must lock them. If the lock fails, it means the resource no longer exists (i.e., all the shared_ptrs that pointed to it have gone away, even if weak_ptrs still exist). The danger in all this then boils straight down to whether the programmer is checking for null after locking a weak_ptr, which is programming 101.

    • @highdownhybrid
      @highdownhybrid 4 years ago +7

      To shorten this a little: shared_ptr doesn't just count strong references, it also counts weak references. weak_ptrs keep the "shared_ptr control block" alive (beyond the life of the actual allocated object). This control block allows testing whether the object has expired.

    • @OMGclueless
      @OMGclueless 4 years ago +7

      @@highdownhybrid If you're creating and destroying a bunch of entities and every time you create a reference to one of them you allocate and keep around a shared_ptr control block that outlives the object that's still a lot of scattered garbage taxing your memory allocator which is exactly what's under contention in most game engines (on the CPU side, there are other things bottlenecking the GPU). It's about the same amount of overhead in bytes and memory accesses as an entity system and leaves a bunch of dangling data structures that get lazily deallocated later instead of one block of memory in the entity manager.

    • @zemlidrakona2915
      @zemlidrakona2915 3 years ago +1

      @@OMGclueless The problem is that std::shared_ptr is generically crappy for most complex stuff. You can do the same thing by putting the reference count in a base class, and you can include a weak count there too if you need it. Another advantage of this is that your smart pointer becomes normal sized again, unlike std::shared_ptr which is double sized. The only downside is you may keep some unused memory around a bit longer, but it will eventually get collected when all the weak pointers go.

    • @OMGclueless
      @OMGclueless 3 years ago

      @@zemlidrakona2915 How does the weak reference count help there? Is the idea that you can run the destructor/free other held resources when the strong count goes to zero and only reclaim the memory later when the weak count goes to zero? The main benefit of weak_ptr or entity references in an ECS is that you can reclaim the memory used by an object without waiting for everything that references it to be gone.

    • @zemlidrakona2915
      @zemlidrakona2915 3 years ago +1

      @@OMGclueless Yes, basically. I would say the benefit of reclaiming what is often a small piece of memory a bit sooner rather than later is of dubious value. You aren't even reclaiming the whole thing, since you still have the control block. In a pointer-heavy environment you will also use more memory anyway, since every smart pointer is two 64-bit pointers (on 64-bit architectures) rather than one, plus any extra memory caused by the double allocation. Finally, if the control block is far from the object you have the possibility of an extra cache miss; and if you used make_shared to put the block and the object together, then you didn't need the double-sized pointer anyway, so it's a waste. Fortunately it isn't too hard to write a better system in C++ yourself.

  • @FutureShock9
    @FutureShock9 5 years ago +14

    Not used to following Jon all the way through the talk, but I think I pretty much got it this time.

  • @DeusExAstra
    @DeusExAstra 4 years ago +34

    On the subject of weak pointers, what you describe isn't how C++ weak pointers work. In fact, C++ weak_ptr solves this exact problem you're describing... which is why that's the preferred solution for game objects to point to other objects. C++ shared_ptr/weak_ptr work with a shared block of memory that they point to that contains 2 reference counts. One tells you how many strong references there are (those that keep the object alive) and the other tells you how many weak references. You don't need any large lists of pointers to objects pointing at your target object; that's totally unneeded. It all works because each smart pointer just goes to the single control object to update/check the references in an atomic way. Also, there's no ambiguity about when to check for null... a weak_ptr must be locked before using it. The process of locking it gives you a shared_ptr, which you then check for null. If it's not null, it will remain valid until it's destroyed. The whole thing works really well, and all you have to do is make sure you don't store shared_ptrs/unique_ptrs unless you own that object and want to keep it alive. Anyone who is just observing an object should store a weak_ptr.
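
For comparison, Rust's standard `Rc`/`Weak` pair mirrors the `shared_ptr`/`weak_ptr` flow described above: `Weak::upgrade` is the analogue of locking a `weak_ptr`, and its `Option` return forces the "check for null" at the type level. A minimal sketch:

```rust
use std::rc::{Rc, Weak};

fn main() {
    let owner: Rc<String> = Rc::new("entity".to_string());
    let observer: Weak<String> = Rc::downgrade(&owner);

    // While a strong reference lives, upgrading succeeds.
    assert!(observer.upgrade().is_some());

    drop(owner); // last strong reference gone; the value is destroyed

    // The weak reference observes the death instead of dangling.
    assert!(observer.upgrade().is_none());
    println!("weak reference correctly reports the object as gone");
}
```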

    • @nickwilson3499
      @nickwilson3499 1 year ago +3

      The point of having a list of pointers to the objects that hold a reference is so you can remove the reference from each of them. The entity needs to be destroyed regardless of whether there are still strong references to it. I think you're solving the wrong problem.

    • @holyblackcat7676
      @holyblackcat7676 3 months ago +1

      The only problem with using pointers like this is that it's hard to copy/clone the game state, while it would be easy with IDs.
      @nickwilson3499 All pointers between objects would be weak, the only strong pointer is in the entity list. When locking weak pointers, the resulting strong pointers are supposed to not outlive the function that uses them.

  • @camthesaxman3387
    @camthesaxman3387 2 years ago +7

    The same problem you described manifests itself in Super Mario 64 (as the cloning glitch) and Ocarina of Time (as stale reference manipulation) because both games use raw pointers to entities that get deleted, but references to them aren't updated in some cases.

  • @MrD0r1an
    @MrD0r1an 3 years ago +13

    The main problem the borrow checker solves is memory safety (preventing buffer overflows, use-after-free, data races, etc.). These types of bugs are the most dangerous because they often allow arbitrary code execution (this has happened a lot in practice). This is especially important on the web, which is the use case Rust was designed for. Of course Rust will not save you from all bugs, but I think there is value in memory safety, which the borrow checker can guarantee - unless you use unsafe, which is not always avoidable (e.g., when interfacing with hardware), but at least unsafe blocks limit the potential sources of memory issues.

    • @exapsy
      @exapsy 1 year ago +4

      Rust was definitely NOT made for the web.
      Web = async + multithreading. Because it has clients, and clients = a pool of things you have to serve = a server = you need as much parallel and asynchronous code as possible to serve everybody.
      Rust hasn't even figured out how to do async yet, 10 years on. It literally relies on libraries to implement the async runtime (tokio, async-std), and there have even been controversies around those libraries. The last time I tried to use Tonic + Rocket together asynchronously, I ran into tons of async-multithreading issues: the code either never compiled because of async issues, or when it did compile it had problems with a thread staying open so you had to kill the program twice, etc. You spend time on meaningless things, which means much more money spent when you try async + multithreaded code - something you don't deal with in a language such as Golang.
      Golang is made and designed for the web. Rust was designed to be a direct competitor to C++ - more a low-level systems programming language than a web development language.

    • @genericuser1546
      @genericuser1546 1 year ago

      @@exapsy Dude...go read up on who made rust, and why. Rust was absolutely made for the web, just not the part of the web you are thinking about :p

    • @exapsy
      @exapsy 1 year ago +2

      @@genericuser1546 When you say "Rust was made for the web" you mean "web development". And web development requires a lot of async and threads.
      And threads + async are the ultimate weak point of Rust. It's absolutely ridiculous that Rust has reached a point where you need two third-party runtime libraries to support a language feature that should have existed since its design: asynchronous code, and multithreading with async code.
      Two parts that are very crucial to the web.

    • @genericuser1546
      @genericuser1546 1 year ago

      @@exapsy I agree on Rust + async. It has so far sucked any time I touched it, and it's the only part of Rust where I encountered a footgun. But I don't think it's a component that's "very crucial to the web."
      > threads + async are the ultimate weak point of Rust
      I've never had a problem with Rust + threads, so I don't see this; care to elaborate?

    • @genericuser1546
      @genericuser1546 1 year ago +1

      @@exapsy Ah, scrap everything I said above. We aren't on the same page lol. No when I said "Rust was made for the web" I was playing with the words (I obviously don't mean web development) :p
      Again just go read up on who created rust, and why. You'll get what I mean.

  • @keldwikchaldain9545
    @keldwikchaldain9545 4 years ago +142

    There's a lot of great points in this video that I really like, but I do have a gripe that there's some misunderstanding of what the word "safe" is intended to mean. "safe" in rust is a very specific and well-defined term that is exclusively intended to refer to rust without unsafe and the things which that prevents you from doing, namely mutable pointer aliasing and use-after-free. Rust code may be "safe" without being correct.

    • @Kruglord
      @Kruglord 1 year ago +25

      Yeah, good point. 'Safe' really just means 'not undefined behaviour,' it can still be wrong in a logical type of way. And to Jon's credit, he does make that point at around the 40:00 mark or so, when he essentially says "it's a Vec of data, so even if what's in there is invalid, at least it _was_ a correct form of that data" which is exactly the point. It might be 'wrong' but it's not 'undefined'

    • @bobweiram6321
      @bobweiram6321 1 year ago

      Ah hah! So the Rustafarians are a bunch of conniving scum! Why are you redefining "safe" instead of picking a term which better fits your definition, or just create a new one? It's so you can trick the unsuspecting victim into accepting Rust as safe under the common definition, but when the victim discovers Rust isn't safe, you reveal your true definition. You rustards need a garbage collector because your bullshit is piling up!

    • @materialistpigeon16
      @materialistpigeon16 10 months ago +1

      @@bobweiram6321 what do you think the common definition of safe is and how does rust's definition deviate

    • @doyouwantsli9680
      @doyouwantsli9680 9 months ago

      All cult like programming movements do this. Change the meaning of very common English words, so 90% of people think you mean actual safety not specifically rust-safe. Same thing with "clean code".

    • @sirhenrystalwart8303
      @sirhenrystalwart8303 2 months ago +4

      This is so tedious. Rust should have chosen a different word instead of coopting a word with an established meaning, which makes these discussions impossible.

  • @user-hj6db7fp4i
    @user-hj6db7fp4i 5 years ago +152

    I disagree when you say she didn't realize she was re-implementing an allocator by using Vecs to store entities and indexes to refer to them. She specifically refers to the case of accessing the index of an entity that's been deleted as being very similar to a use-after-free bug, and to indexes as being pseudo-pointers (ua-cam.com/video/aKLntZcp27M/v-deo.html), and later on calls a new Vec implementation (for generational indexes) an allocator (ua-cam.com/video/aKLntZcp27M/v-deo.html).
    I think how she uses the term "safe" may have misled you. Saying something is "safe" has a very specific meaning in Rust: no race-conditions, no double-frees, and no use-after-frees; most else is still fair game. That's why when she says it's "safe" but isn't great, she means it compiles (doesn't violate ownership, lifetimes, etc.), but that doesn't mean it's good (the code may still have serious logical flaws). I think it's really interesting to consider this almost a step backwards (C++ programs very well may segfault in debug builds due to these logical errors, whereas Rust won't), it seems like all higher-level languages would suffer from that too.
    At 39:00, when you say the borrow checker may have helped by forcing the developer to explore other options, I think that's exactly what Catherine is referring to when she says the borrow checker helps. It's turned a memory access error into a smaller possible set of logic errors. With Rust lifetimes, it wouldn't be possible to double-free or use-after-free an entity, only a pseudo-use-after-free bug is possible (using the index after it's been freed and been re-allocated). The way Vecs are implemented in Rust, it isn't possible for an index to point to a deallocated object (either you have a Vec of optionals, or when you remove an item the Vec shifts later items to the left). I think this naturally forces you to consider this use-after-free logical error, which would naturally lead you towards the generational solution (or some other solution, or at least into a wall) when developing.
    One last nitpick, I dislike how you're pulling apart code that in the context of the presentation is shown as code with problems in it. If you're going to do a rant on code I think you should at least wait till she shows the best examples, so you don't end up fighting a straw-man.
    Enjoyed the discussion though overall, it's really educational engaging with C++ programmers as someone who's only written lower-level code in Rust. Hopefully others more knowledgeable about Rust can chime in, I'm definitely no Rust expert
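
The "pseudo-use-after-free" described above is easy to reproduce: with a plain (non-generational) index into a `Vec`, a stale index can pass every bounds check and still name the wrong entity. A minimal sketch:

```rust
fn main() {
    let mut entities = vec!["player", "goblin", "chest"];
    let goblin_idx = 1; // index saved somewhere else in the program

    // The goblin dies; Vec::remove shifts later items to the left.
    entities.remove(goblin_idx);

    // The saved index still passes bounds checks, but now names the
    // chest: a logic bug, not a memory-safety bug.
    println!("{}", entities[goblin_idx]); // prints "chest"
}
```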

  • @tubebrocoli
    @tubebrocoli 5 years ago +1

    Maybe the best solution in this case is some form of shared Option that records its clearing history in debug builds?
    Because we have two problems to address here: we want the program to be robust and try to recover in production, but in development we want to catch the incoherence as early as possible and know what caused it, in order to debug easily. Just knowing that the memory was cleared is not enough data to address the issue directly.

  • @borkborkas8422
    @borkborkas8422 5 years ago +124

    After looking up a component in, let's say, the Vec, the borrow checker makes it impossible for you to accidentally keep or transfer the address of the component. This is enforced at compile time, so there's no run-time cost, and you don't need any explicit company policy on the matter.
    Without this check you might accidentally keep a pointer to a component that gets destroyed somewhere along the way.
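
A minimal sketch of the compile-time check being described, with illustrative names (the rejected line is shown commented out): holding a reference into the `Vec` while mutating the `Vec` is a borrow-check error, because a push may reallocate and invalidate the reference.

```rust
struct Component { hp: i32 }

fn main() {
    let mut components = vec![Component { hp: 10 }];

    let first = &components[0]; // shared borrow into the Vec

    // Mutating the Vec while `first` is alive is rejected at compile
    // time (error[E0502]: cannot borrow `components` as mutable):
    // components.push(Component { hp: 5 });
    println!("first component hp: {}", first.hp);

    // Once the borrow ends, mutation is allowed again.
    components.push(Component { hp: 5 });
    println!("component count: {}", components.len());
}
```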

    • @patrolin
      @patrolin 1 year ago +8

      but the Vec does nothing to prevent you from holding onto an old entityId,
      you still need another system (the generational index) to solve the actual problem...

    • @ayoubbelatrous9914
      @ayoubbelatrous9914 1 year ago

      Just make the checks debug-only (or release-only) and strip them away in distribution, or use a sanitizer. The checking is not the problem here, because we are dealing with runtime problems, and at runtime there are side effects from IO, meaning the compile-time checks will only take you so far: ensuring memory is always valid and forcing you to provide a dummy substitute for a component. So in my opinion, borrow checking for a game-engine ECS just makes your life harder for no reason. It's still good for other stuff, don't get me wrong.

    • @zxcaaq
      @zxcaaq 1 year ago

      This is an invalid solution because you can still hang on to it. You're solving the wrong problem.

    • @marcossidoruk8033
      @marcossidoruk8033 11 months ago +2

      Did you watch the video? Getting strong vibes you didn't.

    • @taragnor
      @taragnor 6 months ago +3

      @@patrolin Holding onto an old entityId isn't problematic as long as the entityIds are unique. Because ultimately you call your getEntitybyId function and it returns a failure when you try to use a bad entityId. Ideally, for good programming practice, you'd stop using the dead id, but even if you didn't, at worst you get a wasted attempt to access a bad entity each frame, which is more of a non-critical optimization issue. The real problem comes in with non-unique ids - if you do something like entityId = vector index, then when you put something new into that index you could end up getting the wrong entity, because entities can share the same id.
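
The unique-id scheme this reply describes can be sketched as follows (hypothetical names such as `EntityManager` are mine, not the commenter's): ids come from a counter that never repeats, so a stale id can miss a lookup but can never resolve to a different entity.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct EntityId(u64);

struct EntityManager {
    next_id: u64,
    entities: HashMap<EntityId, String>,
}

impl EntityManager {
    fn new() -> Self {
        EntityManager { next_id: 0, entities: HashMap::new() }
    }

    fn spawn(&mut self, name: &str) -> EntityId {
        let id = EntityId(self.next_id);
        self.next_id += 1; // never reused, even after a despawn
        self.entities.insert(id, name.to_string());
        id
    }

    fn despawn(&mut self, id: EntityId) {
        self.entities.remove(&id);
    }

    fn get(&self, id: EntityId) -> Option<&String> {
        self.entities.get(&id)
    }
}

fn main() {
    let mut manager = EntityManager::new();
    let goblin = manager.spawn("goblin");
    manager.despawn(goblin);
    let chest = manager.spawn("chest");

    assert_ne!(goblin, chest);             // dead ids are never recycled
    assert_eq!(manager.get(goblin), None); // stale lookup fails loudly
    println!("{:?}", manager.get(chest));
}
```

The trade-off versus a generational index is a hash lookup per access instead of a direct array index.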

  • @dennisferron8847
    @dennisferron8847 5 years ago +21

    The static type system stops at ensuring that the same type is accessed; it can't tell whether an id actually refers to the entity you meant. To catch that you'd need a dependent type system. I've got a game engine design which addresses the root problem. In my engine I use something I call an Interaction Model. The Interaction is an object whose lifetime is the lesser of all the things it references. It receives its dependencies in its constructor and cleans up in its destructor, but its lifetime is controlled by the system. So in your example of one entity following the other, neither entity A nor entity B would manage a pointer to the other. Instead, there would be a Follow(A,B) interaction which manages the relationship. The engine would delete the Follow object if either A or B is to be destructed, but destroys the Follow interaction first so that A and B are still valid in Follow's destructor (so it can access them for cleanup).
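    A rough Rust sketch of this Interaction Model idea, under the assumption that interactions live in a central list owned by the engine. `World`, `Follow`, and `EntityId` are invented names: the point is only the ordering, where the relationship is destroyed before either endpoint, so its cleanup can still see both.

    ```rust
    // Invented names throughout; a sketch of the Interaction Model,
    // not a real engine.
    #[derive(Clone, Copy, PartialEq, Debug)]
    pub struct EntityId(pub u32);

    pub struct Follow {
        pub follower: EntityId,
        pub target: EntityId,
    }

    pub struct World {
        pub entities: Vec<EntityId>,
        pub follows: Vec<Follow>,
    }

    impl World {
        pub fn destroy(&mut self, id: EntityId) {
            // Interactions go first, while both endpoints are still
            // alive, so a Follow's cleanup could still read A and B here.
            self.follows.retain(|f| f.follower != id && f.target != id);
            // Only then is the entity itself removed.
            self.entities.retain(|e| *e != id);
        }
    }
    ```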

  • @spiveeforever7093
    @spiveeforever7093 5 years ago +2

    One use case I have found for the borrow checker is equivalent to what is already possible in Jai: linting references lasting between frames/equivalent. Lifetimes happen to be a useful way of expressing "This reference must be destroyed before the end of this loop iteration"
    I also find this to be the major limitation of lifetimes however.... since most objects that require 'outlives' relationships, have lifetimes that don't map directly to the structure of your program.
    One thing that I would love to figure out in the realm of static analysis/type checking is a realistically useful way of enforcing "this entity must outlive this other entity", when those entities both live over a variable number of frames.
    Of course this can be enforced at runtime by simply making the parent entity destroy the child entity during its own destruction code, which will often be a useful (and opt-in!) way of detecting and resolving bugs, but there seems to be an open question in terms of useful static models that could make these kinds of patterns less bug prone.
    Ultimately modelling the arbitrary-ness of mutable state is just hard and is basically half of the reason programming is a paid profession, but it would be nice to come up with some kind of opt-in static analysis other than "pure functional programming only"
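    The runtime fallback mentioned above, where the parent destroys the child during its own destruction code, might look like the following sketch (all names illustrative):

    ```rust
    // Runtime enforcement of "child must not outlive parent":
    // destroying an entity recursively destroys its children first.
    use std::collections::HashMap;

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub struct Id(pub u32);

    #[derive(Default)]
    pub struct Scene {
        pub alive: Vec<Id>,
        pub children: HashMap<Id, Vec<Id>>,
    }

    impl Scene {
        pub fn destroy(&mut self, id: Id) {
            // Destroy the subtree first so no child survives its parent.
            if let Some(kids) = self.children.remove(&id) {
                for kid in kids {
                    self.destroy(kid);
                }
            }
            self.alive.retain(|e| *e != id);
        }
    }
    ```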

  • @ChaotikmindSrc
    @ChaotikmindSrc 3 years ago +5

    "crashes are not as bad as you think",
    Yes, I often try to design my code so that it clearly crashes if something is wrong; that saves me a lot of debugging time.

    • @ishdx9374
      @ishdx9374 3 years ago

      Crashes are good as long as you personally coded them in (for example using unwrap)

  • @_xeere
    @_xeere 2 years ago +8

    You can actually use special datatypes to re-enable the borrow checker on the IDs, you can turn them into an opaque type with a lifetime then it will be checked.
    It would also be impossible to store one of these references at that point so a bit useless, you have to disable the checking if you want multiple mutable references to anything.
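    A sketch of this opaque-id-with-lifetime trick, with invented names: the `PhantomData` lifetime costs nothing at runtime, but it ties the id to a borrow of the arena, so the borrow checker rejects ids that outlive it (and, as the comment notes, also makes long-term storage of such ids impossible).

    ```rust
    // Sketch only; Arena and Id are invented names. The PhantomData
    // lifetime is erased at runtime: it only lets the compiler tie an
    // Id to a live borrow of the arena.
    use std::marker::PhantomData;

    pub struct Arena<T> {
        items: Vec<T>,
    }

    #[derive(Clone, Copy)]
    pub struct Id<'a> {
        index: usize,
        _arena: PhantomData<&'a ()>,
    }

    impl<T> Arena<T> {
        pub fn new() -> Self {
            Arena { items: Vec::new() }
        }

        pub fn insert(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1
        }

        // Ids minted here cannot outlive the borrow of `self`, so the
        // borrow checker rejects any attempt to stash one long-term.
        pub fn id(&self, index: usize) -> Id<'_> {
            Id { index, _arena: PhantomData }
        }

        pub fn get<'a>(&'a self, id: Id<'a>) -> Option<&'a T> {
            self.items.get(id.index)
        }
    }
    ```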

  • @teslainvestah5003
    @teslainvestah5003 1 year ago +4

    for the purpose of learning where your problems are in development, crashes are very good, but nothing is better than failing to compile.

    • @jebbi2570
      @jebbi2570 1 year ago +1

      Use unsafe and bam you can compile.

    • @teslainvestah5003
      @teslainvestah5003 1 year ago +3

      @@jebbi2570 and _then_ you have an antipattern so simple you can grep for it.

  • @timothyvandyke9511
    @timothyvandyke9511 4 years ago +3

    I think you're probably right that she just turned the borrow checker off manually as you say... BUT I will say that it's causing me to design my program differently. My initial plan was similar to your "I'll just give references to things and it will all work out" And the second I started down that path my program no longer compiled which made me start exploring and googling and now I'm here after quite the tangent. I've now learned that ECS exists and that it's *probably* the best Rusty way to do game development. Whether or not it is the best way to do game development itself is a different beast. That said, what you're talking about at 39:09 actually happened to me. It made me explore. I've now learned (hopefully) better ways, and I can move forward once I make Rust happy. I'm sure there's better game dev languages, but I chose this one because I found the language interesting. I've had several frustrations with it. It's difficult to make seemingly insignificant changes for debugging. That said, I definitely feel like my code is stronger because of the compiler. Maybe it is a feeling :shrug: Guess we'll find out :P

  • @user-hk3ej4hk7m
    @user-hk3ej4hk7m 4 years ago +2

    I've seen a couple of drawing and engineering design programs use an "entity reference" model, where if you need to use an entity, you instantiate a shared pointer from it every time you need to use it (and they make it quite clear you should not save a copy for yourself), similarly to a weak pointer. To me it's basically exactly the same as using an entity index, but the part of getting the entity based on a unique id is abstracted behind the reference class's methods.
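    That pattern can be sketched with std's `Weak`, assuming the convention that callers re-resolve on every use and never keep the strong pointer (`Entity` and `damage` are made-up names):

    ```rust
    // Weak-reference style entity access: upgrade on every use,
    // handle the entity being gone. Names are illustrative.
    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    pub struct Entity {
        pub hp: i32,
    }

    pub type EntityRef = Weak<RefCell<Entity>>;

    // Each use re-resolves the reference; a dead entity yields a
    // failed upgrade instead of a dangling pointer.
    pub fn damage(target: &EntityRef, amount: i32) -> bool {
        match target.upgrade() {
            Some(e) => {
                e.borrow_mut().hp -= amount;
                true
            }
            None => false, // entity was destroyed; caller drops the ref
        }
    }
    ```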

  • @anotherelvis
    @anotherelvis 5 years ago +6

    At 33:00 he sums up the argument that the borrow checker does not provide generational indexes out of the box. If you store your entities in a fixed-size Vec and use integers as indexes, then you will eventually need to reuse slots to save memory, and that requires you to add a generation-id to distinguish the old inhabitant of a slot from the new one.
    AFAICS the project named generational-arena on github provides a nice container type with generational indexes. I haven't tried it but it seems to work. But Jonathan's point is that you could have written the same code in C++.
    The user comment at 49:10 sums up the benefits of Rust's borrow checker: you get some aliasing guarantees and some promises regarding parallel code. Jonathan is probably right about productivity. If you need to build your game engine from scratch, then Rust will provide some friction and force some redesigns before you get it right; but if you use a premade container type such as generational-arena, or an ECS library such as specs, or if you copy the design from Catherine's video, then you will experience less friction. So in short, Rust becomes more productive when you use existing libraries that enforce nice programming patterns.

    • @AlFredo-sx2yy
      @AlFredo-sx2yy 8 months ago

      "So in short rust becomes more productive when you use existing libraries that enforce nice programming patterns."
      So... just like any other programming language?
      So in short, [insert programming language here] becomes more productive when you use existing libraries that enforce nice programming patterns.
      That's not exactly an argument for or against Rust tbh. Because C is way older and has far more mature premade libraries and support, but that does not say anything about the language itself.

  • @masondeross
    @masondeross 11 months ago +3

    This point of contention boils down to a simple misunderstanding: safety and correctness are not the same thing in formal computer science, but engineers often use the terms interchangeably. When Catherine says safety, she means it in the CS theory sense; a program can be 100% safe and 100% incorrect, or 100% unsafe and 100% correct, or any other combination. Accessing memory holding incorrect information is an incorrect outcome, and thus the program has a logical flaw; but if it is done by accessing properly owned memory in a way that accounts for errors related to those system calls, irrespective of the validity of the data held, then it can still be safe. Safety is not good enough on its own to say a program is good, and neither is correctness; sometimes you do need an unsafe program to correctly solve a problem with the options you have, and sometimes an unsafe program always behaves correctly.

    • @anon_y_mousse
      @anon_y_mousse 4 months ago

      Maybe my definition of correct is different from the egghead's definition, but I consider a program correct if it runs without errors. That may be an impossible goal as far as future-proofing your program is concerned, but if and until it's proven incorrect then it is correct, at least as far as I'm concerned. One could probably argue over whether it was ever correct if someone figures out how to break it in the future, but as long as that future point is 20 years or more, then it'll be fine.

  • @Olodus
    @Olodus 5 years ago +47

    In software engineering in general I am really glad something like Rust is gaining traction. Better static analysis is a good way to ensure safety of our programs while not making the performance of the product worse. Stopping data races and use-after-free things at compile time is huge for security and I would certainly think the industry should take the productivity loss that might give them, since they should have checked the security of their products better in the first place.
    In games though, security isn't that big of a deal and productivity is MUCH more important. So I agree a borrow checker and Rust in general might not be perfect for games (it isn't the worst since it still runs fast and maybe it could give some more optimizations but it isn't that much better).
    I also agree that this specific example is a stretch to call something the borrow checker "pushed you towards" or "helped you with".

    • @mavhunter8753
      @mavhunter8753 1 year ago +10

      Compiling a large Rust program is like watching a glacier melt… Until that improves Rust is not worth it.

    • @lucasjames8281
      @lucasjames8281 5 months ago

      @@mavhunter8753 The first time, yes, but subsequent compiles are not too slow. I agree the compiler is too slow at the moment, though.

  • @Abubalay
    @Abubalay 5 years ago +54

    The borrow checker is not completely gone just because you're using (generational) indices. It still provides two major benefits:
    1) When you temporarily convert an index to a pointer, it will track both that pointer and any pointers you store behind it. The `Vec`s themselves will never go away, but they may be resized (if you actually use `std::vector`/`Vec`, which you may very well do at least early on), and you may unintentionally store a pointer in them to something that will go away.
    2) At a larger scale, when you're figuring out how to actually process the `Vec`s' contents, the borrow checker will still prevent data races (due to parallelism) and iterator invalidation (due to inserting/removing elements at the wrong time). This is great, and is heavily taken advantage of by the Specs project she mentions. You also see Unity's new ECS caring a lot about this, but they handle it all with debug-mode runtime checks.

    • @jblow888
      @jblow888 5 years ago +21

      Sure, that's all fine. But if I am allowed for a minute to rephrase the thesis of the original talk as "I found a way to get the borrow checker to be less intrusive into what I am doing, so it wouldn't bother me any more, and that made me happy", how do we know that won't also happen in many of these other cases as people figure out how to work with the language and styles evolve? And in the cases where the borrow checker remains useful ... which of those are handled approximately as well by alternative features that are simpler and less-intrusive (for example, a simple facility for runtime checking to guard variables under mutexes, etc).
      (Also I had a rant on the "data races due to parallelism" in the Q&A to this rant that I cut for the YouTube video, just because it was off-topic and even less coherent, but in general I feel that the whole "prevent data races" rhetoric is an overclaim that it would be a good idea for the Rust community to become more precise about. Maybe I need to give a more-coherent version of that rant another time).

    • @Abubalay
      @Abubalay 5 years ago +28

      Well, all I can say there is that there is an increasing amount of software written in Rust where people seem to be handling the borrow checker just fine. This case is the exception that proves the rule, so to speak- Rust handles most software fine, and this case stands out as being particularly hard.
      I'd also note that Rust has always been designed around punching controlled holes in the core, language-level borrow checker in order to extend it. The compiler has no particular understanding of `Vec`, `Rc`, etc. There's no reason to view these "generational indices into an array" as anything fundamentally different.
      ("Data races" *is* the more-precise version- we're careful not to say that it handles general race conditions.)

    • @jimhewes7507
      @jimhewes7507 3 years ago +2

      In that Catherine West video and blog she seemed to use a caricature of C++ in some ways. Maybe that was to exaggerate the point, I don't know. I wouldn't use C++ that way. I get that inheritance hierarchies can lack flexibility and also that games need to arrange data to be cache-friendly. But I don't have need for shared pointers in a single threaded application for example. That's a beginner error. You can handle ownership with unique_ptr and then use raw pointers as temporary "borrowed" pointers. Since using smart pointers, I've never have memory leak problems. About general statement that pointers are bad so you should use indexes---well, if the vector is known to have static lifetime and its size is never expected to change, then a pointer to an element is no different than an index anyway.
      You're saying that an element of a vector might be, or may contain, a pointer to something that gets deleted. But the way to do this in C++ is that the pointer contained in the vector is a unique_ptr and is the owner. To temporarily reference the thing that unique_ptr points to you can get a raw pointer from it. But _you should never assign a raw pointer to anything_.
      When defending this vector-as-memory-allocator solution, I've heard it said that the borrow checker solves memory problems, not logic problems. But isn't memory allocation a logical problem as well? Your logic is wrong because you forgot to free memory when you were supposed to. Your logic is wrong.
      Also in West's C++ design, she notes that object orientation leads her to require lots of accessors. But it's been my experience that when you have to start adding too many accessors to a class that's a signal you should re-think your design. Something is not as modular as it could be.

    • @phillipvance864
      @phillipvance864 2 years ago

      @@jblow888 Hi Jon, or anyone that reads this. Is there a link to a stream/another video containing the rant on "data races due to parallelism"? I'd also like to see that if it exists

    • @sirhenrystalwart8303
      @sirhenrystalwart8303 3 months ago

      @@jimhewes7507 I agree. Most of the criticism of c++ I see from the rust community seems like all they have seen and written was c++98. There are things I wish c++ would lock down more tightly (e.g., returning references to a temporary should be illegal, enums should not be ints), but aggressive compiler warnings and "modern" c++ gets you 95% of the way there.

  • @android22g26
    @android22g26 4 years ago +4

    In game programming, it's always tempting (and frequently worthwhile) to subvert any type of standard memory management. This includes garbage collection and even malloc/free. You want to avoid memory management's CPU overhead, GC pauses, heap fragmentation, and nondeterministic runtime memory requirements. The last of these is particularly important on consoles. So we have memory pools/arenas, which as you note, is what is implemented here with Vecs. An added benefit is that, when designed right, pools are super-efficient at deallocation. Rather than calling free() 1000 times, you call free() once on the pool of 1000 entities. Or if you're really crazy, you allocate the pool on the stack using alloca(), and deallocation is a closing brace.
    As you mentioned, memory management is somewhat orthogonal to the issue of dealing with freed entities which are still referenced by other entities. The best solution I've encountered, which comes with a clear performance tradeoff, is to not have entities refer to each other directly. You have a graph of entity nodes and relationship edges. Store the relationships in hash tables. When you destroy an object, remove all of its relationships. Of course this is more expensive than direct pointers or index offsets, but it removes an important set of errors. I worked on a couple of popular PC games in the early 2000s which successfully used this approach.
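    The relationship-graph approach from the second paragraph can be sketched as a toy in Rust (`Relations` and `Id` are invented names): edges live in a table, and destroying an entity removes every edge touching it, so no stale cross-reference can survive.

    ```rust
    // Toy relationship graph: entities never point at each other;
    // edges live in a set and are all removed when either endpoint dies.
    use std::collections::HashSet;

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub struct Id(pub u32);

    #[derive(Default)]
    pub struct Relations {
        edges: HashSet<(Id, Id)>, // (source, target) pairs, e.g. "follows"
    }

    impl Relations {
        pub fn link(&mut self, a: Id, b: Id) {
            self.edges.insert((a, b));
        }

        pub fn is_linked(&self, a: Id, b: Id) -> bool {
            self.edges.contains(&(a, b))
        }

        // Destroying an entity removes every edge touching it.
        pub fn on_destroy(&mut self, id: Id) {
            self.edges.retain(|&(a, b)| a != id && b != id);
        }
    }
    ```

    As the comment says, this trades per-lookup cost for the elimination of a whole class of dangling-reference errors.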

    • @jimhewes7507
      @jimhewes7507 3 years ago +1

      Yes. I've done a lot of programming in embedded systems (not games) and I _never_ want to use a generalized type of dynamic memory allocation. Either memory is statically allocated, or you use a static vector of entities of the same type, as you mentioned. The reason of course is: what would you do if memory allocation failed? Also, in a system that has a single purpose in life, you usually know beforehand what the maximum memory requirements are, so you don't need dynamic memory allocation.
      In reading some of the cheerleading for Rust and its borrow checker, it seems that the problems they claim to solve in C++, for example, could have been solved by just using C++ correctly or using a known design. As you also mention, if you have a problem with entities referring to each other, it can be solved in different ways. You can use the observer pattern or some other kind of message passing, or you can use code external to the two entities to represent their relationship to each other. I don't mean to bash Rust, because I like it just fine. But it seems people advocating for it over C++, for example, aren't using C++ correctly. Too much talk of shared/weak pointers where unique_ptr should really be used. And so on.

    • @swapode
      @swapode 3 years ago

      ​@@jimhewes7507 It seems you're pretty much describing the fundamental crux of C++. Of course you can do everything correctly, but it's incredibly easy to make mistakes, not because the solution itself is harder but because there's a lot of dissonance within the language and everything around it.
      After using it for a while now, I love Rust. But its strengths for me are much more subtle and harder to put into words than the big marketing points; all the parts seem to just click together in the right way. Many people criticize the friction Rust introduces, but I've come to argue that Rust simply tends to move friction upfront instead of letting you push it to the back, where it can be much more problematic.

  • @mybigbeak
    @mybigbeak 5 years ago +21

    46:25 Make a component container that acts like a Vec, but store (id, gen, ) entries, replace get[id] with get[id, gen], and return an Option. Super safe, and easily doable generically so all "vecs" act consistently.

  •  5 years ago +1

    On the idea that ids share a parallel with references without the borrow checker.
    The main point of the borrow checker is to avoid race conditions in multithreaded environments. Having an id to an object in multiple places does not trigger the borrow checker, and by all means, we need some way to refer to bound data. Actual workers, the threads, need a method of obtaining an actual reference to the element. This implies that threads are either spawned with a reference to the whole ECS system, or that the system itself spawns threads and attaches the dependencies to them in some way (maybe by state decomposition, see Raph Levien's talk on xi-win). In either implementation, the borrow checker steps in and guarantees that there will be no race conditions on the data.
    The borrow checker is often not useful, since it makes pointer references nearly impossible. Alex Stepanov responded to a comment that iterators should be removed because they are unsafe with the statement that you can write unsafe code with just the division operator, and that doesn't mean we need to remove division from the language; we need to educate programmers on how to use it. Still, I love Rust and it made me a better programmer (educated, in some way).

  • @rubenpartono
    @rubenpartono 3 years ago +47

    It's amazing how well Jon articulates his thoughts on the fly. If I had to come up with opinions and guesses about such complex and subtle issues on the spot, I'd just be a babbling mess 😂

    • @Ciph3rzer0
      @Ciph3rzer0 3 years ago +12

      He has notes, he said

    • @YoloMonstaaa
      @YoloMonstaaa 3 years ago +11

      he also talks in slo mo

    • @franciscofarias6385
      @franciscofarias6385 2 years ago +6

      It's because these are the kind of things he's been thinking about for decades. It's just a product of experience.

    • @mavhunter8753
      @mavhunter8753 1 year ago

      Me too.

    • @Ian-eb2io
      @Ian-eb2io 1 year ago

      @@franciscofarias6385 No, it is because as he mentions he has notes.

  • @suyjuris
    @suyjuris 5 years ago +13

    Very interesting talk! The way I like to see things, a program's execution is in one of three states: either it is correct, or the execution is wrong but that fact is not observable, or it is observably wrong. The program always starts in the first area; then a bug may cause it to move into the second, causing further misbehaviours, eventually leading to the third state, e.g. in the form of a crash. So, moving quickly from the second to the third state is essential for debugging. A programming environment should thus try to keep the second area as small as possible and littered with opportunities to crash. But if all you are doing is constraining the program never to enter the third area, nothing has been gained.

    • @ElPikacupacabra
      @ElPikacupacabra 2 years ago +1

      The Rust idea is probably to keep all programs in area 1, but like you said, it may end up building a block to area 3. Fundamentally, it's the fact that resource-intensive applications need nontrivial memory management. Rust will only help you if you are guided down the path of coding "normally" and not worrying about memory. There's a niche for Rust there indeed, but it still means that you will "code in C" for demanding applications.

  • @threelettersyndrom
    @threelettersyndrom 3 years ago +45

    I know this video is old, but I just realized that "turning the borrow checker off" forces the programmer to use an option type, and Rust in turn forces the programmer to always handle both cases of the option. This, in the end, will force (rather than remind as you talked about earlier) the programmer to handle the missing reference problem.
    Not sure if that's sufficient information for you to change your mind, but I think it's a relevant consideration.

    • @abcq2
      @abcq2 3 years ago +6

      By using a pair of (array index, generation number) as a reference instead of just the array index or raw pointer, you get a reference with two capabilities, which are: get a pointer to the referenced object; and, check that it is from the correct generation (ie not being used after free).
      You CAN combine those two properties into a single "get real pointer" function that checks the generation and returns the null version of an option type, which is a good idea, but the language does not FORCE you to do it. You could have the "get real pointer" function simply return whatever object was currently at that index, which would be valid (as far as memory safety and the compiler are concerned) but not logically correct.
      You make a case in favour of option types, but either way it's not really relevant to the borrow checker.

    • @khoavo5758
      @khoavo5758 1 year ago +1

      "Forcing the programmer to handle the problem" doesn't sound like a good justification for a language feature as complicated as the borrow checker. If the programmer is good, the problem would get solved anyway. Do you expect someone like Jon to write a program so bad that it wouldn't handle having a pointer to a stale entity? I don't think that would ever happen.
      If I put a feature that introduces as much friction as the borrow checker, then I'd expect it to help me solve the problem, not merely pointing it out and leave me on my own to handle it.
      Maybe the borrow checker would help beginners, but Jon wanted to make a language for good programmers. So his decisions are different.

    • @sosa_enjoyer
      @sosa_enjoyer 1 year ago +4

      @@khoavo5758 You're missing the point. During this video he says multiple times himself that bugs such as these are tricky to remember to fix and implement safeguards against, regardless of your programming skill. "Good" programmers still make mistakes. Also, the borrow checker's job is not to "fix" your mistakes; again, like Jon says, one of its main prerogatives is to force habits and good practices (whether those are valid habits is another story). This toxic idea of "gud programmers don't make mistakes" is childish; it reminds me of people who shill C as the only good lang and force horrible void* code down everyone else's throat to prove some sort of genius.

    • @khoavo5758
      @khoavo5758 1 year ago +1

      I never said good programmers don't make mistakes; I said good programmers always solve the problem. And the borrow checker didn't help solve the problem at all in this case.

    • @sosa_enjoyer
      @sosa_enjoyer 1 year ago

      @@khoavo5758 Your comment implies that the borrow checker is just some kind of bloat a "good programmer" wouldn't need (because of some perceived ignorance of the fact that great programmers can still make mistakes). Also, this is obviously just one case, and I would say that overall the checker still encourages healthy patterns one should strive for in their code.

  • @MortenSkaaning
    @MortenSkaaning 5 years ago +2

    At least one of the good things about the generational index is that you can make it fatal to access old indices, so you would never operate on deleted/corrupted memory.

  • @kim15742
    @kim15742 4 years ago

    Wow, that is a very interesting take on exactly the issues I am facing

  • @jptbaba
    @jptbaba 5 years ago +6

    I still vouch for Rust. If you really need to break the borrow checker, you can always branch out to unsafe; you just need to ensure that it is isolated enough that you aren't causing potentially bigger problems there.
    I stopped halfway through, at the part about weak pointers. You are basically talking about Option

  • @FaZekiller-qe3uf
    @FaZekiller-qe3uf 11 months ago +2

    There's no uncertainty in the behavior of the program at runtime assuming no unexpected outside interference. This is because of Rust's safety model. No undefined behavior, I feel is important. The type system is also mostly enjoyable.

  • @bmazi
    @bmazi 2 years ago +9

    Another approach to tracking destroyed entities is an event system. Imagine there is an entity A that cares about entity B. A may subscribe to something like an OnEntityDestroyed event of B (which means pushing a callback pointer onto an event stack), so when B is destroyed, it fires the callback chain, which happens to have all the required logic related to B's destruction, including A acknowledging that B is no more.
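    A minimal sketch of that event idea in Rust (all names invented): subscribers push callbacks, and destroying B fires them so A can clear its reference.

    ```rust
    // Toy on-destroy event system; Events/Id/subscribe are
    // illustrative names only.
    use std::collections::HashMap;

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub struct Id(pub u32);

    #[derive(Default)]
    pub struct Events {
        on_destroyed: HashMap<Id, Vec<Box<dyn FnMut(Id)>>>,
    }

    impl Events {
        // "A subscribes to B's destruction": push a callback.
        pub fn subscribe(&mut self, watched: Id, callback: Box<dyn FnMut(Id)>) {
            self.on_destroyed.entry(watched).or_default().push(callback);
        }

        // Fired by the engine when `watched` is destroyed; every
        // subscriber gets a chance to clear its reference.
        pub fn fire_destroyed(&mut self, watched: Id) {
            if let Some(callbacks) = self.on_destroyed.remove(&watched) {
                for mut cb in callbacks {
                    cb(watched);
                }
            }
        }
    }
    ```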

  • @smartislav
    @smartislav 4 years ago

    OMG, I soooo agree with you about Go. Rust, however, is very safe and quite productive. The learning curve is very steep though.

  • @not6793
    @not6793 4 years ago +3

    Making a handle from the index of the entity plus 1 makes sense, so you can always say that 0 is invalid.
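    This +1 trick is what Rust's `std::num::NonZeroU32` encodes directly: zero is unrepresentable, so `Option<Handle>` needs no extra space. A tiny sketch (the `Handle` wrapper and the conversion functions are hypothetical):

    ```rust
    // Handle stores index + 1 inside a NonZeroU32, so 0 can never be
    // a valid handle and the niche makes Option<Handle> free.
    use std::num::NonZeroU32;

    #[derive(Clone, Copy, PartialEq, Debug)]
    pub struct Handle(NonZeroU32);

    pub fn handle_from_index(index: u32) -> Handle {
        // Stored value is index + 1 (caller must keep index < u32::MAX),
        // so index 0 still yields a valid, nonzero handle.
        Handle(NonZeroU32::new(index + 1).unwrap())
    }

    pub fn index_from_handle(h: Handle) -> u32 {
        h.0.get() - 1
    }
    ```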

    • @RigelOrionBeta
      @RigelOrionBeta 4 months ago

      Exact same principle as nullptr in memory addressing.

  • @SimonFreePlus
    @SimonFreePlus 1 month ago

    I realize I’m 5 years too late to this thread, and maybe somebody else has already pointed this out, but the difference between using a generational index and just “turning the borrow checker off” is this:
    1) the indexes cannot be used like pointers in that they cannot be dereferenced at any time, but only when we are being given access to the underlying storage
    2) this means that the system can be designed such that you have complete control over when an entity has read or write access to the underlying storage
    3) for example, an entity could be allowed to mutate itself in some kind of update method, but be passed a context that provides read only access to other entities that it has keys for, but some kind of larger system could be passed a mutable context that allows read write access to any entities
    This may not be immediately obvious for a game engine, but for other kinds of similar “object soup” applications where, for example, mutation can only happen in the context of a command architecture, the ability to prevent some subsystem from “hiding away” a pointer to an object and mutating it whenever it wants or reading it when the domain logic implies it has an invalid value, this is an architectural win.
    My experience is largely in the kind of world where we do this kind of thing with automatic reference counting and weak pointers, and the problems described above are very common pitfalls.
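    Points (2) and (3) can be sketched with two context types that gate access to the storage (all names here are invented); holders of keys can only dereference them through whichever context the system hands them:

    ```rust
    // Sketch: keys are dereferenced only through a context, so the
    // system decides who gets read-only versus mutable access.
    pub struct World {
        pub names: Vec<String>,
    }

    pub struct ReadCtx<'a> {
        world: &'a World,
    }

    pub struct WriteCtx<'a> {
        world: &'a mut World,
    }

    impl<'a> ReadCtx<'a> {
        pub fn get(&self, key: usize) -> Option<&String> {
            self.world.names.get(key)
        }
    }

    impl<'a> WriteCtx<'a> {
        pub fn get_mut(&mut self, key: usize) -> Option<&mut String> {
            self.world.names.get_mut(key)
        }
    }

    impl World {
        // An entity's update might receive only a ReadCtx; a larger
        // system can be handed a WriteCtx.
        pub fn read(&self) -> ReadCtx<'_> {
            ReadCtx { world: self }
        }
        pub fn write(&mut self) -> WriteCtx<'_> {
            WriteCtx { world: self }
        }
    }
    ```

    The borrow checker then guarantees no subsystem can hide away a pointer and mutate outside its granted context, which is the architectural win described above.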

  • @slutmonke
    @slutmonke 5 years ago

    Definitely going to check out the blog post she says she's going to put out to see if this is addressed there.

  • @AlbertoFiori92
    @AlbertoFiori92 4 years ago +3

    I think that the concept of Rusty safety is that the generational array returns an option. This means that it is "safe" to use indices as weak references without the associated cost.

    • @blarghblargh
      @blarghblargh 1 year ago

      yes. also, the array + generational index pattern works fine in other languages. the borrow checker is designed for much more fine grained use than high level architectural advice. this particular pattern is a poor selling point.

  • @ZeroZ30o
    @ZeroZ30o 5 years ago +2

    @Jonathan Blow - I know you said it's not the subject of this video, but I'm interested in knowing your stance on entity component systems, since I haven't seen many arguments against them.
    If you've already answered on this I would be fine with just being pointed to whatever video you've done it in.

    • @seditt5146
      @seditt5146 4 years ago +1

      One con in my opinion is that the code just feels like it loses structure. With OOP you can trace the structure of your code so easily it is second nature, but with ECS I personally just find it more difficult. Yes, you have your components and you can see what entities consist of, but attempting to navigate around and piece together a single entity is a pain in the ass, depending on how you implemented it. With OOP, on the other hand, all your data for a single object is right there in front of you in a nice neat box. I am working on benchmarking the two methods as we speak, so sorry I can't provide pros and cons there; just overall user-friendliness is a bit lower.

  • @lechurrajo
    @lechurrajo 5 years ago +1

    I guess you could mix the flag idea with reference counting to avoid checking every frame.
    When you want to delete an object, its flag gets set. When objects that reference it update, they see the flag, reduce its reference counter and nullify their pointer.
    The downside of this is that a deleted object may stay unfreed an arbitrary number of frames, but this system may be faster than hashing so it's a reasonable tradeoff
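    A toy version of this flag-plus-refcount scheme (invented names); each referencing object calls this during its update and nulls its own pointer when it returns true:

    ```rust
    // Deferred deletion: the dying flag is set first; referencing
    // objects drop their link on their next update, and the slot is
    // actually freed only when the refcount hits zero.
    pub struct Obj {
        pub dying: bool,
        pub refcount: u32,
        pub freed: bool,
    }

    // Called by each object that still holds a pointer, during its update.
    pub fn release_if_dying(target: &mut Obj) -> bool {
        if target.dying {
            target.refcount -= 1;
            if target.refcount == 0 {
                target.freed = true; // safe to free: nobody points here now
            }
            true // caller nulls its pointer
        } else {
            false
        }
    }
    ```

    As the comment says, the object may linger for several frames until every referrer has updated, which is the tradeoff against per-frame checking.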

  • @XxXuzurpatorXxX
    @XxXuzurpatorXxX Рік тому

    Modern C++ people would force the instigator of a link between two entities to give the linkee a callback to be informed when that linkee goes out of scope. This problem often shows up in UI work, where signals between a UI element and its handler may go out of scope at random moments, and each of the linked elements needs to handle the other partner being dead. With lambda expressions it is actually quite simple to do.

  • @elahn_i
    @elahn_i 5 років тому +19

    No, this is not "turning off the borrow checker." It is memory safe and free from data races. However, the borrow checker does not prevent logic errors, which is what occurs if you follow a stale "reference" without checking it. She's saying the borrow checker pushes you in the right direction, not that it solves all your problems for you.
    One way to solve the problem is to use an accessor that checks the generation and make it the only way to access the component (edit: as you mentioned in the Q&A). Using traits and generics, you can leverage the type system to ensure correctness at compile time if that's important to you. Or if you want "fail fast" runtime crashes, simply panic!() when a code path should never execute.
    The borrow checker enforces lifetimes and prevents mutable aliasing in safe rust. Is it worth moving that checking from the programmer's brain to the compiler? It is to me. If you don't want its help, simply declare the function/block unsafe. Being able to grep for "unsafe" and audit those sections extra carefully is great.
    Edit: The "pain" caused by the borrow checker goes away after the first few weeks of programming in Rust. To me, it never feels painful, it feels like the borrow checker has got my back and saves me heaps of debugging time.

    • @davidste60
      @davidste60 5 років тому +2

      Elahn Ientile It seems to me that she has created her own unchecked memory system inside a Vec. How has the borrow checker pushed her in the right direction? Hasn't she deliberately avoided it?

    • @elahn_i
      @elahn_i 5 років тому +9

      In this case, the push was "don't use raw pointers" which led to using an index lookup scheme. Even if you don't check liveness, the memory is still allocated, initialised and the same type, so it's memory safe. The need to do the liveness check still exists in any index lookup scheme in any language. Also, it's only "unchecked" (for liveness, it's still checked for memory safety) if you index into the Vec manually, using an accessor function to enforce that check is easy enough.

    • @davidste60
      @davidste60 5 років тому +1

      Good points.

  • @austecon6818
    @austecon6818 5 місяців тому

    Okay NOW I understand the power of compile-time execution of arbitrary code to enforce that every programmer on a large team doesn't forget to obey certain project-specific rules... genius. Perfectly solves the problem without creating friction in every other aspect of the codebase...

  • @RikNauta
    @RikNauta 4 роки тому +26

    I think what you're not taking into account is that you still need to dereference the index and then use the borrow checker to enforce liveness & correctness during the scope of an operation. The problem it solves is that you want long-lived references, but with the option of reclaiming them if someone else needed them while you didn't.
    So no, she is not turning the borrow checker off; she is deferring its enforcement to the point where the code actually cares.

    • @harindudilshan1092
      @harindudilshan1092 Рік тому +5

      She is creating a custom allocator. And it bypasses the borrow checker using indexes.

    • @yokunjon
      @yokunjon 7 місяців тому

      @@harindudilshan1092 No.

    • @taragnor
      @taragnor 6 місяців тому

      @@harindudilshan1092 Unless I'm really misunderstanding how custom allocators work in Rust, I don't think that will have any effect on the borrow checker. While allocators do use raw pointers, ultimately you're still given a Rust struct that's bound by Rust lifetimes and Rust rules like the borrow checker. Unless the entire project is in unsafe code and just uses raw pointers, I don't see how it would bypass the borrow checker. From the code snippet there it looks like they had conventional Vecs with stuff in them. If you've got a Vec then you have to follow the rules of Rust when accessing it, even if you allocated it with a custom allocator.

    • @harindudilshan1092
      @harindudilshan1092 6 місяців тому

      @@taragnor The borrow checker is designed to guarantee certain conditions (no multiple writes, etc.). Since we use a convoluted scheme, the borrow checker is unable to see that we violate them. We can see that it does indeed violate the principles.

    • @taragnor
      @taragnor 6 місяців тому

      @@harindudilshan1092 I don't see why the borrow checker can't verify this stuff, at least under the code snippets I've seen in the video. There's nothing special or unsafe going on with a bunch of Vec types, that's idiomatic rust. You'd need RefCell, or raw pointers + unsafe blocks to circumvent the borrow checker.

  • @InfiniteQuest86
    @InfiniteQuest86 Рік тому +4

    Rust is really good for what it does. But it doesn't have to be used for everything like some people think.

    • @user-ov5nd1fb7s
      @user-ov5nd1fb7s 17 днів тому

      Maybe not for everything but for most things.

  • @charlesrosenbauer3135
    @charlesrosenbauer3135 5 років тому +7

    I think Rust's borrow checker is designed to focus on the most common, simplest cases of memory management. Stopping the most common and simplest causes of race conditions and use-after-free bugs. If the programmer simply forgets that some data is freed, or isn't aware that the data they're modifying may be being modified by another thread, it'll help stop those kinds of cases. It helps keep small, stupid mistakes from becoming big headaches later on.
    In this kind of case, where the memory management behavior is quite a bit more complicated, this is a case where the borrow checker does a sub-par job. It does still verify that you won't get those kinds of bugs, but it won't check all the other things you're interested in knowing.
    I think having some kind of optional borrow-checker may not be a bad idea. It could cover the common cases pretty well. Then when you run into these more complex problems where it really doesn't help, you can fall back on some other solution. I'm sure Jai's compile-time execution will be powerful enough for someone to just implement this as a library, though.

    • @memtransmute
      @memtransmute 3 роки тому +1

      That the borrow-checker only accepts a subset of "safe" programs (w.r.t. its safety rules) and rejects only a subset of logically wrong programs is on purpose. Like any other type systems, it turns out we can't have both *soundness* and *completeness*, if the language is sufficiently complex. Rust makes sure its borrow-checker is sound, but does not guarantee that all faults are caught or all valid programs are accepted. This also holds for any other static analysis systems that one may come up with - such systems are good at catching some bugs of a particular class or several of such classes, but cannot verify the correctness of a general program. There will always be some incorrect programs that are not caught, or some correct programs that are rejected.

    • @memtransmute
      @memtransmute 3 роки тому

      But at least having a sound foundation is a good start - we can always extend such type systems so as long as the extensions are safe, it's just a lot of work to get right.

  • @edwarddejong8025
    @edwarddejong8025 5 років тому +4

    Mr. Blow's points on the logical flaws in the Rust lecture are quite logical and sound. Any long-time programmer is like a river rock, smooth around the edges, worn by the passage of so much water.

  • @iceX33
    @iceX33 5 років тому +2

    The main concept behind a canonical ECS implementation is separation of state and behaviour. There is "code" (you can call it data structures) which manages the state. And there are systems, which query the data they need from the state-holding data structures and perform data transformations, or side effects, like sending data to the graphics card, network or disk. The `GameStates` struct is part of the code which manages the state. It is a struct of arrays, which is good for data locality. A system can borrow a component (which holds data only) from this struct of arrays for data transformation. The component does not belong to this system though. This is where the Rust restrictive memory model shines. If a system can only borrow a value, it can't mutate it. It can only create a new value and pass it to the state manager. The state manager can then decide to replace its value with the new one, performing the version increase etc... It is specifically interesting if you want to multithread the data transformation logic. Every thread borrows the values but doesn't own them. At the end they create something like an event for the managing data structure with the new values, and the managing data structure is the one which joins the events and performs the actual mutation. So I would say the described technique is not against the borrow checker technique in Rust. IMHO it supports it very well.
    Now I am actually curious how Specs (docs.rs/specs) does the data management and data transformation in detail.

    • @iceX33
      @iceX33 5 років тому

      Small correction, in Rust we can also borrow something, which we can mutate. However Rust compiler makes sure that there is only one mutable borrowed value at a time (Following the pattern multiple reads - single write). The main benefit of the borrowing which I did not mention in previous comment is that a borrowed value can't be owned by something else. If we want to pass a borrowed value to something which wants to own this value, we will have to clone it.

    • @iceX33
      @iceX33 5 років тому

      Sorry, I need to add another update after partially rewatching the talk by Catherine. I believe the main point that Catherine is making is that in Rust, because of the strict memory management, it's harder to design a cyclical object graph, because an instance can be owned only once.
      For example, writing a doubly linked list in Rust is non-trivial, because every node is "owned" by the previous and next node. There are special types in Rust which let you move the ownership checks from compile time to run time, but they are rather a code smell and also cumbersome to use.
      As I mentioned before, ECS dictates that data is owned by one thing and just borrowed by systems for data transformation. The data is stored in a rather tabular way, where if we want to express a relationship between entities we use an index (optionally with generations).

  • @Plasticcaz
    @Plasticcaz 5 років тому +22

    I think you're right that the borrow checker doesn't help that much. I think the Option is what would actually help.
    In C, if an entity was deleted, you would return NULL when you passed an invalid EntityId to the Entity System. This is fine as long as every programmer knows that NULL is a possible return value, but if they don't, you could crash if you are lucky, or have a horrible bug if you aren't. Rust uses the Option type to "force" the programmer to consider the fact that something might be missing.
    Typically Rust tries to offload a lot of checking onto the type system, and I believe this, more so than the borrow checker, is one of its strengths. I understand this way of programming isn't for everyone, but that's the way Rust does it.
    I do like your philosophy on getting rid of friction, and the friction of Rust is definitely a downside. It's mostly your talk of lowering friction in programming that has me interested in your language.

    • @buttonasas
      @buttonasas 5 років тому

      Isn't that what the function description is for? I expect a comment on every function I use from libraries that includes input and output format cases/description.
      I guess forcing it does make it more mindless, which may be good or may be bad.

    • @buttonasas
      @buttonasas 5 років тому

      I was specifically doubting whether it is good that the compiler forces you to do things that are correct. It's probably good, though.

    • @connormccluskey9103
      @connormccluskey9103 5 років тому +2

      I think the borrow checker does ease you into a better idea though. Instead of storing direct `&T` to other components/entities, you have an id that you can use on a manager and get back an `Option` which then forces you to check whether the entity is still alive. If you tried to store `&T` then you would get lifetime errors because you can't prove that the other entity lives as long as the entity that is pointing to it.

    • @connormccluskey9103
      @connormccluskey9103 5 років тому +1

      Watch the talk she did; she talks about generational indices. Basically you have a `(u32, u32)`, i.e. 2 unsigned integers: the first is the actual index, and the second is the "generation" the entity belongs to. When you grab the component, it checks if that entity is still alive and the slot hasn't been reused (a new entity allocated in that spot). If it isn't alive it returns `None`/no component.

  • @batmansmk
    @batmansmk 5 років тому +6

    I agree the borrow checker didn't help her. But she added Option in the Vec, so she HAS TO check if she got SOME or NONE when getting a result from her Vec. So as far as I know, it's hard to make the error you describe.

    • @tototitui2
      @tototitui2 8 місяців тому +1

      I think Jonathan totally missed that fact. Tagged enums solve the problem in exactly this "somebody has to check" way. Then the borrow checker will help, because if you start to store references to that unpacked value it will start to yell at you! What's worse is that he mentioned this "good citizen" thing; that is exactly the part where the borrow checker helps.

  • @bool2max
    @bool2max 5 років тому

    Fucking love listening to Jonathan. So glad he streams.

  • @cheako91155
    @cheako91155 2 роки тому +1

    ua-cam.com/video/aKLntZcp27M/v-deo.html A type is used to force checking the GenerationIndex to `get` an Entity. "Rust's Privacy Rules" would prevent the `Vec` from being accessed directly, and that is the part of the example code addressing this observation. The borrow checker ensures you'll always be comparing against memory that holds a valid GenerationIndex, so it does not solve other "what if" situations. With the borrow checker and type privacy working in tandem, the example application would almost ensure the desired result from an ABA test. I say almost because the `set` function takes a GenerationIndex, where I think the type should be generating those (so it's impossible to get wrong) and returning `(Id, GenerationIndex)`.

  • @sithys
    @sithys 5 років тому +4

    I study and implement programming languages at work. My team owns an interpreter hosted in the cloud. For the question about scripting languages, I think your answer was a very good answer. I find that only other engineers can actually use a language that I create, and that the dream of giving a language to a non-programmer doesn't seem possible. Languages do provide a nice layer of abstraction that can hide many details from the user, though this is most important in situations where the underlying technology (like C++) is causing so much pain that the engineers think that a new language is worth the maintenance effort. I can see this would be a common occurrence in C++, though hopefully in Jai the language stands on its own, and if programmers have problems with it, you just fix the language (if the fix is consistent with the stated philosophy of the language) instead of hiding problems in the language behind another language.

  • @YellowCable
    @YellowCable Рік тому

    very interesting comments, thanks

  • @chaquator
    @chaquator 8 місяців тому +1

    Rust has the same smart pointers as C++ under different names (e.g. Box), and I've seen plenty of Rust code deal with the borrow checker by regressing to those, basically back to the same messy OOP style as Java and C++

  • @porky1118
    @porky1118 Рік тому +1

    39:38 It's not exactly the same as turning the borrow checker off. Now if the program crashes, it's not a segfault or something like that, but the error is handled at the language level. And if you try to read invalid data, it always crashes. If data has been destroyed, there's no way you could still access it. But if it has been reallocated since then, there might still be a problem.

  • @blenderpanzi
    @blenderpanzi 5 років тому +26

    I see only one small advantage of the borrow checker in this case and it wasn't even mentioned: If you use such an index to "check out" an entity the "pointer" you get is a borrow and you're not able to store it anywhere. You can use it there in the scope you checked it out, but then you have to let it go.

    • @OMGclueless
      @OMGclueless 4 роки тому

      Doesn't this also create a problem though? Because being a borrow *also* means you can't check out any *other* pointers from the ECS to use alongside which is a totally reasonable thing to want to do.

    • @blenderpanzi
      @blenderpanzi 4 роки тому +1

      @@OMGclueless You can have multiple *immutable* borrows.

    • @OMGclueless
      @OMGclueless 4 роки тому

      @@blenderpanzi Sure, but there are entirely reasonable things you might try to do that won't work. For example if you want to look up two physics components to read and update at the same time -- can't do that. Or suppose you want to call a mutable method on a component, and that method wants to be able to look up other components in the ECS -- can't do that.

    • @blenderpanzi
      @blenderpanzi 4 роки тому

      @@OMGclueless You could find workarounds for that, but yes, it's a trade off. Depends what you want- Don't know what would turn out best for the given problem.

    • @OMGclueless
      @OMGclueless 4 роки тому

      @@blenderpanzi The only real workaround is for the ECS to not return mutable references but instead Rc instances or something that you're expected not to store. i.e. Throwing out the borrow checker because it does more harm than good, and also adding reference counting overhead. Any API in Rust that returns mutable references by ID must necessarily have a catastrophic failure condition if you request the same ID twice even in unsafe code (or it must borrow the whole data structure when you get that reference so you can't even make the second request).

  • @ibrozdemir
    @ibrozdemir 4 роки тому

    Yes: at the beginning B was following A; after A dies, B (instead of crashing) will go to a random point on the map (if a new A' comes along and gets assigned this slot in the GameState vec list). I agree, this only gets rid of the crashing problem, but you cannot prevent the player from saying "hey, what the hell is B doing, where is he going, didn't A die?"

  • @eddypdeb
    @eddypdeb 5 років тому +3

    I see what you mean by "going around the borrow checker", you are *sort of* right. I am saying "sort of" because, *since* the borrow checker would be complaining about RefCell-s and other similar items, Catherine was forced to realize how the various entities' data should be accessible and, with the strong type system, implement basically another type of smart pointer. It might sound like I am trying to contradict you while saying the same thing you said, but the thing is that Rust, in Catherine's implementation, does not allow direct access to data in the Vec-s thanks to the strong type system and because the actual index is not public, and you have to access all components through the GenerationalIndex, which is, in fact, a smart-pointer-like type (BTW, reference-counted pointers are just one flavour of smart pointer in Rust; there are many of them, and you can even make your own).
    To make things more clear, most (all?) of Rust's smart pointers are actually implemented in the standard library, as opposed to being part of the core language, and one can implement any number of other smart pointers and have the type system make sure anyone using them simply *can't* "store the raw one somewhere". This is, in fact, how idiomatic locking works: you have a Mutex smart pointer, you request a "locked reference" from your Mutex - which is also *not* a raw pointer, but another opaque pointer - then, on scope end, the "locked reference" gets automatically dropped/deallocated and the mutex is released. In all this process the actual memory location being operated on is 100% opaque.
    I hope this makes things more clear.

  • @GordonWrigley
    @GordonWrigley 4 роки тому

    you could push that index lookup into the language as something that looks like a special type of pointer, just to make it syntactically nicer

  • @pseudo_goose
    @pseudo_goose 9 місяців тому

    I think weak pointers can be a lot simpler than you describe. The pointer itself doesn't need to be nullified, nor does the system have to maintain a list of backreferences. It could just store two counts in the allocation, counting strong and weak references. The value contained in the allocation can still be destroyed when the strong reference count goes to zero, but the allocation itself lives on for the sake of the weak pointers, until the weak reference count goes to zero, indicating that all of the weak pointers have handled the event and let go of the allocation.

  • @haruruben
    @haruruben 4 роки тому

    42:00 What you're saying makes sense... but to be fair I'm not familiar with Rust or this borrow-checker concept. It could be that this person isn't really taking advantage of it to its full extent or how it was intended. From what I can tell, the borrow checker is trying to add some kind of extra layer on top of pointers to keep track of things... I think... learning Rust... it's on my list of things to do

  • @drd2093
    @drd2093 Рік тому

    It is just turning off the borrow checker. The integer id is just an abstracted pointer, which can end up aligned with correctly-typed but incorrect data and cause a subtler bug.

  • @rogo7330
    @rogo7330 3 місяці тому

    So, if I understood correctly, the original problem was that the entity (not its memory; that is just the symptom) is no longer valid to interact with. In the talk she traded dereferencing a pointer to a thing that is not there anymore for a case similar to saving a file descriptor number and continuing to read from it even after it was closed and a new thing got the same number. I can think of several ways to properly handle this problem without assuming we will never exhaust our integer capacity, and it would probably work something like a TCP packet window: check that each "opened" entity was successfully "closed", and only after that reuse its number for new entities. Yeah, it seems this problem was already solved in networking and IO long ago)

  • @shavais33
    @shavais33 4 роки тому

    re: smart pointers around 26:00 - I don't think the bug he describes here is what happens? My understanding of smart pointers is that they essentially implement reference counting. So anything that points at B is actually going to be pointing at a pointer to B that is coupled with a reference count; the act of referencing B increments the count and the act of dereferencing B decrements it, and this is done in a thread-safe way. So I don't think the bug he's describing is what happens, unless he's talking about some kind of smart pointer arrangement that I'm not familiar with, which could easily be the case. But I've been listening through JB's playlist on "A Programming Language for Games" along with some other JB talks, and in them I've encountered this general memory management topic a few times. My current understanding of his actual objection to reference counting is that if you do reference counting you end up doing some form or another of garbage collection, because of the need to deal with circular references, and GC in general has just been a performance quagmire for games, every which way it's been tried, and it's been tried a lot of ways.

  • @FaZekiller-qe3uf
    @FaZekiller-qe3uf 11 місяців тому

    Rust's std::rc::Weak is a weak pointer whose upgrade() returns an Option, so it can be used as a reference to an Entity.

  • @astroid-ws4py
    @astroid-ws4py 3 роки тому

    Maybe WebAssembly could change that. There are solutions that allow embedding a WebAssembly interpreter inside general applications, and maybe it could allow coding both the game engine itself and the game scripting in the same high-performance language like C++/Rust; when the time comes, the scripting code could be compiled together with the engine code in order to get the fastest result without shipping the interpreter with the game itself

  • @flyingsquirrel3271
    @flyingsquirrel3271 5 років тому +7

    33:10 those are Vecs of "Option" enums. To use one of them, you have to check if it contains a Value or not explicitly. The only way around that is to use the "unwrap" method - by using it you tell the compiler explicitly that you don't want to check it, although your program could panic.
    Edit: formatting

  • @sollybrown8217
    @sollybrown8217 Місяць тому

    The packing on those structs always triggered my OCD. It does not matter that it is example code.

  • @OMGclueless
    @OMGclueless 4 роки тому +7

    There's actually a more fundamental problem with Rust's memory model than the borrow checker. Which is that the Rust compiler is allowed to assume that mutable references do not alias. So, for example, the following code *cannot* be allowed to compile: "p1 = entities.get(id1); p2 = entities.get(id2);" because what happens if id1 == id2? If you write only safe Rust the borrow checker won't even let you write this, because the first statement ties the lifetime of "p1" to "entities" and borrows "entities" exclusively for as long as "p1" is live. If you use the "unsafe" keyword you can return some kind of handle from "get()" that locks the id for as long as the handle lives, and the second call would then have to fail at runtime. What you can't do is use unsafe code to return a mutable entity with no extra accounting, because that is undefined behavior and the world will blow up -- this is "Rust doesn't allow data races" in practice; even if you don't write code that assumes p1 != p2 the compiler will.

    • @0xCAFEF00D
      @0xCAFEF00D Рік тому

      That seems crazy to me. But I can't find any information on Rust to the contrary and the Rust book says: "At any given time, you can have either one mutable reference or any number of immutable references.".
      It seems _incredibly_ onerous to me.
      Say we've got a spell in our videogame that swaps HP between two entities. Do you actually have to do something like this?
      Pseudocode:
      a=get_mut_ref(arr, id_a)
      b=get_immut_ref(arr, id_b) // is this even allowed? I'd feel dishonest for assuming it's not even if the book seems to say exactly that
      tmp=a.hp
      a.hp=b.hp
      close a
      close b
      //close both references because of the "either". I presume closing references happen on scope closure by default as they're destroyed. But if there's no manual closing that would make this even harder
      b_mut=get_mut(arr, id_b)
      b_mut.hp=tmp
      And this is just writing two values. I don't know how Rust programmers manage to program in this environment. And even more confusing to me: I couldn't write this as a function that takes the arguments to swap. I need to send the array and the two ids. Because if I'm passing in two mutable references that presumes I've already broken the rules. And if for whatever reason the entities are in different arrays I need to write another function that specializes for that case.

    • @OMGclueless
      @OMGclueless Рік тому

      @@0xCAFEF00D Yes, if you want to swap the HP of two entities, the basic API of a hash map doesn't work. You have a few options:
      1. Look up the entities multiple times and make a copy of the hp so that you don't hold both references at once.
      let hp1 = entities.get(id1).hp;
      let hp2 = entities.get(id2).hp;
      entities.get_mut(id1).hp = hp2;
      entities.get_mut(id2).hp = hp1;
      2. Use an experimental API that does this specifically.
      let [e1, e2] = entities.get_many_mut([id1, id2]).unwrap();
      std::mem::swap(&mut e1.hp, &mut e2.hp);
      If you're using a crate like specs for your entity storage that doesn't have the latter API, I think you're just stuck doing the inefficient option #1.

  • @MyAce8
    @MyAce8 5 років тому

    the way I see it is that there are a few minor quality of life improvements the option type is boxed so you'll never accidentally use a "none" object. In order to unbox the object you need to do a pattern match which can be handled gracefully in a way that easier to debug. What it doesn't prevent is bad references to non "none" objects. Also because the borrow checker steers you away from idiotic decisions it's arguably easier to arrive at the correct implementation even if there is very little discrepancy in the end product.

    • @MyAce8
      @MyAce8 5 років тому

      P.S. I'm not a Rust programmer, but I just started watching the talk cited, and it seems more of a defense of the borrow checker than a claim that the borrow checker makes these specific patterns easier. In other words, in most cases the borrow checker is great, and in situations where it's not great you actually just have shitty programming patterns. While I can't vouch for how great writing code with a borrow checker is, I can say that if you can't write what you want with one then you probably don't understand the problem too well.

    • @taragnor
      @taragnor 6 місяців тому

      The only possible bad outcome is that you give it a non-unique id and it returns an object you didn't want, but that's more a pitfall of reusing ids. Ideally you have some kind of system to ensure your ids are unique, so you can't get a repeat.

  • @samuellourenco9526
    @samuellourenco9526 2 роки тому

    What if B checks who is following it (A, D, whatever)? When it gets destroyed, it looks up whoever is following it and nulls their pointers to itself. I don't know if it is expensive, but it only needs to be done in the destructor. It implies a bilateral relation, though.

  • @ardawanx
    @ardawanx 4 роки тому +3

    Can I know more about the language you're creating?

  • @petrupetrica9366
    @petrupetrica9366 4 роки тому

    Hi, just wanted to mention that in the "modern c++ way" in this case, the solution with the entity ID would probably be preferred. shared_ptr (the reference counted one) is meant to be only used across threads when it is unclear from one thread's perspective on whether the object on the other thread has been deallocated or when both of the threads are owners of the memory and the last one using it needs to deallocate it, hence it has atomic ref count. However if it was the case that entity A was working on another thread, by using a weak_ptr in entity A which points to B (a shared_ptr which is managed by the entity system) you would actually have this layer of indirection as with the index, because you will have to check whether the weak_ptr is expired and if not, lock it so it can't be destroyed while you are operating on it, this is what it was designed for and it solves that problem.

    • @SaidMetiche-qy9hb
      @SaidMetiche-qy9hb 4 роки тому

      Or just don't make the entity hold any kind of reference to the other entity. Instead have a function that holds pairs of references for the follower and followed entity. That solves the problem completely.

    • @SaidMetiche-qy9hb
      @SaidMetiche-qy9hb 4 years ago

      And if any of the entities in that couple get freed or no longer exist just remove the pair from the list. That's simple

    • @petrupetrica9366
      @petrupetrica9366 4 years ago

      @@SaidMetiche-qy9hb Even if you have a list of follower-followed references in a class, there is nothing to prevent you from accessing those references raw in the update function which does the logic on that pair of references, the problem is how do you make sure these references are still pointing to valid objects, weak_ptr provides that mechanism of checking whether the entity is alive and it also enforces it. It's not just having to remove the pairs any time one of the entities dies from the list as you pointed out, in the case where the removal is happening on another thread, you could already be working on the update logic when one of the entities is removed from the list, which will most probably result in a crash so you would have to have some sort of synchronization mechanism which weak_ptr already provides. If it is single threaded, then yes, you can make sure that these pairs are removed from the list before that update method is called, however there is no mechanism that would enforce that, so you will have to keep in mind at all times to cleanup all these lists of groups of entities you would have for each type of logic (ex: following an entity), a task which in itself can be trivial or not depending on the design and implementation.

    • @SaidMetiche-qy9hb
      @SaidMetiche-qy9hb 4 years ago

      @@petrupetrica9366 Nah the way you do it is by checking with the allocator directly or the entity manager, which the systems should have a reference to, in a standard ECS it's not that hard to do that.

    • @SaidMetiche-qy9hb
      @SaidMetiche-qy9hb 4 years ago

      The following is called an action, and it has relation to a data structure

  • @grumbel45
    @grumbel45 5 years ago +18

    I'd say the borrow checker worked fine here. While there is still the possibility to mix up entity ids and access the wrong objects, the borrow checker does ensure that you stay within those type boundaries. You can't accidentally write over bytes belonging to a completely different type of object or outside the array. Furthermore it would be relatively easy to make sure that entity ids are guaranteed to be correct by wrapping them up in a class and not just allowing any random integer.
    Rust isn't going to prevent all bugs, but it seems to do a good job at limiting the amount of damage they can do.
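A minimal sketch of the id-wrapping idea described above, using Rust newtypes (the store and field names here are hypothetical, not from the talk): giving each component array its own index type makes it a compile error to hand a sprite id to the physics store, even though both are just a `usize` underneath.

```rust
// Hypothetical component stores with distinct, non-interchangeable id types.
#[derive(Copy, Clone)]
struct PhysicsId(usize);

#[derive(Copy, Clone)]
struct SpriteId(usize);

struct PhysicsStore(Vec<f32>);
struct SpriteStore(Vec<&'static str>);

impl PhysicsStore {
    // Only a PhysicsId can index this store.
    fn get(&self, id: PhysicsId) -> Option<&f32> {
        self.0.get(id.0)
    }
}

impl SpriteStore {
    fn get(&self, id: SpriteId) -> Option<&&'static str> {
        self.0.get(id.0)
    }
}

fn main() {
    let physics = PhysicsStore(vec![9.81]);
    let sprites = SpriteStore(vec!["player.png"]);
    assert_eq!(physics.get(PhysicsId(0)), Some(&9.81));
    assert_eq!(sprites.get(SpriteId(0)), Some(&"player.png"));
    // physics.get(SpriteId(0)); // compile error: expected PhysicsId, found SpriteId
}
```

As noted elsewhere in the thread, two buffers of the same component type would still share an id type, so typed indices don't catch every mix-up, only the cross-type ones.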

    • @peterdelevoryas4684
      @peterdelevoryas4684 5 years ago +2

      Is it worth it to add it to JAI or any other non-garbage-collected language though? The downside is that it requires the programmer to annotate lifetimes in places where it can't be inferred (more verbosity) and it increases compile time (though that could be toggled on/off for debugging). Also, another nitpick, after using this index + buffer pattern (with typed indices!) in Rust a lot, I would say it's still very easy to make a mistake! Frequently, I have had multiple buffers of objects of the same type, and mixed up the indices, so even typing the indices didn't help in that case. I think ultimately it's still way more important to make debugging and development easier/faster. Just my opinion though, and Rust does all that really well!

    • @schwajj
      @schwajj 5 years ago +2

      (Only watched this video, not the full RustConf video...)
      The borrow checker has nothing to do with it. Implementing the same pattern in a non-borrow-checking type system (such as C++) would have exactly the same safety properties as the Rust version. You still can’t accidentally become confused about the type of an object in a component array.
      In other words, the borrow checker is only part of the Rust type system. Once you sidestep it, it continues to “work fine” in a trivial sense, but what does that really mean if you have exactly the same sorts of error cases that you would have in C++?
      There are classes of programs where the borrow checker is great. There may even be ways to implement a game engine without sidestepping it. But as soon as you start implementing a scheme that could be (and has already been) implemented in C++, then it’s clearly nonsensical to credit the borrow-checker with guiding you to that scheme.
      This seems to be the central reason that Jonathan started to rant, and I agree.

    • @grumbel45
      @grumbel45 5 years ago +4

      The point isn't that the borrow checker allowed you to write this, but that it didn't get in the way. A lot of uglier ways to solve the problem with raw pointers wouldn't work in Rust, but the nice way that everybody is already using anyway did.
      Rust's priority is to be a memory-safe language; it exists to stop you from writing bugs that allow arbitrary code execution. This pattern doesn't bypass that. Using this pattern in C++ doesn't give the same protection: you can still use a raw pointer and access arbitrary memory. Pointers are part of the C++ language and you can't disable them, you can only try to use them less.
      The problem is that Jonathan is thinking about this only from the programmer's perspective. But this isn't about the programmer, this is about the sysadmins who have to patch the software at 3am because there was yet another buffer overflow. From a programmer's perspective a buffer overflow is quite harmless: it doesn't stop the software from working under normal circumstances, it doesn't corrupt data, and everything seems fine. They are essentially invisible. It only starts becoming a problem when an attacker tries to exploit them, and then they can become really, really bad, like "people are going to die"-bad.
      That's the problem Rust is trying to fix, and when it can do that while only getting in the way a little bit, that's mission accomplished.
      That said, I do agree that it is a problem that the user is essentially forced to write their own memory manager here, and that can have problems as well, but that doesn't stop Rust from still being a vastly safer language than something like C++.

    • @peterdelevoryas4684
      @peterdelevoryas4684 5 years ago

      @@grumbel45 Good point!

    • @schwajj
      @schwajj 5 years ago +1

      Maybe I was being too pedantic. I was responding specifically to your statement that "the borrow checker does ensure that you stay within those type boundaries". This is incorrect, and I guess I tripped on it and missed the point you were trying to make. The borrow checker is only part of the Rust type system, and it is not the part that "ensures that you stay within those type boundaries". That's just the normal thing that all static type systems do, C++ included. That's all I was saying.
      Also contributing to my original reply was (what I perceived to be) Jonathan's characterization of the RustConf video as saying that the borrow checker guided her to the ECS scheme. I subsequently skimmed through the RustConf video, and didn't hear her say anything like that (I may have missed it). Anyway, when I saw your mis-statement about the borrow checker, I jumped on it rather than stepping back to consider whether it might be a mis-statement, and if so what point you were actually trying to make. Oh well, that's how it goes with text communication between strangers on the internet. :-)
      I never claimed that Rust isn't vastly safer than C++ in general, just that the common errors in an ECS are the same in Rust and C++.
      I could pick things out in your last reply to argue with (Buffer overflows are only harmful when attackers start to exploit them... what? No wonder you don't think that use-after-free errors in a Rust ECS are a big deal), but there wouldn't be any useful point. You think Rust is great, I think Rust is great. We just differ on exactly how much of a foot-gun C++ is.
      Cheers!

  • @roberthickman4092
    @roberthickman4092 5 years ago

    Getting entities from a central database is really just an implementation of the single source of truth idea.

  • @openroomxyz
    @openroomxyz 2 years ago

    I like how clearly you articulate your thoughts; it's really amazing. I love this holistic multi-perspective view.

  • @peterdelevoryas4684
    @peterdelevoryas4684 5 years ago +3

    "Fighting the borrow checker" is way over-stated. If you know what you're doing, 99% of the time you won't be violating the aliasing rules in Rust. The times when you do fight it are because the current borrow checker is lexically based, so things you trivially know are valid aren't considered correct. That will be changed with the non-lexical lifetimes (nll) borrow checker, which is smarter, but overall, I don't think you get very much benefit from the borrow checker. I'm a very inexperienced developer (I just graduated this spring and have only made a few things in my whole life), but I used Rust to make a Python compiler, and I ended up with the same problem: when I needed to use a graph, I used indexes instead of references, and I still ended up debugging index-out-of-bounds. I ended up making way more mistakes using indices with the wrong buffers than I ever did with things the borrow checker could actually verify. Overall, I still love Rust, but I use it for all of the features other than safety

    • @elahn_i
      @elahn_i 5 years ago

      It's okay to use "unsafe" when implementing graph data structures. ;) It's easy to fall into the trap of obsessively writing everything in safe rust, even when the problem is a lot easier or more performant to solve with judicious use of "unsafe." Although, if there's a reliability or security requirement, someone should probably audit the unsafe code carefully before it goes into production.

  • @Boopers
    @Boopers 4 years ago

    This is not how a weak pointer works though. There is no list of pointers that get nulled, which would be pretty much impossible to implement anyways. It's usually just two reference counts. One for the owners and one for the weak references. The object still remains in memory, but it is basically flagged dead once the count of owners reaches zero.
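Rust's standard library uses exactly the two-count scheme described above (`Rc`/`Arc` plus `Weak`); a small sketch of the behaviour:

```rust
use std::rc::{Rc, Weak};

fn main() {
    // One strong (owner) count, one weak count, as described above.
    let owner: Rc<&str> = Rc::new("entity B");
    let follower: Weak<&str> = Rc::downgrade(&owner);

    // While an owner exists, upgrading the weak reference succeeds.
    assert!(follower.upgrade().is_some());

    // Dropping the last owner flags the object dead: the value is dropped,
    // but the control block (the two counts) survives until the weak count
    // also hits zero, so upgrade() can safely return None instead of dangling.
    drop(owner);
    assert!(follower.upgrade().is_none());
}
```

No list of pointers to null is needed; every weak reference independently discovers the death through the shared counts.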

  • @0xCAFEF00D
    @0xCAFEF00D 5 years ago +5

    It would be nice to have a link to the talk in the description.

    • @GaMatecal
      @GaMatecal 5 years ago +1

      ua-cam.com/video/aKLntZcp27M/v-deo.html

    • @Jokler13
      @Jokler13 5 years ago +1

      ua-cam.com/video/aKLntZcp27M/v-deo.html

    • @GaMatecal
      @GaMatecal 5 years ago +1

      4m too late :p

    • @jblow888
      @jblow888  5 years ago +9

      Good point. Adding it.

    • @GaMatecal
      @GaMatecal 5 years ago +1

      Good joke jon, good joke :p

  • @TomKaitchuck
    @TomKaitchuck 5 years ago +3

    You are correct that the core issue is that relationships need to reflect the possibility that entities can go away. Unlike C++, the Rust compiler will reject code that doesn't deal with that issue. If you do deal with it, either in the way suggested in the original talk or in either of the two alternatives you mention, it will work in Rust. (Sometimes with a modicum of extra typing to explain it to the compiler.)
    I think the disconnect is that you have enough experience to know not to write the code incorrectly. Having a compiler that raises the issue forces me to think about it and come to the same conclusions as you, even though I lack your domain knowledge.
    Just as strongly typed languages force you to think about types, Rust forces thinking about ownership and lifetimes; you can't forget that entities can go away. Once you think about this it is easy to design it correctly.
    You mentioned that using the map is a way to "turn off" the validation of lifetimes. This is true in a sense, but it also "turns off" type checking. Another way of phrasing the same thing is that indirection via an id allows you to generalize over both lifetime and entity type. The entity with the reference no longer needs to care about such things, but you must handle the failure case when acting on the reference. Of course it is possible to erroneously assert that the type is what you expect and that the entity still exists. However, this fails in an obvious way.
    This is why 'needing to use generations' is not a problem. A new developer would start with a string or a uuid for an entity id and use a map to store them, and later optimize by moving to an int and a Vector. When making the optimization, you would obviously add generations because the problem is clear. Forgetting to add generations is possible, and the borrow checker won't stop it, just as the type checker won't prevent you from asserting an incorrect type. But given that you've been forced to identify the issue and deal with it, it's an unlikely mistake, and basic testing would quickly detect it.
    Rust did achieve its goal: you thought about the problem more, and came up with good solutions.

    • @Ian-eb2io
      @Ian-eb2io 1 year ago

      I can say that no amount of experience makes the problem go away in complex C++ code. The Rust compiler seems to pick up a lot of those cases in Rust code.

  • @greatbullet7372
    @greatbullet7372 5 years ago

    Ever thought about something like an update collection?

  • @peppeppeppepp
    @peppeppeppepp 5 years ago

    The thing he talks about from around 30:00 for several minutes would be true if there were no generation in the entity id type. The generation is crucial there, and it was not emphasized enough in the original talk.

  • @malusmundus-9605
    @malusmundus-9605 1 year ago

    "Nobody has entities hang around for an extra frame..."
    ...

  • @pierreyao7221
    @pierreyao7221 1 year ago

    The key to the safety of her code is the Option around the variables in the Vec. Most likely, when an entity is dropped (deallocated) she turns the Option to None. In Rust you can't use an Option meaningfully (i.e. access the enclosed entity) without first checking that it's Some; only then can you access the internal object. That's how she guarantees that an index doesn't take her to garbage memory.

    • @koh-i-noor123
      @koh-i-noor123 1 year ago +1

      What if you reuse the slot without updating IDs? You will "reference" the wrong entity. You can't move things in that Vec and it can only grow, so you can run out of memory if you keep spawning new entities. Unless you remember to set the ID to None/null/empty when you get None from GameState. I'd say this is still a kind of "use-after-free" error, but you will feel safe because the borrow checker didn't complain :P

    • @yokunjon
      @yokunjon 7 months ago

      @@koh-i-noor123 Have you watched the talk? That's where the generational part comes in. The borrow checker's main goal is avoiding undefined behaviour, not avoiding bugs. Undefined behaviours are a subset of bugs, but they don't account for all the bugs out there. So it is still your responsibility to avoid the rest of the bugs (like using the wrong entity). That's why a generational index is used.
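The generational scheme discussed in this thread can be sketched in a few lines of Rust (a hypothetical minimal arena, not the code from the talk): each id carries the generation of the slot it was handed out from, and freeing a slot bumps the generation, so a stale id misses even after the slot is reused.

```rust
// Hypothetical minimal generational arena.
#[derive(Copy, Clone)]
struct EntityId { index: usize, generation: u64 }

struct Slot<T> { generation: u64, value: Option<T> }

struct Arena<T> { slots: Vec<Slot<T>> }

impl<T> Arena<T> {
    fn new() -> Self { Arena { slots: Vec::new() } }

    fn insert(&mut self, value: T) -> EntityId {
        // Reuse the first free slot, or grow the Vec.
        if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
            let slot = &mut self.slots[i];
            slot.value = Some(value);
            return EntityId { index: i, generation: slot.generation };
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        EntityId { index: self.slots.len() - 1, generation: 0 }
    }

    fn remove(&mut self, id: EntityId) {
        if let Some(slot) = self.slots.get_mut(id.index) {
            if slot.generation == id.generation {
                slot.value = None;
                slot.generation += 1; // invalidate all outstanding ids
            }
        }
    }

    fn get(&self, id: EntityId) -> Option<&T> {
        self.slots.get(id.index)
            .filter(|slot| slot.generation == id.generation)
            .and_then(|slot| slot.value.as_ref())
    }
}

fn main() {
    let mut arena = Arena::new();
    let door = arena.insert("door");
    arena.remove(door);
    let monster = arena.insert("monster"); // reuses the freed slot...
    assert!(arena.get(door).is_none());    // ...but the stale id misses
    assert_eq!(arena.get(monster), Some(&"monster"));
}
```

This addresses the slot-reuse concern above: the lookup is still a logical check rather than something the borrow checker verifies, but a stale id now fails loudly with None instead of silently aliasing the slot's new occupant.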

  • @FrankTinsley
    @FrankTinsley 5 years ago +57

    Does he have a rant specifically about why he doesn’t like the entity component system pattern?

    • @xthebumpx
      @xthebumpx 5 years ago +12

      or one about OOP

    • @rmdec
      @rmdec 5 years ago +23

      He doesn't dislike it, from what I understood. He's trying to get across 3 subtle points, I think.
      1) ECS is not the sole solution, but other solutions have drawbacks that could be worse than ECS drawbacks.
      2) Rust should not be credited for making one realize the benefits of ECS just because it made someone realize classic OOP couldn't model their specific game.
      3) The borrow checker being on the conservative side causes enough friction, and ECS already implies custom memory management, that to him it is perhaps not worth picking up.

    • @nothke
      @nothke 5 years ago +3

      ua-cam.com/video/ZHqFrNyLlpA/v-deo.html

    • @MondSemmel
      @MondSemmel 3 years ago

      I think the following talk (23 minutes long) also touches on this stuff: ua-cam.com/video/dS6rCaDSwW8/v-deo.html

    • @stysner4580
      @stysner4580 3 months ago

      @@rmdec"Rust should not be credited for making one realize the benefits of ECS just because it made someone realize classic OOP couldn't model their specific game." what a moot point, no-one is making that point. The point being made is that ECS fits Rust really well because some of the issues ECS has will become obvious due to the borrow checker, making you fix the problem because the compiler won't let you make the mistake. That's not crediting Rust for the benefits of ECS, that's crediting Rust for guiding you towards something like ECS.

  • @shavais33
    @shavais33 4 years ago +1

    re: dissing on scripting languages (around 57:00 ish) - I've had to do quite a bit of side-by-side work and reimplementation work, working with C++ side-by-side with Python or going from C++ to Python or from Python to C++, or JavaScript to C# or C# to JavaScript, etc., and for me, I find that I can produce a working solution, including the debugging time, far and away faster in a dynamic duck-typed language that does memory management for me than I can in a language that uses strict typing and makes me do memory management. Hands down that's the case. The scripting languages are more rapid for me, absolutely for sure. It's hard to put a finger on exactly why; all I know is it definitely takes a lot less time overall. Maybe my intelligence is limited? I dunno, but it is certainly the case. Dynamic objects and duck typing reduce friction a lot, as does automatic memory management. A lot. But - it also limits performance and granularity of control to an unacceptable degree for real-time 3D game programming.

    • @Jack-hd3ov
      @Jack-hd3ov 4 years ago +1

      >I find that I can produce a working solution, including the debugging time, far and away faster in dynamic duck typed language that does memory management for me than I can in a language that uses strict typing and makes me do memory management.
      Of course, so do I (in most cases) and I think others probably do too.
      >It's hard to put a finger on exactly why
      I would guess it's because there's less scaffolding to put up with scripting languages, you can just get to the meat of the program logic.
      >Maybe my intelligence is limited?
      I would doubt that, you listed off multiple languages you've experience in. I'm sure you're quite competent.
      I'd conclude by saying that, while your points about productivity are correct, using languages like C, which require you to type more and do things like memory management manually, means that the end product is of a higher quality than it would be had you used a scripting language - but you already made that point. Personally, I think scripting languages are the right tool for some jobs, and using something like C for said jobs might even be considered obtuse, but the applications of scripting languages are fewer in number than most people think.

    • @shavais33
      @shavais33 4 years ago

      @@Jack-hd3ov Thanks for the kind reply. In a few different JB videos that I've watched now, I've seen him say in no uncertain terms that languages like Python and JavaScript Are Not more rapid than languages like C and C++. He says that this is a common belief that is completely wrong. I think I agree with JB about practically everything else he says, but I just don't see that one. If higher level languages aren't more rapid, why do we have them? If languages that are more expressive, that is, that let you do more work with less code without losing clarity, if those languages aren't more rapid, why do we have them, and why is JB trying to make one? Why make Jai if languages like Python and JavaScript aren't more rapid than languages like C and C++? I think JB is brilliant, but I feel like there's some kind of oddly dissonant disconnect there.
      Somebody said to Mike Acton that for the programming they're doing, they really just don't care about performance. Mike replied that people not caring about performance is why we have to wait 30 seconds for Word to start up. Ok, true enough - but Word could almost certainly be written entirely in a scripting language, while barely paying any attention to performance concerns at all, and still not end up taking 30 seconds to boot up! The developers of Word have to be doing something in a ridiculously awful way if it's really taking 30 seconds to start. If you do something that badly it doesn't matter what language you use. When I do profiling, whether I'm using C#, C++ or Python, I don't typically find Everything going slow. I usually find some specific bottleneck or another, and most everything else is perfectly fine. Even stuff that you'd expect to perform badly is doing fine. I would say that generally, if we do a bit of profiling, we can fix the bottlenecks, even if we're working in a high level language - unless 1. we've botched the design so badly that by that point we can't, or 2. we actually really need to do quite a lot of work in the space of a handful of milliseconds like you do in 3D games.
      But I don't think there are that many purposes that really require an enormous amount of work to get done (on the part of the code involved that the application developers are writing) in the space of a handful of milliseconds? I think people who think that languages like Python and JavaScript are too slow for most purposes or who think that they're not a lot more rapid than C or C++ must just not have very much experience with higher level languages. Honestly, I think outside of a few performance critical areas, like real time simulation / animation and such, the main reason to program in C++ is really nothing other than to garner higher wages. Which is a very good reason! But I think the reasons that C++ garners higher wages are mostly false perceptions. It isn't that C++ produces higher quality, it's that people smart enough to use C++ effectively are more likely to produce higher quality no matter what language they're using.

    • @Jack-hd3ov
      @Jack-hd3ov 4 years ago +1

      @@shavais33 There's a lot to unpack here, I disagree with most of what you said but I'll try to be unopinionated throughout my reply, here goes.
      You said that scripting languages like JavaScript and Python weren't slower than compiled languages like C and C++ (I'll take "scripting language" to mean "language which is run using an interpreter and has some solution for automatic resource management like GC), implying that scripting languages are on-par or even faster than compiled languages in terms of execution speed (I've taken "rapid" to mean "rapid at runtime" here, if you meant that the programmer could rapidly produce code then I believe that scripting languages rarely offer this and when they do the trade-off at runtime is not worth it but I'll assume you meant execution speed).
      Most scripting languages are orders of magnitude slower than most compiled languages, you can find many benchmarks online comparing C to Python and JavaScript to C++ etc. here is one comparing C to Python: benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/python3-gcc.html, Mandelbrot is 158.2 *times* slower in Python than it is in C and it takes (get ready) 1022 *times* the amount of memory that the C program takes. Interpreted languages aren't just slow in practice, they're slow in principle: a program in a language like C is just a program running under an operating system (most of the time). Whereas, a program in a language like Python is a program which is running a program (Python is a C program!), you will never be able to create an interpreter which runs code faster than a normal program would run because the interpreter itself is a normal program. All of this is apparent before even thinking about garbage collection or green threads or the fact that scripting languages usually follow the model of "everything is an object" which is inherently bad for performance.
      You ask why, if they're slower than compiled languages, do we use interpreted scripting languages and the answer is that we don't. In areas like game development, scientific computing and systems/embedded development, scripting language are, for the most part, nowhere to be found. This is mostly because of performance reasons but in the case of systems/embedded development, it's actually impossible to write something like a standalone operating system in an interpreted language.
      Scripting languages are (over)used in the cases where we think we can afford to use them. JB talks about this a lot and so do people like Mike Acton and Casey Muratori: computers have gotten so fast and large amounts of memory so commonplace that for simple applications you can use scripting languages without realising that they're slow, this isn't because they run fast but because you think the speed at which they run at is fast. Most likely because you're using the speed at which you personally can do the calculations in your program for comparison.
      A point JB makes regularly is that computers are a lot faster than people think, it's just that most people don't really know how to write fast applications. People write text editors and chat clients using technologies like React Native and because we all have what in the 1990s would have been considered super computers these applications usually only hang for a couple of seconds when doing tasks which are moderately more intensive than usual and take up amounts of memory which are tiny in comparison to what we have available. Most people think, or seem to think, that this is normal and that if an app written in X interpreted language is running slow then it's just because they've reached the limit of how fast the computer is. At this point, the CPU has become such an abstract thing that most people have no idea how to directly interface with it and if they're experiencing a performance problem with some software they'll just blame the hardware. If every application was written in high quality C code they would run so fast that the only bottleneck you would experience would be human reaction time. The Jai compiler is a perfect example of this; it runs so abnormally fast that it's almost hard to believe that it did anything at all and it's only getting faster. This is because JB actually knows what he's doing and how to harness modern hardware. Not only is he using this knowledge to build Jai but he's also making things like SIMD more accessible to every-day programmers through the higher-level language features of Jai.
      Another thing you mentioned: "Why make Jai if languages like Python and JavaScript aren't more rapid than languages like C and C++?" confused me slightly, do you think that Jai is an interpreted language? If so then please know that it's not, it is supposed to be a replacement for C++ and compiles to native machine code. The only part of Jai which involves interpretation is the arbitrary compile-time code execution but all of this happens at compile-time and once you have an executable it is just that, there is no bytecode or interpreter involved.
      You also brought up a question which Mike Acton answered during his cppcon talk about not caring for fast execution speeds by exclaiming that this was the reason why he had to wait 30 seconds for MS Word to launch, I had watched this and found it quite amusing while also sadly true and I thought you would follow by saying something about how interpreted languages shouldn't be used in these kinds of applications but you then went on to say that if they were then MS Word wouldn't take as long to launch. I fundamentally and wholeheartedly disagree with this: interpreted languages are inherently slower than compiled ones and especially when people writing in them don't pay attention to performance, not only would Word take longer to boot if it were written in a scripting language but it would take longer to do anything.
      >The developers of Word have to be doing something in a ridiculously awful way if it's really taking 30 seconds to start.
      I agree with this and I'd add that the developers of all MS products must be doing whatever they're doing in a ridiculously awful way for them to run in the way which they do (poorly and slowly).
      >(everything runs fast unless) we actually really need to do quite a lot of work in the space of a handful of milliseconds like you do in 3D games.
      Going back to your question of why Jai needs to exist when we have x, y and z languages, Jai is supposed to be built for making high performance games which involve real-time 3D calculations for graphics and physics and that's before rendering so there you go.
      >I think people who think that languages like Python and JavaScript are too slow for most purposes or who think that they're not a lot more rapid than C or C++ must just not have very much experience with higher level languages.
      I also disagree with this fully, I have personal experience in using scripting and interpreted languages (Python/Ruby/Java) for any task and then switching to a compiled language (C) for any task and noticing the dramatic speed increase. I also enjoy writing in C more than I enjoy writing in any other language. Additionally, when I now do some work in interpreted languages I notice that they're slower.
      >Honestly, I think outside of a few performance critical areas, like real time simulation / animation and such, the main reason to program in C++ is really nothing other than to garner higher wages.
      There is probably a lot of truth in this and I think you would enjoy reading this satirical fake interview with Bjarne Stroustrup: harmful.cat-v.org/software/c++/I_did_it_for_you_all
      >people smart enough to use C++ effectively are more likely to produce higher quality no matter what language they're using.
      This is probably the statement of yours I disagree with the most, and I'm very against the mindset that it illustrates. You don't have to be some genius to use C++; in fact I would argue that the vast majority of C++ programmers are clueless, and language features like templates and RAII make programmers feel like they are smart because in order to get to a point at which they could use these features to get anything done they had to spend a significant amount of time learning how they work. There seems to be a general confusion around the link between complexity and quality; just because something is hard to understand doesn't mean that you need to be super smart to use it or that it is even any good, and most of the time in fact these things aren't actually any good at all. The real clever people are the people who are able to recognise the importance and benefits of simplicity. Programming is not really that difficult; it's just that C++ and other modern languages have so many convoluted features which make the programmer's job harder. Most people don't even notice this and instead conclude that programming itself is difficult, when this is not the case at all. People who value simplicity are the people who will reliably produce the best code in almost any language you give them, because they only need the core constructs that almost every language provides.
      Note: at around 1600 words this text field is very buggy/slow to type in, just an example of a simple task which is being done poorly by a popular interpreted language.

  • @mybigbeak
    @mybigbeak 5 years ago +1

    34:47 perhaps, but can you add a generation number on a malloc? I think it's the generation number right at the type that makes this actually safe.

    • @jblow888
      @jblow888  5 років тому +3

      My point is that "safe" and "unsafe" are just statements about what the symptoms of a mistake might look like. They don't have much or any bearing on whether the mistake is actually made, and it's preventing the mistake that is the important thing.

    • @eddypdeb
      @eddypdeb 5 років тому +1

      @@jblow888 "safety" in Rust has a very clear definition and relates to memory safety - i.e. no use after free, double free, null dereferencing, and you are guaranteed you have no data races - it is not meant to address logical errors, such as the case where you implement your custom memory allocator, where, effectively, you correctly pointed you are the sole responsible to make sure the references are valid. If you do, all the problems appear again, but the strong type system and the borrow checker can help you implement a sane architecture.

  • @NeutronNoir
    @NeutronNoir 5 років тому +1

    Hmm, in this case if the entity is "freed" the Option in the Vec would be None instead of Some(blah blah blah), so the script or update function or whatever is looking for this entity would "see" that it's not there anymore. There would be no reuse of invalid data, I think.

    • @OMGclueless
      @OMGclueless 4 роки тому

      Once it's freed something else can come along and use that memory. So if I have a handle to it and expect to see Some(blah blah blah) but someone else has put Some(bloop bleep) there instead my program would then be incorrect (though still safe by Rust's semantics).

  • @alkeryn1700
    @alkeryn1700 4 роки тому

    you could have each entity keep a pointer to an array of pointers to the other entities that point to it, so when you delete it you can remove the pointers those entities hold.
    the cost would be effectively 64 bits + n * 64 bits, where n is the number of entities that have a pointer to it.
    and the computational cost would only be paid at delete time, since you only look at that array before removing an entity or when a new entity starts pointing to it.
    ofc you could use indexes instead of pointers, which might be more cache friendly, but then you'd have to account for them "shifting" if you pop something in the middle of your array, unless you use something like a btree or a linked list, but that'd ruin the caching.
    Well, it is always easier when you know exactly how much memory you are gonna use. Another possibility is entities using an offset (a size_t) to point to others; that way it is still a cache-friendly index, and the pointed-to entity won't change if you pop something that isn't in between the two entities (although deleting a vector element is fairly inefficient).
    You can also avoid resizing your vec altogether by removing an entity's contents without shrinking the vec and keeping a list of "free" spots in the vec.
    Anyway, there are a lot of different solutions, none of which are perfect; it all depends on your memory access pattern.
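The last option mentioned here, removing an entity's contents without shrinking the vec and keeping a list of free spots, is straightforward to sketch. The names below are invented for illustration. Note that without generation numbers, a reused slot will happily answer for an old index, which is exactly the staleness hazard discussed elsewhere in this thread.

```rust
// Stable-index storage: removing leaves a hole and records it on a free
// list, so other entities' indices never shift.
struct Slots<T> {
    data: Vec<Option<T>>,
    free: Vec<usize>, // indices of empty slots, used as a stack
}

impl<T> Slots<T> {
    fn new() -> Self { Slots { data: Vec::new(), free: Vec::new() } }

    fn insert(&mut self, value: T) -> usize {
        match self.free.pop() {
            Some(i) => { self.data[i] = Some(value); i }
            None => { self.data.push(Some(value)); self.data.len() - 1 }
        }
    }

    fn remove(&mut self, i: usize) -> Option<T> {
        let v = self.data.get_mut(i)?.take();
        if v.is_some() { self.free.push(i); }
        v
    }

    fn get(&self, i: usize) -> Option<&T> {
        self.data.get(i)?.as_ref()
    }
}

fn main() {
    let mut s = Slots::new();
    let a = s.insert("a");
    let b = s.insert("b");
    s.remove(a);
    let c = s.insert("c"); // fills the hole left by `a`
    assert_eq!(c, a);                 // index reused: an old `a` index is now stale
    assert_eq!(s.get(b), Some(&"b")); // `b` never moved
}
```

Pairing each slot with a generation counter is what turns this into the slotmap-style structure from the talk.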

    • @stysner4580
      @stysner4580 3 місяці тому +2

      Eliminating memory indirection is one of the major performance benefits of an ECS.

    • @alkeryn1700
      @alkeryn1700 3 місяці тому +1

      ​@@stysner4580 definitely, that's pretty much what I was talking about at the time, I think.
      the most performant structures may not be as dynamic; it depends how much you plan to move your data around too, I guess.
      if you constantly move data around, some data structures may be more efficient than a simple vector.

    • @stysner4580
      @stysner4580 3 місяці тому +1

      @alkeryn1700 true. My ECS implementation favors running the system over getting components by entity index, but that doesn't fit every case.

  • @yokljo
    @yokljo 5 років тому +32

    At 19:00 your description of a weak pointer's implementation and performance is quite incorrect.
    std::shared_ptr creates a tiny "control block" allocation that contains:
    - the actual pointer to your allocated data.
    - the "shared" count integer.
    - the "weak" count integer.
    If you make a std::weak_ptr from a std::shared_ptr, it basically makes a copy of the control block pointer and increments the weak count.
    When the last shared_ptr dies and the shared count drops to 0, it frees the actual pointer inside the control block, but leaves the control block alive because the weak count is still > 0.
    When you try to turn a weak_ptr into a shared_ptr (you can't access the data via weak_ptr, so it's hard to forget to check because you have to turn it into a shared_ptr first) it will check if the shared count on the control block is > 0 and give you a valid pointer, or if it's equal to 0 then it gives you a null shared_ptr.
    When a weak_ptr dies, it decrements the weak count, then if at that point both the shared and weak counts are equal to 0 then it will free the control block.
    I think the shared_ptr will often store a copy of the data pointer next to its control-block pointer, just so you don't have to dereference twice to get to the data.
    So really, they are quite efficient, and don't store huge amounts of data. std::shared_ptr has no clue where the weak pointers are, but it knows how many there are.
    Since they are still pointers, serialisation/deserialisation is still not great.
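Rust's `Rc`/`Weak` pair uses essentially the same strong-count/weak-count control-block scheme described above, which makes the semantics easy to demonstrate without any C++:

```rust
use std::rc::{Rc, Weak};

fn main() {
    let strong: Rc<String> = Rc::new("entity".to_string());
    let weak: Weak<String> = Rc::downgrade(&strong);

    // While a strong reference exists, upgrade() hands back a live Rc.
    assert!(weak.upgrade().is_some());

    // Dropping the last strong reference frees the value; the counts
    // (the "control block") live on until the last Weak is gone.
    drop(strong);
    assert!(weak.upgrade().is_none()); // you are forced to check before use
}
```

As with `std::weak_ptr::lock()`, there is no way to reach the data through the weak handle without going through the fallible upgrade step.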

    • @yokljo
      @yokljo 5 років тому +9

      Also, as other people have already said, the API she presents means that if you ask for a deallocated ID, the system returns an Option (which you mention at 47:30). In Rust it's impossible to get the T out of the Option without first checking that it isn't None, so you cannot accidentally use the data (stale, wrong data, as you said).
      She also recommends using the slotmap library:
      github.com/orlp/slotmap
      so you can use that "guaranteed" safe implementation, and don't actually have to implement any of this yourself.

    • @stanislavblinov8454
      @stanislavblinov8454 5 років тому +4

      It is correct that there isn't a need for any lists of weak references that Jon described. However, here's a few points about shared_ptr/weak_ptr:
      1) They aren't *necessarily* pointers. You could supply your own logic for "allocating" and "destroying" the data, so serialization could in fact be realized with them.
      2) You don't necessarily know if the control block and data are two separate memory locations, and as such, don't necessarily know if it's one or two dereferences to get to the data.
      3) There's an enforced shift in responsibility: it's the weak_ptr that has to deallocate (or destroy) the control block, which forces you to use somewhat global state, at the very least for control blocks. This precludes the usage of perhaps more efficient allocation. When you always use an indirect mapping via an ID, there's always one authority over the storage. For example, you might allocate all your data in the main thread using a fast thread-local allocator. You can't really do that with weak_ptrs.

    • @campzilla
      @campzilla 5 років тому +2

      This implementation of weak pointers for the entity-lookup case is basically a more allocation-heavy version of the entity hash / array lookup: each entity needs an extra allocation for the shared pointer's control block.

    • @jblow888
      @jblow888  5 років тому +12

      You are confusing a specific implementation of weak pointers for the general concept of weak pointers, which has been around since at least the 1960s. There are many different ways to implement weak pointers, obviously. What I discussed in the talk was the way of implementing them that seems to me to be the most direct and reasonable way to do it given the use case we were discussing. The way you're talking about sounds strictly worse.

    • @Mal_
      @Mal_ 5 років тому +4

      How exactly is this strictly worse than keeping a dynamic amount of pointers to weak pointers around and managing it?

  • @Trunks7j
    @Trunks7j 5 років тому +16

    Good discussion. Small clarification, I think you're using the term "Object oriented" to refer to inheritance specifically, when talking about what's problematic. You can certainly have object oriented design with no inheritance (via composition).

    • @espadrine
      @espadrine 3 роки тому

      Isn’t composition just C structures? Wouldn’t that definition make C object-oriented?

    • @Trunks7j
      @Trunks7j 3 роки тому

      @@espadrine Honestly composition and object-orientation are orthogonal but related concepts. Object-oriented really just means combining code and data, whereas composition is a design pattern meaning bottom-up assembly. In your example, C is still not object-oriented, because you're only "composing" structures at the data level, not at the object level (data+methods).

    • @espadrine
      @espadrine 3 роки тому +4

      @@Trunks7j Methods are just functions whose first parameter is implicitly "this", though.
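That equivalence is directly visible in Rust, where method-call syntax is just sugar for a plain function call with the receiver passed explicitly (names below are illustrative):

```rust
struct Counter { n: i32 }

impl Counter {
    // A method: the receiver is an explicit first parameter named `self`.
    fn bump(&mut self) { self.n += 1; }
}

// The same thing as a free function with an explicit "this" parameter.
fn bump(this: &mut Counter) { this.n += 1; }

fn main() {
    let mut c = Counter { n: 0 };
    c.bump();              // method syntax
    Counter::bump(&mut c); // the same method, called as a plain function
    bump(&mut c);          // the free-function version
    assert_eq!(c.n, 3);
}
```

The same desugaring is how "object-oriented C" is usually written: a struct plus functions that take a pointer to it as their first argument.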

    • @Trunks7j
      @Trunks7j 3 роки тому +5

      @@espadrine True true, you're right, you could write object-oriented C code.