Crust of Rust: std::collections

  • Published 27 Nov 2024

COMMENTS • 118

  • @karljohannisson
    @karljohannisson 8 months ago +7

    Dude this video is so good. It's Rust, but these concepts feel deeper and applicable to many languages. Thanks for sharing your knowledge to the world.

  • @wesleychaffin4029
    @wesleychaffin4029 1 year ago +31

    The best Rust channel on YouTube strikes again 😄

  • @lordmelbury7174
    @lordmelbury7174 10 months ago +14

    "I think I wrote part of this"
    ~ Legend!

  • @jdiehl2236
    @jdiehl2236 1 year ago +15

    Nice haircut, Jon! Always stoked to see you've released new Rust content.

  • @driedurchin
    @driedurchin 1 year ago +44

    Would love a Crust of Rust on alternate allocators, when you might actually need one (i.e. you benchmarked and determined the default allocator is insufficient), and how you would employ them in practice.

  • @joffreybluthe7906
    @joffreybluthe7906 1 year ago +5

    Thoroughly enjoyed this video! Your videos are usually too advanced for me but this was perfect for where I'm at :)

  • @zanisxeroxhou9900
    @zanisxeroxhou9900 1 year ago +143

    Still waiting for you to decrust Tokio, Rayon, Axum, rust-gpu, or wgpu!

  • @VivekYadav-ds8oz
    @VivekYadav-ds8oz 1 year ago +6

    Never seen a binary heap explained as a glass with different densities of liquids. Pretty brilliant if you ask me.

  • @drip2245
    @drip2245 1 year ago +3

    Another bing bong banger from your boy Jon. Can't wait to fully consume this piece of content.

  • @yamacgulfidanalumni6286
    @yamacgulfidanalumni6286 8 months ago

    your Rust videos are unmatched

  • @Kornosol
    @Kornosol 1 year ago +2

    Best series on Rust so far

  • @lorink5937
    @lorink5937 1 year ago +3

    note to self:
    Vec has retain for filtering in place. It always allocates on the heap (unlike SmallVec). It isn't an Iterator itself because it carries no index state; you go through iter()/into_iter().
    A set is just a map with empty values (HashSet<T> is a thin wrapper over HashMap<T, ()>). The stdlib HashMap is DoS-resistant because it seeds its hasher with OS-provided randomness. On a collision it usually does linear probing, i.e. it puts the entry into another bucket by hashing again.
    A B-tree is like a binary tree, but with a sorted list of entries in each node.
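
    A quick sketch of the retain and set-as-a-map points above; everything here is documented std behaviour (HashSet<T> really is a wrapper around HashMap<T, ()>).

    ```rust
    use std::collections::HashSet;

    fn main() {
        // retain: filter a Vec in place, keeping only the even numbers.
        let mut v = vec![1, 2, 3, 4, 5, 6];
        v.retain(|x| x % 2 == 0);
        assert_eq!(v, [2, 4, 6]);

        // A set is a map whose values carry no information.
        let mut seen = HashSet::new();
        assert!(seen.insert("a"));  // true: newly inserted
        assert!(!seen.insert("a")); // false: already present
    }
    ```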

  • @papydoctor
    @papydoctor 1 year ago +3

    About the VecDeque collection: a nice feature would be to reserve a power-of-two capacity so the underlying indexing can use bitwise AND instead of the mod operator. :)
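
    A tiny illustration of the indexing trick being suggested: with a power-of-two capacity, reducing a logical index modulo the capacity is the same as masking with capacity - 1.

    ```rust
    fn main() {
        let cap: usize = 8; // power of two
        let mask = cap - 1;
        for logical in 0..20usize {
            // wrapping an index into the ring buffer: % and & agree when cap is a power of two
            assert_eq!(logical % cap, logical & mask);
        }
    }
    ```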

  • @darthmoomoo
    @darthmoomoo 1 year ago +3

    The Foundation (R)(TM) is going to have an aneurysm when they discover the name of Jon's series. Jk (or maybe not).
    Anyway, your videos are always comprehensive. Love it!

  • @KohuGaly
    @KohuGaly 1 year ago +5

    There is in fact a heap analog to radix sort. It's called a radix heap and it's amazing. Basically, you store a chain of Vecs, where the Nth Vec holds items that differ from the last popped item in the first N digits. It only works if the inserted priorities are always larger than the last popped priority, but the performance boost is quite substantial.

  • @SimonBuchanNz
    @SimonBuchanNz 1 year ago +27

    One note is that the implementation of vec![] that just pushes is only a book example; the actual implementation is basically just Box::new([$items]).into_vec().
    This lets the compiler place the item construction directly into the allocated memory (though that's unfortunately not guaranteed, and currently doesn't happen in debug builds), and the into_vec call just sets the capacity equal to the array length.
    Also, IIRC my CS lecturer said deque is pronounced "deck", but he was German, so who knows!
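
    A minimal sketch of that shape using only stable std: an array boxed as a slice converts into a Vec without copying, and the capacity ends up equal to the length.

    ```rust
    fn main() {
        // Roughly the shape vec![1, 2, 3] expands to: construct the array in one heap
        // allocation, then reuse that allocation as the Vec's buffer.
        let boxed: Box<[i32]> = Box::new([1, 2, 3]);
        let v: Vec<i32> = boxed.into_vec();
        assert_eq!(v, [1, 2, 3]);
        assert_eq!(v.capacity(), 3);
    }
    ```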

    • @casperes0912
      @casperes0912 1 year ago

      I definitely say de-queue for that one, but I don't really know

    • @carterthaxton
      @carterthaxton 1 year ago +7

      I’ve mostly heard “deck” in various industries, and in academic circles. I’m from the US.

  • @CyborusYT
    @CyborusYT 1 year ago +57

    *Vec::new does not allocate.* It has a capacity of 0 and a dangling pointer; the minimum size doesn't kick in until you push an item. If it did allocate, it couldn't be const.

    • @jonhoo
      @jonhoo 1 year ago +40

      Yep, hence the name "min_non_zero_size" for the constant in the code. You're right I should probably have called that out explicitly though - it's an important thing to be aware of!

    • @tialaramex
      @tialaramex 1 year ago +13

      Also, while we're on the topic: HashMap::new() doesn't allocate either. It too has capacity 0 even though it's not const, because it has a pointer inside it, pointing to an empty control structure, which would not be legal in a const context as I understand it. Also, even when the HashMap grows, it doesn't jump to 64 buckets: it starts with just 4 buckets (enough space for 3 key-value pairs), then 8 (enough for 7), then 16, 32, 64, now at 87.5% maximum occupancy. So, for example, if you ask for HashMap::with_capacity(60) you get a HashMap with 128 buckets and a max capacity of 112 items, as the next smallest size would be over-crowded once 60 items are added.

    • @CyborusYT
      @CyborusYT 1 year ago +2

      @@tialaramex I checked out the code, and it looks like HashMap::new isn't const because it needs the `Default` impl on its hasher and allocator. It calls hashbrown::HashMap::with_hasher_in, which is const itself, but is called with `Default::default()` args.
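
    A quick check of the "no allocation until the first insert" claims in this thread, using only documented std methods:

    ```rust
    use std::collections::HashMap;

    fn main() {
        let v: Vec<u64> = Vec::new();
        assert_eq!(v.capacity(), 0); // no heap allocation has happened yet

        let m: HashMap<u64, u64> = HashMap::new();
        assert_eq!(m.capacity(), 0); // likewise: buckets appear on the first insert

        // with_capacity rounds up so the requested number of items fits under the load factor
        let m2: HashMap<u64, u64> = HashMap::with_capacity(60);
        assert!(m2.capacity() >= 60);
    }
    ```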

  • @act0r399
    @act0r399 1 year ago +19

    It would be very nice, Jon, if you reviewed Tokio and other async programming stuff in Rust. The last video was 3 years ago and many things have changed since then, so it would be nice.

  • @juchemz
    @juchemz 1 year ago +2

    1:33:31 The std version of HashMap (hashbrown) actually does not shuffle entries around like you suggest. When an entry is removed, the slot is filled with a so-called "tombstone" value to indicate that something was there in the past.

  • @maygodguideustohaven
    @maygodguideustohaven 11 months ago +2

    This video is soo good

  • @Dygear
    @Dygear 1 year ago +3

    This was excellent; I learned a lot in a very short time. Realized I didn't need the bytes crate's `.put` method to extend a Vec with a Vec; I can just use `.extend`.

  • @1vader
    @1vader 1 year ago +18

    28:00 Actually, the lifetime is also required because custom allocators can be non-'static, in which case leak would only be allowed to leak for the lifetime of the allocator.

    • @homelikebrick42
      @homelikebrick42 1 year ago +2

      Also, another reason to make it generic is to support cases where T cannot live for 'static.
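
    A small sketch of that second point, using Vec::leak from std: when the element type borrows something shorter-lived, the leaked reference's lifetime is capped accordingly rather than being 'static.

    ```rust
    fn main() {
        let s = String::from("hello");
        // T = &str borrowed from `s`, so the leaked slice cannot outlive `s` (it is not &'static).
        let v: Vec<&str> = vec![s.as_str()];
        let leaked: &mut [&str] = v.leak();
        assert_eq!(leaked[0], "hello");
    }
    ```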

  • @jestarray
    @jestarray 1 year ago +1

    I didn't know about HashMap's entry API, so thanks!
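
    For readers who also hadn't met it, a minimal sketch of the entry API (plain std, not code from the video):

    ```rust
    use std::collections::HashMap;

    fn main() {
        let words = ["apple", "banana", "apple"];
        let mut counts: HashMap<&str, u32> = HashMap::new();
        for w in words {
            // entry() hands back the slot whether or not the key exists yet,
            // so each word is hashed and looked up only once.
            *counts.entry(w).or_insert(0) += 1;
        }
        assert_eq!(counts["apple"], 2);
    }
    ```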

  • @Araldo123
    @Araldo123 1 year ago +7

    Your representation of the B-tree with B=4 is not entirely correct. There should be 5 blocks of 4 entries on the second level, each one fitting in between the entries of the root. That way the number of layers L required doesn't grow as fast (I think it can contain about (1+B)^L entries).

    • @Araldo123
      @Araldo123 1 year ago +7

      I got it wrong too, B=4 apparently means 3 entries per block, with 4 blocks on the 2nd layer, 16 on the 3rd etc. So about B^L capacity.

    • @VivekYadav-ds8oz
      @VivekYadav-ds8oz 1 year ago +1

      Also, the number of pointers he described isn't entirely accurate: there's a pointer to a child block (which may be null) between every pair of entries, so we're not really saving space in that sense.
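
    A rough back-of-the-envelope check of the capacity being discussed, assuming the textbook definition where each node holds up to B-1 keys and B children:

    ```rust
    fn main() {
        // (B-1) * (1 + B + B^2 + ... + B^(L-1)) = B^L - 1 keys across L levels,
        // so the depth of the tree grows like log_B(n).
        fn max_keys(b: u64, levels: u32) -> u64 {
            (0..levels).map(|l| (b - 1) * b.pow(l)).sum()
        }
        assert_eq!(max_keys(4, 3), 4u64.pow(3) - 1); // 63 keys in 3 levels when B = 4
    }
    ```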

  • @jocketf3083
    @jocketf3083 1 year ago +2

    Crust of Rust is Awesome!

    • @jonhoo
      @jonhoo 1 year ago

      Wow, thank you!

  • @jakesnell7707
    @jakesnell7707 1 year ago +2

    this is epic!

  • @marcoherrera-rendon9533
    @marcoherrera-rendon9533 7 months ago +2

    55:36 Jon accidentally teaching us about uterus anatomy 😂

  • @mahmoudel-hawawshy5198
    @mahmoudel-hawawshy5198 1 year ago +11

    Could you please do a stream where you implement some of these data structures (or possibly others)? I think that's a really hard thing to do in Rust.
    It would be really appreciated if you could do that.

  • @brandonkauffman7256
    @brandonkauffman7256 1 year ago +2

    We need to find a way to get your videos to ship with Clippy. This was incredible. Looking forward to your previous videos.

  • @АнатолийБугаков-е9г

    Very useful lesson, thanks! For BTreeMap you start with index 7. In reality, though, 7 wouldn't be enough for an item to go "to the left". How does Rust determine the right value for "7" to make sure enough elements can go to the left?

  • @DrIngo1980
    @DrIngo1980 1 year ago +2

    @jonhoo At 1:54:55 you say that floating point numbers are hashable but not orderable. I get the hashable bit, but why would floating point numbers not be orderable? I understand that not every floating point number can be represented in binary (I think IEEE 754 covers this if I remember correctly), but the "not orderable" part doesn't make immediate sense to me. Could you point me to a good explanation, please (or explain it here briefly)? Thank you. Oh, and great video! Learned a bunch. Nicely done.

    • @SimonBuchanNz
      @SimonBuchanNz 1 year ago +3

      From the f32 docs:
      "NaN has some potentially unexpected behavior:
      * It is unequal to any float, including itself! This is the reason f32 doesn’t implement the Eq trait.
      * It is also neither smaller nor greater than any float, making it impossible to sort by the default comparison operation, which is the reason f32 doesn’t implement the Ord trait.
      ..."
      You can use the total_cmp() method in most contexts (e.g. .max_by(f32::total_cmp) instead of just .max()) if you're ok with the IEEE-specified total order, but that does things like consider the sign of NaNs and of 0, which can be surprising if you're not familiar with IEEE float details!
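
    A small sketch of that workaround; f32::total_cmp is stable std, and this particular NaN sorts after all the numbers because it is a positive NaN under the IEEE 754 total order.

    ```rust
    fn main() {
        let mut xs = vec![2.5_f32, f32::NAN, -1.0, 0.0];
        // xs.sort() doesn't compile because f32 is not Ord (NaN != NaN),
        // but total_cmp supplies a total order.
        xs.sort_by(f32::total_cmp);
        assert_eq!(xs[0], -1.0);
        assert_eq!(xs[2], 2.5);
        assert!(xs[3].is_nan());
    }
    ```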

  • @shanemarchan658
    @shanemarchan658 8 months ago

    very informative.

  • @jodgey4
    @jodgey4 7 months ago

    I wish that the std vector used a growth rate different from 2, as I think a few papers showed that something closer to 1.5 was more efficient in most cases.

  • @Baptistetriple0
    @Baptistetriple0 8 months ago

    At 27:16 the lifetime of leak is 'a because it has to match the lifetime of the allocator. Allocators may be unstable, but you could make a stack allocator, so every collection allocating from it would be tied to the lifetime of that allocator, and if you leak the collection you don't want the leaked ref to outlive that allocator.

  • @gustavwengel2776
    @gustavwengel2776 1 year ago +1

    Really great video. Though isn't your description of linear probing closer to double hashing than traditional linear probing, which just looks in the next bucket?

    • @jonhoo
      @jonhoo 1 year ago +1

      You could be right. My impression is that what people mean when they say linear probing isn't entirely well defined, but that could be ignorance on my part!

    • @gustavwengel2776
      @gustavwengel2776 1 year ago +2

      @@jonhoo It seems to me, based on a glance at Wikipedia (en.wikipedia.org/wiki/Linear_probing), that it's essentially just taking the "next" bucket in the list until you get a free one (whereas quadratic probing, e.g., jumps multiple slots). I'm not sure if there are multiple interpretations though. Great video either way!
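
    For reference, a toy sketch of the classic definition discussed here: on a collision, linear probing just tries the next bucket (wrapping around) rather than hashing again. Purely illustrative, not what std/hashbrown does internally.

    ```rust
    fn main() {
        // The probe sequence starting from a key's home bucket.
        fn probe_sequence(home: usize, n_buckets: usize) -> impl Iterator<Item = usize> {
            (0..n_buckets).map(move |i| (home + i) % n_buckets)
        }

        let probes: Vec<usize> = probe_sequence(6, 8).collect();
        assert_eq!(probes, [6, 7, 0, 1, 2, 3, 4, 5]);
    }
    ```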

  • @antifa_communist
    @antifa_communist 3 months ago

    Can you do a video on stdin, stdout and the terminal? Like looking at the std::io::Read and Write traits, core::fmt::Formatter and core::fmt::Display. Maybe also more advanced things like coloured text and backgrounds, and how you can respond to input to make interactive things in the terminal.

  • @mysticrustacean4065
    @mysticrustacean4065 1 year ago +4

    I had a date today. My girlfriend told me: either Jon Gjengset's Crust of Rust or her. Now I am a proud Rustacean following Crust of Rust, and of course SINGLE.

  • @meanmole3212
    @meanmole3212 6 months ago

    What does moving a SmallVec over function boundaries mean, like capturing it in closures? Or does it mean that if I pass a SmallVec to functions as a reference, the whole thing will be copied even though it is a reference? 26:46
    Edit: Okay, I think you pretty much answered that later.

  • @RoloEdits
    @RoloEdits 1 year ago +2

    What's the syntax in the type parameters of a Vec where 'A = Global'? I understand Global refers to the global allocator, but what's the = part? I have seen it before with PhantomData as well, but I'd love to know more about what it means. It's also a hard question to Google haha.
    Edit: you can also see the same kind of thing with HashMap's S = RandomState.

    • @homelikebrick42
      @homelikebrick42 1 year ago +5

      It means that Global is the default type for A if you don't specify it.
      It's the reason you can write Vec<T> without specifying the allocator: the default Global allocator will be used.

    • @RoloEdits
      @RoloEdits 1 year ago +1

      @@homelikebrick42 Thanks for answering. I remember this from the book, but I remember the example being in a trait, so I didn't connect it to a type. A lot of things just clicked.
      Edit: by that I mean I thought it was some trait syntax, like with associated types, and that the context held a different meaning outside of it.

    • @vzwGrey
      @vzwGrey 1 year ago +2

      `A = Global` simply means the type parameter `A` will default to `Global`. So if you create a vector `Vec<T>` without explicitly specifying what `A` should be, then `A` will default to `Global`, i.e. `Vec<T>` is the same as `Vec<T, Global>`.
      The same can be done for const generics. For example `struct MyArray<const N: usize = 10> {...}` means `N` will default to `10` if not specified otherwise.
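
    A compilable sketch of both forms described above; MyVec and MyArray are made-up illustrative types, not anything from std.

    ```rust
    use std::marker::PhantomData;

    // Type-parameter default, mirroring std's `Vec<T, A = Global>` and `HashMap<K, V, S = RandomState>`.
    struct MyVec<T, A = ()> {
        items: Vec<T>,
        _alloc: PhantomData<A>,
    }

    // Const-generic default.
    struct MyArray<const N: usize = 10> {
        data: [u8; N],
    }

    fn main() {
        let _v: MyVec<u32> = MyVec { items: vec![1, 2, 3], _alloc: PhantomData };      // A defaults to ()
        let _w: MyVec<u32, String> = MyVec { items: Vec::new(), _alloc: PhantomData }; // A overridden
        let _a: MyArray = MyArray { data: [0; 10] };                                   // N defaults to 10
        let _b: MyArray<4> = MyArray { data: [0; 4] };
    }
    ```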

  • @ovidiu_nl
    @ovidiu_nl 1 year ago +4

    Why can't you efficiently pop from the start of a Vec by simply incrementing the pointer to the heap and decrementing both length and capacity?

    • @andreapiseri8521
      @andreapiseri8521 1 year ago +11

      you could, but then the pointer would not correspond to the one you originally allocated. if you wanted to free or realloc the memory, you would need to retrieve the original pointer.
      One way you could do this would be to store an additional field, let's call it "offset". When freeing or reallocating, you can decrement the pointer by offset to get back the original one.
      this could introduce some weirdness with the amortized asymptotic cost of the operations (the standard vector operations guarantee that the amortized cost for each push/pop operation in a sequence is O(1)). For example, if you decide that on reallocation you should set "offset" to 0 and not leave space for freed elements, then you might have the following sequence of operations:
      1) you pop from the front
      2) you push to the back
      3) you push to the front
      you might assume (3) to not allocate, since you just popped from the front and freed up a space; however, pushing to the back could have just reallocated, and you've now lost that space.
      if instead you don't reset offset to 0, then the sequence:
      1) push to back
      2) pop from front
      3) repeat
      will result in a vector of constantly bounded length that holds an infinitely increasing amount of memory, and you definitely don't want that.

    • @ovidiu_nl
      @ovidiu_nl 1 year ago

      @@andreapiseri8521 Thanks for the answer! Couldn't you have the Vec be responsible only for dropping its remaining capacity and have the popped value be wrapped in a Box that would be responsible for dropping that location?

    • @andreapiseri8521
      @andreapiseri8521 1 year ago

      The first solution might not look bad on paper, but it actually still doubles the upper bound on a Vec's memory overhead, since:
      the standard implementation of doubling the capacity on each allocation guarantees that the memory allocated by a Vec is never more than double the highest length the Vec has held in the past.
      If your allocation strategy for the new vec is the same, then say you have 2^k + 1 elements in your vec; that is, you just reallocated and have offset=0, capacity=2^(k+1).
      You can pop from the front 2^k times, then push to the back another 2^k times: the maximum length the vector ever reached is 2^k + 1, yet you now have to reallocate (since the last element falls at position offset + len = 2^(k+1) + 1), which means you will double the capacity to 2^(k+2), which is four times the max length your vector ever reached.
      if you try to be clever and figure out that you can reallocate just 2^(k+1) again, then you will run into the following problem:
      you have len=2^k, cap=2^k, offset=0.
      1) pop front
      2) push back
      repeat
      each push back will reallocate, but it will keep the original capacity because len = 2^k, repeating the cycle.

    • @andreapiseri8521
      @andreapiseri8521 1 year ago

      @@ovidiu_nl Unfortunately it's not that easy, at least not without having control over the allocator's architecture.
      Allocators generally give out memory in predetermined sizes from different pools; splitting the allocation into a "box" part and a "rest" part would break this strategy. Additionally, heap allocators in particular sometimes store metadata next to the actual allocated data, which you couldn't do for the "rest" of the Vec because the "box" is in the way.

    • @andreapiseri8521
      @andreapiseri8521 1 year ago +1

      Finally, you would need to notify the allocator that you've performed this split, which (even if you had an interface for it) could incur a pretty big runtime cost.
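
    A tiny sketch of the constraint underlying this thread: the allocator must be handed back exactly the pointer and layout it originally gave out, which is why a Vec can't just bump its data pointer forward when popping from the front. The commented-out lines show the move that would be illegal.

    ```rust
    use std::alloc::{alloc, dealloc, Layout};

    fn main() {
        unsafe {
            let layout = Layout::array::<u32>(8).unwrap();
            let ptr = alloc(layout);
            assert!(!ptr.is_null());

            // let shifted = ptr.add(4 * std::mem::size_of::<u32>());
            // dealloc(shifted, layout); // undefined behavior: not the pointer alloc() returned

            dealloc(ptr, layout); // must free with the original pointer and layout
        }
    }
    ```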

  • @sachinmalik6109
    @sachinmalik6109 1 year ago +1

    Could you do a walk through of the reqwest crate please?

  • @fionawashere1086
    @fionawashere1086 1 year ago +2

    "Maybe an hour and a half..." :D
    Goes on to record for 2:45:03.
    Love it!

  • @Bwallker
    @Bwallker 1 year ago +1

    2:00:00 I don't understand why only the method that returns a mutable reference to the key is unsafe. Shouldn't the ones that return shared references to the key be unsafe too? If the key is a Cell, for example, I could call key.set to change the value from, e.g., 1 to 1 billion and break the invariants of BTreeMap.
    Edit: Changed the example from atomics to Cell because apparently atomics aren't Ord.

    • @valshaped
      @valshaped 1 year ago +3

      "It is a logic error for a key to be modified in such a way that the key’s hash, as determined by the Hash trait, or its equality, as determined by the Eq trait, changes while it is in the map. This is normally only possible through Cell, RefCell, global state, I/O, or unsafe code. The behavior resulting from such a logic error is not specified, but will be encapsulated to the HashMap that observed the logic error and not result in undefined behavior. This could include panics, incorrect results, aborts, memory leaks, and non-termination."
      -The Rust Standard Library, std::collections::HashMap
      So you're allowed to do it, and it's not unsafe, (in that it's not allowed to produce undefined behavior,) but it's an unforced error on your part and it *will* make your program very sad.

  • @rainerwahnsinn3262
    @rainerwahnsinn3262 1 year ago

    How does a `HashMap` manage memory allocation efficiently? Wouldn't random keys give random hashes, resulting in a very sparsely populated underlying `Vec` because the hashes are all over the place? For example, your diagram showed the underlying Vec with some key-value pair at index 7, while the other indices were empty.

    • @rainerwahnsinn3262
      @rainerwahnsinn3262 1 year ago +1

      Ah, it seems the generated hashes are reduced to the length of the underlying `Vec` somehow. There's a comment about it around 1:27:44.
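
    A sketch of that reduction step; bucket_index is a made-up helper, not std's internal scheme, but it shows why arbitrary 64-bit hashes still land densely in a small table.

    ```rust
    use std::collections::hash_map::RandomState;
    use std::hash::{BuildHasher, Hash, Hasher};

    fn main() {
        fn bucket_index<K: Hash>(key: &K, n_buckets: usize, state: &RandomState) -> usize {
            let mut hasher = state.build_hasher();
            key.hash(&mut hasher);
            // Reduce the full 64-bit hash into the small bucket array.
            (hasher.finish() as usize) % n_buckets
        }

        let state = RandomState::new();
        for key in ["a", "b", "c", "a much longer key"] {
            assert!(bucket_index(&key, 16, &state) < 16);
        }
    }
    ```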

  • @nicolaslopeza2251
    @nicolaslopeza2251 1 year ago +1

    Can someone explain a little more what he means by rotate at minute 47:48? I don't get why it's better on VecDeque than on Vec.

    • @phil6758
      @phil6758 1 year ago +2

      I'm new to Rust, but what I gathered is that you don't have to shift all the elements in a VecDeque: just pop from the front, push to the back, and update the start and end positions, whereas with a Vec you'd have to shift every single element left and then push the item on the end.

    • @nicolaslopeza2251
      @nicolaslopeza2251 1 year ago

      @@phil6758 I think I see it much clearer now. Thanks
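
    A minimal sketch of the rotate-by-one case: on a VecDeque it's a pop_front plus push_back, which moves one element and bumps the ring-buffer indices, while the Vec equivalent (remove(0) then push) shifts every remaining element.

    ```rust
    use std::collections::VecDeque;

    fn main() {
        let mut dq: VecDeque<u32> = (1..=5).collect();
        if let Some(front) = dq.pop_front() {
            dq.push_back(front);
        }
        assert_eq!(dq.iter().copied().collect::<Vec<_>>(), vec![2, 3, 4, 5, 1]);

        // VecDeque also exposes this directly:
        dq.rotate_left(1);
        assert_eq!(dq.iter().copied().collect::<Vec<_>>(), vec![3, 4, 5, 1, 2]);
    }
    ```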

  • @rainerwahnsinn3262
    @rainerwahnsinn3262 1 year ago

    Why doesn't `VecDeque` use contiguous memory without wrapping around, like a `Vec` that grows in both directions? It seems it would offer the best of both worlds: queue and stack semantics while still coercing to a slice. I don't understand why all the memory-wrapping juggling is necessary. Just for a bit of memory efficiency? It seems like a large tradeoff to throw away the benefits of a slice.

  • @darkpikachu_.
    @darkpikachu_. 1 year ago +2

    great vid, will make sure to watch all of it👍😇🫡

  • @mcpecommander5327
    @mcpecommander5327 1 year ago

    Couldn’t you manually deallocate a leaked box/vec by turning the ref into a pointer and calling from_raw before dropping?
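
    Yes for the Box case; the Box::leak documentation describes exactly this round trip. A minimal sketch (the safety comment states the obligation you take on):

    ```rust
    fn main() {
        let leaked: &'static mut String = Box::leak(Box::new(String::from("hi")));
        leaked.push_str(" there");

        // SAFETY: the pointer came from Box::leak and the leaked reference is never used again.
        let reclaimed: Box<String> = unsafe { Box::from_raw(leaked as *mut String) };
        assert_eq!(*reclaimed, "hi there");
        // dropping `reclaimed` frees the allocation normally
    }
    ```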

  • @simonfarre4907
    @simonfarre4907 1 year ago +1

    Does Vec reallocate at 2x the memory? I think there are plenty of studies showing that 1.5x is usually best, as it gives the possibility of memory reuse on reallocations, but maybe that's not how Rust's Vec does things?

    • @jonhoo
      @jonhoo 1 year ago +2

      github.com/rust-lang/rust/issues/111307

    • @simonfarre4907
      @simonfarre4907 1 year ago

      @@jonhoo and Spicy - thanks!

    • @mk72v2oq
      @mk72v2oq 1 year ago +1

      I don't know what studies you are referring to, but 2x is the best way to grow a vector, because amortized analysis proves this mathematically.
      Other values like 1.5x are simply quirks that try to exploit implementation details of particular hardware/architectures. But if you need the true generic best overall, it is 2x.

    • @simonfarre4907
      @simonfarre4907 1 year ago

      @@mk72v2oq I'm referring to the major vec implementations at Facebook and Google, for instance, but there have been many, many others.
      The claim that 1.5 is due to hardware quirks is wrong afaict, because you can actually prove mathematically that with 2x the freed memory never gets reused (leading to fragmentation) in a lot of memory allocators: the allocation keeps creeping forward, and the previously freed blocks can never be combined to fit the next allocation. So I don't know what you mean by platform-specific quirks. Simply shown by:
      1 + 2^1 + 2^2 + ... + 2^n = 2^(n+1) - 1. This is of course assuming that realloc is not being used, which was the case for vector in C++ because of how the standard defines non-relocatable values. If I'm not mistaken, Rust does the same? So I think you're absolutely wrong about it being "abuse of hardware".
      Besides, if the cost of actual memory allocation is the issue, a growth factor will not necessarily solve your problems; a better domain analysis will, where you know up front how much memory you need to allocate, making it a one-time cost.
      Java used 1.5 for ArrayList, and MSVC's std::vector used to have it; I am unaware of any changes they have made.
      In the end though, I'm sure 2x is just fine for all intents and purposes. But it is an issue I have with Rust more broadly: memory allocation control is much more difficult to customize, for obvious reasons. It is a tradeoff inherent to the design of the language.

    • @mk72v2oq
      @mk72v2oq 1 year ago

      @@simonfarre4907 You are making assumptions about allocators. That is an implementation detail.
      Amortized analysis is theoretical, like big-O notation; it does not make assumptions about memory and allocator implementations. I.e. in the real world you can totally construct an example where a linear O(n) lookup is faster than O(1), but that does not mean O(n) is faster than O(1) overall.
      Companies know their hardware and systems, so they do make assumptions. But that is far from the generic case.
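
    For what it's worth, the sum used in the exchange above checks out numerically; this is just the arithmetic, not a claim about any particular allocator.

    ```rust
    fn main() {
        // With growth factor 2, all previously freed blocks sum to one element less than
        // the next request, so a simple non-moving allocator can never reuse them for it:
        // 1 + 2 + 4 + ... + 2^n = 2^(n+1) - 1.
        let n = 10u32;
        let freed_total: u64 = (0..=n).map(|i| 1u64 << i).sum();
        let next_request = 1u64 << (n + 1);
        assert_eq!(freed_total, next_request - 1);
    }
    ```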

  • @MrTrollland
    @MrTrollland 1 year ago

    Can't they make a pointer lookup table for linked lists, so that element access is constant-time?

  • @Ruakus
    @Ruakus 7 months ago

    What would happen if linear probing results in a cycle? I.e. Hash(Hash(key) + key) = Hash(key).

  • @nicolaiunrein
    @nicolaiunrein 1 year ago +3

    At 1:06:43 you say that a Set is faster than a Vec for set operations. I found that most often, iterating the Vec and using something like find or position is faster up until a very large number of elements, because modern CPUs can optimise these kinds of operations really well. So if you need maximum performance, benchmarking with realistic payloads can be very helpful.

    • @spicybaguette7706
      @spicybaguette7706 1 year ago +4

      TBH you should probably always do benchmarking if you care about performance

    • @nicolaiunrein
      @nicolaiunrein 1 year ago

      Well the point I was trying to make is that you should not rely solely on complexity when it comes to real life code performance. Even though a HashMap lookup is O(1) and finding an element in a Vec is O(n), the latter will be faster more often than one might think.

    • @1vader
      @1vader 1 year ago +1

      Comparing the std hasher or even when you use a fast hasher? Also, what do you consider to be a "very large number"? It's probably true for larger numbers than most people would think but pretty sure I've tested this on some Advent of Code problems in the past and got better results with a fast hash set. I guess more complex objects probably also favor hashing whereas a Vec will remain faster for longer if you're just storing ints. Also, you made a typo in your last comment. Finding an element in a Vec is O(n).
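
    In the spirit of the "measure it" advice in this thread, a crude timing sketch (std only, no benchmark harness); the absolute numbers mean nothing outside your own data and hasher, which is the point.

    ```rust
    use std::collections::HashSet;
    use std::time::Instant;

    fn main() {
        let items: Vec<u64> = (0..64).collect();
        let set: HashSet<u64> = items.iter().copied().collect();

        let t = Instant::now();
        let hits_vec = (0..1_000_000u64).filter(|x| items.contains(&(x % 128))).count();
        let vec_time = t.elapsed();

        let t = Instant::now();
        let hits_set = (0..1_000_000u64).filter(|x| set.contains(&(x % 128))).count();
        let set_time = t.elapsed();

        assert_eq!(hits_vec, hits_set);
        println!("linear scan: {vec_time:?}, hash set: {set_time:?}");
    }
    ```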

  • @stephenhowe4107
    @stephenhowe4107 1 year ago

    @2:45 - For VecDeque, what does empty look like? Surely start and end are the same?
    And so if full, it cannot be that start and end are the same, because empty and full would have the same state.

    • @jonhoo
      @jonhoo 1 year ago +1

      I think technically they store head + length, which doesn't have the same problem.

    • @VivekYadav-ds8oz
      @VivekYadav-ds8oz 1 year ago +1

      Well, when full, start and end are off by one, not the same. However, what you're describing is a problem if there's just one element: then start and end would point to the same element. So yes, if you're storing start and end separately you do need to explicitly check for the "empty" state. In our CS classes we used to do it by assigning start = end = -1, or you could say (None, None) in Rustic terms.

    • @stephenhowe4107
      @stephenhowe4107 1 year ago

      @@VivekYadav-ds8oz _Well, when full, start and end are off by one, not same_
      That is what I think. Fullness is basically when end +1 == start (allowing for circular buffer wrap around).
      _However, what you're describing is a problem if there's just one element. Then start and end would point to the same element_
      No. If start and end are the same => that is empty
      If start + 1 == end, that is 1 element in the buffer (allowing for circular buffer wrap around).
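
    A sketch of the head-plus-length representation Jon mentions (field names made up here); storing the length instead of an end index removes the empty-vs-full ambiguity discussed in this thread.

    ```rust
    struct Ring {
        head: usize, // physical index of the first element
        len: usize,  // number of elements currently stored
        cap: usize,  // size of the backing buffer
    }

    impl Ring {
        fn is_empty(&self) -> bool { self.len == 0 }
        fn is_full(&self) -> bool { self.len == self.cap }
        fn physical_index(&self, logical: usize) -> usize {
            (self.head + logical) % self.cap
        }
    }

    fn main() {
        let r = Ring { head: 6, len: 3, cap: 8 };
        assert_eq!(r.physical_index(2), 0); // wraps past the end of the buffer
        assert!(!r.is_empty() && !r.is_full());
    }
    ```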

  • @meanmole3212
    @meanmole3212 6 months ago

    How slow is binary heap's retain?

  • @MrScientistguy
    @MrScientistguy 1 year ago +2

    Can you do a stream where you go more into the Haskell-style abstractions that you mentioned in the Result/Option section (Monad, Applicative, Functor, etc.)?

  • @adityamwagh
    @adityamwagh 1 year ago +1

    Could you please share your Firefox config file?

    • @jonhoo
      @jonhoo 1 year ago +2

      github.com/jonhoo/configs/blob/master/gui/.mozilla/firefox/chrome/userChrome.css

    • @adityamwagh
      @adityamwagh 1 year ago

      Thank you!

  • @larry_the
    @larry_the 1 year ago

    What is the whiteboard program?

  • @applePrincess
    @applePrincess 1 year ago +1

    34:40 now they know

  • @ACIB708
    @ACIB708 1 year ago

    Please do a Crust of Rust on egui + eframe + rerun!!!

  • @lachidong7818
    @lachidong7818 10 months ago

    yah its been WILL

  • @antifa_communist
    @antifa_communist 2 months ago

    Decrust ratatui please

  • @julians.2597
    @julians.2597 8 months ago

    OrderedSet is quite nonsensical, but oh well; I wouldn't expose the ordering.

  • @jonnyso1
    @jonnyso1 4 months ago

    As you can see, the real reason a HashMap is not really O(1) is that Jon can't draw a straight line.

  • @dronestrikejr
    @dronestrikejr 1 year ago +2

    First

    • @lexolexoh
      @lexolexoh 1 year ago +1

      Nth ~3hr since upload

  • @aydtsang
    @aydtsang 1 year ago

    crust of crab language*