Rust Allocators and Memory Management

  • Published Sep 8, 2024
  • In this video I go over some basic Linux memory management concepts and talk about the pros and cons of a few memory allocators in Rust.

COMMENTS • 20

  • @Hector-bj3ls
    @Hector-bj3ls 1 year ago +10

    I love a good "level 2" understanding of a subject.

    • @meanmole3212
      @meanmole3212 3 months ago

      it is good but wait until you get a taste of level 7

  • @meyou118
    @meyou118 27 days ago +1

    love this - glad i stumbled on this

  • @seikatsu_ki
    @seikatsu_ki 1 year ago +5

    We yearn for serious topics; thank you for sharing your deep-digging journey with us!

    • @masmullin
      @masmullin  1 year ago +1

      Thank you. This is one of my favourite compliments I've received.

  • @TehGettinq
    @TehGettinq 1 year ago +3

    Ahh, a Habs fan, fellow Rust programmer, and Vim user. Delightful combination. Thanks for the video.

  • @semigsedem
    @semigsedem 1 year ago +4

    Many thanks, I learned a lot. Just the right detail level for me, I think :)

  • @irlshrek
    @irlshrek 1 year ago +2

    absolutely loving your content!

  • @CamaradaArdi
    @CamaradaArdi 1 year ago +1

    Really good video. Please do another one with the heap analyzer you mentioned.

  • @mike-barber
    @mike-barber 1 year ago +3

    Nice one! Might be good to critique the system allocator on Alpine too, since it's not just glibc, and seems to perform quite poorly in some cases. Nice to have mimalloc and jemalloc available to work around it.

    • @masmullin
      @masmullin  1 year ago +2

      Oh, now I'm kicking myself. That's a really good idea.

    • @mike-barber
      @mike-barber 1 year ago

      @masmullin thanks! I think it could be quite interesting indeed!

    • @masmullin
      @masmullin  1 year ago +2

      Looks like musl (the standard libc of Alpine) has a bespoke malloc implementation (elixir.bootlin.com/musl/latest/source/src/malloc/mallocng/malloc.c).
      This allocator is significantly slower than glibc's (and than jemalloc/mimalloc). The good news is that replacing the allocator under musl is just as easy as under glibc.
      By switching to mimalloc+musl, the test application shown at the end of the video runs only about 4% slower than mimalloc+glibc (roughly on par with glibc alone), while musl alone is 38% slower than mimalloc+musl. jemalloc+musl performs the same as mimalloc+musl, but with the high initial memory overhead seen with jemalloc+glibc.
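
      (For readers following along: swapping the allocator in a Rust program is the same regardless of which libc you link against. A minimal sketch using the `mimalloc` crate; the Cargo.toml version is an assumption:)

      ```rust
      // Cargo.toml (assumed): mimalloc = "0.1"
      use mimalloc::MiMalloc;

      // Route all heap allocations (Box, Vec, String, ...) through mimalloc
      // instead of the libc allocator (glibc's malloc or musl's mallocng).
      #[global_allocator]
      static GLOBAL: MiMalloc = MiMalloc;

      fn main() {
          // This Vec's buffer is now allocated by mimalloc.
          let v: Vec<u64> = (0..1_000_000).collect();
          println!("allocated {} elements", v.len());
      }
      ```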

  • @user-zq8bt6hv9k
    @user-zq8bt6hv9k 1 year ago

    Interesting, thanks for the work

  • @terrnnoo7007
    @terrnnoo7007 1 year ago

    Didn't quite catch why the allocator wouldn't give those 2559 dirty pages back to the OS if only 64 bytes are still in use. Does the allocator need us to free all the requested memory before it returns those pages, or is it because we wrote data to the whole 10 MB but freed only 9.9 MB?

    • @masmullin
      @masmullin  1 year ago +1

      This is difficult to explain, sorry for the confusion.
      There are two ways to allocate memory in Linux. One uses sbrk to move something called the break of the heap up and down. Think of the break like a line. Say you move the break up 10 MB, either in one big jump or many small jumps, then you use all of that 10 MB, then you free everything except the very top: the allocator cannot move the break back down, because that very top is still in use.
      The other way to allocate is via mmap. If you use mmap by hand, you can grab a 10 MB chunk of memory, use it, and then mark 9.9 MB of it as MADV_DONTNEED (via madvise). I've not seen that sort of behaviour when an allocator uses mmap to grab memory and then hands it to you via malloc/free. In the case where an allocator uses mmap, it will (hopefully) mark the whole 10 MB chunk as MADV_DONTNEED once you are completely done with it.
      Also, allocators try to be smart with mmap. E.g. jemalloc will wait some amount of time before marking an mmap'd region as MADV_DONTNEED, in case you ask for more memory.
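
      (To make the mmap case concrete, here is a minimal sketch of "mmap by hand" using the `libc` crate. Linux-only; the 10 MB / 9.9 MB split just mirrors the example from the video, not any particular allocator's behaviour:)

      ```rust
      // Cargo.toml (assumed): libc = "0.2"
      use libc::{madvise, mmap, munmap, MADV_DONTNEED, MAP_ANONYMOUS, MAP_FAILED,
                 MAP_PRIVATE, PROT_READ, PROT_WRITE};

      fn main() {
          const LEN: usize = 10 * 1024 * 1024; // 10 MB
          unsafe {
              // Grab a 10 MB anonymous mapping straight from the kernel.
              let ptr = mmap(
                  std::ptr::null_mut(),
                  LEN,
                  PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS,
                  -1,
                  0,
              );
              assert_ne!(ptr, MAP_FAILED);

              // Touch every page so they all become dirty.
              std::ptr::write_bytes(ptr as *mut u8, 0xAB, LEN);

              // Done with the first 9.9 MB: tell the kernel it may reclaim
              // those pages. The tail stays mapped and usable.
              let keep = 100 * 1024; // the still-in-use tail (page-aligned)
              madvise(ptr, LEN - keep, MADV_DONTNEED);

              // ... later, when completely done with the whole chunk:
              munmap(ptr, LEN);
          }
      }
      ```

      After the madvise call the kernel can drop those dirty pages; if the range is touched again, an anonymous private mapping reads back as zeroes.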

    • @terrnnoo7007
      @terrnnoo7007 Рік тому

      @masmullin So if the allocator uses the sbrk syscall, there is a particular reason the 9.9 MB isn't freed (because the 64 bytes sit above it, at the top of the heap). But with mmap it seems like nothing prevents the allocator from freeing the 9.9 MB if it wants to, because mmap doesn't move the brk address; it hands us pages of memory from somewhere else. So is it true that 'dirty pages' are really only possible when using sbrk, because if the allocator uses mmap it can call munmap (or madvise) on pages that were freed through the allocator API?