Knowing about fragmentation is one thing, but seeing it happen in practice is so useful! Thanks a lot
That's why I was so excited when it landed in my inbox! I often struggle to find good real-world examples of concepts and this was a gift ☺️
I'm impressed that programming language runtimes still let the OS manage the buckets of memory allocation instead of always over-committing pages and making the trade-off that saves kernel invocations for malloc buckets.
@@fasterthanlime would this fragmentation be exploitable?
@@blinking_dodo Yes, if there were any kind of remote code execution exploit and my goal was to slow the system down, this would be a funny thing to do :)
your content always plants a massive smile on my face ; you have such a kind soul ; thank you for this content !!!!!
!!!!!!!!
(5:44) I malloc, you malloc, you all malloc me lol
An allocator backed by a memory arena/slab (if one knows the amount of memory they will need) is a good “have your cake and eat it too” solution to avoid fragmentation and maintain low latency.
Your deep dive here was very very well done! Super valuable and approachable by all experience levels. Keep up the great work!
I've had to implement malloc/free for a Uni course, all the ways you can do things is very interesting!
It is also interesting to debug... I was using rust at first, but even `printf` allocates memory, which is great if your `malloc` is crashing :)
Sounds like a Jacob Sorber video. ;)
That's unfortunate, yeah
Well, you could've easily worked around it using Rust's printf equivalent.
@@shambhav9534 C's printf has the same problem, you have to use fprintf(stderr, msg).
The problem isn't whether you can work around it; it's that you have a crash, so you use printf to see where it crashes, but it seems to crash before any printf (because allocating memory is what crashes the system)
"I have no tools because I've destroyed my tools with my tools"
- James Mickens, The Night Watch (recommended)
Great content! Loved seeing a light deep-dive into kernel at work (+bonus with Rust!) with satisfying explanation
Thanks so much!
See this is why we build compacting garbage collectors ;) It's not just for the improvement in cache locality...
Someone in chat rightfully pointed out we probably could've gotten out of this one with an arena, but I agree compacting GCs are the generic solution to this.
Then your GC leaks even more memory 🤡
Well, GCs don't solve memory leaks either. And whether they actually provide *useful* cache locality is kinda a crapshoot.
GCs are great sometimes, but they're not magical either. They solve use-after-free, but memory-use-after-free is arguably the easiest thing to solve. (After all, I need features to not use-after-close my file descriptors and use-after-FIN my socket handles, and use-after-unlock my mutexed data and ...)
@@pfeilspitze Moving garbage collectors specifically fix fragmentation too.
This was great; definitely something that will stay with me, and remember when designing solutions.
I fucking love your videos. Especially rust ones. Amazing! I feel like i've always had a hunch about this as my mental model, but seeing it concretely explained is so good.
Another great example of why I love Rust: the control to choose whether to care about problems like this or not. For the type of work I do, I'll have RAM to spare, so I can afford to be a little more implicit with my management. But for some stuff, I can also go the complete opposite direction and maximize every last byte of memory.
yeah the choice of leaking memory is always handy lmao
@@nickwilson3499 As the video explains though, it's not a memory leak, it's worst-case fragmentation. In a more practical program, all those gaps in memory would be filled with smaller data structures anyway.
You mean like the Allocator template argument in C++ std::unordered_map? Looks kind of like Rust only does it at a global level. The example here also shows it making thousands of little allocations when populating a hashmap of known size, which would have been possible to preallocate unboxed. I'm also not seeing the strategic options of e.g. Gregory Collins' hashtables package. This is the sort of tuning I'd expect to see in Chapel, but its associative maps appear fairly limited too.
Thank you. This was a really detailed, and highly interesting walkthrough. I really would like to thank you for sharing this.
Saved to my library
you have a good day smart kind human
This was very interesting. I followed it roughly but I probably wouldn’t ace test questions about this from just watching the video.
It’s all about trade offs. The thing is, it quickly becomes more complicated than what I would want to reason about when writing most programs.
Since this is on top of libc's malloc, all these issues would be there in C as well. But there are so many levels of indirection. Operating system, allocator, maybe garbage collector, and then language abstractions.
The levels of indirection most often save you from having to deal with really stupid stuff like implementing your own paging in your little app that has a bunch of data in it.
Sometimes it seems like you want to talk directly to the hardware, but the second you want to run anything on multiple different systems, all the levels of abstraction soften the rough edges.
It’s just not clear at what point you would want to make choices about these kinds of issues. And it’s very hard to make general automatic defaults.
I guess the reasoning should be, the defaults are very good and if you really run into trouble, profile and pick the lowest hanging fruit.
Ahh, ok - so not a leak like, "memory is mysteriously disappearing and only the OS knows where", but rather, "memory inefficiently allocated". I was a little worried going into this that the promises I'd been told about Rust's memory safety would turn out to be disappointing lies, haha. Glad that's not the problem.
Leaking memory wouldn't be unsafe though! There's even a method in the standard library for that: Box::leak
@@fasterthanlime I mean, if your apparently-correctly-written program, in normal usage, gradually grows in memory size until it (or the system) is forced to quit, that sounds fairly unsafe to me. Not as bad as leaking sensitive info or corrupting data, granted.
@@Erhannis Rust's memory safety promises makes no guarantee that that won't happen.
@@jcdyer3 How so? My understanding was that things have one owner at any given time, and once execution leaves the scope containing the object, or the owner is collected, the thing is collected, too. Without invoking explicitly unsafe behavior, how would you permanently leak memory?
@@Erhannis You never permanently leak memory, as it would be reclaimed at program exit, but you can lose track of your objects and keep allocating more without letting the old ones go out of scope
"I didn't hide my email address well enough, and received a fascinating puzzle." ...Good end???
Really appreciate these technical deep dives! Thanks!
"It does not use very much memory at all" "1GB"
Me:"WTF"?
pmap -X <pid> gives nicer output than cat /proc/<pid>/maps :)
Excellent work as always Amos!
Arena allocators such as Bumpalo in rust mitigate this issue
Please no spoilers, I'm still working on that one
Well... all I can say is: another reason I'm glad I can use a compacting GC at work, instead of worrying about stuff like this. It's INTERESTING, yes. But I don't have time for puzzles when a client expects results...
I learned TONS from this. Thank you!!!
this was enlightening! thank you!
A truly herculean effort
Thank you very much for the vid!
Write a book. Don't waste time on this. You're a good teacher with good mastery.
26:04 Paying for drops to the operating system? That's insane, we don't pay that with garbage collectors; there's a thread to do that so our current thread runs smoothly
It won't make a syscall every time a value is dropped... glibc's malloc and free only do so when the memory they have at their disposal isn't enough; otherwise it's just simple bookkeeping. Also, having destructors run in a separate thread is a very BAD idea when holding native resources... I had to fight C#'s GC because it ran the finalizer of an OpenGL-related object on a different thread than the one it was created on.
@@tesfabpel Oh yes, "simple" bookkeeping.
It's not that simple either way.
It's usually a sparse linked list or a red/black tree. In some cases it's much more complex than that, with buckets and lots of things, because they try really hard to avoid fragmentation.
Meanwhile GCs take a much larger heap block from the OS and manage it themselves. And they can (usually) just do heap compaction and hardly suffer from fragmentation.
They also pay much less in syscalls.
Since overcommitted pages are basically free, there's no reason not to prealloc a huge amount of heap, unless you're worried about fragmenting the process's address space.
The real difference between GC and alloc/dealloc inline is where you pay the cost, on allocating or when deallocating.
With manual memory management you pay nothing when deallocating, but allocation isn't cheap or even that predictable.
GCs are ridiculously fast on allocation, it's not even a joke. It's as cheap as a barrier and a pointer increment.
And they have a good side effect, no memory crashes.
I find it funny that people think GCs make things slow when in fact it's always double dispatching and excessive use of objects (which cause allocations that have to be paid for either way).
But GCs make the cost very evident, since they have their thread spinning and doing things. Meanwhile in C++ you happily pay constructor calls and never notice. But deallocating is "free", pun intended, just let it leak...
@@tesfabpel Also, about what you were doing in C#: I never had that problem of having to free things on the same thread.
Well, don't use destructors for that; they're meant for managed objects, not unmanaged system resources.
The fact that you can use memory management to manage other kinds of resources is an impedance mismatch in programming languages.
GCs don't replace RAII for system resources, ironically. You just have to implement Dispose(false) properly and sprinkle "using (opengl) {}" everywhere.
And I think that the fact C++ uses RAII for everything is a mistake that complicates the runtime.
C#'s IDisposable isn't great; it's one of the few things Java/the JVM does better.
But I bet they didn't have to deal with COM compatibility, so there's a reason for the IDisposable design.
Or even simpler: use an Object Pool class and reference-count objects, like COM does.
You're not required to use the GC for everything. Sometimes doing things manually is fine. Just do a proper ".Dispose()" and suppress finalization. Do it from the thread you want to dispose on; don't fight the GC. It's as simple as that.
You were probably using IDisposable wrong. I think you have to use a marshal reference and run your render thread in the STA apartment if you really want the GC to fire destructors there.
What you're doing is highly unusual in that environment. That's not a fault of the GC but of OpenGL being stupid (as usual).
This was fascinating!!!
loved it
hey @fasterthanlime is there a way to know if it is a real memory leak? did you try valgrind or heaptrack and what did they report? I am having a similar issue that is reported as a leak by these tools and I am not sure if it is. Thanks for this video, learned a lot.
It's really hard to say without taking a look at the codebase itself! Most people's definition of "is it a leak" is "is it a problem in practice / does it keep growing". Otherwise it's just... memory usage.
It's worth noting that Rust's memory safety guarantees are about dangling pointers, double frees and such, but not about memory leaks. Structures with mutual pointers between instances cannot be freed easily; you must use "Weak" references.
what vscode extensions are you using?
Awesome vid!
Is the example code available somewhere? I would love to dive into it
What I take away from all this: "Phew!... Thought for a moment it was Rust's fault". Also, don't litter, keep it solid
How would this fare using the MESH allocator? That one that was going around a few years back that used some virtual memory tricks to let it merge together partially-empty memory pages if each page's holes overlapped with the other's data.
I'm planning on doing a video about arenas & the MESH allocator :) Excited for that.
Yessss, gonna feel nice for guessing fragmentation, even if I had no idea why and had to watch the explanation parts a few times:).
Regarding what to do about it, I'm not certain it's generally applicable (though it seems so), but in this case you had "interleaved lifetimes" of memory allocations as the root cause, no? Method 2 doesn't interleave them, which is why that free memory doesn't go unused.
Another solution would of course be separate allocators for the separate uses.
What is the name of that font in VSCode?
Very cool
Seems to me there should be a way to do this that uses 3 orders of magnitude less memory. Why or why not?
I thought Rust had the ability to rearrange memory, and that's what Pin prevented. Have I totally misunderstood the purpose of pinning?
Rust doesn't have a compacting GC that will move memory by itself. However, Rust code can move values, which would break async code. Async functions are self-referential state machines, and Pin makes it impossible to move them.
@@fasterthanlime I'm still confused what is "move values" if it doesn't rewrite pointers into it... or also, disallow moving values if there exist references into it, Pin being kind of implicit by that?
What font do you use? ;)
Iosevka
what VSCode theme are you using?
Usually GitHub light / GitHub dark
@@fasterthanlime thank you for the reply!
So it's more like unintentional memory overprovisioning, rather than a memory leak?
More like fragmentation, yeah!
How differently would it behave on Windows?
Hard to tell, since it has a completely different allocator, but the basic idea is the same. Wouldn't be too hard to find out, if you're curious!
is that a custom font for your vsc?
That's typeof.net/Iosevka !
I wonder if Alice knows about this co-Alicing.
30:42 garbage collectors and memory compression !
check mate
This is literally the next thing mentioned in the video
Cool low level stuff.
Lack of actually useful tools: everyone on "big" tasks is playing with ML/AI/Clouds...
And there is just no reward for making small "tools" that actually do something useful now.
It's like glibc assumes an OS with a variable page size
?
You malloc! xD
With the correct spelling and everything, it's perfect
Why the heck indeed
Double it and give it to the next person
As someone who's now learning Rust, a title like this is NOT encouraging. >_< LOL I thought Rust was supposed to save you from yourself in regards to memory safety.
Memory leaks are actually memory safe.
Glad it's clickbait and not a real memory leak ^^ If it was a real memory leak, the memory would still be gone after clear or reset ^^ So it doesn't leak memory, but how the 3rd method was implemented is not good. Nonetheless, great video 👍
The point is not to allocate those little vecs.
Wait, wouldn't the plural of Linux be "Linuces?"
I have used a lot of C and C++ applications. I haven't had a memory leak.
I used two Rust apps. One of them was leaking memory
dhat-heap is a very good memory profiler.
Are there really people who find inlay hints useful and not confusing?
Yes, me!
To be frank, it's a bit confusing to watch. What's the trim code, what's the reset code? I see short glimpses of code that jump from left to two columns, then back to left, then split to show the graph... gah! Then it goes straight to the measurements without knowing what any of those actually does... that's where you lost me. 😅
The description has a link to the repository! That might help :)
Ohno, it's MADV_DONTNEED :O
ua-cam.com/video/bg6-LVCHmGM/v-deo.html#t=58m23s
...so it doesn't leak memory at all, it just allocates inefficiently. Misleading title
The title is the question I was asked - the video elucidates it. If every commenter being salty about titles whilst still learning something useful in the video spent that energy elsewhere, we would have solved the climate crisis already.
So we can go back to C++? Feels like a weird, overcomplicated lang
The video emphasizes at the end that this is something common to all memory allocators - only moving/compacting garbage collectors solve that problem generally.
Sadly, you'd have the same problem in C++ too.
@@shambhav9534 That's why I prefer to stay with C++, I don't see a real benefit in changing
@@EzequielRegaldo It might be harder to debug in C++, since you'd have to write 10x more lines of code to get a grasp of what's going on.
In Rust you just use dhat heap profiler. In C++ you frequently need to create tools yourself (from my experience)
@@EzequielRegaldo For every problem that Rust doesn't solve, it solves a thousand others.
Isn't this just a mediocre programmer writing stupid code? Do people really think you can completely ignore how memory works and still write efficient software?
It's not - the example was golfed down from a real-world codebase to something small enough to study in isolation. A lot of people were stumped by exactly what was going on. Calling people mediocre and their code stupid doesn't make you look cool and isn't welcome on this comment section.
@@fasterthanlime If the code was not critical in the first place then why are we talking about it? If it was indeed critical then whoever wrote this didn't know what they were doing. Also just because a code excerpt comes from a real-world codebase doesn't magically make it perfect. The code was a textbook example of memory fragmentation, it really is dumb.
EDIT: After reading the code myself, I think mediocre was an understatement. It was super obvious from reading the code itself. Even the real-world code was awful. They were collecting an entire database to create the inverse_map, which is obviously a memory fragmentation issue. A commit fixes it *accidentally*, by trying to save memory by streaming instead of collecting the entire db.
I'd really like to see you on @Computerphile! I think it would fit perfectly.
What font do you use?