That shouldn't happen with this program: you never actually used the memory you requested with malloc, since you never touched it. You can malloc your entire RAM twice and the kernel will not allocate any resources for your program, since it never interacted with those pages. What most likely happened is that malloc itself filled the memory: running in the loop, it created more and bigger structures to keep track of all those allocations so it could free() and realloc() them later.
What happens when AmigaDOS runs out of everything? Cancel, or free space and Continue. If computer science returned 40 years ago, we would see an evolution. ;)
- Notice that the reason the program fills up the memory is not the memory directly requested with malloc (which is never used in this program, just allocated), but rather the bookkeeping required to maintain all the rapidly allocated blocks. (If I had just allocated one large block, it would not have filled up the memory.)
- I recommend checking the return value of malloc when you try this yourself, since it returns NULL when it cannot allocate, which can help you debug.
- Forgot to mention that, by default, you will only see these kernel logs if you are on the first TTY session of the computer (if you are connecting via SSH, for example, you won't see the messages), so a recommended way to check the kernel log is the dmesg command.
I discovered this when hosting Minecraft servers on Linux and realizing that in many cases, the heap of the JVM exceeded the VM usage, which can cause this to kick in. Very fascinating find!
I've noticed that too - even if Minecraft itself reports a fairly constant heap usage, the process will use more and more memory, to the point where I eventually have to restart it to avoid crashing my system
You should allocate the memory for them appropriately using the -Xmx flags and so on
@@kreuner11 The JVM will still overallocate over time, past the heap flags. From my experience, it helps to keep the heap size at least 1 GB under the OS RAM limit. I was merely pointing out how setting the heap in extreme cases causes the OOM killer to kick in. (This also assumes you don't have any swap enabled.)
Even with a daily restart?
Can you tell people who have the same idea of running an MC server on Linux how to avoid the server being killed by this process termination?
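A hedged sketch of the usual mitigation (flag values are illustrative for an 8 GB host; the idea is to cap the heap with the standard JVM `-Xms`/`-Xmx` flags and leave headroom for JVM overhead and the OS):

```shell
# Cap the Java heap well below physical RAM; leave 1-2 GB of headroom
JAVA_FLAGS="-Xms4G -Xmx6G"
echo "would launch: java $JAVA_FLAGS -jar server.jar nogui"
```

Enabling some swap, or putting a systemd `MemoryMax=` limit on the service, also keeps the OOM killer from picking the server by surprise.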
More videos on the Linux kernel like this where you also dig out the source to show how it works. It’s super fascinating
So does it make a loud bang and burst into flames before or after it terminates the process?
only if you play minecraft
mine does the Wilhelm scream before dramatically leaping off of my desk
Mine does the Wilhelm scream before dramatically leaping off of @@SpiritFTV's desk
Just download more RAM bro
during. thats how it terminates the process
You can actually set the flag to never kill the process, and if that process fills up all the memory, the system's gonna kernel panic with the message reading something like "System locked up on memory"!
Which is really bizarre, considering malloc() is defined to return null when you run out of memory, and no other operating system crashes due to not being able to allocate memory you asked for.
@@darrennew8211 It panics because the operating system is unable to allocate any more memory for itself
@@specialopsdave yeah, I doubt macOS or windows would fare better given the same conditions
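The knobs being discussed here are presumably `oom_score_adj=-1000` (exempt a single process from the OOM killer) and the `vm.panic_on_oom` sysctl; the latter can be inspected like this:

```shell
# 0 = invoke the OOM killer (default); nonzero = panic instead of killing
cat /proc/sys/vm/panic_on_oom
```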
The name for the Out Of Memory manager seems obvious when read out, but it could also come from the old scifi story of the OOM. The OOM was a celestial being who, unhappy with his work, decided to remove all of creation. He deleted everything until he got to himself. Then he consumed himself, leaving nothing.
Any idea of the name of the story?
@@vr10293 Not even GPT-4 knows the name of this story. Neither does any of the search engines I tried return anything about any celestials named "OOM". It therefore stands to reason, it does not exist. Which makes sense, after all, this OOM consumed every trace of himself.
Short, clear explanations with references to kernel docs and source code. Thanks!
What a legend. All the good stuff vim, C and Linux.
Vim users are toxic, they sacrifice themselves for just a 25% increase in coding speed
@@dipanshukumar5504 you consider 25% too low?
but in Windows
@@ripsivis Bro I use Arch but with VS Code
People just use whatever you want and whatever you're comfortable with :3
Thank you for keeping the video short and to the point!
You can use the `watch -n 1` command to make the `free -h` command auto-refresh. It would be great to see the memory drop in real time.
It supports fractions, so 0.1 will report ten times per second.
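Put together (the `watch` line is commented out because it runs until interrupted; the one-shot `free -h` is shown so the snippet terminates):

```shell
# watch -n 0.1 free -h   # refresh ten times per second, Ctrl-C to quit
free -h                  # one-shot snapshot of memory and swap usage
```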
Love this video and your channel, it's such a gold mine for engineering things that I've thought about but never had the enthusiasm to really dive into myself
This channel is a gold mine of knowledge 🙌🙌
this type of video in specific is so interesting to watch!
I love the video idea and execution, fast and straight to the point, very nice of you that you explained how it chooses a victim, subbed
I LOVE this short and interesting technical videos!
It's the long-running processes that can make your Linux system completely unrecoverable when it's OOM.
Example: a Firefox browser running since right after boot and going OOM as it keeps opening newer tabs.
Or... a Kdenlive video editing session. Before you know it, you're OOM and all your work is now *completely* lost, since the system just can't recover.
At first I thought it was just being slow as it tried to move pages from physical memory to swap, but quite a few times I let my computer try to recover for almost a day, and it never did.
Albeit, it was on a laptop with 8 GB RAM and an HDD.
I do hardware monitoring for data centers for a living and sometimes this occurs and causes the system to be unreachable. A reboot will typically fix it but then I've got to inspect for whether or not this happened because of a high amount of correctable errors or just because someone's got a memory leak in their application that's been running for months.
There is no unrecoverable state if you know magic SysRq keys.
Not counting the kernel panic of course.
I know this feeling, i have a 4GB computer and Firefox crashes constantly.
@@mk72v2oq Yo, i'm not highly knowledgeable of Linux, thanks for this tip!
@@mk72v2oq The Magic SysRq key (usually shared with Print Screen) + "BUSIER" backwards: REISUB.
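For reference, SysRq availability is controlled by a sysctl, and as root the `f` function (or writing `f` to `/proc/sysrq-trigger`) invokes the OOM killer manually:

```shell
# 0 = disabled, 1 = all SysRq functions enabled, other values are a bitmask
cat /proc/sys/kernel/sysrq
# As root:  echo f > /proc/sysrq-trigger   # trigger an OOM kill on demand
```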
Nice video, straight to the point, I love it!
Your channel is glorious and beautiful. Thank you!
I had a notebook with a great personality, but only 4 GB of RAM. Every time that boy used up that memory, everything stopped, don't ask me why. I am sure it had swap, but idk. So I needed to recite the "raising skinny elephants" mnemonic (REISUB) every time.
quick and informative, very nice.
So, another observation: I tried running a similar program (allocate 4K pages until malloc returns NULL, while printing the sum of allocated memory) on a 64-bit system with a 48-bit MMU width(?), and NULL was returned after a little less than 2^48 bytes. This is because Linux doesn't allocate a physical frame for each allocated virtual page unless the page becomes dirty; a version that dirties its pages will crash much earlier, because it actually occupies physical memory.
I'm using a new laptop with 16 GB of RAM. When I'm using Visual Studio Code and open League of Legends, I get out of memory, but my system completely freezes and I have to force a shutdown. Looks like the kernel is unable to kill the process.
Didn't know what to expect, but in the end this solution really makes sense.
Is there any way this could lead to a system freeze? Or is this an absolutely bullet proof OOM mechanism?
Because I saw a system freeze while it was idling, journal just stopped without signs/warnings, system was completely frozen and unresponsive, SSH not working etc... Unfortunately I don't have logs of memory usage. Happened 3-4 times over the course of a few months actually. I'm at a loss.
I had the same behavior!
@@kekulta Do you have an idea what could cause this? Which kernel are you running?
i've had my system freeze when it ran out of RAM, but the kernel wasn't actually frozen, it was just swapping a lot (i could get to a tty, half an hour after i pressed ctrl+alt+f2). I've tried two things to stop this: 1) disabling memory overcommitment. This makes Linux deny requests for more RAM than there actually is. 2) setting a memory limit for my user using systemd.
I would say 2 is the better option, since 1 might still freeze or crash important apps and is hard to configure well.
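Option 2 can be done with a systemd drop-in on the user slice — a sketch only; the path and limits are illustrative, though `MemoryHigh`/`MemoryMax` are real systemd resource-control settings:

```ini
# /etc/systemd/system/user-1000.slice.d/50-memory.conf (path illustrative)
[Slice]
MemoryHigh=3G
MemoryMax=4G
```

`MemoryHigh` throttles the slice via reclaim before the hard `MemoryMax` cap is hit, which tends to degrade gracefully instead of freezing the whole box.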
Sometimes the program's malloc doesn't fail even when there is no more memory, so the system freezes, since it's at the memory limit for other programs
Bro, just learn Magic SysRq keys.
Thank you! Learned good thing.
Very informative video! I love the pragmatism of just writing 5 lines of C code to do the thing plain and simple, and finding the source code + documentation of the actual kernel, that was very good.
Out of curiosity, what do you do as your career?
Thanks! Currently I am a CS and entrepreneurship student
3 minutes full of useful information.
I remember the days when I needed to render some scene in Blender (Steam version) on CPU, and a few times when the PC ran out of memory the machine just froze. Nothing worked but the power button. I guess there are more factors in a case like this, but "running long" deserves some spotlight, thanks.
You should have used top or any of its derivatives to show the memory consumption growing in real time. The first time I found this, I was making a server that transfers files and had made a mistake where I didn't free the memory, and it kept shutting down the server. I used htop and immediately saw what was happening.
One tip to remember: if you can, put your swap area on a different physical device and bus - this goes for SSDs too. When a system starts thrashing (spending more time managing memory than letting non-OS processes run), the I/O buses can get flooded as the system tries to move processes from memory to swap, exacerbating the issue. It's rarely a problem these days; the best investment for any computer is to upgrade its RAM.
What are your thoughts on the size of swap these days? I started with HP-UX 8 in the '80s, when memory was extremely expensive, so I had to calculate the virtual set size of every process under load and balance the server spec accordingly. As soon as the load started peaking every month, I could see all the drive lights in the data centre flashing like crazy on the swap packs.
The typing sound tho 😻
The video is so well made.
There's also systemd-oomd, which tries to improve on this older mechanism, not always successfully. (It seems problematic on the Linux desktop without adjustments.)
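For context, systemd-oomd is steered per unit or slice through `ManagedOOM*` resource-control settings — a sketch, with an arbitrary pressure threshold:

```ini
# Drop-in for a unit or slice (location illustrative)
[Service]
ManagedOOMMemoryPressure=kill
ManagedOOMMemoryPressureLimit=50%
```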
This used to happen a lot when I hosted 3 concurrent Minecraft Worlds on my server.
If you are not writing to the allocated page, how is the kernel allocating pages ?
What about combined processes that takes a lot of memory, or a process that has been running long enough ?
underrated YT channel
Thank you 👍🏻
Is there a reason why malloc simply doesn’t return a null when trying to alloc more mem then available? Why it tries so hard to overcommit and then memkill instead of refusing straight away?
To my knowledge this is what it does.
It does return null.
But a short time later the kernel can see that available memory is dangerously low and kills it anyways.
So the malloc doesn’t kill the program, it just makes it more likely to be killed.
Edit: This is not true for all OSes. Some will return a valid virtual memory address , but will throw an error or terminate the process if it ever tries to write to that virtual memory region.
It does
Because it's more memory efficient. Applications often allocate much more memory than they actually use.
Overcommit is a kernel tunable though. You can simply disable it and get behavior identical to Windows.
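The tunable in question is `vm.overcommit_memory` (shown read-only here; changing it needs root):

```shell
# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = never overcommit
cat /proc/sys/vm/overcommit_memory
# root equivalent of "disable overcommit":
#   sysctl vm.overcommit_memory=2 vm.overcommit_ratio=80
```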
Because of fork(). Back when processes swapped, and in the operating systems that only allocate memory they have, that's why it works. But once UNIX started paging, fork() would set the data pages read-only, then copy them if one of the processes tried to write to it. (vfork() was a way to get around this, abandoned in favor of the OOM Killer.) So, basically, you call fork(), then try to access something already allocated, and the OS has to find somewhere to copy the page to that you already have access to.
The only reason applications "allocate more than they use" is fork(). Nobody writes code to significantly allocate more memory than they use and expect them to run properly on operating systems that actually allocate memory when you ask for it.
-What happens when linux runs out of memory?
-IMPOSSIBLE
that is a surprisingly nice and simple self-preservation mechanism.
Does Windows have something simmilar to this?
'cause as far as I can remember Windows Machines just end up bluescreening, but I ain't too sure.
Windows doesn’t have an OOM killer and doesn’t overcommit.
Great vid!
This might explain why I sometimes get a hang when I run out of memory, as all my programs have been open since boot (web browser, Discord, terminal). It's just trying to decide what to kill, or I have a different issue...
Can someone explain the reason of the square roots in the source code part?
now i know why my mongodb server was stopped thanks for the clarification
Not every Linux system, apparently; my system froze after hitting the 32 GB limit
I may sound nooby but how can you split the screen to a working terminal and vim in tty mode
Check out my video about multiple windows on vim
In my experience with Lubuntu, it just freezes. Windows' modern memory management is far better. There are videos from Microsoft developers on UA-cam about this (Windows 7-10 era).
Two good channels are (@)NT-dd8rw and (@)m2kt.
There is no better memory management in Windows. It simply doesn't have overcommit. You can disable overcommit in Linux to get behavior identical to Windows.
@@mk72v2oq Memory is much better managed in Windows lmao, even the cache system; the swap file also automatically increases as your system needs more lmao...
Need more videos like this
So what if you have a process that consumes a lot of memory and cpu time?
When my system goes out of memory it freezes. It never closed the process responsible. I had to do it manually
Me when running LLMs on CPU (even 64GBs of RAM is not enough to contain me)
How can I check if this feature is enabled on my machine? I use a small AWS EC2 instance for learning, and it is quite easy to stall this machine by using too much memory. To me it looks like the kernel is not killing the memory-consuming task.
You can go to the AWS console, go to your EC2 instance > Monitoring > Get system logs.
If there are any OOM-reaper kills, they'll show up in there :)
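Outside AWS, the same evidence is in the kernel log; a grep like this should surface it (`|| true` because a quiet log is also a passing result):

```shell
# OOM kills are logged by the kernel with lines like "Out of memory: Killed process ..."
dmesg 2>/dev/null | grep -i 'out of memory' || true
```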
My Linux system generally just freezes and doesn't recover... what could cause that?
What is that formula badness_for_task, how does it work?
Thanks. Rather than killing the process, can't it alert and ask for user input?
How did you get the cli and vim on the same terminal?
Check out my video about multiple windows on Vim for more info
Thanks!
Great video. Short and to the point. However, this is the default, and there are alternative implementations. As a desktop user, I think it's embarrassing that the system just locks up while the OOM killer is figuring out what to kill. In an ideal world, I think we should have a minimal kiosk GUI running on console 0; then you could simply SIGSTOP the main desktop session and allow the user to make a choice. But even a couple of decades ago, it should have been possible to just pre-select processes to kill. I would prefer Steam to get killed before Firefox, and Firefox before GNOME Shell, for instance, but I would understand that some would want Firefox killed first, since that would likely solve the problem and allow them to keep gaming.
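The pre-selection wished for here does exist in a crude form: `/proc/<pid>/oom_score_adj`, ranging from -1000 (never kill) to 1000 (kill first). The process names below are illustrative:

```shell
# Read the current adjustment for this shell's process
cat /proc/self/oom_score_adj
# As root, make Steam the preferred victim and protect the compositor:
#   echo 500  > /proc/"$(pgrep -o steam)"/oom_score_adj
#   echo -500 > /proc/"$(pgrep -o gnome-shell)"/oom_score_adj
```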
isn't Linux supposed to do some virtual memory magic to not empty the reserves given that you aren't using the malloc'd pages?
Oh, the system probably allocated a few tens of GBs; the OOM decision was based solely on the actually touched pages.
Cool example!
how does the kernel use this little ram?
Now how does windows handle it? Blue screen ?
By default the OOM Killer gets activated and kills processes to fix the situation.
I watched this video a week ago and my web server crashed yesterday. Checked the logs, and it turns out it was the OOM killer that killed the process. Quite a coincidence haha.
very very cool video
My Gentoo machine just freezes completely. I have to press the poor power button for a few seconds -.-
Using Gentoo but don't know such basic thing as Magic SysRq keys? Weird.
@@mk72v2oq Am I an omniscient? xD Thank you, I will dig into it.
very nice
That's a neat demo, thank you.
But... 8 spaces per indent?! 😕
The ✨ default Vim experience ✨
Yah this was a new Debian VM I installed, so haven't configured Vim yet
Tabstops are canonically every 8th character. Some weird people set them to 4 or even 2 instead (which is quite understandable when working in deeply nested languages, to be fair, especially if on a small display or using a high font size (after all, one of the chief benefits of tabs are the accessibility-friendliness of them)), but they're really "supposed" to be mod 8. Of course, it shouldn't matter: if you use tabs for indentation, and spaces for alignment, as everyone should, it comes down purely to personal preference and matters not to other people - unlike when far too many people abuse spaces for indentation or in the case of the OpenBSD dudes, use an abominable mixture of hardcoded-length tabs and spaces (seriously, OpenBSD is great and all, but their style guide is really something else), thereby forcing their [probably awful to the majority of the code's viewerbase] indentation preferences onto everyone else
Personally I use both 4 and 8 width tabstops daily, and some more exotic lengths every now and then too
Linux kernel uses 8 spaces too.
The primary need for the OOM Killer is due to fork(). Otherwise, malloc() could just return null when it fails, like on every other operating system in forever, including original versions of UNIX. But once UNIX went from swapping to paging and fork() didn't actually allocate swap space or RAM for the pages that weren't written, systems needed some way of dealing with needing to actually allocate memory that the kernel already told the process was allocated. Thus was born the OOM Killer that randomly executes processes with no recourse to correctness, instead of informing the process allocating the memory that none is left. fork() is probably the worst thing to happen to operating systems in many decades.
But if I were to add a 1hr delay before eating up the entire memory, my process wouldn't be killed.
INTERESTING
for me it just freezes the whole system (I was playing Minecraft with the Distant Horizons mod)
bro uses 10 characters of indentation
Surprised to see no discussion of overcommit. In theory the application you made here should only waste address space, but not actually use much memory since it never touches the memory it allocates. This means malloc only returns nullptr when all address space is used, even if you have allocated more memory than is actually possible to use. When you actually start touching all the memory, it will fault in and actually start allocating memory usage, and eventually you'll get OOM during a memory write or read operation, instead of during the call to malloc. I suppose you must have disabled overcommit for this video to get the more intuitive behavior that happens on Windows, or the OOM was only for the address space and not actual memory usage.
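For anyone wanting to reproduce both behaviors, the overcommit policy is a runtime sysctl (modes as documented in the kernel's vm overcommit-accounting docs; changing it needs root):

```shell
# Show the current policy:
#   0 = heuristic overcommit (the default)
#   1 = always overcommit, never refuse an allocation
#   2 = don't overcommit beyond the commit limit
cat /proc/sys/vm/overcommit_memory

# Mode 2 makes malloc fail with NULL instead of overcommitting;
# the commit limit is swap + overcommit_ratio% of RAM
sudo sysctl vm.overcommit_memory=2

# Restore the default heuristic
sudo sysctl vm.overcommit_memory=0
```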
I made a video-generating Python script with no regard for optimization and it used all my 16 GB of RAM. Everything just froze and never woke up, so I had to cut power to the computer. I did this about 10 times, and my UEFI got corrupted and had to be reflashed. I could not write to any drives. It took me a week of debugging to find out the problem was in the motherboard. It was absurd.
Oomkill, yeah that happens if you don't define cgroups correctly.
for me it freezes up and thats all
nice
💖💖💖💖
Idk about my linux box, but when android runs out of memory it kills a bunch of apps.
👍!
when your 8gib ram and 4gib swap fill up you go and buy 32gib ram
true story
That shouldn't happen with this program: you never actually used any of the memory you requested with malloc, since you never "touched" it. You can malloc your entire RAM twice and the kernel won't allocate any resources for your program as long as it never writes to those pages.
What most likely happened is that malloc itself filled the memory while running in the loop, creating more and bigger bookkeeping structures to track all those allocations so they could later be free()'d or realloc()'d.
That is a good point, thanks! added to the pinned comment :)
Basically the anti-trashdev protections are built into the os... nice!
Came to know about you from @LowLevelLearning's tweet
So this is the origin of the big bang
Get your hard drive 98% full and Linux will freeze up too. I was downloading to the wrong drive and it filled my OS SSD.
MEGA file sharing website: Inspiron 700m gets softlocked.
I'm sure he needed to check the man page for malloc to see that he needed stdlib.h...
Virtual memory goes burrrr... Nah idk.
I need more memory, please. Maybe zram ?
wbu windows ?
What happens when AmigaDOS runs out of everything? It asks: Cancel, or free up space and Continue. If computer science went back 40 years, we'd see an evolution. ;)
bad things
Mallocious!
it hung
for me Linux Mint crashed and I had to reboot, oopsies!
And Windows just bluescreens