Computerphile single handedly keeping the dot matrix paper industry alive.
You have no idea how happy you've made me by finally increasing the number of actual computer science videos on this channel. You guys are doing a great job.
One of the most interesting parts of my Computer Science degree was learning about operating systems by stepping through the Minix source. Although things are more complex now with multi-core CPUs because now more than one thing (or more than one bit of the same thing) can run at the same time.
@@existenceisillusion6528 and now it powers the Intel Management Engine!
Why you eating my processes? No wonder my pc keeps crashing
Wait until he starts accepting cookies
@@kattenelvis1778 great. Now my coffee is all over the place. 😂
Because they're delicious. The black one obviously tastes like ... choosing my words carefully here ... licorice. It didn't stand a chance. Too delicious.
Exquisitely presented. That YouTube plaque in the background is well-deserved. Computerphile remains one of my favorite channels.
An example of Multi Programming: I'm watching this video while I wait for my code to compile.
No. That's Multitasking. Multiprogramming would be if you were multitasking watching FOX News and a commercial at the same time.
@@JDines lol
Killing them.. I remember having a conversation about host process management from an OS perspective in a café once. Told a guy semi-loudly "Remember that we also need to kill all the children of the host" (referring to child processes) "Before we kill the host."
Coffee lady came by asking whether she has to call the cops on us or what's going on.
That's hilarious!
The concept of showing it with the different Dr. Bagley clones is brilliant! and not easy to do!
"... unless the process is just very boring and just kills itself..."
That's a bit... harsh?
(Yeah, I know, it's perfectly usual programming lingo. :) )
Gettin pretty fancy with the editing!
Concurrency is not parallelism
Indeed. »Read the 1978 CSP paper -- it’s deep and wise« (Rob Pike about C.A.R. Hoare’s seminal paper »Communicating Sequential Processes«).
Very good video. I'd like to suggest looking at I/O scheduling. Both Windows and Linux can become very unresponsive if doing heavy I/O. I've seen cases, even on a very powerful PC, where responding to a mouse click can take many seconds during heavy I/O (disk backup, file downloads etc). Perhaps look at the various scheduling algorithms and their pros and cons.
Spike Evans Linux's default I/O scheduler is an utter pile of trash. I don't know what use cases it works well for, but anything interactive is nearly impossible under high I/O load unless you change the I/O scheduler. On Windows the usual issue is swapping (not enough physical RAM).
@@OverKillPlusOne I think that the mouse and keyboard should have priority over all else. I don't believe Windows is a RAM issue. I have an 8 core 16 thread CPU with 32 gig of DDR4 RAM. I think, I don't know for sure, there is a single I/O queue that's the problem. In any case, it's my computer and when I click the mouse or type it should listen to me.
@@spikeevans1488 Yeah, this is a very big problem. Even when there's not a huge lockup, modern OSes have measurably higher delay on typing or mousing than in the 90s, and the delay was even lower in the 80s.
@@OverKillPlusOne the use case it works well for is basically just a headless server. So that probably explains why it's so much more accepted in the server industry than the home computer industry still.
It would be nice if Linux could adapt its priorities based on whether you're using a GUI or not. Like if your home rig is also being a NAS for your laptop, you might want it to prioritise the file transfer if the screen has gone to sleep. If you're watching stuff on your laptop you're not likely using the desktop. And if you're on the desktop, why would you stream stuff to your laptop? So a file transfer could go 10% slower and you wouldn't mind because you're using the desktop at that time.
@@kaitlyn__L Dayjob involves running thousands of Linux based servers. On anything I/O taxed we have to change the default scheduler else things like databases get crippled because of I/O staying in the queue for multiple seconds when you reach I/O limits (like on the older fleet of hard drive based database servers). The default deadline scheduler has two queues, a read and a write, and will starve the write queue blocking the world up under heavy I/O. Linux has completely separate handling for other I/O events.
I remember hearing about an incident involving MacOS's cooperative multitasking.
Every now and then a network full of macs would stall, then get flooded with packets. It was eventually narrowed down to a specific machine and when the admin went and investigated, nothing seemed to be out of the ordinary. After a few occurrences of this, the admin sat down and watched the user at work. Sometimes when working with the mouse (might have been drawing), he'd hold the mouse button down for much, much longer than was normal, and with the input handler tied up on the long mouse press, nothing else in the computer got to run, including the network stack. As a result, no packets were passed on from the machine in question and the buffer would just fill up, resulting in a flood when the user released the button and the network stack got some processing time again. A gentle word with the user not to hold the mouse button down so long fixed the problem.
Don't know if it's true or if I remembered all the details correctly. I didn't have any luck tracking down the original before making this post.
Reminds me of the however-many-mile email problem story.
@@kaitlyn__L Ah, the 500-mile email. I remember that one. Caused by mail software having a default timeout of only a few milliseconds, wasn't it?
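Back to the cooperative multitasking in the MacOS story: here is a minimal user-space sketch (assuming Linux/glibc and the <ucontext.h> API; this is an analogy, not how classic Mac OS was actually implemented) where a task only gives up the CPU when it explicitly yields. A task stuck handling a long mouse press and never yielding would starve everything else, exactly as described above.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void) {
    for (int i = 0; i < 3; i++) {
        printf("task: doing a bit of work (%d)\n", i);
        swapcontext(&task_ctx, &main_ctx);   /* cooperative yield back to the "OS" */
    }
}

int main(void) {
    static char stack[64 * 1024];
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link = &main_ctx;            /* where to return if task() finishes */
    makecontext(&task_ctx, task, 0);

    for (int i = 0; i < 3; i++) {
        printf("main: giving the task a turn\n");
        swapcontext(&main_ctx, &task_ctx);   /* runs until the task yields */
    }
    printf("main: done\n");
    return 0;
}
```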
Steve is one of my favourite profs on Computerphile. Also loving the array of vintage computers in the background. Hmm, talking about that: he should do an episode only about them. Show each of them and give a brief history and hardware specs.
You deserve an oscar for this!
This was a really fun idea for the editing of the video Sean, great job!
My mind is blown.
I just can’t believe they made that exact same shirt in two colors.
So a 12 core CPU gets you to the diabeetus faster then.
Got it.
Example of multi-programming: while I'm watching this my evil clones are taking over the world.
You guys are missing out. When Bagley gave us this lecture we also got to eat the jelly babies.
Yeah, but you only had one Bagley.
ArumesYT we had all the Bagley. He exists in a 4 dimensional state across all of space and time and we only see a three dimensional projection of him. His true wisdom is beyond our comprehension. Ancient writings suggest he derives his power from Diet Coke.
Will there be a video on hyperthreading?
Looking forward to both future videos!
I have not seen this type of printer paper since 1993 when we used to load it into Honeywell DPS7 mainframe chain printers. Love this channel so much, really interesting.
So penny dropped moment.. with 'process explorer' in Windows you can change the priority of a program. So presumably then that is the pre-emptive part of the OS figuring out how much time to allocate to each one, no? Now I'm going to have to go buy jellies!
Almost. The priority value Windows lets you assign is used as one input by the scheduler to decide when and how long a program runs, but that algorithm has a lot of other internal parameters that users can't (and almost certainly wouldn't want to) directly touch.
you needed this vid to figure that out?
@@prontosolutions4370 sorry dude, I'm a Unix guy just trying to figure this stuff out. Didn't realise I was going to be called out by people like you who clearly know everything!
@@prontosolutions4370 Somebody at some point needs something to figure something out. That's how it works.
@@SteveGouldinSpain Kinda similar to nice (and ionice)
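Since nice came up: here is a minimal sketch (assuming a Linux/POSIX system; it uses the standard setpriority/getpriority calls) of a process handing the scheduler the same kind of hint you'd set through Process Explorer or renice, just from inside the program:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* 0 is the default niceness; positive values mean "let others go first". */
    if (setpriority(PRIO_PROCESS, 0, 10) != 0) {   /* who = 0 means this process */
        perror("setpriority");
        return 1;
    }
    printf("now running at nice value %d\n", getpriority(PRIO_PROCESS, 0));

    /* Burn some CPU so the effect is visible in top(1) or Process Explorer. */
    for (volatile unsigned long i = 0; i < 2000000000UL; i++) { }
    return 0;
}
```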
This is the clearest explanation I've ever heard for this topic!
Loving the production value haha
Got it, but doesn't the OS need a running process to handle other processes as well? If so, how does it manage to first suspend itself and then run another process?
Dr Steve Bagley I still propose a new episode (or episodes) where you would give some information about the array of vintage computers showing up behind you in your videos. Please explain what they are, in which year of the 1980s they were introduced onto the market, and what impact they had back then.
Handling interrupts is important because some interrupts will manage the context switching. Take a “hard” real-time system, where EVERY interrupt runs the scheduler and likely forces a context switch... which means there are fewer CPU cycles available to crunch numbers.
On a non-real-time system, most interrupts get serviced and then control is passed back to the previously running process.
In a real-time system, EACH INTERRUPT MAY CHANGE WHICH PROCESS GETS CONTROL NEXT, so, after the interrupt is processed, the scheduler is run to determine which process needs to cope with the most recent interrupt. The scheduler takes extra time over and above the usual context switch. The higher the interrupt rate, the more frequently the scheduler is run, taking up cycles “regular” processes could be burning.
I have run across people who think a process is so important it needs to be run during an interrupt... and, so, the system had problems. Some folks have no idea how priorities need to be set.
In a usual (non real-time) scheduler, priority will float up and down based upon whether a process uses its time slice or not. If the whole slice is used, the process's priority will drop, giving it a longer time slice next time. Likewise, if it does some kind of I/O syscall, the priority will be bumped up and the time slice (quantum) will be shortened. I/O-bound processes float “up” in priority and compute-bound processes float down. This implements a form of “fairness”.
In a real-time system, priorities are set and do not change, and you need to plan out ahead of time which processes get what kind of priority. (Don't ask, this is a long and sordid story where I had a lot of resistance from the “key” programmer as I tried to get a FORTH-coded application into a Unix (well, LynxOS) system; I was considered the “young punk” at the time and everything I knew about OS internals was “just dogma”.)
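For the fixed, non-floating priorities described above, here is a minimal sketch (assuming Linux; it usually needs root or CAP_SYS_NICE) of putting a process into the POSIX SCHED_FIFO real-time class, where the priority you set is the priority it keeps:

```c
#include <stdio.h>
#include <sched.h>

int main(void) {
    /* A fixed real-time priority: the scheduler never adjusts it, so it has
     * to be planned against every other real-time task on the system. */
    struct sched_param sp = { .sched_priority = 50 };     /* 1..99 on Linux */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {    /* 0 = this process */
        perror("sched_setscheduler");                     /* usually needs root */
        return 1;
    }
    printf("running under SCHED_FIFO at priority %d\n", sp.sched_priority);
    return 0;
}
```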
I knew how co-operative multi tasking worked but not preemptive. That always seemed to get glossed over as just "the operating system takes control". Very cool that it bounces between processes very quickly just to keep things even.
It seems to me like that bouncing would make it easier to run on multiple cores/CPUs, is that right? Assuming the memory address of the program can be accessed by both of course. Like, it's already divided up into slices in the preemptive model, so how difficult would it be to have a second CPU take from, say, the back of the runnable pile every time too?
Just thinking about how the token bouncing and everything took up a lot of overhead in the Christmas video.
Does asynchronous code work in a similar way to multitasking in Operating Systems, but instead of switching processes in the CPU you switch blocks of code executing in the main thread?
Not in the main thread, but yes, an asynchronous task may run in a separate thread. This allows the main thread to continue while that task is running.
A new concept from computerphile.. *_I LOVED IT_*
Was mechanical interrupt a thing because of single-core CPUs? If only one process can be run at a time and if that got hung up, you would need a way to kill the program. Or is that understanding wrong?
Could you guys do a video (if you haven't already) about how hyperthreading works and what makes it different from the multi-tasking you're talking about here?
Bit of a weird trippy out-of-body experience with Prof. Bagley. Good video, cheers.
What’s going on in the close up shots of the page? Looks like an algorithm is trying to stitch the image together
pre-emptive multitasking... Looks like Amiga and its operating system was 10 years ahead of everything else back in 1985.
How does the OS make sure to not contaminate the accumulator/registers when it forcefully takes control from the program so as to not break whatever it's calculating?
Each process has an area in its context where those are stored. It needs to do this regardless of whether it's done cooperatively or preemptively. How exactly this is accomplished varies from one architecture to another.
@Conner McKay
Whenever a context switch happens, all registers have to be saved.
@LueLou wow I didn't know that. I hoped that part would've been covered in the video, but it sounds like it will come soon in another one 😊
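A purely illustrative sketch of the per-process context record being described in this thread; every field name here is invented for the example (a real kernel, e.g. Linux's task_struct, carries far more):

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical process control block for a toy kernel: everything the OS
 * must save on a context switch so the process can later resume exactly
 * where it left off. */
enum proc_state { CREATED, RUNNABLE, RUNNING, BLOCKED, TERMINATED };

struct cpu_context {
    uint64_t gp_regs[16];       /* general-purpose registers */
    uint64_t program_counter;   /* where to resume execution */
    uint64_t stack_pointer;
    uint64_t flags;             /* condition codes / status register */
};

struct process {
    int pid;
    enum proc_state state;
    struct cpu_context ctx;     /* saved registers while not RUNNING */
    void *page_table;           /* this process's view of memory */
    struct process *next;       /* link in the runnable or blocked queue */
};

int main(void) {
    struct process p = { .pid = 1, .state = RUNNABLE };
    printf("pid %d is in state %d\n", p.pid, p.state);
    return 0;
}
```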
What about Multi threading?
Exceptional video! Explains it very well using some cool techniques :)
It’s like Boris Johnson had an illegitimate child.
I shall add that preemptive multi-tasking is possible because modern CPUs (x86 and ARM alike) have a clock mechanism. The OS instructs the CPU to run a clock and generate an interrupt every X milliseconds. The OS also registers its own handler for when this interrupt occurs. So all the OS has to do is wait for the CPU to stop execution, switch to kernel mode and call the OS handler, which then calls the OS scheduler.
What do you consider to be "modern"? The Amiga was capable of pre-emptive multitasking using interrupts, using a CPU released in 1979.
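As a user-space analogy of that periodic timer interrupt (a sketch assuming a POSIX system; a real kernel programs the hardware timer and its handler runs in kernel mode), you can ask for a signal every few milliseconds and do the "scheduling" in the handler:

```c
#include <stdio.h>
#include <signal.h>
#include <sys/time.h>

/* Analogy only: SIGALRM stands in for the hardware timer interrupt, and
 * this handler stands in for the kernel's tick / scheduler entry point. */
static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig) {
    (void)sig;
    ticks++;                          /* a real kernel would pick the next process here */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv = {0};
    tv.it_interval.tv_usec = 10000;   /* "interrupt" every 10 ms */
    tv.it_value.tv_usec = 10000;
    setitimer(ITIMER_REAL, &tv, NULL);

    while (ticks < 100) { }           /* the "user process" just burns CPU */
    printf("preempted %d times in roughly one second\n", (int)ticks);
    return 0;
}
```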
Wouldn't the state of the program move from CREATE to RUNABLE instead of straight to RUNNING? Or is there something I'm missing?
This is such a cool way to visualize multitasking, both in the way the video was done at first and also with the diagram and candies! I loved it! I think I’ll show this to one of my instructors at my college and see if he’ll use it as a demonstration video for one of his classes.
On a side note, the instructor I had for that class, before he retired, demonstrated multitasking by having students at the front of the class as processes and he would act as the OS and would tell the students to switch places as per the control-passing and timer-interrupt methods.
but you can't eat humans
Dave S Exactly! That’s why this is superior!
Why doesn’t create go straight into runnable?
When you're starting a process ( as a process ), you give control to the OS anyways, so it doesn't matter. It can take out the process which is currently running.
On create other stuff is happening, like memory allocation. Also, new processes might have a different priority than other waiting processes. For example, an OS might decide to run processes with the lowest priority coefficient (a function of priority and waiting time). A new process can make the scheduler favor it over a process with a slightly lower priority coefficient.
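A toy illustration of that kind of selection; the coefficient formula below is completely made up for the example (real schedulers use far more elaborate heuristics), but it shows how waiting time can pull an older process ahead of a nominally higher-priority newcomer:

```c
#include <stdio.h>

struct proc {
    const char *name;
    int priority;     /* lower number = nominally more important */
    int waited_ms;    /* time spent sitting in the runnable queue */
};

/* Invented formula: waiting longer shrinks the coefficient, so processes
 * that have waited a while float towards the front. */
static double coefficient(const struct proc *p) {
    return (double)p->priority / (1.0 + p->waited_ms / 100.0);
}

int main(void) {
    struct proc runnable[] = {
        { "editor",      10,  50 },
        { "compiler",    20, 400 },
        { "new process", 15,   0 },
    };
    int best = 0;
    for (int i = 1; i < 3; i++)
        if (coefficient(&runnable[i]) < coefficient(&runnable[best]))
            best = i;
    printf("scheduler picks: %s\n", runnable[best].name);
    return 0;
}
```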
make a video on hard drive data and securely wiping them
Question: How does the OS keep track of its interrupt clock, if it doesn't currently have control of the CPU?
I remember learning about interrupts back in high school, but it was a long time ago, and it was a pretty rudimentary explanation.
Interrupts can be both hardware and software. The CPU can have an internal clock (or even rely on an external one) to send a periodic interrupt signal to one of its input pins (or something similar). This interrupt moves control back to the OS.
Thank you for this video! I think it very clearly explains the key concepts. With regard to the preemptive multiprocessing, how is the external interrupt handled? Is it done on a hardware or software level? Also, how is the external interrupt frequency chosen? Can it be adjusted? If a process encounters a regular interrupt, does it override the external interrupt?
Sorry, I have lots of questions. Maybe a better question: where should I go to learn more details about this topic?
When the interrupt is received by the processor, it saves the state of the current process and jumps to the code that's been assigned to handle the interrupt.
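To make that concrete, here is a small user-space analogue (a sketch assuming Linux on x86-64; REG_RIP and the mcontext layout are platform-specific, and a real interrupt handler lives in the kernel): the saved state of the interrupted code is handed to the pre-registered handler as a ucontext_t, and we can read back exactly where execution was suspended.

```c
#define _GNU_SOURCE               /* for REG_RIP on Linux/x86-64 */
#include <stdio.h>
#include <signal.h>
#include <ucontext.h>
#include <unistd.h>

static volatile sig_atomic_t fired = 0;
static volatile unsigned long long saved_rip = 0;

/* The kernel saves the interrupted register state before calling this
 * handler, and passes that saved state in as a ucontext_t. */
static void handler(int sig, siginfo_t *info, void *uc_raw) {
    (void)sig; (void)info;
    const ucontext_t *uc = uc_raw;
    saved_rip = (unsigned long long)uc->uc_mcontext.gregs[REG_RIP];
    fired = 1;
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);                                  /* the "interrupt" arrives in 1 second */
    while (!fired) { }                         /* busy work until it fires */
    printf("was interrupted at instruction 0x%llx\n", saved_rip);
    return 0;
}
```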
Hello! Can someone tell me how the CPU knows to move a blocked task to the runnable status without running that task? Thanks
It's only "blocked" because the program was waiting on the OS to do something. The OS itself knows when it's finished that thing, and moves the task to the other pile. The task itself does not "perceive" a difference between blocked and runnable, it's entirely the OS's organisational structure and nothing more.
@@kaitlyn__L Thanks, you are very kind!
I love this channel so much. Thanks for another great video.
This looks nice, but doesn't the OS also run on the CPU? How does it manage CPU usage for other processes while also using the CPU?
Kernel sets an external timer, then passes control to the user process. Once the time is up, the timer sends an interrupt to the CPU, which in turn executes kernel code (i.e. the scheduler). Rinse and repeat.
No bears were harmed for this video🤣
Has anyone worked with closet-sized watercooled computers in the nineties? There was a feature of the operating system MVS called TSO, the time sharing option, where the operating system gave time slices to runnable programs. Other transaction-based systems did their own scheduling. Disclaimer: that's how I understand it, before anyone bashes me.
Steve, the ultimate process terminator.
Giving me flashbacks to OS class
yay for ZX Spectrum tape loading colours/sounds (even though it wasn't capable of multitasking when loading from tape, given that it used CPU timing to decode the tape audio)
Nice editing
Thanks for the video! Looking forward to the scheduling and context switching videos. :)
so if i write a threaded program, does the computer see it as multiple processes?
A thread is not a new process, though a process has at least one thread, i.e. at least one execution path. You could see a thread as being like a process, but it is more lightweight; also, that thread is scheduled by your code and not by the OS's scheduling algorithm in general.
Mikey Nice answer, but I would like to add that it is still the OS that schedules the threads (even though you have limited influence; for example, you can tell the OS to wait for all the threads to finish, usually by using a function called join(), before proceeding with the execution of the parent process).
Threads are sometimes referred to as "lightweight processes". Context switches are expensive between processes, but switching between threads is much quicker. Threads within the same process still share the same memory space as well as other things common to the process they belong to.
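A minimal pthreads sketch of exactly that (assuming a POSIX system; build with -pthread): both threads live inside one process and see the same shared_counter without any copying, and the join() the earlier reply mentioned waits for them to finish.

```c
#include <stdio.h>
#include <pthread.h>

static int shared_counter = 0;                        /* one copy, visible to every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;                             /* same address space, no IPC needed */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);                            /* wait for both threads to finish */
    pthread_join(b, NULL);
    printf("counter = %d\n", shared_counter);         /* 200000: both saw the same memory */
    return 0;
}
```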
This video is nice. I'm usually not super excited about this professor's videos because I'm not into hardware stuff (OS stuff is interesting though), but this one is really creative and original and interesting.
Nice editing work!
Sean impersonating MS-DOS 4.00 "File? What file?"
As opposed to Unix:
He's a file, he's a file, _you're a file,_ I'm a file! Are there any more files I should know about?
Actually incredible explanation, thank you!
"A modern operating system like UNIX" >.>
I've learned more in those 12 min. than in 3 months of Uni.. thx!
Thanks for the english CCs
"The process is completely unaware there is another program running." What Intel told you not to specter that to happen? ;)
"not to specter that to happen?"?
@@programorprogrammed Very poor joke on "expect that to happen". :P
@@TechyBen Yea they did not spectre anyone to find out, but then they went with the nuclear option and then had a public meltdown! :P lol
0:29 seconds in: "Now obviously we can't." Great, good to know, nice video!
What a very comprehensive video, and an explanation for how the OS can escape even infinite loops! I've been wondering how that is possible since forever.
Yep. Most proccesors have a programmable timer that fires interrupts for exactly this purpose.
What I have learned is that I need an external interrupt to exit the infinite loop of looking at my phone- open the same useless apps- decide it is a waste of time- open youtube on my pc- start watching something- decide it is a waste of time- take my phone.........
What kind of sweets are you using / eating in this vid? (edit for clarification: I got what kind but not the brand / name ^_^)
Jelly Babies. Those were made by Bassetts but other sweet manufacturers are available.
@@DrSteveBagley Got to get this or similar for my upcoming workshop on SOA. Great and inexpensive idea!
Can you run the videos in series instead of in parallel? It will be very funny to see how you guys had to record the sequence out of order so that after editing it makes sense XD.
Also, where is the unboxing!? I was promised an unboxing video. XD
That process died of starvation. Poor scheduling algorithm and/or implementation.
Thank you very much for the Turkish subtitle
Ah, now I get what they were trying to teach me during my computer science course.
so now I wonder how asynchronous programming works
Perhaps you should mention the "runnable" state is actually a first in, first out scheduling queue.
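A toy simulation of that FIFO runnable queue with round-robin scheduling (process names and times are made up for the example): a process that still has work left goes to the back of the queue after its time slice.

```c
#include <stdio.h>

struct proc { const char *name; int remaining_ms; };

int main(void) {
    struct proc queue[8] = {               /* circular buffer acting as the FIFO runnable queue */
        { "browser", 30 }, { "compiler", 70 }, { "music", 20 },
    };
    int head = 0, tail = 3, count = 3;
    const int quantum_ms = 10;             /* the time slice */

    while (count > 0) {
        struct proc p = queue[head];       /* take the process at the front */
        head = (head + 1) % 8;
        count--;

        int slice = p.remaining_ms < quantum_ms ? p.remaining_ms : quantum_ms;
        p.remaining_ms -= slice;
        printf("ran %-8s for %2d ms, %3d ms left\n", p.name, slice, p.remaining_ms);

        if (p.remaining_ms > 0) {          /* not finished: back of the queue */
            queue[tail] = p;
            tail = (tail + 1) % 8;
            count++;
        } else {
            printf("%s terminated\n", p.name);
        }
    }
    return 0;
}
```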
awesome video, needs more views
Love your channel - so much great information. Also Objectivity. The best youtubes!
Outside of process scheduling and context switching there are also watchdog timers, which is why a program that's just while(true) {} doesn't freeze your entire computer. To take the jelly baby analogy further, the OS itself is in fact one of the jelly babies. OSes tend to take into account how much time a process spent in a system call into the OS and consider that part of its task time.
Not necessarily, some processors have separate "super" (OS) and "user" (applications) states, with their own separate stacks and/or registers. So the OS isn't just a jelly baby then.
Just like time sharing in the old days.
process termination is yum
Wow what a great video! Looking forward to videos about the other topics he mentioned.
this is true on a single core processor but i haven't seen any single core processors since like 2000
The processor having more than one core doesn't mean that this isn't true. This is actually how processes are scheduled.
It is basically the same for multi-core processors, just with n running states. So let's say 4 cores. That'll result in 4 running states and they all share the connections to blocked and runable states. You can easily see how that'll speed things up :)
@@Byter09 so basically ... for muilti core just replace "cpu" with "core" , :)
@@logangraham2956 Make that a scalar core. Super scalar cores with SMT architecture on the other hand are a different story again.
A lot of microcontrollers are still single core. And AMD launched their Athlon X2 in 2005, not 2000.
this topic used to be known as concurrency
Today CPUs have multiple cores, and some sneaky computers even have multiple CPUs. Indeed, multiple tasks are running in a genuinely parallel fashion in real time.
First 1, then 2, then 3 ... In the next video I expect to see 4 Steves on a screen!
Not five, then eight, etc? ;)
I like the round robin process scheduler
Again, brilliant!!
Awesome vid! I've heard of jelly babies before but never seen them, thank you! Oh and the computer stuff was cool too I guess.
(please accept that in the spirit intended, I've run out of words to appreciate the time you all spend making these!)
Have been wondering about this for eons.
Is that why we call the ACM magazine ACM Queue? Btw, since Moore's law will end in 2025, is it possible to look at the motherboard and try programming taking into account PCI latency (CACM vol. 60, no. 4, pages 48-54, "Attack of the Killer Microseconds")?
Video is nice, I got a lot of information. Thanks for it.
But I want to request: please add some programming videos also. I liked whatever you teach.
ooooh there's candy on the desk
I want some jelly babies now damnit
10:32 A modern OS like...Windows NT? There hasn't been a version of NT for quite a while! At least, not with that _name_ .
afaik, all modern windows versions are based on Windows NT. So not really with the same name, but deeply inside they are still Windows NT.
Well if that got you giggling, prepare to have a sock put in your mouth: Browse to C:\Windows\System32 directory... hit 'n' and scroll down to ntoskrnl.exe
Yes that's right NT OS Kernel, you've been using the Windows NT Operating System Kernel all this time and you didn't even know it! ;)
P.S. If you weren't aware, the system actually loads that at boot time and that's what runs during the lifetime of the OS; that IS the kernel it's using... And that's where you have to patch PatchGuard out of if you for whatever reason want to remove it, and that's where all the kernel APIs reside also. Calling any API that needs to interact with the kernel ends up in a call to a kernel-mode API within ntoskrnl.exe. (It's called 'exe' but it's not really an exe like a normal usermode application; it IS THE KERNEL!)
Operating Systems don't wait for the process to hand control back. This is how Windows 3.1 used to do things, and is why a single errant program could hang the computer. Simply running a program with an infinite loop would bring the whole system down! Modern Operating Systems use a Programmable Interrupt Timer to take control of the CPU back from the process and programs can block these but must be careful to only do this for the duration of atomic operations.
Furthermore, it is called multi-tasking by people who write Operating Systems professionally. I have been in the Software Development field and have never once heard a Software Engineer refer to the Operating System as a "Multi-Programming" OS. It is a Multitasking OS.
Please do not "learn" from this guy, who is clearly a "Computer Scientist" who hasn't spent a day of his life working in Software Development.
1:46 Five nights at computerphile
Great! I understood that idea.
This is called "the interesting bit" and we'll do another video about that. Twice.
great stuff, thanks
Now I have to go and buy some sweets ...
OK, so terminating processes makes you fat.
I should stop terminating processes, I'll tell my nutritionist at the next visit.