this didn't help me at all
I love short and effective conceptual videos, thank you Udacity
Short and straight to the point. Thanks for this explanation.
What do you mean by code, data and files?
Could you use more jargon, please? 🙄
Lady was yapping fr
And versus a coroutine? Where would a coroutine sit in this schema, OR what would the schema of a coroutine look like if it isn't in it / doesn't share the same structure / is a concept from another paradigm?
Thanks for the concise explanation!
I'm just gonna go take the course on Udacity.
Sounds very useful but seems out of place cause I lack context.
Are the threads you are talking about here kernel-level threads?
So the structure would be:
a CPU running processes, where each process can have multiple kernel-level threads (all kernel-level threads of a process share the same address space, but they have separate registers, stacks, etc.), and each kernel-level thread can have multiple user-level threads?
Really well explained!
What are the benefits of a multiprocessor system for a single-threaded process?
A question for you: do a process's threads always execute on the same CPU/core that the process is pegged to, and move with it to a different CPU/core if the process moves?
Which multithreading model is more beneficial: one-to-one or many-to-many?
Really helpful, thank you!
Thank you in 2020!
Nice
Amazing explanation! Cheers : )
Good explanation. Thank you :)
Son, 5 extra marks for you.
You are cute Amulya! Have a great day!! I love you!
Thank you
What does "share all of the virtual-to-physical address mappings" mean?
When you start a process (a dormant program now executing), it starts off with one thread. Each process (program) is given a chunk of memory by the OS for its own use.
When we start a new thread from WITHIN a process, it's almost like starting a new process, BUT it's not; starting a new process means starting a new instance of the program entirely. That's expensive, so threads let us avoid having to create entirely new processes when it makes sense.
When a process creates a new thread (going from a single-threaded to a multi-threaded process), the new thread makes use of the same memory the OS allocated for that process, and it should, since threads live inside a process.
For example, let's say I had a global variable "a = 1" at the top of my program and then spawned a new thread from within my currently running process. My new thread would be able to see that global variable "a = 1" and also update it to "a = 2", and all other threads would see that update. This means threads can communicate with each other (see the sketch after the links below).
In summary, in a way threads are like running multiple processes, but instead of actually creating multiple processes (programs) and paying a big cost, we're re-using memory.
Nice reading here: www.backblaze.com/blog/whats-the-diff-programs-processes-and-threads/
Some code here: www.geeksforgeeks.org/multithreading-c-2/
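To make the global-variable example above concrete, here is a minimal sketch using POSIX threads in C (compile with gcc -pthread); the variable and function names are made up for illustration, and this is just one way to write it:

```c
/* Minimal sketch: a global variable shared between two threads of one process. */
#include <pthread.h>
#include <stdio.h>

int a = 1;                  /* global: lives in the process's memory, visible to every thread */

void *update(void *arg) {
    a = 2;                  /* the spawned thread writes to the same variable */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, update, NULL);  /* spawn a second thread inside this process */
    pthread_join(t, NULL);                   /* wait for it to finish */
    printf("a = %d\n", a);                   /* prints "a = 2": the main thread sees the update */
    return 0;
}
```

If several threads wrote to "a" at the same time you would also want a mutex, but with a single writer this is enough to show that the threads share the process's memory.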
"address" can be seen as a reference to a location in the computer's memory that is allocated to the process. The OS should have a map from addresses to memory locations and it should protect the location reserved to a process, so the other processes are blocked from accessing it. That location can store the process' code and/or data. For the sake of understanding the video, the multiple threads in a process have access to the shared addresses within its process.
To go further and have an idea of "virtual" or "physical" memory, let's define "memory". Traditionally, memory refers to RAM memory sticks, which usually provide faster access to the code and data they store, as compared to other types of storages -disks - that usually are Hard Drives (HDs) or Solid State Drives (SSDs). However, RAM sticks cost more, so they tend to be more limited in terms of maximum storage capacity. An OS may use the disk to emulate more memory storage capacity for the processes. So a process may "perceive" it has X amount of virtual memory; some of that may be actually a representation to the physical memory (Y) and some, a virtualization (Z) - the data is actually stored on a disk. So the process may have a storage capacity of X = Y + Z
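For a rough illustration with made-up numbers: a process might perceive X = 8 GB of virtual memory while only Y = 6 GB of it is backed by RAM and the remaining Z = 2 GB is kept on disk, so X = Y + Z becomes 8 GB = 6 GB + 2 GB.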
@full_stack9810 that's a very good example!
@CliffordFajardo well explained!
there's a thread among us
Schier