I found out about this, so I was curious if I was right, and thankfully there's a video for it. I made a program that replicated itself 32 times, and if I press X, it all activates. Each of the programs is basically an autoclicker that hits every 1 millisecond. I tried using 0.1 or more, but it seems to me that I can only hit 1 millisecond worth of it. I wasn't that familiar with threading; however, multiprocessing has ever since helped me do extreme mathematical computations. (Although I did use threads to activate them all.)
Processes don't need pipes or queues to access another process's memory; those just provide a managed way of sharing data. Cheat Engine is software that openly shows how one process (Cheat Engine itself) can alter another process's memory without any support from the process it's messing with.
This is the entire concept behind NodeJS. The NodeJS runtime operates in a multi-process paradigm. There are multi-threaded operations in NodeJS, but it is not natively multi-threaded. Similar to Python; almost identical.
I can't imagine the effort and time you have invested in making this video... Very informative
Thank you very much!
I fully agree.
It is one of the best videos showing the issues with multithreading and how it compares to multiprocessing. It really deserves a higher piority in youtube search.
@@ErikS- yup, 18:24 clear my mind about it.
I was always wondering the real impact of it.
damn right I was for using threading, but yet, I understand now the utility of multiprocessing.
+1
🦾
Where can we find the full code? Can you give a GitHub link please? Also, could you explain any risks of doing this kind of multiprocessing?
Too bad this video is incorrect on almost every level.
Very nice, full presentation. The short of it is that standard Python (CPython) doesn't support parallel execution of Python threads. For most programmers, when you talk about having multiple threads, the assumption is that those threads can and will execute in parallel. Unfortunately, Python was designed with a single-core CPU in mind, so even though the idea of threads has existed for a while in computing, code wasn't likely to be run on a multithreaded/multicore/multi-CPU machine that could do anything in parallel. It was just the operating system handing out small slices of time to execute one thread or another, and it was perceived as if both were happening at the same time, very much like your graphs show.
Python, like most interpreted languages, cannot get past this because of the synchronization and locking needed to share access to data across threads, so it inherently allows only one "Python interpreted" thread to run at a time. Only library implementations in C can get around this under the hood, by spawning real threads on Python's behalf to do the work. Or there is the multiprocessing approach, which creates a new process and an independent Python interpreter with entirely separate program state and memory. That isn't really a Python-specific solution, because any programming language can spawn a new OS process (provided a library exposes the fork() and exec*() system calls) and the OS will then execute that process in parallel on a multicore machine. But the thing about multiple processes is that it's harder and slower to share and synchronize data between processes than between threads. That may not matter when little synchronization is needed (the case when only an end result matters at the end of the parallel work), but it can be a severe limitation.
The last thing I'll say is that oftentimes IO-driven or IO-heavy applications don't really need the performance boost of true parallel execution. Waits for IO (disk and network, for example) are so slow compared to CPU execution that most threads would be waiting on IO anyway. With a proper async-IO setup (kqueue, select, epoll, IO completion ports) you can use a single thread to handle and dispatch thousands of IO requests and still be bottlenecked by IO. This is how and why people can still write "performance intensive" applications in interpreted languages and compete with a language like C or C++. Maximizing IO efficiency is simply something C/C++ sometimes offers no benefit for, so much "slower" languages appear just as fast.
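The threads-vs-processes point above can be sketched as a tiny, hedged benchmark: the same CPU-bound work run under a thread pool (serialized by CPython's GIL) and a process pool (true parallelism on a multicore machine). Exact timings depend entirely on your hardware; the structure is the takeaway.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_down(n: int) -> int:
    # Pure-Python busy loop: it never releases the GIL, so threads cannot
    # overlap it, while separate processes can.
    while n > 0:
        n -= 1
    return n

def timed(executor_cls, jobs: int = 4, n: int = 1_000_000) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=jobs) as pool:
        assert all(r == 0 for r in pool.map(count_down, [n] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    # On a multicore machine the process pool typically finishes in roughly
    # 1/jobs of the thread pool's time for this kind of work.
    print(f"threads:   {timed(ThreadPoolExecutor):.2f}s")
    print(f"processes: {timed(ProcessPoolExecutor):.2f}s")
```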
I combined multithreading with multiprocessing using ThreadPoolExecutor and ProcessPoolExecutor, thinking I could achieve true parallelism. But you're right: 99% of the time, if your program is IO bound there is no real benefit to multiprocessing. The extra overhead of spawning processes is simply not worth it.
What I can't understand from the video is why multithreading (even if non-parallel) should help, in theory, in IO-heavy applications. Can you help me?
@@alessandropolidori9895 When it comes to IO, the operating system does the work in the background. That means you can schedule another thread while the OS keeps working on the IO, and by the time your original thread is scheduled again, the OS may have finished, so you can continue your work.
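A small sketch of that answer: `time.sleep` stands in for a blocking network or disk call, and CPython releases the GIL while a thread waits, so the OS lets the other threads run in the meantime and the waits overlap.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(i: int) -> int:
    time.sleep(0.2)   # the OS "does the work"; this thread only waits
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_io, range(4)))
elapsed = time.perf_counter() - start

# Four 0.2 s waits overlap, so wall time is close to 0.2 s, not 0.8 s.
print(results, f"{elapsed:.2f}s")
```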
@@alessandropolidori9895 In theory, one place it can help is when the hot data being processed can mostly be cached in a CPU core's L1 or L2 cache. An example could be a Redis-like implementation with a Bloom filter (a very small structure that can definitively say when data is not in the slow data store behind it, and is ~99% sure when it is). In such a scenario it helps if each data store runs on its own CPU core, so the Bloom filter is already sitting in that core's ultra-fast L1 or L2 cache when needed. To be honest, for Python this is a bit far off; you would usually implement such things in a systems language like C, C++, Rust, Ada, or even Go (a Redis clone does indeed exist in Go). Go is an example of a language that still has its own runtime and garbage collection but is optimized for such tasks.
The more practical example is that in IO-heavy work, some individual tasks will block (classic examples: fetching data from a SQL database or a URL). You certainly don't want all the other tasks waiting on it. The modern approach is async, but that is relatively new in Python (only a few years old as a production-ready option); multithreading was the answer before the async implementations were available and mature.
Multithreading is also still a simple way (though usually slightly less performant than async) to retrofit this mostly non-blocking behaviour onto code you can't or don't want to refactor.
In general, nowadays I'd recommend optimizing a program to run either on one CPU core or on all of them, nothing in between. You can't really mix the two use cases and stay performant. In Python you'd end up fighting the GIL (global interpreter lock) a lot, and if you have to put both use cases into one application, I'd suggest two different programs that communicate asynchronously, e.g. via a message queue. I remember a lot of headaches with machine-learning-optimized implementations (e.g. spaCy) combined with a web server (running under WSGI). Short story: don't do it; separate them 🙂
I notice a 20-50% slower file copy with Python compared to the system, for example with shutil.move() on Windows; I'm running the file copy on a separate PyQt thread... just renaming the file, which should not take any noticeable processing time.
Do you think another method might be faster? I'm asking because I'd expect C++ to be as fast as the system in this case, not 20-50% slower...
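One hedged observation related to this question: on the *same* volume, `os.replace` is a pure metadata rename (no data copied), and `shutil.move` also degrades to a rename when it can; it only falls back to copy-then-delete across volumes. The sketch below is illustrative, not a rigorous benchmark of the commenter's Windows setup.

```python
import os
import shutil
import tempfile
import time

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a.bin")
with open(src, "wb") as f:
    f.write(b"\0" * 1_000_000)   # 1 MB of stand-in file data

# Same-volume "move" is just a rename: no bytes are copied.
dst = os.path.join(tmp, "b.bin")
start = time.perf_counter()
os.replace(src, dst)             # explicit, atomic rename
print(f"os.replace: {(time.perf_counter() - start) * 1e6:.0f} µs")

# shutil.move also renames here, since tmp is a single volume.
dst2 = os.path.join(tmp, "c.bin")
shutil.move(dst, dst2)
print(os.path.exists(dst2))
```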
Absolutely the best video on youtube describing how threading works in Python, with concise demonstrations and a well thought of script and presentation. 10/10, subscribed
This is the most comprehensive video I've ever seen on multithreading and multiprocessing. Great job!
Excellent explanation of some of the most complicated questions I have ever come across in an interview setting. Even though this comes after the fact, I am super glad to learn it with such clarity of thought. This is how you become fearless!!! Thank you Dave ❤!
Amazing explanation, could not find such clarity anywhere else !
The amount of work done here is unbelievable. Thank you so much
Massively underrated video. Saved it in my library. Thank you sir.
This is the first time I've really figured out multiprocessing and threading in Python. Thanks a lot.
Glad it helped!
This is the very best explanation of threading vs multiprocessing that I have ever seen. Well done!
this video is amazing, honestly one of the best I've ever seen, thank you from the bottom of my heart for dedicating so much time to creating it❤️
The best I have ever watched on multiprocessing v/s threading!! The visualizations were a complete treat ❤
This channel is a hidden gem!
This is sooooooo great... probably the best explanation on YouTube
Fantastic data visualization with the activity charts; I will be checking out more of your videos
Brilliant demonstration! It really helped lock in my understanding of these Python terms (especially when they seem to be used interchangeably elsewhere online)
The best video on YouTube explaining the concept! Thanks
Yay, it's so interesting to see a visual representation of something I've been figuring out at work for a few years with threading/multiprocessing.
You can understand it on instinct, but never see it visualized this vividly.
Brilliant representation of the concept. Thanks for all your effort.
Thank you Dave, I'm so glad YouTube's algorithm brought your video to my daily feed. A really fascinating insight into threads and processes, and the presentation style was perfect. Best wishes.
Literally the best video I've seen yet on this topic. Keep posting, man!
2 minutes into this and I already understand it better than from all the other reading I did online. Nice!
Hands down best video on python multithreading and multiprocessing.
What an incredible video. I'd just been blindly picking one or the other, unsure of the differences between them, but this makes everything so clear. I'm so glad I found it!!
Brilliant work!! Best video on multithreading/processing I've seen in a while
This video was so informative, even for someone who is unfamiliar with the concept
You deserve a lot more recognition
One of the best lectures on multiprocessing and threading that I ever saw. Thanks for the guide and info, this will help me improve my own lectures on the subject
Woah, just found your channel. This is truly a goldmine.
I am impressed with your use of visual aids in explaining how all this works. It definitely makes a lot more sense.
This video is a marvellous piece of craftsmanship
I really, really don't understand why you don't have more followers. Keep up the good work. This is really well done! Informative and straight-up fun!
As someone who does data analysis and plotting with Python, thank you. So much.
This video is completely underrated.
One of the best videos I have seen on the internet... This video forced me to subscribe to this channel.
I'm glad I discovered this channel, though unfortunately a bit late (just today, via this video), and it is 3 yrs old as of now. I suspect he is no longer active, because the latest video on this channel is also 2 yrs old. However, I hope he sees my comment and comes back with a similar video using Python 3.13, which has just been released and offers the option of disabling the GIL. I will do my own tinkering and would be interested in comparing with other experts like Dave. Thank you so much for your work.
That explains the one intern I had who refused to believe that threads run simultaneously. He said he had some Python experience, but we use Java.
Very well explained :) I can see your number of subscribers growing at a steady pace mate. Keep it up! Good stuff!
Thank you very much! Yes steady growth is encouraging 😊
A few things to note:
The GIL (and therefore sequential thread execution within a process) is only an issue in CPython, not in (most) other Python interpreters.
Jython, for example, has truly parallel threads. Most other languages have them too; this is mostly a Python problem.
Very nice explanation. Keep up the good work.
Outstanding video. “Like” is an understatement. So clear and informative.
This is a masterpiece, honestly. Content-wise it is very informative, but the way you present everything is like watching a sci-fi movie.
This is by far the most comprehensive and easily consumable video on any CS learning I've ever seen. Great job! Giving you a sub for sure.
Keep it up Dave!
Unbelievable that I found this video! It really opened my mind about Python! Please make more videos like this!
This is the best video explanation on this topic, WOW
This is hands down the most thorough video on this topic. And YouTube shows it to me exactly 1 year after I desperately needed it.
Better late than never, I guess.
You can run parallel threads using PdP (Parallel distributed Processing) if you have a process that can run non-serially... obviously there is networking overhead. Great video; lots of ground to cover.
One major improvement I've found is taking your CPU intensive Python code and writing it in this language called "C". Joking aside, great video!
It was a very good and impressive presentation. Listening to it made me feel as if David Attenborough were describing the lyrebird in the BBC documentary. :) Thank you for your effort...
I wish people in general were aware of the amount of intellectual work put into the devices we use daily. We take things for granted in society and are losing respect for science and math, the very things that sustain our civilization.
Great video. The best multiprocessing vs threading graphical explanation on the whole internet. Thanks for the dedication. New subscriber.
My rule of thumb from trial and error is that you should always leave about 1 core free for each set of 8 (using Python on Linux). So with 16 cores, leave 2 free and use 14 max. Otherwise the system just bogs down and you get less performance and a greater chance of hanging.
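The commenter's rule of thumb (leave about one core free per set of 8), expressed as a small helper. To be clear, this is their trial-and-error heuristic, not an official recommendation, and the function name is made up for illustration.

```python
import os

def workers_to_use(total=None):
    """Apply the 'leave ~1 core free per 8' heuristic to a core count."""
    total = total or (os.cpu_count() or 1)
    reserved = total // 8          # one core reserved per full set of 8
    return max(1, total - reserved)

print(workers_to_use(16))  # 14, matching the comment's example
print(workers_to_use(8))   # 7
```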
This video is amazing! I don't usually go to YouTube for programming content because it's all just copy-paste. This is one of the most informative and useful videos I've come across in a long time. I love the graphics/visuals. I don't know how you managed to make multithreading and multiprocessing so engaging, but bravo! 👏 Keep up the great work and thank you for the content!
Just a few weeks ago I went through this discovery myself when writing a wordle solver in python. This video would have been very helpful at that time. Everything explained here is spot on.
This has been the best explanation of the differences between the two I have seen.
My only gripe is that I really wanted to see this same data with an added column for true single-threaded work, with no threading or multiprocessing enabled. How much lower than the 2 million is it?
That would have been helpful to see. Otherwise this was excellent and helped clarify what I need to use when. Thanks so much.
You should expect it to be more than 2 million, not less. Threading has some overhead, and since none of the operations in this example are I/O bound, you never get that overhead back.
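A quick sketch of that reply: the same total amount of CPU-bound pure-Python work done on one thread versus split across four threads. Under the GIL the threaded version gains no parallelism but pays scheduling and lock overhead, so the single thread is typically at least as fast. Timings will vary by machine, so the sketch only prints them rather than asserting a winner.

```python
import threading
import time

def spin(n):
    # CPU-bound pure-Python loop; never releases the GIL voluntarily.
    while n:
        n -= 1

N = 4_000_000

start = time.perf_counter()
spin(N)
single = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=spin, args=(N // 4,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"single thread: {single:.2f}s   four threads: {threaded:.2f}s")
```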
Nice work. It would be nice to have more examples comparing types of operations, for example image processing, machine-learning cross-validation, audio processing, etc. I find it's not always simple to pick between threading and pooling, since most applications are a mixture of CPU work and data calls. I guess it could be broken into smaller steps, but I'd love to see how to structure that efficiently.
I'd be curious whether there are differences between Python implementations when it comes to threading and multiprocessing. It might be interesting to see if PyPy, for example, is more performant at spawning tasks.
This is the first video from you I've seen and really enjoyed it!
Knew all of that already (wish it were more tl;dw, like 2 mins), but I think it's super extensive and informative for a beginner.
An excellent presentation on the pros and cons of threading and multiprocessing. Have you used CPU affinity and priority to mitigate the non-deterministic performance of a process?
I am trying to set up a multiprocess environment where the main process runs a Tkinter GUI and starts a second process to control a data-acquisition module.
Thanks for your outstanding effort in your presentations.
Hi Phillip. I believe you can control the affinity of the workers, but for a multiprocessing Pool this is not a simple task! I also don't know whether it's possible when running on Windows or macOS.
As for your program, this sounds very similar to things I have done at work, where I have a front-end, event-based GUI and multiple background threads communicating with various instruments.
One thing I would recommend is using Qt. I used to use Tk because it is arguably much simpler for getting a program up and running, but as the program becomes more complex, Qt's built-in libraries are very helpful.
Good explanation; next time I'll know exactly which one is better for my purpose.
Excellent video, superbly made. Thanks for posting.
It took me until 7:50 to realize that the voice is AI-generated. Good job. Also, thanks for the helpful, in-depth video!
I wish your channel growth. A very informative video with amazing visualization. I hope there will be more like this in my recommendations.
Awesome, no pressure and yet informative! Good work, thanks a lot! Although I knew the topic well from uni, I could deepen my understanding with this!
Understanding this in the language you are working in is essential for optimization and preventing bugs. For example, I learned recently that T-SQL runs some queries in parallel, so sometimes you need to use loops.
Awesome attention to details 😀
Nice video. Many aspects of this video can be found in Amdahl's law (just google it), especially when you divide the speed-up factor from Amdahl's law by the number of processors used to get an efficiency estimate.
This leads to the same results you show in the video.
Incredible video and crystal clear explanations. Hope to see more !
Thanks for this video. I think this will help me out at work. Some idiot is sending us a database table output where each row of data is a JSON file of the column headers and one row of data, and we're getting upwards of 50k files a day, which could just be a single file that's only a few MB. I thought that multiprocessing was the way to go for this, thinking that it would help with all the I/O calls, but it looks like multithreading is it.
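For what it's worth, a minimal sketch of the threaded version (the directory layout and one-object-per-file format are assumptions based on your description): threads fit because reading thousands of small files is I/O-bound, and the GIL is released while waiting on reads.

```python
import json
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_row(path):
    # Each incoming file holds one row as a JSON object
    # mapping column names to values.
    with open(path) as f:
        return json.load(f)

def load_all(json_dir, workers=32):
    # I/O-bound work: threads overlap the waits on file reads,
    # so the GIL is not the bottleneck it would be for CPU-bound work.
    paths = sorted(Path(json_dir).glob("*.json"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(load_row, paths))
```

If the JSON parsing itself ever dominated (it won't at a few MB total), that part would be CPU-bound and multiprocessing would come back into play.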
This was a super quality educational video, thanks so much!
3:37 "By ensuring only a single thread can execute at any one time thread safety is guaranteed in python and by thread safety there can be no deadlocks"
That's not even close to true. Having only one thread executing at a time does simplify certain types of races but it absolutely does not get rid of deadlocks.
As soon as you can have communication between threads (shared access to variables counts as this) you can implement _some_ version of mutexes and thus you can have deadlocks.
In fact even if you never start a second thread you can generate deadlocks!
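To make that concrete, here's a minimal sketch: a plain (non-reentrant) `threading.Lock` acquired twice by the same thread would block forever, GIL or no GIL. A timeout is used only so the demo terminates instead of hanging:

```python
import threading

def single_thread_deadlock():
    # The GIL serializes bytecode execution, but it does not prevent
    # deadlocks: a non-reentrant Lock acquired twice by the same
    # thread waits forever on itself. The timeout is only here so
    # this demo returns instead of hanging indefinitely.
    lock = threading.Lock()
    lock.acquire()
    acquired_again = lock.acquire(timeout=0.1)  # would hang without timeout
    if acquired_again:
        lock.release()
    lock.release()
    return acquired_again

print(single_thread_deadlock())  # False: the re-acquire "deadlocks"
```

Swap the `Lock` for an `RLock` and the second acquire succeeds, which is exactly why the distinction exists.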
Well said, I thought this too! This part of the video threw me off a lot!
Agreed, this is where I almost stopped watching and just fast forwarded through the rest of the video.
So (coming from the NodeJS world) I understand that NodeJS is multithreaded out of the box for I/O operations, and it can be multiprocessed with Workers... This is why it can be so fast: the « single thread thing » is in fact delegating I/O operations to a multithreaded sublayer (libuv)...
PEP 703 go brrr! I'm excited to try it on python 3.13
Excellent work, very informative! Thanks a ton for your time!
I found out about this, so I was curious if I was right, and thankfully there's a video for it. I made a program that replicated itself 32 times, and if I press X, it all activates. Each of the programs is basically an autoclicker that hits every 1 millisecond. I tried using 0.1 or more, but it seems to me that I can only hit 1 millisecond worth of it. I wasn't that familiar with threading; however, multiprocessing has ever since helped me do extreme mathematical computations. (Although I did use threads to activate them all.)
Very informative video. Thanks a lot !
Awesome video and so very well explained. Thank you so very much. It was excellent.
Perfect visualisation and well presented content. Thank you for your efforts!
Glad you enjoyed it!
Thank you so much for your content! very useful and I really enjoyed the way you structure and visualize your video! Thank you!
First-class work, congrats!
Great explanation! How did you make those graphs?
Really informative video!! I struggled a bit with the accent and speed, but it's really good!
The plotting looks interesting; any chance of a tutorial or code?
Thanks for this -- looking forward to more of your work!
Nice video and explanations. How did you create the graphs in this video?
I used matplotlib, and multiprocessing to speed it up. I actually go over this in another video: ua-cam.com/video/NZ3QWpW8kv8/v-deo.html
Really nice explanation.
Processes don't need pipes or queues to access another process's memory; those just provide a managed way of sharing data. Cheat Engine is software that openly shows how one process (Cheat Engine) can alter another process's memory without any support from the process it's messing with.
Great video! One question, how did you create the time series visualizations of threads and processes?
This should become a standard teaching tool in university comp sci classrooms.
Thanks for the detailing. Excellent
Can you explain how you did the multithreading animations? How did you collect data and how did you create such an animation? Very cool video. Thanks!
He explains it pretty well around 6:30. Not sure what he used to plot the results tho
Congratulations, this is one of the most didactic videos I've seen about Python. Good work, and I'll certainly watch more of your videos!
Two processes could exchange data via shared memory.
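Right, and since Python 3.8 the standard library exposes this directly via `multiprocessing.shared_memory`. A minimal sketch:

```python
from multiprocessing import Process, shared_memory

def child(name):
    # Attach to the block the parent created (by name) and
    # modify the shared bytes in place, with no pipe or queue.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

if __name__ == "__main__":
    # Parent allocates a 4-byte shared block (zero-filled on creation).
    shm = shared_memory.SharedMemory(create=True, size=4)
    p = Process(target=child, args=(shm.name,))
    p.start()
    p.join()
    print(shm.buf[0])  # 42, written by the other process
    shm.close()
    shm.unlink()  # free the block once no process needs it
```

The catch the video's point still applies to: nothing synchronizes access to that buffer, so concurrent writers would need a lock on top.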
This was incredibly helpful!
Hope your channel grows well.
Me too! Thanks for watching!
Thanks for the really great information.❤
Great explanation! Thanks for clarifying.
Wow! Really well explained
This is the entire concept behind NodeJS. The NodeJS runtime operates in a multi-process paradigm. There are multi-threaded operations in NodeJS, but it is not natively multi-threaded. Similar to Python, almost identical.
HOLY CRAP THIS IS A GREAT USE CASE!
I love your channel :) you are a 3Blue1Brown in the making, if not better