In the function shared_value_increment, you can add a loop to increase the execution time. On my laptop, it always shows 100 if there is only one line to execute.
You may get lucky, but a lock is needed to ensure an operating system interrupt and context switch does not interfere and cause a data race 🙂
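For anyone curious, here is a minimal sketch of the experiment described above (the loop and thread counts are arbitrary): the extra iterations lengthen the critical section, which makes a mid-update context switch far more likely, and the mutex from the reply is what keeps the result correct either way.

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

static int shared_value = 0;
std::mutex gLock;

void shared_value_increment() {
    gLock.lock();
    // The loop lengthens execution time; without the lock, this is where
    // an OS context switch can cause lost updates.
    for (int i = 0; i < 100; ++i) {
        shared_value = shared_value + 1;
    }
    gLock.unlock();
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 100; ++i) {
        threads.push_back(std::thread(shared_value_increment));
    }
    for (auto& t : threads) { t.join(); }
    std::cout << "shared_value = " << shared_value << "\n"; // 10,000 with the lock
}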
Thank you so much for this series. I really like how your examples build off each other, so it's easy to introduce the new concept. As you said, it's hard for our brains to compute concurrently.
Cheers -- thank you Damon!
Thank you so much. This improved (and corrected) my understanding of mutexes drastically.
If I may make a request, I'd like you to illustrate the difference between data races and race conditions using C++. There's no video content on the internet explaining these two terms via C++.
I have read a few answers on various forums, but they weren't really helpful, as they ranged from claiming these concepts are the same to claiming they are totally unrelated.
Noted -- I will add this to the wishlist for the concurrency series.
This has been a great lecture. I'm also learning from your C++ series -- great series!
Cheers!
Clear explanation, good lesson structure, and very informative. Your lessons are great, thank you!
Cheers!
Thanks Mike. I have been studying this. This helps.
Cheers!
Great video, but please fix your sound.
Cheers!
Thank you for the video.
Why did you declare the std::mutex globally and not inside the function?
Can we reuse that mutex class anywhere?
The std::mutex was made global because every thread needs to be able to access it (i.e. check if the mutex is available).
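To illustrate, a minimal sketch (the names are illustrative): the mutex must be a single object visible to all threads, so a mutex declared inside the function would be a brand-new, unshared object on every call and would protect nothing.

#include <mutex>
#include <thread>

int shared_value = 0;
std::mutex gLock; // one object shared by every thread -- hence global

void shared_value_increment() {
    // WRONG (shown for contrast): a local 'std::mutex m;' here would be a
    // fresh mutex per call, so threads would never contend on the same lock.
    gLock.lock();
    shared_value = shared_value + 1;
    gLock.unlock();
}

int main() {
    std::thread t1(shared_value_increment);
    std::thread t2(shared_value_increment);
    t1.join();
    t2.join();
}

An alternative to a global is wrapping the mutex and the data it protects together in a class.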
Hi,
Thanks, and I really liked the way you explained this concept... But I have a small doubt and hope someone can answer it. Is the mutex actually locking the other threads out of accessing shared_value, or is it locking the other threads out of executing the instructions inside the mutex block, "shared_value = shared_value + 1"?
My doubt is: let's say one thread is executing the function shared_value_increment(), which takes the mutex lock, and let's say I have another function which just returns the value of shared_value, as
int return_shared_value() {
    return shared_value;
}
If I call return_shared_value in another thread, will the first thread, which holds the mutex lock in shared_value_increment(), block the second thread that is just trying to read the value, or does the mutex only lock out the other threads that are trying to increment the value using shared_value_increment()?
The mutex is effectively creating a 'critical section' so no other thread can access that code (i.e. enter the critical region). If you want to protect the actual shared_value, you can use a std::atomic (covered in another video in this series here: ua-cam.com/video/f_C4eYxBWdQ/v-deo.html )
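To make the answer concrete, here is a minimal sketch (the function names are made up for illustration): the mutex only serializes the code paths that lock it, so a reader that never takes the lock is not blocked -- and reading shared_value that way is still a data race. The reader must lock the same mutex (or shared_value must be a std::atomic) to be safe.

#include <iostream>
#include <mutex>
#include <thread>

int shared_value = 0;
std::mutex gLock;

void shared_value_increment() {
    std::lock_guard<std::mutex> guard(gLock); // serializes this path only
    shared_value = shared_value + 1;
}

int return_shared_value_unsafe() {
    return shared_value; // never blocks -- unguarded read, a data race
}

int return_shared_value_safe() {
    std::lock_guard<std::mutex> guard(gLock); // same mutex => serialized
    return shared_value;
}

int main() {
    std::thread writer(shared_value_increment);
    std::thread reader([] { std::cout << return_shared_value_safe() << "\n"; });
    writer.join();
    reader.join();
}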
Unfortunately, I haven't seen the fluctuations demonstrated so far on my local machine. I tried up to 100,000 threads and the result is always constant and as expected.
Are you running with optimizations by chance? It's quite lucky that you are getting a consistent value :) In reality, machines won't spawn 100,000 threads either, so perhaps there is a resource limit on how many threads can run that's producing more predictable behavior. That's what makes these bugs tricky, however!
@@MikeShah No optimization.
100 threads -> 7,100 microseconds
1,000 threads -> 85,000
10,000 threads -> 1,200,000
100,000 threads -> 33,300,000
The static int gets incremented; I don't have the feeling I hit a thread limit. Otherwise, why would it get incremented to 100,000 with 100,000 threads? Mutex or not doesn't affect the result.
@@SaschaRobitzki Interesting -- the only thing I can think of is the compiler getting rid of the threads and inlining everything. Very strange that this works without a mutex. Perhaps try two different shared variables that are accumulating, and add some more work in the 'shared_value_increment' function.
@@MikeShah I tried GCC's -fno-inline-small-functions and -fno-inline with no effect.
@@MikeShah I switched to Microsoft's compiler and used a vanilla project in VS 2022. I got the same result as before. I added more work to shared_value_increment(), but that didn't make any difference. I guess there must be something wrong with my code, though it looks exactly like what's on your screen here.
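For anyone who wants to reproduce the experiment suggested a few replies up, here is a rough sketch (the counter names and loop counts are made up): two unguarded shared counters plus extra work per call make the lost-update race much more likely to show up.

#include <iostream>
#include <thread>
#include <vector>

static int shared_a = 0;
static int shared_b = 0;

void shared_value_increment() {
    volatile int work = 0;
    for (int i = 0; i < 1000; ++i) { work = work + 1; } // widens the race window
    shared_a = shared_a + 1; // unguarded read-modify-write
    shared_b = shared_b + 1; // unguarded read-modify-write
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 1000; ++i) {
        threads.push_back(std::thread(shared_value_increment));
    }
    for (auto& t : threads) { t.join(); }
    // Without a mutex, these frequently print less than 1000.
    std::cout << "shared_a = " << shared_a << "\n";
    std::cout << "shared_b = " << shared_b << "\n";
}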
Mutexes and Binary Semaphores are definitely not the same thing, although they are similar.
True -- mutexes, binary semaphores, and counting semaphores are all different.
The main difference between a binary semaphore and a mutex is that a binary semaphore can be released by a task other than the task that acquired it, while a mutex can only be released by the task that acquired it.
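A small sketch of that difference, assuming a C++20 compiler with std::binary_semaphore available: one thread acquires and a different thread releases, which is perfectly fine for a semaphore, whereas unlocking a std::mutex from a thread that doesn't own it is undefined behavior.

#include <iostream>
#include <semaphore>
#include <thread>

std::binary_semaphore sem{0}; // starts unavailable

int main() {
    std::thread waiter([] {
        sem.acquire(); // blocks until some other thread releases
        std::cout << "signaled\n";
    });
    // A *different* task releases the semaphore -- legal for semaphores,
    // undefined behavior if attempted with std::mutex::unlock().
    std::thread signaler([] { sem.release(); });
    waiter.join();
    signaler.join();
}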
Great video! How would I do it if I want to increment a value that is not global?
A value that I would pass to the function and that I want to be able to retrieve in main later.
@@PDL_AlexBibou You can still pass a value by reference, for instance :)
@@MikeShah Ah, I see. Thanks for the answer, man!
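For reference, a minimal sketch of the pass-by-reference approach (the names are illustrative): std::thread copies its arguments by default, so std::ref is needed for the threads to actually share the caller's variable.

#include <functional> // std::ref
#include <iostream>
#include <mutex>
#include <thread>

std::mutex gLock;

void increment(int& value) { // takes the counter by reference, no global
    std::lock_guard<std::mutex> guard(gLock);
    value = value + 1;
}

int main() {
    int counter = 0; // lives in main, retrievable after the joins
    std::thread t1(increment, std::ref(counter)); // std::ref avoids a copy
    std::thread t2(increment, std::ref(counter));
    t1.join();
    t2.join();
    std::cout << "counter = " << counter << "\n"; // prints 2
}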
You may not be aware, but your audio is choppy. The beginning of some of your words is chopped off, making it difficult to understand those words.
Should be fixed on all newer videos 🙂
@@MikeShah Cool, glad to hear that, because the content of your videos is great; it will be good to hear everything you say. Thank you for your good work!
Cheers! @@gregwoolley
Don't we have to initialize the mutex lock?
If you're using something like the pthreads library, then you do initialize the lock. No need to here. And in reality (see the next video in the playlist), we probably want to wrap the mutex and use some sort of 'scoped guard' :)
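A quick sketch of the contrast (using the standard APIs): pthreads requires explicit initialization, while a default-constructed std::mutex is ready to use, and std::lock_guard provides the 'scoped guard' mentioned above.

#include <mutex>
#include <pthread.h>
#include <thread>

// pthreads: the lock must be explicitly initialized.
pthread_mutex_t p_lock = PTHREAD_MUTEX_INITIALIZER;

// C++: a default-constructed std::mutex needs no separate init step.
std::mutex gLock;
int shared_value = 0;

void shared_value_increment() {
    // 'Scoped guard': locks on construction, unlocks automatically when
    // the scope exits, even if an exception is thrown.
    std::lock_guard<std::mutex> guard(gLock);
    shared_value = shared_value + 1;
}

int main() {
    std::thread t1(shared_value_increment);
    std::thread t2(shared_value_increment);
    t1.join();
    t2.join();
}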
@@MikeShah Your videos are awesome.
@@wika96 Cheers, thank you for the kind words.