I could feel my brain trying to stop me writing what I knew was an infinite loop but I did it anyway. I trusted you Jeff!
Fun nerd trivia:
- A single CPU core runs multiple instructions concurrently; the core just guarantees that it will appear AS IF the instructions were run serially within the context of a single thread. This is achieved primarily via instruction pipelining.
- A single CPU core often executes instructions totally out of order; this is unimaginatively named "Out Of Order (OOO) execution".
- A single core also executes instructions simultaneously from two DIFFERENT threads, only guaranteeing that each thread will appear AS IF it ran serially, all on the same shared hardware, all in the same core. This is called Hyperthreading.
And we haven't even gotten to multi-core yet lol. I love your content Jeff, the ending was gold!
in the spectre and meltdown era, we like to say “guarantees”
But in hyperthreaded systems, tasks don't just appear to be executed serially, they actually are executed serially ... the only difference is that the system coordinates the execution of other tasks/threads while waiting for the previous one, which is probably blocked waiting for an I/O response ...
If you have a 16-core processor with 32 logical processors, it doesn't mean it can execute 32 threads simultaneously ...
@@RicardoSilvaTripcall hyperthreads are in many cases parallel by most meaningful definitions, due to interleaved pipelined operations on the CPU and the observability problem of variable-length operations. For an arbitrary pair of operations on two hyperthreads, without specifying what the operations are and the exact CPU and microcode patch level, you cannot say which operation completes first even if you know the order in which they started.
@@ragggs Lol! Maybe guarantee* (unless you're Intel)
@@RicardoSilvaTripcall Uhhhh. No. Sorry.
Moments like 0:52, the short memorable description of callback functions, is what makes you a great teacher. Thanks man!
Keep in mind the JS world also calls any higher order function "callback" (like the function you'd pass to Array.map), whereas elsewhere afaik it only refers to the function you pass to something non-blocking.
@@kisaragi-hiu a fact that caused me much grief coming into JS from systems level.
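A tiny sketch of the two senses of "callback" described above (purely illustrative, nothing from the video):

```javascript
// Sense 1: "callback" as any function passed to a higher-order function.
// Array.prototype.map invokes it synchronously, once per element.
const doubled = [1, 2, 3].map((n) => n * 2);
console.log(doubled); // [ 2, 4, 6 ]

// Sense 2: "callback" as a function invoked later, once a non-blocking
// operation completes (the usual meaning outside the JS world).
setTimeout(() => {
  console.log("timer fired; this runs after all the synchronous code");
}, 0);
```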
Love to see jeff going in depth on this channel, would love more videos like this one.
That's why I made this channel. I've got a long list of ideas.
@@beyondfireship wonderful. Keep it up
@@beyondfireship then do them! PLEASEEEE
Also when we say "one core," that means "one core at a *time*" -- computer kernels are concurrent by default, and a program's code will actually be constantly shifting between different CPUs as the kernel manages a queue of things for the processor to do. Not too unlike the asynchronous system that JavaScript has: the kernel will break each program you're running into executable chunks, and it has a way to manage which programs and code get more priority.
wouldn't that be kind of inefficient though? it wouldn't be able to take full advantage of the CPU cache, so i hope it does it as rarely as possible
@@orbyfied uhh, different CPU cores use the same L2-L3 cache. L1 cache is per core, but they're small and meant for minor optimisations.
L1 is the fastest, so having data available there is pretty significant. it's also grown much in size, to the point that it can basically cache all the memory a longer-running task will need now. if L1 were so insignificant, it wouldn't cause those data desync issues across threads
Then…why do I only see one core active when running simple Python code…?
@@orbyfied it could be more inefficient if only one process took a whole CPU core for itself during its entire lifetime. The process probably isn't switched between cores, but it is being swapped in and out with others on the same core for the sake of concurrency. Also take into account the hit rate that a cache may have.
That chef analogy about concurrency and parallelism was genius. Makes it SO much easier to understand the differences.
6:13 To see how all of your cores are being utilized, you can change the graph from 'Overall utilization' to 'Logical Processor' just by right-clicking on the graph -> Change graph to -> Logical Processor.
It's a pretty good overview of how much more of a clusterfuck the code becomes once you add workers to it. And it didn't even get to the juice of doing fs/database/stream calls within workers and the error handling for all of that.
"Clusterfuck", I had the same word in mind 😭😂
0:31 concurrency incorporates parallelism
what you should say is asynchronism
just use Promises, it'll process all your asynchronous functions concurrently (very similar to parallel)
@@angryman9333 A Promise will run your user-written function on the main thread in a blocking manner. An async function is just syntactic sugar for easier creation of promises. Without browser asynchronous APIs or web workers, it doesn't run code in parallel.
@@angryman9333 a what?
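To make the distinction in this thread concrete, here is a small sketch (timings illustrative): Promise.all overlaps waits on the single JS thread, so three 100 ms delays take roughly 100 ms in total, not 300 ms, but a CPU-bound loop inside any of them would still block the others.

```javascript
// Concurrency without parallelism: the waits overlap, the thread does not split.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function main() {
  const start = Date.now();
  // All three timers start before any of them is awaited, so their waits overlap.
  const results = await Promise.all([
    delay(100, "a"),
    delay(100, "b"),
    delay(100, "c"),
  ]);
  console.log(results.join(""), `in ~${Date.now() - start} ms`); // "abc", roughly 100 ms
}
main();
```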
Task Manager --> Performance tab --> CPU --> Right click on graph --> Change graph to --> Logical Processors
Thanks for shouting out code with ryan! That channel is criminally underrated
Lots of comments about memorable descriptions, shoutout to the thread summary at 3:30. Your conciseness is excellent.
That ending was possibly one of your best pranks ever, a new high watermark. Congratulations 😂
I'd like to see a video on JavaScript generators and maybe even coroutines.
For sure, this is a really cool thing and I'm not sure how to actually use it.
generics maybe ?
garbage collector in more details ?
benchmarkign agiants pythonic code, just to get people triggered ?
the `threads` package makes working with threads much more convenient. it also works well w/ typescript.
It would be nice if you right click on the cpu graph and *Change graph to > Logical Processors*, so we can see each thread separately.
Thanks!
less useful than you might think. the operating system's scheduler may bounce a thread around on any number of cores. doesn't make it faster but spreads the utilization around.
@@crackwitz Do you mean that we will not see each core graph plotting one thread?
Really cool, I actually saw the other video about nodejs taking it up a notch when it came out.
i watched a similar video early this year, but your way to deliver content is amazing, keep going
- what could be better than an infinite loop?
- infinite loop on 16 threads
although it's called concurrent, schedulers can still only work on one task at a time. A scheduler will allot a certain amount of time to each task and switch between them (context switching). The switch is just fast enough to make it seem truly "concurrent". If a task takes longer than its allotted time, the scheduler will still switch away and come back to it later to finish.
IM STILL STUCK OVER HERE, HELP!?!?!?!?
MY PC WONT SHUTDOWN, ITS BEEN 5 MONTH'S...
keep up the great work, love your vid's!
This man just explained a lot within 8mins! Getting your pro soon.
Little-known fact: you can also do DOM-related operations on another thread. You have to serve it from a separate origin, use the Origin-Agent-Cluster header, and load the script in an iframe. But you can still communicate with it using postMessage, and avoid thread blocking with large binary transfers by using chunking. This is great for stuff that involves video elements and cameras.
I use it to move canvas animations (that include video textures) off the UI thread, and to calculate motion vectors from webcams.
that looks handy! thanks for sharing
that might just help with a few of my projects
do you have any examples on github?
@@matheusvictor9629 yes
Sounds very interesting! I have a project where I think this would be useful.
My brain: dont run it
8 years of programming: dont run it
the worker thread registering my inputs to the console as I type it: dont run it
Jeff: run it.
**RUNS IT**
Thanks, now I know what script I should include in my svgs
aside from the outstanding quality, this ending was hilarious! keep it up, your content is TOP 🙇🚀
Spawning workers in Node is not new, but support for web workers in browsers is comparatively new. Good shit man.
best comic relief at the end ever, love you Jeff
just a heads up about your CPU; the 12900K doesn't have 8 physical cores, it actually has 16: 8 performance and 8 efficiency cores. The performance cores have hyperthreading enabled but the efficiency cores don't, so you have 24 threads in total
😮
Oh yeah, right. So that's why the CPU didn't go to 100% after using 8 cores.
You forgot the: 🤓
But at 6:57 his CPU did go to 100% with 16
because hyperthreading is shit
Good video!
Next time change the CPU graph with a right click to see each thread's graph.
Hope it helps!
Wow, didn't know that. Thanks!
3:17 Dude the 12900k has 16 physical cores (8p+8e) and a total of 24 threads since only the p cores have hyper-threading ❗
the cook analogy was great and i now understand
with the amount of time I've spent on this video because of the while loop, even the algorithm knows who my favourite youtuber is
It's like you read my client's requirement and came into support
I have had hours long lectures in college level programming classes on the differences between concurrency and parallelism and the first 3 minutes of this video did a better job of explaining it. Shout outs to my bois running the us education system for wasting my money and my time 💀
It's probably not their fault you failed to understand something so simple. Literally 1 minute on google would have cleared up any misunderstanding you had
@@maskettaman1488 if you have to pay to study and then you have to sell yourself to a tech corp to learn something is not that great of a system and it should not exist IMHO
@maskettaman1488 lmao im not saying i misunderstood it, im saying fireship is much more concise and still gets all the relevant information across compared to college, despite the fact that i dont have to pay fireship anything
Yes Yes Yes, and exactly extra Yes! Thank you Bro for this contribution! You are speaking out of my brain! Best Regards!
3:11 Notice how in that graph, the only languages faster than Java are all systems languages, with no VM based languages capable of beating it
I am stuck step programmer 😂😂
rule #34 is calling
break;
I would watch out or you'll get multi-threaded
a small detail at 3:17: your i9 has 16 physical cores, not 8. Only half of them have hyperthreading (because there are 2 types of physical cores in that CPU). That's why it has 24 threads instead of 32
I think he just said that so people would comment, increasing the algorithm rizz
@@somedooby I wouldn't be surprised TBH, you certainly can't get that audience so quickly without knowing all the tricks
Also right click the cpu graph and choose logical processors to show the threads in individual graphs. Makes it easier to visualize IMHO.
0:15 I already know this and am already using it
I use a Blob to create a new Worker, and I use a max of 4 to 8 of them, one for each core
Wow! Just yesterday I was watching some videos about worker threads because I will use them to speed up the UI in my current development 😄
I'm still amazed at how you find such accurate images as the one at 0:32 🤔
6:42 bro really doubled it and gave it to the next thread
I did use this back in 2018. I don't know how much it has improved, but error handling was painful. Also, when you call postMessage(), V8 will serialize your message, meaning big payloads will kill any advantage you hoped for. And remember that functions are not serializable. On the UI side, I completely killed my ThreeJS app in production when I tried to offload some of its work to other threads :D
Apart from that, you should NEVER share data between threads, that's an anti-pattern.
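The serialization cost mentioned above can be seen with structuredClone, which exposes the same structured clone algorithm postMessage uses (available globally in modern browsers and in Node 17+); a sketch:

```javascript
// postMessage deep-copies its payload via the structured clone algorithm.
const big = { pixels: new Uint8Array(1024 * 1024) }; // ~1 MB that would be copied

const copy = structuredClone(big); // same algorithm postMessage runs internally
console.log(copy.pixels !== big.pixels); // true: a full copy, not a shared reference

// Functions are not serializable, exactly as noted above.
try {
  structuredClone({ fn: () => {} });
} catch (err) {
  console.log(err.name); // DataCloneError
}

// Transferables sidestep the copy by moving buffer ownership instead, e.g.:
// worker.postMessage(big, [big.pixels.buffer]);
```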
seeing my cpu throttle and core usage rise in realtime was impressive :)
each value should be a random value, and you should sum them in the end to ensure the compiler / interpreter does not optimize all the work away because it detected that you never used the values
Pretty sure the compiler won't be able to optimize away side effects like this, since the worker and the main thread only interact indirectly through events on a message channel.
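A sketch of the pattern suggested above (whether a particular engine would actually eliminate the simpler loop is engine-dependent; this just makes the benchmark safe either way):

```javascript
// Random inputs prevent constant folding; consuming the sum keeps the loop
// observable, so the JIT cannot treat the work as dead code.
function busyWork(iterations) {
  let sum = 0;
  for (let i = 0; i < iterations; i++) {
    sum += Math.random();
  }
  return sum;
}

const result = busyWork(1_000_000);
console.log(result); // printing the result is the observable side effect
```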
I thought worker threads were virtual threads. you learn something new everyday!
Aren't they? My understanding is that they are threads managed by the runtime, which in turn is responsible for allocating the appropriate amount of real threads on the O.S.
JavaScript is referred to as a high-level, single-threaded, garbage-collected, interpreted or JIT-compiled, prototype-based, multi-paradigm, dynamic language with a non-blocking event loop
And you can still program with multiple threads... 😂
I remember when I first learned workers, I didn't realize I could use a separate js file, so I wrote all of my code in a string. It was just a giant string that I coded with no IDE help. That was fun.
Adding more cores might still provide gains in a VM scenario, depending on the hypervisor. As long as your VM isn't provisioned with all physical cores, the hypervisor is at liberty to utilize more cores, even up to all physical cores, for a short amount of time, resulting in increased performance for bursting tasks
pro tip: create a loop like this.
for (let i = 0; i < 2; i++) {
i--;
}
This will make you pass the interview, no more questions asked.
😂
man i been wanting something about workers for so long
A single x86 core can actually run more than one instruction at a time. And the N64 can run 1.5 instructions at a time when it uses a branch delay slot.
love how you tell us to leave a comment if it's locked like we can even do that
I tried the while loop thing and somehow my computer became sentient. Y'all should try that out.
Hyperthreading generally gives a 30% bump in performance; your test demonstrated that handily.
7:40 I saw the joke coming from a mile away
nice one 😂😂😂😂
people watching on phone:
“that level of genjutsu doesn’t work on me”
You can also pass initial data without needing to message the thread to start working; however, I feel like that's better used for initialization, like connecting to a database.
In Python, handling race conditions is easy:
use Queue and Lock 😊
I achieved something similar in TS, but rather than locking the queue, I ensured that the jobs that could cause a race condition had a predictable unique ID. By predictable, I mean a transaction reference/nonce...
Well, multiprocessing is much more mature than worker threads, since multiprocessing has been the primary method for concurrency in Python, but for js it's always been async.
I just recently experimented with the Offscreen Canvas handling rendering on a separate worker thread. Pretty cool.
I once made a volume rendering thingie with Three.JS and it really, REALLY benefited from Web Workers, especially interpolation between Z slices.
Hang on… wouldn’t a volume-renderer in three.js be doing things like interpolation between z-slices in the fragment shader? Could certainly see workers being useful for some data processing (although texture data still needs to be pushed to the gpu in the main thread). Care to elucidate? Was it maybe interpolating XYZ over time, like with fMRI data or something? That would certainly benefit…
great topic, thanks 👍
Fireship, the "S" sounds in your video sound really harsh. Consider using a de-esser plugin or a regular compressor plugin and your stuff will sound fantastic. Cheers.
6:30 Do you want to run() the jobs or double the workers and give it to the next run()?
And remember, don't make promises you can't keep
Wow, Love this tick - tock snippet
I executed the while-loop on the orange youtube and I couldn't change the volume.... Thanks.
UNLIMITED VIEW TIMES!! AWESOME!! What a great video!
You are a youtube genius man
8:20 my tab is still not responding. What should I do?
try throwing it out of the window
Ending is the moment you are glad you watched it on a mobile device
Niiiice we have the exact same machine! (And thanks for the video!)
Day 5, I'm still stuck with the window open, I tried exit the house and get back in. Rick is still singing.
Please help, I have been trapped here for 5 years, the browser still won't let me out.
ahh now i can watch my cat memes at 0.001 ms speed after making javascript multi threaded
HELP It's been 84 years and I still haven't been able to close my browser
Woulda been cool if you set it to show core usage on taskmgr
Amazing🔥
Elixir is faster than I thought and getting faster with the new JIT compiler improvements.
At 3:11, how is Swift that slow? It's statically typed and compiled and does not use garbage collection.
Thank you for making me willingly crash my chrome and carrying web dev space on your back for wide audiences.
Love it, we are already doing that with our Lambdas - cause why not use the vCores when you got them 😍
Epic!
It's FLAT!
i wish you showed the CPU usage on each logical processor on task manager instead of the overview
Awesome video ending
That trick to force us to hear the sponsor block could only come from you 🤣🤣🤣
It’s been 4 hours and my computer has now caught fire and is playing the interstellar theme song, help!
I'm sure those of us who went through the painstaking process of closing an infinite loop in the past were saying "who in their right mind would do this?"
I recently did a little side project where I needed to use a worker in a web app. The gist of the project: given a winning lottery number, how many "quick picks" (random tickets) would it take to finally hit it?
I'm thinking out loud here, but have a genuine question - Could you use workers combined with something like husky to do all pre-commit/push/etc checks at once?
For example, I may have a large unit/integration test suite, followed by a large e2e test suite, along with code quality checks and so on... All of which are run in sequence, potentially taking upwards of a few minutes to complete.
Could workers be used to run these jobs together at once?
E2E will bottleneck regardless, because of the quadrillion OS APIs it has to interact with on start, the majority of them synchronous.
@1:57, uptime 4:20:00
Just wanted to point out the memory usage for worker threads is crazy high.
I think it's because multiple Node.js runtimes are required now, maybe? I don't know...
Don't need proof I believe you bro!
I wonder if there are things like mutex locks to help with the synchronisation of shared resources?
Exactly. I don't know why I keep hearing otherwise
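On the mutex question above: there is no Mutex class in JS, but Atomics on a SharedArrayBuffer provides the building blocks (compare-exchange, wait/notify). A minimal spinlock sketch, not a production lock:

```javascript
// One Int32 slot acts as the lock word: 0 = unlocked, 1 = locked.
const sab = new SharedArrayBuffer(4);
const lock = new Int32Array(sab);

function acquire(lock) {
  // Atomically flip 0 -> 1; spin while another thread holds the lock.
  // Real worker code would park with Atomics.wait() instead of busy-spinning.
  while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {}
}

function release(lock) {
  Atomics.store(lock, 0, 0);
  Atomics.notify(lock, 0); // wake a thread parked in Atomics.wait, if any
}

acquire(lock);
// ...critical section: the only place shared memory is touched...
release(lock);
console.log(Atomics.load(lock, 0)); // 0
```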
Just brilliant
I was asked the same question for an internship interview in 2021 whether js is multi threaded or not and I said yes because of worker threads. The interviewer said no, it is not and hung up the phone :(
Because the answer is not "yes", it's "yes IF". Yes IF the environment supports it. If the interviewer said a terminal "no", it meant "not in our environment". Hanging up without an explanation still makes him an asshole, and trust me, you don't wanna work with those anyway...
Hi from Vietnam, where the kitchen image was taken.
On Firefox, only this tab froze when executing the empty while(true) loop. I refreshed the page and I was good