These Blazingly Fast Videos have been fun to make. Should I keep going? Should we keep making more?
Also, we are SOOOO close to 100k subs, what should we do to celebrate?
Write your own UI framework; call it "SmoothBrainJS".
As for the catchline? _Blazingly fast_
YES keep going please! That's so fun! Thank you for your content ❤
Also, nice moustache 🇫🇷🥖
Yes please. Do one on Carbon!
we should do a celebration, BLAZINGLYYYY FASTTTT!!
I now can't look at framework repos and not laugh when I see blazingly fast.
that was a goal
me too
@@mateusramos1742 the question is, does it "cheat" and interop with C/C++/Rust/Zig, or is it purely written in Go?
@@Mempler purely written in go.
Blazing fast
You and Fireship are my favorite code content creators. Funny and thorough. Need more of this on YouTube.
As a professional Node developer I 100% agree. Node isn't meant to be super fast or to sit on the most crowded/critical endpoints of an application; it has more to do with the fairly new full-stack role in the industry.
In the old days companies had front-end and back-end teams that probably never met each other. They'd have an internal documentation site with most of the information they needed, and by the last steps of the waterfall chain the software would mostly run fine, but we're talking months of development with no visible progress for the no-code folks who control the money.
These days the market demands software built faster and more incrementally. The "no-code folks" want to confirm the feasibility of the project as soon as possible, so we switched to agile methodologies and the full-stack role emerged:
a developer capable of handling the four elements: front-end, back-end, database, and whatever else the client wants.
And he has to do it fast, so switching between technologies might not be a good idea. I'd say that's pretty stressful in the long term and can lead to burnout,
so to mitigate stress and bugs they use the same language for back-end, front-end and database management.
Like any business in a free market, if they need a dedicated team for a "blazing fast" backend they'll have one, but those are very specific cases. 90% of services don't have a significant workload and don't care about a 200ms response, and if they do they'll probably scale horizontally or distribute copies across the world.
That's the software reality:
development time matters more than execution time.
Best argument for Node I have seen in a while. I'll save this and use it going forward 😂
Yeah, I mean, how fast does Node have to be? 500 requests per second is not nothing. If Ruby on Rails is performant enough for a lot of use cases, I'm sure Node is too.
Agree. Thanks for sharing!
@@hypergraphic that’s what load balancers are for. Just increase the number of node nodes
At the end of the day it really isn't a huge deal to use a different language for the backend and still use js/ts on the frontend. Most developers who have been programming for a couple of years probably know js and another language that can be used for backend services, and wouldn't have a problem switching between languages frequently while developing their own full stack project or while working as a full stack developer at a company. I think one of the notable advantages of Node is for new devs who want to be able to work on both the frontend and backend of their own software while only knowing one language.
Programming entertainment at its peak. Dude, your videos are hilarious and super information-packed at the same time
While I'm waiting for Elixir, it would be interesting to also explore these numbers on a multicore machine! (Both performance and developer ergonomics)
you people and elixir
Elixir is life, elixir is Brazil ^^
@@MarcosVMSoares Elixir is us
@@MarcosVMSoares Elixir is agro, Elixir is pop, Elixir is tech; if it's on Globo, it's Elixir.
@@MarcosVMSoares elixir is from brazil, lua is from brazil, come to brazil
I'm looking forward to that Elixir video. I'm really curious to see how its performance stacks up against Go and Rust.
Also, if I recall, in that video you did with Theo regarding unit testing, both of you agreed that you shouldn't touch Elixir, and I'm curious as to why. I understand that both of you are really picky about languages and their type systems, but I feel that Elixir has ways around its dynamic types that are in many ways similar to TypeScript, in some ways a bit worse, and in some ways a bit better (pattern matching on specific values and types). And Elixir has tools such as elixir_ls and dialyzer that help make error detection during development easier.
For people who are interested in Elixir, here is a comparison between Erlang and Go performance: www.dcs.gla.ac.uk/~trinder/papers/sac-18.pdf
Elixir is based on Erlang's BEAM virtual machine so it should be pretty much the same.
To summarize: Erlang/Elixir can spawn processes faster, but Go has better throughput than Erlang/Elixir and in general Go is more performant.
But keep in mind that Erlang/Elixir is not all about performance; its main strengths are fault tolerance and reliability. In that regard it's much better than Go.
@@Sairysss1 just call Rust through a NIF when you need to speed up Elixir
@@qx-jd9mh it depends on the type of work. Many small function calls would probably be slow due to the overhead of calling into the NIF.
@@LtdJorge did you actually look at benchmarks that compare the overhead of nifs?
I came here with blazingly fast speed after the notification
hhhh
DO IT
It is rare to find a video on YouTube that you do not have to watch at 1.5x.
I try to get to the point. I'm just so sick of videos that are just the most slow roll pieces of content ever.
I don't know whether to be proud or ashamed that as soon as Prime showed the Go code, I immediately noticed it wasn't formatted with gofmt.
it wasn't. I accept the fact that i hate the formatting and i am the only one using the repo, so suck it.
@@ThePrimeagen finish him 😂
This video's thumbnail (gopher licking bun with crazy eyes) is my current favorite developer meme. I can't help it, it's just so funny :D. Also wherever a tech community mentions "blazingly fast", that sound effect plays in my mind automatically lol
Nice benchmark and amazing 11/10 mustache!
Dare you to do a benchmark using PHP 8 and Swoole
nope
but yes, its a great mustache
@@ThePrimeagen 🐔
Lock-free is like async/futures/promises. It tends to scale well horizontally (more threads and cores), but does worse per-core - especially if there is a lot of contention on the resource. The new thing is wait-free data structures that are not only lock-free, but also avoid long compare-and-swap loops.
wait free... I am getting too old for this shit
This is interesting... But how does one achieve await free in node? Everything in there is a Promise
@@cenowador this doesn't apply to node at all.
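For anyone curious what "lock-free" and those compare-and-swap loops actually look like, here's a minimal Go sketch (my own illustration, not anything from the video): a shared counter incremented by several goroutines with no mutex, just a CAS retry loop. Under heavy contention that retry loop is exactly the cost that wait-free designs try to avoid.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var counter int64
	var wg sync.WaitGroup

	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				// Lock-free increment: retry the compare-and-swap until no
				// other goroutine raced us between the load and the swap.
				for {
					old := atomic.LoadInt64(&counter)
					if atomic.CompareAndSwapInt64(&counter, old, old+1) {
						break
					}
				}
			}
		}()
	}

	wg.Wait()
	fmt.Println(counter) // always 8000, with no lock taken
}
```

(In real code you'd just call atomic.AddInt64; the explicit loop is only there to make the CAS pattern visible.)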
It's really mind-boggling how people could think a purely interpreted language can be as fast as a compiled one. The difference between them, and what assembly and machine language are, are literally among the first things you learn at university in a basic computer engineering course... How in the hell could someone think a language like JavaScript, which uses strings in switch-case statements, can be "fast"?
Because even JS gets compiled to machine code at runtime.
The slowness comes from dynamic types and gc
If it hits the JIT, it’s machine code
💯Exactly. I was thinking the very same thing. Node/JS is blazingly fast when compared with other [interpreted] languages of the same class like Python, PHP, Perl, etc. Context matters. It's one of the fastest interpreted languages available. The only language I recall in the interpreted class that beats out Node/JS is Lua. But to pit it against a compiled or even bytecode-compiled language like, say, Java is ludicrous. It says more about the people making such an argument than the merits of the test. It's like arguing a Crown Victoria can hold its own against an F1 race car.🤷🏾♂
It's actually trivial to imagine if, say, the interpreted language has multithreading and asynchronous features where the compiled language doesn't.
You had me at "Miku with a rocket launcher"
Nice analysis. I'm not a fan of Go. The syntax is weird (I think they worked too hard not to look like Java). However, Go's benchmark would have looked better on a multicore CPU, which is what most PCs are running except maybe in small embedded systems. If I'm not mistaken, Go (like Java) uses a tracing GC, which usually runs on a separate thread; if that thread can map to an actual separate core, then your memory management runs in parallel. That is why languages and/or runtimes that use a tracing GC are normally very good for high-throughput software on appropriate hardware. Whereas Python, Swift, etc., which use reference-counting memory management, or Rust, Zig and others where you manage memory manually, will (in common implementations) do their memory management on the same thread that is executing your code (except for a thread to handle circular references in Python, Swift, ...).
On a single core, the tracing GC will still use an additional thread (at least one), but that thread will take execution time away from the single core and still involve context switching between threads, whereas Zig and Rust will not have this.
So... if you love Go's syntax and you're not running your software on a handheld calculator chip, you will get good productivity and good performance.
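If you want to see that tracing GC at work, here's a tiny sketch (my own, not from the video) that churns through heap allocations and then reads Go's GC counters; on a multicore box most of the collection work happens concurrently with your code, which is the point being made above.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Allocate lots of short-lived slices, dropping references as we go,
	// so the tracing collector actually has garbage to clean up.
	var keep [][]byte
	for i := 0; i < 1_000_000; i++ {
		keep = append(keep, make([]byte, 1024))
		if len(keep) > 1000 {
			keep = keep[:0]
		}
	}

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("GC cycles: %d, total stop-the-world pause: %d ns\n",
		m.NumGC, m.PauseTotalNs)
}
```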
You forgot to mention that Go has an insane concurrency model with goroutines. It makes building multithreaded code 1000% easier. To me this is the big selling point. I would be curious to see the benchmark with more vCPUs. Were you on only 1?
I’m an absolute beginner with Go but was thinking the same thing. Maybe the experiment’s parameters are not a good fit for the concurrency model of Go?
You will find that zig and rust do the same thing as go … async frames (coroutines) spread over cpu cores using a thread pool.
@@steveoc64 Oh, I didn't know about that, thanks for sharing! I know nothing about Rust; maybe it's time to get my hands dirty 😅.
@@bendotcodes rust does it via 3rd party packages - Tokio. Zig has it built into the stdlib, where it’s a bit easier to find and follow the code. It’s encouraged ( idiomatic even ) to roll your own custom event loop based on the stdlib example.
I’ve been doing go professionally for just on 7 years now, but I never really understood the internals of how goroutines work, until learning zig. It’s awesome for that.
@@steveoc64 it's weird to say "you need third party packages", when Rust put a lot of work into making the native async code work completely customizably. It's not "you need to", it's "you get to". Keep in mind, this same system works when you're building a kernel, or in a 1kB ram microcontroller.
Although I wouldn't recommend it (for performance), it's not even that hard to implement an executor, it's logically just a queue of work to do and a way to go to sleep and wake up.
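To make the "goroutines make concurrency easy" point above concrete, here's a small fan-out/fan-in sketch (my own toy example, not code from the video): one worker per CPU, coordinated with channels and a WaitGroup, with the runtime multiplexing the goroutines onto a thread pool for you.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// square stands in for whatever per-request work a server would do.
func square(n int) int { return n * n }

func main() {
	jobs := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	// One worker goroutine per CPU; no manual thread management at all.
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- square(n)
			}
		}()
	}

	// Feed the jobs, then close the channels once the workers are done.
	go func() {
		for i := 1; i <= 100; i++ {
			jobs <- i
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum)
}
```

The equivalent in Rust or Zig is absolutely doable, as the thread above says; the difference is mostly how much of this plumbing the language hands you out of the box.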
0:33 I must comment my guess in advance. I guess Go, which is better than Rust because Go is a word that means moving, whereas Rust is a word that means why your bike doesn’t work because you left it in the rain you silly sausage
Bro really went:
Zig 🏃💨
Go 🏃💨
Node 🤨
I love watching these experiments. I keep meaning to make my own video about this but they take forever to make. I admire your commitment.
Another banger of a video, Prime, but I have a question: when will the OG of backend languages, Java, be tested against Node to see which is the slowest language of them all?
been waiting weeks for this video, LETS GO
was it worth it?
@@ThePrimeagen definitely worth the wait. I will note I expected Go to do a little better than it did. It would also be cool to rerun these tests on a multi-core server and compare the differences.
Multicore machine is one thing, comparing the memory usage would be another - especially if you built it in a way to compare GC performance (or lack of it)
What are your thoughts on the ergonomics of async Rust? Personally, for me it's bad enough to push me towards using Deno for most web-y things, but I would really like to go "all-in" on Rust.
i really don't mind it. it takes a moment to get use to, but once you do, its pretty straight forward.
If you don't mind Boxing, I'd say
I am a bit surprised at Rust versus Go; I would have thought they would be a bit closer. I really like Rust but I am using Go for a new project because I couldn't get up to speed quickly enough with Rust.
This is the video we wanted, and the video we needed.
and you got it
going into this i predict zig
dude! the intro, I was almost choking
Great channel. Love your content
Your name is blazingly cool
Yes, Keep making cool videos. And... I guess that zig is going to be the fastest in this video.
I would like to formally request a video on C# vs Go
i will literally never do c# :)
@@ThePrimeagen Why not? You don't like the .NET / Microsoft ecosystem? The language is too... "fancy" / Java-like? .NET Core seems to be blazingly fast!
@@ThePrimeagen 😂😂
@@ThePrimeagen Can you tell me why?
@@quachhengtony7651 +1
Hello, Prime! I love these videos. I was wondering if a Java vs Node vs Go vs Rust video is in the pipeline or not? Nonetheless, have a nice coconut-oily day!
love this dude, amazing content
I think elixir is a pretty good language for simple request handling. (It’s kinda garbage for actual computation though)
i know nothing about elixir, we will see
@@ThePrimeagen You'll love it! It's an incredibly fun language; a mix of Erlang, Ruby, Haskell, and I think even a bit of Prolog.
@@verified_tinker1818 what a rollercoaster. I pooped when you said ruby and splooged when you said haskell
@@ThePrimeagen do Julia too sometime!
@@ThePrimeagen Check Elm also.
Love these videos!
I’m not familiar enough with the V8 JIT compiler to know exactly when it kicks in. I’m curious if you know whether or not the JS code would have been compiled to bytecode by the time you collected data? There will definitely be a significant performance difference between the interpreter executing the JS code while it’s warming up, and after the JIT compiler kicks in and bytecode is being executed instead.
so for small functions it is ~1 - 3 runs, large functions (1000+ chars) it tends to be 7 - 15 (all depending on size).
All code was jitted within the first couple requests.
:)
@@ThePrimeagen awesome, thanks for the info!
hey prime, what do you think about making a video benchmarking Go versus Java (using Quarkus or Spring Boot as its framework)?
i just wont do java. sowwy :(
You hate java?
I can contribute a spring boot version
@@ThePrimeagen admirable,
a man of principles refusing to go into the java path
@@ThePrimeagen but what about Kotlin?
I’ve seen a lot of YouTubers but ThePrimeagen is the most blazingly fast one.
I'm really enjoying these benchmark style videos, keep up the good work!
Your videos always have a way of making me laugh. Thanks 😀
Have you ever done some of these tests with TinyGo?
Really want to see what happens when you go multi-core. I'd imagine rust/zig/go would pull away further but would highlight differences in how they handle concurrency across cores.
correct, i would have to come up with a multicore strat with node first
@@ThePrimeagen potentially multiple instances of the node server equal to number of cores and hit them on different ports
If you need parallelism, Rust will likely win, simply because it is written specifically for the fast-multithread-code usecase.
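As a rough illustration of what the multicore comparison would involve on the Go side (my own sketch; the video's actual server code isn't shown here), GOMAXPROCS is essentially the only knob: set to 1 it approximates a single-core run, and left at its default the scheduler spreads the handler goroutines across every core. Node would need the multi-process/cluster approach suggested above to compete.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"runtime"
)

func main() {
	// Pin the runtime to one core to approximate a single-core benchmark;
	// delete this line to let Go use every core for the multicore version
	// of the same test.
	runtime.GOMAXPROCS(1)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```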
It's time for a collaboration with Dave Plummer
I've seen the whole video but now I just mostly come back for 0:00 - 0:14. Cracks me up every time.
It is the greatest intro that has ever happened
Love the videos man, especially the last part about frontend stuff 🤣
You should definitely add memory usage for each on top of speed performance. Memory has a higher cost than when we used bare metal servers, and it adds up quickly. Rust has massive pros for that compared to go and even more to node.
True. We have a Java server solution and CPU usage was never the problem, RAM is the issue.
0:10 made me yell laugh. I had to explain to my wife what was so funny.
Haskell or Elixir next ?
elixir is on its way
You should really talk about the BEAM though, rather than focusing on the language itself, don't you think?
I wouldn’t expect much better performance from Erlang for instance but that’s just a hunch.
Getting Haskell perf would be very interesting since its output is a binary, but it's a beast: hard to learn but a beautiful language (opinion based on my limited experience, though)
Great channel, keep it up :)
I did much simpler tests - like counting to one billion in a loop - with the (best) results: Node.js = 11.5 s, Bun = 12.0 s, Go = 5.7 s, Java = 5.2 s, Rust = 110 ns, Zig = 52 ns, Python = some ten minutes :). All code variants were written as simply and as close to the basic example as possible, compiled, and optimized for speed. All tests were run on the same i7 laptop with Ubuntu 22.
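A side note on those numbers: 110 ns and 52 ns for a billion iterations almost certainly means the Rust and Zig optimizers proved the loop had no observable effect and deleted it, so those two entries aren't measuring the loop at all. Here's a minimal Go sketch of the kind of counting test described (my guess at its shape, not the commenter's actual code) that keeps the work observable:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()

	var sum int64
	for i := int64(0); i < 1_000_000_000; i++ {
		// Accumulating and printing the result gives the loop an observable
		// effect, so the compiler can't legally remove it.
		sum += i
	}

	fmt.Println(sum, time.Since(start))
}
```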
"Blazingly" buzzword got me into FANG
smurt move
Prime -- have you checked out Nim? I would be interested to see how you feel about it and what you think about the DX.
no! NO!
@@ThePrimeagen What do you not like about nim?
Zig actually has some pretty good container implementations in its stdlib
i was pleasantly surprised
Zig is gaining some traction
any more zig videos? it would be nice to see you try out more languages
Can you throw in some classic for comparison? Like C# and Java?
I live to hear you say blazingly fast. I don't care about the video, I don't even care about golang or node, I just want to hear you say BLAZINGLY FAST
Prime, can you throw Java / C++ into this test bench? Please?
I like the Gopher from the thumbnail who vomits an upside down Nike logo at a bun
Can we all just appreciate that he called the project "zig-me-daddy"
comparison was fair enough
When playing your character you sound like the Japanese voice of Tobi in Naruto, yes, the one with the orange mask who disappears blazingly fast!
I lost it in Node....`silence`...`blinggg effect`
I can double the blazingly fast programming languages (videos) by watching in twice the speed. Booom! Go is now double the speed. Zig twice as blazingly fast. Node...
The first time I learned about Zig was when I looked at ncdu's page. ncdu 2.0 uses it instead of C, which is interesting
Zig is just a superior C, even if all you do is use it as a compiler. Just the build system alone is worth it.
Would be interesting to see something similar with JS (node) vs Python.
They're both so blazingly slow that the profile would never finish and there would be nothing to report.
love your videos man!
@ThePrimeagen is Elixir blazingly fast?
That silence after saying Node got me, jajajajaja
I've caught myself using blazingly fast at work, so thanks for that. Not in a serious context, we primarily use Rails and Elixir.
I love your thumbnails
ty
I actually would love to see your take on Svelte. Looking forward
This is a backend benchmark, Svelte is used in fronted. Doesn't make sense
Have you ever tried worker threads with node?
no, very curious with them.
@@ThePrimeagen I read an article where Node was able to outperform Go with the use of worker threads.
I laughed at node's intro, well played
i thought it was great
The details were not given about the TCP and TLS specifics and I'd guess those implementations are the bottlenecks, not the language. I run 650k concurrent TLS connections and double that without TLS in a tuned Go server on a single core VM in production. It took a lot of tuning to get there, but it was all networking.
Svelte!!!! 🙏🏼🙏🏼🙏🏼
I love your editing XD. Good video as usual
It's the editor that does all the good moves. I am just the good looking model.
I’m not sure I really understand the method of this experiment, could somebody explain in a bit more detail? How does an increasing size of the queue mean faster speed? (the method in the rust vs bun vs node video with requests/second made more sense to me, at least)
Evidently, nobody understands it either.
I love the script kiddies roast
You are speaking blazingly fast! I had to change video's speed to 0.25. Oh my Go!
i will always speak fast, sowwy
Prime, there's a Go package called `fasthttp`, and it claims to be 10x faster (there are benchmarks) than Go's built-in net/http.
Maybe in the future, you could give it a try :)
I think it would be very interesting to see if a different http implementation really speeds things up.
As the fasthttp docs say: it's an http server implementation for really high-performance servers.
Thanks for your hard work and cool videos :D
fasthttp shouldn't be used other than for very very veeeery specific scenarios -- it doesn't actually implement HTTP, it implements something "like HTTP", and thus you will find all sorts of fun compatibility problems with it in the real world, where you might not be able to control the clients hitting the server. Additionally, it's very unlikely that the http server portion of this was the bottleneck. 99% of the time it's the logic inside of the handlers, not the http stack itself. I.e. even if fasthttp is "10x faster than net/http", if net/http currently only impacts 1% of the performance, 10x means almost nothing. Not to mention that "10x" is only in specific scenarios, not regular scenarios, where performance is often only marginally better.
@@LiamStanley1 I see, thanks for explaining
"10x faster" than built-in net/http, Sir I think the term you're looking for is *BLAZINGLY FAST*
The same goes for json, there are optimized modules
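For reference, this is roughly what the swap looks like - a minimal sketch assuming the valyala/fasthttp package, not code from the video. Note the handler signature is completely different from net/http, so it isn't a drop-in replacement even before you hit the protocol-compliance issues mentioned in the reply above.

```go
package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

func handler(ctx *fasthttp.RequestCtx) {
	// fasthttp reuses RequestCtx objects between requests, which is where
	// much of its speed comes from -- and also why you must not hold on to
	// ctx after the handler returns.
	ctx.SetContentType("text/plain")
	ctx.WriteString("ok")
}

func main() {
	log.Fatal(fasthttp.ListenAndServe(":8080", handler))
}
```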
Hey, it would be interesting to see how much performance you can eke out with C over Rust and Zig. Maybe even assembly, to see how much faster it can get over C? I know that I get better performance even over C++ when I use C, because I tend to write much simpler code that is way less abstract.
That's not really the language's fault; you can write everything you write in C in C++. In fact, I'd argue C-style C++ should be your default, only using advanced C++ features when really necessary.
Zig and C would probably be equally fast if written correctly, language is not the main concern, the optimizations made by the programmer are and both of these languages are extremely flexible and designed so that you can optimise your code as much as you want.
Rust on the other hand makes it demonstrably harder to optimise: if you already fight the borrow checker writing naive implementations, you are going to be fighting it 100x when optimising. Refactoring is also harder in (idiomatic) Rust because everything is sort of encapsulated in traits and some things in your code are executed implicitly, and thus it is harder to reason about and change the behaviour of the program. Also forget about RAII, that is something that will make everything 100x slower, and if you do memory pools that you manage yourself then there goes the borrow checker.
So yeah, idiomatic Rust would probably be a lot slower than optimised C and Zig, if you want real high performance Rust shouldn't be your first choice.
@@eleanorbartle5354 1. If you write the assembly with care it will be faster, you just need to know what you are doing.
2. clang isn't the best C compiler; that is gcc, which does not use LLVM and produces code that is equal to or faster than clang's in 90% of cases, sometimes significantly so. Also Rust would most likely be significantly slower.
Prime, do you have any recommendations for videos or books for learning Go?
Can you please include dotnet core in your future videos comparison? Cheers
i will not C# pretty much ever
Go, go, go!
good good
love the intro. node...
great moment
The no compliments for node really got me 😂
What are your thoughts about carbon?
In the last few years, since .NET 5, the .NET community has released lots of benchmarks showing .NET to be faster than Go and close to Rust.
Can you make a test like this for C# (.NET)?
It could also represent a true OOP language with inheritance.
The reason I like this video is because of the title. I wouldn't have watched this video if it weren't for the title. Instead of "Go is faster than Zig", if it were "Zig is faster than Go", I'd be like, "Well duh. Of course." But it said the opposite, which piqued my curiosity. And of course, halfway through the video I realized, oh, he isn't showing that Go is faster than Zig. This was just click-bait to make me watch something unnecessarily. Having already wasted 5 minutes of my life, I figured I'd double down and waste another 5 minutes writing this post. Stockholm syndrome and all that.
I wonder if Nim is blazingly fast as well, compared to Zig and Go!
Do you have any good light theme recommendations for Neovim?
Cool video too ;)
I wish this also had a language that was actually used in the industry, to see how these flavor-of-the-month languages (except Node) compare to them and whether it's worth actually learning them
Both Rust and Go are used extensively in the industry though? Sure they're not at Java/Python levels of adoption, but they're well past flavor of the month.
JSONMessage vs JsonMessage vs JavaScriptObjectNotationMessage, which is better?
I'd love to see the same tests in a multicore system, my guess is go would shorten the difference somewhat with rust/zig
NODE. (Uncomfortable silence)
Just what I would've expected. But I wonder how managed languages like Java and C# compare to Go. I'm guessing they'd be neck and neck with each other, landing somewhere between Node and Go.
What actually makes a language faster than another? I mean, what is actually going on that allows it to make the most use of its environment (software or hardware?) to execute the instructions faster? Or is it actually due to the language executing instructions more efficiently thanks to a smarter/more optimized process flow?
The amazing part of Go is that you can take almost any backend dev, give him that server in Go, and he will probably be able to maintain it the same day.
You can't say the same about Rust, or even Zig. Zig is kind of easier than Rust, but it has much more complexity than Go.
It seems like the best way to iterate is to just start your product with Go and move quickly, and when you see that Go is just not going to cut it anymore, rewrite in Rust. And you know what? Thanks to Go's simplicity it will be much easier to reason about.
Wait, are you telling me that node is about half the speed of straight up Rust? That's way faster than it has any right to be.
Like, I know that JS is JIT compiled and all that stuff. But just doing obj.someProp has to recursively check if someProp exists on the object, or in the object's prototype, or in the prototype's prototype and so on until it reaches a null value. JS runtimes have to be doing crazy stuff to make it so that those things are efficient.
it's a simple test, so some things are not present in its implementation. More complex projects can be significantly slower.
I haven’t seen the video yet, but I think that Zig is blazingly fast😃