This is very similar to what we found at one of my old roles when we wrote our first Rust microservice. It was faster and more memory efficient than similar Node services, but not so much that it was worth retraining our engineers to continue using Rust in the future. My advice to people these days is that they shouldn't use Rust for its speed (with some caveats); they should only choose it if their developers enjoy using it. I love Rust, but not everyone does, and that's fine. Optimise for engineers, not machines.
To add some clarity to this comment: if I told you our Rust service was 5x more memory efficient than our equivalent Node service, you'd be impressed. If I told you it saved 16MB of RAM, you quite rightly wouldn't be. Though I no longer remember the exact numbers, the speed improvements were similarly underwhelming, but that makes sense, as both services had to talk to a remote DB, which ate up most of the execution time.
@@Gisleburt Rust's primary feature is correctness; speed and efficiency are bonus perks.
If you want to build a billing system, where security and correctness are critical, you build it with Rust, not JavaScript, regardless of how large your system scales.
@@tonyhart2744 Not sure if you meant to reply to someone else, have misunderstood my point, or are intentionally misrepresenting it to make a strawman.
1. I agree that the main benefit of Rust is its correctness rather than its speed. You'll spend less time on fixes and maintenance than you will in most other languages. This is how I sell it to people (that, and how enjoyable I find it to write).
2. Rust is not more secure than JavaScript; this point makes me concerned that you think Rust is some kind of magic bullet. It's more secure than other low-level languages because of its memory model (though, since you may still want to turn off some of the checks, you can still run into memory-related security issues). Memory management mistakes make up about 70% of security issues, so both Rust and JavaScript mostly avoid these. That leaves the other 30%, which both are equally prone to, e.g. mishandling credentials or SQL injection. There's nothing wrong with writing a billing system in JS. I've worked on billing systems in JS, PHP and Java, and while they've all had their problems, none of them were security issues that Rust could have helped with. While Rust is great at preventing a lot of errors, it's certainly possible to make mistakes. I've written code in Rust that did exactly the opposite of what I expected due to a mistake in some boolean logic.
3. Finally, the point I was making was about the expense of training Rust engineers. I actively encourage, help and support people who are learning Rust. It's a great language, coding in it makes me happy in a way no other language does, and its tooling and community are second to none. It's quickly growing in usage, and I'm here for that. That said, if I need to make a billing system and all I have are JS engineers, it's going to be much cheaper to build it in JS than to retrain my engineers and potentially replace the ones who quit because they weren't interested in learning Rust. The extra compute resources and even the extra maintenance costs are going to pale in comparison to the cost of engineers.
Even though these optimizations don't benefit a real-world application, I still don't believe this is wasted time -- there is still value in debunking an unfair comparison. People who believed that unfair comparison are less able to make good decisions on which tool to use.
From what I've seen so far, Hetzner isn't doing too badly.
But in fairness, I do live in Germany.
But it's the primary dedicated VM provider my company will choose. Of course, some workloads will be on Google Cloud too.
Regarding the main topic: what you show isn't what I typically understand as premature optimization. That would be doing the optimization while still in the early design phases, leading to a more complicated design that's probably less efficient than if one had started without heeding performance so much and had optimized later. Though I guess the term still kind of applies; perhaps more fittingly, you show optimization without having the real situation in mind. Premature optimization, by my measure, would likely end up in the same situation.
German providers somehow don't seem to advertise themselves well internationally 🤷 but definitely an unexpected shout-out to Der Bauer 😂
I moved away from Hetzner; their network setup is just too strange. I'm with Netcup at the moment.
Great explanation of how to optimize your code, but also a nice reminder that when you don't optimize the real bottleneck for the user, it's just wasted time 😂 (until it becomes the real one)
Yes. I do want to emphasize something I think is more important, though. It's not just a waste of time; my optimized code is objectively worse than Vagelis' un-optimized code.
I don't think my optimizations are terrible, but there are some new concepts like thread_local!, there is a line of unsafe to allow usage of the data internal to the thread_local outside the function it's contained within (though that unsafe can be removed without performance impact, the code to do so is rather complex), and the manual allocation of space for serde_json takes some knowledge of BufWriters. It might take a junior Rust programmer 15 to 20 minutes to understand the nuances going on here.
With Vagelis' code, junior Rust programmers can look at it and know exactly what's going on within seconds.
The optimizations I've done are not just a waste of time right now; they are an ongoing waste of time by being technical debt going forward (unless, of course, you're doing some hard number crunching and need the extra CPU headroom the optimizations allow).
Simpler code is always preferable to complex code, unless the simplicity comes at a significant performance cost; maintenance is simpler, and adding new features is easier.
Hey! Love your videos. Would you mind cranking the gain a bit more when you record?
It's very low, in fairness.
Great video. I really like the real-life comparison.
Also, the Hetzner/Linode comparison is interesting.
BTW: great tuque!
Thanks! My mom bought me the tuque many years ago, and it's special to me. I have to wear it now because I haven't had a haircut in a really long time and I can't stand having hair in my face.
very interesting! also happy to know that hetzner exists!
Is this a valid comparison? What if the benchmarking tool (oha) is limited by the number of workers, or your Internet speed/ping, or even the hosted machine's Internet speed? Clearly there's some difference; perhaps without these bottlenecks the optimized version would handle more concurrent requests before falling off.
EDIT: just finished the video. If it's the network, then maybe the optimisation isn't useful in this case, but it would be if this were a library that other people used.
Also, I wonder if we can make the case that reducing CPU will reduce power consumption.
If you're self-hosting, this would reduce electricity costs. It's also better for the environment.
This would be an interesting project.
Unless you're self-hosting, I don't think that's worth your time.
Interesting video!
Thanks!
The real danger - she'll leave you
She was always a mutable borrow. Ring ceremony not performed, ownership was never assigned.
I enjoyed this video.
Thanks.
May I ask why you didn't use web::Data, which is the standard way to share state in the Actix Web framework?
Because state is not being shared.
@@masmullin I am using a Mutex around users stored as a struct for shared data, which works fine. I have not benchmarked my code.
@13:00 area - "normal - no leaps between runs" - now that confuses me. When I'm feeling "normal", I go a long period without having the runs. Seem like the original code had the exact opposite problem - performance was "impacted" ... ba dum bum
How did you discover hetzner?
Random comment on Reddit.
@@masmullin Really cool stuff, let us know if you find more hosting platforms, because you got yourself a new subscriber.
I really liked this video - I enjoyed the tangent and the excuse to touch many topics. keep it up
You might have leaked your IP just FYI.