► What should I test next?
► AWS is expensive - Infra Support Fund: buymeacoffee.com/antonputra
► Benchmarks: ua-cam.com/play/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn.html&si=p-UOaVM_6_SFx52H
NGINX vs Pingora? Pingora has a nice story from Cloudflare
Here's an interesting experiment: logging platforms.
Elastic (EFK) vs Loki + Grafana
It's interesting... what is more expensive? AWS, Azure, GCP, or OCI?
ocaml vs elixir vs go
Prisma vs Drizzle
Thanks to that one guy who commented asking for this.
😊
pls do dart angel next
jk, don't actually do it
We asked and we received.. Thank you so much!
my pleasure!
U the best
You sir, are criminally undersubbed!
haha ❤️
This is honestly amazing good work 👍
Thank you! I'll do Traefik in a few days.
Last year I almost migrated from Apache to Caddy. Then I remembered the golden rule: don't follow the hype
true
but that is how new job opportunities are created... now you would have been working on migrating back
Migrate from Apache to Nginx though ;P
@@MelroyvandenBerg Apache is a very underrated web server. The event MPM does a very good job handling thousands of requests/s. I use it as a load balancer, static file server, and reverse proxy.
I really love this kind of video. Multiple web servers competing head-to-head is bound to be exciting! 👍👍👍
💡 That is really interesting. I could swear it's the other way around; of course I believe your benchmark. My server was attacked via DoS, and I swapped my NGINX reverse proxies on the host and in Docker for Caddy, and my server was able to handle requests again and stabilised.
You may be able to find my blog post on this under my name, where I also compared the configs I was using. Probably my NGINX was just misconfigured.
Oh, I just finished your video and saw the CPU metric. My simple VPS has a reduced CPU and ran into CPU locks, which could explain its behavior.
In my case, Caddy was a good choice.
thanks for your feedback! i'll try to find your blog post
Nice, next video please compare Nginx, HAProxy, and Envoy as reverse proxies.
ok will do, but first i need to update ruby test with native implementation without rails
In my testing nginx was significantly faster, but who knows with this better testing
@@AntonPutra and traefik pls
Nice, thanks for the benchmarks :). We were just discussing whether we want to switch from nginx to caddy (due to the easier SSL management, but latency is more important to us).
Still hoping for Sqlite vs Postgres :D
Banana to apple for SQLite vs Postgres, bro. It all depends on what you want to do with this one.
yes, i'll do sqlite soon, it's just a weird comparison, but i got a lot of requests to do it.
SQLite vs Postgres?? what! dude both are completely different! SQLite is file-based while Postgres is client-server! How can you compare both!
@@lemonardot Both are relational SQL databases, not what I would call "completely different"! Why would you not want to compare them just because they follow different architectures? Both can be used on the same server as the backend, doing the exact same job (which happens very often, same with MySQL; I wouldn't compare them in the use case where the DB is running on its own server).
It's like saying "you can't compare .NET to Node.js", etc.
@@lemonardot Both SQLite and PostgreSQL are relational databases, so comparing them isn't totally off, especially for things like ERP or bookkeeping applications. Sure, SQLite is file-based and runs on the same machine, which can be a plus or minus depending on the setup, but both handle core database tasks like writes, updates, and selects.
They definitely differ on things like permissions, client-server setup, advanced features Postgres provides, and latency, so you could argue pros and cons there. But for many use cases, SQLite’s simplicity is actually an advantage, and plenty of apps could use it without giving up anything essential.
It's surprising how many companies still pay for MS SQL or Oracle when SQLite would work fine and make things way simpler to deploy and maintain. Yeah, it doesn't support things like stored procedures, but how critical are those really? When they are truly required, one can often use Python or Bash scripts to achieve the same goal.
Cool, can you test traefik?
yes, next i have to refresh the ruby video using the native implementation, and then i'll do nginx vs traefik
@@AntonPutra thank you for all the work!
@@AntonPutra I think you can close the chapter after traefik. Nginx seems to be the king, still
@@AntonPutra super!
Traefik vs nginx please
Thanks :)
It makes sense that C is much faster than go.
but caddy is the most "advanced" based on the landing page 😊
I wouldn't say it does; Caddy isn't about performance but rather about a smoother DX for devs who aren't too ops-savvy.
@@Nick-yd3rc Nginx config couldn't be easier... If a dev can't configure a basic reverse proxy in Nginx, he shouldn't be a dev.
@@johnswanson217 There is weirdness around rewrite rules, location matching, aliasing, root folder / try_files. I've definitely spent too much time configuring nginx on occasion because of its confusing nature.
@@johnswanson217 I think that depends on how basic that config should be. Caddy is just a couple lines in your Compose file, and your team’s all set straight up to staging a PoC. I know people have been running Caddy in production, but I wouldn’t. I do only Nginx, HAProxy and AWS API Gateway in production. Some goodies are available only in Nginx Plus. Setting up Nginx the right way for production isn’t easy at all. But this depends on your requirements of course.
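For reference, roughly what the two configs look like for a basic reverse proxy; this is a minimal sketch with placeholder names (example.com, app:8080), not the configs used in the benchmark:

```
# Caddyfile — TLS certificates are obtained automatically for a real domain
example.com {
    reverse_proxy app:8080
}

# nginx equivalent — certificates have to be provisioned separately
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://app:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
The functional difference is small; the debate above is mostly about how much of the second block you want to own and maintain.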
Great Video :)
A comparison between nginx and openlitespeed would be quite interesting.
ok noted!
Would love to see a traefik comparison here as well.
Seems like Go could be causing some fun here.
yes, i'll do nginx vs traefik in a few days
can you do this with haproxy, and envoy added in.
Since envoy is seemingly the standard for modern service meshes and haproxy is the old school approach to such a system.
Would be nice to see how much performance we've lost over time
yes i'll do envoy and haproxy soon
nice, Nginx would always be a good choice for reverse proxy
yeah, it's used by at least 30% of all websites
TLDR: Nginx about 2x faster than Caddy.
I think it's best to start with Caddy and switch to Nginx after 10k RPS.
makes sense
Or just start with Nginx, why even switch?
@@spell105 Because I have already done that and switched from Nginx to Caddy. Nginx is not worth spending the time configuring if the reverse proxy itself is not the bottleneck. There are just too many configuration options to read, understand, and configure even for the simplest things. Caddy does everything you want by default, with much better and shorter documentation.
Why not just use nginx? It’s battle tested and proven
@@RandomShowerThoughts Use nginx if you have prior knowledge. Caddy is stable as well, just less well known.
PR by Mohammed to improve Caddy - github.com/antonputra/tutorials/pull/336
Thanks for mentioning me. :)
You could also try to increase threads in the thread pool, but since you only used two vCPUs I didn't increase it further. But feel free to play around with the settings and see if you get more out of it.
thanks for the help! i tried 16 and 32, and the result was pretty much the same
@@AntonPutra I see, no problem! I think that makes sense when we are bottlenecked by 2 vCPUs. I'm wondering how far we can actually stress-test Nginx. What about an m7a.16xlarge? And set the threads in Nginx to 64. And maybe fine-tune the kernel. Instead of starting small with 10, 100, 1000 requests per second, you can save yourself time & money by starting from a higher throughput, let's say 10k requests per second, and increasing aggressively from there.
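For context, the kind of knobs being discussed; a sketch with assumed values (64 threads, m7a.16xlarge sizing), not the benchmark's actual nginx.conf. Note that nginx's thread pool is used for blocking file I/O (aio), not for proxying itself:

```
worker_processes auto;            # one worker per vCPU (64 on an m7a.16xlarge)
worker_rlimit_nofile 200000;      # raise the fd limit for large connection counts

# thread pool for blocking file I/O — the "threads" setting mentioned above
thread_pool default threads=64 max_queue=65536;

events {
    worker_connections 16384;
}

http {
    aio threads=default;          # offload blocking reads to the thread pool
}
```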
Hey, thanks for the video. As a Java dev, I've always wondered about the performance characteristics of Java web servers.
- Apache Tomcat vs Eclipse Jetty vs Undertow
That’s what we’ve been waiting for 🤩
I'll do nginx vs traefik in a few days
@@AntonPutra Thx man. I very much appreciate the beauty of the benchmarks you've been doing ❤️
Comparing Nginx and Pingora can help determine whether migrating from Nginx to Pingora is worthwhile. Although Pingora is a framework, developing test requirements for it should not be difficult.
okay, i keep getting requests for Pingora, so i'll do it soon
Are the gzip compression levels the same for both? It can make a big difference...
level 3 for nginx and whatever caddy uses by default for gzip
github.com/antonputra/tutorials/blob/main/lessons/221/nginx/nginx.conf#L96
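For reference, where those two knobs live; the Caddy level shown here is hypothetical and only illustrates where you would pin it to match nginx — by default Caddy picks its own level (the linked nginx.conf is the authoritative nginx value):

```
# nginx (http context)
gzip on;
gzip_comp_level 3;

# Caddyfile (inside a site block) — omit the level to use Caddy's default
encode {
    gzip 3
}
```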
What would perform better as a reverse proxy for Kubernetes, nginx or traefik? Btw, nice video as always!
actually, in one of the next videos, i'll do nginx vs traefik. in the first test, i'll compare them as standalone proxies, and in the second test as ingress controllers
@AntonPutra haha wow that's amazing! I am surprised to hear that these tests cost that much.. Can I support you in any way? Your videos really help me learn more about these technologies in an enjoyable way
probably nginx, since traefik is written in Go as well. And one could assume that nginx, since it has existed forever, has been optimised to the core. Basically, I doubt even the Rust-based proxies can beat it, let alone Go-based proxies.
Awesome job. Traefik is a must for next test.
thanks! yeah, I'll do traefik in a few days!
@@AntonPutra oh, I found out that you already tested traefik against nginx and caddy (bgcfEW_Yh7E) a year ago
you are a legend
😊
another great video Anton ❤
btw fireship mentioned you in one of his latest videos on his other channel, Beyond Fireship. Video title: JavaScript performance is weird (1:42)
cool thanks!!
Here is an idea for a very interesting, though a bit difficult to set up, test: compare GC pauses & throughput of ZGC vs G1GC vs Shenandoah on a recent Java version
ok, interesting. i may do it in the future if i get more requests like this; it's very niche
Gold content, as always
Nginx is passing 25K requests to the apps behind it (12.5K each), Caddy is passing 4K requests to the apps behind it (2K each). That explains why the apps behind Nginx are higher on CPU usage. They are simply processing more requests (about 6 times more).
At the end of the test, sure, but in both the Apache test and this one, when Nginx processes the same amount of requests at the beginning of the test, CPU load on the applications is much higher. The most likely explanation is this, but I'll test it in the next video - github.com/antonputra/tutorials/pull/334
At the end, you mean. But the CPU usage was higher at the beginning of the test. So there's something else going on behind the scenes
@@faysh88 Oh yes I see what you mean. The end results are somewhat explainable. But they should match for the first half where Caddy was keeping up with Nginx in req/s
it depends on what the connection between the load balancer and the worker is.. HTTP/2 or 1.1?
You could also try HTTP/3
I came, listened to the methodology, appreciated how good the video is, checked the results, and liked the video. This is the way.
Hi Anton, a couple of questions related to the certs:
@2:05 you mention you've created a self signed cert authority
where did you create the CA?
what did you use for creating the CA?
I think the 2 certs on the servers, Nginx and Caddy, are TLS certs to allow the clients to verify that the servers are who they say they are.
Why do you deploy a cert in the K8s clients? Is it for mTLS?
i've been using cfssl for pki for the last couple of years.
1. Generate CA
2. Generate 2 certs for each server using that CA
3. Use the CA directly in the client to verify the proxies' certs, since they were issued by the self-signed CA
In the github repo, in the client, you can actually see the CA itself with the private key. You don't actually need the key in the client; it's just that to create a tls 'kind' secret in k8s you're required to provide a key as well.
I don't use mTLS in this case.
CA + key - github.com/antonputra/tutorials/blob/main/lessons/221/tests/1-test/1-tls.yaml#L9-L10
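For anyone wanting to reproduce that flow, the cfssl commands look roughly like this; file names (ca-csr.json, ca-config.json, nginx-csr.json) are placeholders, not the exact files from the repo:

```
# 1. generate the CA (ca-csr.json describes the CA subject)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# 2. issue a server cert for each proxy, signed by that CA
cfssl gencert -ca ca.pem -ca-key ca-key.pem \
  -config ca-config.json -profile server \
  nginx-csr.json | cfssljson -bare nginx

# 3. create the k8s secret (the 'tls' type requires both cert and key)
kubectl create secret tls nginx-tls --cert=nginx.pem --key=nginx-key.pem
```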
NGINX vs Pingora?
ok added to my list!
The new cloudflare baby vs grandpa 😂 would love to see this
that one is probably most interesting, since cloudflare actually uses it.
Isn’t pingora a framework rather than a complete web server?
River would be the equivalent. Pingora is just a library.
Wow! Cool. I was hoping for more from Caddy actually :( Thank you for your work and such helpful information!
Thanks for the video!
Do you have any video on the tests you do, and how they are implemented? If not it would be nice to have a video like that so that we can see how can we test such applications etc.
i have a few. you can search for 'prometheus' on my channel, but i'm planning to refresh some of them soon.
I think HAProxy would be a good candidate for next benchmark
yes, i'll do it in a few days, standalone as well as with an ingress controller
Hi, great video! I wanted to ask if you can recommend programs/services to generate load outside of the eks/kube ecosystem, for smaller apps. Also, are you using Prometheus to gather all this data to visualize in Grafana, or something else? Thanks in advance 🙏
Perhaps nginx compresses the files, which the nodes then need to decompress, leading to higher CPU usage?
this is most likely the issue (absence of keep alive on upstream) - github.com/antonputra/tutorials/pull/334
You are perfect !!!!!
thanks for comparison videos
my pleasure! 😊
Thanks for doing the benchmark. Can I use Nginx on Windows Server and achieve the same great performance it has on Linux? More importantly, can we use Nginx in production for free?
for sure, you can use it for free. i'm not an expert in windows, but most likely you can 😊
Nice, please MariaDB vs MySQL (or vs Postgres). Best channel!
Very good video. I'd suggest you to compare nginx and openlitespeed.
ok, openlitespeed.. will do in a few weeks!
Hi Anton, Can you do a time series performance test next, please. Timescale vs Influxdb and any other good databases. May include the size of the data folder on disk at the end of the test.
yes, that may actually be interesting! i'll put it at the top of my list!
NGINX vs Traefik?
NGINX seems very useful as both a web server and a proxy.. But traefik is a lot easier to use as a proxy/ingress.. I think it will result in the same thing as Caddy, but I'm curious, have you done this one before?
i have, but i'll make another one soon, comparing standalone reverse proxies as well as ingress controllers
About the apps behind nginx experiencing higher CPU usage, one factor could be how nginx handles keep-alive connections with the application. I think by default it keeps the connection alive for 1000 requests and then recycles it.
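For reference, keep-alive from nginx to its upstreams is opt-in; a minimal sketch of the directives involved (addresses and pool size are made up). Without this, nginx talks HTTP/1.0 to upstreams and closes the connection after each request:

```
upstream backend {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    keepalive 32;                  # pool of idle keep-alive connections per worker
    # keepalive_requests 1000;     # default: a connection is recycled after 1000 requests
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keep-alive to upstreams needs HTTP/1.1
        proxy_set_header Connection "";  # don't forward "Connection: close" upstream
    }
}
```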
Hi, thanks for the video. The results are somewhat of a shock for me.
I also think you should update nginx vs apache with the new changes and the new version of nginx.
Although you showed that Apache as a reverse proxy is better than nginx. 🤣
For the next video I would say Apache vs Caddy, since the king is still nginx, just to close out the tests of the 3 well-known servers.
About which server is best for beginners: for me it is not Caddy. It is, and always will be, Apache, because with no config it is set up by default to serve static files, and it's even easier when using PHP, since you don't need to set it up as a reverse proxy with FPM like nginx or caddy.
thanks for your feedback. with each new benchmark, i accumulate more insights and improvements from experts in the field, so i'll definitely refresh the benchmarks when i get enough improvements. i say caddy is easier for beginners mostly due to its built-in ssl/tls resolvers; with nginx and apache, you need an extra step.
@@AntonPutra Hi, about tls/ssl on apache: the last time I configured it on a VPS, as I remember, you only need to point at the crt and private key files and enable SSL mode.
And you also said you'd recommend caddy for small projects, which I don't agree with, for several reasons: on the majority of shared hosts they give you apache, not caddy, and you don't need to worry about config files or even ssl.
Apache on a VPS: fewer config files, and the defaults are enough; even with an .htaccess file you can make small changes without messing up the main config file. Apache in a Docker container is even simpler: just point at the document root, and with PHP there's no need for an extra container (see the sketch below).
I would switch to caddy only if I were using nginx and caddy were faster than nginx, but you showed nginx > caddy. For small projects and low traffic, apache is enough; no need to complicate things with caddy or nginx. Keep it simple. ☺
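For what it's worth, a minimal sketch of the "no extra container" point, assuming the official php Apache image; the tag and the ./public/ path are placeholders:

```
# Dockerfile: Apache + mod_php in a single container
FROM php:8.3-apache
# this image's document root is /var/www/html; just copy the site in
COPY ./public/ /var/www/html/
EXPOSE 80
```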
Great video. Congrats!
Can you elaborate on the testing software? (the software used to create all the requests)
Tks!
thank you! sure, i use open-source prometheus + grafana to visualize metrics
@@AntonPutra I am not referring to the monitoring part... but to the software that is creating the requests.
I really like your videos! Can you please compare nginx and haproxy?
yes i'll do it in a few days
What program do you use to visualize the results?
Thank you. Awesome 👍
my pleasure, i'll do more tests like these with open-source web servers
awesome, bro
thanks! 😊
I'd be interested in seeing Nginx vs HAProxy, as HAProxy is a competing proxy to Nginx
yes i'll do haproxy in a few days
I use Caddy for just about everything when I need a simple reverse proxy; I just wish they could optimize it further.
If only Nginx would have the ease of configuration like Caddy it would be perfect.
makes sense. if someone could help optimize caddy, i'd gladly refresh this benchmark!
what tool, do you use to perform benchmark?
Nginx might be sending more request headers to the proxied app, which would explain higher CPU usage
unlikely, but I'll test and see if it's true for the next nginx vs traefik comparison
Could you please compare Caddy with Traefik? They are the trending new options nowadays
Hi @AntonPutra, I love the video. Please can we get a tutorial on how you developed your great Grafana dashboard to monitor your web servers and client app, but this time with Nginx on Kubernetes (EKS, AKS, GKE). Thanks. Love the dashboard.
Thank you! I'll do that in the future. This time, I deployed Nginx and Caddy on their own EC2 instances (m7a.large).
@@AntonPutra Thanks.
Awesome test! Whenever you can, can you please test nginx, traefik, and haproxy as reverse proxies?
yes, i'll do Traefik in a few days
@AntonPutra Since in the reverse proxy test Nginx has much higher throughput, the application VMs behind it are processing a much larger number of requests than the application VMs behind Caddy.
I guess this might be the explanation for the unusually high CPU usage.
no, i was referring to it at the same level of RPS. looks like the issue is with the upstream keep-alive setting - github.com/antonputra/tutorials/pull/334
well, use caddy for early development and move back to nginx when mature
makes sense
Lol so much for people who were hyping caddy
i hope someone can optimize caddy, and i'll be glad to refresh this benchmark! But i think Caddy is mostly great for beginners
@@AntonPutra
At the end of the day, caddy is written in Go, there is an upper threshold for how well it can compete with the state-of-the-art C code.
The best part of Caddy is their website and promises. What they deliver is different. For example, SSL certificates are not automated; it's a not-implemented feature. So it's still better to stay with well-known and stable nginx than with hyped new tech
Can you also compare it with Litespeed?
It's just not possible for a Golang app to beat a C++ (or Rust) one
maybe, but it mostly depends on the implementation
Nginx is C.
Thanks
my pleasure!
@11:44 @AntonPutra maybe heavy compression/decompression imposed by nginx on the apps behind it?
unlikely, the payload size is only ~75 bytes, and nginx does not apply compression on such small sizes. but i'll double-check before the next benchmark with traefik. also, compression is performed on the nginx side, so nginx CPU usage would go up, not the applications behind it.
I don't know why, but when I watch it, it feels like I'm playing DOOM.
I thought that you would add traefik as well; there is Coolify, which migrated from caddy to traefik, maybe performance is the main reason 🤔
Could you please test a comparison with Spring Cloud Gateway (the WebFlux-based version; I can help with configuration if needed) and maybe traefik.
ok interesting will do at some point
Can you compare JSON vs SQL?
you mean like document database vs relational? yeah, i'll do postgres vs mongo soon
What about Go vs Bun vs Node vs php-fpm (Laravel or another similar framework)?
Caddy all the way!!!
just got a PR with caddy improvement - github.com/antonputra/tutorials/pull/336
what about Nginx vs Pingora
added to my list
The perfGOAT returns
😃
Please Nginx vs openlitespeed
ok, interesting suggestion
@AntonPutra thanks 😊
Can you also check traefik?
Maybe apache2, nginx, caddy, traefik 😮😮
yes, i'll do Traefik in a few days, i did Apache2 about a week ago
Can you please test HAProxy, Nginx and Traefik?
yes i'll do haproxy soon
Hey! Could you do one with spring virtual threads vs no virtual threads vs node.js?
yes i can, i'll do more Java soon!
@@AntonPutra great, thanks! looking forward to it!
Caddy has a big following in self hosting space. Many users running services from home don't have needs beyond what Caddy provides, and users don't need to become configuration experts. Nginx always was and probably will remain the recommended pro tool
agree!
With something like Nginx Proxy Manager, the need for being an expert is also not there.
What about traefik vs nginx?
Maybe nginx vs AWS ALB someday?😅
actually, i might test it.. nginx ingress controller vs aws load balancer controller based on application lb. both layer 7 load balancers, will see
@@AntonPutra Thank you very much. I am very curious how many GET requests an ALB can handle. AWS says millions/unlimited per second, but I don't believe marketing phrases.
Do envoy next!
soon!
nginx vs traefik. both as ingress controller (!) with default config and hpa max 3
yes, i'll do it in a few days, standalone proxies as well as ingress controllers with default settings
Anton Putra sounds like an Indonesian name :D
yeah i get it a lot :)
HaProxy please
ok i'll do haproxy in a few days
I've been thinking of replacing Apache with either Nginx or Caddy in some cases. This points me in the right direction. Thanks 👍
wait a few days, I'll do Nginx vs Traefik 😊
nice
thanks!
nginx vs traefik and vs litespeed plz :D
I would like to know who disliked this video; at the time of writing I can see 2 👎... those 2 guys are so ridiculous!!!!
😊
FastAPI vs Expressjs 🙏
ok will do soon!
Bro please test pingora next
ok, thanks! i'll include it in the next poll
Do Pingora vs Nginx
Comparing a web server based on a non-GC language vs a GC language... automatic result
Actually, you _increased_ the server load by using the transform encoder in Caddy for logs. It's much more efficient to write JSON logs to disk. This is not a fair comparison.
thanks for the feedback, i'll verify your point
@AntonPutra also what you're basically testing is "which server has more efficient gzip". It's a huge increase in CPU usage to enable it. In production, you would probably use precompressed files with Caddy's file_server to avoid the CPU cost per request.
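For reference, the two adjustments suggested here, sketched as a Caddyfile site block; the domain and paths are placeholders, not the benchmark's config:

```
example.com {
    # write access logs as JSON straight to disk instead of using the transform encoder
    log {
        output file /var/log/caddy/access.log
        format json
    }

    # serve pre-compressed assets so the gzip cost isn't paid on every request
    root * /srv/www
    file_server {
        precompressed gzip
    }
}
```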
Interesting...
performance isn't everythingggggg
true, Caddy still has a valid use case, especially for anyone who doesn't want to spend time optimizing infrastructure.
FACE CAM!
😊
Test traefik v3
in a few days
Nginx vs Envoy would be good, both written in C/C++.
will do
Both nginx and caddy are bad at reverse-proxying and load-balancing. Haproxy ftw
Really? Nginx Ingress is the most popular ingress controller as far as I know.. which is a reverse proxy by nature
Nginx vs haproxy for reverse proxy
thanks, noted!
do one with haproxy and nginx
sure, i may include it in nginx vs traefik video