Nginx vs Apache Performance

  • Published Feb 9, 2025

COMMENTS • 285

  • @AntonPutra
    @AntonPutra  3 місяці тому +6

    🔴 To support my channel, I'd like to offer Mentorship/On-the-Job Support/Consulting (me@antonputra.com)

    • @unom8
      @unom8 3 місяці тому +3

      NATS vs Kafka
      Kafka vs IBM MQ

    • @MDFireX5
      @MDFireX5 3 місяці тому

      this guy has problems ... where's fast api?

    • @MusKel
      @MusKel 3 місяці тому +5

      NATS vs Kafka vs Redis streams, 😁

    • @davidmcmartins
      @davidmcmartins 3 місяці тому +1

      Node.js vs Elixir (Phoenix framework)

    • @tombyrer1808
      @tombyrer1808 3 місяці тому

      Nginx vs nodejs/deno/bun? (only node would be fine; we know how the other 3 compare)

  • @NaourassDerouichi
    @NaourassDerouichi 3 місяці тому +89

    Please just accept my gratitude for all the benchmarks you're doing and making public. Also, keep doing whatever tests you find relevant. Cheers!

  • @dimasshidqiparikesit1338
    @dimasshidqiparikesit1338 3 місяці тому +122

    nginx vs caddy vs traefik please! and maybe try pingora?

    • @dimasshidqiparikesit1338
      @dimasshidqiparikesit1338 3 місяці тому +6

      and IIRC, nginx drops requests when overloaded, while caddy tries to answer all requests by sacrificing response time

    • @ayehia0
      @ayehia0 3 місяці тому

      would be so cool

    • @AntonPutra
      @AntonPutra  3 місяці тому +12

      will do!

    • @amig0842
      @amig0842 3 місяці тому

      @@dimasshidqiparikesit1338 why Pingora when there is River?

  • @inithinx
    @inithinx 3 місяці тому +137

    Please include caddy next time! I wonder how golang works in this case!
    Also, next time try to do brotli compression as well.
    Cheers!

    • @TheChrisR12
      @TheChrisR12 3 місяці тому +7

      It would be interesting to see how caddy compares to Nginx and apache.

    • @chu121su12
      @chu121su12 3 місяці тому +5

      caddy, zstd compression, h3

    • @PragmaticIT
      @PragmaticIT 3 місяці тому +2

      Caddy vs nginx please

    • @MariosMich
      @MariosMich 3 місяці тому +10

      traefik vs caddy vs nginx: the ultimate benchmark

    • @hermes6910
      @hermes6910 3 місяці тому +1

      I agree, caddy would be very interesting.

  • @lucsoft
    @lucsoft 3 місяці тому +102

    Traefik and Caddy!

    • @almaefogo
      @almaefogo 3 місяці тому +5

      1 vote for this, and compare them to nginx

    • @severgun
      @severgun 3 місяці тому

      traefik is not a web server

    • @almaefogo
      @almaefogo 3 місяці тому

      @@severgun that's true, the comparison I wanted is as a reverse proxy instead of a web server

    • @HVossi92
      @HVossi92 3 місяці тому +3

      He already did a performance benchmark between traefik and caddy

    • @almaefogo
      @almaefogo 3 місяці тому

      @@HVossi92 yeah, but I wanted to see how it compares to nginx, since that's what I'm using right now. I have been thinking of switching to traefik because I have been having some strange issues that I can't really pinpoint, and I was wondering if it could be something to do with nginx

  • @nisancoskun
    @nisancoskun 3 місяці тому +64

    Adding "multi_accept on" directive to nginx config might help availability on high loads.

    • @inithinx
      @inithinx 3 місяці тому +1

      Is this not the default behaviour?

    • @MelroyvandenBerg
      @MelroyvandenBerg 3 місяці тому +16

      @@inithinx Nope. You need to fine-tune not only your database, like I told Anton before, but also Nginx

    • @inithinx
      @inithinx 3 місяці тому

      @@MelroyvandenBerg makes sense.

    • @AntonPutra
      @AntonPutra  3 місяці тому +9

      Thanks! I'm actually going over the NGINX configuration right now, making sure it's properly optimized for the next test!

  • @ValerioBarbera
    @ValerioBarbera 3 місяці тому +5

    I was searching for this kind of comparison for years.

  • @AndriiShumada
    @AndriiShumada 3 місяці тому +30

    these Grafana charts are kinda ASMR for me :)

  • @nnaaaaaa
    @nnaaaaaa 3 місяці тому +16

    I've run nginx as a reverse proxy in the 30K r/s range for production workloads; the way nginx handles TLS is kind of naive and could be improved. Basically, there is almost always an uneven distribution of work across worker processes, and it dogpiles with TLS overhead. Limiting the TLS cipher suites used can help mitigate this, so there is less variance in how long TLS handshakes take in aggregate. Also, multi_accept on is your friend.
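
    A hedged sketch of the kind of cipher-suite trimming described above (the list is an example, not a recommendation from the video):

        # Restrict TLS to a small, uniform set of AEAD suites so handshake
        # cost varies less across worker processes.
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305;
        ssl_prefer_server_ciphers on;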

    • @AntonPutra
      @AntonPutra  3 місяці тому +6

      Thanks for the feedback! I'll see if I can monitor each individual worker/thread next time

    • @nnaaaaaa
      @nnaaaaaa 3 місяці тому

      @@AntonPutra this mostly happens when dealing with production loads where you have a diverse set of TLS client implementations; not everyone will choose the same cipher suites. This is an example of things often omitted from synthetic benchmarks because people just don't think of it.

  • @rajeshnarayanappa4639
    @rajeshnarayanappa4639 3 місяці тому +4

    Amazing tests. You got a subscriber for this bloody good content

  • @lukavranes2921
    @lukavranes2921 3 місяці тому +8

    another amazing performance benchmark and just the one I wanted to see rerun from your old videos. many thanks and great job
    I'm still curious about the results tho. I'm really looking forward to seeing someone explain why nginx crashed in the last test
    also I think that apache's compression algorithm is causing the high cpu usage in the first 2 tests and it would perform more like nginx if compression was off (but that's unrealistic to find in the real world)
    many thanks again and looking forward to the next x vs y video, this second season is very informative

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      thank you! i got a couple of PRs to improve apache and nginx. if they make a significant improvement, i'll update this benchmark

  • @MattiasMagnusson
    @MattiasMagnusson 3 місяці тому

    This was really interesting. I used to run Apache a lot a few years ago, and like you, I switched for the huge performance benefit of Nginx (in most cases, apparently). I don't do any load balancing with nginx or apache now, but this was really interesting to me, as HA is always something I have been looking into but never really managed to do (lack of hardware and knowledge in my homelab). Earned the sub, well done!

  • @kamurashev
    @kamurashev 3 місяці тому +19

    Cool, but Apache (nginx probably too) has so many things to configure, e.g. prefork/worker MPM, compression level, etc.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      true, i do my best to make these tests fair

  • @ReptoxX
    @ReptoxX 3 місяці тому +8

    Just searched yesterday if you already uploaded a benchmark between nginx and caddy and you just now uploaded nginx vs apache. Great starting point :)

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      I'll make nginx vs caddy vs traefik soon

  • @karsten600
    @karsten600 3 місяці тому +1

    Valuable benchmarks! Tip: There is this insane resonance on the audio of this video (and probably more of your videos), so when you pronounce words with s, I can feel a brief pressure in my ear from my brain trying to regulate the intensity. 😅

    • @AntonPutra
      @AntonPutra  3 місяці тому

      thanks for the feedback, i'll try to reduce it

  • @zuzelstein
    @zuzelstein 3 місяці тому +6

    Elixir/Gleam vs nodejs/bun/deno. Really interesting to see where Erlang VM shines.

  • @fumped
    @fumped 3 місяці тому +5

    nginx as reverse proxy with static content caching and apache as dynamic web server is a killer combo!

  • @guoard
    @guoard 3 місяці тому +14

    Great. please do the same test for Nginx vs Haproxy too.

  • @antonztxone
    @antonztxone 3 місяці тому +1

    There should definitely be caddy and traefik in these tests! Thanks for this kind of video!

    • @AntonPutra
      @AntonPutra  3 місяці тому

      I'll do those two as well soon

  • @GameBully2K
    @GameBully2K 3 місяці тому +1

    Amazing test. I did the same test with Grafana k6, but between Nginx and OpenLiteSpeed.
    Your test definitely explains why CyberPanel is the most performant of the open-source hosting software I tested. It uses a combination of Apache and OpenLiteSpeed (I think they do a reverse proxy with Apache and serve the website using OpenLiteSpeed).

  • @marknefedov
    @marknefedov 3 місяці тому +10

    We experienced an interesting issue with a Go application and Nginx when migrating from Python to Golang: Nginx uses A LOT more TCP packets to communicate with Go apps. At first it overloaded a load balancer cluster and then the application itself. We still haven't figured out what happened, because we were also in the process of migrating to Traefik, but it looks like Go and Nginx really want to split requests into a lot of packets, since most of the load came from TCP reassembly and there were a lot more sockets waiting for ACK than usual.

  • @Future_me_66525
    @Future_me_66525 3 місяці тому +1

    Love it with the cam, keep it up

  • @mohammadalaaelghamry8010
    @mohammadalaaelghamry8010 3 місяці тому +1

    Great video, as always. Thank you.

  • @MAK_007
    @MAK_007 3 місяці тому +2

    love u anton

  • @AIExEy
    @AIExEy 3 місяці тому +14

    nginx vs pingora please! great content, keep up the good work!

  • @konstantinp440
    @konstantinp440 3 місяці тому +1

    Thank you very much for your hard work 😊

  • @jerkmeo
    @jerkmeo 3 місяці тому

    love your performance test....you've saved me a lot of time on product selection!

  • @davidmcken
    @davidmcken 3 місяці тому +8

    Given my exposure to both apache and nginx, this lines up. If you want something to serve static content, nginx is king. I am concerned about what is happening around that 80%, though. The way I see them, nginx is like a scythe, able to cut through a metric boatload of requests, while apache is like a swiss army knife with a boatload of tools available for everything that has ever come up in my travels (this is where I sense apache's slowness comes from: its versatility). I guess the car analogy is that nginx can do a 1/4-mile straight faster, but apache could do a rally better, as it's more adaptable.
    I have a non-compliant endpoint that uses an api_key HTTP header, and it took effort just to get nginx to leave it alone; I then route that path to an apache container where it gets fixed.
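
    For context, this is roughly how nginx can be told to leave a header like api_key alone (a sketch, not the commenter's actual setup; header names with underscores are dropped by default):

        server {
            underscores_in_headers on;   # accept headers such as "api_key"
            ignore_invalid_headers  off; # pass non-standard headers upstream
            location / {
                proxy_pass http://backend;   # placeholder upstream
            }
        }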

    • @MattHudsonAtx
      @MattHudsonAtx 3 місяці тому +1

      i have found i can make nginx do everything apache does, including serve php and all that application-layer stuff people do with apache. it's not especially advisable, though.

    • @davidmcken
      @davidmcken 3 місяці тому

      @@MattHudsonAtx the invalid header issue I mentioned, I haven't found a way to handle with nginx; at best I can get it to pass the header through for something else to deal with, using the ignore_invalid_headers directive.
      Given I was trying to stick to just Nginx Proxy Manager, which handles everything else, I would love to know an alternative way.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      thanks for the feedback!

  • @TweakMDS
    @TweakMDS 3 місяці тому +8

    I wonder if apache and nginx use a different default compression level. The test results hint at this (even though both state 6 as the default in their docs), and diminishing returns at a higher compression level might be hurting apache in this test. Some improvement might come from skipping compression on files smaller than 1 KB (which I think is a best practice), as well as setting the same gzip compression level on both services.
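
    A possible way to pin both sides to the same settings, if one wanted to test that hypothesis (values are assumptions, not the video's configs):

        # nginx
        gzip on;
        gzip_comp_level 6;      # set the level explicitly to match Apache
        gzip_min_length 1024;   # skip responses smaller than ~1 KB

        # Apache (mod_deflate)
        DeflateCompressionLevel 6
        AddOutputFilterByType DEFLATE text/html text/css application/javascript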

    • @AntonPutra
      @AntonPutra  3 місяці тому +2

      thank you for the feedback! i'll double check compression levels next time

  • @leonardogalindo2068
    @leonardogalindo2068 2 місяці тому +1

    Please make a video on how to measure a microservice's resource usage, e.g. how to benchmark a python service, to calculate cloud cost

  • @alekc7292
    @alekc7292 3 місяці тому +1

    very good, and the diagram for the test scenarios is beautiful and understandable

  • @SimDom3
    @SimDom3 4 дні тому

    OpenResty is supposedly more performant than Nginx in serving dynamic webapps -- any chance of Nginx vs. OpenResty?
    Huge thanks for the amazing, regularly updated benchmarks.

  • @YourComputer
    @YourComputer Місяць тому

    Would love to see the same server benchmarks but with PHP.

  • @milendenev4935
    @milendenev4935 3 місяці тому +1

    Ok, thank you very much for really providing these insights! I'm in the middle of making my own reverse proxy, and this is some key data. I think I might have made an RP better than both of those. 😏

    • @AntonPutra
      @AntonPutra  3 місяці тому

      my pleasure, they have a lot of built in functionality

  • @NDT080
    @NDT080 3 місяці тому +7

    Some sort of freak: - Add IIS to the test

    • @chralexNET
      @chralexNET 3 місяці тому

      A lot of organizations (corporations mostly) use IIS though, so even if IIS is bad then it would still be worthwhile to show how bad it is.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      ok interesting, i'll try it out

  • @Blink__5
    @Blink__5 3 місяці тому +1

    i know a lot of people already asked for this, but i also want to see Traefik and Caddy

  • @HeyItsSahilSoni
    @HeyItsSahilSoni 3 місяці тому +2

    Looking at the 85% CPU breakpoint, one thing I could think of was some form of leak. Maybe try slowing down the rate at which requests increase; it might show different results.

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      thanks, i'll try next time

  • @simon3121
    @simon3121 3 місяці тому +1

    Your English is very good. Not sure whether your pronunciation of 'throughput' is a signature move or not. I noticed it in multiple videos.

  • @MelroyvandenBerg
    @MelroyvandenBerg 3 місяці тому +7

    Again Anton, great test, but you forgot to fine-tune the servers again, just like in the database test. You shouldn't use the defaults.

    • @_Riux
      @_Riux 3 місяці тому +7

      Why not? Don't you think most people will use the default settings? Imo this way of testing is probably the most representative of real world performance. Of course it's also interesting to see how far you can optimize, but this is definitely useful.

    • @willi1978
      @willi1978 3 місяці тому +4

      there should be sane defaults. many setups will run with defaults.

    • @willl0014
      @willl0014 3 місяці тому +4

      Agreed the defaults should be representative of the average

    • @sudifgish8853
      @sudifgish8853 3 місяці тому +4

      @@_Riux wtf no, in the "real world" people actually configure their servers, or it's just a hobby project where none of this matters.

    • @ooogabooga5111
      @ooogabooga5111 3 місяці тому +2

      @@_Riux People who have defaults have no traffic, if you want to talk about traffic and performance, tuning the server is a must.

  • @ckamiable
    @ckamiable 16 днів тому

    Hi Anton! Great video. Which load generators/tools do you use to load the web servers?

  • @pengkus
    @pengkus 3 місяці тому +1

    really great video! can you do nginx vs tengine next? it claims to have better performance than nginx and I'm very curious about it, love your vids

  • @danielwidyanto5142
    @danielwidyanto5142 3 місяці тому

    I thought you were Indonesian, but it turns out you're not. Still, it's a great video (and I'm still sticking to Apache - PHP MPM because I've never had such huge traffic... except for the DDoS event).

    • @AntonPutra
      @AntonPutra  3 місяці тому

      yeah, i'm not 😊 i heard apache php integration is very good

  • @mrali_18
    @mrali_18 3 місяці тому +8

    Please compare Nginx and HAProxy.

    • @krisavi
      @krisavi 3 місяці тому

      That would need various reverse proxy workloads: ones that filter traffic and ones that don't, since HAProxy doesn't do the web server part.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      ok will do!

  • @kebien6020
    @kebien6020 3 місяці тому +1

    For the reverse proxy tests, can you test with the swiss army knife of reverse proxies: Envoy proxy?
    It supports TLS, mTLS, TCP proxying (with or without TLS), HTTP1, 2 and even HTTP3, multiple options for discovering target IPs, circuit breaking, rate-limiting, on the fly re-configuration, and even Lua scripting in case all of that flexibility isn't enough.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      i did it in the past maybe a year ago or so but will definitely refresh it with new use cases soon

  • @neoko7220
    @neoko7220 3 місяці тому +3

    Please compare PHP on Swoole/Roadrunner/FrankenPHP Server versus Rust, Go, Node.js

  • @andreialionte3269
    @andreialionte3269 3 місяці тому +8

    do REST VS GRPC

  • @vasilekx8
    @vasilekx8 3 місяці тому +2

    Perhaps you need to try the previous version to fix problems with nginx, or build it from source too?

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      i may try something in the future

    • @rh4009
      @rh4009 3 місяці тому

      I agree. Both the 85% CPU behaviour and the much higher backend app CPU usage feel like regressions.

  • @chralexNET
    @chralexNET 3 місяці тому +3

    I would like to see a test with NGINX Stream Proxy Module which acts as just a reverse TCP or UDP proxy, not as a HTTP proxy. I for example, use this for some game servers where I reverse proxy both TCP and UDP packets. I setup NGINX for this because it seemed like the easiest thing to do, but I don't know if it has the best performance.
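
    A minimal sketch of that setup (ports and backend address are placeholders):

        # nginx.conf, top level (outside the http block)
        stream {
            server {
                listen 27015;                 # TCP game traffic
                proxy_pass 10.0.0.5:27015;
            }
            server {
                listen 27015 udp;             # UDP game traffic
                proxy_pass 10.0.0.5:27015;
            }
        }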

    • @krisavi
      @krisavi 3 місяці тому +1

      That could be one of the comparisons with HAProxy that is also TCP proxy capable.

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      Interesting, I'll try to include it in one of the new benchmarks

  • @rwah
    @rwah 3 місяці тому +4

    How do you configure Apache MPM? Fork mode or Event mode?

    • @AntonPutra
      @AntonPutra  3 місяці тому

      i use event mode, here is the original config - github.com/antonputra/tutorials/blob/219/lessons/219/apache/httpd-mpm.conf#L5-L12
      i also got a PR with improvements - github.com/antonputra/tutorials/blob/main/lessons/219/apache/httpd-mpm.conf#L10-L18
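
      For readers who don't follow the links, an mpm_event block looks roughly like this (numbers are placeholders, not the repo's tuned values):

          <IfModule mpm_event_module>
              StartServers              4
              ServerLimit              16
              ThreadsPerChild          64
              MaxRequestWorkers      1024   # must be <= ServerLimit * ThreadsPerChild
              MaxConnectionsPerChild    0   # never recycle worker processes
          </IfModule>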

  • @vish-vm8bn
    @vish-vm8bn 3 місяці тому +9

    Can you please start a series on Docker networking tips, or anything related to DevOps?
    It would be helpful to learn from your experience

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      i'll try to include as many tips as i can in the benchmarks 😊

  • @toniferic-tech8733
    @toniferic-tech8733 3 місяці тому +1

    Did you use RSA or ECDSA certificates? ECDSA should be used most of the time, as it is faster to transmit (fewer bytes in the TLS handshake).
    Also, nowadays, when nginx is used as a reverse proxy, the connection to the backend servers (i.e. downstream) should also be encrypted, not cleartext.
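
    A sketch of how nginx can offer both key types at once, letting clients negotiate the cheaper ECDSA chain (paths are placeholders; multiple certificates require nginx 1.11.0+):

        ssl_certificate     /etc/nginx/tls/server-ecdsa.crt;
        ssl_certificate_key /etc/nginx/tls/server-ecdsa.key;
        ssl_certificate     /etc/nginx/tls/server-rsa.crt;
        ssl_certificate_key /etc/nginx/tls/server-rsa.key;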

    • @AntonPutra
      @AntonPutra  3 місяці тому

      I used RSA in both proxies, and regarding the second point, it's good to have but difficult to maintain, you constantly need to renew the certificates that the application uses.

    • @toniferic-tech8733
      @toniferic-tech8733 3 місяці тому

      I don’t agree. Internal certificates can be automated with internal CA and ACME, or external CA (e.g. Let’s Encrypt) or long-lasting certificates.

  • @dasten123
    @dasten123 3 місяці тому +1

    very interesting

  • @90hijacked
    @90hijacked 3 місяці тому +1

    Took me a while to realize this isn't OSS nginx. I haven't played around with the F5 one; does it come with a built-in metrics module, or what did you use to export those?
    Great content as always!

    • @patryk4815
      @patryk4815 3 місяці тому

      this is OSS nginx

    • @rafaelpirolla
      @rafaelpirolla 3 місяці тому +1

      OSS doesn't come with a metrics module. Latency can only be measured at the client; server cpu/mem/net is not the nginx metrics module's responsibility

    • @patryk4815
      @patryk4815 3 місяці тому

      @@rafaelpirolla I don't know what you're talking about; k8s exposes cpu/mem/net stats for every pod

    • @90hijacked
      @90hijacked 3 місяці тому

      @@rafaelpirolla makes sense that latency was obtained from clients, thank you!!
      worked around this once using otel module + tempo metrics generator, but that was rather convoluted / unsatisfactory approach

    • @AntonPutra
      @AntonPutra  3 місяці тому +3

      yeah, it's open-source nginx. Also, the most accurate way to measure latency is from the client side, not using internal metrics. In this test i collect cpu/memory/network for web servers using node exporter since they are deployed on standalone VMs

  • @MadalinIgnisca
    @MadalinIgnisca 3 місяці тому +1

    I always had stability with Apache, but with Nginx I occasionally had warnings in my alerts as the service was restarting

    • @AntonPutra
      @AntonPutra  3 місяці тому

      It's very common in production to quickly fill up all available disk space with access logs; this is issue number one.

  • @Cyanide0112
    @Cyanide0112 3 місяці тому

    Can you try others? Like Envoy? There are some other "obscure" ones .. I wonder if you can test those

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      i tested envoy in the past but i think it's time to refresh

  • @IK-wp5eq
    @IK-wp5eq 3 місяці тому

    11:35 higher cpu for the apps behind nginx indicates that they have more work to do, because nginx must be sending more data per second to the apps than Apache.

  • @gpasdcompte
    @gpasdcompte 3 місяці тому

    A 4th test with Apache's "AllowOverride None" would be nice; I've heard it improves performance but never tried it :/
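
    The directive in question, for reference (a generic vhost fragment, not the benchmark's config); with AllowOverride None, Apache skips the per-request .htaccess lookups:

        <Directory "/var/www/html">
            AllowOverride None
            Require all granted
        </Directory>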

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      ok i'll take a look!

  • @mkvalor
    @mkvalor 3 місяці тому

    Something isn't quite right here. In all 3 tests, you show the requests per second synchronized until a major failure happens. The time log at the bottom seems to indicate these requests per second metrics are being gathered over the same time period.
    Yet how can this be possible when one web server has a significantly higher latency, measured at the client, than the other? Once the latency difference hits 1ms, that means we should notice at least 1,000 fewer requests per second for each second that passes after that moment -- accumulating as time goes by. And, of course, this difference should accumulate even more quickly the higher the latency goes.
    It looks to me like you (accidentally?) decided to normalize the graphs of each contest so the RPS would match until one of the servers failed.
    Or if not, what am I missing here?

  • @SAsquirtle
    @SAsquirtle 3 місяці тому +5

    I feel like the intro parts are kinda spoilery even if you're blurring out the graph legends

  • @kokamkarsahil
    @kokamkarsahil 3 місяці тому +9

    Is it possible to benchmark pingora as well? It will be easy to use after River becomes available, so I'll wait for it in the future!
    Thanks a lot for the benchmark!

    • @AntonPutra
      @AntonPutra  3 місяці тому

      yes just added pingora in my list

  • @qatanah
    @qatanah 3 місяці тому +1

    hi, what tools are you using for monitoring and benchmark graphs?

  • @simonecominato5561
    @simonecominato5561 3 місяці тому

    In the last test, are the Rust applications running on the same instance as the server? It seems like the Rust application in the Nginx case is stealing processor time from the server.

    • @Krokos11
      @Krokos11 3 місяці тому +3

      At 1:26 he explained where everything is hosted. The applications have separate machines

    • @simonecominato5561
      @simonecominato5561 3 місяці тому

      @@Krokos11 I missed it, thanks.

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      yeah, in this test web servers are deployed on dedicated vms

  • @konga8165
    @konga8165 3 місяці тому +1

    Caddy, traefik, and envoy proxy!

  • @KTLO-m8p
    @KTLO-m8p 3 місяці тому

    How are you exporting the results into the graphing software? Can you explain what software that is, so I can recreate this setup?

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      sure, I use Prometheus to scrape all the metrics and Grafana for the UI. it's all open source, and I have a bunch of tutorials on my channel
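
      A minimal Prometheus scrape job for that kind of setup (targets are placeholders, assuming node_exporter on the web-server VMs):

          scrape_configs:
            - job_name: 'webservers'
              static_configs:
                - targets: ['10.0.1.10:9100', '10.0.1.11:9100']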

    • @KTLO-m8p
      @KTLO-m8p 3 місяці тому

      @@AntonPutra thanks!

  • @kameikojirou
    @kameikojirou 3 місяці тому +2

    How does Caddy compare to these two?

    • @AntonPutra
      @AntonPutra  3 місяці тому

      i'll add it as well soon

  • @TadeasF
    @TadeasF 3 місяці тому +1

    I'd be very interested in nginx vs caddy

  • @simonlindgren9747
    @simonlindgren9747 3 місяці тому +1

    Please test some more experimental servers too, like maybe rpxy/sozu compared to envoy.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      ok i'll take a look at them

  • @fateslayer47
    @fateslayer47 3 місяці тому +1

    I'm looking at benchmarks and feeling good about choosing nginx even though my website gets 1 user per month.

  • @sPanKyZzZ1
    @sPanKyZzZ1 3 місяці тому +1

    One future test idea: job schedulers

  • @idzyubin720
    @idzyubin720 3 місяці тому +1

    Compare go-grpc and rust-tonic please
    Tonic contributors have fixed many issues and increased performance

  • @muhammadalfian9057
    @muhammadalfian9057 Місяць тому

    Next openlitespeed vs nginx vs apache please

  • @Chat_De_Ratatoing
    @Chat_De_Ratatoing 3 місяці тому +1

    those benchmarks are so much more useful and truthful than the "official" benchmarks from the devs

  • @markg5891
    @markg5891 3 місяці тому +1

    I've noticed this weird behavior of nginx as a reverse proxy to a backend server too. Even if that backend server itself is just serving static data, the mere act of being a reverse proxy seems to cause a rather big performance hit for nginx. Weird.
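
    One commonly cited mitigation, offered here only as an assumption about where the overhead comes from (not something verified in the video), is reusing upstream connections instead of opening one per proxied request:

        upstream backend {
            server 10.0.0.5:8080;   # placeholder backend
            keepalive 64;           # pool of idle upstream connections
        }
        server {
            location / {
                proxy_pass http://backend;
                proxy_http_version 1.1;
                proxy_set_header Connection "";   # allow keep-alive to the upstream
            }
        }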

    • @AntonPutra
      @AntonPutra  3 місяці тому

      thanks for the feedback

  • @kariuki6644
    @kariuki6644 3 місяці тому +1

    I’m curious how Java spring webflux compares to spring boot

  • @MadalinIgnisca
    @MadalinIgnisca 3 місяці тому +1

    Why would you activate compression instead of serving pre-compressed files?

    • @AntonPutra
      @AntonPutra  3 місяці тому

      I didn't get the question. You use compression to improve latency and overall performance. With a payload that is four times smaller, it takes less time to transmit over the network.
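
      What the question likely refers to is static pre-compression, roughly like this (a sketch; gzip_static is part of ngx_http_gzip_static_module, included in most distribution builds, while brotli_static needs the third-party ngx_brotli module):

          gzip_static   on;   # serve foo.js.gz from disk if it exists
          brotli_static on;   # serve foo.js.br from disk if it exists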

  • @HowToLinux
    @HowToLinux 3 місяці тому +8

    Please do Nginx vs HaProxy

  • @VijayGanesh-s5q
    @VijayGanesh-s5q 3 місяці тому +5

    Will you make a comparison between the best frameworks of zig (zzz), rust (axum), and go (fiber)? I have been waiting for this for a long time.

  • @ksomov
    @ksomov 3 місяці тому +1

    please compare the performance of nginx and haproxy

  • @pable2
    @pable2 3 місяці тому +2

    Like the others said, a comparison with Caddy would be amazing

  • @jimhrelb2135
    @jimhrelb2135 3 місяці тому

    I feel like network usage in itself is related to request/s, in that if one webserver is able to satisfy more requests per time, it's prone to having more network usage within that same timeframe.
    Why not network usage per request?

    • @AntonPutra
      @AntonPutra  3 місяці тому

      it's common to use rps, requests per second metric to monitor http applications

  • @ziv132
    @ziv132 3 місяці тому +1

    Can you add Caddy

  • @amig0842
    @amig0842 3 місяці тому +1

    Please compare River reverse proxy with Nginx

  • @koko9089nnn
    @koko9089nnn 3 місяці тому

    Can you do `envoy` please? it is widely used by Google GCP

  • @hatersbudiman7058
    @hatersbudiman7058 3 місяці тому +1

    Next Caddy and open litespeed

  • @bhsecurity
    @bhsecurity 3 місяці тому +1

    I always wanted to see this.

  • @nexovec
    @nexovec 3 місяці тому +1

    nginx vs Caddy please!

  • @Kanibalvv
    @Kanibalvv 3 місяці тому

    you need to check kernel params... the tcp_mem default is always too low; that could explain the nginx problem.
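
    The sort of sysctl overrides being referred to (values are illustrative examples only, not recommendations):

        # /etc/sysctl.d/99-tuning.conf
        net.ipv4.tcp_mem = 786432 1048576 1572864   # pages: min, pressure, max
        net.core.somaxconn = 65535                  # listen backlog ceiling
        net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports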

  • @malcomgreen4747
    @malcomgreen4747 3 місяці тому

    Test starts at 5:21

    • @AntonPutra
      @AntonPutra  3 місяці тому +1

      i have timestamps in each video

    • @malcomgreen4747
      @malcomgreen4747 3 місяці тому

      ​@@AntonPutra nice thank you

  • @MrDocomox
    @MrDocomox 3 місяці тому +1

    check istio gateway vs nginx.

  • @MrAmG17
    @MrAmG17 3 місяці тому

    Cowboy , Erlang and other high performers for future videos

    • @AntonPutra
      @AntonPutra  3 місяці тому

      will do soon, but first ruby on rails 😊

  • @erickvillatoro5683
    @erickvillatoro5683 3 місяці тому

    Please do Traefik vs nginx ingress controller!!!

  • @GuedelhaGaming
    @GuedelhaGaming 3 місяці тому +2

    Nginx vs YARP

  • @AntonPutra
    @AntonPutra  2 місяці тому

    🍿 Benchmarks: ua-cam.com/play/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn.html&si=p-UOaVM_6_SFx52H

  • @DominickPeluso
    @DominickPeluso 3 місяці тому +1

    Redbean and caddy please

  • @ghostvar
    @ghostvar 3 місяці тому +1

    We usually use these two: nginx for SSL and reverse proxy, and apache as the php handler :/

    • @AntonPutra
      @AntonPutra  3 місяці тому

      yeah apache has nice php integration

  • @nomadvagabond1263
    @nomadvagabond1263 3 місяці тому +1

    You blur the text, but the colors give it away 🥲 Choose colors that aren't related to the technology.

  • @VirendraBG
    @VirendraBG 3 місяці тому

    Try this test with Dynamic HTML Content fetched from SQL Databases.

  • @damianszczukowski1912
    @damianszczukowski1912 3 місяці тому +1

    compare apache/nginx to traefik and caddy

  • @MrCustomabstract
    @MrCustomabstract 3 місяці тому +1

    FastAPI would be cool

  • @MelroyvandenBerg
    @MelroyvandenBerg 3 місяці тому +2

    P.S. The latest Nginx is actually version 1.27.2, right? Maybe it's the "latest" version on your system, but it's not the latest version.

    • @AntonPutra
      @AntonPutra  3 місяці тому

      i used latest "stable" version not from the mainline