How To Test Software Performance

  • Published 4 Jan 2025

COMMENTS • 44

  • @fmkoba
    @fmkoba 3 years ago +6

    Dave, you are a friggin wizard, I had literally just created a jira ticket to start writing performance tests using k6 when I got this video’s notification on my phone
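
    For anyone in the same position: a minimal sketch of the kind of k6 script the commenter is talking about, assuming a hypothetical health endpoint and an arbitrary 500 ms p95 budget (run with the k6 CLI; TypeScript is shown for readability, k6 scripts are normally plain JS):

    ```typescript
    // Minimal k6 smoke test. The URL, user count and threshold are placeholders.
    import http from 'k6/http';
    import { check, sleep } from 'k6';

    export const options = {
      vus: 1,              // a single virtual user is enough for a smoke test
      duration: '30s',
      thresholds: {
        http_req_duration: ['p(95)<500'], // fail the run if p95 latency is 500 ms or more
      },
    };

    export default function () {
      const res = http.get('https://example.com/api/health'); // hypothetical endpoint
      check(res, { 'status is 200': (r) => r.status === 200 });
      sleep(1);
    }
    ```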

  • @gpzim981
    @gpzim981 3 years ago +13

    Excellent video, Dave.
    I would like to suggest one subject for a video:
    The practical difference between:
    Unit tests
    Integration tests
    Component tests
    Contract based tests
    System tests
    Acceptance tests
    E2E tests
    And any other type of test you think would be relevant
    The lack of standardization in these names makes software testing very frustrating to understand, since every source you look at describes each test type slightly or completely differently.
    Would be great to see your opinion on when and how to apply each of them in the context of Continuous Delivery, and which ones should be prioritized (test pyramid?)

    • @OggerFN
      @OggerFN 3 years ago +1

      I don't think a pyramid would be suitable, as a lot of these mean the same thing or overlap with each other.
      What I'd rather see is a video where Dave shows what kinds of tests to implement, and how.
      Many people write tests.
      The hard part is writing good and appropriate tests.
      For example, there should not be concurrency load tests in the unit tests that run on every commit (if you ask me).

    • @TARJohnson1979
      @TARJohnson1979 3 years ago +2

      There's a good reason why there's a lack of standardization around these namings: it's because the factors that are important (and worth optimising for) vary a lot from context to context.
      Exactly what's useful in your context requires an understanding thereof.
      For example, it's almost always worth distinguishing fast, isolated tests which exercise code, from slower tests on a deployed system via an API, where it's worth trading off the possibility of interference on state for faster batch runtime.
      For many systems, there are also other intermediate scales where it makes sense to build tests. This will definitely require some insight into your system to say where those lines would lie.
      Don't get hung up on names. Instead, write tests at the level at which interesting behaviour emerges, and optimise over time for the tests which give you useful feedback fastest.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +6

      It's a good suggestion, I will think about it, thank you.

    • @Kitsune_Dev
      @Kitsune_Dev 3 years ago

      That’s a lot of tests

  • @fmkoba
    @fmkoba 3 years ago +3

    mind blown by the idea of “testing the performance” of the performance tests

  • @k3agan
    @k3agan 3 years ago +2

    This is so perfectly timed. It's like he read my mind

  • @benfellows2355
    @benfellows2355 3 years ago

    Great little introduction to performance testing: concise, direct and informative, with experience reports included for good measure (pun intended). Anecdotes are underrated in videos like these. Thank you! 👍🏻

  • @muray82
    @muray82 5 months ago

    Performance engineer here: the video is missing a crucial element that needs analysis. Throughput and latency are mentioned, but the third vital element, the error rate, is missing. You can have high throughput / low latency simply because a system/component is opening a circuit breaker and throwing 5xx/4xx, or the response is even HTTP 200 but lacks the crucial info because safe defaults are returned. It might sound intuitive, but I have seen load test results that didn't check what the response codes were or whether they contained the expected data; the load test was "green" because the automated check for latency was OK. What if 20% of the calls failed but a retry would fix it - is that acceptable? What error rate should we allow, 1-5%, or maybe it should be zero if we are running a load test on a component? This is a vital part of performance testing that didn't get enough coverage in this video.
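
    To make the error-rate point concrete, a hedged sketch in k6 (the endpoint, the expected payload field and the limits are all made up): the run fails on error rate and on failed content checks, not just on latency.

    ```typescript
    // Verify status codes and payload content, and fail the run on error rate,
    // not just on latency. Endpoint, expected field and limits are hypothetical.
    import http from 'k6/http';
    import { check } from 'k6';

    export const options = {
      vus: 50,
      duration: '5m',
      thresholds: {
        http_req_duration: ['p(95)<500'], // latency budget
        http_req_failed: ['rate<0.01'],   // built-in error-rate metric: < 1% failed requests
        checks: ['rate>0.99'],            // functional checks must also pass
      },
    };

    export default function () {
      const res = http.get('https://example.com/api/orders'); // hypothetical endpoint
      check(res, {
        'status is 200': (r) => r.status === 200,
        // guard against "HTTP 200 but safe defaults": assert the payload carries real data
        'body has orders': (r) => {
          const body = r.json() as { orders?: unknown[] };
          return Array.isArray(body.orders) && body.orders.length > 0;
        },
      });
    }
    ```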

  • @jonathanaspeling9535
    @jonathanaspeling9535 3 years ago +1

    Brilliant thank you! Great insights

  • @snygg-johan9958
    @snygg-johan9958 3 years ago +1

    What kind of metrics would you recommend a team measure to be able to hand hard facts to stakeholders and show progress in moving towards CI/CD?

    • @queenstownswords
      @queenstownswords 3 years ago +1

      1. Define an accurate, repeatable test model (e.g. 1, 5 or 10 users sending requests synchronously or asynchronously, depending on the requirement).
      2. Establish a system-level point at which to test the performance (an API endpoint, for example).
      3. Run the test at least 3 times, and up to 10 times, to establish that there is little variability between the test passes.
      4. Measure the throughput and latency (JMeter has an aggregate report that does well in this regard).
      5. Establish an environment that can be used for all future performance test runs (use a cloud provider like Azure for this).
      6. Establish clear 'triggers' to know that a new performance test run is needed (e.g. the tested endpoint's code has changed).
      7. Document everything so you can report on it to all stakeholders.
      If you have proven that there is little variability between the test runs, have valid metrics (the aggregate report) and an established baseline from which to measure future runs, you have the foundation to hand off to stakeholders and a way to measure performance changes in a pipeline.
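
      One hedged way to encode steps 1, 4 and 6 above so they run in a pipeline - shown as a k6 config rather than JMeter, since k6 comes up elsewhere in the thread, and with purely illustrative numbers: a fixed workload model plus pass/fail thresholds derived from an established baseline.

      ```typescript
      // Repeatable workload model and baseline thresholds; all numbers illustrative.
      import http from 'k6/http';

      export const options = {
        scenarios: {
          baseline: {
            executor: 'constant-arrival-rate', // fixed request rate = repeatable workload model
            rate: 10,                          // 10 requests per second
            timeUnit: '1s',
            duration: '10m',
            preAllocatedVUs: 20,
          },
        },
        thresholds: {
          http_req_duration: ['p(95)<300', 'p(99)<800'], // latency baseline
          http_req_failed: ['rate<0.01'],                // error-rate baseline
        },
      };

      export default function () {
        http.get('https://example.com/api/search?q=baseline'); // hypothetical system-level endpoint
      }
      ```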

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      If you mean metrics for CD rather than the perf of the SW, then Stability & Throughput are what I'd recommend. I talk about those here: ua-cam.com/video/COEpO1vEBHc/v-deo.html
      This is one of my older videos, so the sound is a bit ropey but I think that the ideas are good.

    • @snygg-johan9958
      @snygg-johan9958 3 years ago

      @@ContinuousDelivery Thank you!
      That was exactly what I was after.
      I really appreciate your videos btw.

    • @snygg-johan9958
      @snygg-johan9958 3 years ago

      @@queenstownswords Thanks for the extensive answer!

  • @RudhinMenon
    @RudhinMenon 3 years ago +1

    a humble subscriber here

  • @skipodap1
    @skipodap1 3 years ago

    Another tremendously helpful video. Our team is starting performance testing. And similar to another commenter, there are certain aspects of our prod environment that we can't control (at least any time soon) in a test environment -- so in other words, we lose some of the ability to control inputs (hurting our ability to be scientific). But it seems like component testing may be a good place for us to invest, as we should still be able to control inputs there.
    I need to take that anatomy of a deployment pipeline course. Read the book, it was great.
    This video gave a lot of good guidance.
    Thanks again.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      Thanks, I am pleased that you found it helpful. In the situation that you describe, I'd agree that component perf testing will help. I'd also suggest, if you aren't already, that you add good monitoring to your production system so that you can see real-world performance too. It's a "lagging indicator" but it will tell you the truth of your system.
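
      For the production-monitoring suggestion, a rough sketch of one common approach (not from the video): record request latency with the Node prom-client library and expose it for Prometheus to scrape. The route, buckets and port are arbitrary.

      ```typescript
      import express from 'express';
      import client from 'prom-client';

      const app = express();
      const httpLatency = new client.Histogram({
        name: 'http_request_duration_seconds',
        help: 'Latency of HTTP requests as observed in production',
        labelNames: ['route', 'status'],
        buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5], // seconds
      });

      app.get('/api/orders', (req, res) => {
        const end = httpLatency.startTimer({ route: '/api/orders' });
        // ... real handler work here ...
        res.json({ orders: [] });
        end({ status: String(res.statusCode) }); // records the elapsed time with labels
      });

      // Expose the metrics for a Prometheus scrape
      app.get('/metrics', async (_req, res) => {
        res.set('Content-Type', client.register.contentType);
        res.end(await client.register.metrics());
      });

      app.listen(3000);
      ```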

  • @astroblurf2513
    @astroblurf2513 3 years ago +3

    Love the shirt!

    • @OggerFN
      @OggerFN 3 years ago +1

      Yeah. I think there could be a nice pun with 'You shall not pass' regarding acceptance tests

  • @danielevans1680
    @danielevans1680 3 years ago

    You note that any performance test should be run on a controlled environment - a sensible thought! However, what would your approach be if there were only a single instance of the critical piece of infrastructure, the one in production? Bite the bullet and test on it anyway (despite known variations with load), or present results from other, differently performing but better controlled infrastructure, with a caveat?
    Of course, the ideal answer is "procure a duplicate system", but I suspect this piece of critical infrastructure was a significant proportion of the IT budget, with little chance of getting another one.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +1

      Under those circumstances, and something I should probably also have mentioned in the video, I would ensure that I have good production monitoring of performance. This is important anyway to validate your tests, but in the case of not being able to reproduce a copy of your prod env for performance testing it is essential. You can do component perf testing in non-prod environments, but your throughput and latency numbers will be very fake. I think that whole system perf testing in these circumstances is probably not worth the effort.

    • @danielevans1680
      @danielevans1680 3 years ago

      Monitoring (or the lack of!) has certainly turned out to be a major thorn in the side of this project, so that makes sense - there certainly have been cases where issues have only been found through "doing XYZ is always so slow!".
      (The case in question also suffered from an odd Catch-22, whereby the performance in production was considered critical, but what a "production use case" actually was proved very difficult to determine for years - "if only someone had the bright idea to talk to clients", he says in retrospect)

  • @RoelBaardman
    @RoelBaardman 3 years ago

    What I don't hear you say is that taking small deliberate steps towards production is important.
    I'd say we can assume that our test setup mirrors production well, but in order to verify (especially when you're getting started), it makes sense to me to use monitoring of production systems to see if this assumption is valid.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +3

      Sure, if you watch any of my stuff my recommendation for everything is to do it in small steps.

  • @swarajray1995
    @swarajray1995 3 years ago +3

    Performance engineer here.
    1:51 - that's not latency.
    Latency is the time it takes for a request/response to be transmitted to/from the processing component.
    latency + processing time = response time
    So the time between the initiation of an action and getting the result is the response time, not the latency.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +2

      Not really, latency is the delay between cause and effect (en.wikipedia.org/wiki/Latency_(engineering)), so comms time + processing time is the latency.
      "comms time to + processing time + comms time from" is the response time.

    • @swarajray1995
      @swarajray1995 3 years ago +1

      @@ContinuousDelivery Since you are googling it, search for "response time vs latency" yourself; the wiki page above also does not say that latency includes processing time.

    • @swarajray1995
      @swarajray1995 3 years ago +1

      And no authoritative article will say that latency includes processing time, because it doesn't.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +3

      Well, if that were the case, the measure of latency would be meaningless. The comms latency that you are describing includes the processing time of assembling packets, communicating them from the OS to the network card and from the network HW to the wire, the time it takes the signal to travel down the wire, and the time it takes to receive the signal and get it off the wire and up through the stack. That is all processing time, except for the time it takes the signal to transit the wire! Latency is the distance between the initiation of an event and its effect, so latency only makes sense in the context of the effect that you want. You seem to be talking about network latency, or at least comms latency, as the only thing, but it is more than that.

    • @swarajray1995
      @swarajray1995 3 years ago +1

      @@ContinuousDelivery your assumption is as wrong as the spelling of throughput there

  • @michaeljuliano8839
    @michaeljuliano8839 1 year ago

    So I'm faced with having to create performance tests for a third-party system. It's not clear to me how to modify what you present here to cope with the fact that the system is outside of our control.

  • @donaldhobson8873
    @donaldhobson8873 3 years ago +1

    If you want to compare 2 versions of your software, sure, keep everything else the same.
    If you want to know how fast it is for the end user, well, the end user has a bunch of different machines, OSs, browsers, etc.
    You can time it again and again on the same setup and get exactly the same results. And then users with older browsers complain that it's too slow.

  • @emonymph6911
    @emonymph6911 3 years ago

    12:00 is sooooo confusing can we get a code example please? So we write a test to test if the test is fast enough for our test that we add code to for our speed test in our test wtf
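
    One hedged reading of the idea at 12:00 (a sketch, not the code from the video): you performance-test the test itself - i.e. before trusting a load test's numbers, verify that the harness can generate load fast enough that the bottleneck is the system under test rather than the test client. Written as a Jest-style check, with an entirely made-up target rate and a hypothetical helper:

    ```typescript
    // Sketch only: calibrate the load-test harness itself, with made-up numbers.
    import { performance } from 'node:perf_hooks';

    // Stand-in for the harness's request-generation path, pointed at a no-op target
    // so that only the harness's own overhead is measured (hypothetical helper).
    async function fireNoOpRequest(): Promise<void> {
      // ... build and discard a request exactly as the real perf test would ...
    }

    test('harness itself can generate at least 1,000 requests per second', async () => {
      const durationMs = 2_000;
      let sent = 0;
      const start = performance.now();
      while (performance.now() - start < durationMs) {
        await fireNoOpRequest();
        sent += 1;
      }
      const achievedRatePerSecond = sent / (durationMs / 1000);

      // If this fails, the load-test results measure the test client, not the server.
      expect(achievedRatePerSecond).toBeGreaterThan(1000);
    });
    ```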

  • @ssssssstssssssss
    @ssssssstssssssss 3 years ago

    Shouldn't you use accuracy in performance testing for systems that produce approximate rather than exact results? Because in such systems, you need to balance the accuracy and latency.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago

      Context matters a lot. Latency is king in financial trading, for example; they want predictable latency, even if it means being a little slower (slow here is a relative term 😉). The reason that I recommend "Pass/Fail" tests is about accuracy: it focusses you on trying to achieve a repeatable measure of performance at some given tolerance (margin of error), based on how you define the performance thresholds.

  • @caraziegel7652
    @caraziegel7652 3 years ago +1

    ahh, love that shirt

  • @revietech5052
    @revietech5052 3 years ago +1

    1:37 You spelt Throughput wrong