Measuring Software Delivery With DORA Metrics

  • Published 21 Jun 2022
  • If we want to do a better job of software development, we need some way to define what “better” means. The DORA metrics give us that. So what are the DORA metrics and how should we use them? They provide measures that evaluate the quality of our work and the efficiency with which we do work of that quality. So good scores on these metrics mean that we build better software faster.
    In this episode, Dave Farley, author of “Continuous Delivery” and “Modern Software Engineering”, describes how we can apply these measurements to drive software development to deliver on this state-of-the-art approach, but also explores a few of the common mistakes that can trip us up along the way. DORA stands for DevOps Research & Assessment, and is now a group at Google focused on measuring software development performance using scientifically justifiable research and analysis techniques.
    _____________________________________________________
    📚 BOOKS:
    📖 "Continuous Delivery Pipelines" by Dave Farley
    paperback ➡️ amzn.to/3gIULlA
    ebook version ➡️ leanpub.com/cd-pipelines
    📖 Dave’s NEW BOOK "Modern Software Engineering" is available here
    ➡️ amzn.to/3DwdwT3
    📖 The original, award-winning "Continuous Delivery" book by Dave Farley and Jez Humble ➡️ amzn.to/2WxRYmx
    📖 "Accelerate: The Science of Lean Software and DevOps" by Nicole Forsgren, Jez Humble & Gene Kim ➡️ amzn.to/2YYf5Z8
    📖 "The DevOps Handbook" by Gene Kim, Jez Humble, Patrick Debois & John Willis ➡️ amzn.to/2LsoPmr
    📖 "Measuring Continuous Delivery" by Steve Smith ➡️ leanpub.com/measuringcontinuo...
    NOTE: If you click on one of the Amazon Affiliate links and buy the book, Continuous Delivery Ltd. will get a small fee for the recommendation with NO increase in cost to you.
    _____________________________________________________
    🔗 LINKS
    “How to Misuse DORA DevOps Metrics” ➡️ • Lunch & Learn How to M...
    Nicole Forsgren ➡️ nicolefv.com
    DORA “Quick Check” ➡️ www.devops-research.com/quick...
    “DevOps Enterprise Guidebook” ➡️ cloud.google.com/blog/product...
    “You can’t measure SW productivity” - Martin Fowler ➡️ martinfowler.com/bliki/Cannot...
    -------------------------------------------------------------------------------------
    Also from Dave:
    🎓 CD TRAINING COURSES
    If you want to learn Continuous Delivery and DevOps skills, check out Dave Farley's courses
    ➡️ bit.ly/DFTraining
    📧 Get a FREE "TDD Top Tips" guide by Dave Farley when you join our CD MAIL LIST 📧
    It's the best way to keep in touch with the latest discussions, events and new training courses, and to get FREE guides and exclusive offers. ➡️ www.subscribepage.com/tdd-top...
    -------------------------------------------------------------------------------------
    CHANNEL SPONSORS:
    Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ bit.ly/3ASy8n0
    Octopus are the makers of Octopus Deploy, the single place for your team to manage releases, automate deployments, and automate the runbooks that keep your software operating. ➡️ octopus.com/
    SpecFlow is Behavior Driven Development for .NET. SpecFlow helps teams bind automation to feature files and share the resulting examples as Living Documentation across the team and stakeholders. ➡️ go.specflow.org/dave_farley
  • Science & Technology

COMMENTS • 44

  • @softwarearchitecturematter4482 • 1 year ago • +13

    Really liked the statements in the video.
    "There is no silver bullet. Software Engineering to too complicated to do well.
    DORA metrics are trailing indicators , They tell you how you did , not how are you going to do.
    Approaches like Continuous Integration, TDD and Continuous Deployment predict good scores on these trailing indicators."
    Vikas

  • @robertgrant721 • 1 year ago • +10

    For me, SAD AF is how I feel after a lifetime of software development without these approaches. Well, better late than never. This video series is an absolute goldmine. Thank you for making them.

  • @orange-vlcybpd2 • 1 year ago • +1

    Short and on point as always, thanks!

  • @thedazman67 • 1 year ago

    Brilliant content as always. The scenario you describe is exactly the situation my current organisation is in. It very neatly describes my reasons for leaving the organisation.

  • @reinerjung1613 • 1 year ago

    Ah yes. And always a very good video. And thanks for all the source references.

  • @JordiMartinezSubias • 1 year ago

    Great content as always. I have been using the DORA metrics for some years as key indicators of teams' growth in terms of the efficiency and quality of their delivery, and I 100% agree with your comments and the overall point of the video. I believe my experience confirms your statements.
    I would have liked to hear more of your opinion on the correlation between the DORA metrics and delivery performance, though. Why does this correlation exist?

  • @roelesch • 1 year ago • +3

    I feel like this is very true, but within the context of software development.
    In a larger context, we're trying to effect change of some sort. Building software is a means to that end, but it might very well be that writing software - however well done - is not the most effective way to bring about the change we seek. Therefore I feel it is important to attempt to measure the impact of our software on the situation, such that we can be sure that our development is bringing about the desired change. This is probably very hard and very context dependent.

  • @EldonElledge • 7 months ago • +1

    I have the audio version of your Modern Software Engineering book. It's a great book, and it easily became one of my most recommended books for developers.

  • @simonlee8562 • 1 year ago

    Great watch as always! I watched a presentation the other day re Flow metrics... is it a case of either/or with Flow/DORA, or could they be used in conjunction? I seem to remember some leading indicators as part of the Flow demo...

  • @jangohemmes352 • 1 year ago

    Just when I was looking for something to listen to in the car!

  • @reinerjung1613 • 1 year ago • +6

    Every metric which becomes a benchmark for performance (and productivity is also a sort of performance) loses its ability to be a good metric. This is one thing I learned from the social sciences. The key argument here is: if you rank people by a metric - i.e. benchmark them - then they try to optimize their behavior towards the metric, but the metric is usually an indirect measure of the thing you are really interested in. So the optimization by the people may not be an improvement of that thing, but only of your metric. This is also an issue in university rankings, school grades, and people competing to have the fastest graphics card (people optimize their code so that the metric gets better values, not so that it is more performant overall).

    • @bobthemagicmoose • 1 year ago • +1

      One trick is to keep the metric secret or ever-changing, but this comes at the cost of a loss of transparency. This is perhaps why apparently arbitrary bosses who will randomly berate poor performers can actually be rather motivating towards the correct goals (Jobs and Gates famously come to mind, but I hear Ellen D. is in the same boat as well). While perhaps effective at keeping people focused on the correct goals, that management style has serious and obvious drawbacks, though.

    • @AgustinAmenabarL • 1 year ago

      I keep hearing this quote about metrics losing their value when turned into KPIs. I often use it myself when encountering "lazy" KPIs.
      The great positive thing about the DORA metrics is: even if you game them, you become a far better performer.
      I challenged my teams to game them, and to actually game the numbers you need to start doing a lot of positive things, like test and deployment automation, configuration as code, working on observability, and keeping an eye on stability and recoverability.

  • @AgustinAmenabarL • 1 year ago • +2

    I have 2 follow-up questions. I have been a big fan of DORA metrics for years! When I adopted them, it allowed us to move from mid-level to elite on many metrics (others just high), and they still give us actionable information on how to work better. Before, we were expecting velocity to tell us how well we were doing 😑
    What is the relationship between the DORA State of DevOps report and the Puppet or BMC State of DevOps reports?
    What are good leading metrics for team performance? Or at least decent ones?

    • @mcwtfd7555 • 1 year ago • +1

      I'd look at what Reiner Jung said in the comments earlier. I'd avoid metrics for team performance. It seems like your team is heading in the right direction from noting the feedback from DORA and using it in a constructive way. Keep doing that and your work will continue to improve.

  • @bryanfinster7978 • 1 year ago • +5

    It's good to know you're accredited.

  • @bobthemagicmoose • 1 year ago • +4

    Unfortunately for any metric, there is usually a lag between undesirable behavior and the negative consequences. This allows many poor performers to be long gone before the damage is done. For code that might mean poorly written, but up to spec, code which isn't discovered until someone wants to make a simple change and the whole thing comes crashing down. Same principle applies to construction crews and heads of state.

  • @michaelrstover • 1 year ago • +2

    I've seen people object that these correlational studies aren't controlling for the relative difficulty of the products that different teams are working on. So it might be the case that "elite" teams - those who deploy frequently with fewer regressions - are, on average, working on very simple things, whereas "low performing" teams are, on average, working on incredibly complex problems. Is there any effort to control for this sort of thing in the statistics?

    • @bobthemagicmoose • 1 year ago • +1

      One issue I could see is that "elite" teams are focused on a single known technology, while the other teams have to bounce around and always need to onboard onto new tools and systems.

    • @ContinuousDelivery • 1 year ago

      I am pretty sure that there is some statistical control for the type of project, but I can't remember the details. The method is described in some detail in the "Accelerate" book, and the DORA group, now part of Google, is evolving and maintaining the analysis on an ongoing basis.
      Practical demonstrations that your assumption is incorrect are rife. Tesla and SpaceX, amongst many others, apply these metrics and techniques.

    • @ContinuousDelivery • 1 year ago • +2

      @@bobthemagicmoose I am pretty sure that this isn't the case; I seem to remember something about tool diversity being bigger in elite groups (I may be mis-remembering that, though). Even if that is true, it would suggest that "bouncing around, onboarding new tools" reduces efficiency and quality, which I certainly believe is often true.

  • @KarolGallardo53 • 1 year ago

    How can we use these metrics when a project is brand new and, during all the set-up, we cannot release to the public because the basic functionality takes several sprints? Even if we split the basic functionality, it is not shippable within one single sprint.

    • @ContinuousDelivery • 1 year ago • +2

      My advice is to start sprint 1 with the goal of making whatever you build releasable, even though you won't release it, because it won't make sense to do so yet. But it should be built to production quality, tested to production quality, and be deployable. This approach forces you to create a functioning deployment pipeline, and to start off writing tests and running them as part of your pipeline. This is more work on the first story or two, but it is time well spent: this is the easiest time to create the simple starting version of your pipeline, it forces you to address important concerns while everything is the simplest it will ever be, and it will make the development of every story that comes later that little bit easier. This is a GREAT way to start a project!
      I did a video about this a while ago: ua-cam.com/video/eozFlgu6ByY/v-deo.html
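As a rough illustration of what "releasable from sprint 1" can mean in practice, here is a minimal pipeline-runner sketch in Python. The stage commands and the deploy.sh script are hypothetical placeholders, not any real project's build scripts; the point is only that build, test, and deploy automation exist from the very first story.

```python
import subprocess
import sys

# A minimal, hypothetical sprint-1 pipeline: each stage is a shell command.
# Real teams would run this on a CI server; every command below is a
# placeholder standing in for a project's actual build scripts.
STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "tests"]),
    # Deploying to a staging environment proves the change is deployable,
    # even though nothing is released to the public yet.
    ("deploy-to-staging", ["./deploy.sh", "staging"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            # Fail fast: a red stage means this change is not releasable.
            sys.exit(f"Stage '{name}' failed; the change is not releasable.")
    print("All stages passed: this change is releasable.")

if __name__ == "__main__":
    run_pipeline()
```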

    • @KarolGallardo53 • 1 year ago

      @@ContinuousDelivery Awesome, thank you very much for your input!

  • @nevokrien95 • 7 months ago

    At times you do want to hack something together that's really not maintainable.
    For instance, in papers on ML it's quite common for the code to be thrown out after the paper is written.
    That can make for ridiculously awful code, since you are trying to deliver as quickly as possible with zero need to maintain it later.

  • @carstenrasmussen1159 • 1 year ago

    Yes. You should be careful what you measure as success. If your goal function for an AGI is to reduce CO2, maybe you end up getting a T-1000.

  • @orange-vlcybpd2 • 1 year ago • +25

    SAD AF is your emotion when you have to debug a badly designed system.

  • @user-cu4bk2gm6q • 1 year ago • +1

    Unfortunately, the DORA report does not tell the true story, or there is no proper guide for how it should be measured. My company recently adopted this DORA report and every project shares their DORA report... Unfortunately, it is evaluated by the team, or by the 'DevOps' engineer role in the team. Surprisingly, most projects score above 70% or even 80%, which I highly doubt, because many of them don't really write proper test coverage, they claim to be using TBD, they work in silos (they have separate FE and BE teams), most if not all of them understand DevOps as just something between Devs and Ops, and they specifically have a role for a DevOps engineer.
    Recently some projects shared their 'DevOps' practices, tools and pipelines. After asking a few questions, I found that they have 2 branches, one DEV and another MASTER, while also claiming to use TBD... That means a feature must first merge into DEV to be manually tested before it can go to MASTER. If a hotfix is needed, it can take up to 1 week to deploy to PROD. Furthermore, their feature branches can sometimes last more than 1 sprint. And for feature branches, they practice 1 dev per branch. I can see so many problems and disasters in working on such a project, and yet the team or their 'DevOps' engineer still manages to rate the project above 70%.
    This is because the questions are answered by the team themselves, and the score depends on how they interpret them.
    They might commit daily to a feature branch and, on that basis, state that they are doing Continuous Integration (even though they don't really practice it, because in my experience people working in feature branches tend NOT to work on small changes and usually do not commit frequently; furthermore, when they need to resolve conflicts, they delay their commits even further).
    Unfortunately, management only looks at the report and doesn't truly look into the details or their meaning, and they were actually proud to push this to every project. We also have other kinds of reports and evaluations, and they definitely don't reflect the actual state. The problem is that the teams evaluate themselves and give the score, instead of someone well versed in the topic performing the evaluation; furthermore, when they set up the group that brought in this topic, they didn't evaluate who was right for it, and people just volunteered, mainly due to extrinsic motivation, including some who rely only on numbers.

  • @mahdikarimi6467 • 1 year ago

    "the metrics are extremely useful, but not when they are treated like a goal"

  • @StreetsOfBoston • 1 year ago

    I don't think that teams are dumb when gaming metrics.
    They, instead, are thinking about their paycheck or bonus. They know very well what they are doing when gaming the metrics/system.
    What they are failing at is being able to push back against perverse incentives. Often, that is very hard to do.

    • @ContinuousDelivery • 1 year ago • +1

      No one said that the teams were dumb; the metrics were dumb. If you use metrics to force behaviour that people don't understand or believe in, people will nearly always find a way to subvert them.

    • @StreetsOfBoston • 1 year ago

      @@ContinuousDelivery Sorry, I may have misunderstood your words around 15:00 or so, where you said that "humans were dumb that they love a simple target" :)
      You're absolutely correct that folks will game metrics they don't believe in.

    • @ContinuousDelivery • 1 year ago • +1

      @@StreetsOfBoston Sorry, that is confusing! I was talking about the people that set the metric, not those that gamed it.

  • @justintomlinson9311 • 1 year ago • +2

    DORA is OK for "build the thing right" but makes no attempt to measure "build the right thing". Value or outcome delivered in relation to the hypothesis (not feature) postulated is just ignored. Just as with velocity, it's possible to have great DORA metrics and be delivering no customer value. Useful but incomplete.

    • @magnusaxelqvist7634 • 1 year ago • +1

      It also creates faster feedback/learning loops, so that you can validate/throw away/pivot your assumptions/business ideas much faster.

    • @Microman6502 • 1 year ago • +4

      They are productivity metrics. They're measuring the team's ability to get to those answers. Obviously, the faster you can iterate through ideas in production, the greater your capacity to get to a point where you can prove or disprove your hypotheses. The question "did I build the right thing?" isn't the question these metrics are trying to answer. That is ultimately measured by your financial performance, and is addressed by a different set of measures.

  • @AntonioGranjo • 1 year ago

    Great video and a great t-shirt. You can copy and paste this comment into all your videos.

  • @fanemanelistu9235 • 1 year ago

    Talking 20 min about how to use DORA metrics without defining or at least listing said DORA metrics.

    • @ContinuousDelivery • 1 year ago • +3

      It both defines and lists them:
      Stability = Change Failure Rate & Mean Time to Recover
      Throughput = Lead Time & Deployment Frequency
      These definitions, and explanations of them, are used throughout the video.
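To make those definitions concrete, here is a minimal sketch in Python of how the four measures can be computed from raw deployment records. The Deployment record, its field names, and the example data are illustrative assumptions, not DORA's published method.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical deployment record; the field names are assumptions.
@dataclass
class Deployment:
    committed_at: datetime                # when the change was committed
    deployed_at: datetime                 # when it reached production
    failed: bool                          # did it cause a production failure?
    restored_at: datetime | None = None   # when service was restored, if it failed

def dora_metrics(deployments: list[Deployment], period_days: int) -> dict:
    """Compute the four DORA metrics for a non-empty list of deployments."""
    failures = [d for d in deployments if d.failed]
    return {
        # Throughput: Deployment Frequency & Lead Time
        "deployment_frequency_per_day": len(deployments) / period_days,
        "lead_time_hours": mean(
            (d.deployed_at - d.committed_at).total_seconds() / 3600
            for d in deployments),
        # Stability: Change Failure Rate & Mean Time to Recover
        "change_failure_rate": len(failures) / len(deployments),
        "mean_time_to_recover_hours": mean(
            (d.restored_at - d.deployed_at).total_seconds() / 3600
            for d in failures if d.restored_at) if failures else 0.0,
    }

# Example with two illustrative deployments over one week.
start = datetime(2022, 6, 1, 12, 0)
deploys = [
    Deployment(start, start + timedelta(hours=2), failed=False),
    Deployment(start, start + timedelta(hours=30), failed=True,
               restored_at=start + timedelta(hours=31)),
]
print(dora_metrics(deploys, period_days=7))
```

As the episode stresses, these are trailing indicators: they summarise deliveries that have already happened, so they track improvement rather than predict it.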

  • @zoranProCode • 1 year ago

    But or bot?