Did Microservices Break DORA?

  • Published 15 Nov 2022
  • This year's "State of DevOps Report" from DORA has been published and there are some interesting, and surprising, findings. One of them says that "loosely coupled architecture leads to more burnout". This is the complete opposite of what the previous year's results said - so what's going on?
    In this episode Dave Farley looks at some of the findings from this year's report, and explains why this report matters. Dave also offers his critique of the report and highlights some of the interesting things it has to say about security, cloud, and loosely coupled architecture, as well as asking whether the new addition to the best metrics in software really stands up to scrutiny. (It's not all about microservices.)
    -----------------------------------------------------------------------------------
    🖇LINKS
    🔗State of DevOps Report 2022: ➡️ cloud.google.com/devops/state...
    🔗 SLSA Framework (Security): ➡️ slsa.dev
    🔗 DORA Community Group: ➡️ sites.google.com/view/doracom...
    -----------------------------------------------------------------------------------
    ⭐ PATREON:
    Join the Continuous Delivery community and access extra perks & content!
    JOIN HERE ➡️ bit.ly/ContinuousDeliveryPatreon
    -------------------------------------------------------------------------------------
    🚨 NEW Acceptance Testing COURSES:
    Take a look at my collection of 3 courses on ATDD. Learn a BDD approach to Acceptance Testing: analyse problems in a way that helps determine which product features to develop, learn to write better stories and specifications, reduce reliance on manual testing, and produce better outcome-focussed software for your users.
    CHECK OUT THE LIST OF 3 COURSES HERE ➡️ courses.cd.training/pages/abo...
    -------------------------------------------------------------------------------------
    📚 BOOKS:
    📖 Dave’s NEW BOOK "Modern Software Engineering" is available here
    ➡️ amzn.to/3DwdwT3
    📖 The original, award-winning "Continuous Delivery" book by Dave Farley and Jez Humble ➡️ amzn.to/2WxRYmx
    📖 "Continuous Delivery Pipelines" by Dave Farley
    Paperback ➡️ amzn.to/3gIULlA
    ebook version ➡️ leanpub.com/cd-pipelines
    NOTE: If you click on one of the Amazon Affiliate links and buy the book, Continuous Delivery Ltd. will get a small fee for the recommendation with NO increase in cost to you.
    -------------------------------------------------------------------------------------
    CHANNEL SPONSORS:
    Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ bit.ly/3ASy8n0
    Octopus are the makers of Octopus Deploy, the single place for your team to manage releases, automate deployments, and automate the runbooks that keep your software operating. ➡️ oc.to/Dave-Farley
    SpecFlow - Behavior Driven Development for .NET. SpecFlow helps teams bind automation to feature files and share the resulting examples as Living Documentation across the team and stakeholders. ➡️ go.specflow.org/dave_farley
    TransFICC provides low-latency connectivity, automated trading workflows and e-trading systems for Fixed Income and Derivatives. TransFICC resolves the issue of market fragmentation by providing banks and asset managers with a unified low-latency, robust and scalable API, which provides connectivity to multiple trading venues while supporting numerous complex workflows across asset classes such as Rates and Credit Bonds, Repos, Mortgage-Backed Securities and Interest Rate Swaps ➡️ transficc.com
  • Science & Technology

COMMENTS • 95

  • @tieTYT
    @tieTYT 1 year ago +50

    My primary concern with the DORA report is that the actual data is proprietary. I've never been able to find the actual survey they send out, either (UPDATE: I recently took the survey. It's nothing like the quick survey you can take on the website). What we see is *their* interpretation in summary form.
    Is this scientific? They claim it is, but with these limitations, I don't see how it can be peer reviewed.
    I want to be wrong. Please correct any flaws in my understanding. But without open access to the actual data, I don't see why the DORA report should be trusted any more than a case study on a consultant's website.

    • @andishawjfac
      @andishawjfac 1 year ago

      The survey is easily Googleable in 5 minutes; perhaps you should do that instead of asking other people to do the work for you? Why don't YOU go investigate and prove your points right? The burden of proof is on you to prove your statements, not on us to disprove you.

    • @tieTYT
      @tieTYT 1 year ago +3

      @@andishawjfac ok, sorry if I've insulted you in some way, but I've googled for more than five minutes and haven't been able to find it. I purchased their book and looked for it there too. If it's not too much hassle, would you mind linking to it directly so I can be sure I find what you found?
      Regardless, the crux of my argument doesn't depend on public access to the survey *questions.* It's the lack of survey *answers* that concerns me.

    • @manishm9478
      @manishm9478 7 months ago +1

      @@tieTYT Yes, my own understanding from reading the book is that it's very intriguing but by no means conclusive that the data says what they claim. In particular, I don't think they demonstrate causation very well. Meanwhile there are other factors than continuous delivery which they say are also highly correlated with successful companies, such as good leadership.
      I still think the DORA metrics can be useful, but there are some limitations and assumptions around them to understand before judging whether they will actually help improve your team or business.

    • @tieTYT
      @tieTYT 7 months ago

      @@manishm9478 In their defense, and to play devil's advocate, I think they explicitly state they are not proving causation. As a comparison, they say most of the information in the Harvard Business Review isn't proving causation either, but HBR has a strong influence on business processes.
      Proving causation is a very high bar that most scientific studies don't achieve (is my understanding).

  • @gentooman
    @gentooman 1 year ago +11

    I'm convinced that the microservices meme was promoted by cloud providers to get devs to spend more. Monoliths that only need 1 server/container to run aren't very profitable.
    The mere fact that we're using the word "monolith" to describe normal, functional applications is a testament to that.

    • @_shulhan
      @_shulhan 1 year ago +2

      Wait until you hear about Kubernetes.

  • @gilgamecha
    @gilgamecha 1 year ago +38

    Dave Farley is not so much a technologist as an applied epistemologist. It's a rare and invaluable approach to the subject.

    • @Ildorion09
      @Ildorion09 1 year ago +2

      Especially the combination of both.

  • @gunderd
    @gunderd 1 year ago +24

    My guess is lots of teams are coming up with ill-conceived boundaries for their microservices, and jumping into them before there's really a need. This is especially easy to do if starting out with a junior team (let's face it, most are), a poorly understood domain (when breaking new ground this will often be the case), and a blind push for a microservices-first approach (because... they're trendy?!). People need to invest time in a) learning DDD/software design basics, and b) building well-structured modular monoliths while their teams are still small; and only go distributed when they actually need to scale their teams above ~10 devs and the domain fog has cleared a bit. Distributed systems are hard, people! If you can't solve your problem cleanly without that added complexity you will definitely be burning out with it!

    • @figlermaert
      @figlermaert 2 months ago +1

      I saw that first hand as an API product owner standing up microservices to handle all the data traffic in our org. Made it waaaaay too complex with four teams with disparate amounts of domain knowledge. I was the only PO that had enough, and I was constantly having to teach everyone what to do in their respective microservice.
      That's just from the business end. Our devs had everything going against them with a mix of REST and GraphQL endpoints, C# and Java services, blending MongoDB, Kafka, Kubernetes, etc. It was like picking every complex thing you could and throwing it at a wall.

    • @gunderd
      @gunderd 2 months ago

      @@figlermaert you don't happen to be a colleague, do you? :-) I feel like this exact situation is playing out in so many places right now. I've personally lived it multiple times, watching the slow-motion train wreck play out in front of me while being powerless to do anything about it. I've got so many examples of anti-patterns in my historical experience, from business teams flippantly choosing team (and hence service) boundaries around project deliverables, to the classic 'microservice' architecture that's always deployed all-at-once because it's a big ball of tightly coupled distributed mud, to bad decisions becoming etched in stone due to the pain of refactoring across service/team/technology boundaries... I reckon I'm just about jaded enough to write a book about my experiences.

    • @figlermaert
      @figlermaert 2 months ago

      @@gunderd lol, sounds like not, but it likely could have been! Sadly, and weirdly fortunately, the company where we got ourselves into that bucket ended up shutting down.
      We were actually moving to a more traditional caching model with webhooks and REST endpoints (to represent up-to-date data from our third-party system) to solve our problems, and working toward abandoning the microservices framework, but then the company folded.

  • @craftacademy94
    @craftacademy94 1 year ago +9

    Our industry talks a lot more about continuous delivery practices these last months. That's a good thing! But it also means that more teams embrace these practices blindly, without really knowing how to apply them effectively.
    I see this every time. Teams think "decoupled architecture" and then go all-in with TDD, trunk-based development, hexagonal architecture. Since they don't know how to apply these techniques, they end up with a brittle test suite, a very cumbersome versioning strategy due to bad CI, and a very coupled "decoupled" architecture due to the absence of strategic thinking about bounded contexts. All in all, they conclude that "decoupled architecture" = a mess to work with, thus leading to more burnout.

  • @purdysanchez
    @purdysanchez 1 year ago +20

    The biggest problem with the cloud is that it encourages extremely non-portable code as a shortcut to building a scalable solution. Vendor lock-in is a primary business strategy for the cloud providers. You can definitely build systems that don't have lock-in, but at that point you're just using the cloud for server hosting. The number of integration points that cloud service providers abstract away is a double-edged sword.

    • @daveh0
      @daveh0 1 year ago

      Like containerization and kubernetes?

    • @purdysanchez
      @purdysanchez 1 year ago +1

      @@daveh0, containerization and Kubernetes are like building from scratch compared to the current generation of cloud products.

    • @daveh0
      @daveh0 1 year ago

      @@purdysanchez the current generation of cloud products is things like AKS, GKE and EKS. They are nothing like building from scratch. This current generation is a big shift towards commoditization compared to the previous one!

    • @purdysanchez
      @purdysanchez 1 year ago +1

      @@daveh0, my original comment wasn't talking about Kubernetes services offered by cloud providers. My comment was about how using cloud services makes your code non-portable in exchange for baked in auto-scaling, maintenance, and administration. Cloud providers offer things like functions as a service, turn-key distributed data access services, message services, monitoring, alerting, authentication, authorization, caching, etc.

  • @snan1384
    @snan1384 1 year ago +5

    While microservices certainly have benefits, there are places where this approach is implemented plainly wrong by a company and can cause such burnout. I worked for a few years in a place where responsibilities were segmented to the point where there were teams of 2 responsible for each microservice. This caused a domino effect in attrition a few times (one person left the company, the remaining team member had to carry the whole of microservice A on his own while training a new dev, so he burned out, so another experienced dev was moved to take care of A, so now microservice B was left with one dev, and so on). This was clearly a project management issue, but for me it is perfectly understandable that people with similar experience would correlate it with working in a microservices architecture. Therefore I wholeheartedly agree that while conducting such research one must focus on asking the correct questions, or we will lose our ability to progress. Thanks Dave!

  • @RFalhar
    @RFalhar 1 year ago +3

    I feel that the moment Jez Humble and Nicole Forsgren left DORA, the quality of the research plummeted. As you say, it has become more of a marketing tool than actual objective research.

  • @cjcdoomed
    @cjcdoomed 1 year ago +1

    Thanks for clarifying the oddities in this year's report!

  • @leocd277
    @leocd277 1 year ago +5

    Had to replay several times, that t shirt is too distracting 🤣

  • @reidspencer61
    @reidspencer61 1 year ago +2

    I would like to suggest that "Reliability", defined as "How well your services meet customer expectations", is actually a needed metric, despite the dictionary definition of reliability being "the extent to which an experiment, test, or measuring procedure yields the same results on repeated trials" (Merriam-Webster). The "same results" in DORA's case is that of continuously meeting customer expectations. Frustration, and its converse Satisfaction, are both functions of expectation. While these are difficult things to measure accurately in humans, it is important that a software project continuously create satisfaction and not frustration for its users. Expectations of a software system's users change continuously over time. So, in my view, "Reliability" is a measure of how well a software system is keeping up with the changing expectations (infer that as "requirements") of its users. This is similar to Stability being a measure of quality, and Throughput being a measure of efficiency of production. All three of these measures (stability, throughput, reliability) are necessary to avoid obsolescence of a software system. How many times have you walked away from a buggy (low stability), slow-to-be-fixed (low throughput), and increasingly irrelevant (low reliability) system? To me, utility (as in the generation of value) is the ultimate measure of a software system, and all three DORA measures factor into the assessment of utility.

  • @supervacuum
    @supervacuum 1 year ago

    Thought-provoking conclusions and well-reasoned arguments. Top job!

  • @JavierBonnemaison
    @JavierBonnemaison 1 year ago +1

    Hi Dave, I would characterize the DORA metrics differently. They are fundamentally flow metrics, in particular throughput (TH) and cycle time (CT). Lead time and MTTR are cycle times, or how long it takes for something to complete, whether it is the time from order to delivery in general (standard definition of lead time) or commit to deploy in this context, while change failure rate and release frequency are examples of throughput measures, or how many events happen in a given period of time. Change failure rate and MTTR can only be used as measures of stability when collected and viewed as trends. On their own they are only snapshots of two types of throughput. I understand that some people could assume that lower numbers for change failure rate and MTTR indicate higher stability, but these numbers are relative (how mission-critical is the system for MTTR, how large is the throughput for change failure rate), so they are not very useful on their own. All of these flow metrics are just inputs into a larger measurement model to manage performance, and they are primarily useful for continuous improvement purposes (on this last point I am sure we agree).
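    A minimal sketch of that framing, using hypothetical deployment-log data (not anything from the report): lead time is a cycle time per change, while deployment frequency and change failure rate are throughput-style measures over a window.

    ```python
    from datetime import datetime, timedelta

    # Hypothetical deployment log: (commit_time, deploy_time, caused_failure)
    deploys = [
        (datetime(2022, 11, 1, 9), datetime(2022, 11, 1, 15), False),
        (datetime(2022, 11, 2, 10), datetime(2022, 11, 3, 11), True),
        (datetime(2022, 11, 4, 8), datetime(2022, 11, 4, 9), False),
    ]

    # Cycle time: how long each change took from commit to deploy.
    lead_times = [deploy - commit for commit, deploy, _ in deploys]
    mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

    # Throughput: events per unit time, and the share of them that failed.
    window_days = (deploys[-1][1] - deploys[0][0]).days or 1
    deploys_per_day = len(deploys) / window_days
    change_failure_rate = sum(failed for _, _, failed in deploys) / len(deploys)

    print(mean_lead_time, deploys_per_day, change_failure_rate)
    ```

    As the comment says, the snapshot numbers only become meaningful as trends: rerun this per week or per quarter and compare.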

  • @vovPop
    @vovPop 1 year ago +2

    For me, a microservices architecture approach is being adopted very frequently by inadequately sized (i.e. small, startup) teams… so usually a team manages more than a single service… obviously that burns out the people!

  • @softwaretestinglearninghub
    @softwaretestinglearninghub 1 year ago

    thank you for sharing your thoughts on it.

  • @gunnarthorburn1219
    @gunnarthorburn1219 1 year ago

    I agree. I read the State of DevOps Report just a few weeks ago, and I was surprised and a bit disappointed; something felt wrong. Now after watching this video I understand what felt wrong. The original four questions have rather objective (numeric) answers. "How well your services meet customer expectations" is much more subjective. Low expectations are enough for a good score.

  • @judas1337
    @judas1337 1 year ago +4

    What if the evidence for cloud computing showed that it's detrimental to, or at least has no effect on, Stability and/or Throughput?
    Would it be in Google's interest to include it in the report?
    What if Reliability was put into the report because cloud computing can be shown, or assumed, to influence it positively, but not the originals?

  • @dougr550
    @dougr550 1 year ago

    Curious if the loose coupled + continuous delivery point could also be tied to where companies are in their growth trajectory. By definition if loose coupling is hard and requires more skill, it's going to reduce throughput until you get good at it. High coupling is mostly fine until you try to add a bunch of new features, or worse, are trying to do the same across a growing list of applications that the company or department supports. Loose coupling only becomes important or even beneficial when you start to scale your applications and overall infrastructure in a way that applications with high coupling are no longer viable.

  • @bryanfinster7978
    @bryanfinster7978 1 year ago +1

    There are several challenges with the DORA survey when the questions ask about things like "continuous delivery" and "microservices". Using those terms without first establishing a shared understanding of their meaning will yield poor data quality. Like you, I see CD as an extension of CI and that it's nonsensical to have CD without it. However, there are many who think CD is build/deploy automation or the ability to deliver on-demand once per month. Same with microservices. Lacking a precise definition, they'll answer they are using a microservice architecture even if two or more services must deploy synchronously or services are coupled by the database.
    One possible reason for the "surprising" results is that they are trying to learn too many things at once.

  • @Ulvhamne
    @Ulvhamne 1 year ago +2

    One thing I've seen again and again is that there is no definition of APIs, and they keep being changed without any notification to consumers or providers. This causes massive amounts of unnecessary work everywhere I've seen it.

  • @ricardoamendoeira3800
    @ricardoamendoeira3800 1 year ago +1

    Hey Dave, may I suggest that you do an interview/conversation with the Primogen or Theo?
    It would be really nice to see you talk with someone that doesn't like TDD (like Primogen) or even automated tests at all (Theo), even though they love strong type systems.
    They have reasonably large audiences and they have worked with other constraints that may influence their opinions (Theo has a very small start-up and he says tests would slow down their experimentation with the number of engineers they have).

    • @jimmyhirr5773
      @jimmyhirr5773 1 year ago

      When I search for "the primogen" I get an account that is about playing tabletop RPGs. And there are many people named "Theo." Could you be more specific about how to find the people you mentioned?

  • @LeoOrientis
    @LeoOrientis 1 year ago +4

    If you put an engineer in charge of a software system, that person would likely cut through misinterpretations of trendy methodologies and quickly get to the truths that you're expounding in these videos.
    But the people in charge usually aren't engineers. They're careerists. Playing the social game of the modern enterprise.
    For them, the goal isn't even achieving profit for the company as a whole. Like so many of the social games we play, it's a game of status. Of waiting optimistically for the guy above us to slip on a banana peel, so that we can be ready to step in and take their corner office and reserved parking place.
    In the modern enterprise, leaders don't lead. They temporarily preside over systems they didn't make, doing their damnedest to give their overlords the impression that their "kind of thinking" is the better mouse-trap, safe in the knowledge that if they can just sustain that illusion for long enough, they'll be off to their next position before anyone makes an honest attempt to measure the impacts of their decisions.
    So please don't tell these "bosses" that speed and quality aren't in opposition. They will tragically misunderstand.
    Because they create nothing - and know nothing of the systems over which they reign - the only leverage they feel they have is pressure and coercion. To them, increasing speed isn't about a long-term strategy to reduce friction and improve communication. It's about using the carrot and the stick to make sure everyone is constantly rushing about in a blind panic, competing with each of their teammates to achieve the highest "social credit score" - so that they won't be among the discarded when it's time for the next round of arbitrary restructurings. It isn't about doing good work. It's a reign of terror.
    I love listening to your interview of Allen Holub because he gets to name the elephant in the room: that most software is produced within the often dysfunctional system that is the modern enterprise. When consultants like yourselves encounter teams chasing the latest trendy methodologies and getting it wrong, it isn't because they're thick or naive. Rather, the gold that those trends promise isn't even their end goal. Their _pursuit of excellence_ fits into a logic that has nothing to do with software quality: helping the boss to pretend they're some kind of innovative genius. (And that goal tends to be turtles all the way up.)
    I would even speculate that this social game is so relentless and psychologically compelling that a good-faith engineer, recently promoted to a minor management role, would rapidly, and without necessarily wanting to, find themselves abandoning any true commitment to craft or results, and instead directing all of their energies towards survival in this perverse game of statuses.

    • @jimmyhirr5773
      @jimmyhirr5773 1 year ago

      Have you read Developer Hegemony? It's all about these status games people play in software development hierarchies.

  • @H4KnSL4K
    @H4KnSL4K 1 year ago +1

    I like your description of what science is. Unfortunately, there's a lot of 'science' where we just go with the things we like and assume they are true..

    • @ContinuousDelivery
      @ContinuousDelivery 1 year ago +1

      Sure, the trouble with science is that humans do it. There are politics and rivalries and all of that. The difference, though, is that however strong the lobby in science, it will be overturned eventually, because it won't fit the facts. I'd argue that the idea prevalent in quantum physics of "shut up and calculate" is an anti-science idea, and it was very strongly pushed. People were actively discouraged from studying what QM really means. But now that is changing, and lots of people are actively interested in trying to find that out. Unlike other areas, even if it sometimes takes a long time, science is still about the underlying truth, and eventually, even if it means waiting for the dogmatists to die, the truth will find a way out.

  • @georgehelyar
    @georgehelyar 1 year ago +1

    I think "How well your services meet customer expectations" is worth measuring, but I wouldn't call it reliability, I would call it value.
    Without measuring this, you could be delivering bug free code quickly, but it might not be delivering value if you're building the wrong thing in the first place.

  • @benjaminhammerich1409
    @benjaminhammerich1409 1 year ago

    Is there a book that can be recommended about loosely coupled systems/micro service architecture?

  • @Timelog88
    @Timelog88 1 year ago

    The part of the DORA report I found most interesting, as it is also an active discussion topic in the company I work at, is the part about SLSA practices, in particular the practice of two-person review. I am still not sure how that fits with the practice of CI/CD where you don't use feature branches: on the one hand, two-person review requires two people apart from the developer who wrote the code to look at it before it's merged to the trunk, but on the other hand you integrate your local code directly on the trunk multiple times a day.
    How do others see this? Am I missing something subtle here?

    • @thomasking9573
      @thomasking9573 1 year ago

      I think the idea is to do away with PR reviews in favour of pair-programming. There is obviously a trade-off here; some people can't stand pair-programming, for one. We do two-person review and not pair programming. We merge in at most once a day. That's not as bad as long-lived branches, but certainly not as agile as the pair-programming approach.

    • @thomasking9573
      @thomasking9573 1 year ago

      Just to add, we don't use something like SonarQube, but I wonder if that would reduce the need to have two-person reviews.

    • @Timelog88
      @Timelog88 1 year ago

      @@thomasking9573 Sonar will not remove the need for two-person reviews, but it will help limit the number of issues missed by a review. I'd say it flags about 90% of the common stuff you would also want to flag during a manual review, and it also takes into account things that are hard to quantify manually (like cyclomatic complexity).
      The discussion in our company is still ongoing, but currently I am looking into doing pair programming with short-lived branches (as TBD prescribes). But that still leaves the "issue" of when you can't pair, as there is limited tooling available with a good pre-release flow for reviews.

    • @manishm9478
      @manishm9478 7 months ago

      Dave shared in a video somewhere an approach where the human review takes place after the code has already been committed. I can't remember the specifics, but the idea was that the automated pipeline should ensure the code will at least work, and the human review can then come along later to check the code does what it's supposed to.
      It does run the risk of code not being reviewed, but it also creates a bias towards releasing code, which has its own benefits.

  • @farrongoth6712
    @farrongoth6712 1 year ago

    15:48 This part partly confirms a few theories I've had for a while.
    One is the difference between experience and knowledge: there are things the majority of people can only learn through experience. This isn't my theory, obviously, but I think it is becoming increasingly true for tech.
    And I am of the opinion that the majority (greater than 50%) of skills required for modern non-CS programming/development are far better learned by experience, via apprenticeships, than via CS degrees. That's not to say they can't be learned that way, but there are very few people who can learn them that way, and fewer still who can put them into practice via that route. There are a few aspects to this. The lack of opportunity for young developers to actually develop their own algorithms is damaging. I know it goes against the DRY principle, but I think this is true, and I think it extends far beyond that relatively low level.
    There's also a lack of exposure to low-level work, and that's a relatively broad term whose relevance will vary depending on the project and/or business, but suffice to say they rarely, if ever, get experience in making their own abstractions, setting up their own build chain, learning the relevance of it, etc.
    Modern development processes have so many layers that it apparently takes 16 years not only to get exposure to all of them, but also to get good at them. Some people will reply that developers are expected to learn and keep up with that in their own time. I will explain later why that is becoming less and less viable, and flippantly say: fine, then employers should pay developers for 100 hours of work a week, because that's how much time they would have to put in to keep up.
    Anecdotally, I can only think of the Power Systems module I had at uni, where a lecturer would skip large parts of the calculations because he had 20 years of experience and knew shortcuts for the given circuits. The problem was that no one in the lecture could follow it, even people particularly gifted in mathematics, of which I was not one.
    The growth of the tech industry means only one thing: companies will have to start taking employees that want jobs, not careers. So, back to the previous point about it taking 16 years to get exposure to all levels: this set of people is not going to do that, or expedite it in their own time, and they are correct not to do so. The alternative is AI, and it's looking more and more likely AI will take those jobs well before it takes any menial jobs. The other aspect of this is that you can only expose yourself to so much, and chances are the projects you get exposure to will be relatively small, which only takes you so far.
    There is also a difference between the people who can set up the pipelines, the people who can work in and understand those pipelines, and the people who simply have to work in those pipelines: they can, but they can't understand them or don't have the time to. The first two sets require very different skills, both sets are relatively small, and there is an even smaller intersection with both; the latter is probably the largest group, and that's going to be the majority of your workforce.

  • @chrisjohnson7255
    @chrisjohnson7255 1 year ago +1

    Question about TDD: I recently started a side project. I wrote out the test cases but no code, then coded the solution one test case at a time, then I wrote the test code; however, I made sure that the tests matched the test cases and drove the design from said test cases. Do I get a passing grade? I have already seen that my quality and stability are higher, and refactoring is insanely easier and gives me high confidence. Anyone can chime in! This approach makes me happier and I feel that it still falls in line with DORA.

    • @jangohemmes352
      @jangohemmes352 1 year ago +2

      Yeah, I do think there are some advocates for writing tests right after. I do it myself sometimes too, but it is still a bit of a no-no. I really only have one reason for that, but it's a big one:
      Writing the tests afterwards takes away the red step in red-green-refactor. Not seeing your test fail first, before writing the code that makes it pass, robs you of the verification step that your test *does in fact work!*
      Testing your test is vital! There could be a bug in there making it pass for the wrong reasons. Red-green-refactor makes it a more foolproof method that you don't have to think too much about. It takes away the human error.
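      The red step described above can be sketched in a few lines (a hypothetical `slugify` function, plain `assert` instead of a test framework):

      ```python
      # Red: write the test before any production code and watch it fail.
      # A test that has never failed hasn't proven it can catch a bug.
      def test_slugify():
          assert slugify("Did Microservices Break DORA?") == "did-microservices-break-dora"

      try:
          test_slugify()
      except NameError:
          print("red: fails because slugify doesn't exist yet")

      # Green: write just enough code to make the test pass.
      def slugify(title):
          words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
          return "-".join(words)

      test_slugify()  # now passes
      print("green: passes")

      # Refactor: reshape the code with the passing test as a safety net.
      ```

      Writing the test after the implementation skips the first step, so you never see the test catch anything.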

    • @chrisjohnson7255
      @chrisjohnson7255 1 year ago

      @@jangohemmes352 Does this implementation blur the line between BDD and TDD? Writing tests that align more with the user experience?

    • @jangohemmes352
      @jangohemmes352 1 year ago

      @@chrisjohnson7255 No, I don't really know what you're getting at; could you elaborate? I was making an argument as to why writing unit tests first is better than writing them after.

    • @chrisjohnson7255
      @chrisjohnson7255 1 year ago +1

      @@jangohemmes352 I was thinking that the way I was approaching testing was from the user's point of view: how they might expect the code to work if they saw it in snippets. But that is not the case, and my pattern clearly falls much more in line with TDD, with the caveat that I start by writing the test first.

  • @MasterLJ
    @MasterLJ 1 year ago +1

    1. Take a shot every time Dave says something wrong
    2. ???
    3. Celebrate your sobriety

  • @joebowbeer
    @joebowbeer 1 year ago

    15:50 this is where some code coverage might bump up the scores

  • @mikemegalodon2114
    @mikemegalodon2114 1 year ago

    thanks for the video

  • @Sergio_Loureiro
    @Sergio_Loureiro 1 year ago

    Me at the beginning of the video now. Had to interrupt to say I love the t-shirt.

  • @VincentJOBARD
    @VincentJOBARD 1 year ago

    I worked with several teams as a production engineer / "Kubernetes expert" who created microservices for an e-commerce platform, and I burned out. Twice in less than a year. But I don't think that was due to loosely coupled architecture. Why?
    1) The microservices did not follow Clean Architecture principles and were heavily coupled.
    2) The teams were programmers only (+PO+SM) who implemented an architecture decided by the architecture team. Nobody really knew about cloud-native architecture. For example, they wrote a Java batch inside the microservice that called a datalake, following the architecture team's recommendation. Guess what happened with this microservice scaled to 6 pods in production? 6 calls to the datalake each time. Nobody told them they needed to use a CronJob until I had to fix the issue it raised in production. They had no skills in the team to work on CI/CD either, because that is an "Ops" thing, so they depended on different enabler teams (Production, Indus, Monitoring) - so, in fact, on me as their "K8s expert".
    3) They used an Agile framework badly implemented by people who knew nothing about the DevOps approach. All the enabler teams used push-flow kanban (I know...) and couldn't absorb the work asked of them by the train team.
    So for me the cause of burnout was bad organisation, with teams poorly trained for cost-killing reasons :/

  • @nevokrien95
    @nevokrien95 8 months ago

    It's weird to me that they're changing their metrics...
    I would have been way happier if reliability was part of a new group of metrics that complements DORA.

  • @craigstatham4397
    @craigstatham4397 1 year ago

    I believe higher throughput *is* the reason for burnout. That's because there is no free trade-off between throughput and stability. Any system that has achieved a certain level of stability will not remain at that level if throughput changes - stability will also change unless additional effort is made to maintain the status quo. So increasing throughput requires more effort to sustain a stable system. This is much like electrical circuits. Increasing throughput (electrical current) actually induces further impedance that acts against the flow. To transmit electricity over long distances at the same power level you have to reduce the electrical current but increase voltage, otherwise the power losses are too great and the system (literally) becomes burnt out. Software systems seem to follow the same pattern. It's naive to think you can improve throughput without improving stability - otherwise you will suffer burnout.
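The electrical half of that analogy is standard physics: for a fixed delivered power, resistive transmission loss scales with the square of the current, so raising the voltage (and thereby lowering the current) cuts the loss. A quick sketch of the derivation:

```latex
P_{\text{delivered}} = IV, \qquad P_{\text{loss}} = I^{2}R
\quad\Rightarrow\quad
P_{\text{loss}} = \left(\frac{P_{\text{delivered}}}{V}\right)^{2} R
```

So doubling the voltage at the same delivered power quarters the resistive loss, which is why long-distance lines run at high voltage.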

  • @Kitsune_Dev
    @Kitsune_Dev 1 year ago +4

    Dora the explorer? 😳

  • @humanlytyped
    @humanlytyped 1 year ago

    Hah. Dave Farley wore his Halloween t-shirt 😂

  • @MatthewChaplain
    @MatthewChaplain 1 year ago

    20:37 I've been arguing this point with my colleagues for some time. For me, there are two kinds of quality: correctness and changeability. But there are also two kinds of correctness: the software does what I intended, and the software does what the user needs. Without the first kind, the software is trash. Without the second kind, the software is useless. An important observation for me was seeing that delivering software that does what I intended can directly effect a change on what the user needs. That is, now that I have made certain tasks easier, other unthought-of use cases emerge. Thus, the ability for the software to change contributes overwhelmingly to the software evolving towards this dual correctness.

  • @matthewmascord591
    @matthewmascord591 1 year ago +2

    "The ability to change a system is a defining characteristic of its quality" - agreed, as the Pragmatic Programmers said, "Good design is easier to change than bad design" - the ETC principle.

  • @barneylaurance1865
    @barneylaurance1865 1 year ago +2

    Maybe you just don't become a high performer without using version control. It could be that 100% of high performers use a VCS and only 75% of low performers do - and 100% is 33% more than 75%. The implication is that 25% of low performers are not using a VCS, which still seems like a lot, but I'm sure there are some organizations around that still aren't using any VCS.
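The arithmetic in that comment can be checked directly (a trivial sketch; the 100%/75% figures are the commenter's hypothetical, not numbers from the report):

```python
# If 100% of high performers use a VCS and 75% of low performers do,
# the high performers' adoption rate is one third (about 33%) higher,
# and 25% of low performers are left without any VCS.
high, low = 1.00, 0.75
relative_increase = (high - low) / low   # 0.333...
non_vcs_low_performers = 1.00 - low      # 0.25
print(round(relative_increase * 100))    # -> 33
```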

    • @vanivari359
      @vanivari359 1 year ago +1

      There are also tools and platforms which do not use a conventional source code repository but instead manage the "code" within the platform, like many low-code approaches or AWS Lambda, for example. There are projects in big organisations out there which use the OpenShift UI to configure deployments, or develop Lambda services in the AWS web console, etc. Some of those platforms are not capable of using an actual VCS.
      I remember one project in which everything was in Git except the configuration and transformation code snippets in Apache NiFi. So if you asked someone from that team, they would answer no in regards to VCS.

  • @LimitedWard
    @LimitedWard 1 year ago +1

    This video has oddly coincident timing with Elon Musk's post about turning off microservices at Twitter. Time to find out how loosely coupled their architecture truly is! 😂

    • @saritsotangkur2438
      @saritsotangkur2438 1 year ago

      Loosely coupled and independent/resilient are separate concepts. A load balancer is a very loosely coupled service. If you turn it off, it doesn’t matter how loosely coupled the rest of the system is; the site will still go down.

  • @jimiwikmanofficial
    @jimiwikmanofficial 1 year ago

    I think you have misinterpreted the Iron Triangle. The metric is not speed, it is Time. It has nothing to do with how fast you can produce things, but with how much time you have to get the right thing done. Deep thinking costs time.

  • @esra_erimez
    @esra_erimez 1 year ago +4

    No, Dora took her friend Boots, a map and a backpack to go exploring.

  • @TheEvertw
    @TheEvertw 2 months ago

    Me, I do not think the cloud is inherently good. In fact, quite the opposite. I think it is a new way in which the large providers try to lock us into their over-priced services.

  • @vladinosky
    @vladinosky 1 year ago +6

    Stats without definitions of the underlying metrics and of how the studies are conducted are really pointless. This is NOT science, and I don't see how it helps anyone or any team perform better. It also brings back painful memories of people taking processes and practices, e.g. Agile or Scrum, as dogmas without understanding their purpose.

  • @daveh0
    @daveh0 1 year ago

    Reliability: is Google's ownership tainting their research?

  • @kayakMike1000
    @kayakMike1000 1 year ago

    Oh come on... people have been trying to measure performance of dev teams before the Mythical Man Month.

    • @ContinuousDelivery
      @ContinuousDelivery  1 year ago +1

      Yes, and not succeeding as far as achieving a predictive model goes. DORA is different in that respect.

    • @jimmyhirr5773
      @jimmyhirr5773 1 year ago

      Michael, do you believe that it is possible to improve the performance of a dev team?
      If you do believe this, then how do you know?

  • @aslkdjfzxcv9779
    @aslkdjfzxcv9779 1 year ago

    Generically, I prefer loosely coupled, distributed, microservicey architecture over monoliths.

    • @saritsotangkur2438
      @saritsotangkur2438 1 year ago

      I also prefer codebases without legacy code, that are self-documenting where they can be and always have in-sync code comments where necessary, with perfectly defined interfaces at every abstraction level to support every new feature that will be asked of us, never requiring us to refactor. That is what I prefer… is it realistic that we’d actually get this? If yes, then sure, go with microservices, because the APIs and guarantees you define on day 1 will be good through day 1000, and you’ll never have to work with 20 other teams to get them off your v1 API - the one you told them was being deprecated 6 months ago and which they still haven’t gotten off yet.

  • @a544jh
    @a544jh 1 year ago

    The random Patreon text that appears in the middle of your latest videos is extremely distracting.

  • @razvancomsa2276
    @razvancomsa2276 1 year ago

    micromonoliths ftw

    • @chrisjohnson7255
      @chrisjohnson7255 1 year ago

      What’s your favorite part about monoliths? Do you like how quick it is to adjust any aspect of the program without dealing with building NuGet packages or waiting on another team?

  • @orstorzsok6708
    @orstorzsok6708 1 year ago

    First of all WTF is DORA?

  • @arcfide
    @arcfide 1 year ago +12

    I've spoken with some people deeply aligned with traditional (non-prescriptive) agile methods who mentioned that they did tend to feel some burnout related to the way that daily development life worked in CI/CD systems, which in their case(s) tended to follow a pull-system kanban approach. What they attributed the burnout to was a sense that nothing was ever "finished," and thus they tended to feel that the world was a continuous grind with no sense of ebb and flow to the work. That monotony of work is what I think contributed to the burnout. I could see that if you focus on very small, incremental deployment of a decoupled system, this could contribute to a lack of connection to the "big picture" and thus a sense of grander purpose in your work. Without that connection, even though you are "doing" a lot of things, and shipping a lot of things, you could feel like it's all "meaningless" and just more features without any connection to how that is directly affecting the end users (not other teams within the organization who consume your services), which is a good way to get burned out.

    • @dougr550
      @dougr550 1 year ago

      Leadership is always important! Great agile focuses on how we deliver value, and leading those teams is all about telling the story about how we're creating that value. The benefits of the team's output should be shared with them regularly.
      Another thing to consider is making sure that there is time set aside for the team to experiment with new technologies and ways of working. Not only does this ensure that the organization is constantly developing new best practices, it also results in a sense of purpose for the team that allows them to take pride in the work they're doing.

    • @arcfide
      @arcfide 1 year ago

      @@dougr550 I can't really disagree with that! I think the problem that I sometimes see is that some of these systems seem to emphasize throughput, and if you try to get too much throughput without also focusing on slack, space, and sustainable pace, there's an issue. The whole question of sustainable pace is I think an underappreciated one, even among people who advocate for it. To me, sustainable pace also has to encompass, as you point out, intentional cultures of growth and personal innovation. I see too many people thinking in terms of "how can we continue to do what we are doing as fast and as long as possible?" Which, to my mind, misses the point entirely. When you're just doing more of the same things that you have been doing, even if it is applied to a different problem or user story, that's still a recipe for burnout, IMO, even if you make it clear why that work has value. Good developers, IME, crave novelty, and need that same level of creative drip to stay at their best for any period of time.

    • @dougr550
      @dougr550 1 year ago +2

      ​@@arcfide I'll come back to the conversation Dave had with Allen Holub where he says "it's really hard to find Lean thinking managers." There seem to be a very limited number of managers who can take the pressure of XYZ needs to be delivered in 6 weeks from now and balance it with the longer term benefits of high performing teams. I see a lot of people who seem to think that you need to push developers to deliver work which makes no sense to me because people aren't becoming software developers by accident or because they couldn't think of anything else to do. This is a group of by and large highly motivated people, so if they don't care about delivering it is most likely something wrong with the environment. In that same conversation Dave and Allen reference Daniel Pink's book "Drive: the surprising truth about what motivates us", which in short finds that intrinsic motivation trumps extrinsic motivation every day of the week. Supporting high performance teams that are doing work they can take pride in is the only logical conclusion to come to if you care about the ongoing health of the organization, but 100% agree there seems to be a limited number of managers who are capable of holding this point of view.

    • @ContinuousDelivery
      @ContinuousDelivery  1 year ago +9

      I do recognise this. I was on a fairly early kanban team and observed this effect. We adopted an approach that has since become my favourite. We used kanban, but we also had iterations. The iterations were really for the human aspect, to give people a sense of achievement and clearly defined opportunities to get together and put their work into its correct context. This worked really well, but it is a subtle thing. I didn't think of this form of burnout when I was talking about it, so thanks for mentioning it.

    • @barneylaurance1865
      @barneylaurance1865 1 year ago

      @@ContinuousDelivery Do you know if there's anything published about this way doing kanban with iterations? It would be good to know more about how you avoid or mitigate harmful effects of deadline pressure as each iteration comes to an end.