How To Build Quality Software Fast

  • Published 1 Jun 2021
  • Would you prefer to go fast, or to work with high quality? This has been seen as a trade-off for a long time, but that is a mistake, certainly when it comes to software. There is no trade-off between speed and quality; we can have both. In fact, it is better than that: if you want speed, build better software, and if you want better software, go faster.
    Software speed, meaning software development speed, matters. We want to create software quickly and efficiently, and the assumption that this comes at the expense of high-quality software or high-end software development is simply a mistake.
    In this episode, Dave Farley explores how we can move fast with high quality and how one reinforces the other. Speed and quality are both hallmarks of a Continuous Delivery approach and best practices for building great software.
    -------------------------------------------------------------------------------------
    🎓 CD TRAINING COURSES 🎓
    If you want to learn Continuous Delivery and DevOps skills, check out Dave Farley's courses ➡️ bit.ly/DFTraining
    📚 BOOKS:
    📖 Dave’s NEW BOOK "Modern Software Engineering" is now available on
    Kindle ➡️ amzn.to/3DwdwT3
    (Paperback version available soon)
    In this book, Dave brings together his ideas and proven techniques to describe a durable, coherent and foundational approach to effective software development, for programmers, managers and technical leads, at all levels of experience.
    📖 "Continuous Delivery Pipelines" by Dave Farley
    paperback ➡️ amzn.to/3gIULlA
    ebook version ➡️ leanpub.com/cd-pipelines
    📖 The original, award-winning "Continuous Delivery" book by Dave Farley and Jez Humble
    ➡️ amzn.to/2WxRYmx
    📧 JOIN CD MAIL LIST 📧
    Keep up to date with the latest discussions, free "How To..." guides, events and online courses.
    ➡️ bit.ly/MailListCD
    -------------------------------------------------------------------------------------
    CHANNEL SPONSORS:
    Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ www.equalexperts.com/
    Harness helps engineers and DevOps teams simplify and scale CI/CD. Sign up for your free account at ➡️ harness.io
    Octopus are the makers of Octopus Deploy, the single place for your team to manage releases, automate deployments, and automate the runbooks that keep your software operating. ➡️ octopus.com/
    SpecFlow: Behavior Driven Development for .NET. SpecFlow helps teams bind automation to feature files and share the resulting examples as Living Documentation across the team and stakeholders. ➡️ go.specflow.org/dave_farley
  • Science & Technology

COMMENTS • 126

  • @Eaglesight
    @Eaglesight 3 роки тому +64

    Dave, I want to see you coding CI style, practical videos of you making medium projects with TDD!
    Would be cool.

  • @SaHaRaSquad
    @SaHaRaSquad 3 роки тому +15

    On a related note, one of the most important things I recently learned about development is that "throwing away" some of the code is not only beneficial in many cases, but also comes with a far lower cost than one intuitively assumes. It helps get rid of discovered design problems in a clean way, forces you to think through the critical parts once more, and sections that are definitely correct and useful can just be copied from the discarded code before you actually get rid of it completely. I've fixed a lot of bugs and performance issues that way, simply because at that time I had a better understanding of the whole program than before.
    If a chunk of code isn't very readable or understandable there's a good chance it can be simplified, and in my experience making a lot of small changes and rearrangements in the code, especially early on, avoids an exponentially larger effort later. So I can definitely understand the claim about faster teams also producing higher quality code. Almost no code can stay unchanged from the start because it was originally based on many wrong assumptions, so why delay the inevitable.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +9

      My friend Dan North talks about the “software half-life” of a team: how long does it take them to re-write half their software? He reckons there is a correlation between good teams and short half-lives, with 3-6 months being a good score.

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому

      A few times in my life I've solved bugs just by refactoring the code. I should improve that technique, but I don't know where to start.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +2

      @@MrAbrazildo I often find, and fix, bugs through refactoring. To improve your skills: Martin Fowler defined the term 'Refactoring' and has two good books on the subject.
      I did a mini-series of videos on my approach to refactoring legacy code bases, you can see them, for free, here: courses.cd.training

  • @oussamaziani6422
    @oussamaziani6422 3 роки тому +17

    Quality code is efficient and changeable, and it read's so that every line of code is exactly what you expected.
    Quality code is written by someone who cares, someone who is not as proud of the product as of the way the product was made.
    Quality code is produced by someone who is disciplined, someone who takes pauses and doesn't gamble on assumptions; instead they make certain of them.

    • @CTimmerman
      @CTimmerman 3 роки тому +1

      *reads, and "takes ownership".

  • @fabricejaouen4252
    @fabricejaouen4252 3 роки тому +2

    The education program I follow requires the students to complete 13 projects. On each project, I've tried to build in quality before speed.
    How did I do it?
    1. I have to understand each and every line of code I put into the project.
    2. As much as possible, I use test-driven development.
    3. Systematic use of Travis CI to check that no change can break the code.
    4. Re-read the full project code and go through each and every method and class to write the documentation.
    As a consequence, now that I'm reaching the end of the project, I know every single corner and can identify where I can optimise and where I can add functionality.
    I hope this answers your question. And thank you again: I love your videos so much.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +1

      Thanks, sounds good. I imagine you found that, as a result of the good quality, the code is easy to change, so now you move faster 😉

    • @fabricejaouen4252
      @fabricejaouen4252 3 роки тому

      @@ContinuousDelivery : exactly. Starting a project is always a pain in the neck; however, evolutions and improvements are easy to detect :-)
      Thank you again for your wise pieces of advice.

  • @ultimategames6670
    @ultimategames6670 3 роки тому +2

    Thank you for sharing your knowledge and experience. My approach is to first focus on the core problem or core function of the software; that also saves time. All other things are extra and can be managed in the remaining time before the deadline.

  • @Emerald13
    @Emerald13 3 роки тому +7

    Looking forward to seeing you soon on OReilly!

  • @FlaviusAspra
    @FlaviusAspra 3 роки тому +2

    Great video, thank you Dave!
    1. an initial longer time investment in setting up the architecture and the infrastructure
    2. Continuously "adding" time to each story to refactor, polish, improve the existing code
    The biggest challenge is convincing the business people that 1 and 2 are worth it. It also depends on the company and its technical history.
    I found that the companies willing to listen are those who have burned themselves in the past doing those bad practices.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому

      Thanks! Yes, experience is having screwed up in lots of interesting ways; it is a harsh but good teacher.

  • @bassRDS
    @bassRDS 2 роки тому

    Great video, Thank you sir!

  • @_Yaroslav
    @_Yaroslav 2 роки тому +1

    Great video as usual!
    Is there a place for the QA team of manual testers in a process with automated tests? If not, it'll be rather difficult to persuade management to switch to this approach if there are many manual testers in the company.

  • @rothbardfreedom
    @rothbardfreedom 3 роки тому +18

    "Quality is value to someone (who matters)" Jerry Weinberg.

  • @timmartin325
    @timmartin325 3 роки тому +2

    Don't forget a few important things about automated regression checks/tests: 1) They need maintaining as the application they are testing changes; don't underestimate how much effort that involves. 2) Good testing involves humans making complex decisions about what tests to run and interpreting the results, which feeds back into that process and cannot really be automated. 3) Computers are good at checking very specific things quickly, but a real person will be able to notice a much broader range of potential problems that automated checks will often miss.

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому +1

      1) They are likely to fail at compile time, which is fast to fix. 2) Just simple tests: in and out. 3) A change in the code can lead to a bug in something that once passed a test. If it's automated, that old test will run again and flag the failure.

    • @Luxalpa
      @Luxalpa 3 роки тому

      But should also note the counter points: A "good human" that does these interactive tests will not create repeatable tests (because if they were repeatable, then they could just record them as a macro or script, which makes them automatable again), so then the question is how valuable are unreliable tests? Probably not very useful. Sure, you may find a bug that you wouldn't otherwise find, but there are no guarantees and you may never find out if this bug is still around later (because then again that would be automatable).
      Technically speaking, automated tests are manual tests. Because a person needs to sit down and write down what they wanna test. It's no different from actually executing a manual test, and with certain tools like for example macros, it is actually even technically identical. So what extra time do you spend on writing automated tests? None.
      About 1: You don't "maintain" tests, you update tests based on changes. Because tests are really just part of your normal code. The only difference here vs not having tests is that without them it's harder to write and maintain the rest of your code. They shouldn't be seen as something separate, in the same way as comments and documentation (including things like naming variables) shouldn't be seen as separate. All these things are required for you to solve the problems that arise during development. You're not actually spending any more time on things, because you're not creating redundancies at all. You're just making sure that both the problem and its solution are implemented in the system (as opposed to only having the solution), because only this way can you update the program. If you only have the answer without knowing the question then you'll just end up with 42, like the supercomputer in The Hitchhiker's Guide to the Galaxy. The result itself may be super precise, but there are no guarantees about what the question is that this thing actually answers. Which also means that if the question changes (which is very common in programming), then it would be impossible for you to update the solution and therefore you'd have to start from scratch.

    • @timmartin325
      @timmartin325 3 роки тому

      @@Luxalpa Good testing should be finding problems that matter to the end users, that's the high-level goal. I am not sure how relevant repeatability is in that context. I guess overall what I am trying to say is a mix of fast automated checks and intelligent human exploratory testing is a very effective combination. I think a few years ago Microsoft tried to get rid of all their testers, only to rehire them recently with different job titles, although I admit that's anecdotal. A real person finding problems with your code may not be pleasant, but it's ultimately unavoidable whether it is testers or end users.

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому

      @@Luxalpa "Tests are part of normal code"? Not if they are independent f()s, as this channel usually says. I usually put them inside the normal f()s, but I admit this pollutes the code, and removing them would cost time. And certain values should be approved only when combined with others (a character should not run against a wall). These kinds of situations won't be prevented by your normal testing code - at least in games, there are too many of them.

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому

      ​@@timmartin325 Repeatability is relevant because, as I said: _"3) A change in the code can lead to a bug that once passed in a test"._
      I agree that human tests catch bugs that are impractical to catch with automated tests. I once noticed a bug that took 2 years to arise, from a user's point of view. It was caused by an overflow in some bits of a variable that had passed through an optimization rework. That overflow was expected, indicating the hit on the wall. However, the variable worked with +1 for reading and -1 for writing, to fit the bit field. Outside the class, a local variable (representing the field) in a f() worked with the values normally. So, when it hit the wall, some tasks were done, and the value was written back to the variable. The point is that later I implemented further checks for that memorized value, but it was no longer written with the overflow value (because it wouldn't fit) - instead, a reset value due to bit truncation.
      But this was not enough to raise the bug, because the +1 for reading brought it back to an acceptable value, at the beginning. And combining that with certain character alignments, the consequences became acceptable in a broken geometry: starting (only) from the end (triggering the overflow), and completing it at the beginning! _(Geometry could be broken, but the alignment should stay in the same direction.)_ So the victim became unmovable. Plus, the bad luck of characters being too close hid the cause.
      To appear, it had to go through several steps: the overflow not entirely solved locally, the reset (which could crash or lead to something absurd) being hidden by the +1 (for reading), some specific alignments, certain characters, starting with character(s) at the "wall", and completing it with character(s) at the beginning. And bad luck made me take more time than I should have. I solved it fast, though - I may have been inspired.
      I baptized it the Age of Aquarius Bug:
      "When the moooooon is in the 7th Hooooouse
      And Jupiter aligns with Mars
      Then peace will guide the planets
      And loooOOOOve WILL STEER THE STARS!"

  • @nelsonochoam
    @nelsonochoam 2 роки тому +1

    Love your content, Dave. I wish more companies would invest in reaching the point where they could do CI/CD.

  • @MrAbrazildo
    @MrAbrazildo 3 роки тому +1

    I've been doing manual tests. I'll try automated tests.
    7:09, about those 2: I usually force simplicity in the code: don't use virtual stuff, avoid classes when a f() is enough, avoid global variables, and so on. Except for optimization, I don't engage in unnecessarily complex structures. Inheritance only in 1 chosen direction: generic to specific, or less to more data.

  • @philmarsh7723
    @philmarsh7723 3 роки тому

    I agree. If there's not enough time to do it right, there's always time to do it over!

  • @ClaudioBrogliato
    @ClaudioBrogliato 3 роки тому

    I was really interested in this one and I have a few questions. What would you do if you work on multiple projects and a bug pops up at the end of the sprint? You won't be able to work on that software for weeks, maybe months. Automated tests have their quirks too: e.g. you can click on invisible elements, there are no checks that the UI matches the expected outcome, and there is no check on the logic if the programmer who wrote the test is the same one who wrote the code. How can you address all these problems?

  • @WorthyVII
    @WorthyVII 3 роки тому

    Fantastic video. This makes total sense.

  • @Blob64bit
    @Blob64bit 3 роки тому

    Great video once again!
    I'd really like to hear your thoughts on how to do this quality transition in practice.
    Often in legacy systems I find this transition from poor quality to high quality extremely slow, while building new software is generally fast.
    On the other hand, there is pressure for innovation from the clients and from the money that flows in from the old system, so improving the quality of the old system is more easily understood by leaders.
    I still often feel like the most cost-effective solution is to create new software completely from scratch, but I don't have the experience to back this up.

  • @Luxalpa
    @Luxalpa 3 роки тому +1

    It's funny, because this is something that some algorithms in 3D VFX software also have. For example, if you create a particle simulation in Houdini you can set the maximum number of iteration steps to do in order to improve the accuracy. However, in many cases increasing the number of iteration steps doesn't actually decrease the performance. In fact, it usually improves it. Why? Well, because if the simulation is in an accurate state, it can move forward very easily and quickly. However, when it's in a very inaccurate state, a lot of additional computations have to be made in order to improve the state and before you have a clean result, the simulation moves on to the next frame and it has to start all over again, without ever getting to the place where the calculations are simple and easy.

  • @MartinsTalbergs
    @MartinsTalbergs 3 роки тому +9

    Fast, Cheap, High quality. Pick any two.

    • @Ownermode
      @Ownermode 3 роки тому +2

      You could provide all three, but then you would undersell yourself compared to the competition ofc.

    • @llothar68
      @llothar68 3 роки тому +2

      I pick Cheap and High quality. Good luck now

    • @loutragetadk453
      @loutragetadk453 3 роки тому +2

      It's more about Time and Scope; you can only fix one of them. Quality is just not negotiable.

    • @shiskeyoffles
      @shiskeyoffles 3 роки тому

      @@llothar68 lol... Interns?

    • @mikkolukas
      @mikkolukas 3 роки тому

      @@loutragetadk453 Quality is always negotiable

  • @TheGrumpyGameDev
    @TheGrumpyGameDev 3 роки тому +9

    Quality of code for me is inversely proportional to the amount of time it takes to read the code and understand its context.
    If I have to mentally compile the code in order to understand what it does (or worse, ACTUALLY compile it and debug it to gain an understanding), that code is approximately the worst quality it can be.

    • @ciscor8422
      @ciscor8422 3 роки тому +1

      I totally agree. It's terrifying how many developers don't even get the simplest ideas of good code, like, for example, descriptive names for variables, functions and classes.
      I don't understand how one can develop a whole class in which every variable has a maximum of 2 characters. And I've seen this way too often in the team I work with.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +6

      Yes, completely agree. My ideal is that even a non technical person could understand what the code in front of them does, even if they can’t see the whole picture.

    • @antoruby
      @antoruby 3 роки тому +1

      @@ciscor8422 what a pain working with freshly written poor code :( When it’s legacy it’s easier to accept, but new and low quality code is demotivating.

    • @thaianle4623
      @thaianle4623 3 роки тому

      This is how bad code can slow you down almost immediately. It becomes worse when the issues are overlooked, as in "we will refactor later", which usually means never.

  • @dosomething3
    @dosomething3 3 роки тому

    Just amazing

  • @TymoteuszCzech
    @TymoteuszCzech 3 роки тому

    Could you please provide sources for the quoted "State of DevOps Report"?

  • @Hayertjez
    @Hayertjez 3 роки тому +1

    Have you ever worked on a project involving hardware? How did you practice automated testing there?

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +1

      Yes, several times. Test in simulation. Architect the system to make the “edges” of your system not very interesting and not very complex, and limit the degree to which concurrency in the hardware is allowed to leak out into concurrency in the software.

  • @hadilsabbagh8641
    @hadilsabbagh8641 3 роки тому

    Thank you for your excellent videos! How can I apply Continuous Delivery if I am the single developer of a startup? We are developing a mobile app

    • @RenatoTodorov
      @RenatoTodorov 3 роки тому +1

      Build a CD pipeline, write tests, and get edge versions of your app into the hands of real testers (your CEO, CPO, or literally anyone and everyone else in your startup) multiple times a day, as soon as you push commits to master. Get feedback from them and continue iterating. That would be a pretty good start.

    • @llothar68
      @llothar68 3 роки тому

      You don't need CD for single-person mobile app dev. Android Studio and Xcode already do almost everything out of the box. It would be overkill. The best you can do is try to get a handful of enthusiastic early users and listen to them.

    • @denniscieplik2501
      @denniscieplik2501 3 роки тому +1

      I think it doesn't really matter whether you are a single developer or a team when it comes to using CI/CD. Perhaps start by looking at the manual processes, like "What happens after check-in?". Manual tasks often hide in "I just have to push these 5 buttons" 😉.
      For such tasks I use a time-box approach: I take some time on a weekly basis to automate a few things, or just one.

  • @CasperBang
    @CasperBang 3 роки тому +2

    I'm not sure I follow entirely, but I suppose it depends entirely on your definition of the term quality. You state that time saved on rework alone is an enabling factor, but to me rework is really just another way of saying iteration. We slice a story to get the most outcome for the least work, which is great for delivering value fast - and learning from it - but it inherently calls for rework once a more refined story has matured to make it to implementation, right? If I release code based on an exploratory or perhaps even naive story to learn from, I might be tempted to use a couple of tricks in the book which would certainly cause a lower SIG score, SonarQube score, etc. than if I had spent a few more weeks satisfying all lint checks, removing duplication, lowering coupling, etc.! So I look forward to a future episode and what you may have to say about quality, because up until now I find you have mostly dealt with "doing the right thing" (business value) rather than "doing it right" (technical quality). Also, you appear to have just invalidated, or nullified, the cost factor in the classic Good/Cheap/Fast mantra.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +4

      I do mean technical quality, and will cover it in future. I also did mean to call-out the “good/cheap/fast pick one” mantra. I don’t think it holds.

  • @tamaskarsai2072
    @tamaskarsai2072 3 роки тому +1

    I am trying to write quality code, but I need to learn more. I still don't know what the best practices are, or what clean code is, but after a few projects I should improve with practice.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +2

      Yes, it is complex to do it well. I use TDD to guide my designs; testable code shares many of the same properties as "good" code. I also try to make any single piece of code do one thing: if my function or class does more than one thing, I work to split it out into separate pieces. Good luck.
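      (A minimal sketch of that "split it into pieces that each do one thing" idea, in Python; the example and names are hypothetical, not from the video.)

```python
# Hypothetical example: instead of one function that parses input AND sums it,
# split the two jobs so each piece can be tested on its own.

def parse_order(line: str) -> dict:
    """One job: turn a 'SKU,quantity' line into an order record."""
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "quantity": int(qty)}

def total_quantity(orders: list) -> int:
    """One job: add up quantities."""
    return sum(order["quantity"] for order in orders)

# Each piece now has an obvious, independent test.
assert parse_order("A123, 4") == {"sku": "A123", "quantity": 4}
assert total_quantity([{"sku": "A123", "quantity": 4}, {"sku": "B9", "quantity": 2}]) == 6
```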

    • @Luxalpa
      @Luxalpa 3 роки тому +2

      Best way to learn is to fail. If I don't know what the best practice is, I just try out all of them. Yeah, this costs me a bit of time and isn't strictly necessary, but I really want to understand what it is that I'm working with, because only then do I have the solid base necessary to get to the next level. It's mostly just about building yourself an environment in which you can fail safely and that encourages you to fail often and hard. For example, put out a git commit, do a branch, then just try your major refactor. If it doesn't work, roll back and analyze what you learned about the problem. It can be helpful to think about problems in terms of the scientific method as well. Think about what the problem is, think about possible solutions, WRITE ALL OF THIS DOWN so you memorize it better and don't forget something important, and you can write some additional hypotheses about your ideas. For example "this one will probably not scale well" or "this one will probably be quite slow" or "this one is complex". As a general rule, the more hypotheses you put out, the more you are going to learn, because later on you can just look back and see whether those hypotheses were correct.
      Another way to look at it is the famous method of learning which is to explain things to others. It uses this exact same process. For example, I myself learned a lot of stuff about coding and the world in general by discussing things on reddit. People constantly challenged my world view, pointed out mistakes, etc. Obviously it requires some critical thinking skills and when working with others you also need to watch like a hawk over your ego to keep that in check, but it is an incredibly valuable skill to learn.
      And remember, if it was easy then everyone would be doing it :P

    • @tamaskarsai2072
      @tamaskarsai2072 3 роки тому +1

      @@Luxalpa I noticed that if I'm trying to fix a bug and start explaining it to someone, then it's more likely that I find what's causing the bug. When it comes to coding, lately I try to just write down what the code actually needs to accomplish, and first I try to code it on my own; if that doesn't work, then I start searching on Google for how to do it, and afterwards I try to understand why it works.

  • @oleksiifilippov68
    @oleksiifilippov68 3 роки тому

    Aside from the great meaningful content, that bug's made my day :D

  • @gronkymug2590
    @gronkymug2590 Рік тому

    Dave, please create a video about pragmatic view on joining REST, DDD and CQRS if you believe it is even a good idea. I really would like to see your approach to it 🙂

  • @SylwesterKogowski
    @SylwesterKogowski 3 роки тому +1

    I have to make a big remark on the point of this video.
    I have not yet met a coder who would willingly cut corners on testing (except for amateur programmers who haven't yet heard about testing in the first place).
    The point, then, is not to convince coders to do more tests; it is rather to explain to the business owners the value of tests, and how to calculate and explain what they are losing by forcing coders to cut corners. Another problem is that 95% of coders are unable to estimate the time needed to build a feature, so they present unrealistic estimates to the product owners and then feel tremendous pressure when the estimate is inevitably exceeded.
    Thus I appeal to all coders: learn how to estimate your work and you will have an easier time communicating with product owners and much less pressure.
    Don't listen to people who say it is impossible. I assure you it is quite possible; I have been doing this for a long time now, but you have to apply some experience, statistics and probability estimation to it. It is quite possible to make an estimate that is 50% accurate, or 95% accurate, though the two would be quite different estimates.
    Statistics has tools for estimating such things.
    In my experience it is most useful to give the product owner two estimates: the 50% and the 95% estimate (the second one you could call the "worst-case" estimate). If you do that, and your estimates truly are 50% and 95% estimates, you will feel much, much less pressure, and your boss will have much more control over finances and will make better decisions about which feature to build and which to pass on.
    50% of the time my estimates turn out to be overestimates; that is what you should strive for as well, and if you achieve it, you will be that one lucky coder who can come to the product owner from time to time and say, "You know that thing you wanted? I made it in half the estimated time; you can move your project plans much further now, and you can tell the shareholders that we've cut the costs and increased the profits again :)".
    As for the CI/CD approach, of course I agree: make the smallest steps possible and you will increase reliability and decrease the pressure even further.
    I would really like to write a book about it, because coders are so poorly prepared to make estimates and to identify ahead of time all the things that influence those estimates. There are a lot of things that influence a time estimate, even things such as whether you slept well, but also external things, internal things, communication, planning, atmosphere, season, health, family, whether you are helping others or asking others for help, whether you are well acquainted with all the libraries and algorithms you are using, etc. It is really material for one hell of a book, but once learned, it is quite easy to make good estimates.
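    (A sketch of how those two numbers could be computed, assuming you keep a record of how long past, similar-sized tasks actually took; the data below is invented for illustration.)

```python
import statistics

# Invented history: actual days spent on past tasks of similar size.
past_durations = [2, 3, 3, 4, 4, 5, 5, 6, 8, 13]

# 99 cut points -> index 49 is the 50th percentile, index 94 the 95th.
cuts = statistics.quantiles(past_durations, n=100, method="inclusive")
p50, p95 = cuts[49], cuts[94]

print(f"50% estimate: ~{p50:.1f} days, 95% ('worst case') estimate: ~{p95:.1f} days")
```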

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому

      I believe you are working on the same kind of project over and over, just with different details. For games, making one says little about making another.

  • @pilotboba
    @pilotboba 2 роки тому

    If manual tests are automated, what do QA people do? I assume devs are writing/automating the tests?

  • @mrJety89
    @mrJety89 3 роки тому +13

    I don't remember anything you just said, but the bug tennis just stuck with me

    • @alchemication
      @alchemication 3 роки тому +2

      Lol. I remember the not-brushing-the-teeth metaphor 😁

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому +1

      He said quality and speed tend to go together, despite this being counter-intuitive. You can work a bit faster in the dirt, but if you never clean the workplace, some day you will need to pay your debt.

  • @SixthDemon
    @SixthDemon 3 роки тому

    Great... now how do you handle automated testing when the code is strongly coupled with multiple third-party hardware devices? For example, when you are writing software for different COM-port devices, or for devices that depend on those COM-port devices. From that point you can't just create automated tests on the server and run them - here manual testing starts to make sense. While I agree that manual testing slows down the process a lot, as you mentioned it is a business-related decision, where they need to plan the costs.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому

      Well, reduce the coupling. Use ports & adapters to abstract inputs and outputs to and from the hardware, and test to those abstractions.
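      (A minimal Python sketch of that ports-and-adapters idea for a hardware edge; the device and all names here are hypothetical.)

```python
from typing import Protocol

class TemperaturePort(Protocol):
    """Port: the only view of the hardware that the core logic sees."""
    def read_celsius(self) -> float: ...

def needs_cooling(sensor: TemperaturePort, limit: float = 75.0) -> bool:
    """Core logic, written against the port rather than a real COM-port device."""
    return sensor.read_celsius() > limit

class FakeSensor:
    """Test adapter standing in for the real device; a production adapter
    would wrap the actual serial/COM-port driver behind the same method."""
    def __init__(self, value: float) -> None:
        self.value = value
    def read_celsius(self) -> float:
        return self.value

# Fast, deterministic checks with no hardware attached.
assert needs_cooling(FakeSensor(90.0)) is True
assert needs_cooling(FakeSensor(20.0)) is False
```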

    • @SixthDemon
      @SixthDemon 3 роки тому +1

      @@ContinuousDelivery That makes sense, and of course that would be implemented by almost everyone. However, this still means that manual testing will be present to a certain degree (the more the software depends on external hardware, the more manual testing will be required). I am not trying to argue that automated tests are bad - that would be ridiculous of me; my point is that sometimes complete automated testing might be more expensive (especially in the short term) and some companies will not provide the finances for it. Regardless, I appreciate the response.

  • @timmartin325
    @timmartin325 3 роки тому

    What about context? I think that is overlooked to some extent in this video. A small startup with limited resources trying to quickly release a demo version of some software to a client in a very security-conscious environment (e.g. B2B banking) is going to be different from a FAANG company that can release a UI update to a small group of unknowing A/B-test users whose behaviour can be very closely monitored.

  • @JorgeEscobarMX
    @JorgeEscobarMX 2 роки тому

    High quality = High speed.
    I agree; however, I'm the only one on my team thinking that way.
    What can I do to sell continuous integration to my team leaders?

  • @CuulX
    @CuulX 3 роки тому +1

    TDD and CI sound good. But when I try to figure out how to apply it in practice I have no clue and it seems impossible. Testing requires known input-output pairs, and if the only way to obtain those is to write an algorithm that does it, then you can't write the test; it's impossible. The best you can do is to save earlier computed function values and see if the function changes behaviour later. But often it should change behaviour when other parts are introduced. It quickly becomes computationally intractable to write any tests of value for games, AI, networked real-time interactive webapps, etc. If it's simple enough that you can write a test for it, then that's never where the bug will appear, because that part is trivial to write perfect unbuggy static code for without the test. And where you want the test, that single test is a program that costs 1000000x as much to develop as your whole program. The only way then seems to be "users are the test".

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому

      TDD for each single line, as he said, seems to me a bit overwhelming for complex code. But you can use unit tests instead: write your algorithm, note some results, and later write a tiny f() that just calls it, demanding known input-output pairs.
      But not for all apps will you have to run it to know the input-output pairs. In a board game, you know exactly what should happen, most of the time - at least as far as the game rules go.
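      (A sketch of that "note some results, then pin them in a tiny test" idea in Python; 'score_round' is a hypothetical stand-in for whatever routine you already trust.)

```python
# Hypothetical stand-in for an existing routine whose behaviour you trust.
def score_round(moves):
    return sum(moves) * 2

# Input/output pairs recorded from runs you checked by hand.
RECORDED_CASES = [
    ([], 0),
    ([1, 2, 3], 12),
    ([5], 10),
]

def test_score_round_still_matches_recorded_results():
    for moves, expected in RECORDED_CASES:
        assert score_round(moves) == expected, f"regression for input {moves}"
```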

    • @Luxalpa
      @Luxalpa 3 роки тому +3

      If writing the test takes additional time then you're doing something wrong. It's like pair programming in that sense. If done properly it will never take any additional time, because writing the test is part of the problem solving process, not just some thing that you do because you're told to. For example, I originally started my first tests when I was refactoring my code and needed a way to find out whether it still works after the refactor.
      I can not say much about your algorithm, but an algorithm that creates stuff that you don't know in advance seems wrong to me. For example, if you're creating a game and you're building a path finding system, you have your starting position and the goal positions where your character should end up in. You test for that.
      Later on in the process you may decide to test for the optimal path as well, in which case you find the optimal path in your test example by hand and fixate it. You do not always have to code a strict 100% matching example - your tests are supposed to be changing just as much as your code. It's fine if at first you only test for example that your algorithm stays under a certain length or matches a performance metric without really reassuring that it actually finds the shortest path.
      TDD also has another name: Prototyping. The idea is that you build a simple thing first (usually your test) and then you iterate on that. As said, the test shouldn't take any extra time to develop, because it solves the same problem that you are trying to solve and you do not spend your time writing code, you're spending your time thinking about how to solve this problem i.e. learning. Since the test will help you with learning to solve the problem, none of the time is wasted. However if you write your tests too late, or you test things that you don't really care about, then yes, writing the test is going to be a waste of time. :)
      Edit: A test is really just a mathematical description of the problem. You need to know the problem with all of its mathematical restrictions before you can solve it. As a simple example, your problem may simply be to return 3 values from a function, but then when writing the test you will immediately be confronted with the fact that there are multiple ways to return 3 values (i.e. your problem was ambiguous). You now need to put in the work and improve your problem description by figuring out which method you're actually going to use. This is not wasted time, as you would have to find this out anyway during the process. And sure, during the actual implementation you may then find out that it would be better to use a different method. This is all part of the learning process. But in order to solve the problem, you need to have a solid understanding of the problem, and the test is your hypothesis, which it is literally impossible to solve the problem without.
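      (A sketch of the kind of path-finding test described above; 'find_path' and its module are hypothetical.)

```python
from my_game.pathfinding import find_path  # hypothetical module and function

def test_path_connects_start_and_goal():
    start, goal = (0, 0), (3, 4)
    path = find_path(start, goal, obstacles=set())
    # Pin the fixed requirements first: the path starts and ends where it should.
    assert path[0] == start
    assert path[-1] == goal

def test_path_stays_within_a_length_budget():
    # A loose property to begin with; tighten it to 'optimal length' later if needed.
    path = find_path((0, 0), (3, 4), obstacles=set())
    assert len(path) <= 20
```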

    • @MrAbrazildo
      @MrAbrazildo 3 роки тому +1

      @@Luxalpa Games are a very tricky field, however. Some tests will call f()s that depend on several variable states outside of them (the called f()s). It's a common scenario in games.
      And many games have "continuous computation", using numbers with "precision", which is hard to predict. A bug can arise in the middle of the found path.

    • @CuulX
      @CuulX 3 роки тому

      @@Luxalpa Seems like you didn't really read my issues with tests and why they are impossible. Your examples are pointless because you gave examples of tests that work, proving that the problems they are for are trivial enough that tests are possible. What about non-trivial problems? What about extremely complex systems that can't easily be reduced to smaller problems?
      Even something extremely small can be tricky to test for. If 0.1 + 0.2 != 0.3 then how do I write a test? I could spend a lot of time checking precisely how floating point operations will resolve on my CPU and do that manually, but what if there's no such specification because I'm trying to make a new program that hasn't been done before?
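      (For the 0.1 + 0.2 case specifically, a test does not have to pin the exact bit pattern; comparing within a tolerance is enough. A minimal sketch:)

```python
import math

def test_sum_is_close_enough():
    # Exact float equality fails, as noted above...
    assert 0.1 + 0.2 != 0.3
    # ...so assert closeness within a tolerance instead.
    assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)

# pytest users often write the same idea as:
#   assert 0.1 + 0.2 == pytest.approx(0.3)
```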

  • @gronkymug2590
    @gronkymug2590 Рік тому

    I can either be listening to you or looking at the jumping up and down bug 🐛😂

  • @LarryRix
    @LarryRix 3 роки тому

    The powerful result of CI/CD is nearly without question or doubt. I am writing this to suggest that there is a way to enhance the TDD + CI/CD model by adding Design-by-Contract. How so? Because DbC offers at least three critical enhancements to TDD in a CI/CD work-cycle.
    First, it spreads "test assertions" into the code, where they run any time an object method is executed.
    Second, the very nature of DbC assertions ("tests") points out offending code with precision (e.g. you KNOW where the defect is) and earlier (e.g. no chasing defects up and down the call stack).
    Third, as a consequence of spreading testing into code as DbC assertions, one reduces the amount of testing code required, resulting in a smaller TDD footprint. This means you get more for less: more testing baked in contextually as in-context bug hunters and less TDD, which means fewer top-level TDD tests to run, while accomplishing more DbC testing with more precision, catching bugs earlier than TDD.
    Putting this into an automated test-cycle with a product like Jenkins, I have been on teams where we ran full automated test cycles 3 times per day on 1+ MLOC easily! The feedback cycle is amazingly short, which means a programmer can learn about an integration defect in a matter of hours and either triage and fix it the same day or (in the worst case) triage and fix it the next day.
    That beta/alpha testers can get new code this quickly means they are fully baked-in to the CI/CD cycle. Customers can then get code updates with far fewer defects more quickly along with more Feature Point enhancements! (e.g. that 40-50% increase in "innovations").
    NOTE: In a properly built Design-by-Contract compiler, the DbC code is stripped either completely or in varying degrees for production deliverables. Therefore, you can have a number of deliverables: one with no DbC code lingering (the fastest and most efficient) and others with various levels/forms/degrees of DbC lingering as defect hunters in production code used by end-of-line consumers. This means that you have a weapon in the production space to give customers if you find that there is a hard-to-reproduce bug that slipped through. Thus you can bring "testing" into your production user-space if needed!
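    (Python has no built-in Design-by-Contract, but assert-based pre- and postconditions give a rough flavour of the idea; running with `python -O` strips the asserts, loosely mirroring the "stripped for production" point above. A hypothetical sketch:)

```python
def withdraw(balance: float, amount: float) -> float:
    # Preconditions: the caller must hand us a valid request.
    assert amount > 0, "amount must be positive"
    assert amount <= balance, "cannot overdraw"

    new_balance = balance - amount

    # Postcondition: the routine promises a sensible result.
    assert new_balance >= 0, "postcondition violated: negative balance"
    return new_balance

# A violated contract points straight at the offending call site:
# withdraw(10.0, 50.0)  -> AssertionError: cannot overdraw
```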

    • @LarryRix
      @LarryRix 2 роки тому

      @@lepidoptera9337 putting aside the rudeness of your response, I am curious as to your meaning.

  • @MartinsTalbergs
    @MartinsTalbergs 3 роки тому +1

    Quality coding is like chess: the winner is the one who made one less mistake. Readability is the measure - imagine how much it costs for the next developer to understand this part of the system: a day or two? Say about ~$400 every single time a new feature happens to be pushed through this part of the system. The total cost is unimaginable.

  • @aldob5681
    @aldob5681 3 роки тому

    The third variable is price.

  • @lindasegerious9248
    @lindasegerious9248 2 роки тому

    I like that you asked to click like IF we like the content.

  • @Kitsune_Dev
    @Kitsune_Dev 3 роки тому +3

    Can you talk about TDD and Game Development?

    • @cdarklock
      @cdarklock 3 роки тому +2

      That's a tough one, because a lot of game dev involves emergent behaviour - the previously unanticipated interaction of discrete systems. Even if you used TDD on the systems, they're intended and expected to interact in unpredictable ways, so it's unclear what the benefit of TDD is on the final product.
      But I'm also interested in the topic.

    • @llothar68
      @llothar68 3 роки тому

      No, he can't. The game industry is different from what he does.
      Games are still developed waterfall-style and delivered in one go, not continuously.

    • @loutragetadk453
      @loutragetadk453 3 роки тому +1

      @@cdarklock I'm a total newbie in game dev, but even if most cases can't be anticipated by developers, there are always plenty of cases that can totally be anticipated and automatically tested. Those tests will always accelerate the development.

    • @loutragetadk453
      @loutragetadk453 3 роки тому +2

      There is a video and a whole playlist dedicated to this theme made by Infallible Code.

    • @cdarklock
      @cdarklock 3 роки тому

      @@loutragetadk453 I'm questioning whether it produces a better end result, not whether it has benefits in the development process. If the benefits to the development process aren't leveraged to produce better results, then from the player's perspective, they aren't benefits at all.

  • @nickhuynh6321
    @nickhuynh6321 3 роки тому

    I can't wait until automated tests are so fast that they run with every keystroke I make while writing the software...

    • @kajah05
      @kajah05 2 роки тому

      NCrunch... Resharper...

  • @ferdibra
    @ferdibra 2 роки тому

    Quality software has a positive impact on people's lives. People who have families, dreams and feelings.

  • @rothbardfreedom
    @rothbardfreedom 3 роки тому +1

    12:27 - How can one do that? Are the testers so devoid of critical thinking that their actions can be fully transformed into algorithms, or are the algorithms smart to the point of being equivalent to human thinking?

    • @gJonii
      @gJonii 3 роки тому +1

      The point, I guess, is that you can focus on providing manual testers with tools and automation to make them more and more redundant, or at least make their job faster and faster, with fully automated testing with no human input being the limiting case, if you can remove human testers completely.
      But the benefits he speaks of do seem to come even if you just manage to improve manual testing: automate what you can and make easier and faster what you cannot. After all, the point is that making things faster, even by a bit, makes the code quality higher, and vice versa. How much faster you make things is up to you.

    • @rothbardfreedom
      @rothbardfreedom 3 роки тому

      @@gJonii And my point is on the "replace manual testing with automated testing" - it seems to me that we would need one of the two things I mentioned above.
      "Make software testing better and faster using automation" would be a better way to put it, in case your context doesn't have either the two things.

    • @timmartin325
      @timmartin325 3 роки тому

      The manual and automated testing labels are not very helpful and cause a lot of confusion. In the real world there is a lot of crossover, e.g. automated tests still need to be created/designed/debugged by actual people; should those activities be classified as automated? 🤔 Or what if someone doing "manual" testing is using SQL scripts to populate data in an app; does that make what they are doing automated testing, to some degree?

  • @gwgw4143
    @gwgw4143 3 роки тому

    Nice, like a philosopher.

  • @WouterStudioHD
    @WouterStudioHD 3 роки тому +2

    And yet my printer does not work

    • @tylerkropp4380
      @tylerkropp4380 3 роки тому +1

      Haha yeah. Nobody's printer can connect to Wi-Fi.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +1

      Clearly, the printer-driver writers should have gone faster 🤣

  • @alesgaroth
    @alesgaroth 3 роки тому +1

    Having a test fail last night doesn't mean it was broken yesterday. Heisenbugs and flaky tests mean it might not be broken, or it was broken the day before but passed the tests that night and only failed the tests last night.
    Most systems I've worked with that had automated tests had so many flaky tests it looked like a pastry, and simply rerunning a failed test didn't mean it would fail again.
    I've never seen any suggestions on how to address this situation. Other teams' tests are flaky, and so our code that depends on them is also flaky.

    • @ContinuousDelivery
      @ContinuousDelivery  3 роки тому +2

      "Heisnbugs and flaky tests" mean that you aren't controlling the variables well enough and you design and/or tests are not as good as they could be. Code is deterministic if you control the inputs.

    • @alesgaroth
      @alesgaroth 3 роки тому

      @@ContinuousDelivery That may be if you're working on a new project, but as soon as you have any size in your team, someone's pushed a test that ran for them when they tried it. They're perhaps not aware of some source of randomness (or time delay, or a global variable that's set in another test, or a file that's not supposed to exist but does...) that will cause it to fail sometimes.
      Anyway, it's now in the mainline and run every time. Once in a while, sometimes after a month, sometimes after a couple weeks, it fails. A junior member of the team is assigned to fix it, and unable to reproduce (it passes as soon as it's tried on the local machine), so it's marked fixed.
      But, of course, it's not just one test, it's some small percentage of the tests. So eventually you have tests failing most nights, but different ones each night, and they can't be reproduced, so if a full build fails, and it doesn't look like any of your team's code caused it, the devs shrug and say try again. If it does look like your team's tests, a bug is created and assigned to junior member, who is unable to reproduce it.
      Often these tests are at the intersection between two team's code, in the integration tests. Each team assumes the other team will take care of the problem, and it just sits, since we're still getting features in.

    • @michaelrstover
      @michaelrstover 3 роки тому

      @@alesgaroth Also, in my experience, a complex UI with automated UI tests will have flaky tests that sometimes fail and require in-depth investigation to determine whether the failure was "real" or just happenstance.
      Further, when making a software system where a very large portion is a UI in a browser, how do you write that code TDD? Are you going to start up your browser environment for each test? I'd be snoring by the time it was ready.

    • @ddanielsandberg
      @ddanielsandberg 3 роки тому +1

      @@alesgaroth Hmm, it sounds like you have a broken culture. Just hear me out...
      You are treating the tests as second-class citizens and just shrug, create a ticket and pass the boring job of "maintaining" to a junior programmer (also treated as a second-class citizen), and then act shocked when it's not resolved and gets worse over time.
      I mean, making sure that you can trust the tests, and have good security and performance, fast builds, and ease of deployment/configuration/operation and observability, is just as important as writing new features. Most shops tend to treat those things as something "we'll get to it if we have time", or "assign it to the new guy". I'm not saying that you are doing that (I don't have insight into your situation), but at the same time why aren't the seniors, leads and more experienced people taking it upon themselves to fix these issues if they are such a nuisance?

    • @alesgaroth
      @alesgaroth 3 роки тому

      @@ddanielsandberg yeah, I won't argue that it's a broken culture. It's happened at two companies long before I arrived. I'm asking how to get out of such a situation. What kind of culture change is needed. Etc.

  • @m.x.
    @m.x. 3 роки тому +5

    I call bullshit on this one. I wanna see and read those studies. Never seen a project delivered fast and with high quality unless you work extra hours every single damn day.

    • @tylerkropp4380
      @tylerkropp4380 3 роки тому +1

      What kind of projects have you worked on?

    • @LongNguyen-jk5dh
      @LongNguyen-jk5dh 3 роки тому +3

      I think he means spending more time on high-quality code rather than quick fixes and then coming back to fix more bugs.

    • @tylerkropp4380
      @tylerkropp4380 3 роки тому +4

      @@LongNguyen-jk5dh Good point, I think I recall a phrase: "Do it right or do it twice."

    • @Guido_XL
      @Guido_XL 3 роки тому +1

      It may depend on the application goal and the complexity of it all. If the team can quite clearly envision the way in which the software will be developed to meet the requirements, then things are more controllable than when the project is less clear from the start. If bugs are "just" the result of developer decisions that appear to have been made too capriciously, then the developer can use the fast feedback from daily (or rather nightly) testing and correct the bug early in the process. But, if the bug turns out to be more tedious and requires the whole team to join in to decide how to handle it, then speed and quality are not obviously that tightly correlated anymore.
      Speed and quality are probably well correlated, if the main track of development appears to be clear from the start. If the project management can decide on the plan and be confident that the main track will work, then it would not matter that much whether isolated bugs are found early or late in the development phase. The problem is when non-isolated bugs are found that affect a large part of the software. Then, detecting the bug as early as possible will add thrust to the speed-quality engine.
      The quality-speed issue in software is probably not that different from physical development and production. If a new product can be designed by reusing many existing components, and the new product is mainly a redesign, with some new additions to it, then development will be less prone to induce errors that need to be addressed during the development phase and prototyping. But, if the new product is really new and almost every detail needs to be designed from scratch, then it is inevitable that errors will creep in and even high-frequency testing cannot prevent that some development modules are going to be scrapped, requiring a restart.
      This is not much different in software development. Give us a well-known task and we can tell you that we will finish it in 3 weeks, because we know we can do it. Any bug will be a simple one. But, give us a complicated project with problems that we have never handled before, and your guess is as good as mine.

    • @Luxalpa
      @Luxalpa 3 роки тому +1

      I don't see how this contradicts the point. Yes, writing quality software takes a lot of time. But so does writing poor quality software. If you work extra hours every single damn day, this is because of poor project management / planning, mostly because you're fixing both time and scope which you should _never_ do. However it does not have anything to do with TDD or CD. You are not going to save any more time by getting your project into a less efficient state in which it takes you longer to make changes. I think that should be obvious.